Table of Contents
- why
- Abstract
- Introduction
  - Contributions
- Related Works
  - Filter-level pruning
  - Block-level pruning
  - Data Limited Knowledge Distillation
- Method
  - Overview
  - The motivation to drop blocks
  - The recoverability of the pruned model
  - Recover the accuracy of the pruned model
- Experimental Results
  - Different amounts of training data
  - Different acceleration ratios
  - The data-latency-accuracy tradeoff
  - Results on MobileNetV2
    - Train with synthesized/out-of-domain images
  - Different criteria for dropping blocks
- Conclusions and Future Works
paper