Release v2.0 of NNCF to master
vshampor committed Jul 19, 2021
1 parent 942f1ca commit 6673fc0
Showing 2,077 changed files with 278,913 additions and 242,770 deletions.
3 changes: 3 additions & 0 deletions .gitattributes
@@ -1,2 +1,5 @@
 *.png filter=lfs diff=lfs merge=lfs -text
 *.pb filter=lfs diff=lfs merge=lfs -text
+
+* text=auto eol=lf
+*.{tfrecord,h5} binary
2 changes: 1 addition & 1 deletion .gitignore
@@ -108,4 +108,4 @@ ENV/
 *.tar

 # object detection eval results
-examples/object_detection/eval/
+examples/torch/object_detection/eval/
1 change: 1 addition & 0 deletions .pylintrc
@@ -37,3 +37,4 @@ good-names = logger,fn
 [DESIGN]
 max-statements=60
 max-branches=13
+max-parents=9
5 changes: 3 additions & 2 deletions MANIFEST.in
@@ -1,3 +1,4 @@
graft nncf/extensions
graft nncf/hw_configs
graft nncf/torch/extensions
graft nncf/common/hardware/configs
include LICENSE
include licensing/third-party-programs.txt
360 changes: 229 additions & 131 deletions README.md

Large diffs are not rendered by default.

53 changes: 53 additions & 0 deletions ReleaseNotes.md
@@ -7,6 +7,59 @@ samples distributed with the code. The samples demonstrate the usage of compression algorithms for
public models and datasets for three different use cases: Image Classification, Object Detection,
and Semantic Segmentation.

## New in Release 2.0:
- Added TensorFlow 2.4.2 support - NNCF can now be used to apply the compression algorithms to models originally trained in TensorFlow.
  NNCF with TensorFlow backend supports the following features:
  - Compression algorithms:
    - Quantization (with HW-specific targeting aligned with PyTorch)
    - Sparsity:
      - Magnitude Sparsity
      - RB Sparsity
    - Filter pruning
  - Support only for Keras models consisting of standard Keras layers and created via:
    - Keras Sequential API
    - Keras Functional API
  - Automatic, configurable model graph transformation to obtain the compressed model.
  - Distributed training on multiple GPUs on one machine is supported using `tf.distribute.MirroredStrategy`.
  - Exporting compressed models to SavedModel or Frozen Graph format, ready to use with the OpenVINO™ toolkit.
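As a minimal sketch, an NNCF configuration combining two of the algorithms above for a Keras model might look like the following. The key names follow the NNCF JSON config schema as the editor understands it, and the concrete values (input shape, sparsity targets) are placeholders, not recommendations:

```python
# Illustrative NNCF config combining quantization and magnitude sparsity for a
# Keras model. Key names follow the NNCF JSON config schema; the concrete
# values (input shape, sparsity targets) are placeholders.
nncf_config = {
    "input_info": {"sample_size": [1, 224, 224, 3]},  # NHWC input shape
    "compression": [
        {"algorithm": "quantization"},  # INT8 quantization with default params
        {
            "algorithm": "magnitude_sparsity",
            "sparsity_init": 0.05,
            "params": {"sparsity_target": 0.5},  # zero out 50% of the weights
        },
    ],
}
```

Such a dictionary would typically be wrapped in an `NNCFConfig` object and passed, together with the original Keras model, to the TensorFlow backend's model-compression entry point.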

- Added model compression samples for NNCF with TensorFlow backend:
  - Classification
    - Keras training loop.
    - Models from the `tf.keras.applications` module (ResNets, MobileNets, Inception, etc.) are supported.
    - TensorFlow Datasets (TFDS) and TFRecords (ImageNet2012, Cifar100, Cifar10) are supported.
    - Compression results are claimed for MobileNet V2, MobileNet V3 small, MobileNet V3 large, ResNet50 and Inception V3.
  - Object Detection
    - Custom training loop.
    - TensorFlow Datasets (TFDS) and TFRecords for COCO2017 are supported.
    - Compression results are claimed for RetinaNet and YOLOv4.
  - Instance Segmentation
    - Custom training loop.
    - TFRecords for COCO2017 are supported.
    - Compression results are claimed for Mask R-CNN.

- Accuracy-aware training is now available for filter pruning and sparsity in order to achieve the best compression results within a given accuracy drop threshold in a fully automated fashion.
- Framework-specific checkpoints produced with NNCF now have NNCF-specific compression state information included, so that the exact compressed model state can be restored/loaded without having to provide the same NNCF config file that was used during the creation of the NNCF-compressed checkpoint.
- Common interface for compression methods for both PyTorch and TensorFlow backends (https://github.com/openvinotoolkit/nncf/tree/develop/nncf/api).
- (PyTorch) Added an option to specify an effective learning rate multiplier for the trainable parameters of the compression algorithms via NNCF config, for finer control over which should tune faster - the underlying FP32 model weights or the compression parameters.
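For illustration, such a multiplier would be set in the NNCF config. The `compression_lr_multiplier` parameter name is assumed here, and the value is a placeholder chosen only to show the mechanism:

```python
# Sketch of a PyTorch-backend NNCF config where the trainable compression
# parameters are tuned 10x faster than the underlying FP32 model weights.
# "compression_lr_multiplier" is the assumed parameter name; 10.0 is a
# placeholder value for illustration.
nncf_config = {
    "input_info": {"sample_size": [1, 3, 224, 224]},  # NCHW input for PyTorch
    "compression": {
        "algorithm": "quantization",
        "compression_lr_multiplier": 10.0,
    },
}
```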
- (PyTorch) Unified scales for concat operations - the per-tensor quantizers that affect the concat operations will now have identical scales, so that the resulting concatenated tensor can be represented without loss of accuracy w.r.t. the concatenated subcomponents.
- (TensorFlow) Algo-mixing: Added configuration files and reference checkpoints for filter-pruned + quantized models: ResNet50@ImageNet2012 (40% of filters pruned + INT8), RetinaNet@COCO2017 (40% of filters pruned + INT8).
- (Experimental, PyTorch) [Learned Global Ranking](https://arxiv.org/abs/1904.12368) filter pruning mechanism for better pruning ratios with less accuracy drop for a broad range of models has been implemented.
- (Experimental, PyTorch) Knowledge distillation is supported, ready to be used with any compression algorithm to produce an additional loss source computed between the compressed model and its uncompressed counterpart.

Breaking changes:
- `CompressionLevel` has been renamed to `CompressionStage`
- `"ignored_scopes"` and `"target_scopes"` no longer allow prefix matching - use the full-fledged regular expression approach via the `{re}` prefix if anything more than an exact match is desired.
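The difference can be sketched in plain Python: under the old behavior a scope entry matched any scope name it was a prefix of, while the new behavior requires either an exact match or an explicit `{re}`-prefixed regular expression. The helper below is illustrative only, not NNCF's actual implementation, and the scope names are made up:

```python
import re

def scope_matches(entry: str, scope_name: str) -> bool:
    """Illustrative matcher for the new "ignored_scopes" semantics:
    exact match, or a full regex match when the entry starts with {re}."""
    if entry.startswith("{re}"):
        return re.fullmatch(entry[len("{re}"):], scope_name) is not None
    return entry == scope_name  # prefix matching is no longer performed

# An exact match still works:
print(scope_matches("ResNet/Conv2d[conv1]", "ResNet/Conv2d[conv1]"))      # True
# A bare prefix no longer matches:
print(scope_matches("ResNet/Conv2d", "ResNet/Conv2d[conv1]"))             # False
# Anything broader must be spelled out as a regex:
print(scope_matches(r"{re}ResNet/Conv2d\[.*\]", "ResNet/Conv2d[conv1]"))  # True
```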
- (PyTorch) Removed version-agnostic name mapping for ReLU operations, i.e. NNCF configs that referenced "RELU" (all caps) as an operation name will now have to reference an exact ReLU PyTorch function name such as "relu" or "relu_".
- (PyTorch) Removed the example of code modifications (Git patches and base commit IDs are provided) for the [mmdetection](https://github.com/open-mmlab/mmdetection) repository.
- Batchnorm adaptation "forgetting" step has been removed since it has been observed to introduce accuracy degradation; the "num_bn_forget_steps" parameter in the corresponding NNCF config section has been removed.
- Framework-specific requirements are no longer installed during `pip install nncf` or `python setup.py install` and are assumed to be present in the user's environment; pip's "extras" syntax must be used to install the BKC requirements, e.g. by executing `pip install nncf[tf]`, `pip install nncf[torch]` or `pip install nncf[tf,torch]`.
- `"quantizable_subgraph_patterns"` option removed from the NNCF config.

Bugfixes:
- (PyTorch) Fixed a hang with batchnorm adaptation being applied in DDP mode.
- (PyTorch) Fixed tracing of the operations that return `NotImplemented`.

## New in Release 1.7.1:
Bugfixes:
- Fixed a bug where compressed models that were supposed to return named tuples actually returned regular tuples.
112 changes: 0 additions & 112 deletions beta/README.md

This file was deleted.

