
Fix links in Introduction to Quantization on PyTorch blog #1801

Open · wants to merge 9 commits into base: site
4 changes: 2 additions & 2 deletions in _posts/2020-3-26-introduction-to-quantization-on-pytorch.md
@@ -41,7 +41,7 @@ We developed three techniques for quantizing neural networks in PyTorch as part
import torch.quantization
quantized_model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
```
-* See the documentation for the function [here](https://pytorch.org/docs/stable/quantization.html#torch.quantization.quantize_dynamic) and an end-to-end example in our tutorials [here](https://pytorch.org/tutorials/advanced/dynamic_quantization_tutorial.html) and [here](https://pytorch.org/tutorials/intermediate/dynamic_quantization_bert_tutorial.html).
+* See the documentation for the function [here](https://pytorch.org/docs/stable/generated/torch.ao.quantization.quantize_dynamic.html) and an end-to-end example in our tutorials [here](https://pytorch.org/tutorials/advanced/dynamic_quantization_tutorial.html) and [here](https://pytorch.org/tutorials/intermediate/dynamic_quantization_bert_tutorial.html).
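
A minimal runnable sketch of the `quantize_dynamic` call quoted above; the two-layer toy model is hypothetical, included only so the snippet executes end to end:

```python
import torch
import torch.quantization

# Hypothetical toy model, added for illustration; any float module
# containing nn.Linear layers works the same way.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 4),
)

# Replace every nn.Linear with a dynamically quantized counterpart:
# weights are converted to int8 once, activations are quantized on the fly.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 16)
print(quantized_model(x).shape)  # torch.Size([1, 4])
```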

2. ### **Post-Training Static Quantization**

@@ -197,7 +197,7 @@ Quantization provides a 4x reduction in the model size and a speedup of 2x to 3x
</div>

### **Accuracy results**
-We also compared the accuracy of static quantized models with the floating point models on ImageNet. For dynamic quantization, we [compared](https://github.com/huggingface/transformers/blob/master/examples/run_glue.py) the F1 score of BERT on the GLUE benchmark for MRPC.
+We also compared the accuracy of static quantized models with the floating point models on ImageNet. For dynamic quantization, we [compared](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py) the F1 score of BERT on the GLUE benchmark for MRPC.
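
A hedged sketch of the kind of size comparison behind the 4x figure above, assuming the Hugging Face `transformers` package and the `bert-base-uncased` checkpoint are available; this illustrates the idea and is not the benchmark script used for the published numbers:

```python
import os
import torch
from transformers import BertModel

# Load the float BERT encoder and quantize its nn.Linear layers to int8.
model = BertModel.from_pretrained("bert-base-uncased")
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

def size_mb(m: torch.nn.Module, path: str = "tmp_state.pt") -> float:
    # Serialize the state dict to disk and report its size in megabytes.
    torch.save(m.state_dict(), path)
    size = os.path.getsize(path) / 1e6
    os.remove(path)
    return size

print(f"fp32 model: {size_mb(model):.1f} MB")
print(f"int8 model: {size_mb(quantized_model):.1f} MB")
```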

#### **Computer Vision Model accuracy**
