Does torch.export preserve the quantize_per_tensor/dequantize_per_tensor ops? #986
Comments
@jerryzh168 we have a way to preserve these ops in export, right?
I'll add a tutorial for this, but in the meantime you can try running ao/test/integration/test_integration.py, lines 1496 to 1497 (at 9229df9).
Thanks - are the ops in
@justinchuby we tend to move away from
Could you point me to where the torch.ops.quant.* ops are declared, and is there a list of all available ops?
@justinchuby you can search for the ops annotated with
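One way to get a concrete list is to enumerate the registered operator schemas for a namespace. The sketch below assumes the decomposed quantization ops live in the `quantized_decomposed` namespace and are registered by importing `torch.ao.quantization.fx._decomposed` (both are assumptions about the PyTorch version in use); `_jit_get_all_schemas` is a private API used here purely for exploration.

```python
import torch
# Assumption: importing this module registers the quantized_decomposed ops
# in the running PyTorch version.
import torch.ao.quantization.fx._decomposed  # noqa: F401

# _jit_get_all_schemas is a private API; used here only to explore which
# operators are registered under the quantized_decomposed namespace.
names = sorted({s.name for s in torch._C._jit_get_all_schemas()
                if s.name.startswith("quantized_decomposed::")})
print(names)
```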
Does torch.export preserve the quantize_per_tensor/dequantize_per_tensor ops? I was testing with
There I don't seem to see the quant/dequant ops. I was hoping they are preserved so that converting to ONNX is easier. Or is there a different convention for representing the quantized operations?