
is this only for linux? #957

Open
FurkanGozukara opened this issue Sep 26, 2024 · 15 comments

Comments

@FurkanGozukara

I installed it on Windows and it is failing on:

from torchao.quantization import quantize_

pip freeze

Microsoft Windows [Version 10.0.19045.4894]
(c) Microsoft Corporation. All rights reserved.

R:\CogVideoX_v1\CogVideoX_SECourses\venv\Scripts>activate

(venv) R:\CogVideoX_v1\CogVideoX_SECourses\venv\Scripts>pip freeze
accelerate==0.34.2
aiofiles==23.2.1
annotated-types==0.7.0
anyio==4.6.0
certifi==2024.8.30
charset-normalizer==3.3.2
click==8.1.7
colorama==0.4.6
contourpy==1.3.0
cycler==0.12.1
decorator==4.4.2
diffusers @ git+https://github.com/huggingface/diffusers.git@665c6b47a23bc841ad1440c4fe9cbb1782258656
distro==1.9.0
einops==0.8.0
exceptiongroup==1.2.2
fastapi==0.115.0
ffmpy==0.4.0
filelock==3.16.1
fonttools==4.54.1
fsspec==2024.9.0
gradio==4.44.0
gradio_client==1.3.0
h11==0.14.0
httpcore==1.0.5
httpx==0.27.2
huggingface-hub==0.25.1
idna==3.10
imageio==2.35.1
imageio-ffmpeg==0.5.1
importlib_metadata==8.5.0
importlib_resources==6.4.5
Jinja2==3.1.4
jiter==0.5.0
kiwisolver==1.4.7
markdown-it-py==3.0.0
MarkupSafe==2.1.5
matplotlib==3.9.2
mdurl==0.1.2
moviepy==1.0.3
mpmath==1.3.0
networkx==3.3
numpy==1.26.0
openai==1.48.0
opencv-python==4.10.0.84
orjson==3.10.7
packaging==24.1
pandas==2.2.3
Pillow==9.5.0
proglog==0.1.10
psutil==6.0.0
pydantic==2.9.2
pydantic_core==2.23.4
pydub==0.25.1
Pygments==2.18.0
pyparsing==3.1.4
python-dateutil==2.9.0.post0
python-multipart==0.0.10
pytz==2024.2
PyYAML==6.0.2
regex==2024.9.11
requests==2.32.3
rich==13.8.1
ruff==0.6.8
safetensors==0.4.5
scikit-video==1.1.11
scipy==1.14.1
semantic-version==2.10.0
sentencepiece==0.2.0
shellingham==1.5.4
six==1.16.0
sniffio==1.3.1
spandrel==0.4.0
starlette==0.38.6
sympy==1.13.3
tokenizers==0.20.0
tomlkit==0.12.0
torch==2.4.1+cu124
torchao==0.1
torchvision==0.19.1+cu124
tqdm==4.66.5
transformers==4.45.0
triton @ https://huggingface.co/MonsterMMORPG/SECourses/resolve/main/triton-3.0.0-cp310-cp310-win_amd64.whl
typer==0.12.5
typing_extensions==4.12.2
tzdata==2024.2
urllib3==2.2.3
uvicorn==0.30.6
websockets==12.0
xformers==0.0.28.post1
zipp==3.20.2

(venv) R:\CogVideoX_v1\CogVideoX_SECourses\venv\Scripts>
Traceback (most recent call last):
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\transformers\utils\import_utils.py", line 1764, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "C:\Python3108\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\transformers\models\t5\modeling_t5.py", line 38, in <module>
    from ...modeling_utils import PreTrainedModel
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\transformers\modeling_utils.py", line 58, in <module>
    from .quantizers import AutoHfQuantizer, HfQuantizer
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\transformers\quantizers\__init__.py", line 14, in <module>
    from .auto import AutoHfQuantizer, AutoQuantizationConfig
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\transformers\quantizers\auto.py", line 42, in <module>
    from .quantizer_torchao import TorchAoHfQuantizer
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\transformers\quantizers\quantizer_torchao.py", line 35, in <module>
    from torchao.quantization import quantize_
ImportError: cannot import name 'quantize_' from 'torchao.quantization' (R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\torchao\quantization\__init__.py)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\diffusers\utils\import_utils.py", line 830, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "C:\Python3108\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\diffusers\pipelines\cogvideo\pipeline_cogvideox.py", line 21, in <module>
    from transformers import T5EncoderModel, T5Tokenizer
  File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\transformers\utils\import_utils.py", line 1755, in __getattr__
    value = getattr(module, name)
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\transformers\utils\import_utils.py", line 1754, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\transformers\utils\import_utils.py", line 1766, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import transformers.models.t5.modeling_t5 because of the following error (look up to see its traceback):
cannot import name 'quantize_' from 'torchao.quantization' (R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\torchao\quantization\__init__.py)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Python3108\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Python3108\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "c:\program files\microsoft visual studio\2022\community\common7\ide\extensions\microsoft\python\core\debugpy\__main__.py", line 39, in <module>
    cli.main()
  File "c:\program files\microsoft visual studio\2022\community\common7\ide\extensions\microsoft\python\core\debugpy/..\debugpy\server\cli.py", line 430, in main
    run()
  File "c:\program files\microsoft visual studio\2022\community\common7\ide\extensions\microsoft\python\core\debugpy/..\debugpy\server\cli.py", line 284, in run_file
    runpy.run_path(target, run_name="__main__")
  File "c:\program files\microsoft visual studio\2022\community\common7\ide\extensions\microsoft\python\core\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 321, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "c:\program files\microsoft visual studio\2022\community\common7\ide\extensions\microsoft\python\core\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 135, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "c:\program files\microsoft visual studio\2022\community\common7\ide\extensions\microsoft\python\core\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 124, in _run_code
    exec(code, run_globals)
  File "R:\CogVideoX_v1\CogVideoX_SECourses\app.py", line 14, in <module>
    from diffusers import (
  File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\diffusers\utils\import_utils.py", line 821, in __getattr__
    value = getattr(module, name)
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\diffusers\utils\import_utils.py", line 821, in __getattr__
    value = getattr(module, name)
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\diffusers\utils\import_utils.py", line 820, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\diffusers\utils\import_utils.py", line 832, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import diffusers.pipelines.cogvideo.pipeline_cogvideox because of the following error (look up to see its traceback):
Failed to import transformers.models.t5.modeling_t5 because of the following error (look up to see its traceback):
cannot import name 'quantize_' from 'torchao.quantization' (R:\CogVideoX_v1\CogVideoX_SECourses\venv\lib\site-packages\torchao\quantization\__init__.py)
Press any key to continue . . .
@jerryzh168
Contributor

jerryzh168 commented Sep 26, 2024

It might be that the torchao version is too low ("torchao==0.1"); we introduced quantize_ in 0.4.0, I think: https://github.com/pytorch/ao/releases. In the meantime, our packages are only available on Linux and Mac right now, I think.

@jcaip
Contributor

jcaip commented Sep 26, 2024

Can you try updating torchao? I don't think the top-level quantize_ API is available in 0.1.

But from what I understand, torch.compile() does not work on Windows because Triton lacks Windows support, and we use Triton to codegen our quantization kernels, so I wouldn't expect this to work on Windows.
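One way to fail early instead of hitting the deep import error above is to check the installed torchao version string before relying on quantize_. This is a minimal stdlib-only sketch; supports_quantize is a hypothetical helper (not a torchao API), and the (0, 4, 0) threshold is an assumption based on the release notes mentioned above:

```python
# Minimal sketch: decide whether a torchao version string is new enough
# for the quantize_ API (assumed to have appeared in 0.4.0).
# supports_quantize is a hypothetical helper, not part of torchao.
def supports_quantize(version: str, minimum=(0, 4, 0)) -> bool:
    release = version.split("+")[0]            # drop local part, e.g. "+git83d5b63"
    parts = tuple(int(p) for p in release.split(".")[:3])
    parts += (0,) * (3 - len(parts))           # pad "0.1" -> (0, 1, 0)
    return parts >= minimum

print(supports_quantize("0.1"))                # False: the version from pip freeze above
print(supports_quantize("0.6.0+git83d5b63"))   # True
```

In practice you would feed this importlib.metadata.version("torchao") and raise a readable error when it returns False.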

@FurkanGozukara
Author

I am about to test the latest, @jerryzh168. Thank you, I will report here.

@FurkanGozukara
Author

FurkanGozukara commented Sep 26, 2024

> It might be that the torchao version is too low ("torchao==0.1"); we introduced quantize_ in 0.4.0, I think: https://github.com/pytorch/ao/releases. In the meantime, our packages are only available on Linux and Mac right now, I think.

The latest version pip finds is:

ERROR: Could not find a version that satisfies the requirement torchao==0.4.0 (from versions: 0.0.1, 0.0.3, 0.1)

How can I install the latest on Windows? Python 3.10, Windows 10.

If you have a wheel link, I can install it directly.

@jerryzh168
Contributor

We don't have a Windows build today, I think. cc @atalman, can you provide a pointer to support a Windows build as well?

@FurkanGozukara
Author

> We don't have a Windows build today, I think. cc @atalman, can you provide a pointer to support a Windows build as well?

Awesome, waiting to test. Thank you so much.

@gau-nernst
Collaborator

You can probably install torchao from source. If you don't need the CUDA extensions, you can do:

USE_CPP=0 pip install git+https://github.com/pytorch/ao

But again, since torch.compile() doesn't work on Windows, it's not very useful.

@abhi-vandit

Is there a way to make the quantizations work on Windows + NVIDIA GPU without torch.compile and the inductor backend? I am mostly concerned about inference speedups.

@Skquark

Skquark commented Sep 28, 2024

I'm also in need of a torchao wheel for Windows to get quantization working for Flux, CogVideoX, etc. in my app. I'm fine without compile, but the other features are really needed to optimize VRAM. I tried installing from GitHub and running setup.py install from a clone, but it gave me errors. Hoping we can run something newer than v0.1 soon. Thanks.

@FurkanGozukara
Author

> I'm also in need of a torchao wheel for Windows to get quantization working for Flux, CogVideoX, etc. in my app. I'm fine without compile, but the other features are really needed to optimize VRAM. I tried installing from GitHub and running setup.py install from a clone, but it gave me errors. Hoping we can run something newer than v0.1 soon. Thanks.

So true.

By that logic (use Linux, not Windows), why do we even have Python on Windows, PyTorch on Windows, or xFormers on Windows, if such things are not necessary on Windows?

I don't get the logic of forcing people to use Linux. If we follow this mindset, why do we have all of these on Windows?

@gau-nernst
Collaborator

If you don't need the CUDA extensions (right now they only back the FPx and sparse marlin kernels, I think), and you don't mind the lack of torch.compile() support, you can install torchao from source on Windows as I mentioned previously:

set USE_CPP=0
pip install git+https://github.com/pytorch/ao

I don't have access to a Windows machine right now, so I just googled how to set an environment variable on Windows. You might need to adjust accordingly.
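For anyone unsure how their particular shell handles environment variables, the same source install can be driven from Python, where USE_CPP is set in the child process environment regardless of cmd vs PowerShell. This is only a sketch of the two steps above; the actual install line is left commented out so nothing runs by accident:

```python
import os
import subprocess
import sys

def build_install_cmd():
    # Same source install as above, via the current interpreter's pip.
    return [sys.executable, "-m", "pip", "install",
            "git+https://github.com/pytorch/ao"]

# USE_CPP=0 skips building the C++/CUDA extensions, as in the shell version.
env = dict(os.environ, USE_CPP="0")
# subprocess.run(build_install_cmd(), env=env, check=True)  # uncomment to install
```

Using sys.executable -m pip guarantees the install lands in the same venv that is running the script.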

You are welcome to improve the torchao experience on Windows. In fact, there are past PRs from the community, including from me, that helped build torchao successfully on Windows, including CUDA extension support.

@abhi-vandit

> If you don't need the CUDA extensions (right now they only back the FPx and sparse marlin kernels, I think), and you don't mind the lack of torch.compile() support, you can install torchao from source on Windows as I mentioned previously:
>
> set USE_CPP=0
> pip install git+https://github.com/pytorch/ao
>
> I don't have access to a Windows machine right now, so I just googled how to set an environment variable on Windows. You might need to adjust accordingly.
>
> You are welcome to improve the torchao experience on Windows. In fact, there are past PRs from the community, including from me, that helped build torchao successfully on Windows, including CUDA extension support.

Thanks for the reply. I have a couple of clarifying questions. It seems that previously one was able to build torchao with CUDA extension support on Windows; what changed since then? Also, since torch.compile is not available on Windows, what kind of speedups (if any) on GPU can we expect for normal PyTorch models quantized by torchao?

@gau-nernst
Collaborator

@abhi-vandit Since there is no Windows CI, there is no guarantee that new CUDA extensions in torchao can be built correctly on Windows. However, most of the errors usually come from Unix-specific features, so the fix is usually simple, e.g. #951, #396. I think torchao welcomes small fixes like these.

I mentioned not building the CUDA extensions previously since it's usually quite involved to set up the C++ and CUDA compilers on Windows. So if you don't need the CUDA extensions, it's not really worth the effort.

> what kind of speedups (if any) on GPU can we expect for normal PyTorch models quantized by torchao

I think most likely you will only see a slowdown. Perhaps you can still get some memory savings.
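To put a rough number on "some memory savings": int8 weight-only quantization roughly halves weight memory versus fp16, independent of whether the matmuls themselves get faster. This is illustrative back-of-the-envelope arithmetic only; the 5B parameter count is a made-up example, not a measurement of any model discussed here:

```python
def weight_bytes(n_params: int, bits_per_weight: int) -> int:
    # Storage for the weights alone; activations and runtime overhead ignored.
    return n_params * bits_per_weight // 8

n = 5_000_000_000                       # hypothetical 5B-parameter model
fp16 = weight_bytes(n, 16)              # 10_000_000_000 bytes (~10 GB)
int8 = weight_bytes(n, 8)               #  5_000_000_000 bytes (~5 GB)
print(f"saved ~{(fp16 - int8) / 1e9:.0f} GB")   # saved ~5 GB
```

Real savings depend on which layers are quantized and on per-group scale/zero-point overhead, so treat this as an upper bound on the weight-memory reduction.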

@abhi-vandit

@gau-nernst Thanks for the prompt reply. I hope this changes in the near future and we are able to use quantization for inference-time speedups on Windows as well.

@FurkanGozukara
Author

> If you don't need the CUDA extensions (right now they only back the FPx and sparse marlin kernels, I think), and you don't mind the lack of torch.compile() support, you can install torchao from source on Windows as I mentioned previously:
>
> set USE_CPP=0
> pip install git+https://github.com/pytorch/ao
>
> I don't have access to a Windows machine right now, so I just googled how to set an environment variable on Windows. You might need to adjust accordingly.
>
> You are welcome to improve the torchao experience on Windows. In fact, there are past PRs from the community, including from me, that helped build torchao successfully on Windows, including CUDA extension support.

This worked:

(venv) C:\Users\Furkan\Videos\a\venv\Scripts>pip freeze
filelock==3.13.1
fsspec==2024.2.0
Jinja2==3.1.3
MarkupSafe==2.1.5
mpmath==1.3.0
networkx==3.2.1
numpy==1.26.3
pillow==10.2.0
sympy==1.12
torch==2.4.1+cu124
torchao==0.6.0+git83d5b63
torchaudio==2.4.1+cu124
torchvision==0.19.1+cu124
typing_extensions==4.9.0
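A quick way to confirm a rebuilt install actually exposes the API that failed at the top of this thread, without crashing when it doesn't, is a guarded attribute probe. The helper below is generic stdlib code, not a torchao utility:

```python
import importlib

def api_available(module: str, attr: str) -> bool:
    # True only if the module imports cleanly AND exposes the attribute.
    try:
        mod = importlib.import_module(module)
    except ImportError:
        return False
    return hasattr(mod, attr)

# In an environment like the pip freeze above (torchao 0.6.0 from source),
# this should report True; under torchao==0.1 it reports False.
print(api_available("torchao.quantization", "quantize_"))
```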

7 participants