🐛 Describe the bug
I tried to install torchchat on a new machine with a fresh clone and, following the instructions, ran into this issue.
$ git describe --all --long
heads/main-0-g56be609
[...]
$ ./install/install_requirements.sh # in venv
[...]
INFO: pip is looking at multiple versions of lm-eval to determine which version is compatible with other requirements. This could take a while.
ERROR: Ignored the following versions that require a different python version: 0.55.2 Requires-Python <3.5; 1.21.2 Requires-Python >=3.7,<3.11; 1.21.3 Requires-Python >=3.7,<3.11; 1.21.4 Requires-Python >=3.7,<3.11; 1.21.5 Requires-Python >=3.7,<3.11; 1.21.6 Requires-Python >=3.7,<3.11; 1.26.0 Requires-Python <3.13,>=3.9; 1.26.1 Requires-Python <3.13,>=3.9
ERROR: Could not find a version that satisfies the requirement torch>=1.8 (from lm-eval) (from versions: none)
ERROR: No matching distribution found for torch>=1.8
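The "(from versions: none)" part indicates that, while resolving lm-eval's torch>=1.8 dependency, pip could not see any torch distribution compatible with the active Python 3.13 interpreter on the index it was querying. A minimal diagnostic/workaround sketch, assuming a Python 3.11 interpreter is also installed on the machine (the version and venv path are illustrative):

# confirm which interpreter the venv is using
$ python -c "import sys; print(sys.version)"
# list the torch versions pip can resolve for this interpreter (pip >= 21.2, experimental subcommand)
$ pip index versions torch
# recreate the venv with a Python version that torch publishes wheels for, then retry
$ python3.11 -m venv .venv
$ source .venv/bin/activate
$ ./install/install_requirements.sh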
Commenting out this package (lm-eval) works as a workaround.
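For reference, the workaround amounts to disabling the lm-eval entry in the requirements file consumed by install_requirements.sh; the file path and pinned version below are hypothetical placeholders, not the repository's exact contents:

# install/requirements.txt (path and pin are hypothetical)
# lm_eval==0.4.2    # disabled: its torch>=1.8 dependency cannot be resolved under Python 3.13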
Versions
Collecting environment information...
PyTorch version: 2.6.0.dev20241213
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.4)
CMake version: version 3.31.2
Libc version: N/A
Python version: 3.13.1 (main, Dec 3 2024, 17:59:52) [Clang 16.0.0 (clang-1600.0.26.4)] (64-bit runtime)
Python platform: macOS-15.1-arm64-arm-64bit-Mach-O
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M4 Pro
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.6.0.dev20241213
[pip3] torchao==0.8.0+git2f97b095
[pip3] torchtune==0.5.0.dev20241126+cpu
[pip3] torchvision==0.22.0.dev20241213
[conda] Could not collect