Timm_efficientdet NotImplementedError: The original model code forces the use of CUDA. #492

Open
mengfei25 opened this issue Jun 27, 2024 · 2 comments

Comments

@mengfei25 (Contributor)

🐛 Describe the bug

torchbench_amp_fp16_training
xpu train timm_efficientdet
Traceback (most recent call last):
  File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/benchmarks/dynamo/common.py", line 4177, in run
    ) = runner.load_model(
  File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/benchmarks/dynamo/torchbench.py", line 320, in load_model
    benchmark = benchmark_cls(
  File "/home/sdp/actions-runner/_work/torch-xpu-ops/benchmark/torchbenchmark/util/model.py", line 39, in __call__
    obj = type.__call__(cls, *args, **kwargs)
  File "/home/sdp/actions-runner/_work/torch-xpu-ops/benchmark/torchbenchmark/models/timm_efficientdet/__init__.py", line 55, in __init__
    raise NotImplementedError("The original model code forces the use of CUDA.")
NotImplementedError: The original model code forces the use of CUDA.

model_fail_to_load
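For context, below is a minimal sketch of the kind of device guard that produces this error, assuming the TorchBench wrapper simply rejects non-CUDA devices in its constructor; the class and argument names are illustrative, not the exact code in torchbenchmark/models/timm_efficientdet/__init__.py.

# Hypothetical sketch of the guard raising the error above: an xpu run is
# rejected before the model is even constructed.
class TimmEfficientDetBenchmark:
    def __init__(self, test, device, batch_size=None):
        if device != "cuda":
            raise NotImplementedError(
                "The original model code forces the use of CUDA."
            )
        self.test = test
        self.device = device
        self.batch_size = batch_size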

Versions

torch-xpu-ops: 31c4001
pytorch: 0f81473d7b4a1bf09246410712df22541be7caf3 + PRs: 127277,129120
device: PVC 1100, 803.61, 0.5.1

@weishi-deng (Contributor)

This model requires adding XPU support to both the benchmark repo and the third-party efficientdet-pytorch repo, since the latter hard-codes CUDA, e.g. in https://github.com/rwightman/efficientdet-pytorch/blob/master/effdet/data/loader.py. A rough sketch of the kind of change involved is shown below.
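The actual fix is in the PR linked in the next comment; the snippet here is only a hedged illustration, assuming the effdet data loader pins batches with CUDA-only calls such as tensor.cuda(non_blocking=True). The class and parameter names are hypothetical, not the effdet API.

import torch

# Hypothetical device-agnostic prefetch wrapper: instead of hard-coding
# .cuda(), thread a device argument through and use .to(device), which works
# for cuda, xpu, or cpu alike.
class DevicePrefetchLoader:
    def __init__(self, loader, device="cuda"):
        self.loader = loader
        self.device = torch.device(device)

    def __iter__(self):
        for batch, target in self.loader:
            # Move each batch onto the requested device without assuming CUDA.
            yield (
                batch.to(self.device, non_blocking=True),
                target.to(self.device, non_blocking=True),
            )

    def __len__(self):
        return len(self.loader)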

@weishi-deng (Contributor)

PR pytorch/benchmark#2374

@chuanqi129 modified the milestones: PT2.5, PT2.6 Jul 18, 2024