When I try to use this executor on a compute server with an RTX 3090, I get the following warning:

```
executor0_1 | NVIDIA GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
executor0_1 | The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
```

I'm using the latest-gpu tag of the executor, inside a Docker container generated by running jina export docker-compose on this flow:
Unfortunately, we build our Docker image with the default PyTorch version available on PyPI, which is incompatible with the RTX 3090. Since that default build is the one that works for most people, we have to stick with it.

In your case, unfortunately, the only current solution is to build the Docker image for the TransformerTorchEncoder yourself.
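For reference, here is a minimal sketch of what such a custom image could look like. This is an assumption, not the official build: the base image, the PyTorch version, and the executor file layout (requirements.txt, config.yml) are all illustrative. The key point is installing a PyTorch wheel built against CUDA 11.1, whose binaries include sm_86 kernels for the RTX 3090, instead of the default PyPI build.

```Dockerfile
# Hypothetical Dockerfile for a custom TransformerTorchEncoder image.
# Base image, versions, and file layout are assumptions for illustration.
FROM nvidia/cuda:11.1.1-cudnn8-runtime-ubuntu20.04

RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

# Replace the default PyPI torch with a wheel built against CUDA 11.1,
# which ships sm_86 kernels and therefore supports the RTX 3090.
RUN pip3 install torch==1.8.1+cu111 \
    -f https://download.pytorch.org/whl/torch_stable.html

# Copy the executor source (cloned beforehand) and install its
# remaining dependencies on top of the pinned torch build.
COPY . /executor
RUN pip3 install -r /executor/requirements.txt
WORKDIR /executor

ENTRYPOINT ["jina", "executor", "--uses", "config.yml"]
```

After building and tagging the image (e.g. docker build -t transformer_torch_encoder:gpu .), you can point the executor in your flow to that local image instead of the latest-gpu tag and re-run jina export docker-compose. Note that if the executor's requirements.txt pins torch itself, that pin may need to be relaxed so pip does not overwrite the CUDA 11.1 build.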