support for round robin allocation of cuda cards to workers #36

Merged

Conversation

richardbeare
Contributor

A gunicorn post_fork hook has been added to set CUDA_VISIBLE_DEVICES, which determines the device torch will use.

An app-level config variable "APP_CUDA_DEVICE_COUNT" is required to indicate how many devices are to be used.

The devices are allocated to the container in the Docker Compose configuration.
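For reference, a minimal sketch of what such a post_fork hook can look like in a gunicorn config file; the round-robin strategy shown here (worker.age modulo the device count) and reading "APP_CUDA_DEVICE_COUNT" from the environment are illustrative assumptions, not necessarily the exact code merged in this PR.

```python
# gunicorn.conf.py -- illustrative sketch, not the exact code from this PR.
import os


def post_fork(server, worker):
    """Pin each forked worker to one GPU in round-robin order."""
    device_count = int(os.environ.get("APP_CUDA_DEVICE_COUNT", "0"))
    if device_count > 0:
        # worker.age increases with every worker gunicorn spawns, so taking
        # it modulo the configured device count cycles through the GPUs.
        device_id = worker.age % device_count
        os.environ["CUDA_VISIBLE_DEVICES"] = str(device_id)
        server.log.info("worker %s pinned to CUDA device %s", worker.pid, device_id)
```

Setting the variable in post_fork works because each worker is a separate process and torch has not yet initialised CUDA at that point, so the mask takes effect for everything the worker loads afterwards.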

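With the mask in place, torch inside a worker sees only its assigned card, re-indexed as device 0. A quick check (hypothetical snippet, not part of the PR) illustrates this:

```python
# Run inside a worker after post_fork has set CUDA_VISIBLE_DEVICES.
import os
import torch

print("CUDA_VISIBLE_DEVICES =", os.environ.get("CUDA_VISIBLE_DEVICES"))
if torch.cuda.is_available():
    # Only the masked card is visible, so device_count() is 1 and
    # "cuda:0" maps to the physical GPU chosen by the round-robin hook.
    print("visible GPUs:", torch.cuda.device_count())
    print("cuda:0 is", torch.cuda.get_device_name(0))
```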
@vladd-bit vladd-bit merged commit 5dbef5d into CogStack:master Oct 9, 2023
0 of 2 checks passed