# Writing generic CI jobs
Most PyTorch projects are built and tested on Linux using both CPUs and GPUs. Unfortunately, setting up machines and workflows that run both successfully involves a lot of boilerplate. Fortunately, we've built a generic solution that lets you focus on writing good tests and automation, without having to maintain the infrastructure code that sets up your runners!
For the up-to-date arguments you can use, please reference linux_job.yml. The list of available runners can be found in the scale_config.yml file.
```yaml
name: Test build/test linux cpu

on:
  pull_request:
  push:
    branches:
      - main

jobs:
  build-test:
    uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
    with:
      script: |
        pip install -r requirements.txt
        python setup.py develop
        make test
```
```yaml
name: Test build/test linux gpu

on:
  pull_request:

jobs:
  build-test:
    uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
    with:
      runner: linux.4xlarge.nvidia.gpu
      gpu-arch-type: cuda
      gpu-arch-version: "11.6"
      script: |
        pip install -r requirements.txt
        python setup.py develop
        make test
```
```yaml
name: Test build/test linux gpu

on:
  pull_request:

jobs:
  build:
    uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
    with:
      runner: linux.2xlarge
      # Specify cuda here to get the correct build image that contains the CUDA build toolkit
      gpu-arch-type: cuda
      gpu-arch-version: "11.6"
      upload-artifact: my-build-artifact
      script: |
        pip install -r requirements.txt
        # ${RUNNER_ARTIFACT_DIR} always points to where the workflow expects to find artifacts
        python setup.py bdist_wheel -d "${RUNNER_ARTIFACT_DIR}"
  test:
    # Specify the dependency here so these jobs run sequentially
    needs: build
    uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
    with:
      runner: linux.4xlarge.nvidia.gpu
      gpu-arch-type: cuda
      gpu-arch-version: "11.6"
      download-artifact: my-build-artifact
      script: |
        pip install -r requirements.txt
        pip install "${RUNNER_ARTIFACT_DIR}"/*.whl
        make test
```
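Inside the `script`, `${RUNNER_ARTIFACT_DIR}` behaves like an ordinary directory: anything your build step writes there is what `upload-artifact` picks up. A minimal local sketch of that convention (the `mktemp` fallback and the wheel filename are stand-ins for illustration; in CI the workflow sets the variable for you):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for the directory the reusable workflow exports;
# in CI this is already set, here we fake it for a local dry run.
RUNNER_ARTIFACT_DIR="${RUNNER_ARTIFACT_DIR:-$(mktemp -d)}"

# Anything placed here is uploaded when `upload-artifact` is set,
# and reappears in the downstream job when `download-artifact` is set.
echo "placeholder wheel contents" > "${RUNNER_ARTIFACT_DIR}/example-0.1-py3-none-any.whl"

ls "${RUNNER_ARTIFACT_DIR}"
```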
We added support for secrets in reusable workflows. This is accomplished by exporting the secrets and providing them as environment variables. The variable names follow this formatting (expressed in Python):

```python
re.sub(r'[^a-zA-Z0-9_]', '', f'SECRET_{x.upper()}'.replace('-', '_'))
```
By default all secrets are exported and available. If you would rather limit the available secrets (recommended), set the secrets-env workflow input to a space-separated list of secret names, such as:
```yaml
  test-secrets-filter-var:
    uses: ./.github/workflows/linux_job.yml
    secrets: inherit
    with:
      job-name: "test-secrets-filter-var"
      runner: linux.2xlarge
      secrets-env: "NOT_A_SECRET_USED_FOR_TESTING"
      test-infra-repository: ${{ github.repository }}
      test-infra-ref: ${{ github.ref }}
      gpu-arch-type: cpu
      gpu-arch-version: ""
      script: |
        [[ "${SECRET_NOT_A_SECRET_USED_FOR_TESTING}" == "ASDF" ]] || exit 1
```
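The naming rule above can be checked locally with a short Python sketch (the `secret_env_name` helper is our own wrapper around the documented expression, not part of the workflow):

```python
import re


def secret_env_name(name: str) -> str:
    # Mirror of the naming rule above: SECRET_ prefix, uppercase,
    # dashes to underscores, then strip any remaining invalid characters.
    return re.sub(r'[^a-zA-Z0-9_]', '', f'SECRET_{name.upper()}'.replace('-', '_'))


# A secret named "not-a-secret-used-for-testing" is exposed to the
# script as the environment variable below.
print(secret_env_name('not-a-secret-used-for-testing'))
# SECRET_NOT_A_SECRET_USED_FOR_TESTING
```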
For the up-to-date arguments you can use, please reference windows_job.yml.
```yaml
name: Test build/test windows cpu

on:
  pull_request:
  push:
    branches:
      - main

jobs:
  build-test:
    uses: pytorch/test-infra/.github/workflows/windows_job.yml@main
    with:
      script: |
        pip install -r requirements.txt
        python setup.py develop
        make test
```
```yaml
name: Test build/test windows gpu

on:
  pull_request:

jobs:
  build-test:
    uses: pytorch/test-infra/.github/workflows/windows_job.yml@main
    with:
      runner: windows.8xlarge.nvidia.gpu
      gpu-arch-type: cuda
      gpu-arch-version: "11.6"
      script: |
        pip install -r requirements.txt
        python setup.py develop
        make test
```
```yaml
name: Test build/test windows gpu

on:
  pull_request:

jobs:
  build:
    uses: pytorch/test-infra/.github/workflows/windows_job.yml@main
    with:
      runner: windows.4xlarge
      upload-artifact: my-build-artifact
      script: |
        pip install -r requirements.txt
        # ${RUNNER_ARTIFACT_DIR} always points to where the workflow expects to find artifacts
        python setup.py bdist_wheel -d "${RUNNER_ARTIFACT_DIR}"
  test:
    # Specify the dependency here so these jobs run sequentially
    needs: build
    uses: pytorch/test-infra/.github/workflows/windows_job.yml@main
    with:
      runner: windows.8xlarge.nvidia.gpu
      download-artifact: my-build-artifact
      script: |
        pip install -r requirements.txt
        pip install "${RUNNER_ARTIFACT_DIR}"/*.whl
        make test
```
For the up-to-date arguments you can use, please reference macos_job.yml.
```yaml
name: Test build/test macos cpu

on:
  pull_request:
  push:
    branches:
      - main

jobs:
  build-test:
    uses: pytorch/test-infra/.github/workflows/macos_job.yml@main
    with:
      script: |
        pip install -r requirements.txt
        python setup.py develop
        make test
```
```yaml
name: Test build/test macOS Apple silicon

on:
  pull_request:

jobs:
  build-test:
    uses: pytorch/test-infra/.github/workflows/macos_job.yml@main
    with:
      runner: macos-m1-12
      script: |
        pip install -r requirements.txt
        python setup.py develop
        make test
```