I saw #6362, but I couldn't find an example training script. For example, if I have multiple TPU v3-8 VMs, how would I achieve this with SPMD/FSDPv2?
I'm currently sending the commands to all TPU VMs this way:
python3.10 podrun --include-local -- hostname
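For context on what a multi-host FSDPv2 setup implies, here is a minimal sketch (pure Python, no TPU required) of the device-mesh arithmetic involved; the host count is a hypothetical example, not taken from the thread:

```python
# Sketch of the global device-mesh size behind multi-host SPMD/FSDPv2.
# Assumption: each TPU v3-8 VM contributes 8 addressable devices.
num_hosts = 4                      # hypothetical number of v3-8 VMs
devices_per_host = 8               # fixed for a v3-8 host
num_devices = num_hosts * devices_per_host

# FSDPv2 under SPMD typically shards parameters along a single "fsdp"
# mesh axis spanning every device in the slice.
fsdp_mesh_shape = (num_devices, 1)
print(fsdp_mesh_shape)
```

With 4 hosts this prints `(32, 1)`; the actual mesh construction on TPU would go through `torch_xla`'s SPMD APIs rather than plain tuples.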
Update: I found https://github.com/pytorch/xla/blob/master/docs/source/perf/spmd_distributed_checkpoint.md#process-groups. Is there an example script for this?
Can anybody help? I'm still stuck on this.