I create a diffusers pipeline, move it to the XLA device, and assign it to the variable pipe. Next I need to reassign pipe to a different pipeline, so I call pipe.to('cpu') and then pipe = diffusers.pipeline. I see that torch.cuda.empty_cache() can be used to free GPU memory. How do I free TPU memory?
@chaowenguo Speaking from experience: within torch_xla there isn't a direct equivalent to torch.cuda.empty_cache(). However, calling pipe.to('cpu') effectively offloads the pipeline from the XLA device's memory to the CPU. This triggers execution of the XLA graph, which in turn frees the memory the pipeline was using. If you are still running into memory issues, you can also explicitly delete the pipeline object (del pipe) and call xm.mark_step() to finalize the cleanup, but moving to CPU should be enough.