Motivation
The motivation for this proposal is to expand Tensordict's device support to include Huawei Ascend NPUs and other third-party devices. Currently, Tensordict only supports CPU and GPU, which limits its applicability in environments where Ascend NPUs are prevalent. This limitation can be frustrating for users who have invested in Ascend NPUs and would like to leverage Tensordict for their machine learning tasks. Supporting more devices would make Tensordict more versatile and accessible to a broader range of users.
Solution
Tensordict should support Huawei Ascend NPU and other third-party devices. This would involve:
Device Compatibility:
Extend Tensordict to recognize and utilize Huawei Ascend NPUs for tensor operations.
Ensure that Tensordict can handle data transfer and computation on Ascend NPUs efficiently.
API Consistency:
Maintain the same API for CPU, GPU, and Ascend NPU operations to ensure a seamless user experience (a usage sketch follows this list).
Performance Optimization:
Optimize the performance of Tensordict operations on Ascend NPUs to match or exceed the performance on GPUs.
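To make the device-compatibility and API-consistency points concrete, here is a minimal sketch of what this could look like from a user's perspective. It assumes Huawei's torch_npu adapter (an out-of-tree PyTorch backend for Ascend NPUs) is installed and registers the "npu" device with PyTorch; the tensordict calls themselves are the existing CPU/GPU API, used unchanged. The availability check (torch.npu.is_available()) follows torch_npu's documented usage and is an assumption about that adapter, not part of tensordict.

```python
# Minimal sketch, assuming the torch_npu adapter is installed.
import torch
from tensordict import TensorDict

# torch_npu is Huawei's out-of-tree PyTorch backend for Ascend NPUs;
# importing it is what makes the "npu" device visible to PyTorch.
import torch_npu  # noqa: F401

# Fall back to CPU when no Ascend device is present.
device = torch.device("npu:0") if torch.npu.is_available() else torch.device("cpu")

# Build a TensorDict on CPU, then move it to the NPU exactly as one would with CUDA.
td = TensorDict(
    {
        "observation": torch.randn(4, 3, 84, 84),
        "action": torch.randint(0, 6, (4,)),
    },
    batch_size=[4],
)
td_npu = td.to(device)          # data transfer: same call as .to("cuda")
assert td_npu.device == device  # the whole structure now lives on the NPU
```

The point of the sketch is that no tensordict-specific NPU API should be needed: as long as PyTorch recognizes the device, constructing a TensorDict and moving it with .to() should mirror the existing CUDA workflow.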
Alternatives
One alternative solution considered is to use a different library that already supports Huawei Ascend NPUs. However, this would require users to switch from Tensordict, which may involve significant changes to their existing workflows and codebases. Another alternative is to wait for Huawei to develop their own tensor library, but this would not address the immediate need for a versatile tensor library that supports multiple devices.
Additional context
Huawei Ascend NPUs are increasingly being adopted in various industries, particularly in China, due to their high performance and efficiency. Supporting these devices would make Tensordict more attractive to a broader audience and enhance its competitiveness in the machine learning ecosystem.
Checklist
I have checked that there is no similar issue in the repo (required)
Thanks for proposing this.
I'm not familiar with what needs to be done to execute a PyTorch model on an NPU.
Would that require a third-party library to be installed?