The memory associated with CUFFT plans is not always reclaimed. This was a big problem for CUDA v5.3.4 because plan memory was not consistently reclaimed/reused. On master (as of a2a9b13) the situation is much improved, but there still seems to be a leak of one plan's worth of memory.
Here is the behavior I'm seeing when using master (a2a9b13):
The data array uses 1 GiB of GPU memory. The first plan uses 1 GiB of memory, but it is not reclaimed after the plan is (presumably) GC'd. The second plan does not reuse the first plan's memory, so GPU memory usage goes up to 3 GiB, but this is reclaimed when the second plan is GC'd. The third plan behaves the same as the second plan.
Curiously, p.handle goes back and forth between 1 and 2 with each plan creation. I'm not sure if that's relevant, but I think the desired behavior might be to keep re-using handle 1?
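For reference, the behavior above can be reproduced with a sketch along these lines. This is an assumption on my part, since the original repro script isn't quoted here: the array size, the GC/reclaim sequence, and the use of `CUDA.memory_status()` to observe usage are all illustrative, though `plan_fft`, `GC.gc()`, and `CUDA.reclaim()` are the standard CUDA.jl entry points.

```julia
# Hypothetical repro sketch (not the original script from this issue).
using CUDA
using CUDA.CUFFT  # brings plan_fft for CuArrays via AbstractFFTs

# 2^27 ComplexF32 elements * 8 bytes = 1 GiB of device data
x = CUDA.rand(ComplexF32, 2^27)
CUDA.memory_status()      # baseline: ~1 GiB used by the data array

p = plan_fft(x)           # first plan allocates ~1 GiB of workspace
@show p.handle            # observed: 1
p = nothing
GC.gc(); CUDA.reclaim()   # first plan's workspace is NOT returned here

p = plan_fft(x)           # second plan allocates fresh workspace (~3 GiB total)
@show p.handle            # observed: 2
p = nothing
GC.gc(); CUDA.reclaim()   # second plan's workspace IS reclaimed
CUDA.memory_status()      # still ~2 GiB: data + one leaked plan's worth
```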