import torch
from torch import nn
from rnn import FastGRNNCUDA
x = torch.randn((8192, 4, 512)).cuda()   # (seq_len, batch, input_size)
h0 = torch.zeros((4, 512)).cuda()        # (batch, hidden_size) initial state
gru = nn.GRU(512, 512, batch_first=False).cuda()
grnn = FastGRNNCUDA(512, 512, batch_first=False).cuda()
Timing (with proper CUDA synchronisation) gives a loop time of 0.1 s for the GRU and 0.35 s for the GRNN. Am I doing something wrong? Surely the GRNN should be at least on par with the GRU, since it performs fewer operations.
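For reference, a timing loop with "proper CUDA synchronisation" might look like the sketch below. This is an assumption about the benchmark, not the reporter's actual code: `time_rnn` is a hypothetical helper, and the key point is calling `torch.cuda.synchronize()` before reading the clock, since CUDA kernel launches are asynchronous.

```python
import time
import torch
from torch import nn

def time_rnn(module, x, h0, iters=10):
    """Average forward-pass time of an RNN module (hypothetical helper)."""
    module(x, h0)  # warm-up run so one-time setup cost doesn't skew the numbers
    if x.is_cuda:
        torch.cuda.synchronize()  # drain queued kernels before starting the clock
    start = time.perf_counter()
    for _ in range(iters):
        module(x, h0)
    if x.is_cuda:
        torch.cuda.synchronize()  # wait for all launched kernels to finish
    return (time.perf_counter() - start) / iters
```

Without the synchronize calls, `time.perf_counter()` only measures kernel launch overhead, so the reported times would be meaningless.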
Thanks for the timing numbers. IIRC, we did not optimize FastGRNNCUDA to the level of the built-in GRU, and that shows clearly here. The speedups are obvious when you compare against our own naive implementations of GRU and GRNN.