Performance of the model on mobile/browser devices #46
@10dimensions how were you able to deploy this on something other than the TX2? How do you recompile the models for other platforms? I'd love to hear how this is done, as I'm currently failing at it.
Hi @martinjuhasz, I have set up the model on my PC without a TX2. Are you still interested in knowing more about it?
Yes. I did try converting the .pth model to ONNX, and also to a tfjs graph (+ bins). With tensorflow.js (Node runtime), I was able to compile and run it on the CLI, but the fps was still very low, around 0.5 fps, as opposed to the expected 20+ fps.
@10dimensions thanks for the info. @niharsalunke yeah, still interested!
@10dimensions can you share the onnx model?
@dpredie Nope, I don't have it at the moment. But I guess PyTorch has converters built in.
I tried converting the .pt (Torch) model to both .onnx and tfjs formats, in order to deploy them in the browser as well as on a Node server (CPU).
The inference times average around 1500-1700 ms per frame.
Meanwhile, I found an iOS example at fastdepth.github.io that averages an excellent 40 fps.
Am I missing anything in my browser/CPU implementations? Is there any additional processing to be done?
Thanks
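When comparing numbers like 1500-1700 ms against 40 fps, it helps to time the bare session call in isolation (excluding pre/post-processing and warm-up). A minimal sketch of such a latency harness; `run_inference` here is a dummy stand-in (an assumption), so swap in the real ONNX Runtime or tfjs session call:

```python
# Minimal latency harness: averages wall-clock time per inference call,
# discarding warm-up iterations (first calls often pay one-time JIT/alloc cost).
import time

def benchmark(run_inference, warmup=3, iters=10):
    for _ in range(warmup):              # warm-up runs, excluded from timing
        run_inference()
    start = time.perf_counter()
    for _ in range(iters):
        run_inference()
    ms = (time.perf_counter() - start) * 1000 / iters
    return ms, 1000.0 / ms               # avg ms/frame and equivalent fps

def run_inference():                     # dummy 5 ms workload, not a real model
    time.sleep(0.005)

ms, fps = benchmark(run_inference)
print(f"{ms:.1f} ms/frame = {fps:.1f} fps")
```

If the timed session call alone is already ~1500 ms, the gap to the iOS demo likely comes from the runtime/backend (plain CPU vs. a GPU- or NN-accelerated backend) rather than from the glue code around it.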