EfficientTAM #1384
Comments
cc @HDCharles, @drisspg on adding new models
Similar to pytorch#1384 to exercise the server, but for multimodal:
1. Run the server:
   1a. in the background
   1b. capture `server_pid`
2. Query it using curl
3. Shut down the server with the PID captured in `server_pid`
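The steps above could be sketched as a shell script along these lines. Note the actual server entry point is repo-specific; `python3 -m http.server` below is just a placeholder stand-in to show the background/PID/curl/kill flow:

```shell
#!/usr/bin/env bash
set -e

# 1a. start the server in the background
#     (python3 -m http.server is a placeholder; the real multimodal
#      server command depends on the repo)
python3 -m http.server 8000 &

# 1b. capture the server's PID
server_pid=$!

# give the server a moment to bind the port
sleep 1

# 2. query the server with curl
curl -s -o /dev/null http://localhost:8000/

# 3. shut down the server using the captured PID
kill "$server_pid"
```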
cc @cpuhrsch who has been doing a lot of work on the SAM front
Hm, are there particular changes you'd like to see us host @bhack?
We could see how it performs end-to-end, e.g. we could reuse something we have already done for ViT in the ao repo, then evaluate whether we could do something in the cross-attention. I don't know if we have other margins.
Oh I see, hm, maybe that's something we can use in https://github.com/yformer/EfficientTAM directly? For ViT we've been able to use autoquant for performance gains in some cases. cc @jerryzh168
We also have a WIP PR on attention in the AO repo; we could check how it performs on this. Also, you already have some inference tricks in SAM2 that we could reuse, like your recent video predictor or the batching work.
@bhack - Makes sense. I don't think those are easily portable right now, but they should hopefully become more portable soon. I can't give much guidance at the moment (pressed for time), but, of course, please feel free to copy-paste anything you like :)
As we already have SAM2 in the repo, it could be nice to have an example with Meta's new EfficientTAM:
https://github.com/yformer/EfficientTAM