Huggingface models do not work #1031
@MartinMayday Let's see where the problem may be.
Can you give an example of some model you tried that failed? Or try this Latvian model, it definitely works: RaivisDejus/whisper-tiny-lv. Also, Hugging Face models will only work with the Hugging Face whisper type, so select that in the whisper type selection box.
Most models are open and do not require any tokens. If you need to specify a token, see https://huggingface.co/docs/huggingface_hub/quick-start, as Buzz uses huggingface_hub under the hood to download models. The huggingface-cli login command may be what you need, or set the HF_TOKEN environment variable to pass the token to the download scripts.
Downloading models manually is a bit tricky due to the internal structure the download scripts require, but if you get errors downloading, maybe something is broken in the caches. See https://chidiwilliams.github.io/buzz/docs/faq#7-can-i-use-buzz-on-a-computer-without-internet to find the cache folder and maybe delete the old caches.
Buzz supports all OpenAI-compatible APIs, including Groq. See discussion #827 for more information on how to configure Groq. This will also let you connect Buzz to any local server that supports the OpenAI API format.
If you have more questions or something is still confusing, let me know, we will figure this out. Even if it all works, I would love to hear about sections that are not easy to understand; maybe we can find some areas of the documentation to improve.
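To make the token point above concrete, here is a minimal sketch. The token value is a placeholder; huggingface-cli login is an equivalent alternative to the environment variable.

```python
# Passing a Hugging Face token to the download scripts via the
# HF_TOKEN environment variable. "hf_example_token" is a placeholder;
# create a real token at https://huggingface.co/settings/tokens
import os

os.environ["HF_TOKEN"] = "hf_example_token"

# With the variable set, huggingface_hub (which Buzz uses under the
# hood) picks the token up automatically, e.g.:
#   from huggingface_hub import snapshot_download
#   snapshot_download("RaivisDejus/whisper-tiny-lv")
```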
I just want to have access to a local (fast) model via HTTP,
or alternatively through the Buzz app with folder watch.
Hardware:
Intel 6-core, 32 GB RAM
Radeon RX Vega 56, 8 GB VRAM
Test results (used 84 min audio):
distil-whisper/distil-large-v3 = 42 min
whisper large-v3-turbo = 66 min
RaivisDejus/whisper-tiny-lv = 51 min
Models tried to get working (FAILED):
nyrahealth/CrisperWhisper (Gated repos - status: ACCEPTED)
Systran/faster-whisper-large-v3
deepdml/whisper-large-v3-turbo
deepdml/faster-whisper-large-v3-turbo-ct2
Brainiac77/whisper-base-bn-s2t
pedrogarcias/whisper-small-v2-jax
distil-whisper/distil-large-v3-ct2
What is working:
RaivisDejus/whisper-tiny-lv
distil-whisper/distil-large-v3
Other things tried (FAILED):
https://github.com/SYSTRAN/faster-whisper
https://github.com/sanchit-gandhi/whisper-jax.git
https://github.com/Vaibhavs10/insanely-fast-whisper.git
https://github.com/shashikg/WhisperS2T.git
https://github.com/m-bain/whisperX.git
I am sure if I knew how to use Python and Docker I could figure it out better.
Other things tried (working):
https://github.com/ahmetoner/whisper-asr-webservice.git
Thanks in advance
P.S. I will look into whether I can get the CLI working and try to clear the cache.
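For the cache-clearing step, a sketch assuming the default Hugging Face cache location; Buzz's own cache may live elsewhere, as the FAQ link above explains:

```python
# Locate the default huggingface_hub cache so old model downloads can
# be inspected (or removed) before retrying a failed download.
import os
from pathlib import Path

# HF_HOME overrides the default base directory of ~/.cache/huggingface
hf_home = Path(os.environ.get("HF_HOME", Path.home() / ".cache" / "huggingface"))
hf_cache = hf_home / "hub"

# List cached models before deleting anything by hand.
if hf_cache.is_dir():
    for entry in sorted(hf_cache.iterdir()):
        print(entry.name)  # e.g. models--RaivisDejus--whisper-tiny-lv
else:
    print(f"No Hugging Face cache found at {hf_cache}")
```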
I really had high hopes of getting CrisperWhisper working after seeing it on the Hugging Face transcription leaderboard, but it's gated, and without any option to add a username and token I can't figure out how to work around this.
It would be awesome if I could just do something like ***@***.***/CrisperWhisper"
Attaching an image to illustrate:
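There is no username-in-the-model-ID syntax as far as I know, but once the access request on the model page is accepted, the token can be passed explicitly. A sketch, assuming the huggingface_hub package is installed (whether Buzz then picks up the cached copy is a separate question, per the note above about manual downloads):

```python
# Sketch of a token-authenticated download of a gated repo such as
# nyrahealth/CrisperWhisper. Requires network access and an accepted
# access request; the import is deferred so the definition itself has
# no dependencies.
def fetch_gated_model(repo_id: str, token: str) -> str:
    """Download a gated repo into the local cache and return its path."""
    from huggingface_hub import snapshot_download
    return snapshot_download(repo_id, token=token)

# Usage (the token is a placeholder):
#   fetch_gated_model("nyrahealth/CrisperWhisper", "hf_your_token_here")
```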
Note on other whisper types:
Support for the Turbo models is built into Buzz; you can select them in the dropdown.
I wish to use other, faster models than the default whisper large-v3-turbo.
I've tried downloading 50+ Hugging Face models, but the app crashes or fails to download them.
Is there a way we can either download with our Hugging Face token or bypass the Hugging Face security restriction?
Maybe downloading manually and loading manually into the app?
Alternatively, is there a way to add the Groq distil-whisper API to the HTTP section instead of the OpenAI API?
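Groq exposes an OpenAI-compatible endpoint, so any client that accepts a custom base URL can be pointed at it. The base URL and model name below are assumptions taken from Groq's public docs; verify them there before relying on this:

```python
# Sketch of configuring an OpenAI-compatible client against Groq.
config = {
    "base_url": "https://api.groq.com/openai/v1",
    "api_key": "gsk_your_groq_key",          # placeholder
    "model": "distil-whisper-large-v3-en",   # Groq-hosted distil-whisper
}

# With the official openai package, the transcription call would be:
#   from openai import OpenAI
#   client = OpenAI(base_url=config["base_url"], api_key=config["api_key"])
#   with open("audio.mp3", "rb") as f:
#       result = client.audio.transcriptions.create(
#           model=config["model"], file=f)
print(config["base_url"])
```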
Many questions; I would love any feedback (I am not a coder, so please be gentle).
P.S. Another question: is there a way to connect to the app via local HTTP, similar to whisper-asr-webservice?
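For reference, calling a locally running whisper-asr-webservice instance over plain HTTP can be sketched like this. The default port (9000), the /asr endpoint, and the "audio_file" form field are taken from that project's README; verify them against the version you run:

```python
# Minimal stdlib-only multipart POST to whisper-asr-webservice.
import urllib.request
import uuid

def transcribe(audio_path: str, base_url: str = "http://localhost:9000") -> str:
    """POST an audio file to whisper-asr-webservice and return the text."""
    boundary = uuid.uuid4().hex
    with open(audio_path, "rb") as f:
        audio = f.read()
    # Build the multipart/form-data body by hand.
    body = (
        (
            f"--{boundary}\r\n"
            f'Content-Disposition: form-data; name="audio_file"; '
            f'filename="{audio_path}"\r\n'
            "Content-Type: application/octet-stream\r\n\r\n"
        ).encode()
        + audio
        + f"\r\n--{boundary}--\r\n".encode()
    )
    request = urllib.request.Request(
        f"{base_url}/asr?task=transcribe&output=txt",
        data=body,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    )
    with urllib.request.urlopen(request) as response:
        return response.read().decode()

# Usage (requires a running server):
#   print(transcribe("audio.mp3"))
```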
P.P.S. I love the folder watch feature, and that's why I use the app, but my computer is too slow to get an optimal workflow using the default whisper large-v3-turbo model.
Best regards