Issues: intel/intel-npu-acceleration-library
Open issues:
#127: Thanks. I think the issue has been fixed in my Ubuntu after recreating the environment. (opened Sep 26, 2024 by slllu1)
#124: [BUG]: wrong argtype definitions when calling the c++ lib from python (opened Sep 19, 2024 by Naplesoul)
#116: Is there going to be support for other distros besides Ubuntu? (opened Aug 16, 2024 by epangelias)
#111: Why Llama series int4 quantization has fp16 attention layers? [label: bug] (opened Jul 30, 2024 by kyang-06)
#110: After passing a certain amount of prompts, llama3 spits out the chat template (opened Jul 27, 2024 by enricorampazzo)
#109: [Llama3.1 8B] Need to pass your input's attention_mask to obtain reliable results. (opened Jul 26, 2024 by ChenYuYeh)
#85: Does this library support Qwen/Qwen2-7B-Instruct? [label: bug] (opened Jul 2, 2024 by qwebug)