
How to use low-bit KV Cache in flashinfer? #125

Open
zhaoyang-star opened this issue Feb 18, 2024 · 7 comments
Labels: enhancement (New feature or request)

Comments

@zhaoyang-star

From the blog I noticed that FlashInfer implements low-precision attention kernels, so we can achieve nearly linear speedup with respect to the compression ratio (~4x for 4-bit, ~2x for 8-bit). This feature is great and I'd like to try it, but there is no demo or toy code showing how to use it. Could you please share more details about it?
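The near-linear speedup comes from single-token decode attention being memory-bandwidth bound, so kernel time scales roughly with the number of KV-cache bytes read per step. A rough back-of-the-envelope sketch with illustrative Llama-2-7B-like shapes (not FlashInfer code; the numbers are assumptions):

```python
# Illustrative arithmetic only; the model shapes below are assumptions.
num_layers, num_kv_heads, head_dim, seq_len = 32, 32, 128, 4096  # Llama-2-7B-like

def kv_cache_bytes(bits_per_element: int) -> int:
    # K and V caches across all layers for one sequence (quantization scale
    # factors are ignored, which is why the speedup is only *nearly* linear).
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * bits_per_element // 8

fp16, int8, int4 = kv_cache_bytes(16), kv_cache_bytes(8), kv_cache_bytes(4)
print(f"fp16: {fp16 / 2**30:.2f} GiB")
print(f"int8: {int8 / 2**30:.2f} GiB (~{fp16 / int8:.0f}x less memory traffic)")
print(f"int4: {int4 / 2**30:.2f} GiB (~{fp16 / int4:.0f}x less memory traffic)")
```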

yzh119 (Collaborator) commented Feb 18, 2024

I haven't exposed the low-bit KV cache in the PyTorch APIs yet (it is available through the C++ APIs); I'll do it tomorrow :)

yzh119 added the enhancement (New feature or request) label on Feb 18, 2024
zhaoyang-star (Author) commented Feb 18, 2024

Glad to hear that! I can't wait to try it out. I think quantizing the KV cache from float16/bfloat16 to 4 bits will require calibration. It would be better if the feature were released with a demo and benchmark results (latency, throughput, or accuracy).
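For reference, a minimal sketch of the kind of calibration step described here, assuming simple per-channel absmax statistics and symmetric int4 quantization; the shapes, granularity, and helper names are illustrative, not FlashInfer or Atom code:

```python
import torch

@torch.no_grad()
def calibrate_scales(kv_samples: torch.Tensor) -> torch.Tensor:
    # kv_samples: [num_tokens, num_heads, head_dim] K or V activations
    # gathered from a small calibration set.
    absmax = kv_samples.abs().amax(dim=0)      # one statistic per (head, channel)
    return (absmax / 7.0).clamp(min=1e-8)      # symmetric int4 range is [-8, 7]

@torch.no_grad()
def quantize_int4(kv: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
    # Stored as int8 here for clarity; a real kernel would pack two values per byte.
    return torch.round(kv / scales).clamp_(-8, 7).to(torch.int8)

@torch.no_grad()
def dequantize(q: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
    return q.to(torch.float16) * scales.to(torch.float16)

calib = torch.randn(1024, 32, 128)             # fake calibration activations
scales = calibrate_scales(calib)
k_new = torch.randn(1, 32, 128)                # a new K vector to cache
k_hat = dequantize(quantize_int4(k_new, scales), scales)
```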

BTW, someone is already trying to port FlashInfer to vLLM (see #2772) to speed up the decode phase. I also ported FlashAttention to vLLM (see #2744) and plan to benchmark FlashAttention and FlashInfer within the vLLM framework.

yzh119 (Collaborator) commented Feb 18, 2024

Thanks for letting me know, it's interesting to see that FlashAttention has started supporting paged KV cache.

yzh119 (Collaborator) commented Feb 18, 2024

> It would be better if the feature were released with a demo and benchmark results (latency, throughput, or accuracy).

You can check our manuscript: Atom: Low-bit Quantization for Efficient and Accurate LLM Serving.

yzh119 (Collaborator) commented Mar 5, 2024

PyTorch APIs for fp8 kv-cache are exposed in #156 .
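A hedged sketch of what a single-request decode call with an fp8 KV cache might look like: `flashinfer.single_decode_with_kv_cache` is part of FlashInfer's Python API, but whether it accepts `torch.float8_e4m3fn` tensors directly, and whether extra scale arguments are required, are assumptions here; see #156 for the actual interface.

```python
import torch
import flashinfer

num_qo_heads, num_kv_heads, head_dim, kv_len = 32, 8, 128, 2048  # illustrative shapes

q = torch.randn(num_qo_heads, head_dim, dtype=torch.float16, device="cuda")
k = torch.randn(kv_len, num_kv_heads, head_dim, dtype=torch.float16, device="cuda")
v = torch.randn(kv_len, num_kv_heads, head_dim, dtype=torch.float16, device="cuda")

# Cast the cache to fp8 (e4m3) to roughly halve its footprint and memory traffic.
k_fp8 = k.to(torch.float8_e4m3fn)
v_fp8 = v.to(torch.float8_e4m3fn)

# Fused dequantize + attention: the kernel reads fp8 K/V and computes in higher precision.
o = flashinfer.single_decode_with_kv_cache(q, k_fp8, v_fp8)  # assumed to accept fp8 K/V
```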

I'm finalizing the int4/int8 fused-dequant attention kernels with some optimizations such as fast int4/int8-to-float16 conversions. I expect to merge these changes by this Thursday.
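On the "fast int4/int8-to-float16 conversions": one widely used trick splices a small unsigned integer into the mantissa bits of the fp16 constant 1024.0 (bit pattern 0x6400) and then subtracts 1024.0, avoiding the regular integer-to-float conversion path. Whether FlashInfer's kernels use exactly this scheme is an assumption; a CPU-side PyTorch illustration of the bit trick:

```python
import torch

def fast_uint_to_fp16(u: torch.Tensor) -> torch.Tensor:
    # u: integer values in [0, 1024) stored as int16.
    bits = u | 0x6400                          # 0x6400 is the fp16 bit pattern of 1024.0
    return bits.view(torch.float16) - 1024.0   # reinterpret bits as fp16, then remove the bias

u = torch.arange(16, dtype=torch.int16)        # e.g. unpacked 4-bit values 0..15
print(fast_uint_to_fp16(u))                    # tensor([0., 1., ..., 15.], dtype=torch.float16)
```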

zhyncs (Member) commented Mar 28, 2024

> PyTorch APIs for fp8 kv-cache are exposed in #156 .
>
> I'm finalizing the int4/int8 fused-dequant attention kernels with some optimizations such as fast int4/int8-to-float16 conversions. I expect to merge these changes by this Thursday.

Hi @yzh119, as mentioned in https://flashinfer.ai/2024/02/02/introduce-flashinfer.html:

> Our next release will include the 4-bit fused dequantize+attention operators proposed in Atom and LoRA operators used in Punica.

When is Atom quantization expected to be fully integrated into FlashInfer? Is there a detailed timeline available? Thanks.

@SherrySwift

Hi, is there any plan to integrate the 4-bit fused dequantize+attention operators proposed in Atom into FlashInfer? Looking forward to this new feature.
