CHAI: Clustered Head Attention for Efficient LLM Inference

Saurabh Agarwal, Bilge Acun, Basil Hosmer, Mostafa Elhoushi, Yejin Lee, Shivaram Venkataraman, Dimitris Papailiopoulos, Carole-Jean Wu

Paper 2403.08058 · Published March 12, 2024
AI-generated summary

Clustered Head Attention reduces memory and inference time for Large Language Models by compressing correlated attention heads without significant accuracy loss.
Abstract

Large Language Models (LLMs) with hundreds of billions of parameters have transformed the field of machine learning. However, serving these models at inference time is both compute- and memory-intensive: a single request can require multiple GPUs and tens of gigabytes of memory. Multi-Head Attention is one of the key components of LLMs and can account for over 50% of their memory and compute requirements. We observe a high degree of redundancy across heads in which tokens they attend to. Based on this insight, we propose Clustered Head Attention (CHAI). CHAI combines heads with highly correlated self-attention at runtime, reducing both memory and compute. In our experiments, CHAI reduces the memory required for storing the K,V cache by up to 21.4% and inference-time latency by up to 1.73x without any fine-tuning. CHAI achieves this with a maximum 3.2% deviation in accuracy across 3 different models (OPT-66B, LLAMA-7B, LLAMA-33B) and 5 different evaluation datasets.
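To make the idea concrete, below is a minimal PyTorch sketch of runtime head clustering in the spirit of the abstract: heads whose attention distributions are highly similar are grouped, and the attention weights are computed once per cluster and reused by every head in that cluster. This is an illustrative reconstruction, not the authors' implementation; the function names (`cluster_heads_by_attention`, `clustered_attention`), the greedy cosine-similarity grouping, and the use of a single representative head per cluster are assumptions made for the example.

```python
# Sketch of clustered head attention: group heads with correlated attention
# patterns and compute the softmax attention weights once per cluster.
# Illustrative only; not the CHAI reference implementation.
import torch
import torch.nn.functional as F


def cluster_heads_by_attention(attn_probs: torch.Tensor, num_clusters: int) -> list[list[int]]:
    """Greedily group heads whose attention distributions are most similar.

    attn_probs: (num_heads, seq_len) attention weights of each head for one
                probe query, used only to decide the grouping.
    Returns a list of clusters, each a list of head indices.
    """
    num_heads = attn_probs.shape[0]
    # Pairwise cosine similarity between the heads' attention distributions.
    sim = F.cosine_similarity(attn_probs.unsqueeze(1), attn_probs.unsqueeze(0), dim=-1)
    clusters: list[list[int]] = []
    assigned = set()
    for h in range(num_heads):
        if h in assigned:
            continue
        if len(clusters) < num_clusters:
            clusters.append([h])          # seed a new cluster with this head
        else:
            # Attach the head to the cluster whose representative it matches best.
            best = max(range(len(clusters)), key=lambda c: sim[h, clusters[c][0]].item())
            clusters[best].append(h)
        assigned.add(h)
    return clusters


def clustered_attention(q, k, v, clusters):
    """Compute attention weights once per cluster and share them across its heads.

    q, k, v: (num_heads, seq_len, head_dim). In this sketch only the
    representative head's keys and attention weights are needed per cluster,
    which is where K-cache and compute savings could come from; each head
    keeps its own values.
    """
    head_dim = q.shape[-1]
    out = torch.empty_like(v)
    for cluster in clusters:
        rep = cluster[0]  # representative head of the cluster
        scores = q[rep] @ k[rep].transpose(-1, -2) / head_dim ** 0.5
        probs = F.softmax(scores, dim=-1)          # computed once per cluster
        for h in cluster:
            out[h] = probs @ v[h]                  # reused by every member head
    return out


if __name__ == "__main__":
    H, S, D = 8, 16, 64
    q, k, v = (torch.randn(H, S, D) for _ in range(3))
    # Use the attention pattern of the last query position to form the clusters.
    probe = F.softmax(q[:, -1:, :] @ k.transpose(-1, -2) / D ** 0.5, dim=-1).squeeze(1)
    groups = cluster_heads_by_attention(probe, num_clusters=4)
    out = clustered_attention(q, k, v, groups)
    print(groups, out.shape)  # e.g. 4 clusters, output shape (8, 16, 64)
```

Under these assumptions, fewer distinct attention-weight computations and key sets are needed than there are heads, which is the mechanism by which the paper reports K,V-cache and latency reductions; the exact clustering criterion and per-layer cluster counts used by CHAI are described in the paper itself.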