\n","updatedAt":"2023-12-21T14:47:15.460Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":264}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7153509259223938},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2312.09571","authors":[{"_id":"657fd75742fc53e18b88c436","user":{"_id":"6430dcc08dee1e48c234e6e5","avatarUrl":"/avatars/8ffd87c55dd236c78c3154116c98e492.svg","isPro":false,"fullname":"Fei Weizhi","user":"fwz","type":"user"},"name":"Weizhi Fei","status":"admin_assigned","statusLastChangedAt":"2023-12-18T11:15:31.017Z","hidden":false},{"_id":"657fd75742fc53e18b88c437","user":{"_id":"68b65343dd7f21b75891e446","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/68b65343dd7f21b75891e446/g4dtudmiuSBZY63eLFxJ8.jpeg","isPro":false,"fullname":"Xueyan Niu","user":"niuxueyan","type":"user"},"name":"Xueyan Niu","status":"extracted_confirmed","statusLastChangedAt":"2025-09-02T02:27:22.251Z","hidden":false},{"_id":"657fd75742fc53e18b88c438","user":{"_id":"5f55e3e98bf55658acfed1c8","avatarUrl":"/avatars/299d103ca32d62d11c7c800b74cd46e7.svg","isPro":false,"fullname":"zhoupingyi","user":"zhoupingyi","type":"user"},"name":"Pingyi Zhou","status":"admin_assigned","statusLastChangedAt":"2023-12-18T11:15:57.124Z","hidden":false},{"_id":"657fd75742fc53e18b88c439","user":{"_id":"5f86882fee5616341bc51da1","avatarUrl":"/avatars/b0c5883d6253ef67639d1cea591fa337.svg","isPro":false,"fullname":"Lu Hou","user":"houlu369","type":"user"},"name":"Lu Hou","status":"admin_assigned","statusLastChangedAt":"2023-12-18T11:16:03.722Z","hidden":false},{"_id":"657fd75742fc53e18b88c43a","user":{"_id":"6667bdfd215b4c38f7ef17bb","avatarUrl":"/avatars/cad66dd576c3261ec5c276ae00e666d1.svg","isPro":false,"fullname":"Bo Bai","user":"atomistic","type":"user"},"name":"Bo Bai","status":"claimed_verified","statusLastChangedAt":"2024-06-11T06:59:58.884Z","hidden":false},{"_id":"657fd75742fc53e18b88c43b","name":"Lei Deng","hidden":false},{"_id":"657fd75742fc53e18b88c43c","name":"Wei Han","hidden":false}],"publishedAt":"2023-12-15T07:04:33.000Z","submittedOnDailyAt":"2023-12-18T02:53:35.913Z","title":"Extending Context Window of Large Language Models via Semantic\n Compression","submittedOnDailyBy":{"_id":"60f1abe7544c2adfd699860c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg","isPro":false,"fullname":"AK","user":"akhaliq","type":"user"},"summary":"Transformer-based Large Language Models (LLMs) often impose limitations on\nthe length of the text input to ensure the generation of fluent and relevant\nresponses. This constraint restricts their applicability in scenarios involving\nlong texts. We propose a novel semantic compression method that enables\ngeneralization to texts that are 6-8 times longer, without incurring\nsignificant computational costs or requiring fine-tuning. Our proposed\nframework draws inspiration from source coding in information theory and\nemploys a pre-trained model to reduce the semantic redundancy of long inputs\nbefore passing them to the LLMs for downstream tasks. 
Experimental results\ndemonstrate that our method effectively extends the context window of LLMs\nacross a range of tasks including question answering, summarization, few-shot\nlearning, and information retrieval. Furthermore, the proposed semantic\ncompression method exhibits consistent fluency in text generation while\nreducing the associated computational overhead.","upvotes":16,"discussionId":"657fd75742fc53e18b88c451","ai_summary":"A semantic compression method extends the context window of Transformer-based LLMs for longer texts without fine-tuning, maintaining fluency and reducing computational costs.","ai_keywords":["Transformer-based LLMs","semantic compression","source coding","information theory","pre-trained model","semantic redundancy","context window","question answering","summarization","few-shot learning","information retrieval"]},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"64b6cf1f204fe1fe80ce59c2","avatarUrl":"/avatars/f4bf703992b0a8851f4ec7f8741e2c8a.svg","isPro":false,"fullname":"Marasescu Denis","user":"DenisTheDev","type":"user"},{"_id":"6538119803519fddb4a17e10","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6538119803519fddb4a17e10/ffJMkdx-rM7VvLTCM6ri_.jpeg","isPro":false,"fullname":"samusenps","user":"samusenps","type":"user"},{"_id":"65196fd5ee8c603b705024e8","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/ZuDn7hCH2Www-cngSxWKO.jpeg","isPro":false,"fullname":"Joshua Drake","user":"jdrake2","type":"user"},{"_id":"620783f24e28382272337ba4","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/620783f24e28382272337ba4/zkUveQPNiDfYjgGhuFErj.jpeg","isPro":false,"fullname":"GuoLiangTang","user":"Tommy930","type":"user"},{"_id":"64ca7c04710645aa7bdbbfff","avatarUrl":"/avatars/c12f4cb6dc1ff0010edb3ef4cfcccd7c.svg","isPro":false,"fullname":"Lize Pirenne","user":"Inversta","type":"user"},{"_id":"64331b3e6c6ecd587981702a","avatarUrl":"/avatars/edae68979decbd77f1eed88d1b2b659c.svg","isPro":false,"fullname":"Zekun","user":"atom1024","type":"user"},{"_id":"640603e2c3ab325efa94bc4a","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/640603e2c3ab325efa94bc4a/jBLC7JH2dBAkDHYzFXZmr.jpeg","isPro":false,"fullname":"Mohammed Machrouh","user":"medmac01","type":"user"},{"_id":"60c8d264224e250fb0178f77","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/60c8d264224e250fb0178f77/i8fbkBVcoFeJRmkQ9kYAE.png","isPro":true,"fullname":"Adam Lee","user":"Abecid","type":"user"},{"_id":"62d1f18534da11f991086541","avatarUrl":"/avatars/21b5c3c3b6363c07d11eed1466d01502.svg","isPro":false,"fullname":"Rico Pagliuca","user":"ricofix","type":"user"},{"_id":"6032802e1f993496bc14d9e3","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6032802e1f993496bc14d9e3/w6hr-DEQot4VVkoyRIBiy.png","isPro":false,"fullname":"Omar Sanseviero","user":"osanseviero","type":"user"},{"_id":"6549135c196ae037a74e10a3","avatarUrl":"/avatars/86194456844c7b2b5389de36cb258472.svg","isPro":false,"fullname":"Richrich","user":"RichardForests","type":"user"},{"_id":"6430dcc08dee1e48c234e6e5","avatarUrl":"/avatars/8ffd87c55dd236c78c3154116c98e492.svg","isPro":false,"fullname":"Fei Weizhi","user":"fwz","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">Extending Context Window of Large Language Models via Semantic Compression
Abstract
A semantic compression method extends the context window of Transformer-based LLMs for longer texts without fine-tuning, maintaining fluency and reducing computational costs.
Transformer-based Large Language Models (LLMs) often impose limitations on the length of the text input to ensure the generation of fluent and relevant responses. This constraint restricts their applicability in scenarios involving long texts. We propose a novel semantic compression method that enables generalization to texts that are 6-8 times longer, without incurring significant computational costs or requiring fine-tuning. Our proposed framework draws inspiration from source coding in information theory and employs a pre-trained model to reduce the semantic redundancy of long inputs before passing them to the LLMs for downstream tasks. Experimental results demonstrate that our method effectively extends the context window of LLMs across a range of tasks including question answering, summarization, few-shot learning, and information retrieval. Furthermore, the proposed semantic compression method exhibits consistent fluency in text generation while reducing the associated computational overhead.
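To make the pipeline described in the abstract concrete, here is a minimal sketch of the compress-then-prompt idea: split a long input into chunks, shrink each chunk with a pre-trained model, and hand the shortened text to an LLM with a fixed context window. An off-the-shelf Hugging Face summarizer stands in for the paper's compression model; the chunk size, compression ratio, model choice, and the `long_report.txt` input are illustrative assumptions, not the authors' actual configuration.

```python
# Sketch of semantic compression: chunk a long document, summarize each
# chunk with a pre-trained model, and concatenate the summaries so the
# result fits in the LLM's original context window. All parameters below
# are assumptions for illustration, not the paper's exact setup.
from transformers import pipeline

# Any off-the-shelf summarizer can play the role of the pre-trained model
# that removes semantic redundancy. BART accepts ~1024 input tokens, so
# chunks are kept conservatively small.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def compress(text: str, chunk_words: int = 600, ratio: float = 0.15) -> str:
    """Compress `text` to roughly `ratio` of its length, chunk by chunk."""
    words = text.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]
    summaries = []
    for chunk in chunks:
        # Target summary length, floored so very short chunks still work.
        target = max(32, int(len(chunk.split()) * ratio))
        out = summarizer(chunk, max_length=target,
                         min_length=target // 2, do_sample=False)
        summaries.append(out[0]["summary_text"])
    # The concatenated summaries are the compressed input for the LLM.
    return " ".join(summaries)

long_document = open("long_report.txt").read()  # hypothetical long input
prompt = compress(long_document) + "\n\nQuestion: What are the key findings?"
# `prompt` can now be sent to any LLM whose context window the raw
# document would have exceeded.
```

Because compression happens before the LLM sees the text, no fine-tuning or architecture change is needed; the trade-off is that details dropped by the summarizer are unavailable to the downstream task.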
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- Zebra: Extending Context Window with Layerwise Grouped Local-Global Attention (2023)
- CLEX: Continuous Length Extrapolation for Large Language Models (2023)
- TCRA-LLM: Token Compression Retrieval Augmented Large Language Model for Inference Cost Reduction (2023)
- SCCA: Shifted Cross Chunk Attention for long contextual semantic expansion (2023)
- Attention Alignment and Flexible Positional Embeddings Improve Transformer Length Extrapolation (2023)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space.
Models citing this paper: 0
Datasets citing this paper: 0
Spaces citing this paper: 0