arxiv:2205.14135

FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness

Published on May 27, 2022
Authors: Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, Christopher Ré

Abstract

AI-generated summary: FlashAttention is an IO-aware attention algorithm that reduces GPU memory accesses and significantly speeds up Transformer training, enabling better performance on long sequences.

Transformers are slow and memory-hungry on long sequences, since the time and memory complexity of self-attention are quadratic in sequence length. Approximate attention methods have attempted to address this problem by trading off model quality to reduce the compute complexity, but often do not achieve wall-clock speedup. We argue that a missing principle is making attention algorithms IO-aware -- accounting for reads and writes between levels of GPU memory. We propose FlashAttention, an IO-aware exact attention algorithm that uses tiling to reduce the number of memory reads/writes between GPU high bandwidth memory (HBM) and GPU on-chip SRAM. We analyze the IO complexity of FlashAttention, showing that it requires fewer HBM accesses than standard attention, and is optimal for a range of SRAM sizes. We also extend FlashAttention to block-sparse attention, yielding an approximate attention algorithm that is faster than any existing approximate attention method. FlashAttention trains Transformers faster than existing baselines: 15% end-to-end wall-clock speedup on BERT-large (seq. length 512) compared to the MLPerf 1.1 training speed record, 3× speedup on GPT-2 (seq. length 1K), and 2.4× speedup on long-range arena (seq. length 1K-4K). FlashAttention and block-sparse FlashAttention enable longer context in Transformers, yielding higher quality models (0.7 better perplexity on GPT-2 and 6.4 points of lift on long-document classification) and entirely new capabilities: the first Transformers to achieve better-than-chance performance on the Path-X challenge (seq. length 16K, 61.4% accuracy) and Path-256 (seq. length 64K, 63.1% accuracy).
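The tiling idea in the abstract can be illustrated in a few lines: process the keys and values in blocks, keep running softmax statistics (row-wise max and normalizer), and rescale partial outputs as each block arrives so the result exactly matches standard attention. The following is a minimal NumPy sketch of that blockwise exact attention, not the paper's fused CUDA kernel; the function name tiled_attention, the block size, and the test shapes are illustrative assumptions.

import numpy as np

def tiled_attention(Q, K, V, block_size=64):
    # Illustrative blockwise (tiled) exact attention with online softmax.
    # Keys/values are visited one block at a time; running row-wise max (m)
    # and normalizer (l) are rescaled so the output equals standard attention.
    n, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    O = np.zeros((n, d))            # running (unnormalized) output
    m = np.full(n, -np.inf)         # running row-wise max of scores
    l = np.zeros(n)                 # running softmax normalizer

    for start in range(0, n, block_size):
        Kb = K[start:start + block_size]   # one block of keys
        Vb = V[start:start + block_size]   # one block of values
        S = (Q @ Kb.T) * scale             # scores for this block, shape (n, b)

        m_new = np.maximum(m, S.max(axis=1))
        p = np.exp(S - m_new[:, None])     # block softmax numerators
        alpha = np.exp(m - m_new)          # rescale factor for previous stats

        l = alpha * l + p.sum(axis=1)
        O = alpha[:, None] * O + p @ Vb
        m = m_new

    return O / l[:, None]

# Sanity check against standard (non-tiled) attention.
rng = np.random.default_rng(0)
Q = rng.standard_normal((256, 32))
K = rng.standard_normal((256, 32))
V = rng.standard_normal((256, 32))
S = (Q @ K.T) / np.sqrt(32)
ref = np.exp(S - S.max(axis=1, keepdims=True))
ref = (ref / ref.sum(axis=1, keepdims=True)) @ V
assert np.allclose(tiled_attention(Q, K, V), ref, atol=1e-6)

In the actual kernel this loop structure is what keeps each block of scores in on-chip SRAM and avoids materializing the full n-by-n attention matrix in HBM; the NumPy version only demonstrates the numerics.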

Community

FlashAttention: Revolutionizing Transformer Efficiency!

Links 🔗:

👉 Subscribe: https://www.youtube.com/@Arxflix
👉 Twitter: https://x.com/arxflix
👉 LMNT (Partner): https://lmnt.com/

By Arxflix


Models citing this paper 337


Datasets citing this paper 1

Spaces citing this paper 1,940

Collections including this paper 19
