Papers
arxiv:2404.16710

LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding

Published on Apr 25, 2024
· Submitted by AK on Apr 26, 2024
#1 Paper of the day
Authors: Mostafa Elhoushi, Akshat Shrivastava, Diana Liskovich, Basil Hosmer, Bram Wasti, Liangzhen Lai, Anas Mahmoud, Bilge Acun, Saurabh Agarwal, Ahmed Roman, Ahmed A Aly, Beidi Chen, Carole-Jean Wu

Abstract

LayerSkip, an end-to-end method, accelerates large language model inference through layer dropout and self-speculative decoding without auxiliary layers.

AI-generated summary

We present LayerSkip, an end-to-end solution to speed up inference of large language models (LLMs). First, during training we apply layer dropout, with low dropout rates for earlier layers and higher dropout rates for later layers, and an early exit loss where all transformer layers share the same exit. Second, during inference, we show that this training recipe increases the accuracy of early exit at earlier layers, without adding any auxiliary layers or modules to the model. Third, we present a novel self-speculative decoding solution where we exit at early layers and verify and correct with the remaining layers of the model. Our proposed self-speculative decoding approach has a smaller memory footprint than other speculative decoding approaches and benefits from shared compute and activations of the draft and verification stages. We run experiments on different Llama model sizes and different types of training: pretraining from scratch, continual pretraining, finetuning on a specific data domain, and finetuning on a specific task. We implement our inference solution and show speedups of up to 2.16x on summarization for CNN/DM documents, 1.82x on coding, and 2.0x on the TOPv2 semantic parsing task. We open source our code and checkpoints at https://github.com/facebookresearch/LayerSkip.
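To make the inference recipe above concrete, here is a minimal, self-contained Python sketch of the self-speculative loop: draft a few tokens by exiting at an early layer, then verify and correct them with the full model. The interfaces draft_logits and full_logits, the draft length, and the greedy acceptance rule are illustrative assumptions rather than the authors' implementation; a real implementation would verify all drafted tokens in a single batched forward pass and reuse the draft stage's early-layer compute and KV cache.

from typing import Callable, List

def self_speculative_decode(
    prompt: List[int],
    draft_logits: Callable[[List[int]], List[float]],  # hypothetical: first E layers + shared LM head
    full_logits: Callable[[List[int]], List[float]],   # hypothetical: all layers + the same LM head
    num_draft: int = 4,
    max_new_tokens: int = 32,
) -> List[int]:
    # Sketch only: verification is written sequentially for clarity, not batched.
    tokens = list(prompt)
    while len(tokens) < len(prompt) + max_new_tokens:
        # 1) Draft: the early-exit sub-model proposes num_draft tokens greedily.
        draft: List[int] = []
        for _ in range(num_draft):
            logits = draft_logits(tokens + draft)
            draft.append(max(range(len(logits)), key=logits.__getitem__))
        # 2) Verify: the full model re-scores each drafted position; keep the longest
        #    prefix it agrees with, then take its correction and stop.
        accepted: List[int] = []
        for tok in draft:
            logits = full_logits(tokens + accepted)
            verified = max(range(len(logits)), key=logits.__getitem__)
            accepted.append(verified)
            if verified != tok:
                break
        tokens.extend(accepted)
    return tokens[: len(prompt) + max_new_tokens]

The speedup and the reduced memory footprint both come from the draft and verification stages sharing one set of weights and the early layers' activations, rather than keeping a separate draft model in memory.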

Community

Wow this is very good

Author here. Thanks for posting. I have created a thread on X to explain the paper: https://twitter.com/m_elhoushi/status/1783800052986655203

Happy to answer any questions

·

Plain English rewrite of the paper here; would love your feedback as an author! https://www.aimodels.fyi/papers/arxiv/layer-skip-enabling-early-exit-inference-self

@julien-c I was wondering if there was a way we could integrate this service into what you guys have built here with /papers? That's what I was going to DM you about.

This is a lot like Mixture-of-Depths.

·

How so? Because of the adaptive computation nature?

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

  • Parallel Decoding via Hidden Transfer for Lossless Large Language Model Acceleration (https://huggingface.co/papers/2404.12022) (2024)
  • Direct Alignment of Draft Model for Speculative Decoding with Chat-Fine-Tuned LLMs (https://huggingface.co/papers/2403.00858) (2024)
  • Lossless Acceleration of Large Language Model via Adaptive N-gram Parallel Decoding (https://huggingface.co/papers/2404.08698) (2024)
  • Accelerating Inference in Large Language Models with a Unified Layer Skipping Strategy (https://huggingface.co/papers/2404.06954) (2024)
  • Dynamic Memory Compression: Retrofitting LLMs for Accelerated Inference (https://huggingface.co/papers/2403.09636) (2024)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Supercharging AI: How LayerSkip Enhances Language Model Speed and Efficiency

Links 🔗:

👉 Subscribe: https://www.youtube.com/@Arxflix
👉 Twitter: https://x.com/arxflix
👉 LMNT (Partner): https://lmnt.com/

By Arxflix

Thanks for sharing a great paper and models! I have a question about the impact of layer dropout. Based on the experimental results in the paper and the mechanism of LayerSkip, which is essentially early exit, layer dropout does not seem to affect performance. I would be curious to see the results in Table 1 or 2 without layer dropout (i.e., LayerSkip-EE).

·
Paper author

Thanks @ryan-u ! We noticed that:

  • Dropout has more effect on pretraining from scratch than on continual pretraining.
  • Dropout can be more effective if we increase it to larger rates like 0.5, but that would require increasing the learning rate and/or training for more steps to compensate for the drop in accuracy at the last layer.

Thank you for the kind comments! I missed the case of pretraining from scratch. Looking forward to the next step :)
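As a rough illustration of the training recipe discussed in this exchange, here is a minimal Python/PyTorch sketch of depth-increasing layer dropout combined with an early exit loss computed through one shared exit head. The linear ramp to a maximum rate such as 0.5, the per-layer loss weights, and all names here are assumptions for illustration, not the exact schedule or code from the paper.

import torch
import torch.nn as nn

def layer_dropout_rates(num_layers: int, p_max: float = 0.5) -> list:
    # Hypothetical schedule: 0 at the first layer, rising linearly to p_max at the last.
    # The paper only requires low rates for early layers and higher rates for late ones.
    return [p_max * i / max(num_layers - 1, 1) for i in range(num_layers)]

def layerskip_training_loss(x, targets, layers, shared_exit, rates, exit_weights):
    # x: hidden states (batch, seq, dim); targets: token ids (batch, seq)
    # layers: transformer blocks; shared_exit: the single LM head reused by every exit.
    ce = nn.CrossEntropyLoss()
    loss = torch.zeros(())
    for layer, p, w in zip(layers, rates, exit_weights):
        if torch.rand(()) < p:           # layer dropout: skip this block entirely
            continue
        x = layer(x)
        logits = shared_exit(x)          # early exit loss at this layer's output
        loss = loss + w * ce(logits.flatten(0, 1), targets.flatten())
    return loss

Setting every dropout rate to zero while keeping the per-layer exit losses corresponds to the dropout-free LayerSkip-EE ablation asked about above.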


Models citing this paper 9


Datasets citing this paper 1

Spaces citing this paper 0


Collections including this paper 22
