arxiv:2303.09556

Efficient Diffusion Training via Min-SNR Weighting Strategy

Published on Mar 16, 2023
Authors: Tiankai Hang, Shuyang Gu, Chen Li, Jianmin Bao, Dong Chen, Han Hu, Xin Geng, Baining Guo

Abstract

A new method, Min-SNR-$\gamma$, improves convergence speed and performance of denoising diffusion models by adapting loss weights based on signal-to-noise ratios.

AI-generated summary

Denoising diffusion models have been a mainstream approach for image generation; however, training these models often suffers from slow convergence. In this paper, we discovered that the slow convergence is partly due to conflicting optimization directions between timesteps. To address this issue, we treat diffusion training as a multi-task learning problem and introduce a simple yet effective approach referred to as Min-SNR-$\gamma$. This method adapts the loss weights of timesteps based on clamped signal-to-noise ratios, which effectively balances the conflicts among timesteps. Our results demonstrate a significant improvement in convergence speed, 3.4× faster than previous weighting strategies. It is also more effective, achieving a new record FID score of 2.06 on the ImageNet 256×256 benchmark using smaller architectures than those employed in previous state-of-the-art methods. The code is available at https://github.com/TiankaiHang/Min-SNR-Diffusion-Training.
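To make the weighting concrete, below is a minimal PyTorch sketch of how a Min-SNR-$\gamma$ loss weight could be computed for an ε-prediction objective, assuming a standard DDPM noise schedule stored as the cumulative products $\bar{\alpha}_t$; the function name and training-loop variables are illustrative and not taken from the authors' repository.

```python
# Minimal sketch of Min-SNR-gamma loss weighting (assumes an epsilon-prediction
# objective and a standard DDPM schedule); names here are illustrative only.
import torch

def min_snr_weights(alphas_cumprod: torch.Tensor,
                    timesteps: torch.Tensor,
                    gamma: float = 5.0) -> torch.Tensor:
    """Return per-sample weights min(SNR(t), gamma) / SNR(t)."""
    alpha_bar = alphas_cumprod[timesteps]        # \bar{alpha}_t for each sampled t
    snr = alpha_bar / (1.0 - alpha_bar)          # SNR(t) = \bar{alpha}_t / (1 - \bar{alpha}_t)
    return torch.clamp(snr, max=gamma) / snr     # clamp high-SNR (low-noise) timesteps

# Example use inside a training step (model, x_t, noise are placeholders):
# eps_pred = model(x_t, timesteps)
# per_sample_mse = ((eps_pred - noise) ** 2).mean(dim=(1, 2, 3))
# loss = (min_snr_weights(alphas_cumprod, timesteps) * per_sample_mse).mean()
```

With the clamp (the paper uses γ = 5 by default), low-noise timesteps with very large SNR have their weight pushed down while noisier timesteps keep weights close to 1, which is what rebalances the conflicting gradients across timesteps.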

Community

Accelerating Diffusion Training: The Min-SNR Weighting Strategy

Links 🔗:

👉 Subscribe: https://www.youtube.com/@Arxflix
👉 Twitter: https://x.com/arxflix
👉 LMNT (Partner): https://lmnt.com/

By Arxflix


Models citing this paper 5


Datasets citing this paper 0

No dataset links this paper yet.

Cite arxiv.org/abs/2303.09556 in a dataset README.md to link it from this page.

Spaces citing this paper 12

Collections including this paper 1
