https://github.com/Alpha-VLLM/Lumina-Video
This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* [DiTCtrl: Exploring Attention Control in Multi-Modal Diffusion Transformer for Tuning-Free Multi-Prompt Longer Video Generation](https://huggingface.co/papers/2412.18597) (2024)
* [TransPixeler: Advancing Text-to-Video Generation with Transparency](https://huggingface.co/papers/2501.03006) (2025)
* [VideoJAM: Joint Appearance-Motion Representations for Enhanced Motion Generation in Video Models](https://huggingface.co/papers/2502.02492) (2025)
* [CascadeV: An Implementation of Wurstchen Architecture for Video Generation](https://huggingface.co/papers/2501.16612) (2025)
* [RelightVid: Temporal-Consistent Diffusion Model for Video Relighting](https://huggingface.co/papers/2501.16330) (2025)
* [Magic Mirror: ID-Preserved Video Generation in Video Diffusion Transformers](https://huggingface.co/papers/2501.03931) (2025)
* [Ingredients: Blending Custom Photos with Video Diffusion Transformers](https://huggingface.co/papers/2501.01790) (2025)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space.

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`
Li","hidden":false},{"_id":"67aae76c71a9983f50e13501","name":"Peng Gao","hidden":false}],"publishedAt":"2025-02-10T18:58:11.000Z","submittedOnDailyAt":"2025-02-11T03:30:25.383Z","title":"Lumina-Video: Efficient and Flexible Video Generation with Multi-scale\n Next-DiT","submittedOnDailyBy":{"_id":"60f1abe7544c2adfd699860c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg","isPro":false,"fullname":"AK","user":"akhaliq","type":"user"},"summary":"Recent advancements have established Diffusion Transformers (DiTs) as a\ndominant framework in generative modeling. Building on this success,\nLumina-Next achieves exceptional performance in the generation of\nphotorealistic images with Next-DiT. However, its potential for video\ngeneration remains largely untapped, with significant challenges in modeling\nthe spatiotemporal complexity inherent to video data. To address this, we\nintroduce Lumina-Video, a framework that leverages the strengths of Next-DiT\nwhile introducing tailored solutions for video synthesis. Lumina-Video\nincorporates a Multi-scale Next-DiT architecture, which jointly learns multiple\npatchifications to enhance both efficiency and flexibility. By incorporating\nthe motion score as an explicit condition, Lumina-Video also enables direct\ncontrol of generated videos' dynamic degree. Combined with a progressive\ntraining scheme with increasingly higher resolution and FPS, and a multi-source\ntraining scheme with mixed natural and synthetic data, Lumina-Video achieves\nremarkable aesthetic quality and motion smoothness at high training and\ninference efficiency. We additionally propose Lumina-V2A, a video-to-audio\nmodel based on Next-DiT, to create synchronized sounds for generated videos.\nCodes are released at https://www.github.com/Alpha-VLLM/Lumina-Video.","upvotes":14,"discussionId":"67aae76e71a9983f50e1357d","ai_summary":"Lumina-Video enhances video generation by combining Multi-scale Next-DiT with motion scoring and multi-source training, achieving high quality and efficiency.","ai_keywords":["Diffusion Transformers","DiTs","Next-DiT","Lumina-Next","Lumina-Video","Multi-scale Next-DiT","patchifications","motion score","progressive training","FPS","multi-source training","Lumina-V2A","video-to-audio model"]},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"6285a9133ab6642179158944","avatarUrl":"/avatars/6e10fa07c94141fcdbe0cab02bb731ca.svg","isPro":false,"fullname":"Zhen Li","user":"Paper99","type":"user"},{"_id":"67756c9c846a267749304255","avatarUrl":"/avatars/01f09805b561887c55d1b9ad4e96b461.svg","isPro":false,"fullname":"Jingfeng Yao","user":"MapleF9","type":"user"},{"_id":"66f612b934b8ac9ffa44f084","avatarUrl":"/avatars/6836c122e19c66c90f1673f28b30d7f0.svg","isPro":false,"fullname":"Tang","user":"tommysally","type":"user"},{"_id":"648eb1eb59c4e5c87dc116e0","avatarUrl":"/avatars/c636cea39c2c0937f01398c94ead5dad.svg","isPro":false,"fullname":"fdsqefsgergd","user":"T-representer","type":"user"},{"_id":"650c8bfb3d3542884da1a845","avatarUrl":"/avatars/863a5deebf2ac6d4faedc4dd368e0561.svg","isPro":false,"fullname":"Adhurim ","user":"Limi07","type":"user"},{"_id":"656ee8008bb9f4f8d95bd8f7","avatarUrl":"/avatars/4069d70f1279d928da521211c495d638.svg","isPro":false,"fullname":"Hyeonho 
Jeong","user":"hyeonho-jeong-video","type":"user"},{"_id":"65d64a3f7f4ce81310cd74ff","avatarUrl":"/avatars/8f99a4c67d25273bb269fa0e5f46192e.svg","isPro":false,"fullname":"Alan Wang","user":"kaiw7","type":"user"},{"_id":"6270324ebecab9e2dcf245de","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6270324ebecab9e2dcf245de/cMbtWSasyNlYc9hvsEEzt.jpeg","isPro":false,"fullname":"Kye Gomez","user":"kye","type":"user"},{"_id":"6342796a0875f2c99cfd313b","avatarUrl":"/avatars/98575092404c4197b20c929a6499a015.svg","isPro":false,"fullname":"Yuseung \"Phillip\" Lee","user":"phillipinseoul","type":"user"},{"_id":"64a25268cd362b2c08c99997","avatarUrl":"/avatars/68ca4abcd35d818b7094b7f5df1822ce.svg","isPro":false,"fullname":"gaopeng","user":"gaopengcuhk","type":"user"},{"_id":"641b754d1911d3be6745cce9","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/641b754d1911d3be6745cce9/DxjZG1XT4H3ZHF7qHxWxk.jpeg","isPro":true,"fullname":"atayloraerospace","user":"Taylor658","type":"user"},{"_id":"63bd06a8141c7d395c54fe03","avatarUrl":"/avatars/a7129200d839e0659d25bdc873ee09ef.svg","isPro":false,"fullname":"insu kim","user":"neorinse","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
AI-generated summary

Lumina-Video enhances video generation by combining Multi-scale Next-DiT with motion scoring and multi-source training, achieving high quality and efficiency.

Abstract
Recent advancements have established Diffusion Transformers (DiTs) as a
dominant framework in generative modeling. Building on this success,
Lumina-Next achieves exceptional performance in the generation of
photorealistic images with Next-DiT. However, its potential for video
generation remains largely untapped, with significant challenges in modeling
the spatiotemporal complexity inherent to video data. To address this, we
introduce Lumina-Video, a framework that leverages the strengths of Next-DiT
while introducing tailored solutions for video synthesis. Lumina-Video
incorporates a Multi-scale Next-DiT architecture, which jointly learns multiple
patchifications to enhance both efficiency and flexibility. By incorporating
the motion score as an explicit condition, Lumina-Video also enables direct
control of generated videos' dynamic degree. Combined with a progressive
training scheme with increasingly higher resolution and FPS, and a multi-source
training scheme with mixed natural and synthetic data, Lumina-Video achieves
remarkable aesthetic quality and motion smoothness at high training and
inference efficiency. We additionally propose Lumina-V2A, a video-to-audio
model based on Next-DiT, to create synchronized sounds for generated videos.
Code is released at https://www.github.com/Alpha-VLLM/Lumina-Video.
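
The abstract's central architectural idea is that one backbone jointly learns several patchifications, trading token count against spatial-temporal detail. The PyTorch sketch below illustrates that general pattern; the patch sizes, latent channel count, and module layout are illustrative assumptions, not Lumina-Video's released implementation.

```python
import torch
import torch.nn as nn

class MultiScalePatchify(nn.Module):
    """Minimal sketch of multi-scale patchification for a video DiT.

    One patch embedding per scale; the transformer backbone that consumes
    the tokens (not shown) would be shared across all scales. The patch
    sizes and dimensions here are assumptions for illustration only.
    """

    def __init__(self, in_channels=16, dim=1024,
                 patch_sizes=((1, 2, 2), (2, 4, 4))):
        super().__init__()
        # Each Conv3d splits the latent video into (t, h, w) patches and
        # projects them to the model dimension in a single step.
        self.embed = nn.ModuleList(
            nn.Conv3d(in_channels, dim, kernel_size=p, stride=p)
            for p in patch_sizes
        )

    def forward(self, x, scale_idx):
        # x: (B, C, T, H, W) latent video. Larger patches yield fewer
        # tokens (cheaper attention); smaller patches preserve detail.
        tokens = self.embed[scale_idx](x)          # (B, dim, T', H', W')
        return tokens.flatten(2).transpose(1, 2)   # (B, N, dim)
```

Coarse scales let most of training and sampling run over far fewer tokens, while the finest scale recovers detail, which is how joint patchification buys both efficiency and flexibility.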
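For the motion-score condition, a plausible reading is that a per-video scalar (e.g., derived from optical-flow magnitude) is embedded like the diffusion timestep and folded into the global conditioning vector. The sketch below assumes exactly that pathway; the actual conditioning mechanism in Lumina-Video may differ.

```python
import math
import torch
import torch.nn as nn

def sinusoidal_embedding(x, dim):
    # Standard sinusoidal embedding of a scalar per batch element.
    # x: (B,) float tensor; dim is assumed even.
    half = dim // 2
    freqs = torch.exp(
        -math.log(10000.0) * torch.arange(half, dtype=torch.float32) / half
    )
    args = x.float()[:, None] * freqs[None]
    return torch.cat([torch.cos(args), torch.sin(args)], dim=-1)

class MotionConditioner(nn.Module):
    """Sketch: inject a scalar motion score into the DiT's conditioning,
    alongside the timestep embedding. Layer sizes are illustrative."""

    def __init__(self, dim=1024):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, dim)
        )

    def forward(self, t_emb, motion_score):
        # t_emb: (B, dim) timestep embedding; motion_score: (B,) scalar
        # condition. Raising the score at inference asks for more motion.
        m_emb = self.mlp(sinusoidal_embedding(motion_score, t_emb.shape[-1]))
        return t_emb + m_emb  # consumed by adaLN-style blocks downstream
```

Because the score is an explicit input rather than a learned bias, users can dial the dynamic degree of a generation up or down at sampling time without retraining.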
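The progressive and multi-source training schemes can be pictured as a staged loop over increasingly demanding data. Everything in the sketch below, including the stage resolutions, FPS values, step counts, and natural/synthetic mixing ratios, is a hypothetical placeholder rather than the paper's released recipe.

```python
import random

# Hypothetical progressive schedule: all values are placeholders.
STAGES = [
    {"resolution": 256, "fps": 8,  "steps": 100_000, "p_synthetic": 0.2},
    {"resolution": 512, "fps": 16, "steps": 50_000,  "p_synthetic": 0.3},
    {"resolution": 960, "fps": 24, "steps": 20_000,  "p_synthetic": 0.4},
]

def sample_batch(natural_loader, synthetic_loader, p_synthetic):
    # Multi-source mixing: each batch is drawn from natural or
    # synthetic data with a stage-dependent probability.
    src = synthetic_loader if random.random() < p_synthetic else natural_loader
    return next(src)

def train_progressively(model, make_loaders, train_step):
    # make_loaders and train_step are assumed user-supplied callables.
    for stage in STAGES:
        natural, synthetic = make_loaders(stage["resolution"], stage["fps"])
        for _ in range(stage["steps"]):
            batch = sample_batch(natural, synthetic, stage["p_synthetic"])
            train_step(model, batch)
```

Spending most steps at low resolution and FPS keeps training cheap, while the short high-resolution stages and the mixed natural/synthetic data account for the reported aesthetic quality and motion smoothness at modest cost.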