arxiv:2505.16933

LLaDA-V: Large Language Diffusion Models with Visual Instruction Tuning

Published on May 22
· Submitted by Zebin You on May 23
Authors:
Zebin You, Shen Nie, Xiaolu Zhang, Jun Hu, Jun Zhou, Zhiwu Lu, Ji-Rong Wen, Chongxuan Li
Abstract

LLaDA-V, a purely diffusion-based Multimodal Large Language Model (MLLM) with integrated visual instruction tuning, performs competitively on multimodal tasks and outperforms existing hybrid autoregressive-diffusion and purely diffusion-based MLLMs in multimodal understanding.

AI-generated summary

In this work, we introduce LLaDA-V, a purely diffusion-based Multimodal Large Language Model (MLLM) that integrates visual instruction tuning with masked diffusion models, representing a departure from the autoregressive paradigms dominant in current multimodal approaches. Built upon LLaDA, a representative large language diffusion model, LLaDA-V incorporates a vision encoder and an MLP connector that projects visual features into the language embedding space, enabling effective multimodal alignment. Our empirical investigation reveals several intriguing results: First, LLaDA-V demonstrates promising multimodal performance despite its language model being weaker on purely textual tasks than counterparts like LLaMA3-8B and Qwen2-7B. When trained on the same instruction data, LLaDA-V is highly competitive with LLaMA3-V across multimodal tasks and shows better data scalability. It also narrows the performance gap with Qwen2-VL, suggesting the effectiveness of its architecture for multimodal tasks. Second, LLaDA-V achieves state-of-the-art performance in multimodal understanding compared to existing hybrid autoregressive-diffusion and purely diffusion-based MLLMs. Our findings suggest that large language diffusion models show promise in multimodal contexts and warrant further investigation in future research. Project page and code: https://ml-gsai.github.io/LLaDA-V-demo/.
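
For intuition only, here is a minimal PyTorch sketch of the pipeline the abstract describes: a vision encoder produces patch features, an MLP connector projects them into the language embedding space, and a bidirectional transformer is trained with a masked-diffusion objective over the text response. The module names, dimensions, the toy encoder, and the exact loss reweighting are illustrative assumptions, not the authors' implementation; see the linked repository for the official code.

```python
# Hedged sketch (not the authors' code): a vision encoder + MLP connector feeding
# image features into a masked-diffusion language model. All names, sizes, and the
# toy encoder are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB_SIZE, MASK_ID, D_MODEL = 32000, 31999, 512  # assumed sizes

class ToyVisionEncoder(nn.Module):
    """Stand-in for a pretrained ViT: maps an image to a sequence of patch features."""
    def __init__(self, patch=16, d_vis=384):
        super().__init__()
        self.proj = nn.Conv2d(3, d_vis, kernel_size=patch, stride=patch)
    def forward(self, images):                       # (B, 3, H, W)
        x = self.proj(images)                        # (B, d_vis, H/p, W/p)
        return x.flatten(2).transpose(1, 2)          # (B, num_patches, d_vis)

class MLPConnector(nn.Module):
    """Projects visual features into the language embedding space."""
    def __init__(self, d_vis=384, d_model=D_MODEL):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_vis, d_model), nn.GELU(),
                                 nn.Linear(d_model, d_model))
    def forward(self, v):
        return self.net(v)

class MaskedDiffusionLM(nn.Module):
    """Bidirectional transformer that predicts masked response tokens."""
    def __init__(self):
        super().__init__()
        self.tok_emb = nn.Embedding(VOCAB_SIZE, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.lm_head = nn.Linear(D_MODEL, VOCAB_SIZE)
    def forward(self, vis_embeds, token_ids):
        x = torch.cat([vis_embeds, self.tok_emb(token_ids)], dim=1)  # prepend image tokens
        h = self.backbone(x)                                          # no causal mask
        return self.lm_head(h[:, vis_embeds.size(1):])                # logits over text positions

def masked_diffusion_loss(model, vis_embeds, response_ids):
    """Mask a random fraction t of response tokens and score only masked positions."""
    t = torch.rand(response_ids.size(0), 1, device=response_ids.device)
    mask = torch.rand_like(response_ids, dtype=torch.float) < t
    mask[:, 0] = True  # toy safeguard: keep at least one masked position per sequence
    noisy = torch.where(mask, torch.full_like(response_ids, MASK_ID), response_ids)
    logits = model(vis_embeds, noisy)
    loss = F.cross_entropy(logits[mask], response_ids[mask])
    return loss / t.mean()  # 1/t-style reweighting (assumed form of the objective)

# Toy forward/backward pass on random data
enc, conn, lm = ToyVisionEncoder(), MLPConnector(), MaskedDiffusionLM()
images = torch.randn(2, 3, 224, 224)
response = torch.randint(0, VOCAB_SIZE - 1, (2, 32))
loss = masked_diffusion_loss(lm, conn(enc(images)), response)
loss.backward()
print(float(loss))
```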

Community

Paper author Paper submitter

https://ml-gsai.github.io/LLaDA-V-demo/

Paper author

GitHub link: https://github.com/ML-GSAI/LLaDA-V

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* InternVL3: Exploring Advanced Training and Test-Time Recipes for Open-Source Multimodal Models (https://huggingface.co/papers/2504.10479) (2025)
* Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs (https://huggingface.co/papers/2504.17432) (2025)
* The Scalability of Simplicity: Empirical Analysis of Vision-Language Learning with a Single Transformer (https://huggingface.co/papers/2504.10462) (2025)
* MMaDA: Multimodal Large Diffusion Language Models (https://huggingface.co/papers/2505.15809) (2025)
* MASSV: Multimodal Adaptation and Self-Data Distillation for Speculative Decoding of Vision-Language Models (https://huggingface.co/papers/2505.10526) (2025)
* TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and Generation (https://huggingface.co/papers/2505.05422) (2025)
* LangBridge: Interpreting Image as a Combination of Language Embeddings (https://huggingface.co/papers/2503.19404) (2025)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 1

Datasets citing this paper 0


Spaces citing this paper 0


Collections including this paper 7
