
https://homepage.jackli.org/projects/lavida_o/index.html

arxiv:2509.19244

Lavida-O: Elastic Large Masked Diffusion Models for Unified Multimodal Understanding and Generation

Published on Sep 23
· Submitted by Shufan Li on Sep 25
Authors: Shufan Li, Jiuxiang Gu, Kangning Liu, Zhe Lin, Zijun Wei, Aditya Grover, Jason Kuen

Abstract

AI-generated summary

Lavida-O, a unified Masked Diffusion Model, excels in multimodal understanding and generation tasks, including object grounding, image editing, and high-resolution text-to-image synthesis, outperforming existing models with improved efficiency and quality.

We propose Lavida-O, a unified Masked Diffusion Model (MDM) for multimodal understanding and generation. Unlike existing multimodal MDMs such as MMaDa and Muddit, which support only simple image-level understanding tasks and low-resolution image generation, Lavida-O presents a single framework that enables image-level understanding, object grounding, image editing, and high-resolution (1024px) text-to-image synthesis. Lavida-O incorporates a novel Elastic Mixture-of-Transformers (Elastic-MoT) architecture that couples a lightweight generation branch with a larger understanding branch, supported by token compression, universal text conditioning, and stratified sampling for efficient, high-quality generation. Lavida-O further incorporates planning and iterative self-reflection in image generation and editing tasks, seamlessly boosting generation quality with its understanding capabilities. Lavida-O achieves state-of-the-art performance on a wide range of benchmarks, including RefCOCO object grounding, GenEval text-to-image generation, and ImgEdit image editing, outperforming existing autoregressive and continuous diffusion models such as Qwen2.5-VL and FluxKontext-dev, while offering considerable speedup at inference. These advances establish Lavida-O as a new paradigm for scalable multimodal reasoning and generation.
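As background for readers new to MDMs, the iterative-unmasking loop that masked diffusion models share can be sketched as follows. This is a generic toy illustration, not Lavida-O's implementation: the vocabulary size, the stand-in denoiser, the confidence heuristic, and the linear unmasking schedule are all placeholder assumptions.

```python
import random

MASK = -1  # placeholder id standing in for the [MASK] token

def toy_denoiser(tokens):
    """Stand-in for a trained network: returns a (predicted_token, confidence)
    pair for every position. A real MDM would run a transformer here."""
    return [(random.randrange(10), random.random()) for _ in tokens]

def mdm_sample(seq_len=16, steps=4):
    """Generic masked-diffusion sampling loop: start fully masked, then at
    each step commit the most confident predictions and keep the rest masked."""
    tokens = [MASK] * seq_len
    for step in range(steps):
        preds = toy_denoiser(tokens)
        masked = [i for i, t in enumerate(tokens) if t == MASK]
        # Unmask ceil(remaining / steps_left) positions per step, so every
        # position is committed by the final step.
        k = -(-len(masked) // (steps - step))
        for i in sorted(masked, key=lambda i: -preds[i][1])[:k]:
            tokens[i] = preds[i][0]
    return tokens
```

Because all still-masked positions are predicted in parallel at each step, the number of denoiser calls equals `steps` rather than `seq_len`, which is the source of the inference speedup MDMs claim over token-by-token autoregressive decoding.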

Community

Paper author Paper submitter

We propose Lavida-O, a unified Masked Diffusion Model (MDM) for multimodal understanding and generation. Unlike existing multimodal MDMs such as MMaDa and Muddit, which support only simple image-level understanding tasks and low-resolution image generation, Lavida-O presents a single framework that enables image-level understanding, object grounding, image editing, and high-resolution (1024px) text-to-image synthesis. Lavida-O incorporates a novel Elastic Mixture-of-Transformers (Elastic-MoT) architecture that couples a lightweight generation branch with a larger understanding branch, supported by token compression, universal text conditioning, and stratified sampling for efficient, high-quality generation. Lavida-O further incorporates planning and iterative self-reflection in image generation and editing tasks, seamlessly boosting generation quality with its understanding capabilities. Lavida-O achieves state-of-the-art performance on a wide range of benchmarks, including RefCOCO object grounding, GenEval text-to-image generation, and ImgEdit image editing, outperforming existing autoregressive and continuous diffusion models such as Qwen2.5-VL and FluxKontext-dev, while offering considerable speedup at inference. These advances establish Lavida-O as a new paradigm for scalable multimodal reasoning and generation.

Jinbin Bai

Impressive!

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* MANZANO: A Simple and Scalable Unified Multimodal Model with a Hybrid Vision Tokenizer (https://huggingface.co/papers/2509.16197) (2025)
* UniLiP: Adapting CLIP for Unified Multimodal Understanding, Generation and Editing (https://huggingface.co/papers/2507.23278) (2025)
* Bifrost-1: Bridging Multimodal LLMs and Diffusion Models with Patch-level CLIP Latents (https://huggingface.co/papers/2508.05954) (2025)
* OneCAT: Decoder-Only Auto-Regressive Model for Unified Understanding and Generation (https://huggingface.co/papers/2509.03498) (2025)
* NextStep-1: Toward Autoregressive Image Generation with Continuous Tokens at Scale (https://huggingface.co/papers/2508.10711) (2025)
* UNCAGE: Contrastive Attention Guidance for Masked Generative Transformers in Text-to-Image Generation (https://huggingface.co/papers/2508.05399) (2025)
* UnifiedVisual: A Framework for Constructing Unified Vision-Language Datasets (https://huggingface.co/papers/2509.14738) (2025)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2509.19244 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2509.19244 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2509.19244 in a Space README.md to link it from this page.

Collections including this paper 2
