arxiv:2506.09040

Autoregressive Semantic Visual Reconstruction Helps VLMs Understand Better

Published on Jun 10 · Submitted by Jiaqi Wang on Jun 11
#2 Paper of the day
Authors: Dianyi Wang, Wei Song, Yikun Wang, Siyuan Wang, Kaicheng Yu, Zhongyu Wei, Jiaqi Wang

Abstract

AI-generated summary

Autoregressive Semantic Visual Reconstruction (ASVR) improves multimodal understanding by focusing on semantic reconstruction rather than raw visual appearance, enhancing performance across various benchmarks.

Typical large vision-language models (LVLMs) apply autoregressive supervision solely to textual sequences, without fully incorporating the visual modality into the learning process. This results in three key limitations: (1) an inability to utilize images without accompanying captions, (2) the risk that captions omit critical visual details, and (3) the challenge that certain vision-centric content cannot be adequately conveyed through text. As a result, current LVLMs often prioritize vision-to-language alignment while potentially overlooking fine-grained visual information. While some prior works have explored autoregressive image generation, effectively leveraging autoregressive visual supervision to enhance image understanding remains an open challenge. In this paper, we introduce Autoregressive Semantic Visual Reconstruction (ASVR), which enables joint learning of visual and textual modalities within a unified autoregressive framework. We show that autoregressively reconstructing the raw visual appearance of images does not enhance and may even impair multimodal understanding. In contrast, autoregressively reconstructing the semantic representation of images consistently improves comprehension. Notably, we find that even when models are given continuous image features as input, they can effectively reconstruct discrete semantic tokens, resulting in stable and consistent improvements across a wide range of multimodal understanding benchmarks. Our approach delivers significant performance gains across varying data scales (556k-2M) and types of LLM backbones. Specifically, ASVR improves LLaVA-1.5 by 5% in average scores across 14 multimodal benchmarks. The code is available at https://github.com/AlenjandroWang/ASVR.
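To make the described training signal concrete, here is a minimal, hypothetical PyTorch sketch of a joint autoregressive objective in the spirit of ASVR: the usual next-token cross-entropy on text plus a cross-entropy term on discrete semantic visual tokens at the image positions. The function name, tensor shapes, and the simple additive weighting are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def joint_autoregressive_loss(logits_text, text_targets,
                              logits_vis, vis_token_targets,
                              vis_weight=1.0):
    """
    Hypothetical sketch of a joint autoregressive objective:
    next-token cross-entropy on text plus cross-entropy on discrete
    semantic visual tokens predicted at the image positions.

    logits_text:       (B, T_text, V_text) LM-head logits at text positions
    text_targets:      (B, T_text)         ground-truth text token ids
    logits_vis:        (B, T_img, V_vis)   visual-head logits at image positions
    vis_token_targets: (B, T_img)          discrete semantic token ids
                                           (e.g. from a semantic tokenizer)
    """
    # Standard language-modeling loss on the textual sequence.
    loss_text = F.cross_entropy(
        logits_text.reshape(-1, logits_text.size(-1)),
        text_targets.reshape(-1),
        ignore_index=-100,  # mask out prompt/padding positions
    )
    # Autoregressive reconstruction loss on semantic visual tokens.
    loss_vis = F.cross_entropy(
        logits_vis.reshape(-1, logits_vis.size(-1)),
        vis_token_targets.reshape(-1),
        ignore_index=-100,
    )
    return loss_text + vis_weight * loss_vis
```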

Community

Paper submitter

🧠 ASVR: Autoregressive Semantic Visual Reconstruction Helps VLMs Understand Better
(Pronounced “as-we-are”)

🤔 Motivation: Can autoregressive visual generation supervision enhance VLMs' understanding?

📉 Simply reconstructing raw image pixels doesn’t help multimodal understanding—and can even hurt performance.

🧱 Instead, autoregressively reconstructing semantic representations leads to stronger visual-language comprehension.

🚀 This semantic-centric approach provides consistent improvements across various benchmarks.

🔁 ASVR demonstrates: it’s not about predicting pixels, but about predicting meaning. A simple yet effective recipe for training better VLMs.

💻 Code: github.com/AlenjandroWang/ASVR
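Expanding on the "predict meaning, not pixels" point above: the abstract notes that even with continuous image features as input, the model reconstructs discrete semantic tokens. The snippet below is a rough, assumed illustration of one generic way such targets could be produced, by nearest-neighbour quantization of vision-encoder features against a semantic codebook; the paper's actual semantic tokenizer may differ, so treat the names and shapes as placeholders.

```python
import torch

def quantize_to_semantic_tokens(features, codebook):
    """
    Illustrative (assumed) vector-quantization step: map continuous
    vision-encoder patch features to discrete semantic token ids by
    nearest-neighbour lookup against a semantic codebook.

    features: (B, N, D) continuous patch features from the vision encoder
    codebook: (K, D)    codebook of K semantic embeddings
    returns:  (B, N)    discrete token ids used as reconstruction targets
    """
    # Pairwise L2 distances between every feature and every codebook entry.
    expanded = codebook.unsqueeze(0).expand(features.size(0), -1, -1)  # (B, K, D)
    dists = torch.cdist(features, expanded)                            # (B, N, K)
    return dists.argmin(dim=-1)
```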

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any Paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 0

No model linking this paper


Datasets citing this paper 0

No dataset linking this paper


Spaces citing this paper 0

No Space linking this paper


Collections including this paper 3
