https://s-vco.github.io/

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

* [Re-Align: Aligning Vision Language Models via Retrieval-Augmented Direct Preference Optimization](https://huggingface.co/papers/2502.13146) (2025)
* [Probing Visual Language Priors in VLMs](https://huggingface.co/papers/2501.00569) (2024)
* [Visual Attention Never Fades: Selective Progressive Attention ReCalibration for Detailed Image Captioning in Multimodal Large Language Models](https://huggingface.co/papers/2502.01419) (2025)
* [Why Vision Language Models Struggle with Visual Arithmetic? Towards Enhanced Chart and Geometry Understanding](https://huggingface.co/papers/2502.11492) (2025)
* [Supervision-free Vision-Language Alignment](https://huggingface.co/papers/2501.04568) (2025)
* [ImageRef-VL: Enabling Contextual Image Referencing in Vision-Language Models](https://huggingface.co/papers/2501.12418) (2025)
* [SMIR: Efficient Synthetic Data Pipeline To Improve Multi-Image Reasoning](https://huggingface.co/papers/2501.03675) (2025)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any Paper on Hugging Face, check out [this Space](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers).

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`
\n","updatedAt":"2025-02-22T01:32:47.711Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":264}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.701280951499939},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2502.13928","authors":[{"_id":"67b7cdac904136d47c3966d8","user":{"_id":"65222f97ef06bb99753cb829","avatarUrl":"/avatars/f1a743d74e6d38b916acaec91b4e7e4f.svg","isPro":true,"fullname":"Shengguang Wu","user":"danielwusg","type":"user"},"name":"Shengguang Wu","status":"claimed_verified","statusLastChangedAt":"2025-02-21T09:59:21.001Z","hidden":false},{"_id":"67b7cdac904136d47c3966d9","user":{"_id":"66287db3fe7ace12c40026b6","avatarUrl":"/avatars/ee86cddd6c7f6f456f7643ee0ddb084a.svg","isPro":false,"fullname":"Fan-Yun Sun","user":"sunfanyun","type":"user"},"name":"Fan-Yun Sun","status":"claimed_verified","statusLastChangedAt":"2025-03-19T09:48:05.193Z","hidden":false},{"_id":"67b7cdac904136d47c3966da","name":"Kaiyue Wen","hidden":false},{"_id":"67b7cdac904136d47c3966db","name":"Nick Haber","hidden":false}],"publishedAt":"2025-02-19T18:05:42.000Z","submittedOnDailyAt":"2025-02-21T16:12:50.546Z","title":"Symmetrical Visual Contrastive Optimization: Aligning Vision-Language\n Models with Minimal Contrastive Images","submittedOnDailyBy":{"_id":"65222f97ef06bb99753cb829","avatarUrl":"/avatars/f1a743d74e6d38b916acaec91b4e7e4f.svg","isPro":true,"fullname":"Shengguang Wu","user":"danielwusg","type":"user"},"summary":"Recent studies have shown that Large Vision-Language Models (VLMs) tend to\nneglect image content and over-rely on language-model priors, resulting in\nerrors in visually grounded tasks and hallucinations. We hypothesize that this\nissue arises because existing VLMs are not explicitly trained to generate texts\nthat are accurately grounded in fine-grained image details. To enhance visual\nfeedback during VLM training, we propose S-VCO (Symmetrical Visual Contrastive\nOptimization), a novel finetuning objective that steers the model toward\ncapturing important visual details and aligning them with corresponding text\ntokens. To further facilitate this detailed alignment, we introduce MVC, a\npaired image-text dataset built by automatically filtering and augmenting\nvisual counterfactual data to challenge the model with hard contrastive cases\ninvolving Minimal Visual Contrasts. Experiments show that our method\nconsistently improves VLM performance across diverse benchmarks covering\nvarious abilities and domains, achieving up to a 22% reduction in\nhallucinations, and significant gains in vision-centric and general tasks.\nNotably, these improvements become increasingly pronounced in benchmarks with\nhigher visual dependency. In short, S-VCO offers a significant enhancement of\nVLM's visually-dependent task performance while retaining or even improving the\nmodel's general abilities. 
We opensource our code at https://s-vco.github.io/","upvotes":4,"discussionId":"67b7cdb8904136d47c396910","ai_summary":"S-VCO, a novel fine-tuning objective using minimal visual contrasts, enhances VLM performance by improving visual detail alignment and reducing hallucinations in visually grounded tasks.","ai_keywords":["Large Vision-Language Models (VLMs)","Symmetrical Visual Contrastive Optimization (S-VCO)","MVC","visual contrastive optimization","minimal visual contrasts","fine-grained image details","text tokens","visual feedback","hallucinations","vision-centric tasks","general tasks"]},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"65222f97ef06bb99753cb829","avatarUrl":"/avatars/f1a743d74e6d38b916acaec91b4e7e4f.svg","isPro":true,"fullname":"Shengguang Wu","user":"danielwusg","type":"user"},{"_id":"66f612b934b8ac9ffa44f084","avatarUrl":"/avatars/6836c122e19c66c90f1673f28b30d7f0.svg","isPro":false,"fullname":"Tang","user":"tommysally","type":"user"},{"_id":"668cd4bbe990292e5f6974d3","avatarUrl":"/avatars/d1747b2372e94500ecb5fb56809b482d.svg","isPro":false,"fullname":"Jinyeong Kim","user":"rubatoyeong","type":"user"},{"_id":"66287db3fe7ace12c40026b6","avatarUrl":"/avatars/ee86cddd6c7f6f456f7643ee0ddb084a.svg","isPro":false,"fullname":"Fan-Yun Sun","user":"sunfanyun","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
AI-generated summary

S-VCO, a novel fine-tuning objective using minimal visual contrasts, enhances VLM performance by improving visual detail alignment and reducing hallucinations in visually grounded tasks.

Abstract

Recent studies have shown that Large Vision-Language Models (VLMs) tend to
neglect image content and over-rely on language-model priors, resulting in
errors in visually grounded tasks and hallucinations. We hypothesize that this
issue arises because existing VLMs are not explicitly trained to generate texts
that are accurately grounded in fine-grained image details. To enhance visual
feedback during VLM training, we propose S-VCO (Symmetrical Visual Contrastive
Optimization), a novel finetuning objective that steers the model toward
capturing important visual details and aligning them with corresponding text
tokens. To further facilitate this detailed alignment, we introduce MVC, a
paired image-text dataset built by automatically filtering and augmenting
visual counterfactual data to challenge the model with hard contrastive cases
involving Minimal Visual Contrasts. Experiments show that our method
consistently improves VLM performance across diverse benchmarks covering
various abilities and domains, achieving up to a 22% reduction in
hallucinations, and significant gains in vision-centric and general tasks.
Notably, these improvements become increasingly pronounced in benchmarks with
higher visual dependency. In short, S-VCO offers a significant enhancement of
VLM's visually-dependent task performance while retaining or even improving the
model's general abilities. We open-source our code at https://s-vco.github.io/
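
The abstract describes the objective only at a high level, so below is a minimal PyTorch sketch of what a symmetrical contrastive loss over one MVC-style minimal-contrast pair could look like. This is an assumption-laden illustration, not the released S-VCO implementation (see https://s-vco.github.io/ for the official code): the `vlm_logprob` helper, the sigmoid-margin form of the loss, and the `beta` scale are all placeholders introduced for the example.

```python
# Illustrative sketch only -- not the authors' released S-VCO implementation.
# Assumption: `vlm_logprob(image, text)` returns the VLM's summed
# log-likelihood of `text` conditioned on `image` as a scalar tensor.
import torch
import torch.nn.functional as F


def symmetrical_contrastive_loss(vlm_logprob, img_a, txt_a, img_b, txt_b, beta=1.0):
    """Loss over one minimal-contrast pair: (img_a, txt_a) vs. (img_b, txt_b).

    Each text should be better supported by its matching image than by the
    counterfactual image, and the margin is enforced in both directions.
    """
    # Direction A: txt_a scored against its own image vs. the counterfactual.
    margin_a = vlm_logprob(img_a, txt_a) - vlm_logprob(img_b, txt_a)
    # Direction B: txt_b scored against its own image vs. the counterfactual.
    margin_b = vlm_logprob(img_b, txt_b) - vlm_logprob(img_a, txt_b)
    # Penalize negative margins with a sigmoid-based preference loss.
    return -F.logsigmoid(beta * margin_a) - F.logsigmoid(beta * margin_b)


# Toy usage: a stand-in scorer counting shared words (a real VLM would score
# the text tokens conditioned on image features).
if __name__ == "__main__":
    def dummy_logprob(image, text):
        return torch.tensor(float(len(set(image.split()) & set(text.split()))))

    loss = symmetrical_contrastive_loss(
        dummy_logprob,
        img_a="a cat sits on the mat", txt_a="the cat on the mat",
        img_b="a dog sits on the mat", txt_b="the dog on the mat",
    )
    print(loss.item())
```

The point of the sketch is to show why minimal visual contrasts are useful as hard cases: when the two images differ only in a small detail, positive margins in both directions can only be achieved if the model actually attends to that detail rather than falling back on language-model priors.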