arxiv:2503.10582

VisualWebInstruct: Scaling up Multimodal Instruction Data through Web Search

Published on Mar 13, 2025 · Submitted by Wenhu Chen on Mar 14, 2025

Authors: Yiming Jia, Jiachen Li, Xiang Yue, Bo Li, Ping Nie, Kai Zou, Wenhu Chen

Project page: https://tiger-ai-lab.github.io/VisualWebInstruct/ · Code: https://github.com/TIGER-AI-Lab/VisualWebInstruct

Abstract

The VisualWebInstruct approach enhances vision-language models' reasoning abilities through a large, diverse, and high-quality multimodal dataset created from search engine data.

AI-generated summary

Vision-Language Models have made significant progress on many perception-focused tasks; however, their progress on reasoning-focused tasks appears limited by the lack of high-quality and diverse training data. In this work, we aim to address this scarcity of reasoning-focused multimodal datasets. We propose VisualWebInstruct, a novel approach that leverages search engines to create a diverse, high-quality dataset spanning multiple disciplines such as math, physics, finance, and chemistry. Starting with 30,000 meticulously selected seed images, we employ Google Image search to identify websites containing similar images. We collect and process the HTML from over 700K unique URL sources. Through a pipeline of content extraction, filtering, and synthesis, we build a dataset of approximately 900K question-answer pairs, with 40% being visual QA pairs and the rest text QA pairs. Models fine-tuned on VisualWebInstruct demonstrate significant performance gains: (1) training from Llava-OV-mid shows 10-20% absolute point gains across benchmarks, and (2) training from MAmmoTH-VL shows a 5% absolute gain. Our best model, MAmmoTH-VL2, achieves state-of-the-art performance within the 10B parameter class on MMMU-Pro-std (40.7%), MathVerse (42.6%), and DynaMath (55.7%). These remarkable results highlight the effectiveness of our dataset in enhancing VLMs' reasoning capabilities for complex multimodal tasks.
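
To make the collection pipeline described in the abstract concrete, here is a minimal, hypothetical sketch of its stages: seed images feed a reverse image search, the matching pages are fetched, question-answer candidates are extracted, and the pool is filtered. All function names and heuristics below are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of a VisualWebInstruct-style collection pipeline.
# The image-search and extraction steps are placeholders; only the overall
# data flow (seed image -> similar pages -> QA candidates -> filtered pairs)
# mirrors the description in the abstract.
from dataclasses import dataclass
from typing import List, Optional
from urllib.request import urlopen


@dataclass
class QAPair:
    question: str
    answer: str
    image_url: Optional[str] = None  # None for text-only QA pairs


def search_similar_pages(seed_image_path: str) -> List[str]:
    """Placeholder for the Google Image search step: given a seed image,
    return URLs of pages that contain visually similar images."""
    raise NotImplementedError("requires an image-search backend")


def extract_qa_candidates(html: str, page_url: str) -> List[QAPair]:
    """Placeholder for content extraction: parse the page and pull out
    question/answer-shaped text together with any associated image URLs."""
    raise NotImplementedError("requires an HTML parser and an extraction model")


def filter_and_synthesize(candidates: List[QAPair]) -> List[QAPair]:
    """Crude stand-in for the filtering/synthesis stage: drop near-empty pairs.
    A real pipeline would also deduplicate and rewrite answers for consistency."""
    return [qa for qa in candidates if len(qa.question) > 20 and len(qa.answer) > 10]


def build_dataset(seed_images: List[str]) -> List[QAPair]:
    candidates: List[QAPair] = []
    for image_path in seed_images:
        for url in search_similar_pages(image_path):
            html = urlopen(url, timeout=10).read().decode("utf-8", errors="ignore")
            candidates.extend(extract_qa_candidates(html, url))
    return filter_and_synthesize(candidates)
```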

Community

Paper author · Paper submitter

We propose an approach to automatically scale up the multimodal instruction tuning dataset. We obtain state-of-the-art performance across many multimodal reasoning tasks.

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 1

Datasets citing this paper 3

Spaces citing this paper 1

Collections including this paper 6
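
For reference, a minimal sketch of pulling one of the linked datasets with the `datasets` library. The repository id below is an assumption inferred from the project's GitHub organization, not confirmed by this page; check the dataset links above for the actual id.

```python
# Minimal sketch: loading a Hub dataset with the `datasets` library.
# "TIGER-Lab/VisualWebInstruct" is an assumed repository id, not confirmed here.
from datasets import load_dataset

ds = load_dataset("TIGER-Lab/VisualWebInstruct", split="train")
print(ds[0])  # inspect one question-answer record
```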
