arxiv:2503.16081

OThink-MR1: Stimulating multimodal generalized reasoning capabilities via dynamic reinforcement learning

Published on Mar 20
· Submitted by Yuting Zhang on Mar 31
Authors: Zhiyuan Liu, Yuting Zhang, Feng Liu, Changwang Zhang, Ying Sun, Jun Wang

Abstract

Multimodal Large Language Models (MLLMs) have gained significant traction for their ability to process diverse input data types and generate coherent, contextually relevant outputs across various applications. While supervised fine-tuning (SFT) has been the predominant approach to enhance MLLM capabilities in task-specific optimization, it often falls short in fostering crucial generalized reasoning abilities. Although reinforcement learning (RL) holds great promise in overcoming these limitations, it encounters two significant challenges: (1) its generalized capacities in multimodal tasks remain largely unexplored, and (2) its training constraints, including the constant Kullback-Leibler divergence or the clamp strategy, often result in suboptimal bottlenecks. To address these challenges, we propose OThink-MR1, an advanced MLLM equipped with profound comprehension and reasoning capabilities across multimodal tasks. Specifically, we introduce Group Relative Policy Optimization with a dynamic Kullback-Leibler strategy (GRPO-D), which markedly enhances reinforcement learning (RL) performance. For Qwen2-VL-2B-Instruct, GRPO-D achieves a relative improvement of more than 5.72% over SFT and more than 13.59% over GRPO in same-task evaluation on two adapted datasets. Furthermore, GRPO-D demonstrates remarkable cross-task generalization capabilities, with an average relative improvement of more than 61.63% over SFT in cross-task evaluation. These results highlight that the MLLM trained with GRPO-D on one multimodal task can be effectively transferred to another task, underscoring the superior generalized reasoning capabilities of our proposed OThink-MR1 model.

AI-generated summary

OThink-MR1, an advanced MLLM using GRPO-D, enhances reinforcement learning performance and demonstrates superior cross-task generalization compared to supervised fine-tuning.
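The abstract attributes the gains to replacing the constant KL-divergence constraint in GRPO with a dynamic one. The page does not spell out the actual schedule, so the sketch below only illustrates the general idea: a GRPO-style group-relative advantage combined with a KL weight that varies with the training step. The function names (`dynamic_kl_weight`, `grpo_d_loss`), the linear warm-up schedule, the default coefficients, and the k3 KL estimator are assumptions for illustration, not the paper's recipe.

```python
# Minimal sketch of a GRPO-style update with a dynamic KL weight, in the
# spirit of the GRPO-D idea described in the abstract. Illustrative only:
# the linear schedule, default values, and k3 KL estimator are assumptions.
import torch

def dynamic_kl_weight(step, total_steps, beta_min=0.0, beta_max=0.04):
    """Interpolate the KL coefficient from beta_min to beta_max over training.
    A small early weight lets the policy explore away from the reference
    model; a larger late weight pulls it back toward the reference."""
    frac = min(step / max(total_steps, 1), 1.0)
    return beta_min + frac * (beta_max - beta_min)

def grpo_d_loss(logp_new, logp_old, logp_ref, rewards, step, total_steps,
                clip_eps=0.2):
    """logp_*: (G,) summed log-probs of G sampled responses to one prompt
    under the current, behaviour (old), and frozen reference policies.
    rewards: (G,) rule-based rewards for those responses."""
    # Group-relative advantage: normalise rewards within the sampled group.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)

    # PPO-style clipped surrogate on the importance ratio.
    ratio = torch.exp(logp_new - logp_old)
    surrogate = torch.minimum(ratio * adv,
                              torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv)

    # k3 estimator of KL(pi_theta || pi_ref), per response.
    log_ratio_ref = logp_ref - logp_new
    kl = torch.exp(log_ratio_ref) - log_ratio_ref - 1.0

    beta = dynamic_kl_weight(step, total_steps)  # step-dependent, not constant
    return -(surrogate - beta * kl).mean()
```

Whether OThink-MR1 anneals the weight up, down, or on some other curve is not stated on this page; the only point of the sketch is that the KL coefficient depends on the training step rather than staying fixed.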

Community

Paper submitter

This paper proposes OThink-MR1, a dynamic reinforcement learning framework for fine-tuning MLLMs that outperforms SFT in same-task evaluation. The approach dynamically balances exploration and exploitation, resulting in more effective learning.
This paper is among the first to demonstrate significant cross-task generalization of dynamic reinforcement learning for MLLMs: models post-trained with GRPO-D on one multimodal task can be effectively transferred to other multimodal tasks, greatly reducing the need for extensive task-specific data collection and retraining across diverse applications.
Media coverage (in Chinese): https://www.qbitai.com/2025/03/269180.html

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* Visual-RFT: Visual Reinforcement Fine-Tuning (2025) - https://huggingface.co/papers/2503.01785
* MM-Eureka: Exploring Visual Aha Moment with Rule-based Large-scale Reinforcement Learning (2025) - https://huggingface.co/papers/2503.07365
* Reinforcement Learning Outperforms Supervised Fine-Tuning: A Case Study on Audio Question Answering (2025) - https://huggingface.co/papers/2503.11197
* Reason-RFT: Reinforcement Fine-Tuning for Visual Reasoning (2025) - https://huggingface.co/papers/2503.20752
* Vision-R1: Incentivizing Reasoning Capability in Multimodal Large Language Models (2025) - https://huggingface.co/papers/2503.06749
* CLS-RL: Image Classification with Rule-Based Reinforcement Learning (2025) - https://huggingface.co/papers/2503.16188
* Boosting the Generalization and Reasoning of Vision Language Models with Curriculum Reinforcement Learning (2025) - https://huggingface.co/papers/2503.07065

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend

Good work!


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2503.16081 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2503.16081 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2503.16081 in a Space README.md to link it from this page.

Collections including this paper 3
