VL-Cogito: Progressive Curriculum Reinforcement Learning for Advanced Multimodal Reasoning
Abstract
VL-Cogito, a multimodal reasoning model, uses a Progressive Curriculum Reinforcement Learning framework to improve performance across diverse tasks by dynamically adjusting difficulty and reasoning path length.
Reinforcement learning has proven its effectiveness in enhancing the reasoning capabilities of large language models. Recent research efforts have progressively extended this paradigm to multimodal reasoning tasks. Due to the inherent complexity and diversity of multimodal tasks, especially in semantic content and problem formulations, existing models often exhibit unstable performance across various domains and difficulty levels. To address these limitations, we propose VL-Cogito, an advanced multimodal reasoning model trained via a novel multi-stage Progressive Curriculum Reinforcement Learning (PCuRL) framework. PCuRL systematically guides the model through tasks of gradually increasing difficulty, substantially improving its reasoning abilities across diverse multimodal contexts. The framework introduces two key innovations: (1) an online difficulty soft weighting mechanism, dynamically adjusting training difficulty across successive RL training stages; and (2) a dynamic length reward mechanism, which encourages the model to adaptively regulate its reasoning path length according to task complexity, thus balancing reasoning efficiency with correctness. Experimental evaluations demonstrate that VL-Cogito consistently matches or surpasses existing reasoning-oriented models across mainstream multimodal benchmarks spanning mathematics, science, logic, and general understanding, validating the effectiveness of our approach.
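As a rough illustration of the two mechanisms the abstract describes, here is a minimal sketch in Python. The function names, the Gaussian-style soft weighting, and the linear length shaping are assumptions made for illustration only; they are not the paper's actual equations.

```python
import math

def difficulty_weight(rollout_accuracy: float, stage_target: float,
                      temperature: float = 0.2) -> float:
    """Soft weight for a prompt: highest when its online rollout accuracy
    is close to the difficulty level targeted by the current RL stage."""
    return math.exp(-((rollout_accuracy - stage_target) ** 2)
                    / (2.0 * temperature ** 2))

def length_reward(is_correct: bool, length: int, target_length: int) -> float:
    """Length-shaping bonus that peaks when the reasoning length matches a
    task-dependent target and decays as the response under- or overshoots."""
    if not is_correct or target_length <= 0:
        return 0.0  # only correct answers earn the length bonus
    return max(0.0, 1.0 - abs(length - target_length) / target_length)

# Example: weight a mid-difficulty prompt in a late (harder) curriculum stage
# and shape the reward of a correct rollout slightly short of the target.
w = difficulty_weight(rollout_accuracy=0.4, stage_target=0.3)
r = 1.0 + length_reward(is_correct=True, length=900, target_length=1024)
print(f"difficulty weight = {w:.3f}, shaped reward = {w * r:.3f}")
```

In this sketch, the soft weight scales each prompt's contribution so that emphasis shifts toward harder prompts in later stages, while the length bonus is gated on correctness so the model is never rewarded for long but wrong reasoning.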
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Metis-RISE: RL Incentivizes and SFT Enhances Multimodal Reasoning Model Learning (2025)
- M2-Reasoning: Empowering MLLMs with Unified General and Spatial Reasoning (2025)
- Advancing Multimodal Reasoning: From Optimized Cold Start to Staged Reinforcement Learning (2025)
- Multimodal Mathematical Reasoning with Diverse Solving Perspective (2025)
- GHPO: Adaptive Guidance for Stable and Efficient LLM Reinforcement Learning (2025)
- The Synergy Dilemma of Long-CoT SFT and RL: Investigating Post-Training Techniques for Reasoning VLMs (2025)
- EFRame: Deeper Reasoning via Exploration-Filter-Replay Reinforcement Learning Framework (2025)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment:
@librarian-bot recommend
I have a question about Equation 6 in the paper:
This expression attains its maximum when $L_i = 0$, but attains its minimum when $L_i = L_{\text{tgt}}$.
Could it be missing a negative sign?
Sorry, there was a typo in the previous version. We have updated Equation 6 in the paper; please let us know if you have any further questions.
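For readers following this exchange, a generic illustration (not the paper's actual Equation 6) of how a sign flip swaps the extrema: on the interval $0 \le L_i \le L_{\text{tgt}}$, the function
$$f(L_i) = 1 - \frac{L_i}{L_{\text{tgt}}}$$
attains its maximum $f(0) = 1$ at $L_i = 0$ and its minimum $f(L_{\text{tgt}}) = 0$ at $L_i = L_{\text{tgt}}$; negating the length term gives
$$g(L_i) = \frac{L_i}{L_{\text{tgt}}},$$
which flips the extrema so the reward instead peaks at the target length, matching the intended behavior the question describes.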
Models citing this paper 1
Datasets citing this paper 0
No dataset linking this paper
Spaces citing this paper 0
No Space linking this paper