arxiv:2502.01456

Process Reinforcement through Implicit Rewards

Published on Feb 3
Submitted by Hanbin Wang on Feb 4
#3 Paper of the day
Authors: Ganqu Cui, Lifan Yuan, Zefan Wang, Hanbin Wang, Wendi Li, Bingxiang He, Yuchen Fan, Tianyu Yu, Qixin Xu, Weize Chen, Jiarui Yuan, Huayu Chen, Kaiyan Zhang, Xingtai Lv, Shuo Wang, Yuan Yao, Xu Han, Hao Peng, Yu Cheng, Zhiyuan Liu, Maosong Sun, Bowen Zhou, Ning Ding
AI-generated summary

PRIME leverages implicit process rewards to improve the reinforcement learning of large language models, achieving better performance with less data compared to traditional methods.

Abstract

Dense process rewards have proven to be a more effective alternative to sparse outcome-level rewards in the inference-time scaling of large language models (LLMs), particularly in tasks requiring complex multi-step reasoning. While dense rewards also offer an appealing choice for the reinforcement learning (RL) of LLMs, since their fine-grained rewards have the potential to address some inherent issues of outcome rewards, such as training efficiency and credit assignment, this potential remains largely unrealized. This can be primarily attributed to the challenges of training process reward models (PRMs) online, where collecting high-quality process labels is prohibitively expensive, making them particularly vulnerable to reward hacking. To address these challenges, we propose PRIME (Process Reinforcement through IMplicit rEwards), which enables online PRM updates using only policy rollouts and outcome labels through implicit process rewards. PRIME combines well with various advantage functions and forgoes the dedicated reward-model training phase that existing approaches require, substantially reducing the development overhead. We demonstrate PRIME's effectiveness on competition-level math and coding. Starting from Qwen2.5-Math-7B-Base, PRIME achieves a 15.1% average improvement across several key reasoning benchmarks over the SFT model. Notably, our resulting model, Eurus-2-7B-PRIME, surpasses Qwen2.5-Math-7B-Instruct on seven reasoning benchmarks with 10% of its training data.
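
To make the abstract's central idea more concrete, below is a minimal sketch, not the authors' released code, of how per-token implicit process rewards can be computed as scaled log-probability ratios between an implicit PRM (a causal LM trained only on outcome labels) and a frozen reference model. The checkpoint paths, the BETA value, and the function name are illustrative assumptions; see the linked GitHub repository for the actual implementation.

```python
# Sketch only: implicit process rewards as beta-scaled log-prob ratios
# between an implicit PRM and a frozen reference model. All paths and
# hyperparameters below are placeholders, not the paper's settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BETA = 0.05  # placeholder scaling coefficient

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Math-7B")    # assumed base
prm = AutoModelForCausalLM.from_pretrained("path/to/implicit-prm")   # hypothetical checkpoint
ref = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-Math-7B")   # frozen reference

@torch.no_grad()
def implicit_process_rewards(prompt: str, response: str) -> torch.Tensor:
    """Return one reward per response token."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + response, return_tensors="pt").input_ids

    def token_logprobs(model):
        # Log-probability each model assigns to every next token in the sequence.
        logits = model(full_ids).logits[:, :-1]
        logp = torch.log_softmax(logits, dim=-1)
        targets = full_ids[:, 1:]
        return logp.gather(-1, targets.unsqueeze(-1)).squeeze(-1)

    ratio = token_logprobs(prm) - token_logprobs(ref)   # log pi_prm - log pi_ref
    start = prompt_ids.shape[1] - 1                     # keep only response tokens
    return BETA * ratio[:, start:]
```

Because these rewards come directly from log-probabilities, the PRM can be updated online with ordinary outcome labels on policy rollouts, which is what lets PRIME skip a dedicated reward-model training phase.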

Community

Hanbin Wang (paper author, paper submitter):

How to unlock advanced reasoning via scalable RL?

🚀 Introducing PRIME (Process Reinforcement through Implicit Rewards) and Eurus-2, trained from a base model to surpass Qwen2.5-Math-Instruct using only 1/10 of the data.

GitHub: https://github.com/PRIME-RL/PRIME
HF Collection: https://huggingface.co/PRIME-RL

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling (2025): https://huggingface.co/papers/2501.11651
* Improving Multi-Step Reasoning Abilities of Large Language Models with Direct Advantage Policy Optimization (2024): https://huggingface.co/papers/2412.18279
* Kimi k1.5: Scaling Reinforcement Learning with LLMs (2025): https://huggingface.co/papers/2501.12599
* InternLM-XComposer2.5-Reward: A Simple Yet Effective Multi-Modal Reward Model (2025): https://huggingface.co/papers/2501.12368
* rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking (2025): https://huggingface.co/papers/2501.04519
* Entropy-Regularized Process Reward Model (2024): https://huggingface.co/papers/2412.11006
* Diving into Self-Evolving Training for Multimodal Reasoning (2024): https://huggingface.co/papers/2412.17451

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper: 9

Datasets citing this paper: 3

Spaces citing this paper: 2

Collections including this paper: 12
