Papers
arxiv:2509.19249

Reinforcement Learning on Pre-Training Data

Published on Sep 23
· Submitted by taesiri on Sep 24
#2 Paper of the day
Authors:
Siheng Li, Kejiao Li, Zenan Xu, Guanhua Huang, Evander Yang, Kun Li, Haoyuan Wu, Jiajia Wu, Zihao Zheng, Chenchen Zhang, Kun Shi, Kyrierl Deng, Qi Yi, Ruibin Xiong, Tingqiang Xu, Yuhao Jiang, Jianfeng Yan, Yuyuan Zeng, Guanghui Xu, Jinbao Xue, Zhijiang Xu, Zheng Fang, Shuai Li, Qibin Liu, Xiaoxue Li, Zhuoyu Li, Yangyu Tao, Fei Gao, Cheng Jiang, Bo Chao Wang, Kai Liu, Jianchen Zhu, Wai Lam, Wayyt Wang, Bo Zhou, Di Wang

Abstract

The growing disparity between the exponential scaling of computational resources and the finite growth of high-quality text data now constrains conventional scaling approaches for large language models (LLMs). To address this challenge, we introduce Reinforcement Learning on Pre-Training data (RLPT), a new training-time scaling paradigm for optimizing LLMs. In contrast to prior approaches that scale training primarily through supervised learning, RLPT enables the policy to autonomously explore meaningful trajectories to learn from pre-training data and improve its capability through reinforcement learning (RL). While existing RL strategies such as reinforcement learning from human feedback (RLHF) and reinforcement learning with verifiable rewards (RLVR) rely on human annotation for reward construction, RLPT eliminates this dependency by deriving reward signals directly from pre-training data. Specifically, it adopts a next-segment reasoning objective, rewarding the policy for accurately predicting subsequent text segments conditioned on the preceding context. This formulation allows RL to be scaled on pre-training data, encouraging the exploration of richer trajectories across broader contexts and thereby fostering more generalizable reasoning skills. Extensive experiments on both general-domain and mathematical reasoning benchmarks across multiple models validate the effectiveness of RLPT. For example, when applied to Qwen3-4B-Base, RLPT yields absolute improvements of 3.0, 5.1, 8.1, 6.0, 6.6, and 5.3 on MMLU, MMLU-Pro, GPQA-Diamond, KOR-Bench, AIME24, and AIME25, respectively. The results further demonstrate favorable scaling behavior, suggesting strong potential for continued gains with more compute. In addition, RLPT provides a solid foundation, extending the reasoning boundaries of LLMs and enhancing RLVR performance.

AI-generated summary

Reinforcement Learning on Pre-Training data (RLPT) optimizes large language models by autonomously exploring meaningful trajectories in pre-training data, improving generalizable reasoning skills without human annotation.
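The next-segment reasoning objective rewards the policy for predicting the subsequent segment of a pre-training document given its preceding context. The sketch below shows how such rollouts might be scored; it is a minimal illustration, not the paper's implementation: the helper names (split_into_examples, token_overlap_reward, rlpt_rollout_rewards) are hypothetical, and a simple token-overlap F1 stands in for whatever reward the paper actually uses to judge how accurately a predicted segment matches the ground-truth continuation.

```python
# Illustrative sketch of a next-segment reasoning reward.
# All helper names are hypothetical; the token-overlap F1 reward is a
# stand-in used only to keep the example self-contained.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class SegmentExample:
    context: str       # preceding text from a pre-training document
    next_segment: str  # ground-truth continuation used to score the rollout


def split_into_examples(document: str, segment_len: int = 200) -> List[SegmentExample]:
    """Slice a raw pre-training document into (context, next-segment) pairs."""
    examples = []
    for start in range(segment_len, len(document), segment_len):
        examples.append(SegmentExample(
            context=document[:start],
            next_segment=document[start:start + segment_len],
        ))
    return examples


def token_overlap_reward(prediction: str, reference: str) -> float:
    """Stand-in reward: F1 over whitespace tokens shared by prediction and reference."""
    pred, ref = prediction.split(), reference.split()
    if not pred or not ref:
        return 0.0
    common = len(set(pred) & set(ref))
    if common == 0:
        return 0.0
    precision, recall = common / len(pred), common / len(ref)
    return 2 * precision * recall / (precision + recall)


def rlpt_rollout_rewards(
    policy_generate: Callable[[str], str],  # policy: context -> predicted next segment
    examples: List[SegmentExample],
) -> List[float]:
    """Score one batch of rollouts; in training, these rewards would feed a
    standard policy-gradient update (e.g., PPO-style RL) over the policy."""
    rewards = []
    for ex in examples:
        prediction = policy_generate(ex.context)
        rewards.append(token_overlap_reward(prediction, ex.next_segment))
    return rewards
```

Because the reward is derived entirely from the document's own continuation, this kind of loop needs no human-annotated answers, which is the property the abstract contrasts with RLHF and RLVR.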

Community


This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2509.19249 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2509.19249 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2509.19249 in a Space README.md to link it from this page.

Collections including this paper 3
