arxiv:2504.13941

NEMOTRON-CROSSTHINK: Scaling Self-Learning beyond Math Reasoning

Published on Apr 15, 2025 · Submitted by Syeda Nahida Akter on Apr 22, 2025

AI-generated summary

NEMOTRON-CROSSTHINK is a framework that incorporates diverse multi-domain data into RL training to enhance the reasoning capabilities and efficiency of LLMs across various tasks and domains.

Abstract

Large Language Models (LLMs) have shown strong reasoning capabilities, particularly when enhanced through Reinforcement Learning (RL). While prior work has successfully applied RL to mathematical reasoning -- where rules and correctness are well-defined -- generalizing these methods to broader reasoning domains remains challenging due to limited data, the lack of verifiable reward structures, and diverse task requirements. In this work, we propose NEMOTRON-CROSSTHINK, a framework that systematically incorporates multi-domain corpora, including both synthetic and real-world question-answer pairs, into RL training to improve generalization across diverse reasoning tasks. NEMOTRON-CROSSTHINK addresses key challenges by (1) incorporating data from varied sources spanning STEM, humanities, social sciences, etc.; (2) applying structured templates (e.g., multiple-choice and open-ended) to control answer-space complexity; (3) filtering for verifiable answers; and (4) optimizing data blending strategies that utilize data from multiple sources effectively. Our approach enables scalable and verifiable reward modeling beyond mathematics and demonstrates improved accuracies on both math (MATH-500: +30.1%, AMC23: +27.5%) and non-math reasoning benchmarks (MMLU-PRO: +12.8%, GPQA-DIAMOND: +11.3%, AGIEVAL: +15.1%, SUPERGPQA: +3.8%). Moreover, NEMOTRON-CROSSTHINK exhibits significantly improved response efficiency -- using 28% fewer tokens for correct answers -- highlighting more focused and effective reasoning. Through NEMOTRON-CROSSTHINK, we demonstrate that integrating multi-domain, multi-format data in RL leads to more accurate, efficient, and generalizable LLMs.
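The pipeline itself is not shown on this page, but the four steps listed in the abstract map naturally onto a small data-preparation sketch. The following is a minimal, hypothetical Python illustration, not the paper's released code: every function name, the toy verifiability rule, and the blending scheme are assumptions made for exposition.

```python
# Illustrative sketch only. Mirrors the abstract's four steps:
# (1) multi-domain sourcing, (2) structured templates,
# (3) verifiability filtering, (4) weighted data blending.
import random
import re

# (1) Multi-domain pools of synthetic and real-world QA pairs (toy data).
DATA_SOURCES = {
    "math":            [{"q": "What is 7 * 8?", "answer": "56"}],
    "social_sciences": [{"q": "Which theory explains X?", "answer": "B",
                         "choices": ["Theory A", "Theory B",
                                     "Theory C", "Theory D"]}],
}

# (2) Structured templates control answer-space complexity.
def apply_template(ex):
    if "choices" in ex:  # multiple-choice: tightly constrained answer space
        opts = "\n".join(f"({chr(65 + i)}) {c}"
                         for i, c in enumerate(ex["choices"]))
        return {"prompt": f"{ex['q']}\n{opts}\nAnswer with the option letter.",
                "answer": ex["answer"]}
    # open-ended: answer must still be short and rule-checkable
    return {"prompt": f"{ex['q']}\nGive only the final answer.",
            "answer": ex["answer"]}

# (3) Keep only examples a rule-based checker can verify (toy rule:
# an option letter, a number, or a single short token).
def is_verifiable(ex):
    return bool(re.fullmatch(r"[A-D]|[-+]?\d+(\.\d+)?|\w{1,30}", ex["answer"]))

# (4) Blend sources with tunable weights; the paper reports searching
# over such blending strategies.
def build_blend(weights, n_total, rng=random.Random(0)):
    pool = []
    for src, w in weights.items():
        usable = [apply_template(e) for e in DATA_SOURCES[src]]
        usable = [e for e in usable if is_verifiable(e)]
        pool += rng.choices(usable, k=int(w * n_total))
    rng.shuffle(pool)
    return pool

blend = build_blend({"math": 0.5, "social_sciences": 0.5}, n_total=10)
```

The key design point the sketch tries to convey is step (3): because every retained answer is short and rule-checkable, the RL reward can be computed by exact matching rather than by a learned reward model, which is what makes the approach scale beyond math.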

Community

Paper author · Paper submitter

[Figure: summary_accuracy_comparison.png]

Employing self-learning with multi-domain data, NEMOTRON-CROSSTHINK outperforms baseline models, including domain-specific training (Only Math) and Open-Reasoner-Zero (ORZ-7B), achieving consistent gains across all reasoning tasks.

[Figure: Group 114.png]

Moreover, NEMOTRON-CROSSTHINK exhibits significantly improved response efficiency—using 28% fewer tokens for correct answers—highlighting more focused and effective reasoning.
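For readers who want that metric made concrete, here is a minimal sketch of how "fewer tokens for correct answers" can be computed. The result format and numbers are hypothetical stand-ins, not the paper's evaluation harness.

```python
# Hypothetical sketch: relative token reduction on correct answers,
# comparing a baseline model against a NEMOTRON-CROSSTHINK-style model.
def avg_tokens_on_correct(results):
    correct = [r["num_tokens"] for r in results if r["is_correct"]]
    return sum(correct) / len(correct)

baseline = [{"num_tokens": 900,  "is_correct": True},
            {"num_tokens": 1200, "is_correct": False}]  # wrong answers excluded
ours     = [{"num_tokens": 650,  "is_correct": True}]

reduction = 1 - avg_tokens_on_correct(ours) / avg_tokens_on_correct(baseline)
print(f"{reduction:.0%} fewer tokens on correct answers")  # prints: 28% ...
```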

Interesting work. Will this data be made available?

Paper author · Paper submitter

Yes, we are working on releasing the data now.

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* Crossing the Reward Bridge: Expanding RL with Verifiable Rewards Across Diverse Domains (https://huggingface.co/papers/2503.23829) (2025)
* SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models (https://huggingface.co/papers/2504.11468) (2025)
* Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't (https://huggingface.co/papers/2503.16219) (2025)
* Exploring the Effect of Reinforcement Learning on Video Understanding: Insights from SEED-Bench-R1 (https://huggingface.co/papers/2503.24376) (2025)
* DeepMath-103K: A Large-Scale, Challenging, Decontaminated, and Verifiable Mathematical Dataset for Advancing Reasoning (https://huggingface.co/papers/2504.11456) (2025)
* GRPO-LEAD: A Difficulty-Aware Reinforcement Learning Approach for Concise Mathematical Reasoning in Language Models (https://huggingface.co/papers/2504.09696) (2025)
* Open-Reasoner-Zero: An Open Source Approach to Scaling Up Reinforcement Learning on the Base Model (https://huggingface.co/papers/2503.24290) (2025)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend

Models citing this paper: 0
Datasets citing this paper: 2
Spaces citing this paper: 0
Collections including this paper: 2
