Comment from yjh415: https://youtu.be/dW0gb6PV0EM

\n","updatedAt":"2025-05-16T17:11:27.692Z","author":{"_id":"6813ee19c9b224a738fea856","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/g1uPHIKEgWe1ftHGHbo_U.png","fullname":"YJ","name":"yjh415","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false}},"numEdits":1,"identifiedLanguage":{"language":"en","probability":0.5793865919113159},"editors":["yjh415"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/g1uPHIKEgWe1ftHGHbo_U.png"],"reactions":[],"isReport":false}},{"id":"6827e8325b20a863e75028b5","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":264},"createdAt":"2025-05-17T01:36:50.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Nemotron-CrossThink: Scaling Self-Learning beyond Math Reasoning](https://huggingface.co/papers/2504.13941) (2025)\n* [SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models](https://huggingface.co/papers/2504.11468) (2025)\n* [How Difficulty-Aware Staged Reinforcement Learning Enhances LLMs' Reasoning Capabilities: A Preliminary Experimental Study](https://huggingface.co/papers/2504.00829) (2025)\n* [Exploring the Effect of Reinforcement Learning on Video Understanding: Insights from SEED-Bench-R1](https://huggingface.co/papers/2503.24376) (2025)\n* [SimpleRL-Zoo: Investigating and Taming Zero Reinforcement Learning for Open Base Models in the Wild](https://huggingface.co/papers/2503.18892) (2025)\n* [Agentic Reasoning and Tool Integration for LLMs via Reinforcement Learning](https://huggingface.co/papers/2505.01441) (2025)\n* [Phi-4-Mini-Reasoning: Exploring the Limits of Small Reasoning Language Models in Math](https://huggingface.co/papers/2504.21233) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`","html":"
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
\n
The following papers were recommended by the Semantic Scholar API
\n
\n
Please give a thumbs up to this comment if you found it helpful!
\n
If you want recommendations for any Paper on Hugging Face checkout this Space
\n
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: \n\n@librarian-bot\n\t recommend
\n","updatedAt":"2025-05-17T01:36:50.471Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":264}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7243580222129822},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2505.10554","authors":[{"_id":"6826a569ea77771e3880f793","user":{"_id":"64351475901c5734bcb64248","avatarUrl":"/avatars/12346d4301c1bfb00ce0ea128a93cc15.svg","isPro":false,"fullname":"Zhiyuan Hu","user":"zhiyuanhucs","type":"user"},"name":"Zhiyuan Hu","status":"admin_assigned","statusLastChangedAt":"2025-05-16T08:02:17.850Z","hidden":false},{"_id":"6826a569ea77771e3880f794","name":"Yibo Wang","hidden":false},{"_id":"6826a569ea77771e3880f795","user":{"_id":"63a3ff69f91ad3ea5703841d","avatarUrl":"/avatars/69227c4bce01d33747c1377b6f9672db.svg","isPro":false,"fullname":"Hanze Dong","user":"hendrydong","type":"user"},"name":"Hanze Dong","status":"admin_assigned","statusLastChangedAt":"2025-05-16T08:08:50.665Z","hidden":false},{"_id":"6826a569ea77771e3880f796","user":{"_id":"6602869253a0518b2a98cafd","avatarUrl":"/avatars/c14b5953a716f42c83ad28147f8308ae.svg","isPro":false,"fullname":"Yuhui Xu","user":"yuhuixu","type":"user"},"name":"Yuhui Xu","status":"admin_assigned","statusLastChangedAt":"2025-05-16T08:09:07.114Z","hidden":false},{"_id":"6826a569ea77771e3880f797","user":{"_id":"6461c2905dba83471db3be53","avatarUrl":"/avatars/6e36cf86201d590ac729a75d4a439cde.svg","isPro":false,"fullname":"Amrita Saha","user":"amritasaha87","type":"user"},"name":"Amrita Saha","status":"admin_assigned","statusLastChangedAt":"2025-05-16T08:09:22.879Z","hidden":false},{"_id":"6826a569ea77771e3880f798","user":{"_id":"649dbcc4e0fff1ed099dc80a","avatarUrl":"/avatars/c87c273ca628dbcddccbf1ee19b2ce33.svg","isPro":false,"fullname":"Caiming Xiong","user":"cxiong","type":"user"},"name":"Caiming Xiong","status":"admin_assigned","statusLastChangedAt":"2025-05-16T08:09:29.160Z","hidden":false},{"_id":"6826a569ea77771e3880f799","user":{"_id":"651d8032c50012d33e914f2f","avatarUrl":"/avatars/0a44c9f51fc50ce86582e328c361ea00.svg","isPro":false,"fullname":"Bryan Hooi","user":"bhooi","type":"user"},"name":"Bryan Hooi","status":"admin_assigned","statusLastChangedAt":"2025-05-16T08:09:35.640Z","hidden":false},{"_id":"6826a569ea77771e3880f79a","user":{"_id":"61f9d3b54ac99e8a1bae85f4","avatarUrl":"/avatars/ac47d13204dd22452e4bc46e280842d5.svg","isPro":false,"fullname":"JunnanLi","user":"JunnanLi","type":"user"},"name":"Junnan Li","status":"admin_assigned","statusLastChangedAt":"2025-05-16T08:09:57.841Z","hidden":false}],"publishedAt":"2025-05-15T17:58:33.000Z","submittedOnDailyAt":"2025-05-16T01:09:52.437Z","title":"Beyond 'Aha!': Toward Systematic Meta-Abilities Alignment in Large\n Reasoning Models","submittedOnDailyBy":{"_id":"64351475901c5734bcb64248","avatarUrl":"/avatars/12346d4301c1bfb00ce0ea128a93cc15.svg","isPro":false,"fullname":"Zhiyuan Hu","user":"zhiyuanhucs","type":"user"},"summary":"Large reasoning models (LRMs) already possess a latent capacity for long\nchain-of-thought reasoning. 
Prior work has shown that outcome-based\nreinforcement learning (RL) can incidentally elicit advanced reasoning\nbehaviors such as self-correction, backtracking, and verification phenomena\noften referred to as the model's \"aha moment\". However, the timing and\nconsistency of these emergent behaviors remain unpredictable and\nuncontrollable, limiting the scalability and reliability of LRMs' reasoning\ncapabilities. To address these limitations, we move beyond reliance on prompts\nand coincidental \"aha moments\". Instead, we explicitly align models with three\nmeta-abilities: deduction, induction, and abduction, using automatically\ngenerated, self-verifiable tasks. Our three stage-pipeline individual\nalignment, parameter-space merging, and domain-specific reinforcement learning,\nboosting performance by over 10\\% relative to instruction-tuned baselines.\nFurthermore, domain-specific RL from the aligned checkpoint yields an\nadditional 2\\% average gain in the performance ceiling across math, coding, and\nscience benchmarks, demonstrating that explicit meta-ability alignment offers a\nscalable and dependable foundation for reasoning. Code is available at:\nhttps://github.com/zhiyuanhubj/Meta-Ability-Alignment","upvotes":120,"discussionId":"6826a56aea77771e3880f7c8","githubRepo":"https://github.com/zhiyuanhubj/Meta-Ability-Alignment","ai_summary":"Explicit alignment of large reasoning models with deduction, induction, and abduction through a three-stage pipeline improves scalability and reliability in reasoning tasks.","ai_keywords":["large reasoning models","long chain-of-thought reasoning","outcome-based reinforcement learning","self-correction","backtracking","verification","meta-abilities","automatic task generation","parameter-space merging","domain-specific reinforcement learning"],"githubStars":79},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"64351475901c5734bcb64248","avatarUrl":"/avatars/12346d4301c1bfb00ce0ea128a93cc15.svg","isPro":false,"fullname":"Zhiyuan Hu","user":"zhiyuanhucs","type":"user"},{"_id":"67ed538215808519137702bb","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/yntBMCk-7ZytUkteeM2MU.png","isPro":false,"fullname":"Pan","user":"SigridPan","type":"user"},{"_id":"63a3ff69f91ad3ea5703841d","avatarUrl":"/avatars/69227c4bce01d33747c1377b6f9672db.svg","isPro":false,"fullname":"Hanze Dong","user":"hendrydong","type":"user"},{"_id":"6529f79e802e3d1a4f8ec662","avatarUrl":"/avatars/d05320c370a6497d8792ef5acb563dd5.svg","isPro":false,"fullname":"Yuliang Liu","user":"yuliang03181","type":"user"},{"_id":"6804bee95d8af4a366e05870","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/2f8nEwpfza0UJ_VyHXl57.png","isPro":false,"fullname":"Ming Wang","user":"hujunxianligong","type":"user"},{"_id":"66d86541c81167fc5e0c0d44","avatarUrl":"/avatars/02a18cca3c14093eb0d03950f0da9b96.svg","isPro":false,"fullname":"Fanshuang Kong","user":"kongfs","type":"user"},{"_id":"64ca18318d2d187c24df20ec","avatarUrl":"/avatars/cada297547bf4c84934c6196d2ee6abd.svg","isPro":false,"fullname":"James X. 
Zhao","user":"JamesXZ","type":"user"},{"_id":"62bb1e0f3ff437e49a3088e5","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/62bb1e0f3ff437e49a3088e5/MWNanci3x5g780xh-704U.png","isPro":true,"fullname":"Suyuchen Wang","user":"sheryc","type":"user"},{"_id":"624ea839ecce9a6f171d3b45","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/rZ2bRICC-kF9RJwHWpIoK.png","isPro":false,"fullname":"yssss","user":"yuansui","type":"user"},{"_id":"5ff2c923b44d9ce2371f8fd5","avatarUrl":"/avatars/16858cb830ff34ffea5e97895b678a22.svg","isPro":false,"fullname":"Brian","user":"aniloid2","type":"user"},{"_id":"6433d1494b34368fdbff9c63","avatarUrl":"/avatars/6acf8b0ea8d4a42ca838363fd5e3a7d0.svg","isPro":false,"fullname":"Naibo Wang","user":"Naibo","type":"user"},{"_id":"64c38c913c236dcedaeff7f5","avatarUrl":"/avatars/a6ffd362112c95ebdbe30a8c5fa04c07.svg","isPro":false,"fullname":"Wu Zhaoxuan","user":"ZhaoxuanWu","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":1}">
Beyond 'Aha!': Toward Systematic Meta-Abilities Alignment in Large Reasoning Models
Published on May 15
#1 Paper of the day
Abstract
Explicit alignment of large reasoning models with deduction, induction, and abduction through a three-stage pipeline improves scalability and reliability in reasoning tasks.
Large reasoning models (LRMs) already possess a latent capacity for long chain-of-thought reasoning. Prior work has shown that outcome-based reinforcement learning (RL) can incidentally elicit advanced reasoning behaviors such as self-correction, backtracking, and verification, phenomena often referred to as the model's "aha moment". However, the timing and consistency of these emergent behaviors remain unpredictable and uncontrollable, limiting the scalability and reliability of LRMs' reasoning capabilities. To address these limitations, we move beyond reliance on prompts and coincidental "aha moments". Instead, we explicitly align models with three meta-abilities: deduction, induction, and abduction, using automatically generated, self-verifiable tasks. Our three-stage pipeline (individual alignment, parameter-space merging, and domain-specific reinforcement learning) boosts performance by over 10% relative to instruction-tuned baselines. Furthermore, domain-specific RL from the aligned checkpoint yields an additional 2% average gain in the performance ceiling across math, coding, and science benchmarks, demonstrating that explicit meta-ability alignment offers a scalable and dependable foundation for reasoning. Code is available at: https://github.com/zhiyuanhubj/Meta-Ability-Alignment
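
The abstract names parameter-space merging of the three meta-ability-aligned checkpoints as the second stage of the pipeline. As a rough illustration only, the sketch below merges three Hugging Face-format checkpoints by linear weight averaging; the checkpoint paths, the equal 1/3 weights, and the use of plain linear interpolation are assumptions for illustration, not necessarily the authors' exact recipe (see the linked repository for the actual implementation).

```python
# Minimal sketch of parameter-space merging: linearly combine the weights of
# three meta-ability-aligned checkpoints (deduction, induction, abduction)
# into a single model. The checkpoint paths and the equal 1/3 weights are
# illustrative assumptions, not the paper's exact configuration.
import torch
from transformers import AutoModelForCausalLM

CHECKPOINTS = {
    "deduction": "ckpts/deduction-aligned",   # hypothetical local paths
    "induction": "ckpts/induction-aligned",
    "abduction": "ckpts/abduction-aligned",
}
WEIGHTS = {"deduction": 1 / 3, "induction": 1 / 3, "abduction": 1 / 3}


def merge_checkpoints(checkpoints: dict, weights: dict):
    """Return a model whose floating-point parameters are the weighted
    average of the corresponding parameters in the input checkpoints."""
    names = list(checkpoints)
    # The first checkpoint acts as the container for the merged parameters
    # (and supplies any non-float buffers unchanged).
    merged = AutoModelForCausalLM.from_pretrained(
        checkpoints[names[0]], torch_dtype=torch.float32
    )
    merged_state = merged.state_dict()
    for key, tensor in merged_state.items():
        if tensor.is_floating_point():
            merged_state[key] = weights[names[0]] * tensor

    # Accumulate the remaining checkpoints tensor by tensor.
    for name in names[1:]:
        model = AutoModelForCausalLM.from_pretrained(
            checkpoints[name], torch_dtype=torch.float32
        )
        for key, tensor in model.state_dict().items():
            if tensor.is_floating_point():
                merged_state[key] += weights[name] * tensor
        del model  # free memory before loading the next checkpoint

    merged.load_state_dict(merged_state)
    return merged


if __name__ == "__main__":
    merged_model = merge_checkpoints(CHECKPOINTS, WEIGHTS)
    merged_model.save_pretrained("ckpts/meta-ability-merged")
```

Under this reading, the merged checkpoint then serves as the starting point for the domain-specific RL stage described in the abstract; the per-ability weights are a natural knob to tune if one ability should dominate the merge.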