
Code: https://github.com/ZJU-REAL/Self-Braking-Tuning
Project: https://zju-real.github.io/SBT

Papers
arxiv:2505.14604

Let LLMs Break Free from Overthinking via Self-Braking Tuning

Published on May 20
· Submitted by Yongliang Shen on May 23
Authors: Haoran Zhao, Yuchen Yan, Yongliang Shen, Haolei Xu, Wenqi Zhang, Kaitao Song, Jian Shao, Weiming Lu, Jun Xiao, Yueting Zhuang

Abstract

A novel Self-Braking Tuning framework reduces overthinking and unnecessary computational overhead in large reasoning models by enabling the model to self-regulate its reasoning process.

AI-generated summary

Large reasoning models (LRMs), such as OpenAI o1 and DeepSeek-R1, have significantly enhanced their reasoning capabilities by generating longer chains of thought, demonstrating outstanding performance across a variety of tasks. However, this performance gain comes at the cost of a substantial increase in redundant reasoning during the generation process, leading to high computational overhead and exacerbating the issue of overthinking. Although numerous existing approaches aim to address the problem of overthinking, they often rely on external interventions. In this paper, we propose a novel framework, Self-Braking Tuning (SBT), which tackles overthinking from the perspective of allowing the model to regulate its own reasoning process, thus eliminating the reliance on external control mechanisms. We construct a set of overthinking identification metrics based on standard answers and design a systematic method to detect redundant reasoning. This method accurately identifies unnecessary steps within the reasoning trajectory and generates training signals for learning self-regulation behaviors. Building on this foundation, we develop a complete strategy for constructing data with adaptive reasoning lengths and introduce an innovative braking prompt mechanism that enables the model to naturally learn when to terminate reasoning at an appropriate point. Experiments across mathematical benchmarks (AIME, AMC, MATH500, GSM8K) demonstrate that our method reduces token consumption by up to 60% while maintaining comparable accuracy to unconstrained models.
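To make the abstract's idea of "overthinking identification metrics based on standard answers" concrete, here is a toy sketch. It assumes, purely for illustration, that a step is "sufficient" once its text contains the reference answer, and that every step after that point is redundant; the function names, the matching rule, and the example trace are all hypothetical and are not the paper's actual method:

```python
def first_sufficient_step(steps, reference_answer):
    """Return the index of the first reasoning step that already
    contains the reference answer, or None if no step does.

    Toy stand-in for an 'overthinking identification metric':
    every step after this index is treated as redundant.
    """
    for i, step in enumerate(steps):
        if reference_answer in step:
            return i
    return None


def overthinking_ratio(steps, reference_answer):
    """Fraction of steps generated after the answer was first reached."""
    idx = first_sufficient_step(steps, reference_answer)
    if idx is None:
        return 0.0  # answer never reached; nothing to trim
    return (len(steps) - (idx + 1)) / len(steps)


# Hypothetical chain of thought: the answer appears at step index 1,
# so the last two re-verification steps count as redundant.
steps = [
    "Let x be the unknown; the equation is 2x + 3 = 11.",
    "Subtract 3: 2x = 8, so x = 4.",
    "Double-check: 2*4 + 3 = 11, confirmed. Answer: 4.",
    "Wait, let me re-verify from scratch...",
    "Again 2x = 8 gives x = 4. Final answer: 4.",
]
print(overthinking_ratio(steps, "x = 4"))  # 0.6
```

In the paper's framing, a signal like this ratio would be used to build training data with adaptive reasoning lengths, so the model itself learns where to brake; the string-containment check above is only a stand-in for the paper's actual answer-based metrics.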

Community

Paper author and submitter · edited May 23
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

- Learning When to Think: Shaping Adaptive Reasoning in R1-Style Models via Multi-Stage RL (2025)
- Thought Manipulation: External Thought Can Be Efficient for Large Reasoning Models (2025)
- Making Small Language Models Efficient Reasoners: Intervention, Supervision, Reinforcement (2025)
- ShorterBetter: Guiding Reasoning Models to Find Optimal Inference Length for Efficient Reasoning (2025)
- Hawkeye: Efficient Reasoning with Model Collaboration (2025)
- DRP: Distilled Reasoning Pruning with Skill-aware Step Decomposition for Efficient Large Reasoning Models (2025)
- When to Continue Thinking: Adaptive Thinking Mode Switching for Efficient Reasoning (2025)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2505.14604 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2505.14604 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2505.14604 in a Space README.md to link it from this page.

Collections including this paper 2
