\n","updatedAt":"2024-10-14T01:52:21.713Z","author":{"_id":"64fde4e252e82dd432b74ce9","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64fde4e252e82dd432b74ce9/-CQZbBP7FsPPyawYrsi4z.jpeg","fullname":"Ling Yang","name":"Lingaaaaaaa","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":10}},"numEdits":0,"editors":["Lingaaaaaaa"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/64fde4e252e82dd432b74ce9/-CQZbBP7FsPPyawYrsi4z.jpeg"],"reactions":[],"isReport":false}},{"id":"670dc6916eefe354dff4c7aa","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":264},"createdAt":"2024-10-15T01:34:09.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [S3c-Math: Spontaneous Step-level Self-correction Makes Large Language Models Better Mathematical Reasoners](https://huggingface.co/papers/2409.01524) (2024)\n* [Improving LLM Reasoning through Scaling Inference Computation with Collaborative Verification](https://huggingface.co/papers/2410.05318) (2024)\n* [Subtle Errors Matter: Preference Learning via Error-injected Self-editing](https://huggingface.co/papers/2410.06638) (2024)\n* [BEATS: Optimizing LLM Mathematical Capabilities with BackVerify and Adaptive Disambiguate based Efficient Tree Search](https://huggingface.co/papers/2409.17972) (2024)\n* [Self-Correction is More than Refinement: A Learning Framework for Visual and Language Reasoning Tasks](https://huggingface.co/papers/2410.04055) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`","html":"
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
\n
The following papers were recommended by the Semantic Scholar API
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out [this Space](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers).

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`
\n","updatedAt":"2024-10-15T01:34:09.083Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":264}},"numEdits":0,"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2410.09008","authors":[{"_id":"670c74c39f3ee99ddfb15805","user":{"_id":"64fde4e252e82dd432b74ce9","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64fde4e252e82dd432b74ce9/-CQZbBP7FsPPyawYrsi4z.jpeg","isPro":false,"fullname":"Ling Yang","user":"Lingaaaaaaa","type":"user"},"name":"Ling Yang","status":"admin_assigned","statusLastChangedAt":"2024-10-14T12:35:39.751Z","hidden":false},{"_id":"670c74c39f3ee99ddfb15806","user":{"_id":"64a131a7660cce8b86bf288d","avatarUrl":"/avatars/6c1a2475645a1a6ae3f804fe6c35a226.svg","isPro":false,"fullname":"zhao chen yu","user":"chenyu01","type":"user"},"name":"Zhaochen Yu","status":"admin_assigned","statusLastChangedAt":"2024-10-14T12:35:45.584Z","hidden":false},{"_id":"670c74c39f3ee99ddfb15807","user":{"_id":"6374cd6b6ea8da14f8fef8dc","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6374cd6b6ea8da14f8fef8dc/l13bg0tKDjCnUw3I895QZ.png","isPro":false,"fullname":"Tianjun Zhang","user":"tianjunz","type":"user"},"name":"Tianjun Zhang","status":"admin_assigned","statusLastChangedAt":"2024-10-14T12:35:51.068Z","hidden":false},{"_id":"670c74c39f3ee99ddfb15808","user":{"_id":"64c0e950aa57599de1c75dad","avatarUrl":"/avatars/374d53317cbccc30fae70e5152ca13e0.svg","isPro":false,"fullname":"Minkai Xu","user":"mkxu","type":"user"},"name":"Minkai Xu","status":"admin_assigned","statusLastChangedAt":"2024-10-14T12:35:57.450Z","hidden":false},{"_id":"670c74c39f3ee99ddfb15809","user":{"_id":"645d2e8401f4eaab2a0878ce","avatarUrl":"/avatars/1273c5fb607b4b622a746a42692fa632.svg","isPro":false,"fullname":"Joseph E. Gonzalez","user":"ProfJoeyG","type":"user"},"name":"Joseph E. Gonzalez","status":"admin_assigned","statusLastChangedAt":"2024-10-14T12:36:03.351Z","hidden":false},{"_id":"670c74c39f3ee99ddfb1580a","name":"Bin Cui","hidden":false},{"_id":"670c74c39f3ee99ddfb1580b","name":"Shuicheng Yan","hidden":false}],"publishedAt":"2024-10-11T17:25:52.000Z","submittedOnDailyAt":"2024-10-14T00:22:21.707Z","title":"SuperCorrect: Supervising and Correcting Language Models with\n Error-Driven Insights","submittedOnDailyBy":{"_id":"64fde4e252e82dd432b74ce9","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64fde4e252e82dd432b74ce9/-CQZbBP7FsPPyawYrsi4z.jpeg","isPro":false,"fullname":"Ling Yang","user":"Lingaaaaaaa","type":"user"},"summary":"Large language models (LLMs) like GPT-4, PaLM, and LLaMA have shown\nsignificant improvements in various reasoning tasks. However, smaller models\nsuch as Llama-3-8B and DeepSeekMath-Base still struggle with complex\nmathematical reasoning because they fail to effectively identify and correct\nreasoning errors. Recent reflection-based methods aim to address these issues\nby enabling self-reflection and self-correction, but they still face challenges\nin independently detecting errors in their reasoning steps. 
To overcome these\nlimitations, we propose SuperCorrect, a novel two-stage framework that uses a\nlarge teacher model to supervise and correct both the reasoning and reflection\nprocesses of a smaller student model. In the first stage, we extract\nhierarchical high-level and detailed thought templates from the teacher model\nto guide the student model in eliciting more fine-grained reasoning thoughts.\nIn the second stage, we introduce cross-model collaborative direct preference\noptimization (DPO) to enhance the self-correction abilities of the student\nmodel by following the teacher's correction traces during training. This\ncross-model DPO approach teaches the student model to effectively locate and\nresolve erroneous thoughts with error-driven insights from the teacher model,\nbreaking the bottleneck of its thoughts and acquiring new skills and knowledge\nto tackle challenging problems. Extensive experiments consistently demonstrate\nour superiority over previous methods. Notably, our SuperCorrect-7B model\nsignificantly surpasses powerful DeepSeekMath-7B by 7.8%/5.3% and\nQwen2.5-Math-7B by 15.1%/6.3% on MATH/GSM8K benchmarks, achieving new SOTA\nperformance among all 7B models. Code:\nhttps://github.com/YangLing0818/SuperCorrect-llm","upvotes":17,"discussionId":"670c74c39f3ee99ddfb1584b","ai_summary":"SuperCorrect uses a large teacher model to guide and correct a smaller student model's reasoning and reflection processes, significantly improving its performance in complex mathematical tasks.","ai_keywords":["large language models (LLMs)","LLaMA","DeepSeekMath","Llama-3-8B","hierarchical high-level","detailed thought templates","cross-model collaborative direct preference optimization (DPO)","self-correction","erroneous thoughts","MATH/GSM8K benchmarks"]},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"64fde4e252e82dd432b74ce9","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64fde4e252e82dd432b74ce9/-CQZbBP7FsPPyawYrsi4z.jpeg","isPro":false,"fullname":"Ling Yang","user":"Lingaaaaaaa","type":"user"},{"_id":"6662a23c2f86097c6d828b96","avatarUrl":"/avatars/2aa31ab30874257529861f2e4024acc2.svg","isPro":false,"fullname":"liu","user":"miao6","type":"user"},{"_id":"6662a2ac9ced3e13879c524d","avatarUrl":"/avatars/fa5bb180daad40171c0fde6f5ce081f7.svg","isPro":false,"fullname":"liu","user":"miao66","type":"user"},{"_id":"6662a59cf8d1fcc749cbc5de","avatarUrl":"/avatars/0e965b6b996c154b8d39106c0cc5178d.svg","isPro":false,"fullname":"liu","user":"miao99","type":"user"},{"_id":"5e56829137cb5b49818287ea","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/5e56829137cb5b49818287ea/8HYzJeRc4b9Wu7BfJwibS.png","isPro":true,"fullname":"Lee Junbum","user":"beomi","type":"user"},{"_id":"64966691990b342dcc9fccb5","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64966691990b342dcc9fccb5/tQSrE3MkBeakk5QYfgHSo.jpeg","isPro":true,"fullname":"sixiang chen","user":"Ephemeral182","type":"user"},{"_id":"641b754d1911d3be6745cce9","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/641b754d1911d3be6745cce9/DxjZG1XT4H3ZHF7qHxWxk.jpeg","isPro":true,"fullname":"atayloraerospace","user":"Taylor658","type":"user"},{"_id":"5f32b2367e583543386214d9","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1635314457124-5f32b2367e583543386214d9.jpeg","isPro":false,"fullname":"Sergei 
Averkiev","user":"averoo","type":"user"},{"_id":"646def60df618b303b419323","avatarUrl":"/avatars/97aa761d5255abf230304cfeade87835.svg","isPro":false,"fullname":"Lei Wang","user":"demolei","type":"user"},{"_id":"63c227a6c58fcfeac18dec07","avatarUrl":"/avatars/47df92c937cfaf8d726c7e28cf352eb5.svg","isPro":false,"fullname":"tim huang","user":"timmyhhh","type":"user"},{"_id":"648eb1eb59c4e5c87dc116e0","avatarUrl":"/avatars/c636cea39c2c0937f01398c94ead5dad.svg","isPro":false,"fullname":"fdsqefsgergd","user":"T-representer","type":"user"},{"_id":"5efbdc4ac3896117eab961a9","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1602668910270-5efbdc4ac3896117eab961a9.png","isPro":false,"fullname":"Data Mining and Information Systems Lab","user":"dmis-lab","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
AI-generated summary
SuperCorrect uses a large teacher model to guide and correct a smaller student model's reasoning and reflection processes, significantly improving its performance in complex mathematical tasks.

Abstract
Large language models (LLMs) like GPT-4, PaLM, and LLaMA have shown
significant improvements in various reasoning tasks. However, smaller models
such as Llama-3-8B and DeepSeekMath-Base still struggle with complex
mathematical reasoning because they fail to effectively identify and correct
reasoning errors. Recent reflection-based methods aim to address these issues
by enabling self-reflection and self-correction, but they still face challenges
in independently detecting errors in their reasoning steps. To overcome these
limitations, we propose SuperCorrect, a novel two-stage framework that uses a
large teacher model to supervise and correct both the reasoning and reflection
processes of a smaller student model. In the first stage, we extract
hierarchical high-level and detailed thought templates from the teacher model
to guide the student model in eliciting more fine-grained reasoning thoughts.
In the second stage, we introduce cross-model collaborative direct preference
optimization (DPO) to enhance the self-correction abilities of the student
model by following the teacher's correction traces during training. This
cross-model DPO approach teaches the student model to effectively locate and
resolve erroneous thoughts with error-driven insights from the teacher model,
breaking through the bottleneck of its own reasoning and acquiring new skills and
knowledge to tackle challenging problems. Extensive experiments consistently
demonstrate the superiority of our method over previous approaches. Notably, our
SuperCorrect-7B model significantly surpasses the powerful DeepSeekMath-7B by
7.8%/5.3% and Qwen2.5-Math-7B by 15.1%/6.3% on the MATH/GSM8K benchmarks,
achieving new SOTA
performance among all 7B models. Code:
https://github.com/YangLing0818/SuperCorrect-llm
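
To make the two stages described in the abstract more concrete, here is a minimal sketch in Python. It is not the authors' released implementation (see the repository linked above); the names `make_template_prompt`, `CorrectionTrace`, `build_dpo_pair`, and `dpo_loss` are hypothetical, and the example only illustrates how a hierarchical thought template could be turned into a prompt and how a teacher correction trace could be paired against the student's erroneous trace and scored with the standard DPO objective.

```python
# Illustrative sketch only: a simplified view of template-guided prompting (stage 1)
# and cross-model DPO pair construction (stage 2). All names here are hypothetical.
import math
from dataclasses import dataclass
from typing import List


# ---- Stage 1: hierarchical thought templates ---------------------------------
def make_template_prompt(problem: str, high_level: List[str], detailed: List[str]) -> str:
    """Combine a teacher-extracted high-level plan with detailed per-step hints
    to elicit fine-grained reasoning from the student model."""
    plan = "\n".join(f"  {i + 1}. {h}" for i, h in enumerate(high_level))
    hints = "\n".join(f"  - {d}" for d in detailed)
    return (
        f"Problem: {problem}\n"
        f"High-level thought template:\n{plan}\n"
        f"Detailed thought hints:\n{hints}\n"
        f"Solve the problem step by step, following the template."
    )


# ---- Stage 2: cross-model collaborative DPO ----------------------------------
@dataclass
class CorrectionTrace:
    prompt: str
    student_steps: List[str]      # student trace containing an erroneous step
    teacher_corrected: List[str]  # teacher's error-driven correction of that trace


def build_dpo_pair(trace: CorrectionTrace) -> dict:
    """Preference pair: the teacher-corrected trace is 'chosen', the student's
    original erroneous trace is 'rejected'."""
    return {
        "prompt": trace.prompt,
        "chosen": "\n".join(trace.teacher_corrected),
        "rejected": "\n".join(trace.student_steps),
    }


def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO objective: -log sigmoid(beta * (margin_chosen - margin_rejected)),
    where each margin is the policy-vs-reference log-probability gap."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))


if __name__ == "__main__":
    trace = CorrectionTrace(
        prompt=make_template_prompt(
            "Solve x^2 - 5x + 6 = 0.",
            high_level=["Factor the quadratic", "Read off the roots"],
            detailed=["Find two numbers whose product is 6 and whose sum is 5."],
        ),
        student_steps=["x^2 - 5x + 6 = (x - 1)(x - 6)", "Roots: 1 and 6"],      # erroneous factoring
        teacher_corrected=["x^2 - 5x + 6 = (x - 2)(x - 3)", "Roots: 2 and 3"],  # teacher's fix
    )
    pair = build_dpo_pair(trace)
    print("chosen:  ", pair["chosen"].replace("\n", " | "))
    print("rejected:", pair["rejected"].replace("\n", " | "))
    # Dummy log-probabilities just to show the loss computation.
    print("illustrative DPO loss:", round(dpo_loss(-10.0, -14.0, -11.0, -13.0), 4))
```

In this sketch the preference signal comes from the teacher's correction of a localized error rather than from sampling multiple student solutions, which is the intuition behind the cross-model DPO stage described above.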