

\n","updatedAt":"2025-05-23T01:38:41.633Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":264}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.7229664921760559},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2505.15801","authors":[{"_id":"682f4bed557039a63173829f","user":{"_id":"64098738342c26884c792c93","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64098738342c26884c792c93/SxBUd-wLrl-PjQsrVYJte.jpeg","isPro":false,"fullname":"Yuchen Yan","user":"yanyc","type":"user"},"name":"Yuchen Yan","status":"claimed_verified","statusLastChangedAt":"2025-05-26T08:17:16.142Z","hidden":false},{"_id":"682f4bed557039a6317382a0","name":"Jin Jiang","hidden":false},{"_id":"682f4bed557039a6317382a1","name":"Zhenbang Ren","hidden":false},{"_id":"682f4bed557039a6317382a2","name":"Yijun Li","hidden":false},{"_id":"682f4bed557039a6317382a3","name":"Xudong Cai","hidden":false},{"_id":"682f4bed557039a6317382a4","name":"Yang Liu","hidden":false},{"_id":"682f4bed557039a6317382a5","name":"Xin Xu","hidden":false},{"_id":"682f4bed557039a6317382a6","name":"Mengdi Zhang","hidden":false},{"_id":"682f4bed557039a6317382a7","name":"Jian Shao","hidden":false},{"_id":"682f4bed557039a6317382a8","user":{"_id":"5e1058e9fcf41d740b69966d","avatarUrl":"/avatars/ce74839ba871f2b54313a670a233ba82.svg","isPro":false,"fullname":"Yongliang Shen","user":"tricktreat","type":"user"},"name":"Yongliang Shen","status":"claimed_verified","statusLastChangedAt":"2025-05-26T08:17:13.646Z","hidden":false},{"_id":"682f4bed557039a6317382a9","name":"Jun Xiao","hidden":false},{"_id":"682f4bed557039a6317382aa","name":"Yueting Zhuang","hidden":false}],"publishedAt":"2025-05-21T17:54:43.000Z","submittedOnDailyAt":"2025-05-22T14:39:08.244Z","title":"VerifyBench: Benchmarking Reference-based Reward Systems for Large\n Language Models","submittedOnDailyBy":{"_id":"64098738342c26884c792c93","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64098738342c26884c792c93/SxBUd-wLrl-PjQsrVYJte.jpeg","isPro":false,"fullname":"Yuchen Yan","user":"yanyc","type":"user"},"summary":"Large reasoning models such as OpenAI o1 and DeepSeek-R1 have achieved\nremarkable performance in the domain of reasoning. A key component of their\ntraining is the incorporation of verifiable rewards within reinforcement\nlearning (RL). However, existing reward benchmarks do not evaluate\nreference-based reward systems, leaving researchers with limited understanding\nof the accuracy of verifiers used in RL. In this paper, we introduce two\nbenchmarks, VerifyBench and VerifyBench-Hard, designed to assess the\nperformance of reference-based reward systems. These benchmarks are constructed\nthrough meticulous data collection and curation, followed by careful human\nannotation to ensure high quality. Current models still show considerable room\nfor improvement on both VerifyBench and VerifyBench-Hard, especially\nsmaller-scale models. Furthermore, we conduct a thorough and comprehensive\nanalysis of evaluation results, offering insights for understanding and\ndeveloping reference-based reward systems. 
Our proposed benchmarks serve as\neffective tools for guiding the development of verifier accuracy and the\nreasoning capabilities of models trained via RL in reasoning tasks.","upvotes":17,"discussionId":"682f4bee557039a6317382df","projectPage":"https://zju-real.github.io/VerifyBench/","githubRepo":"https://github.com/ZJU-REAL/VerifyBench","ai_summary":"Two new benchmarks, VerifyBench and VerifyBench-Hard, are introduced to evaluate the accuracy of reference-based reward systems in reinforcement learning for reasoning tasks.","ai_keywords":["reinforcement learning","RL","reward benchmarks","reference-based reward systems","verifiable rewards","reasoning models","OpenAI o1","DeepSeek-R1","human annotation","model evaluation","verifier accuracy"],"githubStars":16},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"64098738342c26884c792c93","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64098738342c26884c792c93/SxBUd-wLrl-PjQsrVYJte.jpeg","isPro":false,"fullname":"Yuchen Yan","user":"yanyc","type":"user"},{"_id":"6612a84c7554a7f1b7000b22","avatarUrl":"/avatars/f748fb577c5f2274222847acf9b01dea.svg","isPro":false,"fullname":"Haoran Zhao","user":"XinC6","type":"user"},{"_id":"5e1058e9fcf41d740b69966d","avatarUrl":"/avatars/ce74839ba871f2b54313a670a233ba82.svg","isPro":false,"fullname":"Yongliang Shen","user":"tricktreat","type":"user"},{"_id":"6692aff88db712bad780f02a","avatarUrl":"/avatars/5dc4b1c27c70f6a64864711dbff4910f.svg","isPro":false,"fullname":"xhl","user":"zjuxhl","type":"user"},{"_id":"620783f24e28382272337ba4","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/620783f24e28382272337ba4/zkUveQPNiDfYjgGhuFErj.jpeg","isPro":false,"fullname":"GuoLiangTang","user":"Tommy930","type":"user"},{"_id":"6572a479b3d8dd7b92212a4e","avatarUrl":"/avatars/fc6d60211504547113a6e14e15ddb4fb.svg","isPro":false,"fullname":"lvshangke","user":"paradox122","type":"user"},{"_id":"680a05485879991c2e550d96","avatarUrl":"/avatars/4030c1583cfdfee5a68f0c83b2e72eb0.svg","isPro":false,"fullname":"Hang Wu","user":"Leo-WU","type":"user"},{"_id":"67b970e414b1af2ce915c906","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/2l7CwmVEs8NVZWcYgHk36.png","isPro":false,"fullname":"邱怡文","user":"qywMichelle","type":"user"},{"_id":"66f82ff88d215c6331be7abd","avatarUrl":"/avatars/70a5cbd0824a6cfe0a291a41094644d9.svg","isPro":false,"fullname":"Qipeng Chen","user":"lechatelierlenz","type":"user"},{"_id":"682c0fcebbe0c6fa323f531b","avatarUrl":"/avatars/953e6f5ce0361c2f9693bc0ca82787b7.svg","isPro":false,"fullname":"yy","user":"yuy07","type":"user"},{"_id":"65ef2d78e26bcf263dc7a806","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/65ef2d78e26bcf263dc7a806/3QSx6Yk_thl7YARek5sx4.png","isPro":false,"fullname":"Fan Yuan","user":"Leoyfan","type":"user"},{"_id":"672a2a87ceba27d8932f5898","avatarUrl":"/avatars/a0d909313ce39c6bd3eeb18ee44b2193.svg","isPro":false,"fullname":"NUMB","user":"NUMB1234","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
arxiv:2505.15801

VerifyBench: Benchmarking Reference-based Reward Systems for Large Language Models

Published on May 21
· Submitted by Yuchen Yan on May 22
Authors:
Yuchen Yan, Jin Jiang, Zhenbang Ren, Yijun Li, Xudong Cai, Yang Liu, Xin Xu, Mengdi Zhang, Jian Shao, Yongliang Shen, Jun Xiao, Yueting Zhuang
Abstract

Two new benchmarks, VerifyBench and VerifyBench-Hard, are introduced to evaluate the accuracy of reference-based reward systems in reinforcement learning for reasoning tasks.

AI-generated summary

Large reasoning models such as OpenAI o1 and DeepSeek-R1 have achieved remarkable performance in the domain of reasoning. A key component of their training is the incorporation of verifiable rewards within reinforcement learning (RL). However, existing reward benchmarks do not evaluate reference-based reward systems, leaving researchers with limited understanding of the accuracy of the verifiers used in RL. In this paper, we introduce two benchmarks, VerifyBench and VerifyBench-Hard, designed to assess the performance of reference-based reward systems. These benchmarks are constructed through meticulous data collection and curation, followed by careful human annotation to ensure high quality. Current models, especially smaller-scale ones, still show considerable room for improvement on both VerifyBench and VerifyBench-Hard. Furthermore, we conduct a thorough and comprehensive analysis of the evaluation results, offering insights for understanding and developing reference-based reward systems. Our proposed benchmarks serve as effective tools for improving verifier accuracy and, in turn, the reasoning capabilities of models trained via RL on reasoning tasks.
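To make "reference-based reward system" concrete, here is a minimal illustrative sketch of a rule-based verifier used as a verifiable reward during RL training. The function names and the `Answer:` extraction convention are assumptions for illustration only; the paper benchmarks such systems rather than prescribing any particular implementation.

```python
# Minimal sketch of a reference-based reward system of the kind
# VerifyBench evaluates. All names here are illustrative, not taken
# from the paper's code.

import re

def extract_final_answer(completion: str) -> str:
    """Pull the final answer from a model completion.
    Assumes answers are marked with 'Answer:'; real verifiers handle
    many formats (boxed LaTeX, multiple choice, numeric tolerance, ...)."""
    match = re.search(r"Answer:\s*(.+)", completion)
    return match.group(1).strip() if match else completion.strip()

def reference_based_reward(completion: str, reference: str) -> float:
    """Binary verifiable reward: 1.0 if the extracted answer matches the
    reference answer, else 0.0. Rule-based string matching is the simplest
    variant; LLM-as-judge verifiers are another common one."""
    predicted = extract_final_answer(completion)
    return 1.0 if predicted.lower() == reference.lower() else 0.0

# During RL training (e.g., a PPO/GRPO-style loop), this reward would be
# assigned to each sampled completion:
reward = reference_based_reward("...step-by-step reasoning... Answer: 42", "42")
print(reward)  # 1.0
```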

Community

Paper author · Paper submitter

We are happy to introduce VerifyBench, a benchmark designed to evaluate reference-based reward systems in the context of reinforcement learning training for reasoning models.
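As a rough illustration of what such a benchmark measures, the hypothetical sketch below scores a verifier against human-annotated judgments. The field names and the toy exact-match verifier are assumptions, not the paper's actual protocol or data format.

```python
# Hypothetical sketch: score a reference-based verifier by how often its
# judgments agree with human annotations over (question, reference,
# completion) triples. Field names are illustrative assumptions.

def verifier_accuracy(verifier, examples):
    """`verifier(question, reference, completion) -> bool` is any
    reference-based reward system; each example carries a human-annotated
    `is_correct` label. Returns the fraction of verifier judgments that
    agree with the human annotation."""
    agree = sum(
        verifier(ex["question"], ex["reference"], ex["completion"])
        == ex["is_correct"]
        for ex in examples
    )
    return agree / len(examples)

# Toy verifier and toy data, purely for demonstration:
exact_match = lambda q, ref, comp: ref.strip() in comp
examples = [
    {"question": "2+2?", "reference": "4",
     "completion": "The answer is 4", "is_correct": True},
    {"question": "3*3?", "reference": "9",
     "completion": "The answer is 6", "is_correct": False},
]
print(verifier_accuracy(exact_match, examples))  # 1.0 — agrees with both labels
```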

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper 3

Datasets citing this paper 1

Spaces citing this paper 0


Collections including this paper 0
