

\n","updatedAt":"2025-05-30T01:35:03.933Z","author":{"_id":"63d3e0e8ff1384ce6c5dd17d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg","fullname":"Librarian Bot (Bot)","name":"librarian-bot","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":264}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.690976083278656},"editors":["librarian-bot"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/1674830754237-63d3e0e8ff1384ce6c5dd17d.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2505.22232","authors":[{"_id":"683815574d9866c160e88670","name":"Mehdi Ali","hidden":false},{"_id":"683815574d9866c160e88671","user":{"_id":"62fa1d95e8c9c532aa75331c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/62fa1d95e8c9c532aa75331c/WFfk_n8gOj845pSkfdazA.jpeg","isPro":false,"fullname":"Manuel Brack","user":"mbrack","type":"user"},"name":"Manuel Brack","status":"claimed_verified","statusLastChangedAt":"2025-05-29T09:40:14.826Z","hidden":false},{"_id":"683815574d9866c160e88672","name":"Max Lübbering","hidden":false},{"_id":"683815574d9866c160e88673","user":{"_id":"6310ebaf631a69165c076b8d","avatarUrl":"/avatars/6ed432656913bbd77162187d158830f0.svg","isPro":false,"fullname":"Elias Wendt","user":"eliaswendt","type":"user"},"name":"Elias Wendt","status":"claimed_verified","statusLastChangedAt":"2025-06-03T08:50:36.473Z","hidden":false},{"_id":"683815574d9866c160e88674","name":"Abbas Goher Khan","hidden":false},{"_id":"683815574d9866c160e88675","name":"Richard Rutmann","hidden":false},{"_id":"683815574d9866c160e88676","name":"Alex Jude","hidden":false},{"_id":"683815574d9866c160e88677","user":{"_id":"6399acd4074f7c531d57cdc1","avatarUrl":"/avatars/83e89dda95e2139f95492eee0da2e471.svg","isPro":false,"fullname":"Maurice Kraus","user":"mkrausio","type":"user"},"name":"Maurice Kraus","status":"claimed_verified","statusLastChangedAt":"2025-06-02T07:48:02.697Z","hidden":false},{"_id":"683815574d9866c160e88678","name":"Alexander Arno Weber","hidden":false},{"_id":"683815574d9866c160e88679","name":"Felix Stollenwerk","hidden":false},{"_id":"683815574d9866c160e8867a","name":"David Kaczér","hidden":false},{"_id":"683815574d9866c160e8867b","name":"Florian Mai","hidden":false},{"_id":"683815574d9866c160e8867c","name":"Lucie Flek","hidden":false},{"_id":"683815574d9866c160e8867d","name":"Rafet Sifa","hidden":false},{"_id":"683815574d9866c160e8867e","name":"Nicolas Flores-Herr","hidden":false},{"_id":"683815574d9866c160e8867f","name":"Joachim Köhler","hidden":false},{"_id":"683815574d9866c160e88680","name":"Patrick Schramowski","hidden":false},{"_id":"683815574d9866c160e88681","user":{"_id":"64243ad773f771f6630871e4","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64243ad773f771f6630871e4/Ds4g_5RyBKBDWAC32BqsH.jpeg","isPro":false,"fullname":"Michael Fromm","user":"mfromm","type":"user"},"name":"Michael Fromm","status":"claimed_verified","statusLastChangedAt":"2025-06-05T10:00:21.242Z","hidden":false},{"_id":"683815574d9866c160e88682","name":"Kristian Kersting","hidden":false}],"publishedAt":"2025-05-28T11:06:54.000Z","submittedOnDailyAt":"2025-05-29T07:38:07.888Z","title":"Judging Quality Across Languages: A Multilingual Approach to Pretraining\n Data Filtering with Language 
Models","submittedOnDailyBy":{"_id":"62fa1d95e8c9c532aa75331c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/62fa1d95e8c9c532aa75331c/WFfk_n8gOj845pSkfdazA.jpeg","isPro":false,"fullname":"Manuel Brack","user":"mbrack","type":"user"},"summary":"High-quality multilingual training data is essential for effectively\npretraining large language models (LLMs). Yet, the availability of suitable\nopen-source multilingual datasets remains limited. Existing state-of-the-art\ndatasets mostly rely on heuristic filtering methods, restricting both their\ncross-lingual transferability and scalability. Here, we introduce JQL, a\nsystematic approach that efficiently curates diverse and high-quality\nmultilingual data at scale while significantly reducing computational demands.\nJQL distills LLMs' annotation capabilities into lightweight annotators based on\npretrained multilingual embeddings. These models exhibit robust multilingual\nand cross-lingual performance, even for languages and scripts unseen during\ntraining. Evaluated empirically across 35 languages, the resulting annotation\npipeline substantially outperforms current heuristic filtering methods like\nFineweb2. JQL notably enhances downstream model training quality and increases\ndata retention rates. Our research provides practical insights and valuable\nresources for multilingual data curation, raising the standards of multilingual\ndataset development.","upvotes":18,"discussionId":"683815594d9866c160e88708","projectPage":"https://huggingface.co/spaces/JQL-AI/JQL","githubRepo":"https://github.com/JQL-AI/JQL-Annotation-Pipeline","ai_summary":"JQL systematically curates high-quality multilingual training data using pretrained multilingual embeddings, outperforming heuristic methods and improving downstream model training across diverse languages.","ai_keywords":["pretraining","large language models","multilingual datasets","heuristic filtering methods","JQL","lightweight annotators","multilingual embeddings","cross-lingual transferability","annotation pipeline","data retention rates","multilingual data curation"],"githubStars":5},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"62fa1d95e8c9c532aa75331c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/62fa1d95e8c9c532aa75331c/WFfk_n8gOj845pSkfdazA.jpeg","isPro":false,"fullname":"Manuel Brack","user":"mbrack","type":"user"},{"_id":"65e713e27f7c58041f5325ff","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/65e713e27f7c58041f5325ff/gp826zvLae2jCCaeQW1Td.jpeg","isPro":false,"fullname":"Abbas Goher Khan","user":"AbasKhan","type":"user"},{"_id":"6310ebaf631a69165c076b8d","avatarUrl":"/avatars/6ed432656913bbd77162187d158830f0.svg","isPro":false,"fullname":"Elias Wendt","user":"eliaswendt","type":"user"},{"_id":"633441e1e5e52bbbf5f9b88f","avatarUrl":"/avatars/55c1ff770fd6768c3004442c4535d957.svg","isPro":false,"fullname":"Dominik","user":"D0miH","type":"user"},{"_id":"63d7fc0b07cd1aa3c49de905","avatarUrl":"/avatars/9212b6d0ed781dca0bfcf58377356bc1.svg","isPro":false,"fullname":"remunds","user":"remunds","type":"user"},{"_id":"64930e7f0ab1e556ca60596f","avatarUrl":"/avatars/c10e5a47ae643775a8ab8fe4efcee347.svg","isPro":false,"fullname":"Jonas Seng","user":"J0nasSeng","type":"user"},{"_id":"630c80ae4ca0a22768b583f6","avatarUrl":"/avatars/db73177a87842f730b80ef61a34ab58c.svg","isPro":false,"fullname":"Steven 
Braun","user":"steven-braun","type":"user"},{"_id":"62d9b2e5cfed764363b3145f","avatarUrl":"/avatars/b9f44d3fee8caa8888ca40280dbe8828.svg","isPro":false,"fullname":"Antonia Wüst","user":"toniwuest","type":"user"},{"_id":"67e5721b169edeab9a5cd781","avatarUrl":"/avatars/521cbfdd3691f7f02132339aaf1d32e9.svg","isPro":false,"fullname":"S","user":"sebawastaken","type":"user"},{"_id":"62e7dd4036a8e8a82700041c","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/62e7dd4036a8e8a82700041c/Dgk9mXYLVd4LpiNLWjn-q.jpeg","isPro":false,"fullname":"Felix Friedrich","user":"felfri","type":"user"},{"_id":"65aaf3125c84d7d5e7f4232a","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/65aaf3125c84d7d5e7f4232a/3U74uYNM54wZ7S77jbQV3.jpeg","isPro":false,"fullname":"Max Lue","user":"max-lue","type":"user"},{"_id":"6324631a371b3b625ed7ce6d","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6324631a371b3b625ed7ce6d/bb8zgYANPScc-FCqeVWZo.jpeg","isPro":false,"fullname":"Moritz Willig","user":"MoWi","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">
arxiv:2505.22232

Judging Quality Across Languages: A Multilingual Approach to Pretraining Data Filtering with Language Models

Published on May 28, 2025
Submitted by Manuel Brack on May 29, 2025
Authors: Mehdi Ali, Manuel Brack, Max Lübbering, Elias Wendt, Abbas Goher Khan, Richard Rutmann, Alex Jude, Maurice Kraus, Alexander Arno Weber, Felix Stollenwerk, David Kaczér, Florian Mai, Lucie Flek, Rafet Sifa, Nicolas Flores-Herr, Joachim Köhler, Patrick Schramowski, Michael Fromm, Kristian Kersting
Abstract

High-quality multilingual training data is essential for effectively pretraining large language models (LLMs). Yet, the availability of suitable open-source multilingual datasets remains limited. Existing state-of-the-art datasets mostly rely on heuristic filtering methods, restricting both their cross-lingual transferability and scalability. Here, we introduce JQL, a systematic approach that efficiently curates diverse and high-quality multilingual data at scale while significantly reducing computational demands. JQL distills LLMs' annotation capabilities into lightweight annotators based on pretrained multilingual embeddings. These models exhibit robust multilingual and cross-lingual performance, even for languages and scripts unseen during training. Evaluated empirically across 35 languages, the resulting annotation pipeline substantially outperforms current heuristic filtering methods like Fineweb2. JQL notably enhances downstream model training quality and increases data retention rates. Our research provides practical insights and valuable resources for multilingual data curation, raising the standards of multilingual dataset development.

AI-generated summary

JQL systematically curates high-quality multilingual training data using pretrained multilingual embeddings, outperforming heuristic methods and improving downstream model training across diverse languages.
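As a concrete illustration of the approach the abstract describes, the sketch below distills an LLM judge's document-quality scores into a lightweight regression head on top of frozen multilingual sentence embeddings, then filters a raw corpus by predicted score. This is a minimal approximation and not the authors' released pipeline: the encoder choice, the 0-5 score scale, the threshold, and the toy data are all assumptions made for the example.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.neural_network import MLPRegressor

# Step 1: a strong LLM judge has scored a small set of documents for
# quality; these scores are the distillation targets (hypothetical data).
docs = [
    "Ein ausführlicher, gut belegter Artikel über Photosynthese ...",
    "BUY CHEAP pills click here click here click here",
    "Recette de cuisine : une explication claire, étape par étape ...",
]
llm_scores = np.array([4.5, 0.2, 3.8])  # assumed 0-5 quality scale

# Step 2: embed documents with a frozen pretrained multilingual encoder;
# the lightweight head is trained on top of these fixed embeddings.
encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
X = encoder.encode(docs)

# Step 3: distill the LLM's judgments into a small regression head.
head = MLPRegressor(hidden_layer_sizes=(128,), max_iter=1000, random_state=0)
head.fit(X, llm_scores)

# Step 4: filter a raw corpus, keeping documents whose predicted
# quality clears a retention threshold (assumed value).
def filter_corpus(corpus, threshold=2.5):
    scores = head.predict(encoder.encode(corpus))
    return [doc for doc, s in zip(corpus, scores) if s >= threshold]
```

Because the head sits on a multilingual embedding space, it can score documents in languages it never saw labeled examples for, which is the property the paper evaluates across 35 languages; the single threshold here stands in for whatever retention policy a production pipeline would use.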

Community

Paper author · Paper submitter

JQL systematically curates high-quality multilingual training data using pretrained multilingual embeddings, outperforming heuristic methods and improving downstream model training across diverse languages.

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space.

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend


Models citing this paper: 1

Datasets citing this paper: 6

Spaces citing this paper: 1

Collections including this paper: 1
