Judging Quality Across Languages: A Multilingual Approach to Pretraining Data Filtering with Language Models
Abstract
JQL systematically curates high-quality multilingual training data using pretrained multilingual embeddings, outperforming heuristic methods and improving downstream model training across diverse languages.
High-quality multilingual training data is essential for effectively pretraining large language models (LLMs). Yet, the availability of suitable open-source multilingual datasets remains limited. Existing state-of-the-art datasets mostly rely on heuristic filtering methods, restricting both their cross-lingual transferability and scalability. Here, we introduce JQL, a systematic approach that efficiently curates diverse and high-quality multilingual data at scale while significantly reducing computational demands. JQL distills LLMs' annotation capabilities into lightweight annotators based on pretrained multilingual embeddings. These models exhibit robust multilingual and cross-lingual performance, even for languages and scripts unseen during training. Evaluated empirically across 35 languages, the resulting annotation pipeline substantially outperforms current heuristic filtering methods like Fineweb2. JQL notably enhances downstream model training quality and increases data retention rates. Our research provides practical insights and valuable resources for multilingual data curation, raising the standards of multilingual dataset development.
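The core idea described above — distilling an LLM's quality judgments into a lightweight annotator that scores documents via frozen multilingual embeddings, then filtering by score — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `embed` function here is a deterministic hash-based stand-in for a real pretrained multilingual embedding model, and the head's weights are random rather than trained on LLM annotations.

```python
import hashlib
import numpy as np

EMB_DIM = 64  # stand-in for the dimensionality of a real multilingual embedding model


def embed(text: str, dim: int = EMB_DIM) -> np.ndarray:
    """Placeholder embedding: a deterministic pseudo-random unit vector per document.
    In the JQL setup this would be a frozen pretrained multilingual embedding."""
    seed = int(hashlib.md5(text.encode("utf-8")).hexdigest(), 16) % (2**32)
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)


class QualityAnnotator:
    """Lightweight regression head on top of frozen embeddings.
    The weights here are random; in practice they would be distilled from
    LLM-produced quality annotations on a labeled sample of documents."""

    def __init__(self, dim: int = EMB_DIM, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w = rng.standard_normal(dim) / np.sqrt(dim)
        self.b = 0.0

    def score(self, text: str) -> float:
        # Higher score = higher predicted document quality.
        return float(self.w @ embed(text) + self.b)


def filter_corpus(docs: list[str], annotator: QualityAnnotator, threshold: float = 0.0) -> list[str]:
    """Keep only documents whose predicted quality meets the threshold."""
    return [d for d in docs if annotator.score(d) >= threshold]


docs = [
    "Ein ausführlicher, gut geschriebener Artikel über Sprachmodelle.",
    "buy now buy now buy now",
    "Un texte informatif sur le filtrage de données multilingues.",
]
annotator = QualityAnnotator()
kept = filter_corpus(docs, annotator, threshold=0.0)
print(f"kept {len(kept)} of {len(docs)} documents")
```

Because the embedding model is frozen and the head is a small regression layer, scoring a document costs one forward pass through the head — which is what makes this kind of annotator cheap enough to run over web-scale multilingual corpora.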
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Cross-Lingual Pitfalls: Automatic Probing Cross-Lingual Weakness of Multilingual Large Language Models (2025)
- Multilingual Prompt Engineering in Large Language Models: A Survey Across NLP Tasks (2025)
- From Unaligned to Aligned: Scaling Multilingual LLMs with Multi-Way Parallel Corpora (2025)
- Kaleidoscope: In-language Exams for Massively Multilingual Vision Evaluation (2025)
- M-Prometheus: A Suite of Open Multilingual LLM Judges (2025)
- RetrieveAll: A Multilingual Named Entity Recognition Framework with Large Language Models (2025)
- Cross-Lingual Representation Alignment Through Contrastive Image-Caption Tuning (2025)