    Researchers from Tsinghua and ModelBest Release Ultra-FineWeb: A Trillion-Token Dataset Enhancing LLM Accuracy Across Benchmarks

    May 15, 2025

    The quality of the data used to pretrain LLMs has become increasingly critical to their success. To build information-rich corpora, researchers have moved from heuristic filtering methods, such as rule-based noise removal and deduplication, to model-driven filtering, which leverages neural classifiers to identify high-quality samples. Despite its benefits, this approach still faces key issues: it lacks efficient validation mechanisms to assess data quality promptly, and it often relies on manually curated seed datasets that introduce subjectivity. While early datasets like C4 and the Pile laid the groundwork for model development, recent efforts like RefinedWeb, Dolma, and DCLM have scaled significantly, incorporating up to trillions of tokens. Model-driven filtering has gained traction in these newer corpora for its ability to refine massive datasets and enhance LLM performance across downstream tasks.
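    To make the heuristic baseline concrete, here is a minimal Python sketch of rule-based noise removal plus exact deduplication. The length and character-ratio thresholds are illustrative assumptions rather than the settings of any particular corpus, and large-scale pipelines typically use fuzzy deduplication (e.g., MinHash) rather than exact hashing.

        import hashlib
        import re

        def rule_based_clean(text):
            """Drop obviously noisy documents using simple heuristics.
            Thresholds here are illustrative, not from any published pipeline."""
            text = text.strip()
            if len(text) < 200:                      # too short to be informative
                return None
            alpha_ratio = sum(c.isalpha() for c in text) / len(text)
            if alpha_ratio < 0.6:                    # mostly symbols or markup residue
                return None
            if re.search(r"(.)\1{20,}", text):       # long runs of one repeated character
                return None
            return text

        def deduplicate(docs):
            """Exact deduplication by content hash; web-scale corpora usually
            rely on fuzzy methods such as MinHash instead."""
            seen, unique = set(), []
            for doc in docs:
                digest = hashlib.sha1(doc.encode("utf-8")).hexdigest()
                if digest not in seen:
                    seen.add(digest)
                    unique.append(doc)
            return unique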

    Nevertheless, the effectiveness of model-driven filtering is limited by the high costs and inefficiencies of current validation methods and the absence of clear standards for seed data selection. Recent datasets, such as FineWeb-edu and Ultra-FineWeb, have demonstrated improved model performance by using multiple classifiers to cross-verify data quality. These datasets outperform previous versions on benchmarks like MMLU, ARC, and C-Eval, indicating that refined filtering methods can enhance English and Chinese understanding. To further optimize this process, some studies propose using LLMs for multi-dimensional data evaluation via prompts or leveraging token-level perplexity scores. These innovations aim to lower computational overhead while improving data quality, ultimately enabling more effective training with fewer tokens. 
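    As a hedged illustration of the perplexity-based idea, the sketch below scores documents with a small off-the-shelf causal language model via Hugging Face transformers. The scorer model ("gpt2") and the cutoff of 80 are placeholders chosen for the example, not choices made by any of the studies cited above.

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        # Placeholder scorer model; the cited studies do not prescribe this choice.
        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        scorer = AutoModelForCausalLM.from_pretrained("gpt2").eval()

        def perplexity(text):
            """Average token-level perplexity; lower generally means more natural text."""
            inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
            with torch.no_grad():
                loss = scorer(**inputs, labels=inputs["input_ids"]).loss
            return torch.exp(loss).item()

        docs = ["A sample web document about machine learning methods ...",
                "%%## random markup noise ##%%"]
        kept = [d for d in docs if perplexity(d) < 80.0]   # illustrative threshold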

    Researchers from ModelBest Inc., Tsinghua University, and Soochow University developed an efficient data filtering pipeline to improve LLM training. They introduced a verification strategy that uses a nearly-trained LLM to evaluate new data by observing performance gains during final training steps, reducing computational costs. A lightweight fastText-based classifier further enhances filtering speed and accuracy. Applied to FineWeb and Chinese FineWeb datasets, this method produced the Ultra-FineWeb dataset, containing 1 trillion English and 120 billion Chinese tokens. LLMs trained on Ultra-FineWeb showed notable performance gains, confirming the pipeline’s effectiveness in improving data quality and training efficiency. 
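    A rough sketch of that verification loop is given below, under the assumption that a nearly-trained checkpoint and a benchmark harness are available. The helpers continue_training and evaluate_benchmarks are hypothetical placeholders for whatever training and evaluation stack is actually used; the sketch shows only the control flow, not the paper's implementation.

        def continue_training(checkpoint, candidate_docs=None):
            """Hypothetical: run the final few training steps from `checkpoint`,
            optionally mixing `candidate_docs` into the data, and return the
            resulting checkpoint. Placeholder body so the sketch executes."""
            return checkpoint

        def evaluate_benchmarks(checkpoint):
            """Hypothetical: return an average benchmark score for `checkpoint`."""
            return 0.0

        def candidate_data_helps(checkpoint, candidate_docs):
            """Accept the candidate pool only if mixing it into the final training
            steps beats a continuation without it."""
            baseline = evaluate_benchmarks(continue_training(checkpoint))
            with_data = evaluate_benchmarks(continue_training(checkpoint, candidate_docs))
            return with_data > baseline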

    The study outlines an efficient, high-quality data filtering pipeline to reduce computational costs while maintaining data integrity. It begins by using a cost-effective verification strategy to select reliable seed samples from a candidate pool, which are then used to train a data classifier. Positive seeds are sourced from LLM annotations, curated datasets, textbooks, and synthesized content, while negatives come from diverse corpora. Classifier training avoids over-iteration, focusing instead on high-quality seed selection. A fastText-based classifier is used for scalable filtering, offering competitive performance at significantly lower inference costs compared to LLM-based methods, with preprocessing steps ensuring balanced, clean data input. 
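    A minimal sketch of that last stage follows, assuming positive and negative seed documents have already been collected; the seed examples, file name, labels, and hyperparameters are illustrative, and it uses the open-source fasttext package as the kind of lightweight supervised classifier the pipeline describes.

        import fasttext

        positive_seeds = ["A clear, well-structured explanation of gradient descent ..."]
        negative_seeds = ["buy cheap followers now click here !!!"]

        # fastText expects one training example per line, prefixed with a label.
        with open("seeds.train", "w", encoding="utf-8") as f:
            for text in positive_seeds:        # e.g. curated, textbook-like samples
                f.write("__label__hq " + text.replace("\n", " ") + "\n")
            for text in negative_seeds:        # diverse lower-quality web text
                f.write("__label__lq " + text.replace("\n", " ") + "\n")

        # Illustrative hyperparameters; the paper's settings may differ.
        classifier = fasttext.train_supervised("seeds.train", epoch=5, lr=0.5, wordNgrams=2)

        def quality_score(text):
            """Probability that a document is high quality, per the classifier."""
            labels, probs = classifier.predict(text.replace("\n", " "), k=2)
            return dict(zip(labels, probs)).get("__label__hq", 0.0)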

    The models were trained using Megatron-LM with the MiniCPM-1.2B architecture on 100B tokens. Evaluations used Lighteval across English and Chinese benchmarks. The results show that models trained on Ultra-FineWeb consistently outperformed those trained on FineWeb and FineWeb-edu, individually and in mixed-language settings. Ultra-FineWeb-en achieved the highest English average score, while Ultra-FineWeb-zh improved performance on Chinese tasks. Ablation studies revealed that Ultra-FineWeb maintains balanced token lengths and benefits from efficient filtering strategies, highlighting its superior quality and effectiveness in improving model performance.

    In conclusion, the study presents Ultra-FineWeb, a high-quality multilingual dataset comprising about 1 trillion English tokens and 120 billion Chinese tokens. Built upon FineWeb and Chinese FineWeb, it leverages a novel, efficient data filtering pipeline featuring a fastText-based lightweight classifier and a low-cost verification strategy. The pipeline enhances filtering accuracy, reduces reliance on manual seed data selection, and ensures robust performance with minimal computational overhead. Experimental results show that models trained on Ultra-FineWeb consistently outperform those trained on earlier datasets, demonstrating improved performance across benchmarks. The methodology ensures reproducibility and offers valuable insights for optimizing data quality in future LLM training. 


    Check out the Paper and Dataset. All credit for this research goes to the researchers of this project. This article originally appeared on MarkTechPost.
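    For readers who want to inspect the released corpus, a minimal sketch using the Hugging Face datasets library is below. The dataset ID ("openbmb/Ultra-FineWeb") and the split name are assumptions, so check the official release page for the exact identifiers before relying on them.

        from datasets import load_dataset

        # Dataset ID and split name are assumptions; consult the official release.
        stream = load_dataset("openbmb/Ultra-FineWeb", split="en", streaming=True)

        first = next(iter(stream))
        print(list(first.keys()))   # inspect the schema before building a data loader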
