
ReVisual-R1: An Open-Source 7B Multimodal Large Language Model (MLLM) that Achieves Long, Accurate and Thoughtful Reasoning

    June 19, 2025

    The Challenge of Multimodal Reasoning

Recent breakthroughs in text-based language models, such as DeepSeek-R1, have demonstrated that reinforcement learning (RL) can help develop strong reasoning skills. Motivated by this, researchers have attempted to apply the same RL techniques to multimodal large language models (MLLMs) to enhance their ability to reason across both visual and textual inputs. However, these attempts have not been entirely successful: MLLMs still struggle with complex reasoning tasks. This suggests that simply reusing RL strategies from text-only models may not work well in multimodal settings, where the interaction between different data types introduces new challenges that require more tailored approaches.

    Evolution of Multimodal Language Models

Recent research on MLLMs builds on the progress of LLMs by combining visual inputs with language understanding. Early models such as CLIP and MiniGPT-4 laid the groundwork, followed by instruction-tuned models like LLaVA. While closed-source models demonstrate strong reasoning through lengthy chain-of-thought (CoT) outputs, open-source models have focused primarily on fine-tuning and CoT adaptations, which often yield brief answers that limit in-depth rationale. RL, including techniques such as reinforcement learning from human feedback (RLHF) and Group Relative Policy Optimization (GRPO), has shown promise for enhancing reasoning in LLMs. Inspired by this, recent work aims to apply RL to MLLMs to improve visual reasoning and support richer, longer outputs.

    Introduction of ReVisual-R1

    Researchers from Tsinghua University, Shanghai Jiao Tong University, and the Shanghai Artificial Intelligence Laboratory have introduced ReVisual-R1, a 7B-parameter open-source MLLM that sets a new standard in multimodal reasoning. Their study reveals three key insights: (1) Careful text-only pretraining provides a strong cold-start, outperforming many existing MLLMs even before RL; (2) The commonly used GRPO algorithm suffers from gradient stagnation, which they address with a novel method called Prioritized Advantage Distillation (PAD); and (3) Adding a final text-only RL phase after multimodal RL further enhances reasoning. Their three-stage approach, which includes text pretraining, multimodal RL, and final text RL, strikes an effective balance between visual grounding and deep cognitive reasoning. 
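The gradient-stagnation problem has a simple source: GRPO normalizes each reward against its sampled group, so when every response in a group earns the same reward, all advantages collapse to zero and that prompt contributes no gradient. The minimal sketch below illustrates this, together with a prioritized-sampling step in the spirit of PAD; since the paper's exact PAD formulation is not reproduced here, the |advantage|-weighted sampling is an illustrative assumption.

```python
import numpy as np

def grpo_advantages(rewards, eps=1e-6):
    """Group-relative advantages as in GRPO: each reward is
    normalized against its group's mean and std."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

# Gradient stagnation: a group with uniform rewards yields all-zero
# advantages, so this prompt produces no policy-gradient signal.
print(grpo_advantages([1.0, 1.0, 1.0, 1.0]))  # [0. 0. 0. 0.]
print(grpo_advantages([1.0, 0.0, 1.0, 0.0]))  # informative signal

def pad_select(advantages, k, rng=None):
    """Hypothetical PAD-style step: sample k responses with
    probability proportional to |advantage|, so informative samples
    dominate the update (the paper's exact weighting may differ)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    a = np.abs(np.asarray(advantages, dtype=float)) + 1e-8
    return rng.choice(len(a), size=k, replace=False, p=a / a.sum())
```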

    Developing the GRAMMAR Dataset

The researchers developed the GRAMMAR dataset after observing that existing multimodal cold-start datasets lack the depth needed to train strong reasoning models. Text-only datasets, like DeepMath, showed better gains on both text and multimodal tasks, suggesting that textual complexity better stimulates reasoning. To address this, GRAMMAR combines diverse textual and multimodal samples through a multi-stage curation process. This data fuels the Staged Reinforcement Optimization (SRO) framework: a multimodal RL phase, enhanced by Prioritized Advantage Distillation to avoid stalled learning and by an efficient-length reward that curbs verbosity, followed by a text-only RL phase to boost reasoning and language fluency.
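As a rough illustration of how a length-aware reward can curb verbosity, the sketch below adds a penalty once a correct answer overruns a token budget; the actual form of the paper's efficient-length reward is not given here, so the budget and the linear decay are assumptions.

```python
def length_shaped_reward(correct: bool, n_tokens: int,
                         budget: int = 2048, penalty: float = 0.5) -> float:
    """Hypothetical efficient-length reward: full credit for a correct
    answer within the budget, linearly decaying credit for overlong
    ones. ReVisual-R1's actual reward may differ."""
    base = 1.0 if correct else 0.0
    if n_tokens <= budget:
        return base
    overflow = (n_tokens - budget) / budget  # fraction over budget
    return base - penalty * min(overflow, 1.0)

# A correct but rambling answer earns less than a concise one.
assert length_shaped_reward(True, 1500) == 1.0
assert length_shaped_reward(True, 4096) == 0.5
```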

    Three-Stage Training Pipeline

The experiments for ReVisual-R1 followed a structured three-stage training process: starting with pure text data to build a language foundation, then incorporating multimodal reinforcement learning for visual-text reasoning, and finally fine-tuning with text-only RL to refine reasoning and fluency. The model was evaluated across various benchmarks and outperformed both open-source and some commercial models on multimodal and math reasoning tasks, achieving top results on nine out of ten benchmarks. Ablation studies confirmed the importance of the training order and of the Prioritized Advantage Distillation method, which focused learning on high-quality responses and produced a significant improvement in overall performance.
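The staged curriculum can be summarized as a simple pipeline. The sketch below captures the ordering described above; the stage helpers and dataset names are placeholder assumptions, not the authors' training code.

```python
# Stage order follows the paper's description; everything else
# (names, the train_stage stub) is an illustrative placeholder.
PIPELINE = [
    {"stage": "text_cold_start", "data": "text_reasoning_corpus"},  # 1: text-only foundation
    {"stage": "multimodal_rl",   "data": "grammar_multimodal"},     # 2: GRPO + PAD + length reward
    {"stage": "text_rl_refine",  "data": "grammar_text"},           # 3: text-only RL polish
]

def train_stage(model, stage, data):
    # Placeholder: a real pipeline would run SFT or RL updates here
    # and return the updated checkpoint.
    print(f"running {stage} on {data}")
    return model

def run_pipeline(model):
    # Each stage resumes from the checkpoint of the previous one.
    for cfg in PIPELINE:
        model = train_stage(model, **cfg)
    return model
```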

    Summary and Contributions

In conclusion, ReVisual-R1 is a 7B open-source MLLM built to tackle the challenges of complex multimodal reasoning. Instead of relying solely on scale, it uses a well-designed three-stage training process: high-quality text data to build foundational reasoning, a multimodal RL phase stabilized by the new PAD technique, and a final text-based RL refinement. This curriculum significantly boosts performance, and ReVisual-R1 sets a new benchmark among 7B models, excelling on tasks such as MathVerse and AIME. The work highlights how structured training can unlock deeper reasoning in MLLMs.


Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.

This article originally appeared on MarkTechPost.