    Why Apple’s Critique of AI Reasoning Is Premature

    June 22, 2025

The debate around the reasoning capabilities of Large Reasoning Models (LRMs) has recently been reinvigorated by two prominent yet conflicting papers: Apple’s “Illusion of Thinking” and Anthropic’s rebuttal, “The Illusion of the Illusion of Thinking”. Apple’s paper claims fundamental limits in LRMs’ reasoning abilities, while Anthropic argues these claims stem from evaluation shortcomings rather than model failures.

Apple’s study systematically tested LRMs in controlled puzzle environments and observed an “accuracy collapse” beyond specific complexity thresholds. Models such as Claude 3.7 Sonnet and DeepSeek-R1 reportedly failed to solve puzzles like Tower of Hanoi and River Crossing as complexity increased, and even exhibited reduced reasoning effort (token usage) at higher complexities. Apple identified three distinct complexity regimes: standard LLMs outperform LRMs at low complexity, LRMs excel at medium complexity, and both collapse at high complexity. Critically, Apple concluded that these limitations stem from LRMs’ inability to apply exact computation and consistent algorithmic reasoning across puzzles.
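
To see how quickly these puzzles outgrow any realistic output budget, consider the Tower of Hanoi: a minimal solution for n disks requires 2^n − 1 moves. The sketch below does the arithmetic; the tokens-per-move figure is an illustrative assumption, not a number from either paper.

```python
# Rough illustration of how fast enumerated Tower of Hanoi solutions grow.
# TOKENS_PER_MOVE is an assumed figure for illustration only.
TOKENS_PER_MOVE = 10

for n in (5, 10, 15, 20):
    moves = 2**n - 1  # minimal move count for n disks
    print(f"n={n:2d}: {moves:>9,} moves ≈ {moves * TOKENS_PER_MOVE:>10,} tokens")
```

Already at n = 15 the enumerated transcript alone runs to hundreds of thousands of tokens, which foreshadows the token-budget issue Anthropic raises below.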

    Anthropic, however, sharply challenges Apple’s conclusions, identifying critical flaws in the experimental design rather than the models themselves. They highlight three major issues:

1. Token Limitations vs. Logical Failures: Anthropic emphasizes that the failures observed in Apple’s Tower of Hanoi experiments were primarily due to output token limits rather than reasoning deficits. Models explicitly noted their token constraints and deliberately truncated their outputs. What appeared as “reasoning collapse” was thus a practical limitation, not a cognitive failure.
2. Misclassification of Reasoning Breakdown: Anthropic finds that Apple’s automated evaluation framework misinterpreted these intentional truncations as reasoning failures. The rigid scoring method made no allowance for a model’s awareness of, and decisions about, its own output length, and so penalized LRMs unjustly.
3. Unsolvable Problems Misinterpreted: Perhaps most significantly, Anthropic demonstrates that some of Apple’s River Crossing benchmarks were mathematically impossible to solve (e.g., cases with six or more actor–agent pairs and a boat capacity of three). Scoring these unsolvable instances as failures drastically skewed the results, making models appear incapable of solving puzzles that had no solution at all (a brute-force solvability check is sketched after this list).
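
Anthropic’s third point can be verified mechanically. Below is a minimal brute-force solvability check for a River Crossing variant with n actor–agent pairs, assuming the standard “jealous agents” safety rule (an actor may never be with another agent unless their own agent is present); the exact constraints in Apple’s benchmark may differ slightly.

```python
from collections import deque
from itertools import combinations

def safe(group):
    """A set of people is safe if no actor is with a rival agent
    while their own agent is absent (the 'jealous agents' rule)."""
    actors = {i for kind, i in group if kind == "actor"}
    agents = {i for kind, i in group if kind == "agent"}
    return all(i in agents or not (agents - {i}) for i in actors)

def solvable(n, boat_capacity):
    """Breadth-first search over bank configurations; returns True
    iff all n actor-agent pairs can cross the river."""
    people = frozenset((kind, i) for kind in ("actor", "agent") for i in range(n))
    start = (people, "left")              # everyone on the left bank, boat on left
    seen, frontier = {start}, deque([start])
    while frontier:
        left, boat = frontier.popleft()
        if not left:                      # left bank empty: everyone has crossed
            return True
        bank = left if boat == "left" else people - left
        for k in range(1, boat_capacity + 1):
            for movers in combinations(bank, k):
                movers = frozenset(movers)
                new_left = left - movers if boat == "left" else left | movers
                # the boat group and both resulting banks must all stay safe
                if not (safe(movers) and safe(new_left) and safe(people - new_left)):
                    continue
                state = (new_left, "right" if boat == "left" else "left")
                if state not in seen:
                    seen.add(state)
                    frontier.append(state)
    return False

print(solvable(3, 2))   # True  -- the classic three-pair puzzle is solvable
print(solvable(6, 3))   # False -- the class of instance Anthropic flags as impossible
```

Running it confirms that three pairs with a capacity-two boat is solvable, while six pairs with a capacity-three boat admits no solution at all.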

    Anthropic further tested an alternative representation method—asking models to provide concise solutions (like Lua functions)—and found high accuracy even on complex puzzles previously labeled as failures. This outcome clearly indicates the issue was with evaluation methods rather than reasoning capabilities.
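
To make the idea concrete, a “concise solution” to Tower of Hanoi is just a short program, and grading it only requires executing it and replaying the moves it emits. The sketch below is a minimal Python analogue of the Lua-function format Anthropic describes; the validator interface is an assumption for illustration, not Anthropic’s actual harness.

```python
def hanoi(n, src=0, aux=1, dst=2):
    """A constant-size answer: the full move list as a recursive program."""
    if n == 0:
        return []
    return hanoi(n - 1, src, dst, aux) + [(src, dst)] + hanoi(n - 1, aux, src, dst)

def is_valid_solution(n, moves):
    """Replay the moves, checking legality, and confirm the puzzle ends solved."""
    pegs = [list(range(n, 0, -1)), [], []]     # peg 0 holds disks n..1, largest first
    for src, dst in moves:
        if not pegs[src]:
            return False                       # no disk to move
        disk = pegs[src].pop()
        if pegs[dst] and pegs[dst][-1] < disk:
            return False                       # cannot place on a smaller disk
        pegs[dst].append(disk)
    return pegs[2] == list(range(n, 0, -1))    # everything on the target peg

# A six-line function stands in for the 32,767 enumerated moves at n = 15.
print(is_valid_solution(15, hanoi(15)))        # True
```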

    Another key point raised by Anthropic pertains to the complexity metric used by Apple—compositional depth (number of required moves). They argue this metric conflates mechanical execution with genuine cognitive difficulty. For example, while Tower of Hanoi puzzles require exponentially more moves, each decision step is trivial, whereas puzzles like River Crossing involve fewer steps but significantly higher cognitive complexity due to constraint satisfaction and search requirements.
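
The Hanoi half of that contrast is easy to demonstrate: despite its exponentially long solutions, the puzzle can be solved by a fixed two-rule procedure with no search whatsoever. The sketch below implements the well-known iterative algorithm, where each step is either “move the smallest disk one peg in a fixed direction” or “make the only legal move that does not involve the smallest disk.”

```python
def iterative_hanoi(n):
    """Optimal Tower of Hanoi with zero search: alternate moving the
    smallest disk cyclically with the single legal move that does not
    involve it. Long, but every step is mechanically determined."""
    pegs = [list(range(n, 0, -1)), [], []]
    step = 1 if n % 2 == 0 else 2   # smallest-disk direction depends on parity
    moves, small = [], 0            # 'small' tracks the peg holding disk 1
    for t in range(2**n - 1):
        if t % 2 == 0:              # rule 1: shift the smallest disk
            dst = (small + step) % 3
            pegs[dst].append(pegs[small].pop())
            moves.append((small, dst))
            small = dst
        else:                       # rule 2: the only legal non-smallest move
            a, b = (p for p in range(3) if p != small)
            if not pegs[a] or (pegs[b] and pegs[b][-1] < pegs[a][-1]):
                a, b = b, a         # the move only works in the other direction
            pegs[b].append(pegs[a].pop())
            moves.append((a, b))
    return moves

print(len(iterative_hanoi(10)))     # 1023 moves, produced without any search
```

River Crossing has the opposite profile: solutions are short, but finding them requires genuine search over the state space, as the BFS sketch above illustrates.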

    Both papers significantly contribute to understanding LRMs, but the tension between their findings underscores a critical gap in current AI evaluation practices. Apple’s conclusion—that LRMs inherently lack robust, generalizable reasoning—is substantially weakened by Anthropic’s critique. Instead, Anthropic’s findings suggest LRMs are constrained by their testing environments and evaluation frameworks rather than their intrinsic reasoning capacities.

    Given these insights, future research and practical evaluations of LRMs must:

    • Differentiate Clearly Between Reasoning and Practical Constraints: Tests should accommodate the practical realities of token limits and model decision-making.
    • Validate Problem Solvability: Ensuring puzzles or problems tested are solvable is essential for fair evaluation.
    • Refine Complexity Metrics: Metrics must reflect genuine cognitive challenges, not merely the volume of mechanical execution steps.
    • Explore Diverse Solution Formats: Assessing LRMs’ capabilities across various solution representations can better reveal their underlying reasoning strengths.

Ultimately, Apple’s claim that LRMs “can’t really reason” appears premature. Anthropic’s rebuttal demonstrates that LRMs indeed possess sophisticated reasoning capabilities that can handle substantial cognitive tasks when evaluated correctly. However, the exchange also stresses the importance of careful, nuanced evaluation methods to truly understand the capabilities—and limitations—of emerging AI models.


Check out the Apple Paper and Anthropic Paper. All credit for this research goes to the researchers of this project.

    The post Why Apple’s Critique of AI Reasoning Is Premature appeared first on MarkTechPost.
