    OpenAI Releases HealthBench: An Open-Source Benchmark for Measuring the Performance and Safety of Large Language Models in Healthcare

    May 13, 2025

    OpenAI has released HealthBench, an open-source evaluation framework designed to measure the performance and safety of large language models (LLMs) in realistic healthcare scenarios. Developed in collaboration with 262 physicians across 60 countries and 26 medical specialties, HealthBench addresses the limitations of existing benchmarks by focusing on real-world applicability, expert validation, and diagnostic coverage.

    Addressing Benchmarking Gaps in Healthcare AI

    Existing benchmarks for healthcare AI typically rely on narrow, structured formats such as multiple-choice exams. While useful for initial assessments, these formats fail to capture the complexity and nuance of real-world clinical interactions. HealthBench shifts toward a more representative evaluation paradigm, incorporating 5,000 multi-turn conversations between models and either lay users or healthcare professionals. Each conversation ends with a user prompt, and model responses are assessed using example-specific rubrics written by physicians.

    Each rubric consists of clearly defined criteria, both positive and negative, with associated point values. These criteria capture behavioral attributes such as clinical accuracy, communication clarity, completeness, and instruction adherence. Across the benchmark, responses are scored against more than 48,000 unique criteria, with grading handled by a model-based grader validated against expert judgment.
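    To make the grading scheme concrete, the sketch below shows one plausible way rubric-based scoring could be computed: each criterion carries a point value, a grader decides whether the response meets it, and earned points are normalized by the maximum achievable positive points. The data model and aggregation here are assumptions for illustration; HealthBench's exact scoring code may differ.

```python
# Minimal sketch of rubric-based scoring (hypothetical data model; the exact
# aggregation used by HealthBench may differ).
from dataclasses import dataclass


@dataclass
class Criterion:
    description: str  # physician-written behavioral criterion
    points: int       # positive for desired behavior, negative for undesirable behavior
    met: bool         # judged by a model-based grader validated against physicians


def score_response(criteria: list[Criterion]) -> float:
    """Score one response: earned points over maximum positive points, clipped to [0, 1]."""
    max_points = sum(c.points for c in criteria if c.points > 0)
    if max_points == 0:
        return 0.0
    earned = sum(c.points for c in criteria if c.met)
    return max(0.0, min(1.0, earned / max_points))


# Example: a clinically accurate response that omits an emergency-care referral.
rubric = [
    Criterion("States correct first-line management", points=5, met=True),
    Criterion("Advises seeking emergency care", points=5, met=False),
    Criterion("Recommends a harmful intervention", points=-6, met=False),
]
print(score_response(rubric))  # 0.5
```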

    Benchmark Structure and Design

    HealthBench organizes its evaluation across seven key themes: emergency referrals, global health, health data tasks, context-seeking, expertise-tailored communication, response depth, and responding under uncertainty. Each theme represents a distinct real-world challenge in medical decision-making and user interaction.

    In addition to the standard benchmark, OpenAI introduces two variants:

    • HealthBench Consensus: A subset emphasizing 34 physician-validated criteria, designed to reflect critical aspects of model behavior such as advising emergency care or seeking additional context.
    • HealthBench Hard: A more difficult subset of 1,000 conversations selected for their ability to challenge current frontier models.

    These components allow for detailed stratification of model behavior by both conversation type and evaluation axis, offering more granular insights into model capabilities and shortcomings.
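    Since results can be stratified by conversation type and theme, a natural analysis step is grouping per-example scores by theme and averaging. The snippet below is a minimal sketch of that stratification; the field names and example records are hypothetical and do not reflect the released dataset's schema.

```python
# Sketch: stratifying per-example scores by theme (field names and records are
# hypothetical; the released dataset's schema may differ).
from collections import defaultdict
from statistics import mean

examples = [
    {"theme": "emergency_referrals", "score": 0.72},
    {"theme": "context_seeking", "score": 0.41},
    {"theme": "emergency_referrals", "score": 0.65},
    {"theme": "global_health", "score": 0.58},
]

scores_by_theme = defaultdict(list)
for example in examples:
    scores_by_theme[example["theme"]].append(example["score"])

for theme, scores in sorted(scores_by_theme.items()):
    print(f"{theme:22s} mean={mean(scores):.2f} n={len(scores)}")
```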

    Evaluation of Model Performance

    OpenAI evaluated several models on HealthBench, including GPT-3.5 Turbo, GPT-4o, GPT-4.1, and the newer o3 model. Results show marked progress: GPT-3.5 achieved 16%, GPT-4o reached 32%, and o3 attained 60% overall. Notably, GPT-4.1 nano, a smaller and cost-effective model, outperformed GPT-4o while reducing inference cost by a factor of 25.

    Performance varied by theme and evaluation axis. Emergency referrals and expertise-tailored communication were areas of relative strength, while context-seeking and completeness posed greater challenges. A detailed breakdown revealed that completeness correlated most strongly with overall score, underscoring its importance in health-related tasks.
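    The correlation claim above is the kind of analysis that can be reproduced by correlating a per-axis score with the overall score across examples. The sketch below illustrates the calculation with made-up numbers, not HealthBench data.

```python
# Sketch: correlating a per-axis score (here, completeness) with overall score
# across examples, using illustrative numbers rather than HealthBench data.
import numpy as np

completeness = np.array([0.40, 0.55, 0.62, 0.70, 0.81])
overall = np.array([0.35, 0.50, 0.58, 0.69, 0.78])

r = np.corrcoef(completeness, overall)[0, 1]
print(f"Pearson r between completeness and overall score: {r:.3f}")
```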

    OpenAI also compared model outputs with physician-written responses. Unassisted physicians generally produced lower-scoring responses than models, although they could improve model-generated drafts, particularly when working with earlier model versions. These findings suggest a potential role for LLMs as collaborative tools in clinical documentation and decision support.

    Reliability and Meta-Evaluation

    HealthBench includes mechanisms to assess model consistency. The “worst-at-k” metric reports a model’s worst score across multiple runs on the same examples, quantifying how much performance can degrade from run to run. While newer models showed improved stability, variability remains an area for ongoing research.
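    As a rough illustration, a worst-at-k-style metric can be computed by scoring each example across k runs, taking the worst score per example, and averaging. The implementation below is a simplified assumption (it uses the first k runs rather than an expectation over subsets) and may not match HealthBench's exact definition.

```python
# Simplified "worst-at-k" sketch: for each example, take the worst score seen
# across k runs, then average over examples. (Assumption: HealthBench's exact
# definition, e.g. an expectation over random subsets of runs, may differ.)
from statistics import mean


def worst_at_k(scores_per_example: list[list[float]], k: int) -> float:
    """scores_per_example[i] holds scores from at least k runs on example i."""
    return mean(min(runs[:k]) for runs in scores_per_example)


runs = [
    [0.9, 0.7, 0.8],  # stable example
    [0.6, 0.2, 0.5],  # one bad run drags the worst case down
]
print(worst_at_k(runs, k=1))  # 0.75 -- average single-run score
print(worst_at_k(runs, k=3))  # 0.45 -- average worst score across 3 runs
```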

    To assess the trustworthiness of its automated grader, OpenAI conducted a meta-evaluation using over 60,000 annotated examples. GPT-4.1, used as the default grader, matched or exceeded the average performance of individual physicians in most themes, suggesting its utility as a consistent evaluator.
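    One simple way to frame such a meta-evaluation is to measure how often the automated grader's criterion-level judgments agree with physician labels. The sketch below uses a plain agreement rate on illustrative labels; OpenAI's actual meta-evaluation may use different metrics, such as F1-style scores against physician consensus.

```python
# Sketch: criterion-level agreement between an automated grader and physician
# labels (illustrative booleans, not real annotations).
def agreement_rate(grader: list[bool], physicians: list[bool]) -> float:
    assert len(grader) == len(physicians)
    return sum(g == p for g, p in zip(grader, physicians)) / len(grader)


grader_labels = [True, False, True, True, False, True]
physician_labels = [True, False, True, False, False, True]
print(f"Grader-physician agreement: {agreement_rate(grader_labels, physician_labels):.2f}")  # 0.83
```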

    Conclusion

    HealthBench represents a technically rigorous and scalable framework for assessing AI model performance in complex healthcare contexts. By combining realistic interactions, detailed rubrics, and expert validation, it offers a more nuanced picture of model behavior than existing alternatives. OpenAI has released HealthBench via the simple-evals GitHub repository, providing researchers with tools to benchmark, analyze, and improve models intended for health-related applications.


    Check out the Paper, GitHub Page, and Official Release. All credit for this research goes to the researchers of this project.
