    Foundation Models No Longer Need Prompts or Labels: EPFL Researchers Introduce a Joint Inference Framework for Fully Unsupervised Adaptation Using Fine-Tuning and In-Context Learning

    April 14, 2025

    Foundation models, often massive neural networks trained on extensive text and image data, have significantly shifted how artificial intelligence systems handle language and vision tasks. These models are not designed for a single task but generalize across a wide variety of tasks by leveraging their pretraining knowledge. Once trained, they can generate coherent responses, classify images, or solve problems without needing new task-specific training. Their scalability and reuse across domains make them a cornerstone of AI development.

    Despite their broad capabilities, a persistent issue lies in how these models are adapted for new, unseen tasks. In most scenarios, achieving strong performance requires providing them with handcrafted prompts or labeled examples that guide the model on how to behave. This process, however, introduces overhead, as crafting prompts involves trial and error, and collecting labeled examples can be expensive and time-consuming. Moreover, in real-world applications, such support data may not always be readily available, limiting the usability of foundation models in zero-shot settings.

    Several strategies have been used to bridge this gap between generality and task-specific performance. In-context learning enables models to mimic a task by including example input-output pairs during inference, while supervised fine-tuning adjusts model weights using labeled data. Another method, prompt engineering, involves crafting prompts that steer the model toward desired outputs. Though these tools have been successful in boosting performance, each relies on external support—either human input or labeled data—making them less viable in completely unsupervised settings.
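
    As a concrete illustration, in-context learning prepends labeled demonstrations to the test input at inference time. A minimal Python sketch, with invented example pairs (not drawn from any real dataset):

        # Toy few-shot prompt for in-context learning. The labeled pairs
        # are invented for illustration only.
        prompt = (
            "Review: 'Great battery life.' Sentiment: positive\n"
            "Review: 'Broke after one week.' Sentiment: negative\n"
            "Review: 'Exactly as described.' Sentiment:"
        )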

    Swiss Federal Institute of Technology Lausanne (EPFL) researchers introduced a joint inference framework that supports unsupervised adaptation. This framework enables foundation models to perform coordinated predictions over multiple inputs without requiring ground truth data or manual prompts. The research team presented two specific techniques under this framework: unsupervised fine-tuning and unsupervised in-context learning. These methods allow models, including closed-weight ones like GPT-4, to improve accuracy without external guidance.

    Unsupervised fine-tuning works by letting the model iteratively improve its predictions using only its own feedback. It formulates an optimization objective in which predictions for a batch of inputs are generated together and their joint probability is maximized. The method uses LoRA (Low-Rank Adaptation) for efficient weight updates and introduces a regularization step to avoid trivial solutions, such as predicting the same answer for every input. For situations where weight access is unavailable, such as with GPT-4, the researchers developed unsupervised in-context learning. This method mimics the effect of labeled in-context learning (ICL) by using previously generated outputs as pseudo-labels, refining predictions over multiple iterations without human annotations. Each iteration conditions the model on prior examples to produce a more accurate answer, simulating a supervised learning loop through self-generated data.
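
    To make the fine-tuning objective concrete, here is a minimal PyTorch sketch. It assumes a classification-style task where the model emits per-label logits for a batch of unlabeled inputs; the paper's exact formulation is not reproduced, only the general shape of a confidence term plus an anti-collapse regularizer:

        import torch
        import torch.nn.functional as F

        def joint_inference_loss(logits: torch.Tensor, reg_weight: float = 1.0) -> torch.Tensor:
            # logits: (batch, num_labels) scores for a batch of unlabeled inputs.
            log_probs = F.log_softmax(logits, dim=-1)
            probs = log_probs.exp()
            # Confidence term: expected log-probability of the model's own
            # (soft) predictions, averaged over the batch.
            confidence = (probs.detach() * log_probs).sum(dim=-1).mean()
            # Regularizer: entropy of the batch-level marginal label
            # distribution; maximizing it blocks the trivial solution of
            # predicting the same answer for every input.
            marginal = probs.mean(dim=0)
            marginal_entropy = -(marginal * (marginal + 1e-8).log()).sum()
            # Minimize the negative of both terms.
            return -(confidence + reg_weight * marginal_entropy)

    In practice, gradients would flow only through LoRA adapter weights (for example via the `peft` library), keeping updates cheap even for large models.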

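    For the closed-weight setting, the pseudo-label loop can be sketched as follows. Here `query_model` is a hypothetical callable wrapping a completion API (it takes a prompt string and returns the model's answer); the loop bootstraps demonstrations from the model's own earlier outputs:

        def unsupervised_icl(query_model, inputs, num_rounds=3):
            # First pass: plain zero-shot answers become initial pseudo-labels.
            pseudo_labels = {x: query_model(f"Q: {x}\nA:") for x in inputs}
            for _ in range(num_rounds):
                for x in inputs:
                    # Condition on the other inputs paired with their current
                    # pseudo-labels, then re-answer the held-out input.
                    demos = "\n".join(
                        f"Q: {q}\nA: {a}"
                        for q, a in pseudo_labels.items() if q != x
                    )
                    pseudo_labels[x] = query_model(f"{demos}\nQ: {x}\nA:")
            return pseudo_labels
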
    The performance improvements from these unsupervised methods were substantial. On the GSM8K dataset, designed for math reasoning, unsupervised ICL applied to the Qwen2.5-Math model achieved a 39.2% absolute improvement over the standard zero-shot baseline. Similarly, for the Llama-3.1-8B model tested across 13 natural language processing tasks, unsupervised fine-tuning delivered a 23% average gain in accuracy. It matched the performance of fully supervised fine-tuning in 6 out of the 13 tasks. In vision-language tasks, unsupervised ICL also demonstrated strong results—showing a 23% gain on the Food101 dataset and significant improvements across other benchmarks. The research even extended to GPT-4o, a closed-weight model, where a 3% improvement was observed on ImageNet, reinforcing the framework’s versatility.

    This work reveals a meaningful shift in how foundation models can adapt. The researchers successfully addressed the core limitation—reliance on labeled data and manual configuration—by introducing a robust and scalable self-supervised strategy. Their joint inference framework is a practical, generalizable approach that redefines the boundaries of unsupervised learning for large-scale AI models.


    Check out the Paper and Project. All credit for this research goes to the researchers of this project.

    Source: MarkTechPost