
    Implementing an LLM Agent with Tool Access Using MCP-Use

    May 13, 2025

    MCP-Use is an open-source library that lets you connect any LLM to any MCP server, giving your agents tool access like web browsing, file operations, and more — all without relying on closed-source clients. In this tutorial, we’ll use langchain-groq and MCP-Use’s built-in conversation memory to build a simple chatbot that can interact with tools via MCP. 

    Step 1: Setting Up the Environment

    Installing uv package manager

    We will first set up our environment, starting with the uv package manager. For Mac or Linux:

    curl -LsSf https://astral.sh/uv/install.sh | sh 

    For Windows (PowerShell):

    powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

    Creating a new directory and activating a virtual environment

    We will then create a new project directory and initialize it with uv:

    uv init mcp-use-demo
    cd mcp-use-demo

    We can now create and activate a virtual environment. For Mac or Linux:

    uv venv
    source .venv/bin/activate

    For Windows:

    uv venv
    .venv\Scripts\activate

    Installing Python dependencies

    We will now install the required dependencies:

    uv add mcp-use langchain-groq python-dotenv

    Step 2: Setting Up the Environment Variables

    Groq API Key

    To use Groq’s LLMs:

    1. Visit Groq Console and generate an API key.
    2. Create a .env file in your project directory and add the following line:
    GROQ_API_KEY=<YOUR_API_KEY>

     Replace <YOUR_API_KEY> with the key you just generated.

    Brave Search API Key

    This tutorial uses the Brave Search MCP Server.

    1. Get your Brave Search API key from: Brave Search API
    2. Create a file named mcp.json in the project root with the following content:
    {
      "mcpServers": {
        "brave-search": {
          "command": "npx",
          "args": [
            "-y",
            "@modelcontextprotocol/server-brave-search"
          ],
          "env": {
            "BRAVE_API_KEY": "<YOUR_BRAVE_SEARCH_API>"
          }
        }
      }
    }

    Replace <YOUR_BRAVE_SEARCH_API> with your actual Brave API key.
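    Because a malformed mcp.json only surfaces as an error when the agent starts, it can help to sanity-check the config's shape first. Here is a minimal sketch using only the standard library; the validate_mcp_config helper is hypothetical, not part of MCP-Use:

    ```python
    import json

    def validate_mcp_config(config: dict) -> list[str]:
        """Return the configured server names, raising ValueError on a bad shape."""
        servers = config.get("mcpServers")
        if not isinstance(servers, dict) or not servers:
            raise ValueError("mcp.json must define a non-empty 'mcpServers' object")
        for name, spec in servers.items():
            if "command" not in spec:
                raise ValueError(f"server '{name}' is missing a 'command' field")
        return list(servers)

    # The same JSON shape as mcp.json, inlined here for illustration
    raw = '{"mcpServers": {"brave-search": {"command": "npx", "args": ["-y", "@modelcontextprotocol/server-brave-search"]}}}'
    print(validate_mcp_config(json.loads(raw)))  # → ['brave-search']
    ```

    In a real project you would load the dict with json.load(open("mcp.json")) instead of the inline string.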

    Node JS

    Some MCP servers (including Brave Search) require npx, which comes with Node.js.

    • Download the latest version of Node.js from nodejs.org.
    • Run the installer.
    • Leave all settings as default and complete the installation.

    Using other servers

    If you’d like to use a different MCP server, simply replace the contents of mcp.json with the configuration for that server.
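    For example, a configuration for the Model Context Protocol reference filesystem server might look like the following (the package name comes from the modelcontextprotocol reference servers; the directory path is a placeholder you would replace with one on your machine):

    ```json
    {
      "mcpServers": {
        "filesystem": {
          "command": "npx",
          "args": [
            "-y",
            "@modelcontextprotocol/server-filesystem",
            "/path/to/allowed/dir"
          ]
        }
      }
    }
    ```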

    Step 3: Implementing the chatbot and integrating the MCP server

    Create an app.py file in the directory and add the following content:

    Importing the libraries

    from dotenv import load_dotenv
    from langchain_groq import ChatGroq
    from mcp_use import MCPAgent, MCPClient
    import os
    import sys
    import warnings
    
    warnings.filterwarnings("ignore", category=ResourceWarning)

    This section loads environment variables and imports required modules for LangChain, MCP-Use, and Groq. It also suppresses ResourceWarning for cleaner output.

    Setting up the chatbot

    async def run_chatbot():
        """Run an interactive chat using MCPAgent's built-in conversation memory."""
        load_dotenv()
        if not os.getenv("GROQ_API_KEY"):
            raise SystemExit("GROQ_API_KEY is not set; add it to your .env file")
    
        config_file = "mcp.json"
        print("Starting chatbot...")
    
        # Create the MCP client and the LLM instance
        client = MCPClient.from_config_file(config_file)
        llm = ChatGroq(model="llama-3.1-8b-instant")
    
        # Create an agent with conversation memory enabled
        agent = MCPAgent(
            llm=llm,
            client=client,
            max_steps=15,
            memory_enabled=True,
            verbose=False
        )

    This section loads the Groq API key from the .env file and initializes the MCP client using the configuration provided in mcp.json. It then sets up the LangChain Groq LLM and creates a memory-enabled agent to handle conversations.

    Implementing the chatbot

    # Add this inside the run_chatbot function
        print("\n----- Interactive MCP Chat -----")
        print("Type 'exit' or 'quit' to end the conversation")
        print("Type 'clear' to clear conversation history")
    
        try:
            while True:
                user_input = input("\nYou: ")
    
                if user_input.lower() in ["exit", "quit"]:
                    print("Ending conversation...")
                    break
    
                if user_input.lower() == "clear":
                    agent.clear_conversation_history()
                    print("Conversation history cleared...")
                    continue
    
                print("\nAssistant: ", end="", flush=True)
    
                try:
                    response = await agent.run(user_input)
                    print(response)
    
                except Exception as e:
                    print(f"\nError: {e}")
    
        finally:
            # Close all MCP sessions cleanly, even if the loop exits with an error
            if client and client.sessions:
                await client.close_all_sessions()

    This section enables interactive chatting, allowing the user to input queries and receive responses from the assistant. It also supports clearing the chat history when requested. The assistant’s responses are displayed in real-time, and the code ensures that all MCP sessions are closed cleanly when the conversation ends or is interrupted.
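    The loop's command handling can be isolated and tested on its own, independent of any LLM or MCP server. A hypothetical sketch of that dispatch logic (handle_command is illustrative only, not part of MCP-Use):

    ```python
    def handle_command(user_input: str, history: list) -> str:
        """Classify a line of user input for the chat loop (illustrative only)."""
        cmd = user_input.strip().lower()
        if cmd in ("exit", "quit"):
            return "exit"           # end the conversation
        if cmd == "clear":
            history.clear()         # mirrors agent.clear_conversation_history()
            return "cleared"
        history.append(user_input)  # anything else is sent to the agent
        return "query"

    history = []
    print(handle_command("What is MCP?", history))  # → query
    print(handle_command("clear", history))         # → cleared
    print(handle_command("QUIT", history))          # → exit
    ```

    Factoring the dispatch out this way also makes it easy to add new commands later without touching the async agent code.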

    Running the app

    if __name__ == "__main__":
        import asyncio
        try:
            asyncio.run(run_chatbot())
        except KeyboardInterrupt:
            print("Session interrupted. Goodbye!")
        finally:
            # Redirect stderr to suppress noisy teardown warnings from async cleanup
            sys.stderr = open(os.devnull, "w")

    This section runs the asynchronous chatbot loop, managing continuous interaction with the user. It also handles keyboard interruptions gracefully, ensuring the program exits without errors when the user terminates the session.

    You can find the entire code here.

    Step 4: Running the app

    To run the app, run the following command

    uv run app.py

    This starts the app; you can now chat with the assistant, and the agent can call the configured MCP server for the duration of the session.

    The post Implementing an LLM Agent with Tool Access Using MCP-Use appeared first on MarkTechPost.
