
    Building a REACT-Style Agent Using Fireworks AI with LangChain that Fetches Data, Generates BigQuery SQL, and Maintains Conversational Memory

    May 2, 2025

    In this tutorial, we will explore how to leverage the capabilities of Fireworks AI for building intelligent, tool-enabled agents with LangChain. Starting from installing the langchain-fireworks package and configuring your Fireworks API key, we’ll set up a ChatFireworks LLM instance, powered by the high-performance llama-v3-70b-instruct model, and integrate it with LangChain’s agent framework. Along the way, we’ll define custom tools such as a URL fetcher for scraping webpage text and an SQL generator for converting plain-language requirements into executable BigQuery queries. By the end, we’ll have a fully functional REACT-style agent that can dynamically invoke tools, maintain conversational memory, and deliver sophisticated, end-to-end workflows powered by Fireworks AI.

    !pip install -qU langchain langchain-fireworks requests beautifulsoup4

    We bootstrap the environment by installing all the required Python packages, including langchain, its Fireworks integration, and common utilities such as requests and beautifulsoup4. This ensures that we have the latest versions of all necessary components to run the rest of the notebook seamlessly.

    import requests
    from bs4 import BeautifulSoup
    from langchain.tools import BaseTool
    from langchain.agents import initialize_agent, AgentType
    from langchain_fireworks import ChatFireworks
    from langchain.chains import LLMChain
    from langchain.prompts import PromptTemplate
    from langchain.memory import ConversationBufferMemory
    import getpass
    import os

    We bring in all the necessary imports: HTTP clients (requests, BeautifulSoup), the LangChain agent framework (BaseTool, initialize_agent, AgentType), the Fireworks-powered LLM (ChatFireworks), plus prompt and memory utilities (LLMChain, PromptTemplate, ConversationBufferMemory), as well as standard modules for secure input and environment management.

    os.environ["FIREWORKS_API_KEY"] = getpass("🚀 Enter your Fireworks API key: ")

    Next, we securely prompt for the Fireworks API key via getpass.getpass and store it in the environment. This ensures that subsequent calls to the ChatFireworks model are authenticated without exposing the key in plain text.

    llm = ChatFireworks(
        model="accounts/fireworks/models/llama-v3-70b-instruct",
        temperature=0.6,
        max_tokens=1024,
        stop=["\n\n"]
    )
    

    We demonstrate how to instantiate a ChatFireworks LLM configured for instruction-following, utilizing llama-v3-70b-instruct, a moderate temperature, and a token limit, allowing you to immediately start issuing prompts to the model.

    prompt = [
        {"role": "system", "content": "You are an expert data-scientist assistant."},
        {"role": "user", "content": "Analyze the sentiment of this review:\n\n"
                                    "\"The new movie was breathtaking, but a bit too long.\""}
    ]
    resp = llm.invoke(prompt)
    print("Sentiment Analysis →", resp.content)

    Next, we demonstrate a simple sentiment-analysis example: we build a structured prompt as a list of role-annotated messages, call llm.invoke(), and print the model’s sentiment interpretation of the provided movie review.

    template = """
    You are a data-science assistant. Keep track of the convo:
    
    
    {history}
    User: {input}
    Assistant:"""
    
    
    prompt = PromptTemplate(input_variables=["history","input"], template=template)
    memory = ConversationBufferMemory(memory_key="history")
    
    
    chain = LLMChain(llm=llm, prompt=prompt, memory=memory)
    
    
    print(chain.run(input="Hey, what can you do?"))
    print(chain.run(input="Analyze: 'The product arrived late, but support was helpful.'"))
    print(chain.run(input="Based on that, would you recommend the service?"))

    We illustrate how to add conversational memory, which involves defining a prompt template that incorporates past exchanges, setting up a ConversationBufferMemory, and chaining everything together with LLMChain. Running a few sample inputs shows how the model retains context across turns.
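
    If you want to confirm what the chain is actually remembering, a minimal check (assuming the memory object defined above) is to print the buffer that ConversationBufferMemory accumulates between calls:

    # Inspect the running transcript stored by ConversationBufferMemory
    # (this is the text injected into the {history} slot of the prompt).
    print(memory.buffer)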

    class FetchURLTool(BaseTool):
        name: str = "fetch_url"
        description: str = "Fetch the main text (first few paragraphs) from a webpage."
    
        def _run(self, url: str) -> str:
            resp = requests.get(url, timeout=10)
            doc = BeautifulSoup(resp.text, "html.parser")
            paras = [p.get_text() for p in doc.find_all("p")][:5]
            return "\n\n".join(paras)
    
        async def _arun(self, url: str) -> str:
            raise NotImplementedError

    We define a custom FetchURLTool by subclassing BaseTool. This tool fetches the first few paragraphs from any URL using requests and BeautifulSoup, making it easy for your agent to retrieve live web content.
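
    Before wiring the tool into an agent, it can be worth exercising it directly; a quick sketch (the URL is just an example):

    # Call the tool on its own through BaseTool's public run() method
    fetcher = FetchURLTool()
    preview = fetcher.run("https://en.wikipedia.org/wiki/ChatGPT")
    print(preview[:300])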

    class GenerateSQLTool(BaseTool):
        name: str = "generate_sql"
        description: str = "Generate a BigQuery SQL query (with comments) from a text description."
    
        def _run(self, text: str) -> str:
            prompt = f"""
    -- Requirement:
    -- {text}
    
    -- Write a BigQuery SQL query (with comments) to satisfy the above.
    """
            return llm.invoke([{"role": "user", "content": prompt}]).content
    
        async def _arun(self, text: str) -> str:
            raise NotImplementedError
    
    
    tools = [FetchURLTool(), GenerateSQLTool()]
    
    agent = initialize_agent(
        tools,
        llm,
        agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
        verbose=True
    )
    
    result = agent.run(
        "Fetch https://en.wikipedia.org/wiki/ChatGPT "
        "and then generate a BigQuery SQL query that counts how many times "
        "the word 'model' appears in the page text."
    )
    
    print("\n🔍 Generated SQL:\n", result)

    Finally, GenerateSQLTool is another BaseTool subclass that wraps the LLM to transform plain-English requirements into commented BigQuery SQL. It then wires both tools into a REACT-style agent via initialize_agent, runs a combined fetch-and-generate example, and prints out the resulting SQL query.
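
    Note that this agent is stateless between run() calls; the conversational memory in this tutorial lives in the LLMChain built earlier. As a minimal sketch (assuming the tools and llm objects above), LangChain's conversational REACT agent type lets you attach a conversation buffer to the agent itself:

    # Give the agent its own conversation buffer (CONVERSATIONAL_REACT_DESCRIPTION
    # expects the memory key "chat_history")
    agent_memory = ConversationBufferMemory(memory_key="chat_history")
    chat_agent = initialize_agent(
        tools,
        llm,
        agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
        memory=agent_memory,
        verbose=True
    )
    
    chat_agent.run("Fetch https://en.wikipedia.org/wiki/ChatGPT and summarize it in one sentence.")
    chat_agent.run("Now generate a BigQuery SQL query related to what we just discussed.")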

    In conclusion, we have integrated Fireworks AI with LangChain’s modular tooling and agent ecosystem, unlocking a versatile platform for building AI applications that extend beyond simple text generation. We can extend the agent’s capabilities by adding domain-specific tools, customizing prompts, and fine-tuning memory behavior, all while leveraging Fireworks’ scalable inference engine. As next steps, explore advanced features such as function-calling, chaining multiple agents, or incorporating vector-based retrieval to craft even more dynamic and context-aware assistants.
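
    As an illustration of that extension point, here is a minimal sketch of adding one more tool to the agent; the word_count tool below is hypothetical and exists only to show the pattern:

    class WordCountTool(BaseTool):
        name: str = "word_count"
        description: str = "Count occurrences of a word in a text. Input format: '<word>|<text>'."
    
        def _run(self, query: str) -> str:
            # Hypothetical helper: split the input into the target word and the text to scan
            word, text = query.split("|", 1)
            return str(text.lower().split().count(word.strip().lower()))
    
        async def _arun(self, query: str) -> str:
            raise NotImplementedError
    
    # Rebuild the agent with the extra tool alongside the existing ones
    extended_agent = initialize_agent(
        tools + [WordCountTool()],
        llm,
        agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
        verbose=True
    )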


    Check out the Notebook here.

