
    A Coding Guide to Different Function Calling Methods to Create Real-Time, Tool-Enabled Conversational AI Agents

    April 29, 2025

    Function calling lets an LLM act as a bridge between natural-language prompts and real-world code or APIs. Instead of simply generating text, the model decides when to invoke a predefined function, emits a structured JSON call with the function name and arguments, and then waits for your application to execute that call and return the results. This back-and-forth can loop, potentially invoking multiple functions in sequence, enabling rich, multi-step interactions entirely under conversational control.

    In this tutorial, we implement a weather assistant with Gemini 2.0 Flash to demonstrate how to set up and manage that function-calling cycle, covering several variants of function calling along the way. By integrating function calls, we transform a chat interface into a dynamic tool for real-time tasks, whether fetching live weather data, checking order statuses, scheduling appointments, or updating databases. Users no longer fill out complex forms or navigate multiple screens; they simply describe what they need, and the LLM orchestrates the underlying actions seamlessly. This natural-language automation makes it easy to build AI agents that can access external data sources, perform transactions, or trigger workflows, all within a single conversation.
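Before diving into the SDK, here is a minimal, self-contained sketch of that cycle in plain Python. The payload shape, function name, and placeholder implementation are illustrative only, not the exact Gemini wire format:

```python
# Illustrative sketch of the function-calling cycle (names and structure
# are hypothetical, not the exact Gemini wire format).

# 1. Instead of plain text, the model emits a structured call:
model_call = {
    "name": "get_weather_forecast",                        # which tool to run
    "args": {"location": "Berlin", "date": "2025-03-04"},  # its arguments
}

# 2. Your application dispatches the call against real code:
def get_weather_forecast(location, date):
    # Placeholder implementation for the sketch.
    return {"2025-03-04T12:00": 7.5}

tools = {"get_weather_forecast": get_weather_forecast}
result = tools[model_call["name"]](**model_call["args"])

# 3. The result is sent back to the model, which composes the final reply
#    (or emits another call, looping until it answers in plain text).
print(result)
```

In the real SDK, step 1 arrives as a `function_call` object on a response part and step 3 is another `generate_content` call, as the sections below show.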

    Function Calling with Google Gemini 2.0 Flash

    !pip install "google-genai>=1.0.0" geopy requests

    We install the Gemini Python SDK (google-genai ≥ 1.0.0), along with geopy for converting location names to coordinates and requests for making HTTP calls, ensuring all the core dependencies for our Colab weather assistant are in place.

    import os
    from google import genai
    
    
    GEMINI_API_KEY = "Use_Your_API_Key"  
    
    
    client = genai.Client(api_key=GEMINI_API_KEY)
    
    
    model_id = "gemini-2.0-flash"

    We import the Gemini SDK, set your API key, and create a genai.Client instance configured to use the “gemini-2.0-flash” model, establishing the foundation for all subsequent function-calling requests.

    res = client.models.generate_content(
        model=model_id,
        contents=["Tell me 1 good fact about Nuremberg."]
    )
    print(res.text)

    We send a user prompt (“Tell me 1 good fact about Nuremberg.”) to the Gemini 2.0 Flash model via generate_content, then print out the model’s text reply, demonstrating a basic, end-to-end text‐generation call using the SDK.

    Function Calling with JSON Schema

    weather_function = {
        "name": "get_weather_forecast",
        "description": "Retrieves the weather from the Open-Meteo API for a given location (city) and a date (yyyy-mm-dd). Returns a dictionary with the time and temperature for each hour.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g., San Francisco, CA"
                },
                "date": {
                    "type": "string",
                    "description": "The forecast date, in yyyy-mm-dd format"
                }
            },
            "required": ["location","date"]
        }
    }
    

    Here, we define a JSON Schema for our get_weather_forecast tool, specifying its name, a descriptive prompt to guide Gemini on when to use it, and the exact input parameters (location and date) with their types, descriptions, and required fields, so the model can emit valid function calls.
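As an optional local sanity check (our own helper, not part of the Gemini SDK), you can verify that a candidate arguments dict satisfies the declared schema before executing anything:

```python
# Hypothetical helper: a minimal check of tool arguments against the
# JSON-schema-style declaration above (not part of the Gemini SDK).

weather_function = {
    "name": "get_weather_forecast",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string"},
            "date": {"type": "string"},
        },
        "required": ["location", "date"],
    },
}

def check_args(declaration, args):
    """Return (ok, message): required keys present, string fields are strings."""
    params = declaration["parameters"]
    missing = [k for k in params["required"] if k not in args]
    if missing:
        return False, f"missing required: {missing}"
    for key, spec in params["properties"].items():
        if key in args and spec["type"] == "string" and not isinstance(args[key], str):
            return False, f"{key} must be a string"
    return True, "ok"

print(check_args(weather_function, {"location": "Berlin", "date": "2025-03-04"}))  # (True, 'ok')
print(check_args(weather_function, {"location": "Berlin"}))  # (False, "missing required: ['date']")
```

A check like this is useful when you execute model-emitted calls yourself, since a malformed call would otherwise surface only as a TypeError deep inside your tool.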

    from google.genai.types import GenerateContentConfig
    
    
    config = GenerateContentConfig(
        system_instruction="You are a helpful assistant that uses tools to access and retrieve information from a weather API. Today is 2025-03-04.",
        tools=[{"function_declarations": [weather_function]}],
    )
    

    We create a GenerateContentConfig that tells Gemini it's acting as a weather-retrieval assistant and registers your weather function under tools, so the model knows how to generate structured calls when asked for forecast data.

    response = client.models.generate_content(
        model=model_id,
        contents="What's the weather in Berlin today?"
    )
    print(response.text)

    This call sends the bare prompt (“What’s the weather in Berlin today?”) without including your config (and thus no function definitions), so Gemini falls back to plain text completion, offering generic advice instead of invoking your weather‐forecast tool.

    response = client.models.generate_content(
        model=model_id,
        config=config,
        contents="What's the weather in Berlin today?"
    )
    
    
    for part in response.candidates[0].content.parts:
        print(part.function_call)

    By passing in config (which includes your JSON‐schema tool), Gemini recognizes it should call get_weather_forecast rather than reply in plain text. The loop over response.candidates[0].content.parts then prints out each part’s .function_call object, showing you exactly which function the model decided to invoke (with its name and arguments).

    from google.genai import types
    from geopy.geocoders import Nominatim
    import requests
    
    
    geolocator = Nominatim(user_agent="weather-app")
    def get_weather_forecast(location, date):
        location = geolocator.geocode(location)
        if location:
            try:
                response = requests.get(f"https://api.open-meteo.com/v1/forecast?latitude={location.latitude}&longitude={location.longitude}&hourly=temperature_2m&start_date={date}&end_date={date}")
                data = response.json()
                return {time: temp for time, temp in zip(data["hourly"]["time"], data["hourly"]["temperature_2m"])}
            except Exception as e:
                return {"error": str(e)}
        else:
            return {"error": "Location not found"}
    
    
    functions = {
        "get_weather_forecast": get_weather_forecast
    }
    
    
    def call_function(function_name, **kwargs):
        return functions[function_name](**kwargs)
    
    
    def function_call_loop(prompt):
        contents = [types.Content(role="user", parts=[types.Part(text=prompt)])]
        response = client.models.generate_content(
            model=model_id,
            config=config,
            contents=contents
        )
        for part in response.candidates[0].content.parts:
            contents.append(types.Content(role="model", parts=[part]))
            if part.function_call:
                print("Tool call detected")
                function_call = part.function_call
                print(f"Calling tool: {function_call.name} with args: {function_call.args}")
                tool_result = call_function(function_call.name, **function_call.args)
                function_response_part = types.Part.from_function_response(
                    name=function_call.name,
                    response={"result": tool_result},
                )
                contents.append(types.Content(role="user", parts=[function_response_part]))
                print(f"Calling LLM with tool results")
                func_gen_response = client.models.generate_content(
                    model=model_id, config=config, contents=contents
                )
                contents.append(func_gen_response.candidates[0].content)  # append the model's follow-up Content
        return contents[-1].parts[0].text.strip()
       
    result = function_call_loop("What's the weather in Berlin today?")
    print(result)

    We implement a full “agentic” loop: it sends your prompt to Gemini, inspects the response for a function call, executes get_weather_forecast (using Geopy plus an Open-Meteo HTTP request), and then feeds the tool’s result back into the model to produce and return the final conversational reply.

    Function Calling using Python functions

    from geopy.geocoders import Nominatim
    import requests
    
    
    geolocator = Nominatim(user_agent="weather-app")
    
    
    def get_weather_forecast(location: str, date: str) -> dict:
        """
        Retrieves the weather using the Open-Meteo API for a given location (city) and a date (yyyy-mm-dd). Returns a dictionary with the time and temperature for each hour.

        Args:
            location (str): The city and state, e.g., San Francisco, CA
            date (str): The forecast date, in yyyy-mm-dd format
        Returns:
            Dict[str, float]: A dictionary with the time as key and the temperature as value
        """
        location = geolocator.geocode(location)
        if location:
            try:
                response = requests.get(f"https://api.open-meteo.com/v1/forecast?latitude={location.latitude}&longitude={location.longitude}&hourly=temperature_2m&start_date={date}&end_date={date}")
                data = response.json()
                return {time: temp for time, temp in zip(data["hourly"]["time"], data["hourly"]["temperature_2m"])}
            except Exception as e:
                return {"error": str(e)}
        else:
            return {"error": "Location not found"}

    The get_weather_forecast function first uses Geopy’s Nominatim to convert a city-and-state string into coordinates, then sends an HTTP request to the Open-Meteo API to retrieve hourly temperature data for the given date, returning a dictionary that maps each timestamp to its corresponding temperature. It also handles errors gracefully, returning an error message if the location isn’t found or the API call fails.

    from google.genai.types import GenerateContentConfig
    
    
    config = GenerateContentConfig(
        system_instruction="You are a helpful assistant that can help with weather-related questions. Today is 2025-03-04.", # to give the LLM context on the current date.
        tools=[get_weather_forecast],
        automatic_function_calling={"disable": True}
    )
    

    This config registers your Python get_weather_forecast function as a callable tool. It sets a clear system prompt (including the date) for context, while disabling automatic function calling so that Gemini emits the function-call payload instead of invoking the function internally.

    r = client.models.generate_content(
        model=model_id,
        config=config,
        contents="What's the weather in Berlin today?"
    )
    for part in r.candidates[0].content.parts:
        print(part.function_call)

    By sending the prompt with your custom config (including the Python tool but with automatic calls disabled), this snippet captures Gemini’s raw function‐call decision. Then it loops over each response part to print out the .function_call object, letting you inspect exactly which tool the model wants to invoke and with what arguments.

    from google.genai.types import GenerateContentConfig
    
    
    config = GenerateContentConfig(
        system_instruction="You are a helpful assistant that uses tools to access and retrieve information from a weather API. Today is 2025-03-04.", # to give the LLM context on the current date.
        tools=[get_weather_forecast],
    )
    
    
    r = client.models.generate_content(
        model=model_id,
        config=config,
        contents="What's the weather in Berlin today?"
    )
    
    
    print(r.text)

    With this config (which includes your get_weather_forecast function and leaves automatic calling enabled by default), calling generate_content will have Gemini invoke your weather tool behind the scenes and then return a natural‐language reply. Printing r.text outputs that final response, including the actual temperature forecast for Berlin on the specified date.

    from google.genai.types import GenerateContentConfig
    
    
    config = GenerateContentConfig(
        system_instruction="You are a helpful assistant that uses tools to access and retrieve information from a weather API.",
        tools=[get_weather_forecast],
    )
    
    
    prompt = f"""
    Today is 2025-03-04. You are chatting with Andrew, you have access to more information about him.
    
    
    User Context:
    - name: Andrew
    - location: Nuremberg
    
    
    User: Can I wear a T-shirt later today?"""
    
    
    r = client.models.generate_content(
        model=model_id,
        config=config,
        contents=prompt
    )
    
    
    print(r.text)

    We extend the assistant with personal context, telling Gemini Andrew's name and location (Nuremberg) and asking whether it's T-shirt weather, while still using the get_weather_forecast tool under the hood. Printing r.text shows the model's natural-language recommendation based on the actual forecast for that day.

    In conclusion, we now know how to define functions (via JSON schema or Python signatures), configure Gemini 2.0 Flash to detect and emit function calls, and implement the “agentic” loop that executes those calls and composes the final response. With these building blocks, we can extend any LLM into a capable, tool-enabled assistant that automates workflows, retrieves live data, and interacts with your code or APIs as effortlessly as chatting with a colleague.


    Here is the Colab Notebook.


    The post A Coding Guide to Different Function Calling Methods to Create Real-Time, Tool-Enabled Conversational AI Agents appeared first on MarkTechPost.

