
Implement function calling with the Granite-3.0-8B-Instruct model in Python with watsonx

21 October 2024

Erika Russi

Data Scientist

IBM

Anna Gutowska

AI Engineer, Developer Advocate

IBM

Jess Bozorg

Lead, AI Advocacy

IBM

What is function calling?

In this tutorial, you will use the IBM® Granite-3.0-8B-Instruct model now available on watsonx.ai™ to perform custom function calling.

Traditional large language models (LLMs), such as the OpenAI GPT-4 (generative pre-trained transformer) model available through ChatGPT, and the IBM Granite™ models that we'll use in this tutorial, are limited in their knowledge and reasoning. They produce their responses based on the data used to train them and are difficult to adapt to personalized user queries. To obtain the missing information, these generative AI models can integrate external tools through function calling. This method is one way to avoid fine-tuning a foundation model for each specific use case. The function calling examples in this tutorial will implement external API calls.

The Granite-3.0-8B-Instruct model and tokenizer use natural language processing (NLP) to parse query syntax. In addition, the models use function descriptions and function parameters to determine the appropriate tool calls. Key information is then extracted from user queries to be passed as function arguments.

Steps

Check out this YouTube video that walks you through the following setup instructions in Steps 1 and 2.

Step 1. Set up your environment

While you can choose from several tools, this tutorial is best suited for a Jupyter Notebook. Jupyter Notebooks are widely used within data science to combine code with various data sources such as text, images and data visualizations.

This tutorial walks you through how to set up an IBM account to use a Jupyter Notebook.

  1. Log in to watsonx.ai using your IBM Cloud account.

  2. Create a watsonx.ai project.

    You can get your project ID from within your project. Click the Manage tab. Then, copy the project ID from the Details section of the General page. You need this ID for this tutorial.

  3. Create a Jupyter Notebook.

This step opens a notebook environment where you can copy the code from this tutorial. Alternatively, you can download this notebook to your local system and upload it to your watsonx.ai project as an asset. To view more Granite tutorials, check out the IBM Granite Community. This Jupyter Notebook is available on GitHub.

To avoid Python package dependency conflicts, we recommend setting up a virtual environment.

Step 2. Set up watsonx.ai Runtime service and API key

  1. Create a watsonx.ai Runtime service instance (choose the Lite plan, which is a free instance).

  2. Generate an API Key.

  3. Associate the watsonx.ai Runtime service to the project you created in watsonx.ai.

Step 3. Install and import relevant libraries and set up your credentials

We'll need a few libraries and modules for this tutorial. Make sure to import the following ones; if they're not installed, you can resolve this with a quick pip install. If you are running this tutorial locally, we recommend setting up a virtual environment to avoid Python package dependency conflicts.

# installations
!pip install transformers | tail -n 1
!pip install torch torchvision | tail -n 1
!pip install langchain-ibm | tail -n 1
!pip install python-dotenv | tail -n 1

# imports
import requests
import os
import ast
import re

from transformers import AutoTokenizer
from transformers.utils import get_json_schema
from langchain_ibm import WatsonxLLM
from dotenv import load_dotenv

load_dotenv(os.getcwd()+"/.env", override=True)

Next, we can prepare our environment by setting the model ID for the granite-3-8b-instruct model and the tokenizer for the same Granite model.

Store your private keys in a separate .env file at the same level of your directory as this notebook, or replace the placeholder text with the WATSONX_APIKEY and WATSONX_PROJECT_ID you created in steps 1 and 2.

MODEL_ID = "ibm/granite-3-8b-instruct"
TOKENIZER = AutoTokenizer.from_pretrained("ibm-granite/granite-3.0-8b-instruct")
WATSONX_URL = "https://us-south.ml.cloud.ibm.com"
WATSONX_APIKEY = os.getenv('WATSONX_APIKEY', "<YOUR_WATSONX_API_KEY_HERE>")
WATSONX_PROJECT_ID = os.getenv('PROJECT_ID', "<YOUR_WATSONX_PROJECT_ID_HERE>")

The get_stock_price function in this tutorial requires an AV_STOCK_API_KEY. To generate a free AV_STOCK_API_KEY, visit the Alpha Vantage website.

The get_current_weather function requires a WEATHER_API_KEY. To generate one, create a free account on the OpenWeather website. Upon creating an account, select the "API Keys" tab to display your free key.

AV_STOCK_API_KEY = os.getenv('AV_STOCK_API_KEY',"<AV_STOCK_API_KEY_HERE>")
WEATHER_API_KEY = os.getenv('WEATHER_API_KEY',"<WEATHER_API_KEY_HERE>")
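
For reference, the .env file read by load_dotenv might look like the following. The variable names match the os.getenv calls above; all values are placeholders to replace with your own keys:

WATSONX_APIKEY=<YOUR_WATSONX_API_KEY_HERE>
PROJECT_ID=<YOUR_WATSONX_PROJECT_ID_HERE>
AV_STOCK_API_KEY=<AV_STOCK_API_KEY_HERE>
WEATHER_API_KEY=<WEATHER_API_KEY_HERE>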

Step 4. Define the functions

We can now define our functions. In this tutorial, the get_stock_price function uses the Stock Market Data API available through Alpha Vantage.

def get_stock_price(ticker: str, date: str) -> dict:
    """
    Retrieves the lowest and highest stock prices for a given ticker and date.
    Args:
    ticker: The stock ticker symbol, e.g., "IBM".
    date: The date in "YYYY-MM-DD" format for which you want to get stock prices.
    Returns:
    A dictionary containing the low and high stock prices on the given date.
    """
    print(f"Getting stock price for {ticker} on {date}")
    try:
        stock_url = f"https://www.alphavantage.co/query?function=TIME_SERIES_DAILY&symbol={ticker}&apikey={AV_STOCK_API_KEY}"
        stock_data = requests.get(stock_url)
        stock_low = stock_data.json()["Time Series (Daily)"][date]["3. low"]
        stock_high = stock_data.json()["Time Series (Daily)"][date]["2. high"]
        return {
            "low": stock_low,
            "high": stock_high
        }
    except Exception as e:
        print(f"Error fetching stock data: {e}")
        return {
            "low": "none",
            "high": "none"
        }
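
Before wiring the function into the model, you can optionally sanity-check it on its own. A minimal test, assuming a valid AV_STOCK_API_KEY and a past trading day:

# Quick manual test of the function, independent of the model
get_stock_price("IBM", "2024-10-07")
# Expected shape: {'low': '...', 'high': '...'}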

The get_current_weather function retrieves the real-time weather in a given location using the Current Weather Data API via OpenWeather.

def get_current_weather(location: str) -> dict:
    """
    Fetches the current weather for a given location (default: San Francisco).
    Args:
    location: The name of the city for which to retrieve the weather information.
    Returns:
    A dictionary containing weather information such as temperature, weather description, and humidity.
    """
    print(f"Getting current weather for {location}")

    try:
        # API request to fetch weather data
        weather_url = f"http://api.openweathermap.org/data/2.5/weather?q={location}&appid={WEATHER_API_KEY}&units=metric"
        weather_data = requests.get(weather_url)
        data = weather_data.json()
        # Extracting relevant weather details
        weather_description = data["weather"][0]["description"]
        temperature = data["main"]["temp"]
        humidity = data["main"]["humidity"]

        # Returning weather details
        return {
        "description": weather_description,
        "temperature": temperature,
        "humidity": humidity
        }
    except Exception as e:
        print(f"Error fetching weather data: {e}")
        return {
            "description": "none",
            "temperature": "none",
            "humidity": "none"
        }
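
As with the stock function, you can optionally test this one directly, assuming a valid WEATHER_API_KEY:

# Quick manual test of the weather function
get_current_weather("San Francisco")
# Expected shape: {'description': '...', 'temperature': ..., 'humidity': ...}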

Step 5. Set up the API request

Now that our functions are defined, we can create a function that generates a watsonx API request for the provided instructions using the watsonx API endpoint. We will use this function each time we make a request.

def make_api_request(instructions: str) -> str:
    model_parameters = {
        "decoding_method": "greedy",
        "max_new_tokens": 200,
        "repetition_penalty": 1.05,
        "stop_sequences": [TOKENIZER.eos_token]
    }
    model = WatsonxLLM(
        model_id=MODEL_ID,
        url= WATSONX_URL,
        apikey=WATSONX_APIKEY,
        project_id=WATSONX_PROJECT_ID,
        params=model_parameters
    )
    response = model.invoke(instructions)
    return response
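
Note that make_api_request constructs a new WatsonxLLM client on every call. If you plan to make many requests, a reasonable refactor is to build the client once and reuse it. This is a sketch, not part of the original tutorial; the LLM and make_api_request_cached names are hypothetical:

# Hypothetical refactor: construct the WatsonxLLM client once and reuse it
# across requests, rather than rebuilding it on every call.
LLM = WatsonxLLM(
    model_id=MODEL_ID,
    url=WATSONX_URL,
    apikey=WATSONX_APIKEY,
    project_id=WATSONX_PROJECT_ID,
    params={
        "decoding_method": "greedy",
        "max_new_tokens": 200,
        "repetition_penalty": 1.05,
        "stop_sequences": [TOKENIZER.eos_token],
    },
)

def make_api_request_cached(instructions: str) -> str:
    return LLM.invoke(instructions)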

Next, we can create a list of available tools. Here, we use get_json_schema to generate each function's definition, which includes the function name, description, parameters and required properties.

tools = [get_json_schema(tool) for tool in (get_stock_price, get_current_weather)]
tools

Output:

[{'type': 'function',
    'function': {'name': 'get_stock_price',
    'description': 'Retrieves the lowest and highest stock prices for a given ticker and date.',
    'parameters': {'type': 'object',
        'properties': {'ticker': {'type': 'string',
            'description': 'The stock ticker symbol, e.g., "IBM".'},
        'date': {'type': 'string',
            'description': 'The date in "YYYY-MM-DD" format for which you want to get stock prices.'}},
        'required': ['ticker', 'date']},
    'return': {'type': 'object',
        'description': 'A dictionary containing the low and high stock prices on the given date.'}}},
{'type': 'function',
    'function': {'name': 'get_current_weather',
    'description': 'Fetches the current weather for a given location (default: San Francisco).',
    'parameters': {'type': 'object',
        'properties': {'location': {'type': 'string',
            'description': 'The name of the city for which to retrieve the weather information.'}},
            'required': ['location']},
        'return': {'type': 'object',
            'description': 'A dictionary containing weather information such as temperature, weather description, and humidity.'}}}]

Step 6. Perform function calling

Step 6a. Calling the get_stock_price function

To prepare for the API requests, we must set the query used in the tokenizer chat template.

query = "What were the IBM stock prices on October 7, 2024?"

Applying a chat template is useful for breaking up long strings of text into one or more messages with corresponding labels. This allows the LLM to process the input in a format that it expects. Because we want our output in string format, we can set the tokenize parameter to False. The add_generation_prompt parameter can be set to True to append the tokens indicating the beginning of an assistant message to the output. This will be useful when generating chat completions with the model.

conversation = [
    {"role": "system","content": "You are a helpful assistant with access to the following function calls. Your task is to produce a list of function calls necessary to generate response to the user utterance. Use the following function calls as required."},
    {"role": "user", "content": query },
]

instruction_1 = TOKENIZER.apply_chat_template(conversation=conversation, tools=tools, tokenize=False, add_generation_prompt=True)
instruction_1

Output

'<|start_of_role|>available_tools<|end_of_role|>\n{\n "type": "function",\n "function": {\n "name": "get_stock_price",\n "description": "Retrieves the lowest and highest stock prices for a given ticker and date.",\n "parameters": {\n "type": "object",\n "properties": {\n "ticker": {\n "type": "string",\n "description": "The stock ticker symbol, e.g., \\"IBM\\"."\n },\n "date": {\n "type": "string",\n "description": "The date in \\"YYYY-MM-DD\\" format for which you want to get stock prices."\n }\n },\n "required": [\n "ticker",\n "date"\n ]\n },\n "return": {\n "type": "object",\n "description": "A dictionary containing the low and high stock prices on the given date."\n }\n }\n}\n\n{\n "type": "function",\n "function": {\n "name": "get_current_weather",\n "description": "Fetches the current weather for a given location (default: San Francisco).",\n "parameters": {\n "type": "object",\n "properties": {\n "location": {\n "type": "string",\n "description": "The name of the city for which to retrieve the weather information."\n }\n },\n "required": [\n "location"\n ]\n },\n "return": {\n "type": "object",\n "description": "A dictionary containing weather information such as temperature, weather description, and humidity."\n }\n }\n}<|end_of_text|>\n<|start_of_role|>system<|end_of_role|>You are a helpful assistant with access to the following function calls. Your task is to produce a list of function calls necessary to generate response to the user utterance. Use the following function calls as required.<|end_of_text|>\n<|start_of_role|>user<|end_of_role|>What were the IBM stock prices on October 7, 2024?<|end_of_text|>\n<|start_of_role|>assistant<|end_of_role|>'

Now, we can call the make_api_request  function and pass the instructions we generated.

data_1 = make_api_request(instruction_1)
data_1

Output

"{'name': 'get_stock_price', 'arguments': {'ticker': 'IBM', 'date': '2024-10-07'}}"

As you can see by the function name in the JSON object produced by the model, the appropriate get_stock_price tool was selected from the set of functions. To run the API call, let's extract the relevant information from the output. With the function name and arguments extracted, we can call the function. To call the function using its name as a string, we can use the globals() function.

def tool_call(llm_response: str):
    # Extract the first JSON-like object from the model's response and
    # parse it into a Python dictionary
    tool_request = ast.literal_eval(re.search("({.+})", llm_response).group(0))
    tool_name = tool_request["name"]
    tool_arguments = tool_request["arguments"]
    # Look up the function by name and call it with the generated arguments
    tool_response = globals()[tool_name](**tool_arguments)
    return tool_response
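
Using globals() keeps the example compact, but it lets the model's output name any function in the module. A stricter alternative, sketched here with the hypothetical names TOOL_REGISTRY and tool_call_safe, is an explicit registry of callable tools:

# Hypothetical explicit registry: only the functions listed here can be
# called, regardless of what name appears in the model's response.
TOOL_REGISTRY = {
    "get_stock_price": get_stock_price,
    "get_current_weather": get_current_weather,
}

def tool_call_safe(llm_response: str):
    tool_request = ast.literal_eval(re.search("({.+})", llm_response).group(0))
    return TOOL_REGISTRY[tool_request["name"]](**tool_request["arguments"])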

Get the response from the requested tool.

tool_response = tool_call(data_1)
tool_response

Output

Getting stock price for IBM on 2024-10-07

    {'low': '225.0200', 'high': '227.6700'}

The function successfully retrieved the requested stock price. To generate a synthesized final response, we can pass another prompt to the Granite model along with the information collected from function calling.

conversation2 = conversation + [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Display the tool response in natural language." },
    {"role": "tool_response", "content": str(tool_response) },
]

instruction_2 = TOKENIZER.apply_chat_template(conversation=conversation2, tools=tools, tokenize=False, add_generation_prompt=True)
data_2 = make_api_request(instruction_2)
data_2

Output: 

'On October 7, 2024, the IBM stock prices ranged from a low of $225.02 to a high of $227.67.'

Step 6b. Calling the get_current_weather function

As our next query, let's inquire about the current weather in San Francisco. We can follow the same steps as in Step 6a by adjusting the query.

query = "What is the current weather in San Francisco?"

conversation = [
    {"role": "system","content": "You are a helpful assistant with access to the following function calls. Your task is to produce a list of function calls necessary to generate response to the user utterance. Use the following function calls as required."},
    {"role": "user", "content": query },
]

instruction_1 = TOKENIZER.apply_chat_template(conversation=conversation, tools=tools, tokenize=False, add_generation_prompt=True)
instruction_1

Output

'<|start_of_role|>available_tools<|end_of_role|>\n{\n "type": "function",\n "function": {\n "name": "get_stock_price",\n "description": "Retrieves the lowest and highest stock prices for a given ticker and date.",\n "parameters": {\n "type": "object",\n "properties": {\n "ticker": {\n "type": "string",\n "description": "The stock ticker symbol, e.g., \\"IBM\\"."\n },\n "date": {\n "type": "string",\n "description": "The date in \\"YYYY-MM-DD\\" format for which you want to get stock prices."\n }\n },\n "required": [\n "ticker",\n "date"\n ]\n },\n "return": {\n "type": "object",\n "description": "A dictionary containing the low and high stock prices on the given date."\n }\n }\n}\n\n{\n "type": "function",\n "function": {\n "name": "get_current_weather",\n "description": "Fetches the current weather for a given location (default: San Francisco).",\n "parameters": {\n "type": "object",\n "properties": {\n "location": {\n "type": "string",\n "description": "The name of the city for which to retrieve the weather information."\n }\n },\n "required": [\n "location"\n ]\n },\n "return": {\n "type": "object",\n "description": "A dictionary containing weather information such as temperature, weather description, and humidity."\n }\n }\n}<|end_of_text|>\n<|start_of_role|>system<|end_of_role|>You are a helpful assistant with access to the following function calls. Your task is to produce a list of function calls necessary to generate response to the user utterance. Use the following function calls as required.<|end_of_text|>\n<|start_of_role|>user<|end_of_role|>What is the current weather in San Francisco?<|end_of_text|>\n<|start_of_role|>assistant<|end_of_role|>'

data_1 = make_api_request(instruction_1)
data_1

Output:

'[{"name": "get_current_weather", "arguments": {"location": "San Francisco"}}]'

Once again, the model selects the appropriate tool, in this case get_current_weather, and extracts the location correctly. Now, let's call the function with the argument generated by the model.

tool_response = tool_call(data_1)
tool_response

Output

Getting current weather for San Francisco

    {'description': 'clear sky', 'temperature': 15.52, 'humidity': 68}

The function response correctly describes the current weather in San Francisco. Lastly, let's generate a synthesized final response with the results of this function call.

conversation2 = conversation + [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Display the tool response in natural language." },
    {"role": "tool_response", "content": str(tool_response) },
]

instruction_2 = TOKENIZER.apply_chat_template(conversation=conversation2, tools=tools, tokenize=False, add_generation_prompt=True)
data_2 = make_api_request(instruction_2)
data_2

Output:

'The current weather in San Francisco is clear with a temperature of 15.52 degrees and a humidity of 68%.'

Summary

In this tutorial, you built custom functions and used the Granite-3.0-8B-Instruct model to determine which function to call based on key information from user queries. With this information, you called each function with the arguments stated in the model response, and the function calls produced the expected output. Finally, you called the Granite-3.0-8B-Instruct model again to synthesize the information returned by the functions.

Related solutions

IBM Granite

Achieve over 90% cost savings with Granite's smaller and open models, designed for developer efficiency. These enterprise-ready models deliver exceptional performance against safety benchmarks and across a wide range of enterprise tasks from cybersecurity to RAG.

Explore Granite
Artificial intelligence solutions

Put AI to work in your business with IBM's industry-leading AI expertise and portfolio of solutions at your side.

Explore AI solutions
AI consulting and services

Reinvent critical workflows and operations by adding AI to maximize experiences, real-time decision-making and business value.

Explore AI services
Take the next step

Explore the IBM library of foundation models in the IBM watsonx portfolio to scale generative AI for your business with confidence.

Explore watsonx.ai
Explore AI solutions