Templates for the agent settings

Different models expect different input formats for chat. Therefore, you must configure the templates according to the models that you want to use for the Orchestrate agent. Chat templates specify how to convert conversations into the format that the model expects.

Chat templates for IBM Granite models

Use the template based on the IBM Granite model that you use in the Orchestrate agent. The template replaces the values of the chat_template and chat_template_params subproperties of prompt_templates.

For more information, see Configuring the settings of the Orchestrate agent.

ibm/granite-13b-chat-v2

Use the following template with the ibm/granite-13b-chat-v2 model:

prompt_templates:
    chat_template: |
        {%- if messages[0]["role"] == "system" %}
            {%- set system_message = messages[0]["content"] %}
            {%- set loop_messages = messages[1:] %}
        <|system|>
        {{system_message}}
        {%- else %}
            {%- set loop_messages = messages %}
        {%- endif %}
        {%- for message in loop_messages %}
            {%- if message["role"] == "user" or message["role"] == "human" %}
        <|user|>
        {{message.content}}
            {%- elif message["role"] == "assistant" or message["role"] == "ai" %}
        <|assistant|>
        {{message.content}}
            {%- endif %}
        {%- endfor %}
        {%- if add_generation_prompt %}
        <|assistant|>
        {%- endif %}
    chat_template_params:
        add_generation_prompt: true
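
To see what this template produces, you can render it locally with the jinja2 Python package, which implements the same Jinja syntax that the template is written in. This is an illustrative sketch only, not part of the agent configuration; the agent renders the template itself.

```python
# Render the granite-13b-chat-v2 chat template locally to inspect the prompt
# that the model receives. Requires the jinja2 package (pip install jinja2).
from jinja2 import Template

CHAT_TEMPLATE = """\
{%- if messages[0]["role"] == "system" %}
    {%- set system_message = messages[0]["content"] %}
    {%- set loop_messages = messages[1:] %}
<|system|>
{{system_message}}
{%- else %}
    {%- set loop_messages = messages %}
{%- endif %}
{%- for message in loop_messages %}
    {%- if message["role"] == "user" or message["role"] == "human" %}
<|user|>
{{message.content}}
    {%- elif message["role"] == "assistant" or message["role"] == "ai" %}
<|assistant|>
{{message.content}}
    {%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
<|assistant|>
{%- endif %}"""

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is watsonx Orchestrate?"},
]

prompt = Template(CHAT_TEMPLATE).render(
    messages=messages,
    add_generation_prompt=True,  # matches chat_template_params above
)
print(prompt)
```

The rendered prompt alternates `<|system|>`, `<|user|>`, and `<|assistant|>` turn markers, and `add_generation_prompt: true` leaves a trailing `<|assistant|>` marker so that the model continues from there.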

ibm/granite-20b-multilingual

Use the following template with the ibm/granite-20b-multilingual model:

prompt_templates:
    chat_template: |
        {%- if messages[0]["role"] == "system" %}
            {%- set system_message = messages[0]["content"] %}
            {%- set loop_messages = messages[1:] %}
        ### System: {{system_message}}

        {%- else %}
            {%- set loop_messages = messages %}
        {%- endif %}
        {%- for message in loop_messages %}
            {%- if message["role"] == "user" or message["role"] == "human" %}
        ### User: {{message.content}}

            {%- elif message["role"] == "assistant" or message["role"] == "ai" %}
        ### Assistant: {{message.content}}

            {%- endif %}
        {%- endfor %}
        {%- if add_generation_prompt %}
        ### Assistant: 
        {%- endif %}

    chat_template_params:
        add_generation_prompt: true
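
For reference, the following plain-Python function approximates the prompt shape that this template produces for a simple conversation. It is illustrative only (the function name is ours); the agent renders the Jinja template above.

```python
def granite_multilingual_prompt(messages, add_generation_prompt=True):
    """Approximate the output of the granite-20b-multilingual chat template.

    Illustrative sketch for a simple system/user/assistant history; the
    agent itself renders the Jinja template, not this function.
    """
    parts = []
    # An optional leading system message becomes a "### System:" line.
    if messages and messages[0]["role"] == "system":
        parts.append("### System: " + messages[0]["content"])
        messages = messages[1:]
    # Remaining turns become "### User:" and "### Assistant:" lines.
    for m in messages:
        if m["role"] in ("user", "human"):
            parts.append("### User: " + m["content"])
        elif m["role"] in ("assistant", "ai"):
            parts.append("### Assistant: " + m["content"])
    # add_generation_prompt leaves an open assistant turn for the model.
    if add_generation_prompt:
        parts.append("### Assistant:")
    return "\n".join(parts)

print(granite_multilingual_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Bonjour!"},
]))
```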

Chat templates for Meta Llama models

Use the template based on the Meta Llama model that you use in the Orchestrate agent. The template replaces the values of the chat_template and chat_template_params subproperties of prompt_templates.

For more information, see Configuring the settings of the Orchestrate agent.

meta-llama/llama-3-8b-instruct and meta-llama/llama-3-70b-instruct

Use the following template with the meta-llama/llama-3-8b-instruct and meta-llama/llama-3-70b-instruct models:

prompt_templates:
    chat_template: |
        {{- bos_token }}
        {%- if custom_tools is defined %}
            {%- set tools = custom_tools %}
        {%- endif %}
        {%- if not tools_in_user_message is defined %}
            {%- set tools_in_user_message = true %}
        {%- endif %}
        {%- if not date_string is defined %}
            {%- set date_string = "26 Jul 2024" %}
        {%- endif %}
        {%- if not tools is defined %}
            {%- set tools = none %}
        {%- endif %}

        {#- This block extracts the system message, so we can slot it into the right place. #}
        {%- if messages[0]['role'] == 'system' %}
            {%- set system_message = messages[0]['content']|trim %}
            {%- set messages = messages[1:] %}
        {%- else %}
            {%- set system_message = "" %}
        {%- endif %}

        {#- System message + builtin tools #}
        {{- "<|start_header_id|>system<|end_header_id|>\n\n" }}
        {%- if builtin_tools is defined or tools is not none %}
            {{- "Environment: ipython\n" }}
        {%- endif %}
        {%- if builtin_tools is defined %}
            {{- "Tools: " + builtin_tools | reject('equalto', 'code_interpreter') | join(", ") + "\n\n"}}
        {%- endif %}
        {{- "Cutting Knowledge Date: December 2023\n" }}
        {{- "Today Date: " + date_string + "\n\n" }}
        {%- if tools is not none and not tools_in_user_message %}
            {{- "You have access to the following functions. To call a function, please respond with JSON for a function call." }}
            {{- 'Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}.' }}
            {{- "Do not use variables.\n\n" }}
            {%- for t in tools %}
                {{- t | tojson(indent=4) }}
                {{- "\n\n" }}
            {%- endfor %}
        {%- endif %}
        {{- system_message }}
        {{- "<|eot_id|>" }}

        {#- Custom tools are passed in a user message with some extra guidance #}
        {%- if tools_in_user_message and not tools is none %}
            {#- Extract the first user message so we can plug it in here #}
            {%- if messages | length != 0 %}
                {%- set first_user_message = messages[0]['content']|trim %}
                {%- set messages = messages[1:] %}
            {%- else %}
                {{- raise_exception("Cannot put tools in the first user message when there's no first user message!") }}
            {%- endif %}
            {{- '<|start_header_id|>user<|end_header_id|>\n\n' -}}
            {{- "Given the following functions, please respond with a JSON for a function call " }}
            {{- "with its proper arguments that best answers the given prompt.\n\n" }}
            {{- 'Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}.' }}
            {{- "Do not use variables.\n\n" }}
            {%- for t in tools %}
                {{- t | tojson(indent=4) }}
                {{- "\n\n" }}
            {%- endfor %}
            {{- first_user_message + "<|eot_id|>"}}
        {%- endif %}

        {%- for message in messages %}
            {%- if not (message.role == 'ipython' or message.role == 'tool' or 'tool_calls' in message) %}
                {{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' }}
            {%- elif 'tool_calls' in message %}
                {%- if not message.tool_calls|length == 1 %}
                    {{- raise_exception("This model only supports single tool-calls at once!") }}
                {%- endif %}
                {%- set tool_call = message.tool_calls[0].function %}
                {%- if builtin_tools is defined and tool_call.name in builtin_tools %}
                    {{- '<|start_header_id|>assistant<|end_header_id|>\n\n' -}}
                    {{- "<|python_tag|>" + tool_call.name + ".call(" }}
                    {%- for arg_name, arg_val in tool_call.arguments | items %}
                        {{- arg_name + '="' + arg_val + '"' }}
                        {%- if not loop.last %}
                            {{- ", " }}
                        {%- endif %}
                    {%- endfor %}
                    {{- ")" }}
                {%- else %}
                    {{- '<|start_header_id|>assistant<|end_header_id|>\n\n' -}}
                    {{- '{"name": "' + tool_call.name + '", ' }}
                    {{- '"parameters": ' }}
                    {{- tool_call.arguments | tojson }}
                    {{- "}" }}
                {%- endif %}
                {%- if builtin_tools is defined %}
                    {#- This means we're in ipython mode #}
                    {{- "<|eom_id|>" }}
                {%- else %}
                    {{- "<|eot_id|>" }}
                {%- endif %}
            {%- elif message.role == "tool" or message.role == "ipython" %}
                {{- "<|start_header_id|>ipython<|end_header_id|>\n\n" }}
                {%- if message.content is mapping or message.content is iterable %}
                    {{- message.content | tojson }}
                {%- else %}
                    {{- message.content }}
                {%- endif %}
                {{- "<|eot_id|>" }}
            {%- endif %}
        {%- endfor %}
        {%- if add_generation_prompt %}
            {{- '<|start_header_id|>assistant<|end_header_id|>\n\n' }}
        {%- endif %}

    chat_template_params:
        bos_token: <|begin_of_text|>
        eos_token: <|eot_id|>
        add_generation_prompt: true
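
For a tools-free conversation, the template above collapses to a fixed header layout. The following plain-Python function (an illustrative sketch; the function name is ours) shows the prompt shape for a single system and user message:

```python
def llama3_prompt(system_message, user_message, date_string="26 Jul 2024"):
    """Illustrate the prompt shape that the Llama 3 chat template above
    produces for one system and one user message, with no tools defined.

    Sketch only: the agent renders the Jinja template, not this function.
    """
    return (
        "<|begin_of_text|>"  # bos_token from chat_template_params
        "<|start_header_id|>system<|end_header_id|>\n\n"
        "Cutting Knowledge Date: December 2023\n"
        f"Today Date: {date_string}\n\n"
        f"{system_message}<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n\n{user_message}<|eot_id|>"
        # add_generation_prompt: true appends an open assistant header
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(llama3_prompt("You are a helpful assistant.", "Hello!"))
```

Note that every turn is delimited by `<|start_header_id|>…<|end_header_id|>` and terminated by `<|eot_id|>`, which is also the configured eos_token.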

meta-llama/llama-3-1-8b-instruct, meta-llama/llama-3-1-70b-instruct, and meta-llama/llama-3-405b-instruct

Use the following template with the meta-llama/llama-3-1-8b-instruct, meta-llama/llama-3-1-70b-instruct, and meta-llama/llama-3-405b-instruct models:

prompt_templates:
    chat_template: |
        {{- bos_token }}
        {%- if custom_tools is defined %}
            {%- set tools = custom_tools %}
        {%- endif %}
        {%- if not tools_in_user_message is defined %}
            {%- set tools_in_user_message = true %}
        {%- endif %}
        {%- if not current_date is defined %}
            {%- set current_date = "26 Jul 2024" %}
        {%- endif %}
        {%- if not tools is defined %}
            {%- set tools = none %}
        {%- endif %}

        {#- This block extracts the system message, so we can slot it into the right place. #}
        {%- if messages[0]['role'] == 'system' %}
            {%- set system_message = messages[0]['content']|trim %}
            {%- set messages = messages[1:] %}
        {%- else %}
            {%- set system_message = "" %}
        {%- endif %}

        {#- System message + builtin tools #}
        {{- "<|start_header_id|>system<|end_header_id|>\n\n" }}
        {%- if builtin_tools is defined or tools is not none %}
            {{- "Environment: ipython\n" }}
        {%- endif %}
        {%- if builtin_tools is defined %}
            {{- "Tools: " + builtin_tools | reject('equalto', 'code_interpreter') | join(", ") + "\n\n"}}
        {%- endif %}
        {{- "Cutting Knowledge Date: December 2023\n" }}
        {{- "Today Date: " + current_date + "\n\n" }}
        {%- if tools is not none and not tools_in_user_message %}
            {{- "You have access to the following functions. To call a function, please respond with JSON for a function call." }}
            {{- 'Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}.' }}
            {{- "Do not use variables.\n\n" }}
            {%- for t in tools %}
                {{- t | tojson(indent=4) }}
                {{- "\n\n" }}
            {%- endfor %}
        {%- endif %}
        {{- system_message }}
        {{- "<|eot_id|>" }}

        {#- Custom tools are passed in a user message with some extra guidance #}
        {%- if tools_in_user_message and not tools is none %}
            {#- Extract the first user message so we can plug it in here #}
            {%- if messages | length != 0 %}
                {%- set first_user_message = messages[0]['content']|trim %}
                {%- set messages = messages[1:] %}
            {%- else %}
                {{- raise_exception("Cannot put tools in the first user message when there's no first user message!") }}
            {%- endif %}
            {{- '<|start_header_id|>user<|end_header_id|>\n\n' -}}
            {{- "Given the following functions, please respond with a JSON for a function call " }}
            {{- "with its proper arguments that best answers the given prompt.\n\n" }}
            {{- 'Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}.' }}
            {{- "Do not use variables.\n\n" }}
            {%- for t in tools %}
                {{- t | tojson(indent=4) }}
                {{- "\n\n" }}
            {%- endfor %}
            {{- first_user_message + "<|eot_id|>"}}
        {%- endif %}

        {%- for message in messages %}
            {%- if not (message.role == 'ipython' or message.role == 'tool' or 'tool_calls' in message) %}
                {{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' }}
            {%- elif 'tool_calls' in message %}
                {%- if not message.tool_calls|length == 1 %}
                    {{- raise_exception("This model only supports single tool-calls at once!") }}
                {%- endif %}
                {%- set tool_call = message.tool_calls[0].function %}
                {%- if builtin_tools is defined and tool_call.name in builtin_tools %}
                    {{- '<|start_header_id|>assistant<|end_header_id|>\n\n' -}}
                    {{- "<|python_tag|>" + tool_call.name + ".call(" }}
                    {%- for arg_name, arg_val in tool_call.arguments | items %}
                        {{- arg_name + '="' + arg_val + '"' }}
                        {%- if not loop.last %}
                            {{- ", " }}
                        {%- endif %}
                    {%- endfor %}
                    {{- ")" }}
                {%- else %}
                    {{- '<|start_header_id|>assistant<|end_header_id|>\n\n' -}}
                    {{- '{"name": "' + tool_call.name + '", ' }}
                    {{- '"parameters": ' }}
                    {{- tool_call.arguments | tojson }}
                    {{- "}" }}
                {%- endif %}
                {%- if builtin_tools is defined %}
                    {#- This means we're in ipython mode #}
                    {{- "<|eom_id|>" }}
                {%- else %}
                    {{- "<|eot_id|>" }}
                {%- endif %}
            {%- elif message.role == "tool" or message.role == "ipython" %}
                {{- "<|start_header_id|>ipython<|end_header_id|>\n\n" }}
                {%- if message.content is mapping or message.content is iterable %}
                    {{- message.content | tojson }}
                {%- else %}
                    {{- message.content }}
                {%- endif %}
                {{- "<|eot_id|>" }}
            {%- endif %}
        {%- endfor %}
        {%- if add_generation_prompt %}
            {{- '<|start_header_id|>assistant<|end_header_id|>\n\n' }}
        {%- endif %}

    chat_template_params:
        add_generation_prompt: true
        bos_token: <|begin_of_text|>
        current_date: 26 Jul 2024
        tools_in_user_message: false
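
When the model calls a custom (non-builtin) tool, the template serializes the assistant's tool call as a single JSON object of the form {"name": …, "parameters": …}. The following sketch mirrors that serialization for illustration (the function name is ours; the agent performs this rendering itself):

```python
import json

def render_tool_call(name, arguments):
    """Mirror how the template above serializes a non-builtin assistant
    tool call. Illustrative only; the agent renders the Jinja template."""
    # Matches the template's concatenation:
    # '{"name": "' + tool_call.name + '", "parameters": ' + tojson(args) + '}'
    return '{"name": "' + name + '", "parameters": ' + json.dumps(arguments) + "}"

call = render_tool_call("get_weather", {"city": "Paris"})
print(call)
```

The result is valid JSON, which is what the instructions embedded in the template ask the model to produce.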

Chat template for Mistral AI models

Use the template based on the Mistral AI model that you use in the Orchestrate agent. The template replaces the values of the chat_template and chat_template_params subproperties of prompt_templates.

For more information, see Configuring the settings of the Orchestrate agent.

mistralai/mistral-large and mistralai/mixtral-8x7b-instruct-v01

Use the following template with the mistralai/mistral-large and mistralai/mixtral-8x7b-instruct-v01 models:

prompt_templates:
    chat_template: |
        {%- if messages[0]["role"] == "system" %}
            {%- set system_message = messages[0]["content"] %}
            {%- set loop_messages = messages[1:] %}
        {%- else %}
            {%- set loop_messages = messages %}
        {%- endif %}
        {%- if not tools is defined %}
            {%- set tools = none %}
        {%- endif %}
        {%- set user_messages = loop_messages | selectattr("role", "equalto", "user") | list %}

        {%- for message in loop_messages | rejectattr("role", "equalto", "tool") | rejectattr("role", "equalto", "tool_results") | selectattr("tool_calls", "undefined") %}
            {%- if (message["role"] == "user") != (loop.index0 % 2 == 0) %}
                {{- raise_exception("After the optional system message, conversation roles must alternate user/assistant/user/assistant/...") }}
            {%- endif %}
        {%- endfor %}

        {{- bos_token }}
        {%- for message in loop_messages %}
            {%- if message["role"] == "user" %}
                {%- if tools is not none and (message == user_messages[-1]) %}
                    {{- "[AVAILABLE_TOOLS] [" }}
                    {%- for tool in tools %}
                        {%- set tool = tool.function %}
                        {{- '{"type": "function", "function": {' }}
                        {%- for key, val in tool.items() if key != "return" %}
                            {%- if val is string %}
                                {{- '"' + key + '": "' + val + '"' }}
                            {%- else %}
                                {{- '"' + key + '": ' + val|tojson }}
                            {%- endif %}
                            {%- if not loop.last %}
                                {{- ", " }}
                            {%- endif %}
                        {%- endfor %}
                        {{- "}}" }}
                        {%- if not loop.last %}
                            {{- ", " }}
                        {%- else %}
                            {{- "]" }}
                        {%- endif %}
                    {%- endfor %}
                    {{- "[/AVAILABLE_TOOLS]" }}
                {%- endif %}
                {%- if loop.last and system_message is defined %}
                    {{- "[INST] " + system_message + "\n\n" + message["content"] + "[/INST]" }}
                {%- else %}
                    {{- "[INST] " + message["content"] + "[/INST]" }}
                {%- endif %}
            {%- elif message["role"] == "tool_calls" or message.tool_calls is defined %}
                {%- if message.tool_calls is defined %}
                    {%- set tool_calls = message.tool_calls %}
                {%- else %}
                    {%- set tool_calls = message.content %}
                {%- endif %}
                {{- "[TOOL_CALLS] [" }}
                {%- for tool_call in tool_calls %}
                    {%- set out = tool_call.function|tojson %}
                    {{- out[:-1] }}
                    {%- if not tool_call.id is defined or tool_call.id|length != 9 %}
                        {{- raise_exception("Tool call IDs should be alphanumeric strings with length 9!") }}
                    {%- endif %}
                    {{- ', "id": "' + tool_call.id + '"}' }}
                    {%- if not loop.last %}
                        {{- ", " }}
                    {%- else %}
                        {{- "]" + eos_token }}
                    {%- endif %}
                {%- endfor %}
            {%- elif message["role"] == "assistant" %}
                {{- " " + message["content"] + eos_token}}
            {%- elif message["role"] == "tool_results" or message["role"] == "tool" %}
                {%- if message.content is defined and message.content.content is defined %}
                    {%- set content = message.content.content %}
                {%- else %}
                    {%- set content = message.content %}
                {%- endif %}
                {{- '[TOOL_RESULTS] {"content": ' + content|string + ", " }}
                {%- if not message.tool_call_id is defined or message.tool_call_id|length != 9 %}
                    {{- raise_exception("Tool call IDs should be alphanumeric strings with length 9!") }}
                {%- endif %}
                {{- '"call_id": "' + message.tool_call_id + '"}[/TOOL_RESULTS]' }}
            {%- else %}
                {{- raise_exception("Only user and assistant roles are supported, with the exception of an initial optional system message!") }}
            {%- endif %}
        {%- endfor %}

    chat_template_params:
        bos_token: <s>
        eos_token: </s>
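
For a plain, tool-free conversation, the template above wraps user turns in [INST] … [/INST] markers and folds the system message into the last message's block. The following plain-Python sketch (the function name is ours; illustrative only) shows that shape:

```python
def mistral_prompt(messages, bos_token="<s>", eos_token="</s>"):
    """Sketch the [INST] prompt shape that the Mistral chat template above
    produces for a tool-free conversation. Illustrative only; the agent
    renders the Jinja template itself."""
    system_message = None
    if messages and messages[0]["role"] == "system":
        system_message = messages[0]["content"]
        messages = messages[1:]
    out = bos_token
    for i, m in enumerate(messages):
        if m["role"] == "user":
            # The system message is folded into the last message's [INST] block.
            if i == len(messages) - 1 and system_message is not None:
                out += "[INST] " + system_message + "\n\n" + m["content"] + "[/INST]"
            else:
                out += "[INST] " + m["content"] + "[/INST]"
        elif m["role"] == "assistant":
            out += " " + m["content"] + eos_token
    return out

print(mistral_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]))
```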

System prompt template for Meta Llama models

Use the following template with the Meta Llama model that you use in the Orchestrate agent. The template replaces the value of the system_prompt subproperty of base_llm.

For more information, see Configuring the settings of the Orchestrate agent.

system_prompt: >
    You are watsonx Orchestrate, a helpful and ethical AI assistant based on an AI language model. You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
        - You answer the questions with markdown formatting using GitHub syntax when possible. 
        - The markdown formatting you support: headings, bold, italic, links, tables, lists, code blocks, and blockquotes. You must omit that you answer the questions with markdown.
        - Any HTML tags must be wrapped in block quotes, for example ```<html>```. You will be penalized for not rendering code in block quotes.
        - When returning code blocks, specify language.
        - You always respond to greetings (for example, hi, hello, g'day, morning, afternoon, evening, night, what's up, nice to meet you, sup, etc) with "Hello! I am watsonx Orchestrate, created by IBM. How can I help you today?"

        If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, admit that you can't answer.  Don't share false information!

        The current time is {current_time}.
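
The {current_time} placeholder is filled in at request time by the agent. The following sketch shows an equivalent substitution with Python's str.format; the prompt string here is an abbreviated, hypothetical stand-in for the full system prompt above:

```python
from datetime import datetime

# Illustrative substitution of the {current_time} placeholder; the agent
# performs this replacement itself when it builds the system prompt.
system_prompt = "You are watsonx Orchestrate. The current time is {current_time}."
filled = system_prompt.format(current_time=datetime.now().strftime("%H:%M:%S"))
print(filled)
```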

System prompt template for Mistral AI models

Use the following template with the Mistral AI model that you use in the Orchestrate agent. The template replaces the value of the system_prompt subproperty of base_llm.

For more information, see Configuring the settings of the Orchestrate agent.

system_prompt: >
    You are watsonx Orchestrate, a helpful and ethical AI assistant based on an AI language model. You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
        - You answer the questions with markdown formatting using GitHub syntax when possible. 
        - The markdown formatting you support: headings, bold, italic, links, tables, lists, code blocks, and blockquotes. You must omit that you answer the questions with markdown.
        - Any HTML tags must be wrapped in block quotes, for example ```<html>```. You will be penalized for not rendering code in block quotes.
        - When returning code blocks, specify language.
        - You always respond to greetings (for example, hi, hello, g'day, morning, afternoon, evening, night, what's up, nice to meet you, sup, etc) with "Hello! I am watsonx Orchestrate, created by IBM. How can I help you today?"

        If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, admit that you can't answer.  Don't share false information!

        Today's date is {current_date} and the current time is {current_time}.


Parent topic:

Configuring the settings of the Orchestrate agent