Using the AI chat
Interact with AI-powered assistants, AI agents, and skills by using the AI chat. With this chat, you can complete various generative artificial intelligence (AI) tasks, or even use AI assistants to perform more complex tasks like sending emails or posting job requisitions.
To access the AI chat, click Chat in the menu. A chat opens so that you can start the conversation. To start a new chat, click New chat + in the side panel.

Using AI assistants in the chat
When you first interact with the AI agent in the chat, it handles general AI tasks. When your tasks become more specialized, you can route the conversation to AI assistants that are designed for a specific domain or use case.
Using AI assistants in the AI chat requires that you complete the following steps beforehand:
- Create an AI assistant.
- If you want to use skill-based actions, connect to the apps that provide the skills that you want to use.
- Create actions in that AI assistant.
- Publish the contents of the AI assistant to the draft and live environments.
- Add the AI assistant to the chat.
If you don't add AI assistants to the chat, or your prompts don't match any of the added AI assistants, the conversation is handled by the AI agent and its base LLM.
Interacting in the AI chat
During the interaction, you can use the AI chat to assist you with your tasks by:
- Prompting questions and simple tasks to the AI agent so that it generates content for you.
- Using a specific AI assistant by mentioning it.
- Letting the AI agent route the conversation to an AI assistant based on the conversation context.
Prompting questions and simple tasks to the AI agent
The AI agent uses large language models (LLMs) to create general-purpose answers and assist with a wide range of generative AI tasks, such as question-and-answer, summarization, classification, text generation, information extraction, and translation for various topics. The toggle that controls this capability is disabled by default.
Explore the sample prompts to get started with the AI agent.
Use prompt engineering techniques to ensure that the AI agent can accurately understand and respond to your prompts. For more information about how to write prompts for the AI agent, see Tips for writing foundation model prompts: prompt engineering in the IBM watsonx documentation.
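For example, a well-engineered prompt typically states the task, supplies the context, and constrains the output. The following sketch is illustrative only; the function name and sample text are hypothetical and not part of any product API.

```python
# Illustrative only: a prompt built with common prompt-engineering techniques
# (a clear instruction, the relevant context, and output constraints).
def build_summary_prompt(document: str) -> str:
    """Assemble a prompt that tells the model exactly what to do and how to answer."""
    return (
        "You are an assistant that summarizes business documents.\n"
        "Summarize the document below in 3 bullet points.\n"
        "Use plain language and do not add information that is not in the document.\n\n"
        f"Document:\n{document}\n\n"
        "Summary:"
    )

if __name__ == "__main__":
    sample = "Q3 revenue grew 12% year over year, driven by the retail segment."
    print(build_summary_prompt(sample))
```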
When you send a prompt to the AI agent, its LLM breaks down your input into tokens. Tokens are individual units of text that represent words, phrases, or other meaningful elements. The number of tokens in your prompt or response might vary depending on the complexity of the task and the length of the input.
The amount of information (context length) that the AI agent considers when it processes a user's prompt varies depending on the base LLM that the AI agent uses. By default, the AI agent uses the IBM Granite model, but tenant administrators can change this default by configuring the model of the AI agent.
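As a rough illustration of how tokenization relates to context length, the following sketch counts the tokens in a prompt with an open-source tokenizer and compares the count with an assumed context limit. The model ID and the context limit are assumptions for illustration only; they do not reflect your tenant's configuration or the product's internals.

```python
# Minimal sketch: split a prompt into tokens and compare the count
# against an assumed context length.
from transformers import AutoTokenizer

MODEL_ID = "ibm-granite/granite-3.1-8b-instruct"  # assumed Granite model for illustration
CONTEXT_LIMIT = 4096                               # assumed context length, in tokens

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

prompt = "Summarize the key risks in the attached quarterly report."
token_ids = tokenizer.encode(prompt)

print(f"Prompt uses {len(token_ids)} tokens of a {CONTEXT_LIMIT}-token context window.")
```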
Mentioning an AI assistant
Navigate the conversation straight to the AI assistant specialized in the domain that is required to complete a task. Mention the AI assistant's name with the @ symbol and include your prompt in the chat bar. For instance, "@RetailAnalysis show me the sales performance by product category for the past month."
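Conceptually, a mention works like a prefix that names the target assistant, followed by the actual request. The sketch below is hypothetical and only illustrates how such a prefix could be separated from the prompt; it is not how the product parses mentions.

```python
# Hypothetical sketch: split an "@AssistantName ..." prompt into the assistant
# name and the remaining request text.
import re

def parse_mention(prompt: str):
    """Return (assistant_name, remaining_prompt) if the prompt starts with an @mention."""
    match = re.match(r"@(\w+)\s+(.*)", prompt)
    if match:
        return match.group(1), match.group(2)
    return None, prompt

print(parse_mention("@RetailAnalysis show me the sales performance by product category"))
# ('RetailAnalysis', 'show me the sales performance by product category')
```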
Routing based on the conversation context
Your conversation is automatically directed to an AI assistant, AI agent, or skill based on the conversation context. The context of the chat is formed by the conversation history, your current input, and the specific details of the AI assistants, AI agents, and skills that are added to the chat.
To ensure seamless execution of AI assistants' actions, the AI agent keeps the conversation routed to an AI assistant for all subsequent interactions until that assistant completes its actions, and it does not suggest new routing recommendations during an ongoing conversation. The AI assistant ends automatically in the following scenarios:
- All actions completed: The AI assistant ends after it completes all of its designed actions. For more details, see Finishing the AI assistant.
- Direct invocation: The AI assistant ends after it is invoked directly by using a mention.
- Conversational search response: The AI assistant ends after it responds with Conversational search content.
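The following sketch illustrates this routing behavior in simplified form: the conversation stays with the matched assistant until its actions complete, and unmatched prompts fall back to the AI agent and its base LLM. The keyword matching and class names are stand-ins for illustration, not the product's implementation.

```python
# Conceptual sketch of the routing behavior described above.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Assistant:
    name: str
    keywords: List[str]        # simplistic stand-in for LLM-based context matching
    actions_pending: int = 1

class Router:
    def __init__(self, assistants: List[Assistant]):
        self.assistants = assistants
        self.pinned: Optional[Assistant] = None

    def route(self, message: str) -> str:
        # Keep routing to the current assistant until it finishes all of its actions.
        if self.pinned and self.pinned.actions_pending > 0:
            return self.pinned.name
        self.pinned = None
        # Otherwise, match the new message against each assistant's domain.
        for assistant in self.assistants:
            if any(kw in message.lower() for kw in assistant.keywords):
                self.pinned = assistant
                return assistant.name
        # No match: the general-purpose AI agent and its base LLM handle the message.
        return "AI agent"

router = Router([Assistant("RetailAnalysis", ["sales", "product category"])])
print(router.route("Show me the sales performance by product category"))  # RetailAnalysis
print(router.route("Thanks, also translate it to French"))                # still RetailAnalysis (pinned)
```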
Finishing the action
To finish an action, see Subactions. You can create multiple actions and, based on your requirements, design one action to start another by using subactions.
Finishing the AI assistant
The AI agent allows multiple AI assistants to work together. The underlying LLM router routes messages to the best-matching assistant and continues until the assistant completes its tasks. To ensure the assistant ends its operations smoothly, follow these best practices:
| Practice | Action |
|---|---|
| Action completed | Use End the Action in the And then section for any required step. |
| Subactions completion | To start a subaction, use the End this action after the other action is completed checkbox, or use End the Action in a parent action step. |
| Custom fallback | Review custom fallback connections to Agents or external flows. Decide whether the agent can reroute more effectively to other assistants. |
| Disambiguation and digression | Revisit configurations for Ask clarifying questions or Change conversation topic features. If they are unnecessary or better handled by another assistant, consider disabling them. |
Viewing your tasks
A task is a skill or skill flow that you run, or an LLM task that is being generated. If the task that you requested takes time to run, the AI chat suggests that you work on something else while the requested task runs in your chat. You can start more than one skill over multiple chats. You receive notifications about the task status and task completion.