Adding knowledge to agents

Agents deliver accurate, context-aware responses only when they are connected to reliable sources of information. These sources supply the factual and contextual data needed to interpret user queries and generate meaningful answers. Without this connection, the agent relies solely on its base model, which might lack domain-specific knowledge and produce generic or incomplete responses.

Knowledge source

A knowledge source is where your agent gets information to answer user queries. You can extend an agent’s capabilities with two kinds of knowledge sources:

  • External repositories: Connect to systems such as Milvus, Elasticsearch, or custom services to access dynamic content. Use external repositories when your content is frequently updated, distributed across systems, or needs automation.

  • File uploads: Add stable, reviewed documents such as manuals or guides directly to the agent’s internal knowledge base. Use file uploads for reliable, static content.

These sources work alongside the agent’s built-in LLM knowledge. They add to its existing knowledge and do not replace it.

Before you begin

Before you configure knowledge sources, ensure that the following prerequisites are met:

  • Agents can retrieve and respond by using content from external repositories in English, Spanish, French, German, and Brazilian Portuguese. Ensure that your content is in one of these supported languages.

  • If you intend to use Milvus, Elasticsearch, Astra DB, or a custom service, contact your IT administrator to obtain the necessary connection details.

Customizing knowledge settings

You can configure how your agent uses retrieved content to generate responses by adjusting its knowledge settings. These settings help you control response behavior, including confidence thresholds, response length, fallback messaging, and citation display. You can also choose between classic and dynamic modes based on how strictly the agent must follow the retrieved content.

  • Classic mode: The agent retrieves content from connected sources and generates a response that is based solely on that data. Choose classic mode when you want predictable responses that are tightly aligned with the retrieved content. It is best suited for FAQs, policy lookups, documentation-based answers, and other straightforward use cases.

  • Dynamic mode: The agent retrieves content and uses it more flexibly, either to generate a response or as context for completing tasks. Choose dynamic mode when your agent needs to interpret, reason, or act on the retrieved information. It is best suited for troubleshooting, decision support, task automation, and other complex or multi-step scenarios that require reasoning or summarization.
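
The difference between the two modes can be sketched in code. The following Python sketch is illustrative only; retrieve, llm, and tools are hypothetical stand-ins, not the product’s API, and the product manages this pipeline internally.

  # Illustrative sketch of the two modes. `retrieve`, `llm`, and `tools`
  # are hypothetical stand-ins, not the product's API.

  def answer_classic(query, retrieve, llm):
      # Classic mode: the response is grounded strictly in retrieved content.
      passages = retrieve(query)
      prompt = (
          "Answer by using ONLY the passages below. If they do not "
          "contain the answer, say so.\n\n" + "\n---\n".join(passages)
          + "\n\nQuestion: " + query
      )
      return llm(prompt)

  def answer_dynamic(query, retrieve, llm, tools):
      # Dynamic mode: retrieved content becomes context that the agent can
      # answer from directly or use while completing a task with a tool.
      context = "\n---\n".join(retrieve(query))
      plan = llm("Context:\n" + context + "\n\nTask: " + query
                 + "\nReply with TOOL:<name> to call a tool, or answer directly.")
      if plan.startswith("TOOL:"):
          name = plan.removeprefix("TOOL:").strip()
          return tools[name](query, context)
      return plan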

Editing knowledge settings in classic mode

In classic mode, the agent uses retrieved content directly to generate responses.

To configure settings:

  1. Go to the agent configuration page.
  2. Click Knowledge > Edit knowledge settings.
  3. Select Classic.
  4. Customize the options that are described in the following sections to refine your agent’s behavior.
  5. Click Save to apply your changes.

Editing knowledge settings in dynamic mode

In dynamic mode, the agent can use the retrieved content either to generate a response or as context for completing tasks.

To configure settings:

  1. Go to the agent configuration page.
  2. Click Knowledge > Edit knowledge settings.
  3. Select Dynamic (Preview).
  4. Customize the options that are described in the following sections to refine your agent’s behavior.
  5. Click Save to apply your changes.
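
In both modes, step 4 refers to the same set of options, which the following sections describe. As a mental model, they can be pictured as one configuration object; the field names below are illustrative assumptions, not the product’s API.

  # Illustrative configuration object; field names are assumptions.
  knowledge_settings = {
      "mode": "classic",               # or "dynamic"
      "retrieval_confidence": "low",   # lowest | low | high | highest
      "response_confidence": "high",   # lowest | low | high | highest
      "response_length": "moderate",   # concise | moderate | verbose
      "no_answer_message": "I couldn't find an answer to your question. "
                           "Please try rephrasing it.",
      "max_search_results": 5,
      "max_citations": 3,
  }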

Confidence thresholds for retrieval and responses

Confidence thresholds determine when the agent uses knowledge-base content in its responses. You can set two types of thresholds:

Retrieval confidence threshold: Specifies how confident the system must be that the retrieved data is relevant. The system includes the data in a response only if this level of confidence is met.

Response confidence threshold: Specifies how confident the system must be that the generated response is accurate and useful. The system returns the response only if this level of confidence is met.

Each threshold can be set to one of the following levels:

  • Lowest: The agent frequently uses the knowledge base, even when confidence is low. This setting can lead to faster responses but can increase the chance of irrelevant or incorrect answers.
  • Low: The agent uses search results often but with more caution. This setting balances responsiveness with accuracy.
  • High: The agent uses search results selectively and requires higher confidence to include them. This setting reduces the risk of incorrect information.
  • Highest: The agent rarely uses search results unless confidence is notably high. This setting prioritizes accuracy and minimizes the use of uncertain information.
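
Conceptually, the two thresholds act as gates: one on the retrieved results and one on the generated answer. A minimal sketch follows, assuming illustrative numeric cutoffs for the four levels; the product exposes only the named levels, not numbers.

  # The numeric cutoffs are assumptions for illustration only.
  LEVELS = {"lowest": 0.2, "low": 0.4, "high": 0.6, "highest": 0.8}

  def gate_retrieval(hits, level="low"):
      # Keep only hits whose relevance score clears the retrieval threshold.
      return [h for h in hits if h["score"] >= LEVELS[level]]

  def gate_response(answer, score, level="low", fallback="Sorry, I am not sure."):
      # Return the generated answer only if its confidence clears the
      # response threshold; otherwise fall back to the no-answer message.
      return answer if score >= LEVELS[level] else fallback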

Adjusting the length of the generated response

You can control how detailed your agent’s responses are by selecting a preferred response length:

  • Concise: Short and direct answers, ideal for simple queries.
  • Moderate (default): Balanced responses with enough detail for general use.
  • Verbose: In-depth responses suitable for complex or exploratory queries.

Different use cases require different levels of detail. For example, a chatbot for IT support might need verbose responses, while a customer-facing agent might benefit from concise replies. This setting helps tailor the agent's tone and depth to the audience.
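
One way to picture this setting is as a cap on generation length. The token budgets below are assumptions; the product does not document the actual mapping.

  # Illustrative token budgets per response-length setting (assumed values).
  LENGTH_BUDGETS = {"concise": 150, "moderate": 400, "verbose": 1000}

  def max_tokens_for(setting="moderate"):
      return LENGTH_BUDGETS[setting]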

Set a message when no answer is found

You can set a fallback message to guide the user when your agent cannot find a suitable response because content is missing or confidence is low.

For example,

"I couldn’t find an answer to your question. Please try rephrasing it."

Fallback messages prevent the agent from going silent or producing confusing output and encourage users to try again, improving trust and usability.

Set the maximum number of search results

You can decide how many results the agent retrieves and considers when it generates a response. By setting this limit, you narrow the scope of the agent’s search, which helps you balance relevance, performance, and response time.
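
In most retrieval systems, this setting corresponds to a top-k limit on the search call. The following sketch uses the pymilvus client; the endpoint, collection name, and embedding are placeholders, and the product applies this limit for you.

  # Sketch only: endpoint, collection name, and embedding are placeholders.
  from pymilvus import MilvusClient

  client = MilvusClient(uri="http://localhost:19530")
  query_vector = [0.0] * 768          # stand-in for the embedded user query

  hits = client.search(
      collection_name="agent_knowledge",   # hypothetical collection
      data=[query_vector],
      limit=5,                             # the maximum-results setting
      output_fields=["text", "source"],
  )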

Setting the number of citations your agent shows

When your agent uses uploaded documents as knowledge sources, it can display citations in the chat to help users verify the information. These citations reference the sources that are used to generate the response.

You can select a number to determine how many citations or references the agent shows in its response.

Note: This setting affects only the number of citations that are shown in the chat, not the number that is used to generate the response.
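
In other words, every retrieved source can inform the answer, but only the first few are displayed. A minimal sketch of that behavior follows; the function and field names are illustrative.

  def format_response(answer, sources, max_citations=3):
      # All sources informed the answer; only the first N are displayed.
      shown = sources[:max_citations]
      refs = "\n".join(
          "[{}] {}".format(i + 1, s) for i, s in enumerate(shown)
      )
      return answer + "\n\nSources:\n" + refs if shown else answer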

Connecting to external content repositories

To connect your agent to an external content repository, follow the setup guide that matches the type of repository that you’re using, such as Milvus, Elasticsearch, Astra DB, or a custom service.
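
The connection details from your IT administrator typically include an endpoint and credentials. As a hedged illustration, client-side connections to two of the supported systems look roughly like this; all endpoints and credentials below are placeholders.

  # Placeholders throughout: substitute the details from your IT administrator.
  from elasticsearch import Elasticsearch
  from pymilvus import MilvusClient

  milvus = MilvusClient(
      uri="http://milvus.example.com:19530",
      token="user:password",            # or an API key, per your deployment
  )

  es = Elasticsearch(
      "https://es.example.com:9200",
      api_key="YOUR_API_KEY",
  )

  # Quick connectivity checks.
  print(milvus.list_collections())
  print(es.info())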

Switching content repositories

You can change your agent’s content repository at any time, whether you are switching from uploaded files to an external system such as Milvus or Elasticsearch, or between external services.

Consider the following before you switch knowledge repositories:

  • Existing configurations are lost: Switching repositories deletes all previously uploaded files, indexing details, and any custom settings tied to the current knowledge source.
  • Data cannot be recovered: After deletion, the previous repository’s content and configurations cannot be restored.
  • Backup is critical: Before you switch repositories, save important files and document key settings to avoid permanent data loss.

To switch repositories:

  1. Click Change source.
  2. Select your new content repository.
  3. Confirm the action by clicking I understand.

Note: This action permanently deletes the previous repository’s details and files.

Creating knowledge descriptions

When you upload documents to your agent’s knowledge base, include a clear and informative description. The description helps the agent understand and interpret the data. The agent then decides whether to use the knowledge in its responses or rely on another method, such as calling a tool or using the large language model (LLM).

Recommendations for writing effective descriptions

Use these recommendations to help write clear, purposeful knowledge descriptions that improve agent understanding, response relevance, and user experience.

What to do:

  • Use clear, concise language that reflects the document’s purpose.
  • Highlight key topics or actions that the document supports.
  • Focus on how the content helps the agent respond to user queries.
  • Mention specific use cases or scenarios that the document covers.

What to avoid:

  • Writing vague or generic descriptions like "general info" or "misc".
  • Including irrelevant details or overly technical jargon.
  • Copying the document title without adding context.
  • Leaving the description field blank or incomplete.

Example of an effective description

The following example shows a knowledge description for HR policies:

"This knowledge base includes HR policies, employee handbooks, and guidelines on benefits, leave, and performance management. It supports queries like 'What is the parental leave policy?' and 'How do I apply for remote work?' Keywords: leave policy, remote work, benefits, performance review, onboarding."