Debugging your agent
Debug mode in Preview Chat helps you understand how your agent thinks, processes information, and produces responses. Instead of guessing why an output looks a certain way, the debugger provides clear, step-by-step visibility into the agent's internal execution, giving you the clarity and control you need to refine agent behavior and build reliable, high-quality experiences, whether you're validating new behaviors or troubleshooting unexpected results.
Using the debugger, you can:
- Understand how your agent interprets user input
- See how information moves through tools, memory, and collaborators
- Identify the reasoning behind a specific response
- Detect and resolve configuration or logic issues quickly
Getting started with the debugger
To begin debugging your agent:
- In the Preview Chat window, open the menu and select Enable debug.
- After running a conversation, click Debug beneath the chat result to open the agent debug page.
The debug interface has two major components:
| Component | Description |
|---|---|
| Agent structure | Visual map showing how the agent is built and how its parts relate |
| Execution timeline | Step-by-step view of the agent's reasoning from request to response |
Agent structure
The agent structure provides a visual map that helps you understand how the agent is built and how its parts relate. This includes:
- Tools that your agent can use
- Collaborators available to the agent
- Connections and relationships between components
Execution timeline
The timeline shows each step of your agent's reasoning from the initial user request to the generated response. This view allows you to:
- Follow execution flow step by step
- Jump directly to any specific step
Use the navigation controls at the top of the execution timeline to move forward or backward through the steps.
| Control | Action |
|---|---|
| Highlight used nodes | Click to highlight the nodes that were part of the execution. |
| Hide unused nodes | Click to focus only on nodes involved in the reasoning. |
| Legends | Click to get details on the meaning of each node type and its function. |
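The forward, backward, and jump-to-step navigation described above can be pictured as a cursor over an ordered list of recorded steps. The following is an illustrative sketch only; the step names and the `ExecutionTimeline` class are hypothetical, not the product's implementation:

```python
# Illustrative sketch of timeline navigation: a cursor over recorded steps.
# The ExecutionTimeline class and step labels are hypothetical examples.

class ExecutionTimeline:
    def __init__(self, steps):
        self.steps = steps        # ordered list of recorded step labels
        self.position = 0         # index of the step currently in view

    def current(self):
        return self.steps[self.position]

    def forward(self):
        # Move one step ahead, stopping at the final step.
        self.position = min(self.position + 1, len(self.steps) - 1)
        return self.current()

    def back(self):
        # Move one step back, stopping at the first step.
        self.position = max(self.position - 1, 0)
        return self.current()

    def jump(self, index):
        # Jump directly to any specific step.
        if not 0 <= index < len(self.steps):
            raise IndexError("no such step in the timeline")
        self.position = index
        return self.current()

timeline = ExecutionTimeline(["user request", "route to tool", "tool call", "response"])
print(timeline.forward())   # route to tool
print(timeline.jump(3))     # response
```

The point of the sketch is that stepping never loses your place: each control moves the same cursor, so the highlighted node in the agent map always matches the step in view.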
Each step includes supporting information such as:
| Information type | Details |
|---|---|
| Variables | Summary, input, output, node logs |
| Node metadata | About, collaborators, tools, guidelines, LLM model |
Variables
Variables store and display the data flowing through your workflow during execution. They help you understand what information is being passed between nodes and how it is transformed.
| Variable type | Description |
|---|---|
| Summary | High-level view of all variables generated so far in the workflow. Useful for quickly understanding the workflow state without digging into each node. |
| Input | Data entering a node. Can include output from previous nodes, static input provided by the user, or system-generated context. |
| Output | Data produced by a node after execution. Can be processed text, API responses, structured objects, or transformed variables. |
| Node logs | Execution logs for debugging and review. Might contain raw input or output data, execution success or failure states, runtime errors, and performance data. |
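To make the four variable types concrete, here is a hypothetical record for a single step. The field names and values are illustrative assumptions for this example only; the product's actual schema may differ:

```python
# Hypothetical shape of one step's variable data in the debugger.
# All field names and values here are illustrative, not the product's schema.
step_record = {
    # Summary: high-level workflow state accumulated so far
    "summary": {"order_id": "A-1001", "status": "lookup complete"},
    # Input: data entering the node (here, the user's message)
    "input": {"user_message": "Where is my order A-1001?"},
    # Output: data the node produced after execution
    "output": {"tool_response": {"status": "shipped", "eta": "2 days"}},
    # Node logs: execution details for debugging and review
    "node_logs": [
        {"level": "info", "message": "tool call succeeded", "latency_ms": 412},
    ],
}

# Reading a record this way answers the questions in the table above:
# what came in, what went out, and whether the node ran cleanly.
print(step_record["output"]["tool_response"]["status"])  # shipped
```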
Node metadata
Node metadata gives descriptive and configuration-related information about each node.
| Metadata field | Description |
|---|---|
| About | A brief summary explaining what the node does: its purpose, function, and role in the workflow. |
| Tools | Lists any external tools or integrations the node uses, such as APIs, connectors, models, and plug-ins. |
| Guidelines | Node-specific instructions or rules, such as prompting guidelines, formatting rules, and behavioral constraints, that help maintain consistency in logic and output quality. |
| LLM model | Indicates which language model powers this node; helpful for traceability and for understanding performance, cost, and capabilities. |
Debugging workflow
Debugging a workflow involves systematically examining its execution to identify, understand, and resolve issues that affect its logic, performance, or output.
Review the input
The debugger captures exactly what your agent understood from the user’s message, helping you confirm that the right scenario is being tested.
Follow the execution path
As you progress through the execution:
- The relevant node in the agent map becomes highlighted.
- You can observe how memory influenced the request.
- You gain visibility into how the agent formed its reasoning.
Inspect collaborator behavior
If your agent hands off work to a collaborator, you can examine:
- How it interpreted the request
- How its tools and guidelines shaped its behavior
- Whether it delegated additional tasks to other components
Verify tool activity
The debugger shows you which tools were actually used. This helps you:
- Confirm that the expected tools ran
- Identify tools that should have been triggered but weren't
- Detect unexpected logic paths caused by configuration issues
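The checks above amount to comparing the tools you expected against the tools the debugger recorded. A minimal sketch, assuming you have noted the tool names from the execution timeline by hand (the tool names below are hypothetical):

```python
# Illustrative check: compare expected tools against the tools observed
# in the execution timeline. Tool names here are hypothetical examples.
expected_tools = {"order_lookup", "shipping_status"}
observed_tools = {"order_lookup"}   # e.g. noted from the debugger's tool steps

missing = expected_tools - observed_tools      # should have run but didn't
unexpected = observed_tools - expected_tools   # ran without being expected

print("missing:", missing)        # missing: {'shipping_status'}
print("unexpected:", unexpected)  # unexpected: set()
```

A non-empty `missing` set points at tools whose triggers never fired; a non-empty `unexpected` set points at logic paths you did not intend, both of which are starting points for the root-cause review below.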
Identify the root cause
By reviewing metadata, instructions, and execution details, you can uncover issues such as:
- Incorrect collaborator instructions
- Misaligned tool triggers
- Logic paths that don't reflect your intended behavior
After identifying the issue, you can update your configuration and retest immediately.