Practical guidance for reading traces
Trace details provide a structured view of how an agent processes each request, helping you understand the major steps involved in a conversation or task. At a high level, each trace contains spans that represent different activities such as agent reasoning, workflow execution, tool usage, and model calls. These spans help you interpret the flow of execution and identify where time is spent.
What trace details show
Trace details visually represent how an agent processes a message from start to finish. Each message expands into a structured sequence of spans, with each span corresponding to a specific activity such as reasoning, tool flow, model invocation, workflow execution, or human involvement.
You can use trace details to:
- Follow the complete runtime flow the agent executed.
- Understand how decisions, tools, and model calls contributed to the final response.
- Verify what information was passed between steps.
- Identify failures, bottlenecks, or unexpected behavior.
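Conceptually, a trace is a tree of spans rooted at the workflow span. The sketch below is illustrative only: it assumes a simplified span shape (name, duration, children) rather than any particular SDK's types.

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    """Simplified span: a name, a duration in milliseconds, and nested child spans."""
    name: str
    duration_ms: float
    children: list["Span"] = field(default_factory=list)

def walk(span: Span, depth: int = 0):
    """Yield (depth, span) pairs in execution order, outermost span first."""
    yield depth, span
    for child in span.children:
        yield from walk(child, depth + 1)

# A toy trace: the workflow wraps an agent invocation,
# which in turn made one model call and one tool call.
trace = Span("LangGraph.workflow", 1200, [
    Span("invoke_agent", 1100, [
        Span("llm.call", 700),
        Span("tool.invoke", 300),
    ]),
])

for depth, span in walk(trace):
    print("  " * depth + f"{span.name} ({span.duration_ms} ms)")
```

Walking the tree outermost-first mirrors how trace viewers render the execution flow, with indentation showing which spans ran inside which.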
What each span represents
The following spans are commonly emitted and represent specific activities within a run. Use the table below as a reference when analyzing any trace.

| Span | Details |
|---|---|
| `LangGraph.workflow` | The entire workflow run: the outermost span, covering the full execution lifecycle. |
| `invoke_agent` | Agent invocation lifecycle: when an agent is first called. |
| `invoke_agent.task` | A single task given to an agent. |
| `agent.task` | The agent's reasoning or objective for a specific task. |
| `agent.plan` | The agent's plan or internal reasoning steps. |
| | How the system chooses which agent or skill to call next. |
| | Work delegated to another agent. |
| | Prompt construction: templates used to build model prompts. |
| `llm.call` | The model (LLM) invocation responsible for generating text. |
| `tool.invoke` | External action performed by a tool call. |
| `tools.task` | The agent's decision to use a tool (logical usage decision). |
| `human.task` | A point where human input or verification was required. |
Keep these definitions in mind whenever you read a trace: every span you see maps back to one of them.
Recognize the different types of spans
Each trace contains spans that represent key activities. Common examples include:
1. Workflow spans
Workflow spans represent the overall execution lifecycle. They show how the workflow ran end‑to‑end, including branching, loops, tool calls, and LLM interactions.
Example span: LangGraph.workflow
2. Agent spans
Agent spans indicate how the agent thought, planned, and acted. They describe the agent's objectives, how it broke down tasks, and what sub‑agents or tools it selected.
Example spans: invoke_agent, invoke_agent.task, agent.task, agent.plan
3. Tool spans
Tool spans reflect tool‑based actions. Use them to understand how and when tools were triggered, especially when debugging tool failures.
Example spans: tools.task, tool.invoke
4. Model (LLM) spans
Model spans capture the model's behavior. They highlight when the model was used, how long it took, and provide access to input and output text.
Example span: llm.call
5. Human interaction spans
Human interaction spans indicate when the system paused for user input, approval, or decision‑making.
Example span: human.task
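The naming conventions in the examples above suggest a simple grouping rule. The helper below is a hypothetical sketch that buckets span names into the five categories by prefix and suffix matching; real trace data may use different names.

```python
def span_category(name: str) -> str:
    """Bucket a span name into one of the five categories described above.

    Purely illustrative: the matching rules are derived from the example
    span names in this guide, not from any formal naming scheme.
    """
    if name.endswith(".workflow"):
        return "workflow"
    if name.startswith(("invoke_agent", "agent.")):
        return "agent"
    if name.startswith(("tools.", "tool.")):
        return "tool"
    if name.startswith("llm."):
        return "model"
    if name.startswith("human."):
        return "human"
    return "other"

for example in ["LangGraph.workflow", "agent.plan", "tool.invoke", "llm.call", "human.task"]:
    print(example, "->", span_category(example))
```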
Use spans to understand overall performance
Even without doing deeper analytics, trace spans help you estimate performance:
- Workflow duration: The outermost span indicates how long the full request took. Look at the `LangGraph.workflow` span.
- Agent thinking and acting: Agent‑related spans help you see where the agent planned versus acted. Compare `agent.plan` (planning) and `invoke_agent.task` (execution).
- Model usage: LLM spans show where the system relied on the language model, which often contributes to latency. Review `llm.call` to see how often and for how long the model was used.
- Tool overhead: Inspect `tools.task` and `tool.invoke` to understand time spent on tool calls.
These views help you identify which parts of the process contribute most to overall latency.
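Given a flat list of spans with durations, estimating where time goes reduces to summing durations per span name. A minimal sketch, assuming each span is a (name, duration in milliseconds) pair; the numbers are invented for illustration:

```python
from collections import defaultdict

def latency_breakdown(spans):
    """Sum total duration per span name from (name, duration_ms) pairs."""
    totals = defaultdict(float)
    for name, duration_ms in spans:
        totals[name] += duration_ms
    return dict(totals)

# Hypothetical flat list of spans from a single trace.
spans = [
    ("llm.call", 650),
    ("llm.call", 420),
    ("tool.invoke", 180),
    ("agent.plan", 90),
]
print(latency_breakdown(spans))  # model time (1070 ms) dwarfs tool time (180 ms)
```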
Spot errors or issues quickly
You can scan trace spans to identify where something went wrong:
- Look for spans marked with warnings or error indicators.
- Check whether a tool, workflow, or model call returned an error.
- Understand whether the issue originated from a tool, model, or workflow step.
This high‑level view helps you quickly identify problem areas before getting into deeper analysis.
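If spans expose a status field, this scan can be automated. A minimal sketch, assuming each span is a dictionary whose `status` and `message` keys are hypothetical, not a guaranteed schema:

```python
def failed_spans(spans):
    """Return spans whose status marks an error, preserving trace order."""
    return [span for span in spans if span.get("status") == "error"]

# Hypothetical spans; the "status" and "message" keys are assumptions.
spans = [
    {"name": "LangGraph.workflow", "status": "ok"},
    {"name": "tool.invoke", "status": "error", "message": "HTTP 500 from external API"},
    {"name": "llm.call", "status": "ok"},
]

for span in failed_spans(spans):
    print(span["name"], "failed:", span["message"])
```

Filtering first, then drilling into the surviving spans, matches the high‑level‑view‑before‑deep‑analysis approach described above.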
Understand model usage at a glance
`llm.call` spans highlight:

- When the model was invoked
- How many interactions occurred
- The general role the model played in producing the response
These spans offer a quick understanding of model involvement during a request.
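Counting `llm.call` spans and totaling their durations gives that quick picture. A sketch assuming each span is a dictionary with hypothetical `name` and `duration_ms` keys:

```python
def model_usage(spans):
    """Summarize llm.call activity: number of invocations and total model time."""
    calls = [span for span in spans if span["name"] == "llm.call"]
    return {"count": len(calls), "total_ms": sum(span["duration_ms"] for span in calls)}

# Hypothetical flat span list; values are invented for illustration.
spans = [
    {"name": "llm.call", "duration_ms": 640},
    {"name": "tool.invoke", "duration_ms": 150},
    {"name": "llm.call", "duration_ms": 480},
]
print(model_usage(spans))  # two model calls, 1120 ms of model time in total
```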
Best practices when reviewing traces
- Start with the top‑level `LangGraph.workflow` span to understand the overall structure.
- Expand the trace step‑by‑step to see how the agent progressed.
- Look for repeated patterns, such as recurring tool calls (`tool.invoke`) or model invocations (`llm.call`).
- Use error markers to quickly locate failures within long traces.
These patterns help you interpret the trace flow visually without needing in‑depth technical knowledge.
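The repeated‑pattern check in the best practices above can be mechanized by counting span‑name occurrences. A sketch, with the threshold of three repeats chosen arbitrarily for illustration:

```python
from collections import Counter

def repeated_spans(span_names, min_count=3):
    """Return span names that occur at least min_count times in a trace."""
    return {name: count for name, count in Counter(span_names).items() if count >= min_count}

# Hypothetical sequence of span names seen while expanding a long trace.
names = [
    "llm.call", "tool.invoke", "llm.call", "tool.invoke",
    "llm.call", "tool.invoke", "tool.invoke",
]
print(repeated_spans(names))  # both names repeat often enough to stand out
```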