Monitoring LangGraph

LangGraph is a framework for building stateful, multi-agent workflows on top of LangChain. LangGraph integrates with large language models (LLMs) such as IBM watsonx, OpenAI, and Anthropic Claude.

You can monitor LangGraph applications by using Instana to capture traces, metrics, and logs. This data provides real-time insights into AI agent interactions, execution paths, and system performance.

LangGraph enables multi-agent workflows with non-linear execution. Therefore, you require distributed tracing to track the LLM interactions and API calls. By integrating OpenTelemetry (OTel) and Traceloop’s OpenLLMetry, you can instrument LangGraph workflows to capture spans and traces, and thereby visualize the execution flows in Instana. Real-time observability further enhances workflow optimization by analyzing resource utilization and performance trends. This integration helps you to fine-tune AI-driven automation, which improves reliability and efficiency in LangGraph-based applications.
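
For example, the following sketch shows one way to initialize Traceloop so that the OpenLLMetry spans are exported over OTLP to an endpoint that Instana can ingest. The endpoint value and the disable_batch setting are assumptions for a local Instana agent; adjust them to match your environment.

    import os

    from traceloop.sdk import Traceloop

    # Assumption: the Instana agent is listening for OTLP/HTTP data on
    # localhost:4318. Point the Traceloop SDK at it instead of the default
    # Traceloop cloud endpoint.
    os.environ.setdefault("TRACELOOP_BASE_URL", "http://localhost:4318")

    Traceloop.init(
        app_name="langGraph_sample_app",  # service name that appears in Instana
        disable_batch=True,  # flush spans immediately; useful for short-lived scripts
    )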

Instrumenting LangGraph Applications

To instrument LangGraph applications, complete the following steps:

Make sure that your environment meets all the prerequisites. For more information, see Prerequisites.

  1. To install dependencies for LangGraph, run the following command:

    pip3 install langgraph langchain_ibm langchain_community duckduckgo_search traceloop-sdk
    
  2. Export the following credentials to access the IBM watsonx models and the search tools that are used in the LangGraph sample application:

    export WATSONX_URL="<watsonx-url>"
    export WATSONX_PROJECT_ID="<watsonx-project-id>"
    export WATSONX_API_KEY="<watsonx-api-key>"
    export TAVILY_API_KEY="<tavily-api-key>"
    
  3. Save the following code as the LangGraph sample application:

    import os
    from langchain_community.tools.tavily_search import TavilySearchResults
    from langchain_community.tools import DuckDuckGoSearchRun
    from typing import Literal
    from langchain_core.messages import BaseMessage, HumanMessage
    from langgraph.prebuilt import create_react_agent
    from langgraph.graph import MessagesState, StateGraph, START, END
    from langgraph.types import Command
    from langchain_ibm.chat_models import ChatWatsonx
    
    from traceloop.sdk import Traceloop
    from traceloop.sdk.decorators import workflow
    
    # Initialize Traceloop
    Traceloop.init(app_name="langGraph_sample_app")
    
    # Define external tools
    tavily_tool = TavilySearchResults(max_results=10)
    duckDuckGoSearchRun_tool = DuckDuckGoSearchRun()
    
    # Define LLM
    parameters = {
        "decoding_method": "sample",
        "max_new_tokens": 600,
        "min_new_tokens": 1,
        "temperature": 0.5,
        "top_k": 50,
        "top_p": 1,
    }
    llm = ChatWatsonx(
        model_id="ibm/granite-3-2b-instruct",
        url=os.getenv("WATSONX_URL"),
        project_id=os.getenv("WATSONX_PROJECT_ID"),
        apikey=os.getenv("WATSONX_API_KEY"),
        params=parameters,
    )
    
    
    def get_next_node(last_message: BaseMessage, goto: str):
        if not last_message.content:
            return END
        return goto
    
    
    # Research agent and node
    research_agent = create_react_agent(
        llm,
        tools=[tavily_tool],
        state_modifier="You are a research assistant. Your ONLY job is to conduct thorough research. ",
    )
    
    
    def research_node(
        state: MessagesState,
    ) -> Command[Literal["blogger", END]]:
        result = research_agent.invoke(state)
        goto = get_next_node(result["messages"][-1], "blogger")
        result["messages"][-1] = HumanMessage(
            content=result["messages"][-1].content, name="researcher"
        )
        return Command(
            update={
                "messages": result["messages"],
            },
            goto=goto,
        )
    
    
    # Writer agent and node
    writer_agent = create_react_agent(
        llm,
        tools=[duckDuckGoSearchRun_tool],
        state_modifier="You are a blog writer. Your task is to create a well-structured blog",
    )
    
    
    def writer_node(state: MessagesState) -> Command[Literal["researcher", END]]:
        result = writer_agent.invoke(state)
        result["messages"][-1] = HumanMessage(
            content=result["messages"][-1].content, name="blogger"
        )
        return Command(
            update={
                "messages": result["messages"],
            },
            # The blog post is the final output, so end the workflow here
            goto=END,
        )
    
    
    # Define the workflow graph ("builder" avoids shadowing the imported
    # Traceloop "workflow" decorator)
    builder = StateGraph(MessagesState)
    builder.add_node("researcher", research_node)
    builder.add_node("blogger", writer_node)
    
    builder.add_edge(START, "researcher")
    graph = builder.compile()
    
    # Execute the workflow inside a Traceloop workflow span
    @workflow(name="langgraph_blog_workflow")
    def run_workflow():
        events = graph.stream(
            {
                "messages": [
                    (
                        "user",
                        "First, conduct research on AI trends in 2025. "
                        "Then, based on that research, create a well-structured blog post.",
                    )
                ],
            },
            {"recursion_limit": 150},
        )
    
        # Print workflow execution steps
        for s in events:
            print(s)
            print("----")
    
    
    run_workflow()
    
  4. To run the LangGraph application, execute the following command:

    python3 ./<langGraph-sample-application>.py
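
In addition to the spans that OpenLLMetry captures automatically for LLM and tool calls, you can annotate your own functions with the Traceloop decorators so that custom steps show up as separate spans in the Instana trace. The following sketch is illustrative; the function names and span names are assumptions, not part of the sample application.

    from traceloop.sdk import Traceloop
    from traceloop.sdk.decorators import task, workflow

    Traceloop.init(app_name="langGraph_sample_app")


    # @task wraps the function call in its own span, so this post-processing
    # step appears as a distinct node in the Instana trace.
    @task(name="summarize_blog")
    def summarize_blog(blog_text: str) -> str:
        # Illustrative post-processing; replace with your own logic
        return blog_text[:200]


    # @workflow groups the spans that are created inside the function under
    # one top-level workflow span.
    @workflow(name="blog_pipeline")
    def run_pipeline() -> str:
        blog_text = "example blog output"  # in practice, the LangGraph result
        return summarize_blog(blog_text)


    run_pipeline()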
    

When the application runs, LangGraph streams the workflow and prints each execution step to the console.

After you configure monitoring, Instana collects traces and metrics from the LangGraph application.

To view the metrics that are collected from the LLM that LangGraph uses, see View metrics.