Monitoring LangChain
LangChain is a Python framework for building and managing AI-driven applications that combine large language models (LLMs) with external tools to create intelligent, context-aware systems. With LangChain, you can develop workflows that chain together tasks, tools, and LLMs to reason about data and generate dynamic outputs such as documentation, summaries, or insights. This ability to chain multiple tasks and tools enables sophisticated AI systems that adapt to diverse use cases.
Instana provides powerful observability for LangChain, offering real-time monitoring of workflows, LLM interactions, and task execution. By integrating Instana's distributed tracing, metrics collection, and error detection, you gain deep insight into how LangChain workflows process data, interact with external tools, and execute tasks. Instana ensures end-to-end traceability for AI-driven systems, making LangChain implementations more transparent, reliable, and scalable. This integration allows teams to monitor performance and detect anomalies, ensuring seamless operation of AI-powered applications built with LangChain.
Instrumenting the LangChain Application
To instrument the LangChain application, complete the following steps:
Make sure that your environment meets all the prerequisites. For more information, see Prerequisites.
To install dependencies for LangChain, run the following command:
pip3 install langchain-core==0.3.34 langchain-community==0.3.17 langchain-ibm traceloop-sdk
Export the following credentials to access the watsonx models that are used in the LangChain sample application:
export WATSONX_URL="<watsonx-url>"
export WATSONX_PROJECT_ID="<watsonx-project-id>"
export WATSONX_API_KEY="<watsonx-api-key>"
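Before you run the sample, you can optionally confirm that all three credentials are set. The following is a minimal sketch; the helper name is hypothetical and not part of LangChain or Instana:

```python
import os

# Hypothetical helper: verify that the watsonx credentials from the
# previous step are present in the environment.
REQUIRED_VARS = ("WATSONX_URL", "WATSONX_PROJECT_ID", "WATSONX_API_KEY")

def missing_credentials(env=None):
    """Return the names of any required watsonx variables that are unset."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]

missing = missing_credentials()
if missing:
    print("Missing environment variables:", ", ".join(missing))
else:
    print("All watsonx credentials are set.")
```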
Generate a LangChain sample application by running the following code:
from langchain_ibm import WatsonxLLM
from langchain_core.prompts import PromptTemplate
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import task, workflow
import os

# Initialize Traceloop
Traceloop.init(app_name="langchain_watsonx_service")

@task(name="initialize_watsonx_model")
def initialize_model():
    return WatsonxLLM(
        model_id="ibm/granite-13b-instruct-v2",
        url=os.getenv("WATSONX_URL"),
        apikey=os.getenv("WATSONX_API_KEY"),
        project_id=os.getenv("WATSONX_PROJECT_ID"),
        params={
            "decoding_method": "sample",
            "max_new_tokens": 512,
            "min_new_tokens": 1,
            "temperature": 0.5,
            "top_k": 50,
            "top_p": 1,
        },
    )

@task(name="create_prompt_template")
def create_prompt():
    return PromptTemplate(
        input_variables=["input_text"],
        template="You are a helpful AI assistant. Respond to the following: {input_text}",
    )

@task(name="process_llm_query")
def process_query(prompt, llm, input_text):
    chain = prompt | llm
    return chain.invoke({"input_text": input_text})

@workflow(name="watsonx_conversation_workflow")
def run_conversation():
    watsonx_llm = initialize_model()
    prompt_template = create_prompt()
    input_text = "Explain the concept of quantum computing in simple terms."
    response = process_query(prompt_template, watsonx_llm, input_text)
    print("Response:", response)

if __name__ == "__main__":
    run_conversation()
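In the sample, `prompt | llm` uses LangChain's pipe operator to compose the prompt template and the model into a single runnable chain. Conceptually, the composition behaves like the following plain-Python sketch; this is a simplified illustration of the idea, not LangChain's actual implementation:

```python
class Runnable:
    """Simplified stand-in for a LangChain runnable: anything with invoke()."""

    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)

    def __or__(self, other):
        # `a | b` yields a new runnable that feeds a's output into b.
        return Runnable(lambda value: other.invoke(self.invoke(value)))

# A toy "prompt" that formats the input and a toy "llm" that echoes it.
prompt = Runnable(lambda d: f"Respond to the following: {d['input_text']}")
llm = Runnable(lambda text: f"[model output for: {text}]")

chain = prompt | llm
print(chain.invoke({"input_text": "hello"}))
```

Each step in the real chain is traced by Instana as part of the `process_llm_query` task.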
Run the LangChain application with the following command:
python3 ./<langchain-sample-application>.py
The LangChain sample application generates a response as shown in the following example:
Response: Quantum computing is a type of computing that uses quantum-mechanical phenomena such as superposition and quantum entanglement to perform calculations. It is different from classical computing, which uses binary digits (0s and 1s) to represent information. In quantum computing, information is encoded in quantum bits, or "qubits," which can exist in a superposition of both 0 and 1 at the same time. This allows quantum computers to perform certain types of calculations much faster than classical computers.
After you configure LangChain monitoring, Instana collects traces and metrics from the LangChain application.
To view the metrics collected from the LLM that is used in LangChain, see View metrics.
Troubleshooting
You might encounter the following issue while monitoring LangChain-based applications:
Some metrics might not be listed with expected values when create_react_agent is used
If any of the metrics, including the model name, are not reported correctly, verify the package from which create_react_agent is imported. Importing it from the langchain.agents package is not supported; import it from the langgraph.prebuilt package instead.
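As a quick environment check before switching the import, you can confirm that the langgraph package is installed. The helper below is a hypothetical sketch, not part of Instana or LangChain:

```python
import importlib.util

def supported_agent_import_available():
    """Return True if langgraph is installed, so create_react_agent can be
    imported from the supported langgraph.prebuilt package."""
    return importlib.util.find_spec("langgraph") is not None

if supported_agent_import_available():
    print("Use: from langgraph.prebuilt import create_react_agent")
else:
    print("Install langgraph; importing create_react_agent from "
          "langchain.agents is not supported for Instana monitoring.")
```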