Monitoring CrewAI
CrewAI is a Python framework for creating and managing AI agents that collaborate to complete tasks. You can use CrewAI to build autonomous agents that work together, execute workflows, and optimize decision-making. CrewAI integrates with large language models (LLMs) and external tools, which makes it useful for applications such as automation, research, and customer service.
Instana provides observability for CrewAI, enabling real-time monitoring of AI agent workflows, task execution, and LLM performance. By integrating Instana’s distributed tracing, metrics collection, and error detection, you gain insight into how CrewAI agents interact, execute tasks, and optimize workflows. With end-to-end traceability for AI-driven systems, Instana makes CrewAI implementations more transparent, reliable, and scalable.
Instrumenting the CrewAI Application
To instrument the CrewAI application, complete the following steps:
- Make sure that your environment meets all the prerequisites. For more information, see Prerequisites.
- To install dependencies for CrewAI, run the following command:

  ```shell
  pip3 install crewai==0.76.9 crewai-tools==0.13.4
  ```
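The sample application in a later step also imports `traceloop.sdk`, which the command above does not install. If it is missing from your environment, install it as well (the unpinned version here is an assumption; pin a version that matches your setup):

```shell
# Assumption: traceloop-sdk is not pulled in transitively by crewai;
# install it explicitly so the Traceloop imports in the sample resolve.
pip3 install traceloop-sdk
```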
- Export the following credentials to access the watsonx models used in the CrewAI sample application:

  ```shell
  export WATSONX_URL="<watsonx-url>"
  export WATSONX_PROJECT_ID="<watsonx-project-id>"
  export WATSONX_API_KEY="<watsonx-api-key>"
  export SERPER_API_KEY="<serper-api-key>"
  ```
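Before running the sample, it can help to confirm that these variables are actually set. The helper below is a hypothetical convenience, not part of CrewAI; it only inspects the process environment:

```python
import os

# Names of the credentials exported in the previous step.
REQUIRED_VARS = [
    "WATSONX_URL",
    "WATSONX_PROJECT_ID",
    "WATSONX_API_KEY",
    "SERPER_API_KEY",
]

def missing_credentials(env=os.environ):
    """Return the required variable names that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]
```

Calling `missing_credentials()` before starting the crew gives a clearer failure than a mid-run authentication error from the model backend.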
- Run the following code to generate a CrewAI sample application:

  ```python
  from crewai import Agent, Task, Crew, Process
  from crewai.llm import LLM
  from crewai_tools import SerperDevTool
  from pydantic import BaseModel
  from traceloop.sdk import Traceloop
  from traceloop.sdk.decorators import workflow

  Traceloop.init(app_name="Crewai_test")

  search_tool = SerperDevTool()

  llm1 = LLM(model="watsonx/meta-llama/llama-3-1-70b-instruct")
  llm2 = LLM(model="watsonx/meta-llama/llama-3-3-70b-instruct")

  class JsonOutput(BaseModel):
      agent: str
      expected_output: str
      total_tokens: int
      prompt_tokens: int
      completion_tokens: int
      successful_requests: int

  # Define the Agents
  researcher = Agent(
      role="Senior Research Analyst",
      goal="Uncover cutting-edge developments in AI and data science",
      backstory="You are a Senior Research Analyst at a leading tech think tank.",
      verbose=True,
      allow_delegation=False,
      llm=llm1,
      tools=[search_tool],  # Tool for online searching
  )

  writer = Agent(
      role="Tech Content Strategist",
      goal="Craft compelling content on tech advancements",
      backstory="You are a renowned Tech Content Strategist, known for your insightful and engaging articles on technology and innovation.",
      verbose=True,
      allow_delegation=False,
      llm=llm2,
      tools=[search_tool],  # Tool for online searching
  )

  # Define the Tasks
  task1 = Task(
      description="Perform an in-depth analysis of the following topic: {topic}",
      expected_output="Comprehensive analysis report in bullet points",
      agent=researcher,
  )

  task2 = Task(
      description="Using the insights from the researcher's report, develop an engaging blog post that highlights the most significant AI advancements",
      expected_output="A compelling 3 paragraphs blog post formatted as markdown about the latest AI advancements in 2024",
      agent=writer,
      output_json=JsonOutput,
  )

  # Create the crew
  crew = Crew(
      agents=[researcher, writer],
      tasks=[task1, task2],
      verbose=True,
      process=Process.sequential,
  )

  @workflow(name="crewai_workflow")
  def watsonx_crew_kickoff(topic):
      return crew.kickoff(inputs={"topic": topic})

  topics = ["Artificial Intelligence", "Machine learning", "Neural Network"]

  for topic in topics:
      result = watsonx_crew_kickoff(topic)
  ```
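The `output_json=JsonOutput` argument on `task2` asks CrewAI to shape the task result to the pydantic schema defined in the sample. A standalone sketch of the validation that schema performs (the payload values are illustrative, not real output):

```python
from pydantic import BaseModel

# Same schema as in the sample application.
class JsonOutput(BaseModel):
    agent: str
    expected_output: str
    total_tokens: int
    prompt_tokens: int
    completion_tokens: int
    successful_requests: int

# Illustrative payload; real values come from the writer task's run.
payload = {
    "agent": "Tech Content Strategist",
    "expected_output": "A compelling 3 paragraphs blog post...",
    "total_tokens": 1200,
    "prompt_tokens": 900,
    "completion_tokens": 300,
    "successful_requests": 2,
}

# Validation coerces and type-checks each field; a malformed payload
# raises a pydantic ValidationError instead of passing through silently.
parsed = JsonOutput(**payload)
```

This is why the token counts arrive as typed integers rather than free-form text in the task output.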
- Execute the following command to run the CrewAI application:

  ```shell
  python3 ./<crewai-sample-application>.py
  ```
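By default, `traceloop-sdk` exports traces to Traceloop's hosted endpoint. To direct them to a different OpenTelemetry-compatible backend instead, the SDK honors the `TRACELOOP_BASE_URL` environment variable; the placeholder endpoint below is an assumption about your setup, not a value from this document:

```shell
# Assumption: point the Traceloop exporter at your own OTLP-compatible
# collector endpoint instead of the hosted default.
export TRACELOOP_BASE_URL="<otlp-endpoint>"
```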
CrewAI generates a response for each topic.
After you configure monitoring, Instana collects traces and metrics from the CrewAI application.
To view the metrics collected from the LLMs that are used in CrewAI, see View metrics.