In this tutorial, you'll use the Agent Communication Protocol (ACP) to explore a multi-agent, cross-platform AI workflow that demonstrates real-time agent collaboration with BeeAI and crewAI. ACP functions as a shared, open-standard messaging layer that enables agents from different frameworks to communicate and coordinate without custom integration logic.
ACP is especially valuable for enterprise AI environments, where teams often need to build agents and workflows across diverse platforms, tools and infrastructures. By providing a standardized messaging layer, ACP enables scalable, secure and modular agent collaboration that meets the demands of modern enterprise AI systems.
This project demonstrates agent interoperability by enabling AI-driven agents to collaborate across framework silos, combining agent capabilities like research, content generation and feedback into a unified workflow.
Most agentic AI frameworks handle communication by using custom or closed systems. This architecture makes it difficult to connect agents across toolchains, teams or infrastructures, especially when combining components from different AI systems.
ACP introduces a standardized, framework-independent messaging format for how autonomous agents send, receive and interpret messages. Messages are structured, typically in JSON, and contain metadata to enrich agent interactions with clarity and consistency.
By decoupling communication from an agent's internal logic, ACP allows teams to mix and match agents built with different AI agent frameworks, such as BeeAI, crewAI, LangChain or LangGraph, without requiring custom integration code. This approach increases scalability, simplifies automation and supports modular, transparent system design that aligns with modern industry standards.
By the end of this tutorial, you will have seen a practical example of ACP and have hands-on experience using the following technologies:
This project demonstrates a multi-agent workflow that showcases how ACP (through the acp-sdk) can streamline coherent and observable collaboration across agent ecosystems.
The workflow begins when the user provides a URL. From there, a modular, framework-independent system of specialized agents transforms the webpage content into a creative artifact—an original song—accompanied by professional-style critique. All components work in concert to combine these outputs into a single, unified human-readable Markdown report. This final result represents a complete transformation of the original data, blending creative generation with analytical insight.
This songwriting workflow illustrates how ACP enables an agentic AI system to coordinate collaboration between agents developed with two distinct frameworks, BeeAI and crewAI, by serving as a shared communication layer across the system.
By separating communication from implementation, the system remains modular and extensible—capable of orchestrating agents across frameworks while producing cohesive, end-to-end outputs from unstructured web content.
ACP agents
This project uses four specialized AI agents:
Songwriting and critique project workflow
Throughout the workflow, messages exchanged between agents are structured as JSON objects enriched with metadata. This metadata guides each agent's understanding of the message content, context and expected responses.
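For illustration, here's a hedged sketch of what such a message might look like; the exact wire format is defined by the acp-sdk, so the field names shown are indicative only:

```json
{
  "role": "user",
  "parts": [
    {
      "content_type": "text/plain",
      "content": "https://example.com/article-to-transform"
    }
  ]
}
```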
This workflow demonstrates a reusable pattern applicable to any use case that requires orchestrating multi-agent data transformation and analysis pipelines.
ACP provides a common messaging system that allows agents built with different frameworks to exchange information in a standardized way. This open protocol allows agents to interoperate without needing custom integrations or shared internal logic.
The ACP client is the orchestrator of this workflow: it collects the user's URL, dispatches requests to each agent server in turn and assembles the agents' outputs into the final Markdown report.
ACP client workflow overview
The ACP client communicates with every agent server through standardized ACP messages, so it needs no knowledge of how each agent is implemented internally.
Key roles of the ACP client include prompting the user for a URL, calling the BeeAI and crewAI agents in sequence and merging their responses into a unified, human-readable report.
Example client usage:
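```python
# Minimal sketch of calling an ACP agent with the acp-sdk Python client.
# The agent name, port and input below are illustrative, not this project's values.
import asyncio

from acp_sdk.client import Client
from acp_sdk.models import Message, MessagePart


async def main() -> None:
    # Connect to a locally running ACP agent server
    async with Client(base_url="http://localhost:8000") as client:
        # run_sync executes the agent and waits for the final result
        run = await client.run_sync(
            agent="songwriter",  # hypothetical agent name
            input=[
                Message(
                    parts=[
                        MessagePart(
                            content="https://example.com/article",
                            content_type="text/plain",
                        )
                    ]
                )
            ],
        )
        # Each output message carries one or more typed content parts
        for message in run.output:
            for part in message.parts:
                print(part.content)


asyncio.run(main())
```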
Here are the system requirements to run this project:
Before you get started, here’s a quick overview of the tools and provider services you’ll need.
The following list covers the main frameworks, platforms and APIs required for the multi-agent workflow.
In the subsequent sections, you’ll find step-by-step instructions for installing, configuring and using each tool and provider so you can set up your environment.
BeeAI and crewAI are both designed to work with a variety of language model providers, making them flexible for different environments and use cases. In this tutorial, OpenRouter is the LLM provider for the BeeAI agent, while Ollama is used for the crewAI agents locally.
Both frameworks are provider-independent, so you can switch to other LLM services by updating the configuration settings. Your setup might vary depending on the LLM provider you choose. Additionally, this tutorial includes an optional, preconfigured setup for using IBM watsonx.ai as an alternative cloud-based provider.
You can also use your preferred LLM provider and model; however, please note that only the configurations shown in this tutorial have been tested. Other providers and models might require additional setup or adjustments.
The following requirements are for the three supported providers in this project:
You’ll need an OpenRouter API key to use the preconfigured BeeAI agent server with cloud-based language models.
To use OpenRouter as your LLM provider for the BeeAI agent, follow these steps:
Note: The available free models might differ depending on when you run this tutorial. For free models, check out the OpenRouter free tier model list.
If you plan to use Ollama as your LLM provider for the crewAI agent, follow these steps:
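As a minimal sketch, assuming a local Ollama installation; the model name is illustrative and should match whatever your crewAI configuration references:

```bash
# Pull a local model for the crewAI agents (model name is illustrative)
ollama pull granite3.3:8b

# Start the Ollama server if it isn't already running (default: http://localhost:11434)
ollama serve
```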
To use IBM watsonx.ai as your LLM provider for the crewAI server, follow these steps:
IBM watsonx.ai is used as an optional cloud LLM provider for crewAI agents in this tutorial.
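For reference, here's a hedged sketch of the credentials crewAI (through LiteLLM) typically expects for watsonx.ai; the variable names follow LiteLLM conventions and might differ from this project's configuration:

```bash
WATSONX_URL=https://us-south.ml.cloud.ibm.com
WATSONX_APIKEY=<your-ibm-cloud-api-key>
WATSONX_PROJECT_ID=<your-watsonx-project-id>
```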
AgentOps is an optional service for tracing, monitoring and visualizing your multi-agent workflows.
If you want to use AgentOps in this project, follow these steps:
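A minimal sketch of initializing AgentOps in Python, assuming your API key is exported as AGENTOPS_API_KEY (crewAI can also pick up AgentOps automatically when the package is installed and the key is set):

```python
import os

import agentops

# Start tracing before any agents run; the environment variable name is an assumption
agentops.init(api_key=os.environ["AGENTOPS_API_KEY"])
```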
AgentOps is not required to run the workflow, but it can help you monitor agent activity and debug multi-agent interactions.
To run this project, clone the GitHub repository by using https://github.com/IBM/ibmdotcom-tutorials.git as the HTTPS URL. For detailed steps on how to clone a repository, refer to the GitHub documentation.
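For example:

```bash
git clone https://github.com/IBM/ibmdotcom-tutorials.git
```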
This tutorial can be found inside the projects directory of the repo.
Inside a terminal, navigate to this tutorial's directory:
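```bash
# The tutorial's exact folder name is omitted here; look for it under projects/
cd ibmdotcom-tutorials/projects/<this-tutorial-directory>
```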
This project requires three separate Python scripts to run simultaneously for each component of the multi-agent system. As a result, you'll need to open three terminal windows or tabs.
Start by keeping your current terminal open, then open two more terminals and make sure all three are navigated to the correct directories (as shown in the next step).
Using an IDE?
If you're using an IDE like Visual Studio Code, you can use the Split Terminal feature to manage multiple terminals side by side.
Otherwise, open three stand-alone terminal windows and navigate each to the proper subdirectory.
Terminal navigation
Each terminal is responsible for one of the following components:

- the BeeAI agent server (beeai_agent_server)
- the crewAI agent server (crewai_agent_server)
- the ACP client
Each component runs in its own virtual environment to ensure clean dependency management. This tutorial uses UV, a Rust-based Python package manager, to manage and sync environments.
Note: Make sure Python 3.11 or later is installed before proceeding.
Install UV
If you haven’t already, install UV by using Homebrew (recommended for macOS and Linux):
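```bash
brew install uv
```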
Note for Windows users: Install WSL (Windows Subsystem for Linux) and follow the Linux instructions within your WSL terminal.
Create and activate a virtual env (in each terminal)
In each terminal (BeeAI, crewAI and ACP client), run the following code:
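```bash
# Create a virtual environment (UV creates .venv by default) and activate it
uv venv
source .venv/bin/activate
```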
This step creates and activates a dedicated virtual environment for the component in that terminal. Running it in all three terminals keeps each component's dependencies cleanly isolated.
Now install dependencies in each terminal by using:
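```bash
# Install the project's locked dependencies into the active environment
uv sync
```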
This step installs the dependencies listed in the component's pyproject.toml file.
With BeeAI installed, use the CLI to start the BeeAI platform in the beeai_agent_server terminal:
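```bash
# Assumed standard BeeAI CLI subcommand; check `beeai --help` if your version differs
beeai platform start
```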
Note: On the first run, this step might take several minutes.
Set up your LLM provider (OpenRouter)
Run the following command to configure the LLM provider and model through the interactive CLI:
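```bash
# Assumed BeeAI CLI subcommand for interactive LLM setup; verify with `beeai --help`
beeai env setup
```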
Follow the prompts to select OpenRouter and enter your API key and model details.
To confirm your settings, use:
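```bash
# Assumed BeeAI CLI subcommand for printing configured environment values
beeai env list
```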
This step should output your configured provider and model settings.
Alternatively, advanced users can manually edit a .env file instead of going through the interactive CLI.
Example .env for OpenRouter
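An illustrative sketch; the variable names and the free model shown here are assumptions, so match them to your actual BeeAI configuration and OpenRouter account:

```bash
LLM_API_BASE=https://openrouter.ai/api/v1
LLM_API_KEY=<your-openrouter-api-key>
LLM_MODEL=meta-llama/llama-3.3-70b-instruct:free
```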
To verify that BeeAI is working, send a test prompt:
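```bash
# Assumes the platform ships a default "chat" agent; adjust the agent name if not
beeai run chat "Hi! Are you online?"
```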
A valid response confirms that the platform is active.
Troubleshooting
If needed, you can update or restart the platform:
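```bash
# Assumed standard BeeAI CLI subcommands
beeai platform stop
beeai platform start
```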
In the crewai_agent_server directory, configure the LLM provider for the crewAI agents.
Open the environment configuration file and set the credentials for your chosen provider (Ollama by default, or watsonx.ai).
You can also configure a different provider by following the crewAI LLM config docs.
Update crewAI agent code
In the crewAI agent code, update the LLM configuration to point at your chosen provider and model.
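A minimal sketch of a crewAI LLM configuration for a local Ollama model; the model id and environment variable name are illustrative, not this project's actual values:

```python
import os

from crewai import LLM

# LiteLLM-style "provider/model" identifier; swap in the model you pulled with Ollama
llm = LLM(
    model="ollama/granite3.3:8b",
    base_url=os.getenv("OLLAMA_BASE_URL", "http://localhost:11434"),
)
```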
Make sure the environment variable names in your .env file match the names that the agent code reads.
Once both BeeAI and crewAI are configured, start the agent servers in their respective terminals.
Start the BeeAI agent server
In the beeai_agent_server terminal:
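```bash
# Hypothetical script name; run the actual server script that ships in the
# beeai_agent_server directory
uv run server.py
```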
You should see output confirming that the server has started and is listening on its configured host and port.
The terminal should log health check pings every couple of seconds. A steady stream of successful checks confirms that the server is healthy.
Start the crewAI agent server
In the crewai_agent_server terminal, start the crewAI server script in the same way, from its own directory.
You should see the server running on its own port, separate from the BeeAI agent server.
Confirm that all agents are running
ACP-compliant agents built locally are automatically recognized by BeeAI. Use the BeeAI CLI to confirm that all local agents are registered and healthy (this step can run in any free terminal):
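```bash
# Assumed standard BeeAI CLI subcommand for listing registered agents
beeai list
```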
You should see entries for:
If all of them are listed and reachable, you've confirmed that the agents are successfully interoperating.
In the terminal dedicated to the ACP client, start the client from inside its directory.
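The actual script name is in the repo; a hypothetical invocation looks like this:

```bash
# Hypothetical script name; run the ACP client script in this directory
uv run client.py
```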
Inside the terminal, you will be prompted to enter a URL. This input triggers the multi-agent workflow.
With all agents and the client/server running, you're ready to start the ACP project!
Note: Outputs from large language models (LLMs) are probabilistic and can vary each time you run the workflow, even with the same input.
In this tutorial, you connected two different multi-agent frameworks through an ACP client/server that exposed endpoints through which the AI agents collaborated to generate and transform data. By separating communication from agent behavior, ACP makes it possible for agents built with BeeAI, crewAI, LangChain and other agent frameworks to work together without custom integration logic. This approach improves modularity, scalability and interoperability.
ACP is an open initiative driven by the need for agents to send, receive and interpret messages. Messages in ACP are structured—typically in formats like JSON—and enriched with metadata to ensure consistency and clarity across agent interactions. Whether you're using agents powered by OpenAI, Anthropic or other AI models, ACP provides a shared messaging layer that supports framework-independent interoperability.
By following this workflow, you’ve seen how creative and analytical agents can work in harmony, transforming unstructured web content into a song, professional critique and a unified Markdown report. This approach demonstrates the power of ACP to enable seamless, scalable and flexible multi-agent AI systems.
When you're done experimenting with the system, follow these steps to cleanly shut down all running components:
1. Stop each running server
In each terminal window, press Ctrl+C to interrupt the running process.
You should see shutdown messages in each terminal as the servers exit.
2. If the server hangs during shutdown
If a server becomes unresponsive or hangs during shutdown, you can stop it manually:
Find the process ID (PID)
Run the following command to locate the server process:
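```bash
# Show processes listening on TCP ports (macOS/Linux; requires lsof)
lsof -i -P -n | grep LISTEN
```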
Identify the PID of the process that you're trying to stop; in lsof output, the PID appears in the second column.
Kill the process. Use the PID to forcefully stop it:
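```bash
# Replace <PID> with the process ID found in the previous step
kill -9 <PID>
```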
Repeat this process for each server if needed.
That’s it! You've successfully run a complete cross-platform multi-agent system by using ACP.