A2A, or the Agent2Agent protocol, is an open standard that enables structured communication between AI agents, clients and tools. In this tutorial, you build an agent system in which a chat client processes user queries and sends them to an AI agent running on an A2A-compliant server.
Most agentic AI applications implement custom communication between components (for example, ChatDev’s ChatChain), making it difficult to reuse the same agent across different applications or integrate external tools. This lack of standardization prevents interoperability and limits the development of a broader agent ecosystem.
A2A solves this limitation by separating the communication layer from the agent logic through a standardized protocol built on HTTP, JSON-RPC 2.0, and Server-Sent Events (SSE). This decoupling allows agents to collaborate with other agents, serve client requests, and access external tools without custom integration code.
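To make the transport concrete, here is an illustrative request body for A2A's message/send method, assembled in Python. The method and field names follow the public A2A specification at the time of writing; treat the exact shape as version-dependent rather than as this tutorial's code, and note that the question text is just an example.

```python
import uuid

# Illustrative JSON-RPC 2.0 request body for A2A's message/send method.
# Field names follow the A2A spec and can vary between protocol versions.
request_body = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",                 # who is speaking
            "parts": [                      # content parts can mix text, files, data
                {"kind": "text", "text": "What is the capital of France?"}
            ],
            "messageId": str(uuid.uuid4()), # unique ID for this message
        }
    },
}
```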
A2A supports decentralized architectures that allow teams to evolve their AI systems incrementally without breaking client code. Teams can update tools, swap models, or modify agent behavior while maintaining a consistent interface across complex workflows.
Agents exchange information through messages structured in JSON-RPC format, enriched with metadata that gives agent interactions clarity and consistency. Each A2A server exposes an AgentCard at a well-known endpoint (/.well-known/agent-card.json) that describes the agent’s capabilities as structured JSON data. This allows clients to dynamically discover what an agent can do, much as API documentation describes available endpoints.
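For example, once the server from this tutorial is running, a client can discover the agent with a plain HTTP request. This minimal sketch uses only the Python standard library and assumes the local address and port used later in this tutorial:

```python
import json
import urllib.request

# Fetch the AgentCard from a locally running A2A server (port 9999 assumed).
url = "http://localhost:9999/.well-known/agent-card.json"
with urllib.request.urlopen(url) as response:
    card = json.load(response)

print(card.get("name"), "-", card.get("description"))
```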
Follow along to build and run an A2A agent system, and gain hands-on experience with setting up an A2A server, connecting a client and testing an agent that coordinates multiple tools.
Note: If you’ve worked with ACP (Agent Communication Protocol), you’ll recognize similarities. ACP, originally developed by IBM’s BeeAI, has joined forces with Google’s A2A under the Linux Foundation. BeeAI now provides A2A-compliant communication through its A2A adapters (A2AServer and A2AAgent). A2A also works alongside MCP (Model Context Protocol), which lets agents interact with data sources and tools, creating interoperable agent ecosystems.
This project demonstrates how A2A enables clean separation between the client interface and agent logic.
The workflow follows this sequence: the client retrieves the server’s AgentCard to discover the agent’s capabilities, sends the user’s query as an A2A message, the agent coordinates its tools to generate a response, and the client displays the result.
This workflow demonstrates a reusable pattern for use cases that require structured client-agent communication, such as chatbots, task automation systems, customer support agents and research assistants with tool orchestration.
This project uses a single AI agent with multiple tool capabilities. In more complex systems, you can deploy multiple specialized agents, each focused on specific domains or tasks.
RequirementAgent (BeeAI): A declarative agent that dynamically selects and coordinates multiple tools based on the user’s request, using the model, tools and memory configured at startup (see the sketch below).
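As a rough sketch, constructing such an agent with the beeai-framework Python package can look like the following. The import paths, tool choices and model name are assumptions based on the framework’s documentation and may differ from this tutorial’s code or your installed version:

```python
# Minimal RequirementAgent sketch (beeai-framework; paths and names assumed).
from beeai_framework.agents.experimental import RequirementAgent
from beeai_framework.backend import ChatModel
from beeai_framework.memory import UnconstrainedMemory
from beeai_framework.tools.search.wikipedia import WikipediaTool
from beeai_framework.tools.weather.openmeteo import OpenMeteoTool

agent = RequirementAgent(
    llm=ChatModel.from_name("ollama:granite3.3:8b"),  # any Ollama-compatible model
    tools=[WikipediaTool(), OpenMeteoTool()],         # tools the agent can select from
    memory=UnconstrainedMemory(),                     # preserves conversation context
)
```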
The A2A server script handles two main tasks:
Agent setup: Creates a RequirementAgent with tools and memory to handle the agent lifecycle.
Server configuration: Exposes the agent through A2A-compliant HTTP endpoints.
The server automatically exposes an AgentCard at /.well-known/agent-card.json that describes the agent’s capabilities and helps validate agent configurations.
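Conceptually, the server side reduces to a few lines. This sketch uses BeeAI’s A2AServer adapter mentioned earlier; the module path, configuration class and port are assumptions, so treat it as an outline rather than this tutorial’s exact code:

```python
# Hypothetical server entry point using BeeAI's A2AServer adapter.
# Module path and configuration options assumed; check your version's docs.
from beeai_framework.adapters.a2a import A2AServer, A2AServerConfig

# "agent" is the RequirementAgent from the sketch above.
A2AServer(config=A2AServerConfig(port=9999)).register(agent).serve()
```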
The A2A client script handles two main operations:
Connection setup: Creates an A2A client adapter
Message exchange: Sends asynchronous prompts and processes responses:
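Put together, a minimal interactive client loop might look like this sketch. It assumes BeeAI’s A2AAgent adapter with the import path and constructor arguments as named in the framework’s documentation; attribute names on the response object vary by version, so the sketch simply prints the returned object:

```python
# Hypothetical interactive client using BeeAI's A2AAgent adapter.
import asyncio

from beeai_framework.adapters.a2a.agents import A2AAgent
from beeai_framework.memory import UnconstrainedMemory

async def main() -> None:
    # Connect to the local A2A server started in this tutorial.
    agent = A2AAgent(url="http://localhost:9999", memory=UnconstrainedMemory())
    while True:
        prompt = input("User: ")
        if prompt.strip().lower() in {"exit", "quit"}:
            break
        response = await agent.run(prompt)  # sends a message/send request
        print("Agent:", response)           # inspect for the text payload

asyncio.run(main())
```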
Here are the system requirements to run this project:
Before you get started, here’s an overview of the tools required for this project:
This project uses Ollama as a model provider for the AI agent. Follow these steps to set up Ollama:
Note: You can use any Ollama-compatible model by setting the model name in the project’s configuration.
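For example, pulling a model and starting Ollama from a terminal (the model name here is illustrative; any Ollama-compatible model works):

```sh
ollama pull granite3.3:8b   # download an example model
ollama serve                # start the local Ollama server if it isn't already running
```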
To run this project, clone the GitHub repository by using https://github.com/IBM/ibmdotcom-tutorials.git as the HTTPS URL. For detailed steps on how to clone a repository, refer to the GitHub documentation.
This tutorial can be found inside the projects directory of the repo.
Inside a terminal, navigate to this tutorial’s directory:
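For example, from wherever you cloned the repo (the final directory name is a placeholder; use the actual tutorial folder inside projects):

```sh
cd ibmdotcom-tutorials/projects/<tutorial-directory>
```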
This project requires two Python scripts to run simultaneously: one for the server and one for the client. You need two terminal windows or tabs.
Keep your current terminal open, then open a second terminal, and make sure both are navigated to this tutorial’s project directory.
Using an IDE?
If you’re using an IDE like Visual Studio Code, you can use the Split Terminal feature to manage multiple terminals side by side.
Otherwise, open two stand-alone terminal windows and navigate each to the project directory.
Virtual environments keep dependencies isolated and easy to manage. To keep the server and client dependencies separate, create a virtual environment for each component.
For the server:
Navigate to the server directory:
Create a virtual environment with Python 3.11:
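For example, creating an environment named venv (matching the Windows note below):

```sh
python3.11 -m venv venv
```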
Activate the virtual environment:
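On macOS or Linux:

```sh
source venv/bin/activate
```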
Note for Windows users: Use venv\Scripts\activate to activate the virtual environment.
For the client:
Navigate to the client directory:
Create and activate a virtual environment:
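The same commands as on the server side, for example:

```sh
python3.11 -m venv venv
source venv/bin/activate   # Windows: venv\Scripts\activate
```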
Install the required dependencies for each component by running the install command in each terminal:
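For example, assuming each component directory ships a requirements.txt file (check the repo for the actual file names):

```sh
pip install -r requirements.txt
```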
In the first terminal, start the A2A agent server:
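For example, if the server script is named server.py (a placeholder; use the actual file name in the tutorial directory):

```sh
python server.py
```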
You should see startup output indicating that the server is running.
The server is now listening for incoming requests from the client application, ready to support agent-to-agent communication.
In the other terminal, start the A2A client:
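For example, if the client script is named client.py (again a placeholder; use the actual file name in the tutorial directory):

```sh
python client.py
```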
This should prompt you for input.
Type a message in the client terminal and press Enter.
In the server terminal, you can see A2A protocol logs showing the communication with push notifications:
The first request retrieves the AgentCard that describes the agent’s capabilities. The second request sends your message to the agent as a JSON-RPC message/send call.
Note: Outputs from LLMs are probabilistic and can vary each time you run the workflow, even with the same input.
Experiment with different types of queries to test the agent’s various tools.
Navigate to http://localhost:9999/.well-known/agent-card.json in your browser to view the AgentCard.
This JSON document describes the agent’s name, description, capabilities and available skills.
This AgentCard allows any A2A-compliant client to discover and interact with the agent without prior knowledge of its implementation details.
In this tutorial, you built a chat system by using an A2A-compliant server that exposed a structured interface for client-agent communication. By separating the messaging layer from internal logic, the Agent2Agent protocol enables teams to update agent capabilities, swap models or modify tool configurations without changing client code. This flexibility is especially valuable when coordinating input-required tasks, tracking task status or treating each operation as a discrete unit of work.
A2A works by defining a common message format that any compliant component can understand, allowing autonomous agents to collaborate with other agents. The protocol specification defines how messages are structured in JSON-RPC format and enriched with metadata to ensure consistency and clarity across interactions.
This tutorial builds on the foundational examples provided by the A2A samples repository. For more information about the original implementation, refer to the readme file in the repository, which provides more context and examples for building A2A-compliant systems.
For real-world deployments, A2A servers can implement authentication mechanisms to secure agent endpoints, use server-sent events for streaming responses, and scale to handle production workflows. By following this workflow, you saw how a command line client can interact with an AI agent through a standardized protocol, enabling the agent to coordinate multiple tools and provide contextual responses. This approach demonstrates the power of A2A to enable maintainable, scalable and flexible AI systems.
When you’re done experimenting with the system, follow these steps to cleanly shut down all running components:
In each terminal window, press Ctrl+C to stop the running process.
You should see output confirming that each process has stopped.
If the server becomes unresponsive or hangs on shutdown, you can forcefully stop it:
Find the process ID (PID):
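For example, on macOS or Linux, list the process bound to the server port:

```sh
lsof -i :9999
```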
Identify the PID of the process that you’re trying to stop.
End the process:
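Replace <PID> with the process ID from the previous step:

```sh
kill <PID>   # use kill -9 <PID> only if the process ignores the regular signal
```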
Repeat these steps for any other process that doesn’t stop cleanly.
That’s it. You’ve successfully run a complete A2A-compliant chat system.