Introduced by Anthropic, the Model Context Protocol (MCP) provides a standardized way for AI models to get the context they need to carry out tasks. In agentic systems, MCP acts as a connective layer that lets AI agents communicate with external services and tools, such as APIs, databases, files, web searches and other data sources.
MCP encompasses these three key architectural elements:
The MCP host contains orchestration logic and can connect each MCP client to an MCP server. It can host multiple clients.
An MCP client converts user requests into a structured format that the protocol can process. Each client has a one-to-one relationship with an MCP server. Clients manage sessions, parse and verify responses and handle errors.
The MCP server converts client requests into actions on the underlying service and exposes that service's capabilities as tools. Servers are typically distributed as open source repositories (often on GitHub) implemented in various programming languages. They can also connect LLM inferencing to the MCP SDK through AI platform providers such as IBM and OpenAI.
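To make the host-client-server relationship concrete, here is a minimal sketch in plain Python. The class and method names (AgentHost, MCPClient, MCPServer, call_tool) are illustrative rather than taken from any official MCP SDK; the point is simply that a host orchestrates many clients and each client talks to exactly one server.

```python
# Illustrative sketch of MCP's host/client/server roles.
# Names (AgentHost, MCPClient, MCPServer) are hypothetical, not the official SDK.

class MCPServer:
    """Wraps an external service and exposes its capabilities as named tools."""

    def __init__(self, name: str, tools: dict):
        self.name = name
        self.tools = tools  # tool name -> callable

    def call_tool(self, tool: str, arguments: dict):
        return self.tools[tool](**arguments)


class MCPClient:
    """Maintains a one-to-one session with a single MCP server."""

    def __init__(self, server: MCPServer):
        self.server = server

    def request(self, tool: str, arguments: dict):
        # In a real client this would be a JSON-RPC message over a transport;
        # a direct call keeps the sketch self-contained.
        return self.server.call_tool(tool, arguments)


class AgentHost:
    """Contains the orchestration logic and holds one client per connected server."""

    def __init__(self):
        self.clients: dict[str, MCPClient] = {}

    def connect(self, server: MCPServer):
        self.clients[server.name] = MCPClient(server)

    def use_tool(self, server_name: str, tool: str, arguments: dict):
        return self.clients[server_name].request(tool, arguments)


if __name__ == "__main__":
    weather = MCPServer("weather", {"get_forecast": lambda city: f"Sunny in {city}"})
    host = AgentHost()
    host.connect(weather)
    print(host.use_tool("weather", "get_forecast", {"city": "Austin"}))
```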
In the transport layer between clients and servers, messages are transmitted in JSON-RPC 2.0 format using either standard input/output (stdio) for lightweight, synchronous messaging or server-sent events (SSE) for asynchronous, event-driven calls.
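The sketch below shows what such an exchange might look like over the stdio transport, with each JSON-RPC 2.0 message written as a single line of JSON. The "tools/call" method name follows the MCP specification, but the tool name, arguments and result payload are made up for illustration, and the result shape only loosely mirrors MCP's tool-result format.

```python
import json
import sys

# A client would write a JSON-RPC 2.0 request to the server's stdin ...
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_forecast",            # hypothetical tool exposed by a server
        "arguments": {"city": "Austin"},
    },
}
sys.stdout.write(json.dumps(request) + "\n")
sys.stdout.flush()

# ... and the server would answer on its stdout with a response carrying the same "id".
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "Sunny in Austin"}]},
}
print(json.dumps(response))
```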