Calico is an open source community project that provides networking for containers and virtual machines.

Calico is built on the third layer, also known as Layer 3 or the network layer, of the Open Systems Interconnection (OSI) model. Calico uses the Border Gateway Protocol (BGP) to build routing tables that facilitate communication among agent nodes. By using this protocol, Calico networks can offer better performance and network isolation than overlay-based alternatives.

Calico implements the Kubernetes Container Network Interface (CNI) as a plug-in and supplies agents for Kubernetes that provide networking for containers and pods.

Calico creates a flat Layer-3 network and assigns a fully routable IP address to every pod. It divides a large network CIDR (Classless Inter-Domain Routing) into smaller blocks of IP addresses and assigns one or more of these smaller blocks to nodes in the cluster. The division is done during IBM Cloud Private installation by using the network_cidr parameter in config.yaml in CIDR notation.
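
The division of the network CIDR can be illustrated with Python's standard ipaddress module. The values below (a /16 cluster CIDR and /26 blocks, a common default IPAM block size in Calico) are illustrative only, not taken from a specific installation:

```python
import ipaddress

# Hypothetical cluster-wide pod CIDR, as set by network_cidr in config.yaml.
network_cidr = ipaddress.ip_network("10.1.0.0/16")

# Calico IPAM carves the CIDR into smaller blocks; /26 (64 addresses)
# is a common default block size. Each block is assigned to a node.
blocks = list(network_cidr.subnets(new_prefix=26))

print(len(blocks))   # number of /26 blocks available to hand out
print(blocks[0])     # first block, for example assigned to the first node
```

A node can request additional blocks when it exhausts the ones that it already owns, which is why a node can end up with more than one block.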

Calico by default creates a BGP mesh between all nodes of the cluster and broadcasts the routes for container networks to all worker nodes. Each node is configured to act as a Layer 3 gateway for the subnet that is assigned to it, and provides connectivity to the pod subnets that are hosted on that node. All nodes participate in the BGP mesh, which advertises all of the local routes that the worker nodes own to all other nodes. BGP peers that are external to the cluster can also participate, but the size of the cluster affects how many BGP advertisements these external peers receive. Route reflectors might be required when the cluster scales past a certain size.
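
The scaling pressure of a full mesh follows from simple arithmetic: with n nodes, every node peers with every other node, which yields n*(n-1)/2 BGP sessions. A quick sketch:

```python
def bgp_mesh_sessions(nodes: int) -> int:
    """Number of BGP sessions in a full node-to-node mesh."""
    return nodes * (nodes - 1) // 2

# The session count grows quadratically with the node count, which is
# why route reflectors become attractive as the cluster scales.
for n in (10, 50, 100):
    print(n, bgp_mesh_sessions(n))
```

With a route reflector, each node instead maintains a single session to the reflector, so the session count grows linearly rather than quadratically.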

For more information, see Configuring BGP Peers.

When routing pod traffic, Calico uses system capabilities such as the node's local route tables and iptables. All pod traffic traverses the iptables rules before it is routed to its destination.
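
Conceptually, the node's route table selects the most specific (longest-prefix) route that matches the destination pod IP. A minimal sketch of that lookup, with hypothetical pod blocks and node addresses, using only the standard ipaddress module:

```python
import ipaddress

# Hypothetical routes: pod block -> next hop (the node that owns the block).
routes = {
    ipaddress.ip_network("10.1.0.0/16"): "default-gw",     # cluster-wide CIDR
    ipaddress.ip_network("10.1.5.0/26"): "192.168.0.12",   # worker node 2
    ipaddress.ip_network("10.1.5.64/26"): "192.168.0.13",  # worker node 3
}

def next_hop(dst: str) -> str:
    """Return the next hop of the longest-prefix route that matches dst."""
    ip = ipaddress.ip_address(dst)
    matches = [net for net in routes if ip in net]
    return routes[max(matches, key=lambda net: net.prefixlen)]

print(next_hop("10.1.5.70"))   # falls inside 10.1.5.64/26
```

In a real cluster, the kernel performs this lookup, and iptables rules are evaluated before the packet is forwarded.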

Calico maintains its state by using an etcd key/value store. By default in IBM Cloud Private, Calico uses the same etcd key/value store as Kubernetes to store the policy and network configuration state.

Calico can be configured to allow pods to communicate with each other with or without IP-in-IP tunneling. IP-in-IP adds a header to every packet as part of the encapsulation, but allows containers to communicate over the overlay network through almost any non-overlapping underlay network.
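
The cost of the extra header is easy to quantify: IP-in-IP prepends one additional 20-byte IPv4 header to every packet, so the inner payload that fits in a single frame shrinks accordingly. A small sketch, assuming the usual 1500-byte Ethernet MTU:

```python
IPIP_HEADER_BYTES = 20  # one extra outer IPv4 header per packet

def inner_mtu(link_mtu: int) -> int:
    """Largest inner packet that fits in one frame under IP-in-IP."""
    return link_mtu - IPIP_HEADER_BYTES

print(inner_mtu(1500))  # standard Ethernet
print(inner_mtu(9000))  # jumbo frames
```

If the tunnel MTU is not lowered to account for this header, packets that were sized for the link MTU are fragmented.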

In some environments where the underlay subnet address space is constrained and there is no access to add additional IP address pools, like on some public clouds, Calico can be a good fit. However, in environments that do not require an overlay, IP-in-IP tunneling must be disabled to remove the packet encapsulation resource use, and to allow any physical routing infrastructure to do packet inspection for compliance and audit. In such scenarios, the underlay network can be made aware of the additional pod subnets by adding the underlay network routers to the BGP mesh. For more information about a Calico network when nodes are on different network segments, see Calico network across different network segments.

Calico components

Calico has the following components:

  1. calico/node agent
  2. calico/cni
  3. calico/kube-controller

To ensure that the nodes meet the Calico system requirements, review the information in Preparing the nodes.

Diagram: sample distribution of calico components across master and worker nodes in a cluster

calico/node agent

This entity consists of three components: felix, which programs the routes and iptables rules on the host; bird, a BGP client that distributes the routes; and confd, which watches the Calico data store and generates the bird configuration.


calico/cni

The CNI plug-in provides the IPAM functions by provisioning IP addresses for the pods that are hosted on the nodes.


calico/kube-controller

The calico/kube-controller watches Kubernetes NetworkPolicy objects and keeps the Calico data store in sync with the Kubernetes objects. The calico/node that is running on each node uses the information in the Calico etcd data store to program the local iptables.
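
The controller's job is a reconciliation loop: watch the Kubernetes NetworkPolicy objects and make the Calico data store match. The sketch below models both stores as plain dictionaries to show the sync logic only; it is not the calico/kube-controller implementation:

```python
def sync(kubernetes_policies: dict, calico_store: dict) -> dict:
    """Return a copy of calico_store brought in line with Kubernetes."""
    synced = dict(calico_store)
    # Create or update entries for every policy that Kubernetes knows about.
    for name, spec in kubernetes_policies.items():
        synced[name] = spec
    # Delete entries whose Kubernetes object no longer exists.
    for name in list(synced):
        if name not in kubernetes_policies:
            del synced[name]
    return synced

# Hypothetical state: one policy missing from Calico, one stale entry.
k8s = {"allow-dns": {"port": 53}, "deny-all": {}}
calico = {"deny-all": {}, "stale-policy": {"port": 80}}
print(sync(k8s, calico))
```

The real controller reacts to watch events incrementally instead of recomputing the full state, but the end condition is the same: the data store mirrors the Kubernetes objects.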


calicoctl

calicoctl is a command-line tool that can be used to manage the Calico network and security policies and other Calico configurations. It communicates directly with etcd to manipulate the data store. It provides a number of resource management commands and can be used to troubleshoot Calico network issues. To set up your Calico CLI, see Installing the Calico CLI (calicoctl).

Calico network across different network segments

When nodes are on different network segments, they are connected by a router in the underlay and infrastructure network. The traffic between two nodes on different subnets traverses through the router, which is the gateway for the two subnets. If the router is not aware of the pod subnet, it cannot forward the packets between the hosts.

Diagram: two nodes on different networks and a router that connects them

There are two ways to handle this scenario:

  1. Calico can be configured to create IP-in-IP tunnel endpoints on each node for every subnet that is hosted on the node. Any packet that originates from a pod and egresses the node is encapsulated with an IP-in-IP header, and the node IP address is used as the source. This way, the infrastructure router does not see the pod IP addresses.

    The IP-in-IP tunneling adds network overhead, reducing throughput and increasing latency, because of the additional packet processing at each endpoint to encapsulate and decapsulate packets. On bare metal, the resource use is not significant because certain network operations can be offloaded to the network interface cards. However, on virtual machines, the resource use can be significant, and it is also affected by the number of CPU cores and the network I/O technologies that are configured and used by the hypervisors. The additional packet encapsulation overhead can also be significant when smaller maximum transmission unit (MTU) sizes are used, because it can introduce packet fragmentation. Jumbo frames must be enabled whenever possible.

  2. The second option is to make the infrastructure router aware of the pod network. You can do so by enabling BGP on the router and adding the nodes in the cluster as BGP peers. These steps allow the router and the hosts to exchange route information with each other. As in the BGP mesh, the size of the cluster can come into play in this scenario: every node in the cluster becomes a peer of the router after BGP is enabled on the router.
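
The encapsulation in the first option can be pictured as wrapping the pod-to-pod packet inside an outer node-to-node packet, so the infrastructure router only ever sees node IP addresses. A purely conceptual sketch; all addresses are hypothetical:

```python
def encapsulate(inner_src_pod, inner_dst_pod, src_node, dst_node):
    """Model an IP-in-IP packet as nested dictionaries."""
    return {
        # IP protocol number 4 identifies IP-in-IP encapsulation.
        "outer": {"src": src_node, "dst": dst_node, "protocol": 4},
        "inner": {"src": inner_src_pod, "dst": inner_dst_pod},
    }

pkt = encapsulate("10.1.5.2", "10.1.9.7", "192.168.0.12", "192.168.1.40")
# The router forwards on the outer header only; pod IPs stay hidden.
print(pkt["outer"]["src"], pkt["outer"]["dst"])
```

The receiving node strips the outer header and delivers the inner packet to the destination pod, which is why the router never needs a route for the pod subnets in this option.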

IBM Cloud Private configuration options for using Calico container networking

The following parameters in config.yaml configure the Calico container network:

network_type: calico
calico_ipip_mode: Always
calico_tunnel_mtu: 1430
calico_ip_autodetection_method: can-reach={{ groups['master'][0] }}

For more information about IBM Cloud Private deployment topologies that use Calico, see IBM Cloud Private deployment topologies.

Calico monitoring in Prometheus and Grafana

By default, the Prometheus component that is running on the management nodes scrapes the Calico node agent that is running in IBM Cloud Private for metrics. IBM Cloud Private includes a Grafana dashboard that displays cluster network metrics that are retrieved from Calico in graphic representation.

Additionally, alerts can be set by using these metrics that are collected from Felix.
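
As a sketch of what scraping means here: Prometheus periodically fetches a plain-text metrics page from each agent and parses name/value lines. The parser below handles a tiny hand-written sample in the Prometheus text exposition format; the metric names are illustrative examples, not a guaranteed list of Felix metrics:

```python
def parse_metrics(text: str) -> dict:
    """Parse simple 'name value' lines of the Prometheus text format."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip HELP/TYPE comments
            continue
        name, value = line.rsplit(" ", 1)
        metrics[name] = float(value)
    return metrics

# Hand-written sample; real Felix output exposes many more series.
sample = """\
# HELP felix_active_local_endpoints Number of active endpoints on this host.
felix_active_local_endpoints 12
felix_cluster_num_hosts 5
"""
print(parse_metrics(sample))
```

Alerting rules in Prometheus then fire when an expression over such series crosses a threshold, for example when the reported host count drops.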