An IBM® Cloud Private cluster has four main classes of nodes: boot, master, worker, and proxy.
You can optionally specify management, Vulnerability Advisor (VA), and etcd nodes in your cluster.
You must determine the architecture of your IBM Cloud Private cluster before you install it. Configuring multiple master nodes for master node high availability, and separating the management node from the master node, are options only during installation. After installation, you can add or remove only worker, proxy, management, and VA nodes from your cluster.
Note: In the following images, the clusters represent minimal IBM Cloud Private configurations. Actual production configurations can vary.
- Boot node
- Master node
- Worker node
- Proxy node
- Management node
- VA node
- etcd node
- Cluster architectures
A boot, or bootstrap, node is used for running installation, configuration, node scaling, and cluster updates. Only one boot node is required for any cluster. You can use a single node as both the boot and master node.
You can use a single boot node for multiple clusters. In that case, the boot and master roles cannot share a node; each cluster must have its own master node. On the boot node, you must have a separate installation directory for each cluster. If you are providing your own certificate authority (CA) for authentication, you must have a separate CA domain for each cluster.
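For example, on a boot node that manages two clusters, you might keep one installation directory per cluster. The directory names and paths below are illustrative only, not prescribed by IBM Cloud Private:

```shell
# Illustrative layout: one boot node, one installation directory per cluster.
# Each directory holds only that cluster's configuration files.
mkdir -p /tmp/icp-boot/cluster-a
mkdir -p /tmp/icp-boot/cluster-b

# Confirm that the two clusters' configurations are kept separate.
ls /tmp/icp-boot
```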
A master node provides management services and controls the worker nodes in a cluster. Master nodes host the processes that are responsible for resource allocation, state maintenance, scheduling, and monitoring. In a high availability (HA) environment that contains multiple master nodes, if the leading master node fails, failover logic automatically promotes a different node to the master role. Hosts that can act as the master are called master candidates.
A worker node is a node that provides a containerized environment for running tasks. As demands increase, more worker nodes can easily be added to your cluster to improve performance and efficiency. A cluster can contain any number of worker nodes, but a minimum of one worker node is required.
A proxy node is a node that transmits external requests to the services that are created inside your cluster. In a high availability (HA) environment that contains multiple proxy nodes, if the leading proxy node fails, failover logic automatically promotes a different node to the proxy role. While you can use a single node as both master and proxy, it is best to use dedicated proxy nodes to reduce the load on the master node. A cluster must contain at least one proxy node if load balancing is required inside the cluster.
A management node is an optional node that hosts only management services, such as monitoring, metering, and logging. By configuring dedicated management nodes, you can prevent the master node from becoming overloaded. You can enable a separate management node only during IBM Cloud Private installation. If you do not enable a separate management node during installation, management services are placed on the master node.
A VA (Vulnerability Advisor) node is an optional node that is used for running the Vulnerability Advisor services. Vulnerability Advisor services are resource intensive. If you use the Vulnerability Advisor service, specify a dedicated VA node. For more information about the Vulnerability Advisor, see Vulnerability Advisor.
An etcd node is an optional node that is used for running the etcd distributed key-value store. Configuring an etcd node in an IBM Cloud Private cluster that has many nodes, such as 100 or more, helps to improve etcd performance. For more information about configuring an etcd node, see Setting the node roles in the hosts file.
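As a sketch of how the node roles described in this topic come together, the cluster's hosts file assigns a role to each node by listing the node's IP address under the corresponding section. The section names below follow the roles in this topic; the IP addresses are placeholders:

```
[master]
10.0.0.1

[worker]
10.0.0.2
10.0.0.3

[proxy]
10.0.0.4

[management]
10.0.0.5

[va]
10.0.0.6

[etcd]
10.0.0.7
```

Optional sections, such as [management], [va], and [etcd], are included only when you dedicate nodes to those roles; otherwise those services run on the master node.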
If you use proxy nodes in your cluster, the architecture resembles the following diagram:
If you use management nodes in your cluster, the architecture resembles the following diagram:
If you use VA nodes in your cluster, the architecture resembles the following diagram: