An IBM® Cloud Private cluster has four main classes of nodes: boot, master, worker, and proxy.
You can optionally specify management and Vulnerability Advisor (VA) nodes in your cluster.
You determine the architecture of your IBM Cloud Private cluster before you install it. After installation, you can add or remove only worker, proxy, management, and VA nodes from your cluster. You cannot convert a standard cluster into a high availability cluster, or add more master nodes to a high availability cluster.
Important: The boot, master, proxy, VA, and management nodes in your cluster must use the same platform architecture. Only the worker nodes can use a different platform architecture. For example, if you plan to use Linux® on Power® 64-bit LE nodes as master nodes, you must also use a Linux® on Power® 64-bit LE boot node.
A boot or bootstrap node is used for running installation, configuration, node scaling, and cluster updates. Only one boot node is required for any cluster. You can use a single node for both master and boot.
You can use a single boot node for multiple clusters. In that case, the boot node cannot double as a master node; each cluster must have its own master node. On the boot node, you must maintain a separate installation directory for each cluster. If you provide your own certificate authority (CA) for authentication, you must also have a separate CA domain for each cluster.
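As a minimal sketch of this layout, the following commands create two independent installation directories on one boot node. The directory names are illustrative, not mandated by the installer; each directory would hold its own cluster configuration files.

```shell
# Sketch: one boot node serving two clusters.
# Directory names are placeholders, not required by the installer.
BASE=$(mktemp -d)
mkdir -p "$BASE/cluster-a" "$BASE/cluster-b"

# Each directory keeps that cluster's own configuration
# (for example its hosts file, SSH key, and config.yaml),
# so the two installations stay fully independent.
ls "$BASE"
```

Because every cluster reads only from its own directory, updating or scaling one cluster from the boot node cannot disturb the configuration of the other.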
A master node provides management services and controls the worker nodes in a cluster. Master nodes host processes that are responsible for resource allocation, state maintenance, scheduling, and monitoring. Because a high availability (HA) environment contains multiple master nodes, if the leading master node fails, failover logic automatically promotes a different node to the master role. Hosts that can act as the master are called master candidates.
A worker node provides a containerized environment for running tasks. As demands increase, you can easily add more worker nodes to your cluster to improve performance and efficiency. A cluster can contain any number of worker nodes, but requires at least one.
A proxy node transmits external requests to the services that are created inside your cluster. Because a high availability (HA) environment contains multiple proxy nodes, if the leading proxy node fails, failover logic automatically promotes a different node to the proxy role. Although you can use a single node as both master and proxy, it is best to use dedicated proxy nodes to reduce the load on the master node. A cluster must contain at least one proxy node if load balancing is required inside the cluster.
If you use these node types in your cluster, the architecture resembles the following diagram:
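To make the node roles concrete, the following is a sketch of how they might be declared in the installer's INI-style hosts file before installation. The IP addresses are placeholders; consult the installation documentation for the exact file location and options for your release.

```ini
; Illustrative hosts file for a minimal cluster.
; All IP addresses below are placeholders.
[master]
10.0.0.1

[worker]
10.0.0.2
10.0.0.3

[proxy]
10.0.0.4
```

Listing the same IP address under both [master] and [proxy] is what a combined master/proxy node would look like; dedicated proxy entries, as shown here, keep that load off the master.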
A management node is an optional node that only hosts management services such as monitoring, metering, and logging. By configuring dedicated management nodes, you can prevent the master node from becoming overloaded. You can enable the management node only during IBM Cloud Private installation.
If you use a management node in your cluster, the architecture resembles the following diagram:
A Vulnerability Advisor (VA) node is an optional node that is used for running the Vulnerability Advisor services. Vulnerability Advisor services are resource intensive. If you use the Vulnerability Advisor service, specify a dedicated VA node. For more information about the Vulnerability Advisor, see Vulnerability Advisor.
If you use a VA node in your cluster, the architecture resembles the following diagram:
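Extending the earlier sketch, optional management and VA nodes might be declared as additional sections in the same INI-style hosts file. Again, all IP addresses are placeholders, and the exact section names and options depend on your release.

```ini
; Illustrative hosts file including the optional node types.
; All IP addresses below are placeholders.
[master]
10.0.0.1

[worker]
10.0.0.2

[proxy]
10.0.0.3

[management]
10.0.0.4

[va]
10.0.0.5
```

Because management and VA nodes can be enabled only at installation time, these sections must be present in the hosts file before you run the installer.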