IBM® Cloud Private components
IBM Cloud Private has two main components: a container manager (Docker) and a container orchestrator (Kubernetes).
Other components of an IBM Cloud Private cluster work alongside these main components to provide services such as authentication, storage, networking, logging, and monitoring. A cluster management console is also provided, which serves as a centralized management location for these services.
For more information about architecture models and node types, see Architecture.
Note: Management components, such as monitoring, metering, and logging, run on the management node. If no management node is present in your cluster, then the management components run on the master node.
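The placement rule in the note above — management workloads prefer a dedicated management node and fall back to the master node — can be sketched in Python. The `role` field below is an illustrative stand-in for the real Kubernetes node labels, not an actual API field:

```python
def pick_management_node(nodes):
    """Return the node that should host management components.

    Prefers a node with the 'management' role; falls back to a 'master'
    node when no management node exists, mirroring the placement rule in
    the note above. The 'role' key is a hypothetical stand-in for real
    Kubernetes node labels.
    """
    for role in ("management", "master"):
        for node in nodes:
            if node.get("role") == role:
                return node
    return None

# Example: a cluster with no dedicated management node falls back to master.
cluster = [{"name": "node-1", "role": "master"},
           {"name": "node-2", "role": "worker"}]
print(pick_management_node(cluster)["name"])  # node-1
```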
|Component||Version||Node||Description|
|Alert manager||0.13.0||Single management node||Handles alerts sent by the Prometheus server. It deduplicates, groups, and routes them to the correct receiver integration, such as Slack, email, or PagerDuty.|
|Ansible based installer and ops manager||2.5.0||Boot node||Deploys IBM Cloud Private on master and worker nodes. The boot node is also used to scale the size of the cluster on demand, and for doing rolling updates.|
|Authentication manager|| ||Each master node||Provides an HTTP API for managing users. The API is implemented in a RESTful manner, and OpenID Connect is used for authentication.|
|calico/node||3.0.4||All nodes, except the boot node.||Sets the Calico network configuration on each node. For more information about Calico components, see https://docs.projectcalico.org/v2.6/releases/.|
|calicoctl||2.0.2||Each master node||A client tool that runs as a Kubernetes job to set up overall Calico configurations.|
|calico/cni||2.0.3||All nodes, except the boot node.||Sets the network CNI plug-ins on each node.|
|calico/kube-policy-controller||2.0.2||Each master node||A controller center that sets the network policy in the IBM Cloud Private cluster.|
|Docker Registry||2||Each master node||Private image registry that is used to store container image files in image repositories. The Docker distribution and registry version is API V2.|
|Default backend||1.2||Single master node||Minor component of the ingress controller that assists with the routing of inbound connections to services in your cluster.|
|Elasticsearch||5.5.1||Single management node||Stores the system and application logs and metrics. Elasticsearch also provides an advanced API that can be used for querying these logs and metrics.|
|etcd||3.2.14||Each master node||Distributed key-value store that maintains configuration data.|
|Filebeat||5.5.1||All nodes, except the boot node.||Collects the logs for all system components, and user application containers that are running on each node.|
||Single management node||Facilitates cluster discovery and management in a multiple cluster environment.|
|GlusterFS||3.12.1||Selected worker nodes||A storage file system.|
|Grafana||4.6.3||Single management node||Data visualization and monitoring, with support for Prometheus as a data source.|
|Heapster||1.4.0||Single master node||Connects to the kubelet that is running in each worker node and collects node and container metrics. These metrics include CPU, memory, and network usage.|
|Heketi||5.0.0||Runs as a pod on any worker node.||CLI to manage GlusterFS.|
|Helm (Tiller)||2.7.2||Single master node||Manages Kubernetes charts (packages).|
|IBM Cloud Private management console|| ||Each master node||A web portal that is based on the Open DC/OS GUI. The management console connects to the leading master node by using the virtual IP (VIP) address that the VIP manager provides.|
|Image manager|| ||Each master node||Manages images by providing extended features to the Docker registry. These features include authorization for push, pull, and remove operations, and for cataloging image libraries.|
|Indices-cleaner||0.2||Single management node||Cleans up Elasticsearch data.|
|Kibana||5.5.1||Single management node||A UI providing easy access to data stored in Elasticsearch, plus the ability to create visualizations and dashboards of that data.|
|Kubelet||1.10.0||All nodes, except the boot node.||Supervises the system components of the cluster.|
|KubeDNS||1.14.4||All master nodes||Provides service discovery for Kubernetes applications.|
|Kubernetes apiserver||1.10.0||Each master node||Provides a REST API for validating and configuring data for Kubernetes objects. These Kubernetes objects include pods, services, and replication controllers.|
|Kubernetes controller manager||1.10.0||Each master node||Maintains the shared state of the Kubernetes cluster by monitoring and adjusting the current state to ensure that the required service level is in effect. This maintenance is done through the Kubernetes API server.|
|Kubernetes pause||3.0||All nodes, except the boot node.||Stores the IP address for pods, and sets up the network namespace for other containers that join the pod.|
|Kubernetes proxy||1.10.0||All nodes, except the boot node.||Takes traffic that is directed at Kubernetes services and forwards it to the appropriate pods.|
|Kubernetes scheduler||1.10.0||Each master node||Assigns pods to worker nodes based on scheduling policy.|
|kube_state_metrics||1.2.0||Single management node||Communicates with the Kubernetes API server to generate metrics about the state of Kubernetes objects.|
|Logstash||5.5.1||Single management node||Transforms and forwards the logs that are collected by Filebeat to Elasticsearch.|
|MariaDB||10.1.16||Each master node||Database that is used by OIDC.|
|Metering service|| ||Single management node||Collects usage metrics for your applications and cluster.|
|MongoDB||3.6||Each master node||Database that is used by metering service (IBM® Cloud Product Insights), Helm repository server, and Helm API server.|
|OpenID Connect (OIDC)||1.0||Each master node||Identity protocol built on OAuth 2.0. The WebSphere Liberty profile is used as the OIDC provider, and it can be configured to integrate with an existing enterprise LDAP server.|
|Prometheus|| ||Single management node||Collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts when a specified condition is observed to be true.|
|Rescheduler||0.5.2||Each master node||Used for pod management in a cluster. A rescheduler is an agent that proactively relocates running pods to optimize the layout of pods in a cluster. For more information about the Kubernetes rescheduler, see https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/rescheduler.md|
|Router|| ||Each master node||Hosts the management console and acts as the reverse proxy for all system component APIs.|
|Service Catalog||0.1.2||Each master node||Implements the Open Service Broker API to provide service broker integration for IBM Cloud Private.|
|UCarp||1.5.2||Each master and proxy node||Used to manage virtual IP (VIP) on the master node. This component helps to maintain high availability (HA) in the cluster. UCarp requires an HA master environment to start.|
|Unified router|| ||Single master node||Supports backend functions of the IBM Cloud Private management console.|
|vip_manager||1.0||Master and proxy nodes||Manages the virtual IP (VIP) addresses for the master and proxy nodes.|
|NGINX Ingress controller||0.13.0||Each proxy node||Used to load balance NodePort Kubernetes services.|
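Several components in the table cooperate around alerting: Prometheus fires alerts, and the Alert manager deduplicates and groups them before routing a group to a receiver. As an illustration only — not Alertmanager's actual algorithm — this Python sketch collapses duplicate alerts and groups the rest by a label set, the way a `group_by` rule might:

```python
from collections import defaultdict

def group_alerts(alerts, group_by=("alertname", "severity")):
    """Deduplicate and group alerts by the given label keys.

    A teaching sketch of what the Alert manager does before routing a
    group to a receiver such as Slack, email, or PagerDuty; not the real
    Alertmanager implementation.
    """
    groups = defaultdict(set)
    for alert in alerts:
        key = tuple(alert["labels"].get(k, "") for k in group_by)
        # Alerts with identical label sets collapse into one entry
        # (deduplication); distinct label sets stay separate members.
        groups[key].add(tuple(sorted(alert["labels"].items())))
    return {key: len(members) for key, members in groups.items()}

alerts = [
    {"labels": {"alertname": "HighCPU", "severity": "warning", "node": "w1"}},
    {"labels": {"alertname": "HighCPU", "severity": "warning", "node": "w1"}},  # duplicate
    {"labels": {"alertname": "HighCPU", "severity": "warning", "node": "w2"}},
]
print(group_alerts(alerts))  # {('HighCPU', 'warning'): 2}
```

The two `w1` alerts are identical, so the `('HighCPU', 'warning')` group contains two unique members, not three.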
Vulnerability Advisor (VA) components (optional feature)
|Component||Version||Node||Description|
|Kafka||0.10.0.1||VA node||Data pipeline component that is used for data ingestion and curation.|
|Security Analytics Service (SAS) components||1.2.1||VA node||Vulnerability Advisor frontend service components. SAS components provide RESTful APIs for the Vulnerability Advisor crawlers and the Vulnerability Advisor dashboard. The crawlers output scanned container and image information, known as frames, into the Vulnerability Advisor data pipeline by using the SAS APIs. The Vulnerability Advisor dashboard also uses SAS APIs to report Vulnerability Advisor findings.|
|Statsd||0.7.2||VA node||Used by the Vulnerability Advisor service for internal system monitoring.|
|VA Elasticsearch||5.5.1||VA node||Data pipeline component used for indexing and querying Vulnerability Advisor data and analytics annotations.|
|VA Elasticsearch curator||5.4.1||VA node||Elasticsearch curator that is used to manage the Vulnerability Advisor index size and to prune old indices.|
|VA Annotators||1.2.1||VA node||Vulnerability Advisor data pipeline components that improve the security of scanned containers and image data by using various analytics, including vulnerability analysis, compliance checking, password analysis, and configuration analysis. These annotators use internal and external security and compliance information to improve the security of your containers and images.|
|VA Indexers||1.2.1||VA node||Data pipeline components that are used to index Vulnerability Advisor findings into the Vulnerability Advisor backend.|
||1.2.1||VA node||Data pipeline components that are used for APIs and triggers.|
|VA Usncrawler||1.2.1||VA node||Data pipeline component that is used to ingest and aggregate external security notices for the Vulnerability Advisor analytics components.|
|VA Crawlers||1.2.1||VA node||Vulnerability Advisor data collectors, also known as crawlers, that inspect running containers and offline images. These crawlers extract system and application information that is used by all the Vulnerability Advisor analytics components. Live and metrics crawlers run on worker nodes and are deployed as DaemonSets. The registry crawler runs as a separate deployment and scans images that are deployed into the IBM Cloud Private image registry.|
|ZooKeeper||3.4.9||VA node||Used by the Kafka component in the Vulnerability Advisor.|
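The crawler rows above describe "frames" — records of scanned container and image information that the crawlers push into the data pipeline through the SAS APIs and Kafka. The real frame schema is not documented here; the snippet below builds a purely hypothetical frame record to show the general shape such a payload might take:

```python
import json
import time

def build_frame(container_id, image, features):
    """Assemble a hypothetical crawler 'frame' for the VA pipeline.

    All field names here are illustrative assumptions, not the real
    SAS API schema.
    """
    return {
        "type": "container",
        "id": container_id,
        "image": image,
        "timestamp": int(time.time()),
        # Extracted system/application information consumed by the
        # annotators (vulnerability, compliance, password analysis, ...).
        "features": features,
    }

frame = build_frame("abc123", "mycluster.icp:8500/ns/app:1.0",
                    {"os": "ubuntu:16.04", "packages": ["openssl-1.0.2g"]})
print(json.dumps(frame, indent=2))
```

A real crawler would serialize such a record and publish it to the Kafka ingestion topic, from where the annotators and indexers pick it up.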