Preparing to install IBM Cloud Pak® for Multicloud Management
Before you install IBM Cloud Pak® for Multicloud Management, as described in Installing the IBM Cloud Pak for Multicloud Management, review the following installation requirements and ensure that the following steps are complete. Otherwise, the installation might fail.
- Red Hat® OpenShift® Container Platform
- Storage
- Networking
- Elasticsearch
- Preparing to install Monitoring
- Preparing to install Managed services
- Preparing to install Mutation Advisor
- Installing Red Hat Advanced Cluster Management
- Verifying Red Hat Advanced Cluster Management installation
- Preparing an online cluster for installation
- Preparing an offline cluster for installation
Red Hat® OpenShift® Container Platform
- You must have a supported OpenShift Container Platform version, including the registry and storage services, installed and working in your cluster. For more information about the supported versions, see Supported OpenShift versions and platforms. For more information about installing OpenShift Container Platform, see the Red Hat OpenShift documentation.
- For OpenShift Container Platform version 4.6, see OpenShift Container Platform 4.6 Documentation
- To ensure that the OpenShift Container Platform cluster is set up correctly, access the OpenShift Container Platform web console.
You can find the OpenShift Container Platform web console URL by running the following command:
kubectl -n openshift-console get route
The output is similar to the following example:
openshift-console console console-openshift-console.apps.new-coral.purple-chesterfield.com console https reencrypt/Redirect None
The console URL in this example is https://console-openshift-console.apps.new-coral.purple-chesterfield.com. Open the URL in your browser and check the result.
- For a Red Hat OpenShift on IBM Cloud cluster, you must have a supported OpenShift Container Platform version that is installed by using IBM Cloud Kubernetes Service so that the managed OpenShift Container Platform service is supported. For more information, see Creating Red Hat OpenShift on IBM Cloud clusters.
- If you are installing your cluster on a public cloud, such as Red Hat OpenShift on IBM Cloud, you can enable authentication with Red Hat OpenShift. By default, your cluster uses OpenID Connect (OIDC) to authenticate users with Kubernetes. For more information, see Delegating authentication to OpenShift.
OpenShift Container Platform CLI tools
If the OpenShift Container Platform CLI tools are not installed on the boot node, you must download, decompress, and install the OpenShift Container Platform CLI tool oc from OpenShift Container Platform client binary files.
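For example, on a Linux boot node you might download and extract the client as follows. This is a minimal sketch: the mirror URL and the stable-4.6 version directory are assumptions, so adjust them for your OpenShift Container Platform version and platform.
# Download the OpenShift Container Platform client (assumed mirror URL and version path)
curl -LO https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.6/openshift-client-linux.tar.gz
# Extract the oc and kubectl binaries into a directory on your PATH
sudo tar -xzf openshift-client-linux.tar.gz -C /usr/local/bin oc kubectl
# Verify the client installation
oc version --client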
Storage
Check the recommended storage options for specific components of IBM Cloud Pak for Multicloud Management. For more information, see Storage options.
Ensure that you have a preconfigured storage class in OpenShift Container Platform that can be used for creating storage for IBM Cloud Pak for Multicloud Management. For instances of IBM Cloud Pak for Multicloud Management in production environments, the storage class reclaimPolicy must be Retain. For instances of IBM Cloud Pak for Multicloud Management that are meant only for PoC or demonstration environments, the reclaimPolicy can be either Delete or Retain. For more details, see Storage Classes.
You need persistent storage for some of the service pods.
You must also set the default annotation on the storage class that you are using for IBM Cloud Pak for Multicloud Management:
oc patch storageclass <storage_class_name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
You can use the following command to get the storage classes that are configured in your cluster. Pick a storage class that provides block storage.
oc get storageclasses
Following is a sample output:
NAME PROVISIONER AGE
rook-ceph-block-internal (default) rook-ceph.rbd.csi.ceph.com 42d
rook-ceph-cephfs-internal rook-ceph.cephfs.csi.ceph.com 42d
rook-ceph-delete-bucket-internal ceph.rook.io/bucket 42d
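To confirm that the default annotation is set, you can inspect the storage class. The rook-ceph-block-internal name is used here only as an example from the sample output; substitute your own storage class name.
# Check whether the storage class carries the default-class annotation
oc get storageclass rook-ceph-block-internal -o yaml | grep is-default-class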
Managed services storage requirements
Important: If you are installing Managed services, the persistent volume that Managed services uses must have the ReadWriteMany access mode. You must pick a storage class that provides such access. For more information, see the OpenShift documentation.
- For OpenShift Container Platform version 4.6, see Access modes.
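As an illustration of the ReadWriteMany requirement, the following hypothetical PersistentVolumeClaim requests RWX storage. The claim name and storage class are placeholders for illustration only, not values that Managed services requires.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-rwx-claim          # placeholder name for illustration
spec:
  accessModes:
    - ReadWriteMany                # the access mode that Managed services volumes need
  storageClassName: <rwx_storage_class>
  resources:
    requests:
      storage: 20Gi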
The persistent storage requirements for Managed services are tabulated as follows:
Persistent storage requirement | Size (GB) | Notes |
---|---|---|
cam-mongo-pv | 20 GB | 20 GB for up to 10k deployments. Add 10 GB for each additional 10k deployment. |
cam-logs-pv | 10 GB | Static |
cam-terraform-pv | 15 GB | Usage can grow or shrink |
cam-bpd-appdata-pv | 20 GB | The requirement increases based on the number of templates in the local repository |
Monitoring storage requirements
Important: If you are installing Monitoring, the module requires low-latency storage. Local storage is ideal for performance reasons. For more information, see Guidelines for choosing your storage solution for Monitoring.
Using ganesha-nfs-server as a RWX dynamic provisioner for POCs
If you need RWX storage for Cloud Automation Manager or other operators that require RWX for POCs, see Using ganesha-nfs-server as a RWX dynamic provisioner for POCs.
Networking
Port 9555 must be open on every node in the OS environment for the node exporter in the monitoring service. This port is configurable; 9555 is the default value.
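For example, if firewalld manages the firewall on your worker nodes, you might open the port as follows. This is a hedged sketch; the exact firewall tooling depends on your operating system.
# Open port 9555 for the node exporter (firewalld example)
sudo firewall-cmd --permanent --add-port=9555/tcp
sudo firewall-cmd --reload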
Elasticsearch
For Elasticsearch, ensure that the vm.max_map_count setting is at least 262144 on all worker and compute nodes. Run the following command to check:
sudo sysctl -a | grep vm.max_map_count
If the vm.max_map_count setting is less than 262144, run the following commands to set the value to 262144:
sudo sysctl -w vm.max_map_count=262144
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
Preparing to install Monitoring
If you plan to install Monitoring, you must first install Red Hat Advanced Cluster Management and enable the Red Hat Advanced Cluster Management observability service. For the supported Red Hat Advanced Cluster Management version, see Supported Red Hat® Advanced Cluster Management for Kubernetes version. Red Hat Advanced Cluster Management is used by Monitoring to store and retrieve metric data. For more information, see Install Red Hat Advanced Cluster Management.
Important: Before you install Red Hat Advanced Cluster Management, review the following information about storage, and about metric summarization and data retention.
Storage
If you want to configure your own storage solution for Monitoring rather than selecting the default (dynamically provisioned) storage class that is defined for IBM Cloud Pak® for Multicloud Management, you must set up this storage solution at IBM Cloud Pak® for Multicloud Management installation time. To configure your own storage solution, the general sequence of steps is:
- Choose the storage solution that you want. Refer to Choosing your storage solution for Monitoring.
- Ensure that you select the advanced installation mode to install IBM Cloud Pak® for Multicloud Management. Then, when you are updating the YAML specification to install the Monitoring operator during the IBM Cloud Pak® for Multicloud Management installation, you must also complete the storage configuration, that is, update the storage parameters for Monitoring at this time. For help with configuring the storage parameters in the YAML specification, refer to Monitoring configuration.
Metric summarization and data retention
Metric summarization and data retention are controlled by the observability service (multicluster-observability-operator) in Red Hat Advanced Cluster Management. Before you install Monitoring, create an instance of the observability service and change the retention values. For more information, see the Enabling the observability service in Red Hat Advanced Cluster Management topic.
Preparing to install Managed services
If you plan to install Managed services, review the following optional requirement:
- Security context settings:
The Pod security policy control is enabled by default on IBM Cloud Pak for Multicloud Management. Managed services includes a PodSecurityPolicy in the Helm chart that supports the following securityContext settings:
privileged: false
allowPrivilegeEscalation: false
hostPID: false
hostIPC: false
hostNetwork: false
allowedCapabilities:
  - SETPCAP
  - AUDIT_WRITE
  - CHOWN
  - NET_RAW
  - DAC_OVERRIDE
  - FOWNER
  - FSETID
  - KILL
  - SETGID
  - SETUID
  - NET_BIND_SERVICE
  - SYS_CHROOT
  - SETFCAP
requiredDropCapabilities:
  - MKNOD
readOnlyRootFilesystem: false
{{- if .Values.global.audit }}
allowedHostPaths:
  - pathPrefix: {{ .Values.auditService.config.journalPath }}
    readOnly: false
runAsUser:
  rule: RunAsAny
{{- else }}
runAsUser:
  ranges:
    - max: 1111
      min: 999
  rule: MustRunAs
{{- end }}
fsGroup:
  ranges:
    - max: 1111
      min: 999
  rule: MustRunAs
seLinux:
  rule: RunAsAny
supplementalGroups:
  ranges:
    - max: 1111
      min: 999
  rule: MustRunAs
volumes:
  - configMap
  - emptyDir
  - secret
  - persistentVolumeClaim
  - nfs
  - downwardAPI
  - projected
Preparing to install Mutation Advisor
Important: Installation of the Mutation Advisor operator is optional. You do not need to install the Mutation Advisor operator to gain access to the core functions of IBM Cloud Pak for Multicloud Management. For more information about what you can do with Mutation Advisor, see Mutation Advisor.
Mutation Advisor requires that your cluster nodes have the kernel-devel package and the Falco driver; without them, the Mutation Advisor installation fails.
- Install the kernel-devel package.
To verify the package availability on your cluster nodes and install the package if required, you must create a MachineConfig for kernel-devel and the Falco driver before you install the ibm-management-mutation-advisor operator. You can run this procedure on any Linux® node that can log in to the cluster with the oc command.
- If your cluster has internet access, complete the following steps:
- Log in to your target cluster as a Cluster Administrator:
oc login <cluster_host:port> --username=<cluster_admin_user> --password=<cluster_admin_password>
- Create the 08-machine_config.yaml file with the following contents:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  generation: 1
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 08-worker-extensions
spec:
  extensions:
    - kernel-devel
- Apply the 08-machine_config.yaml file by running the following command:
oc apply -f 08-machine_config.yaml
Note: This step restarts all cluster nodes. The cluster resources can be available only after all nodes are successfully updated.
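You can watch the rollout and confirm that the worker nodes are updated before you continue, for example:
# Wait until UPDATED is True and UPDATING is False for the worker pool
oc get machineconfigpool worker -w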
- Create the 09-ma-falco-driverloader.yaml file with the following contents:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 09-ma-driver-loader
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - contents: |
            [Unit]
            Description=install Falco-Driver-Loader
            Wants=network-online.target
            After=network-online.target
            [Service]
            ExecStartPre=/bin/mkdir --parents /root/.falco
            ExecStart=/usr/bin/podman run --entrypoint="/bin/bash" --rm -i -t \
              --privileged \
              -v /root/.falco:/root/.falco \
              -v /proc:/host/proc:ro \
              -v /boot:/host/boot:ro \
              -v /lib/modules:/host/lib/modules:ro \
              -v /usr:/host/usr:ro \
              -v /etc:/host/etc:ro \
              docker.io/falcosecurity/falco-driver-loader:0.32.2
            Restart=on-failure
            [Install]
            WantedBy=multi-user.target default.target
          enabled: true
          name: ma-driver-loader.service
  extensions: null
  fips: false
  kernelArguments: null
  kernelType: ''
  osImageURL: ''
- Apply the 09-ma-falco-driverloader.yaml file by running the following command:
oc apply -f 09-ma-falco-driverloader.yaml
Note: This step restarts all cluster nodes. The cluster resources can be available only after all nodes are successfully updated.
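After the nodes are updated, you can optionally confirm that the driver loader service exists on a node. The node name is a placeholder in this sketch:
# Check the ma-driver-loader service on a node (placeholder node name)
oc debug node/<node_name> -- chroot /host systemctl status ma-driver-loader.service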
- If your cluster has no internet access, complete the following steps:
- For Red Hat® Enterprise Linux® or CentOS implementations, you must install the Extra Packages for Enterprise Linux (EPEL) repository to gain access to jq if you do not have access from an existing repository. The following procedure depends on jq being installed on the host. For non-Red Hat Enterprise Linux implementations, see Download jq.
- For CentOS 7 versions that are still supported, and for Red Hat Enterprise Linux 7, run the following command:
sudo yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
- For Red Hat Enterprise Linux 8, complete the following steps:
- Install the EPEL repository by running the following command:
sudo yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
- Run the following commands to enable the codeready-builder-for-rhel-8-*-rpms repository. Some EPEL packages share dependencies with the codeready-builder-for-rhel-8-*-rpms repository:
ARCH=$( /bin/arch )
subscription-manager repos --enable "codeready-builder-for-rhel-8-${ARCH}-rpms"
- For CentOS 8, complete the following steps:
- Install the EPEL repo by running the following command:
sudo yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
- Run the following command to enable the PowerTools repo. EPEL packages might share dependencies with the PowerTools repo:
dnf config-manager --set-enabled PowerTools
For more information about the EPEL repository, see Extra Packages for Enterprise Linux (EPEL).
- For OpenShift Container Platform version 4.10 and later, complete the following steps:
- Get the node names:
oc get node
- Check the kernel version of the cluster node:
oc get node <NODE_NAME> -o json | jq -r .status.nodeInfo.kernelVersion
Example output:
4.18.0-147.8.1.el8_1.x86_64
- On your bastion node, find and download the kernel-devel-<kernel_version_found_in_step3>.rpm package from https://access.redhat.com/downloads.
For the kernel version 4.18.0-305.19.1.el8_4.x86_64:
Example command:
wget http://mirror.centos.org/centos/8/BaseOS/x86_64/os/Packages/kernel-devel-4.18.0-305.19.1.el8_4.x86_64.rpm
For the kernel version 4.18.0-372.59.1.el8_6.x86_64:
- Select Red Hat OpenShift Container Platform from the downloads page. Choose your OpenShift Container Platform version from the drop-down menu, in this case version 4.12.22 for RHEL 8, and switch to the Packages tab.
- Search for the kernel-devel package and click kernel-devel in the filtered list.
- You are redirected to the kernel-devel package download page; the latest version available by default is 8.6 EUS.
- Update the version to 4.18.0-372.59.1.el8_6 and click the download link.
- Log in to your target cluster node as kubeadmin with the oc login command.
On your bastion node, download the
ma-prereq-offline.sh
script.wget https://raw.githubusercontent.com/IBM/cp4mcm-samples/master/scripts/ma-prereq-offline.sh
- Copy the kernel package and script from your bastion node to your OpenShift cluster node.
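For example, you might copy the files with scp. The host name is a placeholder, and core is the default user on Red Hat Enterprise Linux CoreOS nodes:
# Copy the kernel-devel package and the script to the cluster node (placeholder host name)
scp kernel-devel-<kernel_version>.rpm ma-prereq-offline.sh core@<cluster_node>:/tmp/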
- On your OpenShift cluster node, assign executable permission to the script:
chmod +x ma-prereq-offline.sh
- Run the script:
./ma-prereq-offline.sh ${full-path-to-the-kernel-devel-rpm} --debug-image=registry.redhat.io/rhel8/support-tools:latest
The debug image is an image of your choice that has support tools installed. Ensure that the image is present on the bastion host.
Note: The script requires the Red Hat support-tools container image. If your cluster node does not have the image, you can download the image from Red Hat Enterprise Linux Support Tools and push it to your internal registry.
You can then run the script by using the following command:
./ma-prereq-offline.sh --debug-image=image-registry.openshift-image-registry.svc:5000/openshift/tool:latest <full-path-to-the-kernel-devel-rpm>
- Install the Falco driver on all the worker nodes:
- Pull down the falco-driver-loader image:
docker pull docker.io/falcosecurity/falco-driver-loader:0.32.2
- Save the falco-driver-loader image to your local environment:
docker save docker.io/falcosecurity/falco-driver-loader:0.32.2 > falco-driver-loader-image.tar
- Enter each worker node by using the following command:
oc debug node/xxxxx
Note: You get pod/<debug pod name> in the output. Make a note of <debug pod name> to use in a later step.
- Run the following command:
chroot /host
- Create the /root/.falco directory:
mkdir /root/.falco
- From another terminal, copy falco-driver-loader-image.tar from outside the cluster to the debug pod. Use the debug pod name that you noted earlier:
oc cp falco-driver-loader-image.tar <debug pod name>:/host/etc/falco-driver-loader-image.tar
Example:
oc cp falco-driver-loader-image.tar worker0cp4mcm78830cpfyreibmcom-debug:/host/etc/falco-driver-loader-image.tar
- Load the falco-driver-loader image in the debug pod:
podman load < /etc/falco-driver-loader-image.tar
- Run the falco-driver-loader container in the debug pod:
podman run --rm -i -t --privileged -v /root/.falco:/root/.falco -v /proc:/host/proc:ro -v /boot:/host/boot:ro -v /lib/modules:/host/lib/modules:ro -v /usr:/host/usr:ro -v /etc:/host/etc:ro falcosecurity/falco-driver-loader:0.32.2
- Exit the debug node.
- If Mutation Advisor is already installed, restart all the crawler pods.
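A hedged way to restart the crawler pods is to delete them so that their controller re-creates them. The namespace and label selector are placeholders that depend on your Mutation Advisor installation:
# Delete the crawler pods so that they are re-created (placeholder namespace and label)
oc delete pod -n <mutation_advisor_namespace> -l <crawler_pod_label>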
Installing Red Hat Advanced Cluster Management
To use the cluster management capability with IBM Cloud Pak for Multicloud Management, you must install Red Hat Advanced Cluster Management. For more information about the supported Red Hat Advanced Cluster Management version, see Supported Red Hat Advanced Cluster Management version.
Important: You must install Red Hat Advanced Cluster Management in the default open-cluster-management namespace.
See the following Red Hat Advanced Cluster Management documentation for installation instructions:
- If your cluster is online, see Installing Red Hat® Advanced Cluster Management for Kubernetes.
- If your cluster is offline, see Install on disconnected networks.
The Monitoring module requires the Red Hat Advanced Cluster Management observability service. When you have completed the Red Hat Advanced Cluster Management installation, you must create an instance of the observability service (multicluster-observability-operator), and change the retention values. For more information, see the Enabling the observability service in Red Hat Advanced Cluster Management topic.
Verifying Red Hat Advanced Cluster Management installation
To verify that Red Hat Advanced Cluster Management is installed and running, perform the following checks:
- Check whether there are pods up and running in the open-cluster-management namespace. If there are, the Red Hat Advanced Cluster Management operator is installed.
- Check whether there are any instances of the multiclusterhubs.operator.open-cluster-management.io CRD in any namespace.
- Check whether there are any instances of the multiclusterobservabilities.observability.open-cluster-management.io CRD in any namespace.
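For example, you can run the following commands, which use the CRD names from the checks in this list:
# Pods in the Red Hat Advanced Cluster Management namespace
oc get pods -n open-cluster-management
# Instances of the hub and observability CRDs in all namespaces
oc get multiclusterhubs.operator.open-cluster-management.io -A
oc get multiclusterobservabilities.observability.open-cluster-management.io -A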
Tip: Observability components are deployed in the open-cluster-management-observability namespace. This namespace uses the observatorium CRD type.
You can check for the CR in your current cluster by running the oc get observatorium -A command:
# oc get observatorium -A
NAMESPACE NAME AGE
open-cluster-management-observability observability-observatorium 23h
You can check for the pods by running the oc get po -n open-cluster-management-observability command:
# oc get po -n open-cluster-management-observability
NAME READY STATUS RESTARTS AGE
alertmanager-0 2/2 Running 0 23h
alertmanager-1 2/2 Running 0 23h
alertmanager-2 2/2 Running 0 23h
grafana-75d5886689-454x7 1/1 Running 0 23h
grafana-75d5886689-qtr2r 1/1 Running 0 23h
observability-observatorium-observatorium-api-58c885df69-mdxd9 1/1 Running 0 23h
observability-observatorium-observatorium-api-58c885df69-zkdd7 1/1 Running 0 23h
observability-observatorium-thanos-compact-0 1/1 Running 0 23h
observability-observatorium-thanos-compact-1 1/1 Running 0 23h
observability-observatorium-thanos-compact-2 1/1 Running 0 23h
observability-observatorium-thanos-query-7457bc4fd7-7kpgv 1/1 Running 0 23h
observability-observatorium-thanos-query-7457bc4fd7-7z86h 1/1 Running 0 23h
observability-observatorium-thanos-query-frontend-6c9c9f588d549 1/1 Running 0 23h
observability-observatorium-thanos-query-frontend-6c9c9f58j9727 1/1 Running 0 23h
observability-observatorium-thanos-receive-controller-86d7tpt9p 1/1 Running 0 23h
observability-observatorium-thanos-receive-default-0 1/1 Running 0 23h
observability-observatorium-thanos-receive-default-1 1/1 Running 0 23h
observability-observatorium-thanos-receive-default-2 1/1 Running 0 23h
observability-observatorium-thanos-rule-0 1/1 Running 0 23h
observability-observatorium-thanos-rule-1 1/1 Running 0 23h
observability-observatorium-thanos-rule-2 1/1 Running 0 23h
observability-observatorium-thanos-store-memcached-0 2/2 Running 0 23h
observability-observatorium-thanos-store-memcached-1 2/2 Running 0 22h
observability-observatorium-thanos-store-memcached-2 2/2 Running 0 22h
observability-observatorium-thanos-store-shard-0-0 1/1 Running 11 23h
observability-observatorium-thanos-store-shard-1-0 1/1 Running 11 23h
observability-observatorium-thanos-store-shard-2-0 1/1 Running 11 23h
observatorium-operator-5bb7f8b748-n6nqc 1/1 Running 0 23h
rbac-query-proxy-7f4b7565dd-m6rbz 1/1 Running 0 23h
rbac-query-proxy-7f4b7565dd-mrn74 1/1 Running 0 23h