Preparing to install foundational services
Before you install, review the following installation requirements.
- Provisioning storage for installing on Linux on IBM Z and LinuxONE
- OpenShift Container Platform cluster
- Configure OpenShift Container Platform cluster for foundational services
- Control installation of Certificate manager operands
Provisioning storage for installing on Linux on IBM Z and LinuxONE
Before you can install foundational services on Linux on IBM Z and LinuxONE, you need to provision your OpenShift Container Platform cluster with persistent storage by using OpenShift Container Storage (OCS). On OpenShift Container Platform version 4.6, you can use OCS to provision persistent storage. For more information, see NFS support and configuration in foundational services.
OpenShift Container Platform cluster
Hardware sizing requirement
For the hardware requirements, see Hardware requirements and recommendations for foundational services.
Version of OpenShift Container Platform
- You must have a supported version of OpenShift Container Platform, including the registry and storage services, installed and working in your cluster. For more information about the supported versions, see Supported OpenShift versions and platforms.
OpenShift Container Platform CLI tools
If the OpenShift Container Platform CLI tools are not on the boot node, you need to download, decompress, and install the OpenShift Container Platform CLI tool oc from the OpenShift Container Platform client binaries.
OpenShift console availability
- To ensure that the OpenShift Container Platform cluster is set up correctly, access the web console. The web console URL can be found by running the following command:
  oc -n openshift-console get route
  Example output:
  openshift-console console console-openshift-console.apps.new-coral.purple-chesterfield.com console https reencrypt/Redirect None
  The console URL in this example is https://console-openshift-console.apps.new-coral.purple-chesterfield.com. Open the URL in your browser and check the OpenShift Container Platform cluster status.
- For a Red Hat OpenShift on IBM Cloud cluster, you must install a supported version of OpenShift Container Platform by using IBM Cloud Kubernetes Service so that the managed OpenShift Container Platform service is supported. For more information, see Tutorial: Creating Red Hat OpenShift on IBM Cloud clusters.
- If you are installing your cluster on a public cloud, such as Red Hat OpenShift on IBM Cloud, authentication with Red Hat OpenShift is enabled by default. For more information, see Delegating authentication to OpenShift (ibm-iam-operator). {: #auth}
Available storage class
The foundational services installer uses the default storage class to install MongoDB and Logging services. Ensure that you have a pre-configured supported storage class in OpenShift Container Platform that can be used for creating storage for IBM Cloud Pak foundational services. You need persistent storage for some of the service pods. For more information, see Storage options.
You can use the following command to get the storage classes that are configured in your cluster. Pick a storage class that provides block storage.
oc get storageclasses
The following is a sample output:
NAME PROVISIONER AGE
rook-ceph-block-internal rook-ceph.rbd.csi.ceph.com 42d
rook-ceph-cephfs-internal (default) rook-ceph.cephfs.csi.ceph.com 42d
rook-ceph-delete-bucket-internal ceph.rook.io/bucket 42d
See the following information for an OpenShift cluster that runs on IBM Cloud®:
- ibmc-block-gold is a storage class that is available for classic infrastructure on IBM Cloud®. For more information, see Deciding on the block storage configuration.
- ibmc-vpc-block-10iops-tier is the equivalent storage class for newer VPC instances. For more information, see Setting up Block Storage for VPC.
oc get sc
Example output:
NAME PROVISIONER AGE
default ibm.io/ibmc-file 4h
ibmc-block-bronze (default) ibm.io/ibmc-block 4h
ibmc-block-custom ibm.io/ibmc-block 4h
ibmc-block-gold ibm.io/ibmc-block 4h
ibmc-block-retain-bronze ibm.io/ibmc-block 4h
ibmc-block-retain-custom ibm.io/ibmc-block 4h
ibmc-block-retain-gold ibm.io/ibmc-block 4h
ibmc-block-retain-silver ibm.io/ibmc-block 4h
ibmc-block-silver ibm.io/ibmc-block 4h
ibmc-file-bronze ibm.io/ibmc-file 4h
ibmc-file-custom ibm.io/ibmc-file 4h
ibmc-file-gold ibm.io/ibmc-file 4h
ibmc-file-retain-bronze ibm.io/ibmc-file 4h
ibmc-file-retain-custom ibm.io/ibmc-file 4h
ibmc-file-retain-gold ibm.io/ibmc-file 4h
ibmc-file-retain-silver ibm.io/ibmc-file 4h
ibmc-file-silver ibm.io/ibmc-file 4h
The default storage class is marked as (default).
If you want to set the default storage class or update the default storage class in your OpenShift Container Platform, see Changing the default storage class.
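In Kubernetes, the default storage class is the one that carries the storageclass.kubernetes.io/is-default-class annotation. As a minimal sketch, a default class looks like the following (the name and provisioner here are illustrative placeholders):

```yaml
# Illustrative StorageClass marked as the cluster default.
# Only the is-default-class annotation is what makes it the default.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibmc-block-gold
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ibm.io/ibmc-block
```

Only one storage class should carry this annotation with the value "true" at a time.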
The storage class provisioner is shown in the PROVISIONER column. To enable dynamic volume provisioning, see Enabling Dynamic Provisioning.
Important: After you install foundational services, if you need to change the default storage class, follow these steps to avoid errors and complications.
- Back up the components that use the default storage class.
- Create a persistent volume claim (PVC) by using the new storage-class-bound persistent volume (PV).
- Restore the components.
For backup and restore of foundational services components, see Foundational services backup and restore.
For backup and restore of MongoDB, see MongoDB.
Using Azure File storage class
To use Azure File storage class with IBM Cloud Pak foundational services on Azure environments, complete the following steps before you create the storage class.
- Create a project for installing IBM Cloud Pak foundational services.
- Run the following command to retrieve the sa.scc.uid-range of the project:
  oc describe project <project_name>
  In the annotations, find the value of openshift.io/sa.scc.uid-range and save it. The following is a sample output:
  openshift.io/sa.scc.uid-range: 1000630000/10000
- When you create the Azure File storage class, set the following mountOptions:
  mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=<retrieved_uid>
  where uid is the initial part of the value of sa.scc.uid-range that you retrieved in step 2. For example:
  mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=1000630000
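Putting the pieces together, an Azure File storage class with these mount options might look like the following sketch. The class name, provisioner, and skuName are assumptions for illustration; the uid value is the one retrieved from the project annotation:

```yaml
# Sketch of an Azure File storage class with the required mountOptions.
# Name, provisioner, and skuName are placeholders; adjust for your environment.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azure-file
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_LRS
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=1000630000
```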
Multiple zones requirement
The following prerequisites are applicable if you are installing foundational services in a cluster that has multiple zones.
Storage class
The storage class that you use for the foundational services must have its volumeBindingMode set to WaitForFirstConsumer. You might need to create your own storage class to set the volumeBindingMode. In the following example, the ibmc-block-gold storage class that is available for clusters on IBM Cloud® is used as a template for creating a custom storage class.
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
labels:
app: ibmcloud-block-storage-plugin
name: ibmc-block-wffc
parameters:
billingType: hourly
classVersion: "2"
fsType: ext4
iopsPerGB: "10"
sizeRange: '[20-4000]Gi'
type: Endurance
provisioner: ibm.io/ibmc-block
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
Required Kubernetes labels
In an on-premises, multizone Red Hat OpenShift Container Platform cluster, if you want the foundational services replicas to be equally spread across zones, you must add the following labels to each worker node. For more information, see topology.kubernetes.io/region and topology.kubernetes.io/zone.
- topology.kubernetes.io/region, which is required on all worker nodes, in both single-region and multiregion clusters. In public cloud environments, the Red Hat OpenShift Container Platform cluster worker nodes always have such a label. However, for on-premises Red Hat OpenShift Container Platform clusters, you must manually add the label. The label value can be the same across all worker nodes.
- topology.kubernetes.io/zone, which is required on all worker nodes, in multizone clusters. The label value must be unique on each worker node.
Important: If you do not add these two labels, Kubernetes might not equally balance the foundational services across zones.
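For illustration, after you add the labels, each worker node's metadata would carry entries like the following (the node name and label values are placeholders):

```yaml
# Placeholder worker node showing the two topology labels.
apiVersion: v1
kind: Node
metadata:
  name: worker-node-1
  labels:
    topology.kubernetes.io/region: region-a   # can be the same across all worker nodes
    topology.kubernetes.io/zone: zone-1       # must be unique on each worker node
```

You can apply the labels with a command such as oc label node worker-node-1 topology.kubernetes.io/region=region-a topology.kubernetes.io/zone=zone-1 (node name and values are placeholders).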
Configure OpenShift Container Platform cluster for foundational services
Before you install foundational services, you must configure your OpenShift Container Platform cluster for services.
Networking
- The port number 9555 must be open on every node in the OS environment for the node exporter in the monitoring service. This port is configurable, and 9555 is the default value.
Logging
Elasticsearch
For Elasticsearch, ensure that the vm.max_map_count setting is at least 262144 on all nodes. Run the following command to check:
sudo sysctl -a | grep vm.max_map_count
If the vm.max_map_count setting is not at least 262144, complete these steps to set the value to 262144:
- Define a custom resource with vm.max_map_count set to 262144. See the following example:
  - Use any editor to create a YAML file.
    vi tuned-cs-es-yaml
  - Add the following content to the YAML file.
    apiVersion: tuned.openshift.io/v1
    kind: Tuned
    metadata:
      name: common-services-es
      namespace: openshift-cluster-node-tuning-operator
    spec:
      profile:
      - data: |
          [sysctl]
          vm.max_map_count=262144
        name: common-services-es
      recommend:
      - priority: 10
        profile: common-services-es
- Create the custom resource.
  oc create -f <YAML-file-name>
  The following command uses the example YAML file:
  oc create -f tuned-cs-es-yaml
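The initial vm.max_map_count check can also be scripted. This sketch reads the value directly from /proc on a Linux node and compares it against the Elasticsearch minimum:

```shell
#!/bin/sh
# Compare the kernel's vm.max_map_count against the Elasticsearch minimum.
required=262144
current=$(cat /proc/sys/vm/max_map_count)
if [ "$current" -ge "$required" ]; then
  echo "OK: vm.max_map_count=$current meets the minimum of $required"
else
  echo "Too low: vm.max_map_count=$current (need at least $required)"
fi
```

Run this on each node; any node that reports a value below 262144 needs the Tuned custom resource described above applied.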
Control installation of Certificate manager operands
Prerequisites:
- Red Hat delivers a standardized certificate manager that is available as an optional component in your cluster. If you plan to use the OpenShift cert-manager Operator or the Cloud Native Computing Foundation (CNCF) cert-manager, one cert-manager must be already installed on a cluster. For more information, see cert-manager Operator for Red Hat OpenShift overview and Cloud native certificate management.
Important:
- The following procedure is supported for IBM Cloud Pak foundational services version 3.19.9 and newer. If multiple Cert Managers are installed in the cluster, delete one of them. For more information, see Problem when you install two different cert-managers.
- For IBM Cloud Pak foundational services version 3.22, the IBM Cert Manager operator can detect whether another cert-manager is installed in the cluster to determine whether to deploy IBM Cert Manager. The following procedure is not mandatory; however, it is recommended.
- To check the version mapping between IBM Cloud Pak foundational services, IBM Cert Manager, and the open source cert-manager, see Foundational services cert-manager and community cert-manager version mapping.
- By performing this procedure, you are responsible for configuring and managing the cert-manager service on the cluster, and for ensuring that its version is compatible with your version of foundational services. Currently, v1.7.1 CNCF cert-manager is compatible with IBM Cloud Pak foundational services.
Certificate manager operator (ibm-cert-manager-operator
) installs the following three deployments as part of its operands:
cert-manager-cainjector
cert-manager-controller
cert-manager-webhook
These operands are forked from CNCF cert-manager, and are responsible for managing Certificates. These operands, however, can only be installed on a cluster once. Multiple instances on a cluster cause unexpected behavior, which is an issue when a cluster already has a CNCF cert-manager installed before the installation of foundational services.
Note: The following procedure works for clusters that have a CNCF cert-manager installed via Helm, YAML files (kubectl apply), or an OLM operator.
Steps
Complete the following steps before installing foundational services. These steps configure ibm-cert-manager-operator to use the existing CNCF cert-manager so that no additional operands are installed.
- Create the ibm-cpp-config configmap in the namespace where foundational services will be installed.
- Add deployCSCertManagerOperands: "false" to the data. The following is a sample configmap:
  kind: ConfigMap
  apiVersion: v1
  metadata:
    name: ibm-cpp-config
    namespace: ibm-common-services
  data:
    deployCSCertManagerOperands: "false"