Preparing to install foundational services
Before you install, review the following installation requirements.
- OpenShift Container Platform cluster
- Configure OpenShift Container Platform cluster for foundational services
Existing instance of foundational services in the cluster
If you already have foundational services version 3.x.x installed in the cluster and want to install a new instance of foundational services version 4.x.x, you must first check whether the existing version 3.x.x instance is installed in dedicated mode, which supports multiple instances of foundational services in the cluster.
Identifying if an existing instance of foundational services version 3.x.x is installed in dedicated mode
Check the common-service-maps ConfigMap in the kube-public namespace and confirm whether a dedicated controlNamespace is specified. Also check whether the namespaces of the foundational services version 3.x.x instance are listed in the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: common-service-maps
  namespace: kube-public
data:
  common-service-maps.yaml: |
    controlNamespace: cs-control                              # <--- controlNamespace
    namespaceMapping:
    - requested-from-namespace:
      - cloudpakns1
      - cloudpakns2
      - cloudpakns3
      map-to-common-service-namespace: ibm-common-services    # <--- Bedrock v3.x instance
If the existing instance of foundational services version 3.x.x is not set up as a multiple-namespaces installation, you must convert it to a multiple-namespaces installation before you install a new instance of foundational services version 4.x.x. For more information, see Isolated migration.
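The check above can be scripted; the following is a minimal sketch that searches a saved copy of the ConfigMap data for a dedicated controlNamespace. The inline sample stands in for live cluster output, and the oc command in the comment is what you would run against a real cluster.

```shell
# On a live cluster, save the ConfigMap data locally with:
#   oc get configmap common-service-maps -n kube-public \
#     -o jsonpath='{.data.common-service-maps\.yaml}' > common-service-maps.yaml
# Here, sample data stands in for cluster output.
cat > common-service-maps.yaml <<'EOF'
controlNamespace: cs-control
namespaceMapping:
- requested-from-namespace:
  - cloudpakns1
  map-to-common-service-namespace: ibm-common-services
EOF

# No output from grep means no dedicated control namespace is set.
if grep -q '^controlNamespace:' common-service-maps.yaml; then
  echo "dedicated mode"
fi
```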
OpenShift Container Platform cluster
Hardware sizing requirement
For the hardware requirements, see Hardware requirements and recommendations for foundational services.
Version of OpenShift Container Platform
For more information about the supported versions, see Supported OpenShift versions and platforms.
OpenShift Container Platform CLI tools
If the OpenShift Container Platform CLI tool oc is not available on the boot node, you need to download, decompress, and install it from the OpenShift Container Platform client binaries.
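As a sketch, the client can be fetched from the public OpenShift client mirror (the URL is current at the time of writing; adjust the version and platform for your environment, and note that network access and sudo are assumed):

```shell
# Download and install the oc client (Linux x86_64 shown).
curl -LO https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux.tar.gz
tar -xzf openshift-client-linux.tar.gz oc kubectl
sudo mv oc kubectl /usr/local/bin/
oc version --client
```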
OpenShift console availability
- To ensure that the OpenShift Container Platform cluster is set up correctly, access the web console.
- The web console URL can be found by running the following command:
oc -n openshift-console get route
Example output:
openshift-console console console-openshift-console.apps.new-coral.purple-chesterfield.com console https reencrypt/Redirect None
The console URL in this example is https://console-openshift-console.apps.new-coral.purple-chesterfield.com. Open the URL in your browser and check the OpenShift Container Platform cluster status.
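The console host can also be read directly from the route; the following sketch uses the example host from the output above, with the commented oc command showing how to retrieve it on a live cluster:

```shell
# On a live cluster, retrieve the console host with:
#   host=$(oc -n openshift-console get route console -o jsonpath='{.spec.host}')
# The example host from the output above is used here.
host="console-openshift-console.apps.new-coral.purple-chesterfield.com"
echo "https://${host}"
```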
- For a Red Hat OpenShift on IBM Cloud cluster, you must install a supported version of OpenShift Container Platform by using IBM Cloud Kubernetes Service so that the managed OpenShift Container Platform service is supported. For more information, see Tutorial: Creating Red Hat OpenShift on IBM Cloud clusters.
- If you are installing your cluster on a public cloud, such as Red Hat OpenShift on IBM Cloud, authentication with Red Hat OpenShift is enabled by default. For more information, see Delegating authentication to OpenShift (ibm-im-operator).
Available storage class
Ensure that you have a pre-configured storage class in OpenShift Container Platform that can be used for creating storage for IBM Cloud Pak foundational services. You need persistent storage for some of the service pods.
You can use the following command to get the storage classes that are configured in your cluster. Pick a storage class that provides block storage.
oc get storageclasses
Following is a sample output:
NAME PROVISIONER AGE
rook-ceph-block-internal rook-ceph.rbd.csi.ceph.com 42d
rook-ceph-cephfs-internal (default) rook-ceph.cephfs.csi.ceph.com 42d
rook-ceph-delete-bucket-internal ceph.rook.io/bucket 42d
For an OpenShift cluster that runs on IBM Cloud®, ibmc-block-gold is always available. For installing IBM Cloud Pak foundational services on IBM Cloud®, you might need to use the ibmc-block-gold storage class. For more information, see Deciding on the block storage configuration.
oc get sc
Example output:
NAME PROVISIONER AGE
default ibm.io/ibmc-file 4h
ibmc-block-bronze (default) ibm.io/ibmc-block 4h
ibmc-block-custom ibm.io/ibmc-block 4h
ibmc-block-gold ibm.io/ibmc-block 4h
ibmc-block-retain-bronze ibm.io/ibmc-block 4h
ibmc-block-retain-custom ibm.io/ibmc-block 4h
ibmc-block-retain-gold ibm.io/ibmc-block 4h
ibmc-block-retain-silver ibm.io/ibmc-block 4h
ibmc-block-silver ibm.io/ibmc-block 4h
ibmc-file-bronze ibm.io/ibmc-file 4h
ibmc-file-custom ibm.io/ibmc-file 4h
ibmc-file-gold ibm.io/ibmc-file 4h
ibmc-file-retain-bronze ibm.io/ibmc-file 4h
ibmc-file-retain-custom ibm.io/ibmc-file 4h
ibmc-file-retain-gold ibm.io/ibmc-file 4h
ibmc-file-retain-silver ibm.io/ibmc-file 4h
ibmc-file-silver ibm.io/ibmc-file 4h
The default storage class is marked as (default).
The foundational services installer uses the default storage class to install MongoDB and Logging services. If you want to set the default storage class or update the default storage class in your OpenShift Container Platform, see Change the default StorageClass.
The storage class provisioner is shown in the PROVISIONER column. To enable dynamic volume provisioning, see Enabling Dynamic Provisioning.
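Identifying the default class and its provisioner can be done directly from the command output; the following sketch runs over the sample output shown earlier (on a live cluster, pipe `oc get storageclasses` instead of the saved text):

```shell
# Pick out the default storage class and its provisioner from
# `oc get storageclasses` output; sample text stands in for a live cluster.
sc_output='NAME PROVISIONER AGE
rook-ceph-block-internal rook-ceph.rbd.csi.ceph.com 42d
rook-ceph-cephfs-internal (default) rook-ceph.cephfs.csi.ceph.com 42d
rook-ceph-delete-bucket-internal ceph.rook.io/bucket 42d'
echo "$sc_output" | awk '/\(default\)/ {print "default:", $1, "provisioner:", $3}'
```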
Using Azure File storage class
To use Azure File storage class with IBM Cloud Pak foundational services on Azure environments, complete the following steps before you create the storage class.
1. Create a project for installing IBM Cloud Pak foundational services.
2. Run the following command to retrieve the openshift.io/sa.scc.uid-range annotation of the project:
oc describe project <project_name>
In the annotations, find the value of openshift.io/sa.scc.uid-range and save it. Following is the sample output:
openshift.io/sa.scc.uid-range: 1000630000/10000
3. When you create the Azure File storage class, set the following mountOptions:
mountOptions:
- dir_mode=0777
- file_mode=0777
- uid=<retrieved_uid>
where uid is the initial part of the value of openshift.io/sa.scc.uid-range that you retrieved in step 2.
For example:
mountOptions:
- dir_mode=0777
- file_mode=0777
- uid=1000630000
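Deriving the uid value from the annotation is a simple string split; the following sketch uses the sample annotation value from step 2, with the commented oc command showing how to read it on a live cluster:

```shell
# On a live cluster, read the annotation with:
#   range=$(oc get project <project_name> \
#     -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.uid-range}')
# The sample value from the output above is used here.
range="1000630000/10000"
uid="${range%%/*}"   # keep the part before the first '/'
echo "uid=${uid}"
```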
Multiple zones requirement
The following prerequisites are applicable if you are installing foundational services in a cluster that has multiple zones.
Storage class
The storage class that you use for the foundational services must have its volumeBindingMode set to WaitForFirstConsumer.
You might need to create your own storage class to set the volumeBindingMode. In the following example, the ibmc-block-gold storage class that is available for clusters on IBM Cloud® is used as a template for creating a custom storage class.
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    app: ibmcloud-block-storage-plugin
  name: ibmc-block-wffc
parameters:
  billingType: hourly
  classVersion: "2"
  fsType: ext4
  iopsPerGB: "10"
  sizeRange: '[20-4000]Gi'
  type: Endurance
provisioner: ibm.io/ibmc-block
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
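As an illustration of the binding mode, a claim that uses this storage class stays Pending until a pod that mounts it is scheduled, so the volume is provisioned in that pod's zone. A hypothetical claim might look like the following:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc        # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: ibmc-block-wffc
```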
Required Kubernetes labels
In an on-premises, multizone Red Hat OpenShift Container Platform cluster, if you want the foundational services replicas to be equally spread across zones, you must add the following labels to each worker node. For more information, see topology.kubernetes.io/region and topology.kubernetes.io/zone.
- topology.kubernetes.io/region, which is required on all worker nodes in both single-region and multiregion clusters. In public cloud environments, the Red Hat OpenShift Container Platform cluster worker nodes always have this label. However, for on-premises Red Hat OpenShift Container Platform clusters, you must manually add the label. The label value can be the same across all worker nodes.
- topology.kubernetes.io/zone, which is required on all worker nodes in multizone clusters. The label value must be unique on each worker node.
Important: If you do not add these two labels, Kubernetes might not equally balance the foundational services across zones.
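Applying the labels can be scripted; the following sketch generates the oc label commands for a set of hypothetical worker nodes and zones (on a live cluster, pipe the output to sh):

```shell
# Generate `oc label node` commands for hypothetical worker nodes; the
# region value is shared while each zone value is unique per node.
cmds=$(for node_zone in worker-1:zone-a worker-2:zone-b worker-3:zone-c; do
  node="${node_zone%%:*}"
  zone="${node_zone##*:}"
  echo "oc label node ${node} topology.kubernetes.io/region=region1 topology.kubernetes.io/zone=${zone}"
done)
echo "$cmds"
```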
Configure OpenShift Container Platform cluster for foundational services
Before you install foundational services, you must configure your OpenShift Container Platform cluster for services.
Networking
- Port 9555 must be open on every node in the operating system environment for the node exporter in the monitoring service. The port is configurable; 9555 is the default value.
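As a sketch, on nodes that use firewalld the port can be opened as follows (run on each node; assumes firewalld is the active firewall):

```shell
# Open the node-exporter port (9555 by default) on a node with firewalld.
firewall-cmd --permanent --add-port=9555/tcp
firewall-cmd --reload
```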