Configuring OpenShift Data Foundation storage
IBM® Security Center for Z requires persistent storage, which is used for storing scan results and other data. You can choose from several options for persistent storage. For your reference, the following topic shows how to set up a Red Hat OpenShift Data Foundation (ODF) cluster for Ceph file system (CephFS) storage on a z/OS zCX cluster.
Red Hat OpenShift Container Platform supports other types of persistent storage, too. For more information, see the following Red Hat OpenShift documentation: OpenShift Container Platform storage overview.
Sample procedure
To set up ODF CephFS storage for use with IBM Security Center for Z, follow these steps.
- Provision the Red Hat OpenShift cluster with at least 3 control plane nodes and 3 compute nodes.
- Ensure that the cluster version is 4.12. To do so, modify the bastion.yml file during the initial provisioning of zCX OpenShift Container Platform (OCP) to point to the 4.12 binary files, or manually upgrade from your current version. For more information, see the Red Hat OpenShift documentation for cluster upgrades.
- From the oc command line, run oc get nodes to ensure that all nodes are in the Ready state, with all CSRs approved. You can approve pending CSRs manually by using the following command:
oc get csr -o name | xargs oc adm certificate approve
- To prepare for the reconfiguration and local storage setup steps, take down the compute nodes. To do so, enter the MVS STOP (P) command on the z/OS system:
P <node jobname>
For example, if the z/OS job name of the first compute node is OCPJOB1, enter P OCPJOB1 to shut down the instance.
- Run the OCP reconfiguration workflow to specify at least 10 CPUs and 24 GB of memory for each compute node.
- Run the OCP add local storage disks workflow, specifying your desired disk capacity in Step 2 of the workflow.
- Start the compute nodes and wait until they reach Ready state. To check the state, use the oc get nodes command.
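The readiness check can also be scripted. The following is a minimal sketch; the count_not_ready helper is illustrative and not part of the product:

```shell
#!/bin/sh
# count_not_ready: reads `oc get nodes --no-headers` output on stdin and
# prints the number of nodes whose STATUS column is not exactly "Ready".
# An output of 0 means that every node is Ready.
count_not_ready() {
  awk '$2 != "Ready"' | wc -l
}

# Typical usage against a live cluster:
#   oc get nodes --no-headers | count_not_ready
```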
- Go to the web console. Then, click Operators > OperatorHub.
- To find the Local Storage Operator, enter "local storage" in the Filter by keyword field.
- Set the following options in the Install Operator page, then install:
- Update channel is set to "4.12"
- Installation mode is set to "A specific namespace on the cluster"
- Installed namespace is set to "Operator recommended namespace openshift-local-storage"
- Update approval is set to "Automatic"
- Verify that the Local Storage Operator indicates a successful installation (a green checkmark).
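If you prefer the CLI over the web console, an operator installation like the one above is typically expressed as a namespace, an OperatorGroup, and a Subscription. The following YAML is a sketch that mirrors the console options; the catalog source name is an assumption, so verify it with oc get packagemanifests local-storage-operator -n openshift-marketplace before applying:

```yaml
# Sketch of a CLI equivalent for installing the Local Storage Operator.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-local-storage
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: local-operator-group
  namespace: openshift-local-storage
spec:
  targetNamespaces:
    - openshift-local-storage
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: local-storage-operator
  namespace: openshift-local-storage
spec:
  channel: "4.12"                 # Update channel from the console step
  installPlanApproval: Automatic  # Update approval from the console step
  name: local-storage-operator
  source: redhat-operators        # assumed catalog source; verify first
  sourceNamespace: openshift-marketplace
```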
- Return to the Operator Hub:
- To find the OpenShift Data Foundation Operator, scroll down or enter "OpenShift Data Foundation" in the Filter by keyword field.
- Set the following options on the Install Operator page, then install:
- Update channel is set to "stable-4.12"
- Installation mode is set to "A specific namespace on the cluster"
- Installed namespace is set to "Operator recommended namespace openshift-storage"
- Approval Strategy is set to "Automatic"
- Console plug-in is set to "Enable"
- Go to the Installed Operators page and verify that the OpenShift Data Foundation Operator indicates a successful installation (a green checkmark).
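You can also confirm both operator installations from the CLI by checking the ClusterServiceVersion phase. The following is a sketch; the csv_phase helper is illustrative:

```shell
#!/bin/sh
# csv_phase: reads `oc get csv --no-headers` output on stdin and prints
# "NAME PHASE" pairs; anything not in the Succeeded phase needs attention.
csv_phase() {
  awk '{print $1, $NF}'
}

# Typical usage against a live cluster:
#   oc get csv -n openshift-local-storage --no-headers | csv_phase
#   oc get csv -n openshift-storage --no-headers | csv_phase
```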
- Go to the Storage page and verify that the Data Foundation dashboard is available.
- Go to the Data Foundation dashboard, then click Create Storage System.
- On the Backing Storage page, do the following steps:
- For Deployment type, select Full Deployment.
- Select the option Create a new StorageClass using the local storage devices.
- Click Next.
- On the Create local volume set page, supply the following information:
- Name for the LocalVolumeSet. This value is also the name for the StorageClass.
- Select the option Disks on all nodes to use the available disks from the 3 compute nodes.
- Leave the other options set to their defaults.
- Click Next.
A window is displayed to confirm the creation of the LocalVolumeSet.
- For Network, select Default (SDN).
The storage system is created. To verify its status, go to Storage > Data Foundation.
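After the wizard completes, you can also confirm from the CLI that a CephFS-backed storage class exists. The following is a sketch; the provisioner string shown in the comment is a typical default, and the cephfs_classes helper is illustrative:

```shell
#!/bin/sh
# cephfs_classes: reads `oc get storageclass --no-headers` output on stdin
# and prints the names of classes whose provisioner mentions "cephfs"
# (for example, openshift-storage.cephfs.csi.ceph.com).
cephfs_classes() {
  awk '$2 ~ /cephfs/ {print $1}'
}

# Typical usage against a live cluster:
#   oc get storagecluster -n openshift-storage         # overall status
#   oc get storageclass --no-headers | cephfs_classes  # CephFS classes
```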