Table of contents

Basic security features on Red Hat OpenShift Container Platform

Cloud Pak for Data builds on OpenShift® security features that protect sensitive customer data with strong encryption controls and improve access control across applications and the platform.

Features

Red Hat® OpenShift Container Platform enables an improved security posture with the addition of many capabilities that greatly increase the security of the platform.

  • Uses Red Hat CoreOS as the immutable host operating system.
  • Provides stronger platform security with FIPS (Federal Information Processing Standard) compliant encryption (FIPS 140-2 Level 1). For more information, see Services that support FIPS.
  • Uses the Node Tuning Operator, which provides opportunities to further reduce privilege requirements in the security context constraints (SCC). For more information, see Using the Node Tuning Operator.
  • Supports encrypting data that is stored in etcd, which provides extra protection for secrets that are stored in the etcd database; a brief example of enabling this encryption follows this list. For more information, see Encrypting etcd data.
  • Provides a Network Bound Disk Encryption (NBDE) feature that can be used to automate remote enablement of LUKS encrypted volumes, which better protects host storage against physical theft.
  • Enables SELinux as mandatory on Red Hat OpenShift Container Platform.
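
For example, a cluster administrator enables etcd encryption by setting the encryption type on the APIServer cluster resource. This is a minimal sketch of the standard OpenShift procedure; see Encrypting etcd data for the complete steps and the supported key types.

oc edit apiserver

Then set the encryption type in the spec:

spec:
  encryption:
    type: aescbc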

Service accounts and roles

Cloud Pak for Data runs in a separate namespace or project on the OpenShift cluster. In the OpenShift project, Cloud Pak for Data creates service accounts and RBAC role bindings for pods to use within that namespace.

  • No cluster-level access is permitted. All roles are restricted to that namespace only.
  • Two roles are created: cpd-admin-role and cpd-viewer-role. These roles enable Cloud Pak for Data to apply the principle of least privilege even within the same namespace.
  • Four service accounts are created: zen-admin-sa, zen-editor-sa, zen-viewer-sa, and zen-norbac-sa. No SCCs are explicitly bound to these service accounts, so they use the restricted SCC by default. You can list these objects with the oc command that follows this list.
  • The default service account that is automatically created in every OpenShift project is not granted any RBAC privileges; that is, no roles are bound.
  • The default service account is expected to be used for user workloads, such as Notebooks and Python jobs. It is not allowed to perform any actions inside the namespace.
  • The default service account is associated with the restricted security context constraints (SCCs). However, some add-on services might still need custom SCCs, for example to support IPCs. For more information, see Security context constraints in the IBM® Cloud Platform Common Services documentation.
Note: Custom SCCs were removed in Cloud Pak for Data 4.0. Most services now use the default restricted SCC.
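
For example, to review the service accounts, roles, and role bindings that Cloud Pak for Data creates in its project, a user with access to that project can run the following command. The project name cpd-instance is a placeholder for your Cloud Pak for Data project.

oc get serviceaccounts,roles,rolebindings -n cpd-instance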

However, if you plan to install certain Cloud Pak for Data services, you might need to create some custom SCCs. For more information, see Creating required security context constraints for services.
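
If a service does require a custom SCC, the definition generally has the following shape. This is only an illustrative sketch with placeholder names and values; use the exact SCC definitions and commands from the documentation for each service.

apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: example-custom-scc            # hypothetical SCC name
allowPrivilegedContainer: false
allowHostNetwork: false
allowHostDirVolumePlugin: false
runAsUser:
  type: MustRunAs                     # pins the fixed UID that the service needs
  uid: 1001                           # placeholder UID
seLinuxContext:
  type: MustRunAs
fsGroup:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
users:
- system:serviceaccount:cpd-instance:example-sa   # placeholder project and service account

A cluster administrator applies a definition like this with oc apply -f and the file name of the SCC definition.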

Service UIDs

Services use UIDs based on the Red Hat OpenShift Container Platform project where they are installed.

When you create a project, Red Hat OpenShift assigns a unique range of UIDs to the project. To determine the UIDs that are associated with a project, run the following command:

oc describe project project-name

Replace project-name with the name of the project where Cloud Pak for Data is installed.
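
The output includes annotations that show the UID range for the project. For example, in the following illustrative snippet (other fields are omitted, and OpenShift assigns a different range to every project), the project is assigned a UID range that starts at 1000650000:

Name:        cpd-instance
Annotations: openshift.io/sa.scc.supplemental-groups=1000650000/10000
             openshift.io/sa.scc.uid-range=1000650000/10000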

Additionally, if a service uses a custom SCC, it reserves one or more UIDs:
  • The Db2® as a service restricted SCC reserves UID 500.
  • The IBM® Db2 SCC reserves the following UIDs: 500, 501, 505, 600, 700, and 1001.
  • The Watson™ Knowledge Catalog SCC reserves UID 10032.

For details on which services use these SCCs, see Custom SCCs for services.
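
To confirm the UIDs that a custom SCC reserves on your cluster, a cluster administrator can inspect the SCC directly. The name scc-name is a placeholder for the SCC that the service documentation identifies; the Run As User strategy in the output shows the fixed UID or UID range.

oc describe scc scc-name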

Security hardening

Security hardening is enforced on Cloud Pak for Data on Red Hat OpenShift. The following security hardening actions are taken:

  • Only non-root processes run in containers. The UIDs of the processes are in the OpenShift project's predefined range only, which is enforced by the use of the restricted SCC; the restricted SCC does not allow containers to run as root. (The first example after this list shows how to verify this.)
    Attention: Some services still require a fixed UID. Such services use a custom SCC for that purpose.
  • Cluster Admin privileges are not required for Cloud Pak for Data workloads at runtime. Cluster Admin authority is needed only to set up projects and custom SCCs (and only for the services that do need them). Service accounts in each Cloud Pak for Data instance are granted privileges that are only scoped within their OpenShift project.
  • Cloud Pak for Data users are typically not granted OpenShift Kubernetes access, and even if they are, it is only for the express purpose of installing or upgrading services inside their assigned OpenShift project.
  • Strict use of service accounts with RBAC privileges is enforced, and the least privilege principle is applied. Cloud Pak for Data ensures that any pod that is running user code (such as scripts or analytics environments) is not granted any RBAC privileges.
  • No host access is required for Cloud Pak for Data workloads at runtime. This restriction is enforced by the SCCs. There is no access to host paths or networks.
  • All pods have restricted resource consumption. Pod resource requests and limits are set for each pod, which restricts consumption and helps protect against noisy neighbors that cause resource contention. (A sketch of typical limits and probes follows this list.)
  • Reliability gauges (liveness and readiness probes) are present for each pod to ensure that the pods are working correctly.
  • For consumption monitoring, each of the pods on Cloud Pak for Data is annotated with metering annotations to uniquely identify add-on service workloads on the cluster.
Note: Some of the add-on services on Cloud Pak for Data might have exceptions to the security hardening and are being tracked to ensure compliance in future releases.
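
To verify which SCC admitted a pod and the UID that its processes run under, you can use standard oc commands such as the following. The names pod-name and cpd-instance are placeholders.

oc describe pod pod-name -n cpd-instance | grep 'openshift.io/scc'
oc exec pod-name -n cpd-instance -- id -u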
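
The resource limits and probes that are described above typically take the following shape in a pod specification. This fragment is illustrative only; the container name, image, ports, paths, and values are placeholders, and the actual settings are defined by each service.

containers:
- name: example-service              # placeholder container name
  image: example-image:1.0           # placeholder image
  resources:
    requests:
      cpu: 500m
      memory: 512Mi
    limits:
      cpu: "1"
      memory: 1Gi
  livenessProbe:                     # restarts the container if it stops responding
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 30
    periodSeconds: 10
  readinessProbe:                    # removes the pod from service endpoints until it is ready
    httpGet:
      path: /ready
      port: 8080
    periodSeconds: 10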

Prescriptive security practices during installation

You don't need SSH access to the OpenShift cluster nodes to deploy or manage Cloud Pak for Data and its add-on services. The OpenShift oc command-line interface and the cloudctl command-line interface are used to deploy and manage the IBM Cloud Pak® for Data platform operator and its services.

You can install the software on a cluster that is connected to the internet or a cluster that is air-gapped.

Installing Cloud Pak for Data on clusters that are connected to the internet
These recommended actions provide additional security when you access images and help ensure reliability. It is highly recommended that you use the cloudctl utility from a client workstation to download the CASE packages and mirror the images from the IBM Entitled Registry and other public container registries into your private container registry.
Installing Cloud Pak for Data on air-gapped clusters
Cloud Pak for Data supports mirroring images to your private container registry. This procedure does not require that your private container registry or your Red Hat OpenShift Container Platform cluster can access the internet. You download the CASE packages and mirror the images by using the cloudctl utility from a bastion node. If the bastion node does not have direct access to the private container registry, you can use an intermediary container registry: mirror the images to the intermediary registry from the bastion node, and then transfer them from a network that does have access to your private container registry. Your Red Hat OpenShift Container Platform cluster is then configured to pull from the private container registry, so you can install the Cloud Pak for Data operators and services without internet access.
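
As an illustration, downloading a CASE package with cloudctl might look like the following. The repository URL is the public IBM Cloud Pak CASE repository that IBM documentation commonly references, and the CASE name, version, and output directory are placeholders; use the values that the installation documentation gives for your release. The images are then mirrored to the private (or intermediary) registry with the mirroring actions that the CASE defines for cloudctl case launch.

cloudctl case save \
  --repo https://github.com/IBM/cloud-pak/raw/master/repo/case \
  --case ibm-cp-datacore \
  --version 2.0.1 \
  --outputdir /tmp/cases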
Namespace scope and Operator privileges
Cloud Pak for Data uses the Operator pattern to manage its workloads, which allows for a separation of concerns. Operators that reside in one central namespace are granted access to manage the Cloud Pak for Data service workloads in multiple different "instance" namespaces. Users in those instance namespaces can therefore be granted far fewer privileges, scoped within that namespace. The Operators, too, are scoped to operate only within these specific namespaces and are, by design, not permitted to manage non-Cloud Pak for Data namespaces in the cluster.
You need the following scope and user privileges.
  • To install the Operator Catalog Source, you need a Red Hat OpenShift Container Platform user with privileges in the openshift-marketplace namespace.
  • To create the OperatorGroup in "own Namespace" mode and any Subscriptions for the needed Operators, you need users with privileges in either the ibm-common-services or the cpd-operators namespace. (A minimal sketch follows this list.)
    Note: For security reasons, the "All Namespaces" mode for the OperatorGroup is not recommended.
  • To restrict the namespaces that the Cloud Pak for Data Operators have authority over, use the IBM Cloud Pak foundational services Namespace-scope Operator. You can expand the Operator access to more Cloud Pak for Data instance namespaces where the service workloads are then deployed. For more information, see Authorizing foundational services to perform operations on workloads in a namespace.
  • To create custom resources to deploy the individual Cloud Pak for Data services or to upgrade or scale them postinstallation, you need users with Project admin privileges in the "instance" namespaces.
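
The following is a minimal sketch of an OperatorGroup in "own Namespace" mode, together with a Subscription, in the cpd-operators namespace. The OperatorGroup name, operator package name, channel, and catalog source shown here are placeholder values; use the names and channels from the installation documentation for your release.

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cpd-operator-group           # placeholder name
  namespace: cpd-operators
spec:
  targetNamespaces:
  - cpd-operators                    # "own Namespace" mode: the group targets only its own namespace
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cpd-platform-operator        # placeholder name
  namespace: cpd-operators
spec:
  channel: v2.0                      # placeholder channel
  name: cpd-platform-operator        # placeholder package name
  source: ibm-operator-catalog       # placeholder catalog source name
  sourceNamespace: openshift-marketplace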
Cluster admin responsibility
Cluster Admins are expected to manage the OpenShift cluster and prepare it for use by the Cloud Pak for Data Services. Such tasks include:
  • Node tuning and machine pool configurations for kernel settings and cri-o settings (such as pids-limit and ulimit). Only needed for the services that require them.
  • Set up the image content source policy and any secrets that are needed to pull images from the private container registry. (A sketch of an image content source policy follows this list.)
  • Create OpenShift projects for the IBM Cloud Pak foundational services and Operators and for the instances of Cloud Pak for Data.
  • Configure the namespaces: define namespace quotas and limit ranges, and grant the Cloud Pak for Data admins access to specific instance namespaces. (A sketch of a quota and limit range follows this list.)
  • Create custom SCCs. Only needed for the services that require them.
  • Install and configure the storage that is used by the workloads.
  • Securely manage OpenShift, including encryption and auditing, as well as other operations such as adding or replacing nodes.
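
An image content source policy redirects image pulls to the private container registry. The following is an illustrative sketch only; the mirror registry host and the source repositories are placeholders, and the exact entries come from the installation documentation for your release.

apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: cloud-pak-mirror                      # placeholder name
spec:
  repositoryDigestMirrors:
  - mirrors:
    - registry.example.com:5000/cp            # placeholder private registry
    source: cp.icr.io/cp                      # IBM Entitled Registry namespace (confirm in the documentation)
  - mirrors:
    - registry.example.com:5000/opencloudio   # placeholder private registry
    source: quay.io/opencloudio               # foundational services images (confirm in the documentation)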
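
Namespace quotas and limit ranges are standard Kubernetes objects. The following sketch shows their general shape with placeholder names and illustrative values; size them according to the capacity that your Cloud Pak for Data services actually need.

apiVersion: v1
kind: ResourceQuota
metadata:
  name: cpd-quota                  # placeholder name
  namespace: cpd-instance          # placeholder Cloud Pak for Data project
spec:
  hard:
    requests.cpu: "48"             # illustrative values only
    requests.memory: 192Gi
    limits.cpu: "64"
    limits.memory: 256Gi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: cpd-limits                 # placeholder name
  namespace: cpd-instance
spec:
  limits:
  - type: Container
    defaultRequest:                # applied when a container does not set requests
      cpu: 100m
      memory: 256Mi
    default:                       # applied when a container does not set limits
      cpu: "1"
      memory: 1Gi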