Security on Cloud Pak for Data
IBM® Cloud Pak for Data supports several different mechanisms for securing your environment and your data.
Secure engineering practices
Cloud Pak for Data follows IBM Security and Privacy by Design (SPbD), a set of focused security and privacy practices that includes vulnerability management, threat modeling, penetration testing, privacy assessments, security testing, and patch management.
For more information, see the IBM Secure Engineering Framework (SEF) and SPbD documentation.
Image certification
All containers in Cloud Pak for Data use only certified Red Hat Universal Base Images (UBI).
For more information, see Using Red Hat Universal Base Images.
Basic security features on Red Hat OpenShift Container Platform
Security is required for every enterprise, especially for organizations in the government, financial services, and healthcare sectors. Red Hat OpenShift® Container Platform provides a set of security features that protect sensitive customer data with strong encryption controls and improve the oversight of access control across applications and the platform itself.
Cloud Pak for Data builds on the security features provided by OpenShift by creating Security Context Constraints (SCC), service accounts, and roles so that Cloud Pak for Data pods and users have the lowest level of privilege on the OpenShift platform that they need. Cloud Pak for Data is also security hardened on the OpenShift platform and is installed in a secure and transparent manner.
For more information, see Basic security features on Red Hat OpenShift Container Platform.
Authentication and authorization
By default, Cloud Pak for Data user records are stored in an internal repository database, which enables you to complete the initial setup of Cloud Pak for Data. However, after you set up Cloud Pak for Data, use an enterprise-grade password management solution, such as SAML SSO or an LDAP provider, for password management.
- User management
- Cloud Pak for Data provides capabilities for creating and maintaining user accounts. For more information, see Managing users.
- Authorization
- Cloud Pak for Data provides user management capabilities to authorize users. For more information, see Managing users.
- Tokens and API keys
- You can use tokens and API keys to securely access Cloud Pak for Data instances, services, and APIs.
- Cloud Pak for Data automatically generates a bearer token when a user signs in, and securely stores information in the user's home directory. When the user signs out, the stored bearer token is cleared.
- An analytics project requires a personal access token to connect to an external Git repository.
- Cloud Pak for Data provides an encrypted bearer token in the model deployment details that an application developer can use for evaluating models online with REST APIs. The token never expires, and is limited to the model it is associated with.
- API keys let you authenticate to Cloud Pak for Data instances or services with your own credentials. For more information, see Generating API keys for authentication.
You must use an API key to access Cloud Pak for Data APIs. For more information, see Developer resources.
- Cloud Pak for Data uses a JSON Web Token (JWT) to authenticate to some services. Services that support JWT tokens can use the Cloud Pak for Data credentials to authenticate to the service.
- Cloud Pak for Data uses a JSON Web Token (JWT) to authenticate to some data sources. Data sources that support JWT tokens can use the Cloud Pak for Data credentials to authenticate to the data source. For more information, see:
- Db2 connection
- Cognos® Analytics connection
- HDFS via Execution Engine for Hadoop connection
- Hive via Execution Engine for Hadoop connection
- Storage volume connection
If you are creating a connection to a data source that supports JWT tokens, you can select the Use your Cloud Pak for Data credentials to authenticate to the data source checkbox. When the user logs in to Cloud Pak for Data with their username and password, the Cloud Pak for Data authorization service returns a JWT token to the browser. The token is forwarded to the data source to grant access to the system, so the user does not need to enter credentials again for the data source. The token has a limited lifetime and generally lasts only an hour unless the browser refreshes it.
- Idle web client session timeout
- You can configure the idle web client session timeout in accordance with your security and compliance requirements. When a user leaves their session idle in a web browser for the specified length of time, the user is automatically logged out of the web client.
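The API-key flow described above exchanges a user's credentials for a bearer token, which is then sent on every subsequent request. The sketch below shows the shape of that exchange without making any network calls; the endpoint path and JSON field names reflect common Cloud Pak for Data deployments but should be confirmed against the API reference for your version.

```python
import json

# Assumed endpoint path; verify it against your Cloud Pak for Data
# version's API reference before use.
AUTHORIZE_PATH = "/icp4d-api/v1/authorize"

def authorize_payload(username, api_key):
    """Build the JSON body that exchanges a username + API key for a bearer token."""
    return json.dumps({"username": username, "api_key": api_key})

def bearer_headers(token):
    """Headers to attach to subsequent API calls once a token is obtained."""
    return {"Authorization": f"Bearer {token}",
            "Content-Type": "application/json"}

# Illustrative values only; no request is sent here.
body = authorize_payload("jdoe", "MY_API_KEY")
headers = bearer_headers("eyJhbGciOi...")
print(body)
```

In a real client, `body` would be POSTed to `https://<cpd-route>` + `AUTHORIZE_PATH`, and the `token` field of the response would feed `bearer_headers()` for all later calls.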
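Because the JWT tokens described above have a limited lifetime, it can be useful when debugging to inspect a token's expiry claim. The following sketch decodes the claims segment of a JWT locally; note that it deliberately does not verify the signature, so it is a debugging aid only, never a basis for trusting a token. The sample token is constructed in the code purely for illustration.

```python
import base64
import json
import time

def jwt_claims(token):
    """Decode the (unverified) claims segment of a JWT.

    This only inspects the payload; it does NOT verify the signature,
    so use it to answer expiry questions, never to trust a token.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def is_expired(token, now=None):
    """True if the token carries an 'exp' claim that has already passed."""
    exp = jwt_claims(token).get("exp")
    return exp is not None and (now if now is not None else time.time()) >= exp

# Build an illustrative unsigned token that expires in one hour:
header = base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b"=").decode()
claims = base64.urlsafe_b64encode(
    json.dumps({"sub": "jdoe", "exp": int(time.time()) + 3600}).encode()
).rstrip(b"=").decode()
sample = f"{header}.{claims}."

print(is_expired(sample))  # False: a fresh one-hour token is not yet expired
```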
Encryption
Cloud Pak for Data supports protection of data at rest and in motion.
- Data
- In general, data security is managed by your remote data sources. Unless you use shared credentials to access your remote data sources, only users with the appropriate credentials can access the data in a remote data source. For more information, see Storage considerations.
- To ensure that your data in Cloud Pak for Data is stored securely, you can encrypt your storage partition. If you use Linux Unified Key Setup-on-disk-format (LUKS), you must enable LUKS and format the partition with XFS before you install Cloud Pak for Data. For more information, see Linux Unified Key Setup-on-disk-format (LUKS).
- Communications
- You can use TLS or SSL to encrypt communications to and from Cloud Pak for Data.
- FIPS
- Cloud Pak for Data supports Federal Information Processing Standards (FIPS) compliant encryption for all encryption needs.
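When a client application calls Cloud Pak for Data over the TLS-protected route described above, the client side should verify the server certificate rather than disable checking. A minimal sketch of a hardened client-side TLS context, using Python's standard `ssl` module (the CA bundle path is illustrative):

```python
import ssl

# create_default_context() enables certificate and hostname verification;
# pinning the minimum protocol version is an extra hardening step.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy SSL / TLS 1.0/1.1

# If your cluster route presents a certificate from a private CA, load that
# CA bundle instead of turning verification off:
# ctx.load_verify_locations(cafile="route-ca.pem")  # illustrative path

print(ctx.verify_mode == ssl.CERT_REQUIRED, ctx.check_hostname)
```

The resulting context can be passed to `http.client.HTTPSConnection` or `urllib.request.urlopen` so that every connection to the cluster is both encrypted and authenticated.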
Network access requirements
To ensure secure transmission of network traffic to and from the Cloud Pak for Data cluster, you need to configure the communication ports used by the Cloud Pak for Data cluster.
- Primary port
- The primary port is the port that the Red Hat OpenShift router exposes.
- Communication ports for services
- When you provision a new service or integration on your Cloud Pak for Data cluster, the services might require connections to be made from outside the cluster.
- DNS service name
- When you install the Cloud Pak for Data control plane, the installation points to the default Red Hat OpenShift DNS service name. If your OpenShift cluster is configured to use a custom name for the DNS service, a project administrator or cluster administrator must update the DNS service name to prevent performance problems.
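After you configure the ports above, a quick way to confirm that a service is actually reachable from outside the cluster is a simple TCP probe. The sketch below defines such a probe and demonstrates it against a local listener as a stand-in for a cluster route (the hostnames and ports in a real check would come from your deployment):

```python
import socket

def port_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a listener on an ephemeral local port (stand-in for a route):
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
open_port = server.getsockname()[1]

print(port_reachable("127.0.0.1", open_port))  # True: something is listening
server.close()
print(port_reachable("127.0.0.1", open_port))  # False: refused once closed
```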
Audit logging
Audit logging provides accountability, traceability, and regulatory compliance concerning access to and modification of data.
For more information, see Auditing Cloud Pak for Data.
Multitenancy and network security
To make effective use of infrastructure and reduce operational expenses, you can run Cloud Pak for Data in multi-tenanted mode on a single OpenShift cluster, while still maintaining security, compliance, and independent operability.
For more information, see Multitenancy and network security.
Regulatory compliance
Cloud Pak for Data is assessed against various privacy and compliance regulations, and it provides features that clients can use in preparation for privacy and compliance assessments. This is not an exhaustive list of measures, because clients can choose and configure features in many ways, and Cloud Pak for Data itself can be used in many ways, both stand-alone and with third-party applications and systems.
Cloud Pak for Data is not aware of the nature of data that it is handling other than at a technical level (for example, encoding, data type, size). Therefore, Cloud Pak for Data can never be aware of the presence or lack of personal data. Customers must track whether personal information is present in the data that is being used by Cloud Pak for Data.
For more information, see What regulations does Cloud Pak for Data comply with?
Additional security measures
To protect your Cloud Pak for Data instance, consider the following best practices.
- Network isolation
- As a best practice, use network isolation to isolate the Red Hat OpenShift project (Kubernetes namespace) where Cloud Pak for Data is deployed. Then, ensure that only the appropriate services are accessible outside the namespace or outside the cluster. For more information about network isolation, see the Red Hat OpenShift network policy documentation.
- Setting up an elastic load balancer
- To filter out unwanted network traffic, such as Distributed Denial of Service (DDoS) attacks, use an elastic load balancer that accepts only full HTTP connections. An elastic load balancer that is configured with an HTTP profile inspects packets and forwards only complete HTTP requests to the Cloud Pak for Data web server. For more information, see Protecting Against DDoS Attacks.
- Disabling the external registry route
- For the registry server, you can disable the external route that is used to push images to the registry server when you are not installing Cloud Pak for Data. However, if you leave the route disabled when you try to install Cloud Pak for Data, the installation fails.