What's new and changed in the control plane
The initial release of the IBM® Cloud Pak for Data control plane and subsequent refreshes can include new features, bug fixes, and security updates. Refreshes appear in reverse chronological order, and only the refreshes that contain updates for the control plane are shown.
You can see a list of the new features for the platform and all of the services at What's new in IBM Cloud Pak for Data?.
Installing or upgrading the control plane
Refresh 2 of Cloud Pak for Data Version 3.5
A new version of the control plane was released in January 2021.
Assembly version: 3.5.2
This release includes the following changes:
- New features
The following versions of the Cloud Pak for Data control plane can run on Red Hat® OpenShift® 4.6:
- Version 3.5.1
- Version 3.5.2
Version 3.5.2 of the control plane also includes the following features and updates:
- Quota enforcement
- If you want to programmatically enforce the quotas that you set for Cloud Pak for Data or for various Cloud Pak for Data services, you must install Version 1.1.1 of the scheduling service on your cluster.
For details on quota enforcement, see Managing the platform.
For details on installing the scheduling service, see Setting up the scheduling service.
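As a rough sketch, installing the scheduling service with the `cpd-cli` follows the same assembly-based pattern as other components. The assembly name, repo file, and namespace below are assumptions for illustration only; follow Setting up the scheduling service for the authoritative steps.

```shell
# Build (and display) an illustrative install command for the scheduling service.
# The assembly name, repo file, and namespace are assumptions; verify with
# `cpd-cli install --help` and the official instructions.
ASSEMBLY=scheduler              # assumed assembly name for the scheduling service
VERSION=1.1.1                   # version required for quota enforcement
NAMESPACE=ibm-common-services   # assumed target namespace

CMD="cpd-cli install --repo ./repo.yaml --assembly ${ASSEMBLY} --version ${VERSION} --namespace ${NAMESPACE}"
echo "${CMD}"   # shown here rather than executed; running it requires cluster access
```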
- Support for Portworx 2.6.2
- If you use Portworx Essentials for IBM, you must use Portworx Version 2.6.2.
Cloud Pak for Data Version 3.5.2 customers are entitled to download and use Portworx Essentials for IBM Version 2.6.2.
For details, see Setting up Portworx storage.
- Bug fixes
This release includes the following fixes:
Issue: The LDAP synchronization job failed when a user's name, email, or ID included an apostrophe.
Resolution: LDAP synchronization jobs now handle names, emails, and IDs that include apostrophes.
Issue: (For clusters with Watson™ Knowledge Catalog installed.) If a user was assigned roles and permissions through a group, Watson Knowledge Catalog could not retrieve the roles and permissions from the group because they were not included in the default user profile data.
Resolution: The roles and permissions from the group are now included in the default user profile data.
Initial release of Cloud Pak for Data Version 3.5
A new version of the control plane was released as part of Cloud Pak for Data Version 3.5.
Assembly version: 3.5.1
This release includes the following changes:
- Red Hat OpenShift support
You can deploy Cloud Pak for Data Version 3.5 on the following versions of Red Hat OpenShift:
- Version 3.11
- Version 4.5
- Operator-based installation on the Red Hat Marketplace
If you want to install Cloud Pak for Data from the Red Hat Marketplace, you can use the Cloud Pak for Data operator. You can use the operator to install, scale, and upgrade the Cloud Pak for Data control plane and services using a custom resource (CR).
The operator is available through the Red Hat Marketplace and is compatible with the Red Hat Operator Lifecycle Manager.
For details, see Installing Cloud Pak for Data from the OpenShift Console.
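The custom resource that drives an operator-based installation is a short YAML document. The sketch below is illustrative only: the apiVersion, kind, and spec fields are assumptions, so use the CR schema that ships with the operator.

```shell
# Write an illustrative custom resource for the control plane.
# apiVersion, kind, and spec field names are assumptions for illustration only.
cat > cpd-cr.yaml <<'EOF'
apiVersion: cpd.ibm.com/v1
kind: CPDService
metadata:
  name: cpd-control-plane
  namespace: zen
spec:
  serviceName: lite          # assumed assembly name for the control plane
  version: "3.5.1"
  storageClass: managed-nfs-storage
EOF

# With cluster access, an administrator would then apply it:
#   oc apply -f cpd-cr.yaml
grep -q "serviceName: lite" cpd-cr.yaml && echo "CR written"
```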
- zLinux support
You can deploy the following Cloud Pak for Data software on zLinux (s390x):
- The Cloud Pak for Data control plane
- Db2 Warehouse
- Db2 for z/OS® Connector
- Db2 Data Gate
- New service account required
The Cloud Pak for Data control plane requires an additional service account: cpd-norbac-sa, which is bound to a restricted security context constraint (SCC).
This service account is specified in the cpd-cli adm command for the control plane.
- New upgrade step required
Before you can upgrade to Cloud Pak for Data Version 3.5, you must upgrade the Cloud Pak for Data metadata.
For details, see Preparing to upgrade Watson Machine Learning Accelerator.
- New features
- Customize the home page
In Cloud Pak for Data Version 3.5, you can customize the home page in two ways:
- Platform-level customization
- A Cloud Pak for Data administrator can specify which cards and links to display on the home page.
- The cards that are available from the home page are determined by the services that are installed on the platform.
You can disable cards if you don't want users to see them. The changes apply to all users. However, the cards that an individual user sees are determined by their permissions and the services that they have access to.
- Resource links
- You can customize the links that are displayed in the Resources section of the home page.
For details, see Customizing the home page.
- Personal customization
- Each user can specify the cards that are displayed on their home page. (However, the list of cards that they can choose from is determined by the Cloud Pak for Data administrator.)
In addition, each user can specify which links to display in the Quick navigation section of their home page.
These features are offered in addition to the branding features that were introduced in Cloud Pak for Data 3.0.1.
- Create user groups
A Cloud Pak for Data administrator can create user groups to make it easier to manage large numbers of users who need similar permissions.
When you create a user group, you specify the roles that all of the members of the group have. If you configure a connection to an LDAP server, user groups can include:
- Existing platform users
- LDAP users
- LDAP groups
You can assign a user group access to various assets on the platform in the same way that you assign an individual user access. The benefit of a group is that it is easier to:
- Give many users access to an asset.
- Remove a user's access to assets by removing them from the user group.
- Manage your cluster resources with quotas
Cloud Pak for Data Version 3.5 makes it easier to manage and monitor your Cloud Pak for Data deployment.
The Platform management page gives you a quick overview of the services, service instances, environments, and pods running in your Cloud Pak for Data deployment. The Platform management page also shows any unhealthy or pending pods. If you see an issue, you can use the cards on the page to drill down to get more information about the problem.
In addition, you can see your current vCPU and memory use. You can optionally set quotas to help you track your actual use against your target use. When you set quotas, you specify alert thresholds for vCPU and memory use. When you reach the alert threshold, the platform sends you an alert so that you aren't surprised by unexpected spikes in resource use.
- Manage and monitor production workloads
The Deployment spaces page gives you a dashboard that you can use to monitor and manage production workloads in multiple deployment spaces. This page makes it easier for operations engineers to manage jobs and online deployments, regardless of where they are running. The dashboard helps you assess the status of workloads, identify issues, and manage workloads. You can use this page to:
- Compare jobs.
- Identify issues as they surface.
- Accelerate problem resolution.
Common core services: This feature is available only when the Cloud Pak for Data common core services are installed. The common core services are automatically installed by services that rely on them. If you don't see the Deployment spaces page, it's because none of the services that are installed on your environment rely on the common core services.
- Store secrets in a secure vault
Cloud Pak for Data introduces a new set of APIs that you can use to protect access to sensitive data. You can create a vault that you can use to store:
- Database credentials
- API keys
For more information, see Credentials and secrets API.
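As a sketch of how the vault might be used, the request below stores a database credential through the platform REST API. The endpoint path and payload field names are assumptions for illustration; see Credentials and secrets API for the actual contract.

```shell
# Build an illustrative request body for storing a database credential.
# The payload fields and the endpoint path are assumptions; check the API reference.
cat > secret.json <<'EOF'
{
  "secret_name": "warehouse-db-creds",
  "type": "credentials",
  "secret": {
    "username": "dbuser",
    "password": "example-only"
  }
}
EOF

# With a live platform and a bearer token, the call might look like:
#   curl -k -X POST "https://${CPD_HOST}/zen-data/v2/secrets" \
#        -H "Authorization: Bearer ${TOKEN}" \
#        -H "Content-Type: application/json" \
#        -d @secret.json
grep -q '"secret_name"' secret.json && echo "payload ready"
```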
- Improved navigation
The Cloud Pak for Data navigation menu is organized to focus on the objects that you need to access, such as:
- Your task inbox
The items in the navigation depend on the services that are installed.
- Manage connections more easily
The Connections page makes it easier for administrators to define and manage connections and for users to find connections.
The Connections page is a catalog of connections that can be used by various services across the platform. Any user who has access to the platform can see the connections on this page. However, only users with the credentials for the underlying data source can use a connection.
Users who have the Admin role on the connections catalog can create and manage these connections. Unlike in previous releases of Cloud Pak for Data, services can refer to these connections rather than creating local copies. This means that any changes you make on the Connections page are automatically cascaded to the services that use the connection.
Common core services: This feature is available only when the Cloud Pak for Data common core services are installed. The common core services are automatically installed by services that rely on them. If you don't see the Connections page, it's because none of the services that are installed on your environment rely on the common core services.
- Workflows for managing business processes
You can use workflows to manage your business processes. For example, when you install Watson Knowledge Catalog, the service includes predefined workflow templates that you can use to control the process of creating, updating, and deleting governance artifacts.
From the Workflow management page, you can define and configure the types of workflows that you need to support your business processes.
You can import and configure BPMN files from Flowable.
Service: This feature is available only if Watson Knowledge Catalog is installed.
For details, see Workflows.
- Connect to storage volumes
In Cloud Pak for Data Version 3.5, you can connect to storage volumes from the Connections page or from services that support storage volume connections.
The storage volumes can be on external Network File System (NFS) storage or persistent volume claims (PVCs). This feature lets you access the files that are stored in these volumes from Jupyter Notebooks, Spark jobs, projects, and more. For details, see Connecting to data sources.
You can also create and manage volumes from the Storage volumes page. For more information, see Managing storage volumes.
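Because storage volumes can sit on external NFS or on PVCs, the underlying Kubernetes objects are ordinary persistent volumes and claims. The sketch below shows a generic NFS-backed pair; the server address, export path, and names are placeholders, not values from this release.

```shell
# Generic NFS-backed PersistentVolume and PersistentVolumeClaim sketch.
# Server, path, namespace, and names are placeholders for illustration only;
# a storage volume connection would sit on top of a claim like this.
cat > nfs-volume.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs.example.com
    path: /exports/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-nfs-pvc
  namespace: zen
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  volumeName: demo-nfs-pv
EOF
grep -q "kind: PersistentVolumeClaim" nfs-volume.yaml && echo "manifests written"
```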
- Improved backup and restore process
The backup and restore utility can now call hooks provided by Cloud Pak for Data services to perform the quiesce operation. Quiesce hooks offer optimizations and other enhancements compared to scaling down all Kubernetes resources. For example, services can be quiesced and unquiesced in a specific order, or suspended without bringing down their pods, which reduces the time it takes to bring applications down and back up. For more information, see Backing up the file system to a local repository or object store.
- Audit service enhancements
The Audit Logging Service in Cloud Pak for Data now supports monitoring an expanded set of events through the zen-audit-config configmap.
If you updated the zen-audit-config configmap to forward auditable events to an external security information and event management (SIEM) solution using the Cloud Pak for Data Audit Logging Service, you must update the zen-audit-config configmap to continue forwarding auditable events.
For example, if your configmap previously contained the following match directive:
<match export export.**>
update it to the following to continue forwarding auditable events:
<match export export.** records records.** syslog syslog.**>
You can also use the oc patch configmap command to update the zen-audit-config configmap. For more information, see Export IBM Cloud Pak for Data audit records to your security information and event management solution.
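The change amounts to widening the fluentd match pattern in the configmap. A minimal sketch of the updated fragment follows; the output plugin type and any settings inside the match block are assumptions, and the surrounding configmap keys are omitted.

```shell
# Write the updated fluentd match fragment; only the match pattern changes.
# The plugin type and settings inside the block are illustrative assumptions.
cat > audit-match.conf <<'EOF'
<match export export.** records records.** syslog syslog.**>
  @type remote_syslog
  # ... existing SIEM forwarding settings are unchanged ...
</match>
EOF

# With cluster access, the configmap could then be updated, for example with:
#   oc patch configmap zen-audit-config -n zen --type merge --patch-file patch.yaml
grep -q 'records\.\*\*' audit-match.conf && echo "fragment written"
```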
- Configure the idle web session timeout
A Cloud Pak for Data administrator can configure the idle web session timeout in accordance with your security and compliance requirements. If a user leaves their session idle in a web browser for the specified length of time, the user is automatically logged out of the web client.
- Auditing assets with IBM Guardium®
The method for integrating with IBM Guardium has changed. IBM Guardium is no longer available as an option from the Connections page. Instead, you can connect to your IBM Guardium appliances from the Platform configuration page.
For details, see Auditing your sensitive data with IBM Guardium.
- Common core services
Common core services can be installed once and used by multiple services. The common core services support:
- Deployment management
- Job management
- Metadata repositories
The common core services are automatically installed by services that rely on them. If you don't see these features in the web client, it's because none of the services that are installed on your environment rely on the common core services.
- Use your Cloud Pak for Data credentials to authenticate to a data source
Some data sources now allow you to use your Cloud Pak for Data credentials for authentication. After you log in to Cloud Pak for Data, you don't need to enter separate credentials for the data source connection, and if you change your Cloud Pak for Data password, you don't need to change the password for each data source connection. Data sources that support Cloud Pak for Data credentials include the option Use your Cloud Pak for Data credentials to authenticate to the data source on the data source connection page. When you add a new connection to a project, the option is available under Personal credentials.
The following data sources support Cloud Pak for Data credentials:
- HDFS via Execution Engine for Hadoop *
- Hive via Execution Engine for Hadoop *
- IBM Cognos® Analytics
- IBM Data Virtualization
- IBM Db2
- Storage volume *
* HDFS via Execution Engine for Hadoop, Hive via Execution Engine for Hadoop, and Storage volume support only Cloud Pak for Data credentials.
- Deprecated features
- LDAP group roles
You can no longer map an LDAP group directly to a Cloud Pak for Data role.
Instead, you can create user groups and add an LDAP group to the user group. When you create a user group, you can assign one or more roles to the user group.