Version 2.0.1 release notes

Cloud Pak for Data System version 2.0.1 is based on Red Hat OpenShift Container Platform 4.6.32. It supports Cloud Pak for Data 4.0.2 and Netezza Performance Server 11.2.2.0.

Upgrading

If you have an existing installation of Cloud Pak for Data System 1.x, and you plan to upgrade, open a support ticket. For more information on the process, see Advanced upgrade from versions 1.0.x.

What's new

Red Hat OpenShift Container Platform
Red Hat OpenShift Container Platform version 4.6.32 is now supported. For more information, see OpenShift Container Platform 4.6 release notes.
Cloud Pak for Data 4.0.2
Cloud Pak for Data 4.0 introduces operator-based installations, an improved user management experience, more platform monitoring data from the web client, and an improved connections interface. In addition, Cloud Pak for Data 4.0 includes new services, like IBM Match 360 with Watson™ and Product Master, and enhancements to existing services, such as Watson Knowledge Catalog, Decision Optimization, and OpenPages. For more information about these new features, read What's new in IBM Cloud Pak for Data.

Cloud Pak foundational services 3.11 images are available on the system for installation. Some basic services are already preinstalled.

Support for Netezza Performance Server 11.2.2.0
For a full list of new features and fixes, see 11.2.2.0 release notes.
Web consoles
System web console:
  • Home page resource usage now includes improved memory metrics.
  • When you use the bell icon on the top pane to view the most recent notifications, you can click a notification to view its details. All related notifications are also shown.
  • From the Software overview page, you can now view the list of all available instances of Cloud Pak for Data and launch these instances. Follow the Cloud Pak for Data documentation to add new instances.
  • From the Software overview page, you can now launch the Red Hat OpenShift console, Cloud Pak for Data web console, and the Netezza web console.
  • User management was removed from the Cloud Pak for Data System web console. To manage Cloud Pak for Data users, use the Cloud Pak for Data web console. To manage Cloud Pak for Data System users, use the apsysusermgmt command.
Red Hat OpenShift web console:

You can now launch the Red Hat OpenShift web console from the Software overview page in the system web console. See the known issue below for the configuration that this feature requires.

Multitenancy for Cloud Pak for Data
Multiple tenants of Cloud Pak for Data can now be installed on the system. See Installing Cloud Pak for Data tenants for more details.

If multiple tenants are installed, the instances are visible on the Software overview page in the system web console and can be launched from there.

Multitenancy for NPS
In Cloud Pak for Data System 1.x, the only way to have separate Netezza tenants was to purchase a second Cloud Pak for Data System. Each system required three nodes to run the Red Hat OpenShift control plane, plus extra nodes as Cloud Pak for Data worker nodes. With Netezza on Cloud Pak for Data System 2.0.1, this infrastructure can be shared, and the system can be expanded to support multiple Netezza instances. For more information, see Multi-tenancy on Netezza 11.2.2.0.
User management
  • System users (apadmin) and Cloud Pak for Data users are separated:
    • System users for administering Cloud Pak for Data System can be modified only with the apsysusermgmt command. For more information, see apsysusermgmt command.
    • Cloud Pak for Data application users can be managed only from the Cloud Pak for Data web console or with the cpd-cli user-mgmt command. For more information, refer to the Cloud Pak for Data documentation.
    LDAP must also be configured separately. For system users, configure LDAP as described in Integrating Cloud Pak for Data System with external directory servers; for application users, follow the Cloud Pak for Data documentation.
  • The apusermgmt command is replaced with apsysusermgmt. You can manage only Cloud Pak for Data System users with this command. For more information, see apsysusermgmt command. The command parameter -g|--systemrole Admin|User no longer differentiates between Cloud Pak for Data and Cloud Pak for Data System users.
  • Cloud Pak for Data System user groups are renamed as follows:
    • ibmapadmins is now ibmapsysadmins
    • ibmapusers is now ibmapsysusers
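As a quick check that the renamed groups are in effect, you can verify that they resolve on a node. This is a minimal sketch, assuming shell access to a node and that the groups are resolvable through the local name service:

  # Confirm that the renamed Cloud Pak for Data System groups resolve
  getent group ibmapsysadmins
  getent group ibmapsysusers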
Network configuration
  • Keepalived is used for managing floating IP addresses instead of Platform Manager.

    Keepalived is industry-standard software that is designed to manage floating IPs and clustered services. The daemon verifies at the network level that the node is able to take over the floating IP, and its health checks are much more robust than the previous Platform Manager mechanism. A minimal configuration sketch is shown after this list.

  • New procedure for operator-based network setup for Netezza deployments. Each production Netezza deployment requires a dedicated IP on Cloud Pak for Data System 2.0.x. You might need to add extra entries to the existing YAML configuration file as described in Operator-based network setup.
  • A more automated network configuration: the aposHouseConfig.py command checks for the most recent YAML file in the directory, validates it, and then asks for confirmation to run the configuration with this file. For more information, see Testing the YAML file and running playbooks.
  • Firewall-based port forwarding is replaced by HAProxy, a Layer 4 load balancer, as the means of getting traffic into Red Hat OpenShift. This allows for higher concurrent Red Hat OpenShift network bandwidth and better fault tolerance. A configuration sketch is shown after this list.
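The following is a minimal illustrative keepalived configuration for a single floating IP; the interface name, router ID, and address are placeholders rather than the values that the system generates:

  # /etc/keepalived/keepalived.conf (illustrative sketch only)
  vrrp_instance mgmt_vip {
      state BACKUP                # every node starts as BACKUP; priority decides the owner
      interface ens3              # placeholder management interface name
      virtual_router_id 51        # placeholder VRRP router ID
      priority 100
      advert_int 1                # advertisement interval in seconds
      virtual_ipaddress {
          203.0.113.10/24         # placeholder floating IP address
      }
  }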
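As an illustration of the Layer 4 approach, a TCP frontend and backend for the OpenShift API could look like the following sketch; the port and backend addresses are placeholders, not the configuration that ships with the system:

  # haproxy.cfg excerpt (illustrative sketch only)
  frontend openshift-api
      bind *:6443
      mode tcp                    # Layer 4: plain TCP pass-through, no HTTP inspection
      default_backend openshift-api-servers

  backend openshift-api-servers
      mode tcp
      balance roundrobin
      option tcp-check            # health checks route traffic away from failed nodes
      server master0 192.0.2.11:6443 check
      server master1 192.0.2.12:6443 check
      server master2 192.0.2.13:6443 check

Because the proxying happens at the TCP layer, connections are spread across the control plane nodes and a failed node is taken out of rotation by the health checks, which is what provides the improved bandwidth and fault tolerance.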
Storage

Red Hat OpenShift Data Foundation (formerly known as Red Hat OpenShift Container Storage) 4.6.5 is used to provide storage on the system instead of Portworx. For more information, see Platform storage.
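For example, a persistent volume claim that uses one of the OpenShift Data Foundation storage classes might look like the following sketch; the class name ocs-storagecluster-cephfs and the namespace my-project are assumptions, so first check the classes that are available on your system:

  # List the storage classes provided by OpenShift Data Foundation
  oc get storageclass

  # pvc.yaml - sample 10 GiB claim (illustrative only; adjust names, size, and access mode)
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: sample-odf-claim
    namespace: my-project                         # placeholder namespace
  spec:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 10Gi
    storageClassName: ocs-storagecluster-cephfs   # assumed ODF file storage class name

Apply the claim and check that it binds:

  oc apply -f pvc.yaml
  oc get pvc -n my-project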

If you plan to connect to external NFS storage volumes, see Enabling users to connect to external NFS storage volumes.

Platform Manager
The control plane nodes in version 2.0.x each host two Kernel-based Virtual Machines (KVMs).
  • e1n1: e1n1-master and e1n1-ldap
  • e1n2: e1n2-master and e1n2-ldap
  • e1n3: e1n3-master and e1n3-ldap
The worker nodes no longer run Kernel-based Virtual Machines (KVMs) under Red Hat Enterprise Linux (RHEL). Instead, they natively run Red Hat Enterprise Linux CoreOS, a lightweight container host.
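To confirm which operating system each node runs, you can check the OS-IMAGE column of the node listing. This is a read-only check, assuming you are logged in to the cluster with permission to list nodes:

  # Worker nodes report Red Hat Enterprise Linux CoreOS in the OS-IMAGE column
  oc get nodes -o wide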
ap commands changes:
  • The following commands are no longer supported:
    • ap apps
    • ap ds
  • The ap node command now displays more data: Fabric Node Name, MCP state (Machine Config Pool state), and node capabilities. The personality of the node that hosts Netezza is now called NPS instead of VDB.
  • When you set the node personality with the ap node set_personality command, you no longer use the storage class argument. OpenShift Data Foundation is the only option in version 2.x.
SNMP configuration
Cloud Pak for Data System version 2.x uses a new version of the MIB file - IBM-GTv2_MIB.txt. For more information, see The Cloud Pak for Data System MIB file.
Diagnostic data collection - apdiag improvements
A new feature is added to the diagnostic data collection utility apdiag for users of the Call Home capabilities. You can now run apdiag to collect new diagnostic data and append that collection to an existing Support Case. The command syntax is:
apdiag collect --components <components to collect from> --case-num <CSP case #>
  • This feature assumes you already have Call Home properly configured and enabled for the system.
  • For customers subscribed to the Blue Diamond service, this feature only works to append data to CSP cases previously opened via Call Home.
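An illustrative invocation is shown below; the component name and the case number are placeholders only, not values taken from a real system:

  # Collect diagnostics for a hypothetical component and append them to an existing support case
  apdiag collect --components platform --case-num TS001234567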
Call Home service changes
Starting with version 2.0, Call Home:
  • runs in an OpenShift pod instead of a Docker container;
  • is deployed by OpenShift, not by Platform Manager;
  • runs on one of the worker nodes, not on a control node.
The commands to display the Call Home status and to test the services changed. For more information, see Call Home.
Documentation
  • Cloud Pak for Data System documentation is now versioned for the 1.0.x and 2.0.x software versions. If a topic differs between these versions, you can select the version from the drop-down list at the top.
  • Common IBM Cloud Paks documentation, including information about the foundational services for all IBM Cloud Paks, is available at https://www.ibm.com/docs/en/cloud-paks/1.0.

Known issues

Connector nodes are not supported in Cloud Pak for Data System 2.0.1.
Do not upgrade 1.x systems with connector nodes installed.
After provisioning or upgrade to version 2.0.1, the web console is in WARNING state due to unhealthy deployments of zen-watchdog
After provisioning or upgrade, the consoles are accessible but the state is WARNING because the zen-watchdog pod is in Error or CrashLoopBackOff state. The following alert is raised:
451: Webconsole service is not ready
The warning can be ignored. The zen-watchdog pod does not impact the operation of the web console.
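To confirm that the warning comes from this pod, you can check its status. The following is a minimal check that searches all namespaces rather than assuming where the pod runs:

  # Locate the zen-watchdog pod and confirm the Error/CrashLoopBackOff state
  oc get pods --all-namespaces | grep zen-watchdog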
Cloud Pak for Data web console does not have the link to the Cloud Pak for Data System web console
There is no option to navigate to the System web console directly from the Cloud Pak for Data web console. You must manually enter the System web console URL instead, as described in Web console.
Red Hat OpenShift Console launch from the system web console requires extra configuration
When you click the Red Hat OpenShift Console launch icon, you are redirected to the login page, but when you try to log in, the URL does not open.

Workaround:

To use the console, you need to replace part of the URL so that the console displays correctly:

In the website URL, replace localcluster.fbond with the customer FQDN. For example, in the following URL:
https://oauth-openshift.apps.localcluster.fbond/oauth/authorize?client_id=console&redirect_uri=https%3A%2F%2Fopenshift-console.gt21-app.your.server.abc.com%2Fauth%2Fcallback&response_type=code&scope=user%3Afull&state=514ff224
localcluster.fbond must be replaced with gt21-app.your.server.abc.com
Routes for additional Cloud Pak for Data tenants must be configured
When you install multiple Cloud Pak for Data instances on the cluster, extra steps are required so that the system console correctly detects the route of each instance.

You must use the name external-cpd-route when you create the route in each Cloud Pak for Data namespace. Otherwise, the system console cannot detect the right route and fails to navigate to the corresponding Cloud Pak for Data console.
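As an illustration, a passthrough route with the required name could be created as in the following sketch; the service name ibm-nginx-svc is an assumption, so substitute the service that exposes the Cloud Pak for Data web console in your installation:

  # Create the route that the system console looks for (service name is an assumption)
  oc create route passthrough external-cpd-route --service=ibm-nginx-svc -n <CPD_namespace>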

Then, you need to manually run the following commands to make sure that the Software resource usage page works properly in the system console:
  1. Switch to system console namespace:
    oc project ap-console
  2. Copy the jwt-global folder from the nginx pod in the system console namespace to a local folder:
    oc cp ap-console/<nginx_pod_name>:user-home/_global_/config/jwt-global /<local_directory_name>/jwt-global
  3. Switch to that Cloud Pak for Data namespace:
    oc project <CPD_namespace> 
  4. Copy the jwt-global folder from the local folder to the nginx pod:
    oc cp /<local_directory_name>/jwt-global/ <CPD_namespace>/<nginx_pod_name>:user-home/_global_/config/
If mgt1 network is unplugged, the node stops functioning correctly
When the management network mgt1 cables are unplugged, the keepalived service fails and services might become unavailable. A system restart is needed to fix this issue. Contact Support for assistance.
Do not disconnect the management network at any time, because doing so stops the nodes from functioning.
When installing Cloud Pak for Data or its services, machineconfig update hangs with node unable to drain rook-ceph-osd pod
The machine config pool is degraded and cannot proceed. Specifically, this is caused by the rook-ceph-osd pod failing to drain, which in turn is caused by the rook-ceph-mgr pod in the openshift-storage namespace being stuck in the Init state.
Workaround:
  1. Run:
    oc get mcp
  2. If any pool is degraded, run:
    oc get nodes -o=custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
  3. If node.kubernetes.io/unschedulable appears in the taints of one of the nodes, run
    oc describe node <node_name>
    for that node.
  4. If there is a message about being unable to evict the rook-ceph-osd pod, run
    oc get pods -n openshift-storage | grep mgr
  5. If that rook-ceph-mgr pod is stuck in the Init state, delete that pod.
  6. After the pod is deleted, wait for a new pod to be created. Then, poll for it to be running; this can take about 5 minutes.
  7. When it is running, poll
    oc get mcp
    to verify that the pool is no longer degraded. Watch for it to report Updating=False. Then, you can resume the installation that was blocked. These final steps are sketched as commands after this list.
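For reference, steps 5 through 7 could be carried out with commands like the following sketch; the pod name is a placeholder taken from the output of step 4:

  # Delete the stuck rook-ceph-mgr pod (placeholder pod name)
  oc delete pod <rook-ceph-mgr-pod-name> -n openshift-storage

  # Watch for the replacement pod to reach the Running state; this can take about 5 minutes
  oc get pods -n openshift-storage -w | grep mgr

  # Verify that the machine config pool is no longer degraded and reports Updating=False
  oc get mcp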