Version 1.0.4 release notes

Hardware Platform Interface

New features:
  • Package-based firmware update for the mgtsw and fabsw switches to version 3.7.9
  • Lenovo firmware update automation on SD530 for all subcomponents, including newer XCC/uEFI levels that fix LED and DIMM issues.
  • RPC BOM support: enter the MTM in the SCJ and the devices are listed in sys_hw_info --api output. They are not monitored or configured.
  • sys_hw_check for checking configuration and BOM issues and for providing a firmware report (see the example after this list).
  • Initial automation for the IPS installation procedure to simplify switch setup.
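
For example, a minimal inventory and health check might look like the following sketch; only sys_hw_info --api and sys_hw_check themselves come from the items above, and any other detail is illustrative:

    # list the hardware inventory, including RPC entries added through the SCJ
    sys_hw_info --api
    # check configuration and BOM consistency and print the firmware report
    sys_hw_check
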
Fixes:
  • Node enclosure discovery improvements
  • Stability and reliability improvements across the board

Platform Manager

  • Reporting of the Portworx cluster and its elements with ap sw -d and ap fs (see the example after this list)
  • ap info reports PDU MTM and serial numbers if they are configured by the Hardware Platform Interface
  • Fixed an issue with supporting systems with more than 9 enclosures
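
For example, the new reporting can be reviewed with the existing ap commands. The invocations below come from the items above; output is not shown because it varies by system:

    # software view, including the Portworx cluster and its elements
    ap sw -d
    # filesystem/storage view
    ap fs
    # system information, including PDU MTM and serial numbers when configured
    ap info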

NodeOS

New features:
  • System provisioning support for Portworx
  • VM template with CRIO installation
  • Custom ibm-ca-os-openstorage-sdk-python package that includes the OpenStorage SDK Python modules
  • Security patch updates
  • lldpad package installation for wiring diagnostics
  • Network setup validation
  • NetworkManager-dispatcher-routing-rules package installation to handle routing gateway flip-flops
  • iperf package installation for network throughput testing (see the example after this list)
  • cobbler and fence-agents package installation to support SDC (Storage Data Clear)
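
For example, the newly installed diagnostic packages can be exercised as follows. The interface name and server host are placeholders, and only the lldpad (lldptool) and iperf tools come from the list above:

    # show LLDP neighbor information on an interface to verify wiring
    lldptool -t -n -i <interface>
    # basic throughput test between two nodes: server side, then client side
    iperf -s
    iperf -c <server-node>
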
Fixes:
  • Updated libvirt packages containing fixes for disk replacement functionality
  • Updated the kernel inside the VM to support RHOS version 3.11.154
  • Upgraded GPFS to version 4.2.3.18 (see the verification example after this list)
  • xCAT database cleanup at the start of provisioning
  • Removed DHCP/DNS traces when performing the node unset operation
  • Improved Mellanox driver installation
  • Fixed the missing ibm-apos-network-tools package installation
  • Applied a Red Hat patch for the statvfs system call failure
  • Updated the tzdata package to support Brazilian DST
  • Improved logic to generate unique MAC addresses for VMs
  • Updated firewall rules for Cloud Pak for Data System 
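
A quick way to confirm the updated levels on a node is to query the package database and GPFS directly. This is only a verification sketch; the exact package names and reported versions on your system might differ:

    # confirm updated package levels
    rpm -q libvirt tzdata kernel
    # confirm the GPFS level (4.2.3.18 after this release)
    /usr/lpp/mmfs/bin/mmdiag --version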

Storage

  • When upgrading to version 1.0.4.0, the system storage is reconfigured to support Portworx (a post-upgrade status check sketch follows this list).
    Note: Back up your data as it might be lost during the upgrade.
  • Fixes that allow drive replacement without requiring ap node disable/enable of any nodes.
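
After the upgrade, the Portworx-backed storage layer can be checked without disabling any node. pxctl is the same tool used in the Known issues workaround later in these notes; the commands below are only a status sketch:

    # overall Portworx cluster status, including node and pool health
    pxctl status
    # list volumes and their replication (HA) level
    pxctl volume list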

Software

IBM Cloud Pak for Data version 2.5
For more information on new features, see Version 2.5.0 release notes.
IBM Cloud Pak for Data services (add-ons)
You can now use the cpd command to install additional services. For more information, see Installing IBM Cloud Pak for Data services.
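As a rough illustration only: the installer is typically driven by a repository definition and an assembly (service) name. The flag names and file name below are assumptions, not taken from these notes; see Installing IBM Cloud Pak for Data services for the exact syntax:

    # illustrative invocation; repo.yaml, <assembly>, and <namespace> are placeholders
    cpd --repo repo.yaml --assembly <assembly> --namespace <namespace>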
Web console
  • Resource allocation (launched from the top left widget on the Resource usage page) shows the resources allocated to Cloud Pak for Data.
  • Fixed an issue with the web console not loading usage data.
IBM Performance Server for PostgreSQL version 11.0.3.0
For more information, see IBM Performance Server version 11.0.3.0 release notes.

Serviceability

  • Enhanced data collection via apdiag now supports Cloud Pak for Data 2.5.
  • apdiag now supports data collection for the Portworx storage subsystem (see the example after this list).
  • Fixed "Locate LEDs" so that they work properly for nodes and for NVMe drives when those devices are DISABLED.
  • Improved Call Home notification support for the IBM Cognitive Support Platform.
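
For example, a support data collection run can be started with apdiag. Only the collect subcommand is shown; any component selectors or additional options are assumptions, so check the apdiag help on your system:

    # collect diagnostic data, now covering Cloud Pak for Data 2.5 and Portworx
    apdiag collect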

Security

You can now change your platform user password when you log in to a control node via CLI. For more information, see Changing platform user password.

Known issues

  • During the upgrade procedure, the VM is not upgraded but re-installed from scratch, which might cause data loss. Contact IBM Support for more information on this issue.
  • Any custom configuration of the house network might be lost during an upgrade. Manual reconfiguration might be required.
  • IBM Performance Server is not upgraded automatically when you upgrade the system. You can upgrade IPS to version 11.0.3.0 manually, as described in Upgrading Cloud Pak for Data System.
  • If any Cloud Pak for Data services were deployed by using the portworx-shared-gp storage class with a single-replica storage volume, data loss might occur, and Portworx might not be able to disable a node because only a single replica exists.

    Workaround:

    Run the following steps from a control VM (or a node with Portworx where pxctl commands can be run):
    1. Increase replica count from 1 to 2:
      for i in $(pxctl v l | awk '$5 ~ /^1$/ {print $1}'); do pxctl v u --repl 2 $i; done
    2. Increase replica count from 2 to 3:
      for i in $(pxctl v l | awk '$5 ~ /^2$/ {print $1}'); do pxctl v u --repl 3 $i; done
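    As an optional check (not part of the documented workaround), list the volumes again and confirm that the replication (HA) count now shows 3 for the updated volumes:
      pxctl v l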