Version 1.0.13.0 release notes (October 2018)

Version 1.0.13.0 includes a number of enhancements and bug fixes.

What's new

Software administration

The appkg_install command syntax has been modified. Three new flags are added: emcinstaller, to install the EMC NetWorker client; emcnetworker, to manage the EMC NetWorker client process; and emcsettings, to back up or restore the EMC NetWorker installation.

Usage:
appkg_install emcinstaller --chart <chart_path> --path <install_source_directory>
appkg_install emcnetworker (--start | --stop | --status | --version)
appkg_install emcsettings (--restore | --backup)
For more information, see Installing third-party software with appkg_install command.
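For example, the new flags might be invoked as follows; this is an illustrative sketch only, and the chart and install source paths are placeholders, not values from these notes:
# illustrative invocations of the new appkg_install flags; paths are placeholders
appkg_install emcinstaller --chart /tmp/emc/networker-chart.tgz --path /tmp/emc
appkg_install emcnetworker --status
appkg_install emcsettings --backup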
Platform Manager
  • When the database container fails to start on a node, the node is disabled.
  • Platform Manager now sends event 153 after detecting recent kernel panics on a node.
    Note: On the first run after the upgrade, events might be sent for kernel panics that occurred in the past, but not earlier than 06/01/2018.
Security
On appliances with a password policy enabled on the platform by using ap_ldap_ppolicy.pl, any new users created by the admin are prompted to change their password on their first login.

Components

IBM Integrated Analytics System version 1.0.13.0 includes the following components:
You can now use IAS with DSX Local version 1.2, an improved version with many new features such as the new Model management and the new All Active Environments page. For more details, see What's new.
Note: DSX is not pre-installed and IBM Support must be involved in the installation process.

When you migrate from other databases, you can now use IBM Database Conversion Workbench 4.0.20. See the Release Notes for details.

Resolved issues

  • A security hole that allowed unprivileged user login from a remote system was closed. The hole was unintentionally opened by the way Brocade implemented log collection for hardware issues in its Fibre Channel switches. The IAS log collection has been updated, but if an installed system has ever had an issue on a Fibre Channel switch that required log collection, the hole remains until logs are collected. In such a situation, collect logs with the following command, targeting every node in the system to ensure the hole is closed:
    apdiag collect --components hw/switch/fcsw --fcsw hadomain1.fcswa --node node0101-fab
    This command may be run with the database online, and it will take approximately 5 minutes per node.
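    A minimal sketch of looping the collection over every node, assuming hypothetical node names and the switch name from the example above; substitute the node and switch names that apply to your system:
    # node names below are placeholders; list every node in your system
    for node in node0101-fab node0201-fab node0301-fab; do
        apdiag collect --components hw/switch/fcsw --fcsw hadomain1.fcswa --node $node
    done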
  • Fixed reporting of FSP events. In previous releases, if more than one FSP unrecoverable event was generated in a short period of time, Platform Manager sent events for all of them, but only one of these events contained the proper details of the FSP event; in the others, the details were reported as unknown.
  • Fixed an issue with apstop reporting failures on stopping containers even when the stop was successful.
  • Fixed an issue with ap hw reporting node status as OK instead of UNREACHABLE when the node is disabled and then turned off.
  • db_logprune now cleans up all the older log chains (C directories) under the archive log path, keeping only up to the 50 latest logs in the latest log chain (C directory).
  • Fixed a problem with copying or moving a file system backup that was caused by insufficient permissions. Ownership of the backup directory and images is now changed from the db2inst1 user to the bluadmin user to avoid such problems.
  • Fixed an issue with untagged images taking up Docker storage on nodes. Untagged images are now pruned after upgrades.

Known issues

  • When apsetup steps require redeploying the Db2 Warehouse container, user management settings, such as external LDAP, are not saved during redeployment.

  • Versions of apupgrade prior to 1.0.13.0 might return errors when running a pre-upgrade --upgrade-details call. Also, due to a change in apupgrade between versions, upgrades to 1.0.13.0 may fail with an unhandled error:

    Unhandled error when attempting upgrade. Stack trace of failed command logged to /var/log/appliance/apupgrade/20181017/apupgrade20181017121958.log.tracelog coercing to Unicode: need string or buffer, dict found <type 'exceptions.TypeError'>
    Workaround:
    Before upgrading to 1.0.13.0, upgrade apupgrade on its own. If the existing apupgrade is 1.0.11 or above, this can be done by running apupgrade with the --upgrade-apupgrade option:
    apupgrade --upgrade-directory /localrepo --use-version 1.0.13.0_release --upgrade-apupgrade
    If your current version is below 1.0.11, you need to manually install the apupgrade RPM, which requires root access:
    rpm -Uvh /localrepo/1.0.13.0_release/EXTRACT/bundle/app_img/apupgrade/apupgrade-1.0.13.0-SNAPSHOT-release-1.0.13.0.noarch.rpm --test
    rpm -Uvh /localrepo/1.0.13.0_release/EXTRACT/bundle/app_img/apupgrade/apupgrade-1.0.13.0-SNAPSHOT-release-1.0.13.0.noarch.rpm
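    To determine which path applies, the currently installed apupgrade version can be checked with a standard RPM query; this query is not part of the documented procedure, just a common way to verify the package level:
    rpm -q apupgrade    # reports the installed apupgrade package version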
  • When running apupgrade to start the upgrade on the appliance, Platform Manager might time out, resulting in node0101 (or the current head node) rebooting or, in some cases, powering itself off.

    Workaround:
    To avoid this condition, restart the database with apstop -p --service followed by apstart -p before upgrading.
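    A minimal sketch of the workaround sequence, assuming the upgrade is then started with an apupgrade invocation like the one shown earlier in these notes (substitute your actual repository path and target version):
    # restart the database per the workaround above
    apstop -p --service
    apstart -p
    # illustrative upgrade invocation; adjust the directory and version for your environment
    apupgrade --upgrade-directory /localrepo --use-version 1.0.13.0_release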