IBM Integrated Analytics System 1.0.26.4 release notes (February 2022)

Version 1.0.26.4 replaces versions 1.0.26.0, 1.0.26.1, 1.0.26.2, and 1.0.26.3. It has the same content as these versions and, additionally, upgrades the web console container and the Db2 Warehouse container to address security issues.

Important:
  1. If you are on IAS 1.0.19.2 or lower, you must upgrade to IAS 1.0.19.7 before you can upgrade to IAS 1.0.26.4.
  2. Version 1.0.25 or 1.0.26.x is a required upgrade path to get onto Podman. You can upgrade directly to 1.0.26.4 if you are on 1.0.19.5 or later.
  3. If you are upgrading to IAS 1.0.26.4 with Windows AD or a custom bluadmin group name, see Upgrading to IAS 1.0.26 or Db2 Warehouse 11.5.6 with a custom bluadmin group name.
  4. For information about upgrade times for version 1.0.26.4, see 1.0.26.4 upgrade times.
  5. If you want to preserve SSL certificate files during upgrade, you must specify the old certificates by using SSL environment variables in the dashdb.env file. For more information, see Preserving old certificate files during upgrade.
  6. When encrypting systems with DSNs, it is recommended that you encrypt the devices one at a time for reliability.

What's new

Security issues addressed in versions 1.0.26.x
1.0.26.1:
  • Platform Manager certificate patch
  • Call Home trust file
1.0.26.2:
  • Log4j vulnerabilities - Full upgrade is required to address these vulnerabilities.
1.0.26.4:
  • The security vulnerabilities in the web console container and the Db2 Warehouse container (Samba, Polkit, Log4j 2.17.1)
    Note: You must also install security patch 7.9.21.12.SP6 to fix these vulnerabilities in the platform.
Switch firmware upgrade
If your system is already upgraded to version 1.0.26.4 but your fabric or management switch firmware is at a lower level, you can run apupgrade with the --update-switches option to upgrade only the switch firmware.
Example command:
apupgrade --upgrade --upgrade-directory /localrepo --use-version 1.0.26.4_release --update-switches

In a sample test environment (a multi-rack system with tiered storage), the switch upgrade took 3 hours 10 minutes (fabric and management switches combined).

This option is available only if you are on version 1.0.26.2 or later.

GPFS update
Updated GPFS to version 5.0.5.5.9.
AFMDR snapshot cleanup
When using the disaster recovery solution, you can now use the apdr clean --snapshot command to delete older snapshots from both the primary and secondary systems. For more details, see Managing snapshots.
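For example (a minimal invocation; see Managing snapshots for the available options):
    apdr clean --snapshot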
SNMP
Added the option to send INFORM notifications to expand the reporting capabilities. For more information, see Configuring SNMP trap notifications.
Backup and restore enhancements
Monitoring
Schema backup and restore report details about the job to the MON_GET_UTILITY table function and to change history event monitors that track the UTILSTART and UTILSTOP events. Details include type, timestamp, number of tables, session parallelism, compression, and more.
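For example, a running schema backup or restore job can be inspected from any Db2 session by querying the MON_GET_UTILITY table function (a minimal sketch; the column selection is illustrative):
    db2 "SELECT UTILITY_TYPE, UTILITY_START_TIME, UTILITY_DETAIL FROM TABLE(MON_GET_UTILITY(-2)) AS T"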
WLM (Workload Manager) integration
Each database connection that is established by schema backup and restore uses WLM_SET_CLIENT_INFO to associate itself with the utility.
  • CLIENT_APPLNAME is set in the following format:
    <utility name>:<scope>:<name>
    Example:
    db_backup:SCHEMA:PROD
    db_restore:table:JRNL
  • CLIENT_ACCTNG is set to the corresponding timestamp.
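Because connections identify themselves this way, they can be mapped to a dedicated WLM workload. A minimal sketch, where the workload name SCHEMA_BACKUP_WL is illustrative and the CLIENT_APPLNAME value matches the backup example above:
    db2 "CREATE WORKLOAD SCHEMA_BACKUP_WL CURRENT CLIENT_APPLNAME('db_backup:SCHEMA:PROD')"
    db2 "GRANT USAGE ON WORKLOAD SCHEMA_BACKUP_WL TO PUBLIC"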
Running db_backup concurrently
Multiple concurrent schema-type db_backup instances are allowed. db_backup -status prints information about all active instances.
db_backup -abort takes a new parameter, the PID of the process to end (available from the -status output).
Database backups are still limited to a single instance.
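For example, to end one of the active instances (assuming the PID is passed directly after -abort; the PID 12345 is illustrative and comes from the -status output):
    db_backup -status
    db_backup -abort 12345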
Tables enabled for Row and Column Access Control (RCAC)
Backup and restore operations can now process tables that have RCAC enabled. A new schema-scope privilege must be granted by SECADM: LOGICAL BACKUP AND RESTORE OF RCAC PROTECTED DATA.
db_backup captures RCAC DDL and all table data into the backup image.
db_restore is extended with two new options: -keep-rcac (default) and -replace-rcac.
-keep-rcac means that RCAC-enabled tables are truncated (instead of dropped and re-created), preserving all RCAC rules. This option is only allowed when the target DDL (for all RCAC-enabled tables) exactly matches what is stored in the backup image.
-replace-rcac means that RCAC-enabled tables are dropped and re-created, and all rules are restored and re-enabled. This option requires the user to hold SECADM authority.
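A sketch of a schema restore that re-creates RCAC-protected tables. The -replace-rcac option is from this release; the -type, -path, and -timestamp flags follow the patterns shown elsewhere in these notes, and the values are illustrative:
    db_restore -type SCHEMA -path /backup/prod -timestamp TIMESTAMP -replace-rcac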
Support for system-period temporal tables
Schema-type backup and restore can capture and restore system-period temporal tables, including the associated history table. Both tables must be part of the same schema. For more information, see https://www.ibm.com/docs/en/db2/11.5?topic=tables-time-travel-query-using-temporal
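For example, a base table and its history table created in the same schema are both captured by a schema backup (the PROD schema and the table definitions are illustrative):
    db2 "CREATE TABLE PROD.POLICY (ID INT NOT NULL PRIMARY KEY, COVERAGE INT,
         SYS_START TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
         SYS_END TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
         TS_ID TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
         PERIOD SYSTEM_TIME (SYS_START, SYS_END))"
    db2 "CREATE TABLE PROD.POLICY_HISTORY LIKE PROD.POLICY"
    db2 "ALTER TABLE PROD.POLICY ADD VERSIONING USE HISTORY TABLE PROD.POLICY_HISTORY"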
Partitioned database syntax for database backups
A new option for db_backup and db_restore uses a separate folder for each data partition (MLN). Backup creates a new folder under the target path, and restore expects the data to be in that folder. The name of the folder is the partition number.
db_backup -type FRH -path PATH -partitioned-backup
The target path is appended with /MLN $4N.
db_restore -type FRH -path PATH -timestamp TIMESTAMP -partitioned-restore
If the path includes /MLN $4N, then -partitioned-restore is optional.
If the path does not include /MLN $4N, -partitioned-restore is mandatory.
Performance optimizations for schema backup
  • The backup table order is determined by table size in pages at the start of the backup. The largest tables are processed first. This allows db_backup to make better use of session parallelism and makes run times more deterministic. In addition, it avoids a scenario where one large table is left until the end.
  • Some administrative tasks that complete before data unloads are now also subject to -sessions parallelism.
  • Incremental and delta schema backups skip table unload if the table has not been modified since previous backup.
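The degree of parallelism remains controlled by the -sessions option; a sketch (flags other than -sessions are illustrative):
    db_backup -type SCHEMA -path /backup/prod -sessions 8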
Crontab feature for backup and restore
An override option is added to add scheduled backups to the crontab. Follow these steps (a sketch of the result follows the steps):
  1. Change the default DB_BACKUP_SCHEDULE value in the /opt/ibm/appliance/storage/head/dashdb.env file. Running apinit is required for the change to take effect.
  2. Find the db_backup script at $CRON_SCRIPTS_DIR/db_backup.sh.
  3. Add the db_backup logic to this script.
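A minimal sketch of the two pieces, assuming DB_BACKUP_SCHEDULE holds a cron schedule expression (both the value format and the backup flags are assumptions, not confirmed syntax):
    # /opt/ibm/appliance/storage/head/dashdb.env
    DB_BACKUP_SCHEDULE="0 2 * * *"    # nightly at 02:00

    #!/bin/bash
    # $CRON_SCRIPTS_DIR/db_backup.sh - invoked on the schedule above
    db_backup -type SCHEMA -path /backup/prod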
Limitations:
  • Only the root user can update $DB_BACKUP_SCHEDULE.
  • Only a single cron script for scheduling is allowed.
  • The crontab configurations are not preserved after upgrading IAS.
R support for UDFs
Added R support for UDFs. For more information, see Creating UDXs in R.
Trickle-feed support
Added support for inserting data into column-organized tables via a trickle feed (SQL statements that insert a small number of new rows). For more information, see https://www.ibm.com/docs/en/db2-warehouse?topic=variables-column-organized-tables.

Adaptive Workload Management (WLM) for row-store tables
By default, this feature is enabled on column-store systems. Support for row-store tables is now added, and it can be enabled on row-store systems by using the procedure documented at https://www.ibm.com/docs/en/db2-warehouse?topic=manager-enabling-adaptive-workload.

Components

Db2 Warehouse 11.5.6.0
See What's New in Db2 Warehouse.
Db2 Engine 11.5.6.0
To learn more about the changes that are introduced in Db2 11.5.6.0, read https://www.ibm.com/docs/en/db2/11.5?topic=database-whats-new in the Db2 documentation.

Known issues

When upgrading from 1.0.25.0 > SP3 > 1.0.26.4 or 1.0.25.0 > SP4 > 1.0.26.4, the preliminary check fails
When upgrading from version 1.0.25.0 with a security patch, the upgrade preliminary check might fail with an error similar to the following in the upgrade log:
2022-01-10 04:59:45 Logging to: /var/log/appliance/apupgrade/20220110/apupgrade20220110045945.log
2022-01-10 04:59:45 INFO: Command used for this run: apupgrade --preliminary-check --ignore-battery-reconditioning --upgrade-directory /localrepo --use-version 1.0.26.3-20220109.231826-5-release
2022-01-10 04:59:45
2022-01-10 04:59:45 INFO: Checking for installed RPM version.
2022-01-10 04:59:47 INFO: Verifying if the upgrade is valid.
2022-01-10 04:59:47 Upgrade is not supported from SP3 to 1.0.26.3-20220109.231826-5-release.
2022-01-10 04:59:47 Unhandled error when attempting upgrade. Stack trace of failed command logged to /var/log/appliance/apupgrade/20220110/apupgrade20220110045945.log.tracelog
Upgrade is not supported from SP3 to 1.0.26.3-20220109.231826-5-release.
<type 'exceptions.Exception'>
2022-01-10 04:59:47 ERROR: Exception: Traceback (most recent call last):

WORKAROUND:

Manually install the upgrade RPM by following these steps:
  1. Create a directory for the upgrade bundle, move the bundle inside, and create another directory named EXTRACT. Then untar the bundle by running the command:
    tar -xvf <bundle_name> -C EXTRACT
    The resulting directory layout is as follows:
         |-- <version_dir>
              |-- EXTRACT
              |-- bundle
  2. Find the RPM by running the following command, replacing <version_dir> with the name of the directory you created.
    find /localrepo/<version_dir> -name "apupgrade*.rpm"
  3. Run the following command and replace <full_rpm_path> with the output from the previous command:
    rpm -Uvh <full_rpm_path>
If SELinux is set to enforcing mode, EMC restore and installation fail on the system

db_migrate restore from backup might run into an out-of-memory issue

RCAC (Row and Column Access Control) limitations
  • Using the -keep-rcac option is not allowed when, for at least one table in the target schema, an RCAC rule definition includes a dependency on another table or object, even one in the same schema as the RCAC table. You can use the option if both row and column access control and all RCAC rules are disabled on that table.
  • Restoring a schema that contains an RCAC mask or permission that depends on another table or object fails, even with the recommended -replace-rcac option. To facilitate the restore in this scenario, either specify the -target-schema option or manually drop the schema before the restore.
Concurrent schema restore results in: ERROR: An incompatible backup or restore is in progress. Please try again once it is complete.
In 1.0.26.4, concurrent schema backups are supported, but concurrent schema restores are not. Concurrent backup and restore of different schemas is also not supported.
Partitioned backup failure due to insufficient space does not clean up after failure
When a partitioned backup is stored on a filesystem without enough disk space, the following error is generated and cleanup fails:
ERROR: [Errno 39] Directory not empty: '/scratch/1026part/backup_onl_1'
The tracelog provides more accurate information:
ERROR: There is not enough space available in the following backup path(s): /scratch/1026part
Restoring partitioned backup fails in the web console
When a partitioned backup is listed in the console, there is no indication that it is partitioned, and it is not possible to run the restore with the partitioned option in the console. The standard restore fails. Use the db_restore command with partitioned backups.
Generate DDL is not working from the console
When trying to generate the DDL from the web console menu Administer > Table > (select any table) > Generate DDL, an error message with the following details is displayed:
sh: line 1: /mnt/blumeta0/home/dsadm/sqllib/db2profile: No such file or directory.
WORKAROUND:
  1. Add the following line in /opt/ibm/dsserver/Config/dswebserver.properties:
    DSWEB_SSHUTIL_LOCALEXEC_DB2_INST=db2inst1
  2. Restart the console:
    su - dsadm -c /opt/ibm/dsserver/bin/restart.sh
Generate DDL is not working for the Order by create time option
When trying to generate the DDL from the web console menu Administer > Table > (select any table) > Generate DDL > Order by create time, the following error message is displayed:
The action was not completed.
The Generate DDL feature is not supported with the Order by create time option.
Secondary system in AFMDR fails to come up after upgrading to 1.0.26.4
The upgrade from 1.0.25.0 to 1.0.26.4 fails on the secondary system while updating the database ENCRLIB configuration. The following error is returned in Db2wh_local.log:
ERROR: Failed to update database configuration ENCRLIB for BLUDB after the update
WORKAROUND:
  1. After the upgrade on both systems is complete, stop the appliance on both the primary and secondary systems by using the apstop -v command:
    [apuser@sail82-t07-n1 ~]$ apstop -v
    Successfully deactivated system
  2. Resume replication from the primary system by using the apdr resume command:
    [apuser@sail81-t07-n1 ~]$ apdr resume
    Getting APDR status
    Successfully resumed DR replication between Primary and Secondary
  3. To check the replication queue status, use the apdr status --replqueue command:
    [apuser@sail81-t07-n1 ~]$ apdr status --replqueue
    Directory                                                    Queue Status   Sync Status
    /opt/ibm/appliance/storage/head/home/db2inst1/db2/keystore  Ready          In-Sync
    /opt/ibm/appliance/storage/data/db2inst1                    Ready          In-Sync
    /opt/ibm/appliance/storage/local/db2inst1                   Starting       Behind
    /opt/ibm/appliance/storage/local/db2archive                 Ready          In-Sync
    /opt/ibm/appliance/storage/head/db2_config                  Ready          In-Sync
  4. You can start using the primary system after starting the appliance by using the apstart -w command.
    Note: Once all the filesets are in the Ready state, the primary system is synced with the secondary system.
Backing up with IBM Spectrum Protect (TSM) does not work in the console
Backing up with IBM Spectrum Protect (TSM) is not supported from the web console. You can use the db_backup command instead.
apnetTest health command reports a failure when checking component versions
Example error:
Net Compare Failed
[NodeHealth-node0101]                           Fail
	fab1[version] expected=1.712.30-0 actual=1.713.36-0 storm 7.13.1.0
The expected version value is hardcoded and might be lower than the actual driver version. This error is benign and can be ignored.
Tiered storage systems (with DSNs)

When encrypting tiered storage systems (DSN storage), it is recommended that you encrypt the devices one at a time for reliability.

Management switch replacement procedure error
During a management switch replacement procedure, the following issue might be seen when running sys_hw_config mgtsw:
IOError: [Errno 2] No such file or directory

Despite the error, the switch is configured properly, but this issue can affect the replacement of missing backup patch panel files.

WORKAROUND:

Before running sys_hw_config, run the following command:
/opt/ibm/appliance/platform/xcat/scripts/security/known_hosts_cleaner.py mgtsw01x