Integrated Analytics System 1.0.30.0 release notes (September 2024)

Version 1.0.30.0 replaces version 1.0.28.2 and includes an RHEL 8 upgrade, firmware updates, and issue fixes.

Note: IAS 1.0.30.0 and 1.0.30.0.IF1 must be upgraded together to avoid issues caused by an unsupported GPFS version.

What's new

IAS version 1.0.30.0 comes with:
  • RHEL 8.8 operating system (kernel version 4.18.0-477.27.1.el8_8.ppc64le)
  • Python upgraded to version 3.9
  • Podman container runtime version 4.4.1
  • Compliance with the DISA STIG V1R9 profile

Components

Db2 Warehouse 11.5.9.0-cn1
See What's New in Db2 Warehouse.
Db2 Engine 11.5.9
To learn more about the changes that are introduced in Db2 11.5.9, see What's new in the Db2 documentation.

Upgrade path

The minimum version that is required for upgrading to 1.0.30.0 is 1.0.25.0.

The minimum security patch that is required for upgrading to 1.0.30.0 is SP24.

To upgrade to version 1.0.30.0, see Upgrading to version 1.0.30.0.

Limitations

Note:
  • The Integrated Analytics System 1.0.30.0 no longer supports Spark functions.
  • The Integrated Analytics System does not support the Z Linux® platform for Db2 Warehouse.
  • The Integrated Analytics System does not support AFM-DR.
  • The FDM tool is deprecated and no longer supported.

Resolved issues

IAS version 1.0.30.0 includes the following fixes and enhancements:
  • Added prechecks for AFM-DR, Guardium server, NFS mount, and TSM external configuration.
  • Fixed vulnerabilities in the Python and httpd packages.
  • Included the latest nginx version (1.22).
  • Included the Backup and Restore utility for local and external platform users.
  • FSN firmware file copy failures are now handled gracefully during configuration and update.
  • All management and Fab switch CLAGs are now updated with unique MAC addresses.
  • Fixed issues in the svsutils, apnetHealth, and apnetHostname utilities.

Known issues

COISBAR schema-level backups in the Integrated Analytics System container show version 270 instead of 300
COISBAR schema-level backups inside the Integrated Analytics System container show version 270 instead of 300. Non-COISBAR schema-level backups show version 300.
Example:
[bluadmin@x86-db2wh1 - Db2wh test]$ db_backup -history
Acquiring a unique timestamp and log file...
Logging to /mnt/bludata0/scratch/bluadmin_BNR/logs/20240625065752/backup20240625065752.log
   TIMESTAMP      START_TIME       END_TIME      OPERATIONTYPE       SCOPE        SCHEMA_NAME      SESSIONS        LOCATION               VERSION
--------------------------------------------------------------------------------------------------------------------------------------------
20240625065725  20240625065725  20240625065743      ONLINE          SCHEMA      CMBS_P_DATAMART        -        /mnt/external/backups/              300
20240625065703  20240625065703  20240625065709      ONLINE          SCHEMA          COISBAR            -        /mnt/external/backups/              270
20240625065041  20240625065041  20240625065106      ONLINE         DATABASE          NONE              1        /mnt/external/backups/test/backup_onl_1
20240624154918  20240624154918  20240624154930      ONLINE          SCHEMA      TEST_AUTO_SCHEMA               -        /mnt/external/backups/              300
20240624154845  20240624154845  20240624154857      ONLINE          SCHEMA      TEST_AUTO_SCHEMA               -        /mnt/external/backups/              300
20240624154813  20240624154813  20240624154824      ONLINE          SCHEMA      TEST_AUTO_SCHEMA               -        /mnt/external/backups/              300
20240624154739  20240624154739  20240624154750      ONLINE          SCHEMA      TEST_AUTO_SCHEMA               -        /mnt/external/backups/              300
20240624154706  20240624154706  20240624154717      ONLINE          SCHEMA      TEST_AUTO_SCHEMA               -        /mnt/external/backups/              300
20240624154633  20240624154633  20240624154644      ONLINE          SCHEMA      TEST_AUTO_SCHEMA               -        /mnt/external/backups/              300
20240624154604  20240624154604  20240624154616      ONLINE          SCHEMA      TEST_AUTO_SCHEMA               -        /mnt/external/backups/              300


Please find the history at this location: /mnt/bludata0/scratch/bluadmin_BNR/logs/backup_history.txt

[bluadmin@x86-db2wh1 - Db2wh test]$
The version field in db_backup -history displays the backup metadata version. For RMT-enabled schema backups, the version column displays 270. For non-RMT schemas, it displays 300.
Note: This applies only to the backup metadata version, not the backup application version. The db_backup -version command shows version 1.0.30.0.
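To spot the affected entries quickly, the history can be filtered on the version column. This is a sketch, not an official db_backup option, and it assumes the -history layout shown above, where the metadata version is the last field of each schema-level row:

```shell
# Hypothetical filter: print only the backup rows whose last column
# (the metadata version) is 270, i.e. the RMT-enabled schema backups.
db_backup -history | awk '$NF == "270"'
```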
The ap version -s command shows incorrect callhome version output
After the upgrade, the ap version -s command might show an incorrect callhome version.
Workaround
  1. Make sure the callhome container is installed and deployed on all nodes. Run the following command to verify; if callhome is not deployed with the correct version on a node, contact IBM support:
    for n in $(/opt/ibm/appliance/platform/xcat/scripts/xcat/display_nodes.py);do printf "${n}: ";ssh ${n} 'podman images | grep callhome';done
    Example:
    [root@sail67-t07-n1 ~]# for n in $(/opt/ibm/appliance/platform/xcat/scripts/xcat/display_nodes.py);do printf "${n}: ";ssh ${n} 'podman images | grep callhome';done
    node0101: localhost/callhome/release-1.0.30.0  1.2.0.0-ppc64le    211deaeee6c8  6 months ago  1.31 GB
    node0102: localhost/callhome/release-1.0.30.0  1.2.0.0-ppc64le    211deaeee6c8  6 months ago  1.31 GB
    node0103: localhost/callhome/release-1.0.30.0  1.2.0.0-ppc64le    211deaeee6c8  6 months ago  1.31 GB
    node0104: localhost/callhome/release-1.0.30.0  1.2.0.0-ppc64le    211deaeee6c8  6 months ago  1.31 GB
    node0105: localhost/callhome/release-1.0.30.0  1.2.0.0-ppc64le    211deaeee6c8  6 months ago  1.31 GB
    node0106: localhost/callhome/release-1.0.30.0  1.2.0.0-ppc64le    211deaeee6c8  6 months ago  1.31 GB
    node0107: localhost/callhome/release-1.0.30.0  1.2.0.0-ppc64le    211deaeee6c8  6 months ago  1.31 GB
    Note: The number of nodes might vary, but each node should return output.
  2. Run the following command to see where callhome is running. Callhome should be running only on the master node; if all callhome containers appear as exited, that is the most common cause of this problem:
    for n in $(/opt/ibm/appliance/platform/xcat/scripts/xcat/display_nodes.py);do printf "${n}: ";ssh ${n}  'podman ps -a | grep callhome';done
    Example:
    # for n in $(/opt/ibm/appliance/platform/xcat/scripts/xcat/display_nodes.py);do printf "${n}: ";ssh ${n} 'podman ps -a | grep callhome';done
    node0101: 0fa4b4b8aed4  localhost/callhome/release-1.0.30.0:1.2.0.0-ppc64le  /usr/sbin/init  6 days ago  Up 5 days               callhome
    node0102: d8db29e006d4  localhost/callhome/release-1.0.30.0:1.2.0.0-ppc64le  /usr/sbin/init  6 days ago  Exited (137) 6 days ago              callhome
    node0103: da3f7032ed23  localhost/callhome/release-1.0.30.0:1.2.0.0-ppc64le  /usr/sbin/init  6 days ago  Exited (137) 6 days ago              callhome
    node0104: 7309a7e52511  localhost/callhome/release-1.0.30.0:1.2.0.0-ppc64le  /usr/sbin/init  6 days ago  Exited (137) 6 days ago              callhome
    node0105: 972acbe270be  localhost/callhome/release-1.0.30.0:1.2.0.0-ppc64le  /usr/sbin/init  6 days ago  Exited (137) 6 days ago              callhome
    node0106: ced4a393306c  localhost/callhome/release-1.0.30.0:1.2.0.0-ppc64le  /usr/sbin/init  6 days ago  Exited (137) 6 days ago              callhome
    node0107: bfb9f7aaf787  localhost/callhome/release-1.0.30.0:1.2.0.0-ppc64le  /usr/sbin/init  6 days ago  Exited (137) 6 days ago              callhome
    
    Note: This indicates that callhome is operating on node0101 (which is expected) and not on any other nodes.
    Check the master node:
    [root@sail67-t07-n1 ~]# ap node -d
    +-----------------+---------+-----------+-----------+--------+
    | Node            |   State | Monitored | Is Master | Is HUB |
    +-----------------+---------+-----------+-----------+--------+
    | hadomain1.node1 | ENABLED |       YES |       YES |    YES |
    | hadomain1.node2 | ENABLED |       YES |        NO |     NO |
    | hadomain1.node3 | ENABLED |       YES |        NO |     NO |
    | hadomain1.node4 | ENABLED |       YES |        NO |     NO |
    | hadomain1.node5 | ENABLED |       YES |        NO |     NO |
    | hadomain1.node6 | ENABLED |       YES |        NO |     NO |
    | hadomain1.node7 | ENABLED |       YES |        NO |     NO |
    +-----------------+---------+-----------+-----------+--------+
  3. If callhome is not running, or is running on a node other than the master, stop it on all nodes except the master. By default, node0101 is the master node.
  4. Run the following command to start callhome if it is not running:
    podman start callhome
  5. Run the following command to check that callhome stays active for more than a few minutes:
    podman ps | grep callhome
    Example:
    0fa4b4b8aed4  localhost/callhome/release-1.0.30.0:1.2.0.0-ppc64le  /usr/sbin/init  6 days ago  Up 5 days               callhome
    Look at the callhome log. The last line should be 'Reached target Multi-User System'.
    
    [root@sail67-t07-n1 ~]# podman logs callhome | tail -10
             Starting LSB: Supports the direct execution of binary formats....
             Starting SystemD Unit file to manage callhome as a service...
    [  OK  ] Reached target sshd-keygen.target.
             Starting OpenSSH server daemon...
    [  OK  ] Started D-Bus System Message Bus.
    [FAILED] Failed to start OpenSSH server daemon.
    See 'systemctl status sshd.service' for details.
    [  OK  ] Started LSB: Supports the direct execution of binary formats..
    [  OK  ] Started SystemD Unit file to manage callhome as a service.
    [  OK  ] Reached target Multi-User System.
  6. Run the following command to take a backup of the current component_version_info.json file:
    mv /var/log/appliance/apupgrade/component_version_info.json  /var/log/appliance/apupgrade/component_version_info.json.bkp
  7. Rerun the following command:
    apupgrade --use-version 1.0.30.0_release --get-installed-components --upgrade-directory /localrepo
  8. Run ap version -s to verify the correct output.
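Step 3 above does not include a command. A possible sketch, reusing the same display_nodes.py loop as the verification steps and assuming node0101 is the master (adjust MASTER if your master node differs):

```shell
# Stop the callhome container on every node except the master.
# MASTER is an assumption based on the default layout described above.
MASTER=node0101
for n in $(/opt/ibm/appliance/platform/xcat/scripts/xcat/display_nodes.py); do
  if [ "$n" != "$MASTER" ]; then
    printf '%s: ' "$n"
    ssh "$n" 'podman stop callhome'
  fi
done
```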
Upgrade might fail when upgrading from 1.0.25.0 to 1.0.30.0
When upgrading from 1.0.25.0 to 1.0.30.0, the upgrade might fail with the following error:
2025-01-28 04:48:27 TRACE: In method logger.py:log_error:142 from parent method exception_reraiser.py:handle_non_cmd_runner_exception:26 with args
                               msg = Exception: Traceback (most recent call last):
                             File "/opt/ibm/appliance/apupgrade/bin/apupgrade", line 1343, in main_dont_die
                               self.prereqs.ensure_prereqs_met(self.repo, essentials, security, is_database_and_console)
                             File "/opt/ibm/appliance/apupgrade/bin/../modules/ibm/ca/apupgrade_prereqs/apupgrade_prereqs.py", line 70, in ensure_prereqs_met
                               self.ensure_upgrade_is_valid(is_essentials, is_security, is_database_and_console)
                             File "/opt/ibm/appliance/apupgrade/bin/../modules/ibm/ca/apupgrade_prereqs/iias_apupgrade_prereqs.py", line 89, in ensure_upgrade_is_valid
                               raise Exception(self._get_msg('is_valid_upgrade__err_msg', self.installed_version, self.upgrade_version))
                           Exception: Unable to upgrade from 1.0.25.0-20210409131801b20285 to 1.0.30.0_release.
Workaround
  1. Run the following command to update the check to accept all installed versions greater than 1.0.24.0:
    sed -i "s/base_upgrade_version == Version('1.0.25.0')/base_installed_version > Version('1.0.24.0')/" /opt/ibm/appliance/apupgrade/modules/ibm/ca/apupgrade_prereqs/iias_apupgrade_prereqs.py
  2. Restart the upgrade.
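Before restarting the upgrade, the sed edit can be confirmed with a grep. This is an optional check and an assumption on our part, not part of the documented workaround; expect exactly one matching line:

```shell
# Confirm the replacement text from the sed command is now present
# in the prereqs module.
grep -n "base_installed_version > Version('1.0.24.0')" \
  /opt/ibm/appliance/apupgrade/modules/ibm/ca/apupgrade_prereqs/iias_apupgrade_prereqs.py
```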