Integrated Analytics System 1.0.31.0 release notes (May 2025)

Version 1.0.31.0 replaces version 1.0.30.0 IF1 and includes the RHEL 8.10 upgrade and issue fixes.

What's new

Integrated Analytics System 1.0.31.0 comes with:
  • RHEL 8.10 operating system (kernel version 4.18.0-553.33.1.el8_10.ppc64le)
  • GPFS updated to version 5.1.9.9
  • Java updated to ibm-java-ppc64le-sdk-8.0-8.30.ppc64le.rpm
  • Podman updated to version 4.9.4
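
One way to confirm the updated component levels after installation is to run the standard version commands on a node (a sketch; command paths and exact output depend on the node's configuration):
  uname -r                              # kernel level (RHEL 8.10)
  /usr/lpp/mmfs/bin/mmdiag --version    # GPFS build (path assumes the default GPFS installation location)
  java -version                         # IBM Java SDK
  podman --version                      # Podman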

Components

Db2 Warehouse 11.5.9.0-cn2
See What's New in Db2 Warehouse.
Db2 engine 11.5.9
To learn more about the changes that are introduced in Db2 11.5.9, read What's new in the Db2 documentation.

Upgrade path

The minimum version that is required for upgrading to 1.0.31.0 is 1.0.25.0.

The minimum security patch that is required for upgrading to 1.0.31.0 is:
  • SP23 for 1.0.25.0 to 1.0.28.x releases (RHEL 7)
  • SP4 for the 1.0.30.0 and 1.0.30.0 IF1 releases (RHEL 8)

To upgrade to version 1.0.31.0, see Upgrading to version 1.0.31.0.
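
Before you start, you can check the currently installed release level on the appliance. This is a sketch; it assumes the ap platform CLI is available on the node, and the exact output format can differ between releases:
  ap version    # assumed platform command; reports the installed appliance version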

Limitations

Note:
  • The Integrated Analytics System 1.0.31.0 no longer supports Spark functions.
  • The Integrated Analytics System does not support AFMDR.
  • The FDM tool is deprecated and no longer supported.

Resolved issues

Integrated Analytics System 1.0.31.0 comes with fixes for:
  • Backup and restore for RMT over TSM.
  • Backup and restore -cleanup-failed-restore support.
  • Backup and restore fix to handle a missing DBPARTITIONNUMS parameter.
  • IDAA upgrades.
  • Podman load issues during apinit.
  • Performance issues with Podman commands.
  • aide cron job issue.
  • Handling of tuned and chronyd service start timeouts during upgrade.
  • Preserving CallHome certificates during restart.
  • Component version collection during upgrade.
  • Upgrade process bottlenecks (modified checks, command performance review).
  • Allowing the upgrade to complete if an issue occurs in the post-upgrade phase.
  • Extending aphistory logging with apsetup usage tracking.
  • Certificate status monitoring policy.

Known issues

  1. COISBAR schema level in Integrated Analytics System backups container shows version 270 instead of 300
    The COISBAR schema level inside the Integrated Analytics System backups container shows version 270 instead of 300. Non-COISBAR schema-level backups show version 300.
    Example:
    [bluadmin@x86-db2wh1 - Db2wh test]$ db_backup -history
    Acquiring a unique timestamp and log file...
    Logging to /mnt/bludata0/scratch/bluadmin_BNR/logs/20240625065752/backup20240625065752.log
    TIMESTAMP       START_TIME      END_TIME        OPERATIONTYPE  SCOPE     SCHEMA_NAME       SESSIONS  LOCATION                                 VERSION
    ------------------------------------------------------------------------------------------------------------------------------------------------------
    20240625065725  20240625065725  20240625065743  ONLINE         SCHEMA    CMBS_P_DATAMART   -         /mnt/external/backups/                   300
    20240625065703  20240625065703  20240625065709  ONLINE         SCHEMA    COISBAR           -         /mnt/external/backups/                   270
    20240625065041  20240625065041  20240625065106  ONLINE         DATABASE  NONE              1         /mnt/external/backups/test/backup_onl_1
    20240624154918  20240624154918  20240624154930  ONLINE         SCHEMA    TEST_AUTO_SCHEMA  -         /mnt/external/backups/                   300
    20240624154845  20240624154845  20240624154857  ONLINE         SCHEMA    TEST_AUTO_SCHEMA  -         /mnt/external/backups/                   300
    20240624154813  20240624154813  20240624154824  ONLINE         SCHEMA    TEST_AUTO_SCHEMA  -         /mnt/external/backups/                   300
    20240624154739  20240624154739  20240624154750  ONLINE         SCHEMA    TEST_AUTO_SCHEMA  -         /mnt/external/backups/                   300
    20240624154706  20240624154706  20240624154717  ONLINE         SCHEMA    TEST_AUTO_SCHEMA  -         /mnt/external/backups/                   300
    20240624154633  20240624154633  20240624154644  ONLINE         SCHEMA    TEST_AUTO_SCHEMA  -         /mnt/external/backups/                   300
    20240624154604  20240624154604  20240624154616  ONLINE         SCHEMA    TEST_AUTO_SCHEMA  -         /mnt/external/backups/                   300
    
    Please find the history at this location: /mnt/bludata0/scratch/bluadmin_BNR/logs/backup_history.txt
    
    [bluadmin@x86-db2wh1 - Db2wh test]$
    The version field in db_backup -history displays the backup metadata version. For RMT-enabled schema backups, the version column displays 270. For non-RMT-based schemas, the version field displays 300 as the backup metadata version.
    Note: This applies only to the backup metadata version, not the backup application version. The db_backup -version command shows version 1.0.30.0.
  2. Upgrade might fail when upgrading from 1.0.26.4 or 1.0.25.0 to 1.0.31.0
    When upgrading from 1.0.26.4 or 1.0.25.0 to 1.0.31.0, the upgrade might fail with the following error:
    2025-01-28 04:48:27 TRACE: In method logger.py:log_error:142 from parent method exception_reraiser.py:handle_non_cmd_runner_exception:26 with args
                                   msg = Exception: Traceback (most recent call last):
                                 File "/opt/ibm/appliance/apupgrade/bin/apupgrade", line 1343, in main_dont_die
                                   self.prereqs.ensure_prereqs_met(self.repo, essentials, security, is_database_and_console)
                                 File "/opt/ibm/appliance/apupgrade/bin/../modules/ibm/ca/apupgrade_prereqs/apupgrade_prereqs.py", line 70, in ensure_prereqs_met
                                   self.ensure_upgrade_is_valid(is_essentials, is_security, is_database_and_console)
                                 File "/opt/ibm/appliance/apupgrade/bin/../modules/ibm/ca/apupgrade_prereqs/iias_apupgrade_prereqs.py", line 89, in ensure_upgrade_is_valid
                                   raise Exception(self._get_msg('is_valid_upgrade__err_msg', self.installed_version, self.upgrade_version))
                               Exception: Unable to upgrade from 1.0.25.0-20210409131801b20285 to 1.0.31.0_release.
    Workaround
    1. Run the following command to update the check to cover all versions later than 1.0.24.0 (an optional verification grep is shown after these steps):
      sed -i "s/base_upgrade_version == Version('1.0.25.0')/base_installed_version > Version('1.0.24.0')/" /opt/ibm/appliance/apupgrade/modules/ibm/ca/apupgrade_prereqs/iias_apupgrade_prereqs.py
    2. Restart the upgrade.
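    An optional check, run before restarting the upgrade, to confirm the substitution was applied (no output means the sed pattern did not match):
      grep -n "base_installed_version > Version('1.0.24.0')" /opt/ibm/appliance/apupgrade/modules/ibm/ca/apupgrade_prereqs/iias_apupgrade_prereqs.py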
  3. Enabling Guardium might fail due to authentication issues when both FIPS and SELinux are enabled on the system
    Workaround
    • If your system has both FIPS and SELinux enabled, disable them before enabling Guardium; otherwise, container redeployment might fail due to authentication issues. After Guardium is enabled, you can restore both FIPS and SELinux to their initial states (see the example that follows).
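    To check the current state before making changes, you can use standard RHEL 8 tooling (a sketch; it assumes fips-mode-setup and getenforce are installed, and note that changing FIPS mode requires a reboot):
      fips-mode-setup --check    # reports whether FIPS mode is enabled
      getenforce                 # reports Enforcing, Permissive, or Disabled for SELinux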
  4. The apnetPing tool might fail on 1/3-rack and half-rack systems
    When upgrading to 1.0.31.0, the apnetPing tool might fail on 1/3-rack and half-rack systems with the following error:
    [root@node0101 ~]# apnetPing fab
              ['node0101-fab', 'node0102-fab', 'node0103-fab']
    node0101-fab         OK            OK            OK    Traceback (most recent call last):
      File "/usr/bin/apnetPing", line 279, in <module>
        main()
      File "/usr/bin/apnetPing", line 253, in main
        print("    ", gResultArray[i][j], end=' ')
    IndexError: list index out of range

    Because of this issue, apdiag collect --components network/apnet_ping might also fail.

    Workaround
    1. Change to the /opt/ibm/appliance/platform/comms/tools directory:
      cd /opt/ibm/appliance/platform/comms/tools
    2. Make a backup copy of apnetPing.py (a condensed sketch of these steps follows the list).
    3. Run the following command to fix apnetPing.py:
      sed -i -e 's/ha_domain_cnt > 0/ha_domain_cnt > 1/' apnetPing.py
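    A condensed sketch of the workaround steps (the .bak file name is an arbitrary choice; the final grep is only a verification aid):
      cd /opt/ibm/appliance/platform/comms/tools
      cp -p apnetPing.py apnetPing.py.bak    # backup copy with an arbitrary name
      sed -i -e 's/ha_domain_cnt > 0/ha_domain_cnt > 1/' apnetPing.py
      grep -n 'ha_domain_cnt' apnetPing.py   # the condition should now read > 1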
  5. Upgrade might fail at the dashdb postinstall phase on a multi-rack system
    If your system is not running BLUDR (QRep replication) and includes more than one compute rack (indicating the presence of SPARE nodes), the upgrade to version 1.0.31.0 might fail with the following error:
    1. DashdbUpgrader.postinstall
        Upgrade Detail: Component post install for dashdb
        Caller Info:The call was made from 'DashdbPostinstaller.do_postinstall' on line 39 with file located at '/localrepo/1.0.31.0_release/EXTRACT/upgrade/bundle_upgraders/../dashdb/dashdb_postinstaller.py'
        Message: One or more components status check failed due to ERROR: Error running command [podman exec dashDB status] on ['node0201']
    Workaround
    1. Edit the file /localrepo/1.0.31.0_release/EXTRACT/upgrade/bundle_upgraders/../dashdb/dashdb_postinstaller.py and navigate to line 134. Update the conditional statement from if is_bluedr_enabled: to if is_bluedr_enabled or spare_nodes: (a sed alternative is sketched after these steps).
    2. Run the following command:
      sed -i -e 's,self.verify_bundle,#self.verify_bundle,g' /opt/ibm/appliance/apupgrade/bin/apupgrade
    3. Restart the upgrade process.
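    If you prefer not to edit the file in step 1 manually, the same change can be applied with sed (a sketch; the line address assumes the conditional is still on line 134):
      sed -i '134s/if is_bluedr_enabled:/if is_bluedr_enabled or spare_nodes:/' /localrepo/1.0.31.0_release/EXTRACT/upgrade/bundle_upgraders/../dashdb/dashdb_postinstaller.py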