Integrated Analytics System 1.0.31.0 release notes (May 2025)
Version 1.0.31.0 replaces version 1.0.30.0 IF1. It includes an upgrade to RHEL 8.10 and a set of issue
fixes.
What's new
Integrated Analytics System 1.0.31.0 comes with:
- RHEL 8.10 operating system (kernel version 4.18.0-553.33.1.el8_10.ppc64le)
- GPFS updated to version 5.1.9.9
- Java updated to ibm-java-ppc64le-sdk-8.0-8.30.ppc64le.rpm
- Podman updated to version 4.9.4
Components
- Db2 Warehouse 11.5.9.0-cn2
  - See What's New in Db2 Warehouse.
- Db2 engine 11.5.9
  - To learn more about the changes that are introduced in Db2 11.5.9, read What's new in the Db2 documentation.
Upgrade path
The minimum version that is required for upgrading to 1.0.31.0 is 1.0.25.0.
The minimum security patch that is required for upgrading to 1.0.31.0 is:
- SP23 for 1.0.25.0 to 1.0.28.x releases (RHEL 7)
- SP4 for 1.0.30.0 to 1.0.30.0 IF1 releases (RHEL 8)
To upgrade to version 1.0.31.0, see Upgrading to version 1.0.31.0.
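Before you start, confirm that the installed level meets the minimum. The following is a minimal sketch of such a check in a bash shell; the INSTALLED value is a placeholder that you must fill in from your own system records, since these notes do not name a command for querying it.

    # Minimal pre-upgrade check sketch. INSTALLED is a placeholder;
    # substitute the level that is actually installed on your system.
    INSTALLED="1.0.25.0"
    MINIMUM="1.0.25.0"
    # sort -V orders version strings; if MINIMUM sorts first (or equal),
    # the installed level meets the requirement.
    if [ "$(printf '%s\n' "$MINIMUM" "$INSTALLED" | sort -V | head -n1)" = "$MINIMUM" ]; then
        echo "Installed level $INSTALLED meets the $MINIMUM minimum for 1.0.31.0."
    else
        echo "Upgrade to at least $MINIMUM before moving to 1.0.31.0."
    fi

Remember to also verify the security patch level (SP23 or SP4, depending on your release train) listed above.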
Limitations
Note:
- Integrated Analytics System 1.0.31.0 no longer supports Spark functions.
- The Integrated Analytics System does not support AFM DR.
- The FDM tool is deprecated and no longer supported.
Resolved issues
Integrated Analytics System 1.0.31.0 comes with fixes for:
- Backup and restore for RMT over TSM.
- Backup and restore -cleanup-failed-restore support.
- Backup and restore fix to handle a missing DBPARTITIONNUMS parameter.
- IDAA upgrades.
- Podman load issues during apinit.
- Podman commands performance issues.
- aide issue with cronjob.
- Handling tuned and chronyd services start timeouts during upgrade.
- Preserving CallHome certificates on restart.
- Component version collection during upgrade.
- Upgrade process bottlenecks (checks modification, command performance review).
- Allowing upgrade completion if an issue occurs in the post-upgrade phase.
- Extending aphistory logging with apsetup usage tracking.
- Certificate status monitoring policy.
Known issues
- COISBAR schema level in Integrated Analytics System backups container shows version 270 instead of 300
  The COISBAR schema level inside the Integrated Analytics System backups container shows version 270 instead of 300. Non-COISBAR schema-level backups show version 300.
  Example:

      [bluadmin@x86-db2wh1 - Db2wh test]$ db_backup -history
      Acquiring a unique timestamp and log file...
      Logging to /mnt/bludata0/scratch/bluadmin_BNR/logs/20240625065752/backup20240625065752.log
      TIMESTAMP      START_TIME     END_TIME       OPERATIONTYPE SCOPE    SCHEMA_NAME      SESSIONS LOCATION                                VERSION
      --------------------------------------------------------------------------------------------------------------------------------------------
      20240625065725 20240625065725 20240625065743 ONLINE        SCHEMA   CMBS_P_DATAMART  -        /mnt/external/backups/                  300
      20240625065703 20240625065703 20240625065709 ONLINE        SCHEMA   COISBAR          -        /mnt/external/backups/                  270
      20240625065041 20240625065041 20240625065106 ONLINE        DATABASE NONE             1        /mnt/external/backups/test/backup_onl_1
      20240624154918 20240624154918 20240624154930 ONLINE        SCHEMA   TEST_AUTO_SCHEMA -        /mnt/external/backups/                  300
      20240624154845 20240624154845 20240624154857 ONLINE        SCHEMA   TEST_AUTO_SCHEMA -        /mnt/external/backups/                  300
      20240624154813 20240624154813 20240624154824 ONLINE        SCHEMA   TEST_AUTO_SCHEMA -        /mnt/external/backups/                  300
      20240624154739 20240624154739 20240624154750 ONLINE        SCHEMA   TEST_AUTO_SCHEMA -        /mnt/external/backups/                  300
      20240624154706 20240624154706 20240624154717 ONLINE        SCHEMA   TEST_AUTO_SCHEMA -        /mnt/external/backups/                  300
      20240624154633 20240624154633 20240624154644 ONLINE        SCHEMA   TEST_AUTO_SCHEMA -        /mnt/external/backups/                  300
      20240624154604 20240624154604 20240624154616 ONLINE        SCHEMA   TEST_AUTO_SCHEMA -        /mnt/external/backups/                  300
      Please find the history at this location: /mnt/bludata0/scratch/bluadmin_BNR/logs/backup_history.txt
      [bluadmin@x86-db2wh1 - Db2wh test]$
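  To list only the affected entries, you can filter the same history output; a minimal check from the bluadmin shell shown above:

      # Show only the COISBAR rows so the reported schema level stands out.
      db_backup -history | grep COISBAR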
- Upgrade might fail when upgrading from 1.0.26.4 or 1.0.25.0 to 1.0.31.0
  When upgrading from 1.0.26.4 or 1.0.25.0 to 1.0.31.0, the upgrade might fail with the following error:

      2025-01-28 04:48:27 TRACE: In method logger.py:log_error:142 from parent method exception_reraiser.py:handle_non_cmd_runner_exception:26 with args msg = Exception:
      Traceback (most recent call last):
        File "/opt/ibm/appliance/apupgrade/bin/apupgrade", line 1343, in main_dont_die
          self.prereqs.ensure_prereqs_met(self.repo, essentials, security, is_database_and_console)
        File "/opt/ibm/appliance/apupgrade/bin/../modules/ibm/ca/apupgrade_prereqs/apupgrade_prereqs.py", line 70, in ensure_prereqs_met
          self.ensure_upgrade_is_valid(is_essentials, is_security, is_database_and_console)
        File "/opt/ibm/appliance/apupgrade/bin/../modules/ibm/ca/apupgrade_prereqs/iias_apupgrade_prereqs.py", line 89, in ensure_upgrade_is_valid
          raise Exception(self._get_msg('is_valid_upgrade__err_msg', self.installed_version, self.upgrade_version))
      Exception: Unable to upgrade from 1.0.25.0-20210409131801b20285 to 1.0.31.0_release.
- Enabling Guardium might fail due to authentication issues when both FIPS and SELinux are enabled on the system
  Workaround: If your system has both FIPS and SELinux enabled, disable them before enabling Guardium. Otherwise, container redeployment might fail due to authentication issues. After Guardium is enabled, you can restore both FIPS and SELinux to their initial states.
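  The following is a minimal sketch of that workaround on a RHEL 8 shell with root access; fips-mode-setup, getenforce, and setenforce are standard RHEL utilities. Note that a FIPS change takes effect only after a reboot, and setenforce 0 switches SELinux to permissive mode rather than disabling it outright (a full disable requires editing /etc/selinux/config and rebooting).

      # Check the current state of both settings.
      fips-mode-setup --check    # is FIPS mode enabled?
      getenforce                 # Enforcing / Permissive / Disabled

      # Turn both off before enabling Guardium.
      fips-mode-setup --disable  # takes effect after the next reboot
      setenforce 0               # permissive until the next boot

      # ... enable Guardium ...

      # Restore the initial states afterward.
      fips-mode-setup --enable   # again, effective after a reboot
      setenforce 1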
- The apnetPing tool might fail on 1/3-rack and half-rack systems
  When upgrading to 1.0.31.0, the apnetPing tool might fail on 1/3-rack and half-rack systems with the following error:

      [root@node0101 ~]# apnetPing fab
      ['node0101-fab', 'node0102-fab', 'node0103-fab']
      node0101-fab OK OK OK
      Traceback (most recent call last):
        File "/usr/bin/apnetPing", line 279, in <module>
          main()
        File "/usr/bin/apnetPing", line 253, in main
          print(" ", gResultArray[i][j], end=' ')
      IndexError: list index out of range
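  If the tool crashes partway through, you can still sanity-check fabric reachability by pinging the interfaces it lists directly; a minimal sketch using the hostnames from the output above:

      # Ping each fabric hostname that apnetPing printed before failing.
      for host in node0101-fab node0102-fab node0103-fab; do
          ping -c 3 -W 2 "$host" > /dev/null && echo "$host OK" || echo "$host FAIL"
      done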
- Upgrade might fail at the dashdb postinstall phase on multirack systems
  If your system is not running BLUDR (QRep replication) and includes more than one compute rack (indicating the presence of SPARE nodes), the upgrade to version 1.0.31.0 might fail with the following error:

      1. DashdbUpgrader.postinstall
         Upgrade Detail: Component post install for dashdb
         Caller Info: The call was made from 'DashdbPostinstaller.do_postinstall' on line 39 with file located at '/localrepo/1.0.31.0_release/EXTRACT/upgrade/bundle_upgraders/../dashdb/dashdb_postinstaller.py'
         Message: One or more components status check failed due to
         ERROR: Error running command [podman exec dashDB status] on ['node0201']
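  To see what the failing status check saw, you can run the same command from the error message by hand on the node it names; a minimal sketch (node0201 is taken from the example above):

      # Inspect the dashDB container on the flagged node before retrying.
      ssh node0201 "podman ps --filter name=dashDB"   # is the container running?
      ssh node0201 "podman exec dashDB status"        # the exact check the upgrader ran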