Upgrading to version 1.0.30.0

Follow these instructions to upgrade the Integrated Analytics System to version 1.0.30.0.

Before you begin

  1. Read the Release notes for information on any known issues that might affect your IAS version upgrade.
  2. Run a health check of the system. For more details, see CPDS IIAS System healthcheck tool.
  3. If your system is configured with an external LDAP, you must back up the LDAP configuration before upgrading to version 1.0.30.0. You might need to restore the LDAP configuration after upgrading.
    Do the following procedure to back up the LDAP configuration before upgrading:
    1. Copy the backup script:
      cp /localrepo/<bundle_name>/EXTRACT/bundle/app_img/ldap/ap_ldap_backup_restore.pl /opt/ibm/appliance/platform/ldap/bin/
    2. Create an LDAP configuration backup:
      /opt/ibm/appliance/platform/ldap/bin/ap_ldap_backup_restore.pl backup
    Do the following procedure to restore the LDAP configuration after upgrading:
    1. Restore the LDAP configuration:
      /opt/ibm/appliance/platform/ldap/bin/ap_ldap_backup_restore.pl restore
  4. Before upgrading to 1.0.30.0, make sure to take a full backup of your database.
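    For example, a full online backup might be taken from inside the dashDB container. This is a minimal sketch: the database name bludb and the target path /scratch are assumptions; follow your site's backup procedure:
    podman exec -it dashDB su - db2inst1 -c "db2 backup database bludb online to /scratch compress"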
  5. Before upgrading to 1.0.30.0, make sure to back up all your EMC NetWorker client configuration data. You need to restore the configuration data after upgrading.
  6. Before upgrading to 1.0.30.0, back up the Db2 keystore configuration files shown in the output of the following command:
    [root@sail68-t03-n1 ~]# podman exec -it dashDB  su - db2inst1 -c "db2 get dbm cfg | grep -i keystore"
     SSL server keydb file                   (SSL_SVR_KEYDB) = /mnt/blumeta0/db2/ssl_keystore/bludb_ssl.kdb
     SSL server stash file                   (SSL_SVR_STASH) = /mnt/blumeta0/db2/ssl_keystore/bludb_ssl.sth
     SSL client keydb file                  (SSL_CLNT_KEYDB) = /mnt/blumeta0/db2/ssl_keystore/bludb_ssl.kdb
     SSL client stash file                  (SSL_CLNT_STASH) = /mnt/blumeta0/db2/ssl_keystore/bludb_ssl.sth
     Keystore type                           (KEYSTORE_TYPE) = PKCS12
     Keystore location                   (KEYSTORE_LOCATION) = /head/home/db2inst1/db2/keystore/keystore.p12
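    For example, the listed files might be copied to a location outside the upgrade path. This sketch runs inside the container; the destination directory /mnt/blumeta0/db2_cfg_backup is an assumption, so use any safe location at your site:
    podman exec -it dashDB su - db2inst1 -c "mkdir -p /mnt/blumeta0/db2_cfg_backup && cp /mnt/blumeta0/db2/ssl_keystore/bludb_ssl.kdb /mnt/blumeta0/db2/ssl_keystore/bludb_ssl.sth /head/home/db2inst1/db2/keystore/keystore.p12 /mnt/blumeta0/db2_cfg_backup/"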
  7. Before upgrading to 1.0.30.0, in the configuration file /opt/ibm/appliance/storage/head/dashdb.env, modify the value of DISABLE_SPARK from NO to YES.
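    For example, assuming the file contains the line DISABLE_SPARK=NO, the change can be made with sed:
    sed -i 's/^DISABLE_SPARK=NO$/DISABLE_SPARK=YES/' /opt/ibm/appliance/storage/head/dashdb.env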
  8. Before starting the upgrade, ensure that no third-party tools are configured on the system.
  9. Before starting the upgrade, ensure that no third-party backup and restore utilities, such as TSM, Lintape NFS, or EMC, are configured. If they are configured, you must disable them before upgrading and re-enable them once the upgrade is complete.
  10. Run the following command to make sure that Guardium is not running:
    podman exec -it dashDB /opt/ibm/guardium/guard_stap/guard-config-update --status

    Make sure that the output does not show: STAP : enabled

  11. Before upgrading, run the following command to disable AFM-DR:
    systemctl stop apafmdr
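    You can confirm that the service is stopped with the standard systemd check, which should report inactive once AFM-DR is disabled:
    systemctl is-active apafmdr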
  12. If you have DSX preinstalled and are not using it, IBM recommends that you open a service ticket to uninstall it.
  13. The minimum version necessary to upgrade to 1.0.30.0 is 1.0.25.0. The minimum security patch required to upgrade to 1.0.30.0 is SP24.
  14. If you have any RPMs or modules installed in addition to those provided, remove them before proceeding with the upgrade.
  15. If you are using the BLUDR functionality, make sure to follow the procedures listed below before upgrading to 1.0.30.0:
    • Before ending replication on both source and target systems, make sure that all replication sets are caught up and the LAST CONSISTENCY POINT is current for all replication sets.
    • Back up the files and folders listed below from both the source and target systems. You must restore the BLUDR settings after upgrading (a sample archive command follows at the end of this list):
      • $BLUDR_SHARED_DIR/settings_backup/server.env file
      • $BLUDR_SHARED_DIR/logs/replication
      • $BLUDR_SHARED_DIR/certificates
    • Make sure that the contents of the server.env and asnpwd.aut files from $BLUDR_SHARED_DIR/logs/replication are the same after the upgrade.
      Note: In 1.0.30.0, the server.env file contains additional contents.
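    For example, a single archive of the listed paths might be created on each system before the upgrade. A minimal sketch, assuming BLUDR_SHARED_DIR is set in your shell and /tmp has space for the archive:
    tar -czf /tmp/bludr_settings_backup.tar.gz "$BLUDR_SHARED_DIR/settings_backup/server.env" "$BLUDR_SHARED_DIR/logs/replication" "$BLUDR_SHARED_DIR/certificates"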

About this task

Only the system bundle is upgraded in version 1.0.30.0.

Procedure

  1. Download or copy the Integrated Analytics System bundle to /localrepo.
    Note: The upgrade bundle must be downloaded to the actual /localrepo partition on the node running the upgrade. GPFS is stopped during the upgrade, so if the bundle is downloaded to GPFS storage, the upgrade fails.
  2. Run ap issues and resolve any issues that might impact the upgrade process.
  3. Copy the tar.gz file from the download location to a directory on node0101 whose name matches the version in the file name.
    1. Create the new directory outside the container on node0101:
      mkdir /localrepo/1.0.30.0_release
      In this case, /localrepo defines the upgrade directory for the --upgrade-directory parameter that is used later.

      The directory name cannot start with the release or iias prefix. Use the release number.

    2. Move the upgrade package (tar.gz) file into this new directory:
      mv package.tar.gz /localrepo/1.0.30.0_release
  4. Start a session that continues if your connection is dropped.
    screen -S apupgrade
    If you get disconnected, the upgrade continues. To reconnect, issue the screen -rd apupgrade command.

    After the RHEL upgrade, use tmux, because screen is deprecated in RHEL 8.8.
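    For example, the tmux equivalents of the screen commands above are:
    tmux new-session -s apupgrade
    tmux attach-session -t apupgrade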

  5. Run the following command to upgrade the apupgrade utility:
    apupgrade --upgrade-apupgrade --upgrade-directory /localrepo --use-version 1.0.30.0_release
  6. Perform an upgrade pre-check.
    The pre-check step also runs as part of the upgrade, but it is advisable to run it before you start the actual upgrade so that system health can be verified and any issues fixed beforehand.
    apupgrade --preliminary-check --upgrade-directory /localrepo --use-version 1.0.30.0_release
  7. Run apupgrade --upgrade --upgrade-directory <upgrade_directory> --use-version 1.0.30.0_release
    Note:
    • Do not use the --force-restart option as the upgrade process might fail with errors.
    • If upgrading from a version earlier than v1.0.3.0, you must run this command twice.
    Example:
    apupgrade --upgrade --upgrade-directory /localrepo --use-version 1.0.30.0_release
  8. Follow the status of the upgrade by reviewing the following logs:
    /var/log/appliance/apupgrade/apupgrade.log
    /var/log/appliance/apupgrade/apupgrade.log.tracelog
    /var/log/appliance/apupgrade/apupgrade.1.0.x.x_release.status
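    For example, to follow the main log in real time:
    tail -f /var/log/appliance/apupgrade/apupgrade.log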
  9. Optional: Ensure that the Platform Manager and the system are up and running after all upgrades are complete.
    1. The command ap state should return Ready.
    2. The commands ap issues and ap issues -sw should be clear of problems. If not, fix any issues and run the apstart command.
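    For example, the verification sequence using the commands named above is:
    ap state
    ap issues
    ap issues -sw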

Results

In 1.0.30.0, the output of the ap version -s command reflects the web console version as follows:
[root@sail62-t14-n1 ~]# ap version -s
Appliance software version is 1.0.30.0

All component versions are synchronized.

+-----------------------------+-----------------------------------------------------------------+
| Component Name              | Version                                                         |
+-----------------------------+-----------------------------------------------------------------+
| Appliance platform software | 1.0.30.0-20240820103140b118                                     |
| apcomms                     | 2.4.33.0-20240322192202b14735                                   |
| appkg_install               | 1.0.27.0-20240705065954b226                                     |
| apupgrade                   | 1.0.29.1-20240705065948b226                                     |
| callhome                    | 1.2.0.0-20240426041622b4                                        |
| containerapi                | 1.0.30.0-20240405004057b15199                                   |
| dashdb                      | 11.5.9.0_cn1-20240819-2314                                      |
| gpfs                        | 5.1.2.0-13                                                      |
| gpfsconfig                  | 1.0.30.0-20240624130220b16841                                   |
| jwtservice                  | 1.0.12.0-20240202153326b1                                       |
| magneto                     | 1.0.30.0-20240701175425b63                                      |
| nodeos                      | 1.0.30.0-20240624124222b16841                                   |
| pfscfg                      | 2.0.2.15-20220928192456b1                                       |
| platformbackups             | 1.0.20.0-20240202123121b13276                                   |
| platformservices            | pfs-software                        : 1.1.30.0-20240314145510b6 |
|                             | pfs-ppc64le-image                   : 2.0.2.17-20240203094058b3 |
|                             | pfs-ppc64le-standalone-utils        : 2.0.2.17-20240203094058b3 |
| platformservicesfirmware    | pfs-cumulus-fabsw-firmware          : 1.3.1.0-b1                |
|                             | pfs-raritan-ts-firmware             : 1.4.0.0-b1                |
|                             | pfs-ibm-fsn-firmware                : 1.11.1.0-b1               |
|                             | pfs-delta-rpc-firmware              : 1.2.1.0-b2                |
|                             | pfs-ibm-fcsw-firmware               : 1.3.0.0-b1                |
|                             | pfs-cumulus-mgtsw-firmware          : 1.2.1.0-b1                |
|                             | pfs-ibm-node-firmware               : 1.8.1.0-b3                |
|                             | pfs-ibm-dsn-firmware                : 1.7.0.0-b1                |
| psklm                       | 1.0.23.0-20240202153229b1                                       |
| supporttools                | 1.0.29.0-20240202153204b13268                                   |
| svsutils                    | 2.0.2.14-20220928192614b1                                       |
| web_console                 | 1.0.30.0-202408210036                                           |
+-----------------------------+-----------------------------------------------------------------+