Upgrading to version

The upgrade to this version is performed by IBM Support.

Before you begin

Before you start the upgrade, you must run the following command from the /opt/ibm/appliance/platform/apos-comms/customer_network_config/ansible directory:
ANSIBLE_HASH_BEHAVIOUR=merge ansible-playbook -i ./System_Name.yml playbooks/house_config.yml --check -v
If the --check -v output lists any changes, ensure that they are expected. If any are unexpected, edit the YAML file so that it contains only the expected changes. You can rerun this command as necessary until you see no errors.
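The check above can be scripted: capture the playbook output and read the changed count from the PLAY RECAP line for each host. This is a sketch only; the `recap_changed` helper is illustrative and not part of the appliance tooling or Ansible.

```shell
#!/bin/sh
# Sketch: extract the changed=N count from ansible PLAY RECAP lines, e.g.
#   e1n1 : ok=12 changed=3 unreachable=0 failed=0
# recap_changed is an illustrative helper, not an IBM or Ansible command.
recap_changed() {
    grep -o 'changed=[0-9]*' | cut -d= -f2
}

# Example recap line with three pending changes:
printf 'e1n1 : ok=12 changed=3 unreachable=0 failed=0\n' | recap_changed   # → 3
```

In practice you would pipe the real run through tee, for example `ANSIBLE_HASH_BEHAVIOUR=merge ansible-playbook ... --check -v | tee check.log`, and confirm that every nonzero changed count is expected before upgrading.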

About this task

Upgrade to is supported only for systems on and above.

Only the system bundle is upgraded; there is no need to download the following packages:


  1. Connect to node e1n1 through the management address, not the application address or floating address.
  2. Verify that e1n1 is the hub:
    1. Check for the hub node by verifying that the dhcpd service is running:
      systemctl is-active dhcpd
    2. If the dhcpd service is running on a node other than e1n1, bring the service down on that other node:
      systemctl stop dhcpd
    3. On e1n1, run:
      systemctl start dhcpd
  3. Download the system bundle from Fix Central and copy it to /localrepo on e1n1.
    Note: The upgrade bundle requires a significant amount of free space. Make sure you delete all bundle files from previous releases.
  4. From the /localrepo directory on e1n1, run:
    and move the system bundle into that directory.
    The directory that is used here must be uniquely named; that is, no previous upgrade on the system can have been run from a directory with the same name.
  5. Optional: Run apupgrade with the --upgrade-details option to view details about the specific upgrade version:
    apupgrade --upgrade-details --upgrade-directory /localrepo --use-version --bundle system
  6. Before you start the upgrade process, depending on your requirements:
    • Run the preliminary checks with the --preliminary-check option:
      apupgrade --preliminary-check --upgrade-directory /localrepo --use-version --bundle system
      Use this option if you want only to check for potential issues and cannot accept any system disruption. This check is non-invasive and you can rerun it as necessary. You can expect the following output after you run the preliminary checks:
      All preliminary checks complete
      Finished running pre-checks.
    • Optional: Run the preliminary checks with the --preliminary-check-with-fixes option:
      apupgrade --preliminary-check-with-fixes --upgrade-directory /localrepo --use-version --bundle system
      Use this option if you want to check for potential issues and attempt to fix them automatically. Run it only if you can accept disruption to your system, because this command might cause the nodes to reboot.
  7. Optional: Upgrade the apupgrade command to get the new command options:
    apupgrade --upgrade-apupgrade --upgrade-directory /localrepo --use-version --bundle system

    The value for the --use-version parameter is the same as the name of the directory you created in step 4.

  8. Open a support ticket to have the system upgraded by IBM Support.
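The hub verification in step 2 can be sketched as a helper that, given node name and dhcpd state pairs, decides which systemctl commands are needed so that dhcpd runs on e1n1 only. The `dhcpd_actions` function and its input format are illustrative assumptions; on a live system the states would come from `ssh $node systemctl is-active dhcpd`.

```shell
#!/bin/sh
# Sketch: read "node state" lines on stdin, where state is the output of
# `systemctl is-active dhcpd` on that node, and print the commands needed
# to make e1n1 the only node running dhcpd. dhcpd_actions is illustrative.
dhcpd_actions() {
    while read -r node state; do
        if [ "$node" = "e1n1" ] && [ "$state" != "active" ]; then
            echo "ssh e1n1 systemctl start dhcpd"
        elif [ "$node" != "e1n1" ] && [ "$state" = "active" ]; then
            echo "ssh $node systemctl stop dhcpd"
        fi
    done
}

# Example: dhcpd is running on e1n2 instead of e1n1.
printf 'e1n1 inactive\ne1n2 active\ne1n3 inactive\n' | dhcpd_actions
# → ssh e1n1 systemctl start dhcpd
# → ssh e1n2 systemctl stop dhcpd
```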


After the upgrade is complete, some of the following alerts might be opened in the system:
| 439         | SW_NEEDS_ATTENTION         | SW    | Openshift node is not ready                                   | YES      |
| 440         | SW_NEEDS_ATTENTION         | SW    | Openshift service is not ready                                | YES      |
| 446         | SW_NEEDS_ATTENTION         | SW    | ICP4D service is not ready                                    | YES      |
| 451         | SW_NEEDS_ATTENTION         | SW    | Webconsole service is not ready                               | YES      |
| 460         | SW_NEEDS_ATTENTION         | SW    | Portworx component is not healthy                             |          |
Close them manually with ap issues --close <alert_id>.
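The manual cleanup can be scripted by pulling the alert IDs out of the pipe-separated alert table and emitting one close command per ID. The `close_cmds` filter is a sketch that assumes the table layout shown above and the single-ID form of `ap issues --close`.

```shell
#!/bin/sh
# Sketch: turn SW_NEEDS_ATTENTION rows of the alert table into
# `ap issues --close <id>` commands. close_cmds is illustrative only.
close_cmds() {
    awk -F'|' '/SW_NEEDS_ATTENTION/ { gsub(/ /, "", $2); print "ap issues --close " $2 }'
}

printf '| 439 | SW_NEEDS_ATTENTION | SW | Openshift node is not ready | YES |\n' \
    | close_cmds   # → ap issues --close 439
```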
As part of the upgrade process, VMs are disabled on all nodes and shut down. They are expected to stay in the shut off state:
[root@gt01-node1 ~]# for node in `/opt/ibm/appliance/platform/xcat/scripts/xcat/display_nodes.py`; do echo ${node}; ssh $node virsh list --all; done
 Id    Name                           State
 -     e1n1-1-control                 shut off

 Id    Name                           State
 -     e1n2-1-control                 shut off

 Id    Name                           State
 -     e1n3-1-control                 shut off

 Id    Name                           State
 -     e1n4-1-worker                  shut off
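The virsh output above can be verified mechanically: after the upgrade, any VM row whose state is not shut off is unexpected. The `running_vms` filter below is a sketch over the output format shown, handling one node's `virsh list --all` output at a time; on a live system the input would come from the ssh loop above.

```shell
#!/bin/sh
# Sketch: print names of VMs that are NOT "shut off", given one node's
# `virsh list --all` output on stdin. running_vms is illustrative only.
running_vms() {
    awk 'NR > 1 && NF >= 3 {
        state = $3
        for (i = 4; i <= NF; i++) state = state " " $i
        if (state != "shut off") print $2
    }'
}

printf ' Id Name State\n -  e1n1-1-control  shut off\n 2  e1n4-1-worker  running\n' \
    | running_vms   # → e1n4-1-worker
```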

The Netezza web console runs on one of the three control nodes (or on a connector node, if one is installed). Two docker containers are required for operation of the Netezza console: cyclops and the associated influxdb container. Container images are installed on all control nodes for high availability. When a control node goes out of service, Platform Manager starts the cyclops and influxdb containers on another control node (or on a connector node).
[root@gt01-node1 ~]# for node in `/opt/ibm/appliance/platform/xcat/scripts/xcat/display_nodes.py --control`; do echo ${node}; ssh $node docker ps -a | grep -E 'cyclops|influxdb'; done
0d5c48c43d1d        cyclops:4.0.2-20220131b16309-x86_64   "/scripts/start.sh"      12 hours ago        Created                              cyclops
0166a1a80a35        influxdb:latest                       "/entrypoint.sh in..."   12 hours ago        Up 6 hours           >8086/tcp   influxdb
ede85f799cd5        cyclops:4.0.2-20220131b16309-x86_64   "/scripts/start.sh"      12 hours ago        Exited (137) 12 hours ago                       cyclops
0ade5a9fedb8        influxdb:latest                       "/entrypoint.sh in..."   12 hours ago        Exited (0) 12 hours ago                         influxdb
a9e6d1d07a95        cyclops:4.0.2-20220131b16309-x86_64   "/scripts/start.sh"      12 hours ago        Exited (137) 12 hours ago                       cyclops
d69a463388e5        influxdb:latest                       "/entrypoint.sh in..."   12 hours ago        Exited (0) 12 hours ago                         influxdb
[root@gt01-node1 ~]# 
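To see which containers currently host the active console, the `docker ps -a` output can be filtered to the web-console containers whose STATUS column starts with Up. The `console_up` helper is an illustrative filter over the format shown above, not a product command.

```shell
#!/bin/sh
# Sketch: keep only cyclops/influxdb containers that are currently
# running ("Up ..." status). console_up is illustrative only.
console_up() {
    grep -E 'cyclops|influxdb' | grep -E ' Up [0-9]'
}

# Prints only the running influxdb line, not Created/Exited containers:
printf 'abc1  influxdb:latest  "/entrypoint.sh"  12 hours ago  Up 6 hours  8086/tcp  influxdb\n' \
    | console_up
```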
The cyclops entry in the ap version -s output reflects the web console version:
[root@gt26-node1 ~]# ap version -s
Appliance software version is

All component versions are synchronized.

| Component Name              | Version                                                            |
| Appliance platform software |                                       |
| cyclops                     | cyclops:4.0.2-20220131b16309                                       |