Upgrading to PowerVC 2.1.0

You can upgrade directly to PowerVC version 2.1.0 through the backup and restore procedure. If PowerVC version 2.0.2 or later is installed, take a backup, copy the backup file to a custom location, and then uninstall the existing version. Reboot the system, install PowerVC 2.1.0, and then perform the restore. Alternatively, you can restore the backup on a different system that has PowerVC 2.1.0 installed.

  • While upgrading a legacy install to 2.1.0 by using backup and restore, active backup is supported only for specific PowerVC versions. You can perform an active backup by using the powervc-backup --active command.
  • A legacy install of PowerVC does not support the add node, replace node, resync node, compute plane node registration, and monitoring features.

PowerVC 2.1.0 supports only NovaLink 2.1.0. Before upgrading to PowerVC 2.1.0, we recommend that you upgrade NovaLink to version 2.1.0.

PowerVC provides three upgrade methods: backup and restore, offline upgrade, and rolling upgrade.

With the backup and restore method, you can upgrade PowerVC either on the same system where the earlier version is installed or on a different system. This topic details the procedure for the backup and restore method.

For offline upgrade, you can upgrade all nodes simultaneously in a multinode environment even when services are offline. This method is supported only when you upgrade from PowerVC version 2.0.3 to 2.1.0. For details, see PowerVC offline upgrade.

If you upgrade through the rolling upgrade method, you can upgrade PowerVC only on the same system. This method is supported only when you upgrade from PowerVC version 2.0.3 to 2.1.0.

  • After upgrading to PowerVC 2.1.0, it is recommended that you visit Fix Central to download and install any available fix packs. For more information, see the Getting fixes from Fix Central topic.
  • To maintain PowerVC cluster consistency, restart the controller nodes one at a time. Allow 30 minutes before you restart the next node.
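The staggered-restart rule above can be sketched as a small helper. This is a hypothetical illustration, not a PowerVC tool: the node names and the ssh-based reboot are assumptions, and with DRY_RUN=1 (the default here) it only prints what it would do.

```shell
# Restart controller nodes one at a time, allowing 30 minutes between
# nodes to keep the cluster consistent. Hypothetical sketch only.
staggered_restart() {
  for n in "$@"; do
    if [ "${DRY_RUN:-1}" = "1" ]; then
      echo "would reboot $n, then wait 30 minutes"
    else
      ssh "root@$n" reboot
      sleep 1800   # allow a full 30 minutes before the next node
    fi
  done
}

# Dry-run example over three placeholder node names:
staggered_restart controller1 controller2 controller3
```

Set DRY_RUN=0 only after confirming the node list and that your environment tolerates the sequential reboots.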
Upgrade considerations
Consider the following before you start the installation or upgrade procedure.
  • Make sure that the managed hosts are on IBM POWER8® or later.
  • Review the hardware and software requirements.
  • For RHEL and SLES, PowerVC 2.1.0 installation is supported on both single node and multinode environments.
  • Before you install PowerVC 2.1.0 on RHEL 8.6 EUS, RHEL 8.7, RHEL 9.0 EUS, or RHEL 9.1, make sure you install the OpsMgr iFix IT43104_IT43106.
  • Before performing any operation, make sure that the VM hostname is the same as the hostname that you used to create the inventory.
  • Before upgrading to PowerVC version 2.1.0, migrate any hosts running on a Compute plane node to the PowerVC controller by using /opt/ibm/powervc/bin/powervc-manage -o migratehmchost --hmchostname <MTMS HOST>. After the upgrade process is complete, you can migrate the hosts back to the Compute plane node by using the same command.
  • Support for IBM XIV® Storage System and EMC VNX has been withdrawn.
  • If the backup cluster has root user credentials and the restore cluster has sudo user credentials, the restore cluster has login enabled for both root and sudo users after the restore operation is complete.
  • If the backup is taken by using root user credentials and the restore is performed by using sudo credentials, the Compute plane node must be removed and added again later.

Taking backup of PowerVC

For PowerVC versions 2.0.2.x and 2.0.3, complete these steps. This procedure is supported only for an HA or clustered environment.
  1. Ensure that the user has sufficient sudo privileges to run this command and read access to the files.
  2. Run the powervc-opsmgr backup command with any needed options, and upgrade NovaLink to 2.1.0.
    The supported upgrade combination is as follows: a backup of PowerVC 2.0.2.x or 2.0.3 can be restored on PowerVC version 2.1.0 running on RHEL or SLES.
When the backup operation completes, a new file, powervc_backup.tar.gz, is placed in a new time stamp subdirectory of the target directory. For example, a potential file path is /var/opt/ibm/powervc/backups/<timestamp>/powervc_backup.tar.gz. You can leave the archive there, or move or copy it to another directory as needed.
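Because each backup lands in its own time stamp subdirectory, a small helper can locate the newest archive. This is a hypothetical sketch (the helper name is an assumption); only the <root>/<timestamp>/powervc_backup.tar.gz layout comes from the procedure above.

```shell
# Print the path of the newest powervc_backup.tar.gz under a backups
# directory laid out as <root>/<timestamp>/powervc_backup.tar.gz.
latest_backup() {
  root="$1"
  # Time stamp directory names sort lexically, so the last one is newest.
  dir=$(ls -1d "$root"/*/ 2>/dev/null | sort | tail -1)
  [ -n "$dir" ] && echo "${dir}powervc_backup.tar.gz"
}

# Example against the default location from the docs:
latest_backup /var/opt/ibm/powervc/backups || echo "no backup directories found"
```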
Note: If an error occurs while running the powervc-opsmgr backup command, check the powervc-opsmgr backup logs in /opt/ibm/powervc/log for errors.

You can also refer to Backing up IBM Power Virtualization Center data for more information on the backup procedure.

Restoring to PowerVC version 2.1.0

Review the operating system requirements in the hardware and software requirements topic. If the restore fails because of a network disconnection or any other reason, you can rerun powervc-opsmgr restore -c <CLUSTER_NAME> to resume the restore operation.

If PowerVC version 2.1.0 uses the local OS registry (and not an LDAP server) as the backend for authentication, ensure that all local OS users that exist (for authentication) on the PowerVC version 2.0.2.x or later system are also created on the PowerVC version 2.1.0 system before the restore is performed. This prevents user login issues after the restore.

In PowerVC, two options are available for upgrade in a clustered or HA environment.
  • PowerVC restore on a single node. You can then move to a three-node cluster by using the powervc-opsmgr addnodes command.
  • PowerVC restore on a single node or multinode (three nodes). To restore on a single node or multinode, run the powervc-opsmgr restore command. Later, you can add two more nodes to make it a five-node cluster by running the powervc-opsmgr addnodes command.
    Note: When you restore in a multinode environment, make sure that you perform the restore on the primary node.

Perform the following steps to restore a backup from PowerVC version 2.0.2.x or later (HA environment or legacy install) to PowerVC version 2.1.0. Before starting the restore steps, ensure that the target system is properly prepared and that all PowerVC services are in the running state.

  1. Copy the backup file to the system where PowerVC version 2.1.0 is installed. Perform this step only if you are upgrading PowerVC on a different system.
  2. Run the powervc-opsmgr restore command to restore PowerVC from the backup. By default, the restore command looks for the backup archive in the /var/opt/ibm/powervc/backups directory.
    # powervc-opsmgr restore -c <clustername> -b <backup archive file path>

    Additional PowerVC nodes can be added to the cluster to make it a multinode environment. For details, see PowerVC Operations Manager.

    • After a successful restore, if another restore is triggered on the same system, make sure that /var/opt/ibm/powervc/backups/<powervc_release>/ is removed or moved to a different location to avoid conflicts.
    • When the backup and restore operation is performed on a fresh system, the PowerVC services running on the NovaLink host are upgraded to the latest version.
    • After the restore, if hosts are in the Unknown state, make sure that the /etc/nova/nova-<hostname>.conf file ownership and permissions are correct by using these commands.
      chown nova:nova /etc/nova/nova-*.conf
      chmod 660 /etc/nova/nova-*.conf
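Before starting a restore, it can help to confirm that the backup archive is intact. This is a hypothetical pre-check, not part of powervc-opsmgr; the default path is the one named in the backup procedure.

```shell
# Confirm the backup archive exists and is a readable gzip tarball, so a
# missing or corrupt copy is caught before the restore begins.
check_archive() {
  if tar -tzf "$1" >/dev/null 2>&1; then
    echo "archive OK: $1"
  else
    echo "archive missing or corrupt: $1" >&2
    return 1
  fi
}

# Example (default path from the docs; prints an error if absent):
check_archive /var/opt/ibm/powervc/backups/powervc_backup.tar.gz || true
```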

Update PowerVC on the NovaLink host

As part of PowerVC upgrade, the PowerVC components running on NovaLink are automatically updated.

If the NovaLink host is down during the upgrade or restore procedure, the admin user can log in to the PowerVC UI and click Finish Update NovaLink after the host is back up and running. This action updates only the PowerVC components running on NovaLink; it does not update the NovaLink version.

  • If you plan to reinstall the OS on NovaLink, do so before the PowerVC upgrade. If NovaLink is brought down to reinstall the OS after a successful upgrade and restore, PowerVC cannot reconnect automatically. In such a case, contact the support team.
  • For RHEL 9.0, make sure that the CRB repository is enabled.

NovaLink update procedure

Note: Use this update procedure only if you are upgrading the OS on the NovaLink host to RHEL 9.0.
After the PowerVC restore or upgrade to version 2.1.0 is complete, perform the following steps.
  • Move hosts into maintenance mode in PowerVC 2.1.0.
  • On the NovaLink host, reinstall the operating system with RHEL 9.0.

    If you changed the NovaLink credentials during the OS reinstallation, or if the host was registered to PowerVC through SSH key-based authentication before the reinstallation, you must update the username and password in the Edit Host Connection section of the Host details page.

  • Make sure NovaLink 2.1.0 is properly installed.
  • Take the hosts out of maintenance mode.
  • From the primary or bootstrap node, set the backup file permission by running chmod 644 /var/opt/ibm/powervc/backups/powervc_backup.tar.gz.
  • Log in to the PowerVC 2.1.0 GUI, update NovaLink, and wait until the upgrade completes. You can choose one of the following methods to update NovaLink.
    • On the Overview page, click Update NovaLink in the notification to finish updating NovaLink.
    • Navigate to the Hosts page and click Finish Update NovaLink.

Monitoring scenario

With PowerVC version 2.1.0, the monitoring components have changed. As such, upgrading monitoring from previous versions to 2.1.0 is not a trivial process. Use the new CLI command powervc-opsmgr monitoring -c <cluster_name> --upgrade to perform this process.

This command performs the following tasks, in sequence.
  • Log data from the database is backed up; the configuration of new filters or dashboards is NOT backed up, so manually save those configurations (from Filebeat, Logstash, and Kibana) before the update process.
  • All monitoring services are stopped.
  • Deprecated services (Elasticsearch and Kibana) are uninstalled.
  • New services (OpenSearch and OpenSearch Dashboards) are installed.
  • Non-deprecated services (Logstash and Filebeat) are upgraded and reconfigured.
  • New services (OpenSearch and OpenSearch Dashboards) are configured.
At this point, a data restore is NOT performed automatically by default. Although a data restore from Elasticsearch to OpenSearch is possible in theory, the previous data would be stored in different indices and mixed with the new indices that OpenSearch creates to store data from this point on. For this reason, the restore is not done automatically and is not recommended.
To view your old Elasticsearch data, there are two options.
  • Transfer the backed-up data from the upgraded cluster to an old cluster that you have not upgraded yet (and that therefore still runs Elasticsearch), and restore it there by using the --restore CLI command.
  • (Unsupported, for advanced users only) Before the upgrade, edit the /opt/ibm/powervc-opsmgr/ansible/monitoring/plays/upgrade-vars.yaml file and change the value of the skip_restore variable from yes to no. With this change, when the --upgrade command runs, the last step restores the Elasticsearch backed-up data into OpenSearch. As previously stated, the data is kept in indices with the same names that Elasticsearch used. OpenSearch adopts a different index naming suffix, so old data is not mixed with new data. Be aware that this restored data is NOT curated or pruned; you must do that manually by using OpenSearch API commands to remove the indices so the data is purged from the database. You can do this through curl commands or a REST API client. Refer to the OpenSearch documentation for details on how to perform these steps.
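Pruning an old restored index through the OpenSearch REST API can be sketched as below. This is a hypothetical illustration: the host, port, and index name are assumptions for your cluster, and with DRY_RUN=1 (the default here) it only prints the curl command instead of running it.

```shell
# Issue an HTTP DELETE for one index against the OpenSearch REST API.
# Hypothetical sketch; verify the index name before deleting anything.
delete_index() {
  host="$1"; index="$2"
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "would run: curl -k -X DELETE https://$host/$index"
  else
    curl -k -X DELETE "https://$host/$index"
  fi
}

# Example with a placeholder index name:
delete_index localhost:9200 old-elasticsearch-index-2022.01.01
```

Deleting an index is irreversible, so keep the dry run until you have confirmed the exact index names to remove.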

In addition, if you previously had customized configuration for Filebeat, Logstash, or Elasticsearch, or custom visualizations and dashboards for Kibana, these are not restored during an upgrade.

As previously mentioned, porting this custom configuration is not supported and must be done manually. Remember to back up the configuration files for these services before running the --upgrade command, and then port them manually after the upgrade. This upgrade process should be needed only once, when upgrading to PowerVC 2.1.0; it is not expected to be needed in future releases.
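The manual configuration backup can be sketched as follows. This is a hypothetical helper; the /etc/filebeat, /etc/logstash, and /etc/kibana paths are typical defaults and are assumptions for your installation, so adjust them as needed.

```shell
# Copy monitoring service configuration directories aside before running
# the --upgrade command. Directories that do not exist are skipped.
save_monitoring_config() {
  dest="$1"; shift
  mkdir -p "$dest"
  for d in "$@"; do
    [ -d "$d" ] && cp -a "$d" "$dest/"
  done
  echo "saved copies under $dest"
}

# Example with the usual service config directories (assumed paths):
save_monitoring_config "$HOME/monitoring-config-backup" \
  /etc/filebeat /etc/logstash /etc/kibana
```

After the upgrade, port the saved files manually into the new service configuration, as the surrounding text describes.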