Migrating data to 5.1.1.10 or 5.1.1.11

Data migration to version 5.1.1.10 or 5.1.1.11 is supported only if you are upgrading the Z Data Analytics Platform from fix pack 5.1.0.9. You can migrate only the user ID information that was previously maintained in the Keycloak zdap security realm and the operational data that is stored in the Z Data Analytics Platform long-term datastore volume. Short-term storage data (such as data that is stored in the Apache Kafka message broker) and event data (such as data that is stored in the Problem Insights server’s internal datastore) are not migrated.

Important: If you plan to migrate existing installations of Z Anomaly Analytics from two separate systems to a single system, the following method is the only one that supports full migration of long-term storage data with a single installation of the fix pack 5.1.1.10 or 5.1.1.11 image:
  1. Upgrade the Z Data Analytics Platform of IBM Z® Operational Log and Data Analytics and, at the same time, install the Z Anomaly Analytics components on the existing Z Data Analytics Platform system.
  2. Export long-term storage data from the existing Z Anomaly Analytics system, and import it into the combined fix pack 5.1.1.10 or 5.1.1.11 installation.
  3. After the upgrade and data migration are successfully completed, uninstall the old Z Anomaly Analytics installation from its system.

Other scenarios for combining deployments from two separate systems onto one system are possible, but they are significantly more complex. If you need to consolidate onto the existing Z Anomaly Analytics system or onto a new system, contact IBM® Support for guidance.

Before you begin

Before you begin the migration procedure, complete the following steps to stop data ingestion and drain any remaining data:

  1. Stop any instances of the Z Common Data Provider Data Streamer or Data Collector that are sending data to the Z Data Analytics Platform.
  2. Allow the Kafka consumer groups in the product to process all data that remains in the Apache Kafka topics. A scripted version of this check is sketched after this list.
    Identify the consumer groups by running one of the following commands:
    • Docker
      <ZDAP_HOME>/bin/dockerManageZdap.sh kafka-consumer-groups --list
    • Podman
      <ZDAP_HOME>/bin/podmanManageZdap.sh kafka-consumer-groups --list
    Determine the status of each consumer group by running one of the following commands. A group has processed all of its data when the LAG column reports 0 for every partition:
    • Docker
      <ZDAP_HOME>/bin/dockerManageZdap.sh kafka-consumer-groups --describe --group <consumer_group>
    • Podman
      <ZDAP_HOME>/bin/podmanManageZdap.sh kafka-consumer-groups --describe --group <consumer_group>
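
The lag check can be scripted. The following is a minimal sketch for Docker (substitute podmanManageZdap.sh for Podman); it assumes that the wrapper script passes the standard kafka-consumer-groups output through unchanged, so group names are listed one per line and the --describe output contains a LAG column. Adjust the parsing if your output differs.

  #!/bin/sh
  # Minimal sketch: report consumer groups that still have unprocessed data.
  # ZDAP_HOME is assumed to be set to your <ZDAP_HOME> installation directory.
  cd "$ZDAP_HOME" || exit 1

  for group in $(./bin/dockerManageZdap.sh kafka-consumer-groups --list); do
    echo "Checking consumer group: $group"
    ./bin/dockerManageZdap.sh kafka-consumer-groups --describe --group "$group" |
      awk '$0 ~ /LAG/ { for (i = 1; i <= NF; i++) if ($i == "LAG") col = i; next }
           col && $col ~ /^[0-9]+$/ && $col > 0 { lagging = 1; print }
           END { exit lagging }' ||
      echo "  Consumer group $group still has unprocessed data."
  done

Rerun the check until no group is reported as having unprocessed data before you continue.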

During the upgrade, user ID data is automatically migrated if 5.1.1.10 or 5.1.1.11 is installed on the same system as the original Z Data Analytics Platform installation. Review the migrated user IDs and reconcile any potential duplicates in the IzoaKeycloak realm.
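
If you prefer to review the migrated user IDs from the command line rather than the Keycloak administration console, the following sketch shows one way to list them with the Keycloak admin CLI (kcadm.sh). The server URL, port, admin credentials, and the availability of kcadm.sh in your environment are assumptions; adjust them to match your deployment.

  # Log in to the Keycloak admin CLI (you are prompted for the password),
  # then list the users in the IzoaKeycloak realm.
  kcadm.sh config credentials --server https://<keycloak_host>:8443 \
    --realm master --user <admin_user>
  kcadm.sh get users -r IzoaKeycloak --fields username,email,firstName,lastName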

Procedure

After the deployment and configuration of the Z Data Analytics Platform fix pack 5.1.1.10 or 5.1.1.11 is complete, perform the following steps to migrate operational data:
  1. Change into the Z Data Analytics Platform installation directory (<ZOA_HOME>).
  2. Make sure that all services are shut down by running one of the following commands. No running services should be reported. A scripted version of steps 2 and 3 is sketched after this procedure.
    • Docker
      ./bin/dockerManageZoa.sh ps
    • Podman
      ./bin/podmanManageZoa.sh ps
  3. Run one of the following commands to start data migration:
    • Docker
      ./bin/dockerManageZoa.sh move-data
    • Podman
      ./bin/podmanManageZoa.sh move-data
  4. When prompted, specify the following volumes:
    • To move data from: ibmzlda_zdap_datastore_data
    • To move data to: ibmzaiops_zaiops_datastore
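
Steps 2 and 3 can be combined into a short script, as referenced in step 2. The following is a minimal sketch for Docker (substitute podmanManageZoa.sh for Podman); it assumes that the ps subcommand prints a header line followed by one line per running service, and that move-data prompts interactively for the source and target volumes.

  #!/bin/sh
  # Minimal sketch: refuse to start the migration while services are running.
  # ZOA_HOME is assumed to be set to your <ZOA_HOME> installation directory.
  cd "$ZOA_HOME" || exit 1

  running=$(./bin/dockerManageZoa.sh ps | tail -n +2)
  if [ -n "$running" ]; then
    echo "Shut down the following services before migrating:" >&2
    echo "$running" >&2
    exit 1
  fi

  # When prompted, move data from ibmzlda_zdap_datastore_data
  # to ibmzaiops_zaiops_datastore.
  ./bin/dockerManageZoa.sh move-data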
Note: Data volumes from prior installations are intentionally not removed by the migration process. After data migration is complete and you have validated that all wanted data was moved successfully, you can remove the following volumes:
  • ibmzlda_zdap_dashboards_data
  • ibmzlda_zdap_datastore_data
  • ibmzlda_zdap_kafkabroker
  • ibmzlda_zdap_keycloak
  • ibmzlda_zdap_parser_data
  • ibmzlda_zdap_zookeeper
To do so, run one of the following commands for each volume. A loop that removes all of the listed volumes is sketched after the commands:
  • Docker
    docker volume rm <volume_name>
  • Podman
    podman volume rm <volume_name>
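
As referenced above, the individual removals can be wrapped in a loop. The following sketch uses Docker; replace docker with podman on a Podman system. Run it only after you have validated the migrated data.

  # Remove the data volumes that remain from the prior installation.
  for volume in \
    ibmzlda_zdap_dashboards_data \
    ibmzlda_zdap_datastore_data \
    ibmzlda_zdap_kafkabroker \
    ibmzlda_zdap_keycloak \
    ibmzlda_zdap_parser_data \
    ibmzlda_zdap_zookeeper
  do
    docker volume rm "$volume"
  done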