Redeploying IBM Db2 Warehouse on Linux

If you already set up Db2® Warehouse on a Linux® system but need to redeploy it, you must first take steps to preserve your data and configuration.

Before you begin

Ensure that your Linux system meets the hardware prerequisites described in IBM Db2 Warehouse prerequisites.

Ensure that your Linux system meets the prerequisites described in Getting container images.

About this task

To perform this task, you must have root authority on the host operating system.

Procedure

  1. Clear your browser cache.
  2. Log in to Docker or Podman on each node host by using your API key:
    echo <apikey> | docker login -u iamapikey --password-stdin icr.io
    echo <apikey> | podman login -u iamapikey --password-stdin icr.io
    where <apikey> is the API key that you created as a prerequisite in Getting container images.
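Piping the key through --password-stdin keeps it out of the docker login command line, but an inline echo still records the key in your shell history. A minimal sketch of reading it from a root-only file instead; the file path and key value here are placeholders, not part of the product:

```shell
# Hypothetical key file; any root-readable location works.
KEYFILE=$(mktemp)
printf 'example-api-key\n' > "$KEYFILE"   # stands in for your real API key
chmod 600 "$KEYFILE"                      # restrict access to the owner
# The actual login would then be:
#   docker login -u iamapikey --password-stdin icr.io < "$KEYFILE"
# Here we only show what would be sent on stdin:
cat "$KEYFILE"
```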
  3. (MPP only) Back up your /mnt/clusterfs/nodes node configuration file to a different location so that you can reuse it in your new deployment:
    cp /mnt/clusterfs/nodes directory
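The copy saved here is what step 8 later restores after the cluster file system is wiped in step 7. A minimal sketch of that round trip, using temporary directories in place of /mnt/clusterfs and an invented nodes-file payload:

```shell
# Temporary stand-ins for /mnt/clusterfs and the backup directory.
CLUSTERFS=$(mktemp -d)
BACKUP_DIR=$(mktemp -d)
printf 'node1\nnode2\n' > "$CLUSTERFS/nodes"   # example content only
cp "$CLUSTERFS/nodes" "$BACKUP_DIR/"   # this step: save a copy elsewhere
rm -rf "$CLUSTERFS"/*                  # step 7 later wipes the cluster FS
cp "$BACKUP_DIR/nodes" "$CLUSTERFS/"   # step 8: restore into the new deployment
```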
  4. Stop the services and containers by using one of the following approaches:
    • If you used a standard deployment, perform the following substeps:
      1. On the head node host (for MPP) or the single node host (for SMP), stop services. You can identify the head node host in an MPP deployment by issuing the docker exec -it Db2wh status or podman exec -it Db2wh status command and looking for the host in the IBM Db2 Warehouse Cluster Status section of the output. To stop services, issue the following command:
        docker exec -it Db2wh stop
        podman exec -it Db2wh stop
      2. Stop the container on all node hosts:
        docker stop Db2wh
        podman stop Db2wh
    • If you deployed your cluster by using Db2 Warehouse Orchestrator, run the db2wh_orchestrator.sh script as follows:
      path_on_host/db2wh_orchestrator.sh --file /mnt/clusterfs/nodes --stop 
  5. Optional: If there is any data (including user accounts) in your cluster file system that you want to exist in your new deployment, save the data in a safe place. You can use whatever method you prefer, such as creating a tar file or creating a snapshot. For instructions on creating a backup by using snapshots, see Taking an online snapshot of your IBM Db2 Warehouse database.
  6. On each node host, rename the Db2 Warehouse container:
    docker rename Db2wh different_container_name
    Note: The rename command is not supported with Podman. Instead, remove the container, but do not clear the mounted partitions.
  7. Remove the contents of the /mnt/clusterfs cluster file system directory by issuing the following command.
    Important: Issuing the following command results in data loss, so ensure that you backed up any data that you want to preserve, as indicated in Step 5.
    rm -rf /mnt/clusterfs/*
  8. (MPP only) Restore the nodes file that you backed up in Step 3:
    cp directory/nodes /mnt/clusterfs/ 
  9. Optional: If you want to restore from a snapshot backup, restore it into the /mnt/clusterfs directory. See Restoring from a snapshot of your IBM Db2 Warehouse database.
  10. To avoid a possible failure of the port availability check during redeployment, perform the following steps as root. This failure can occur because the Docker engine occasionally does not release all its resources, such as UNIX sockets.
    1. Stop the Docker engine by issuing the following command:
      systemctl stop docker
    2. Restart networking by issuing the following command:
      systemctl restart network
    3. Start the Docker engine by issuing the following command:
      systemctl start docker
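Before rerunning docker run, you can confirm that the ports Db2 Warehouse needs are actually free. A sketch of such a probe, assuming bash's /dev/tcp redirection; the port number is an example (8443 is the console port mentioned later in this procedure):

```shell
# Sketch: check whether anything is still listening on a local port.
port_free() {
  # The probe succeeds (connects) only when something answers on 127.0.0.1:$1,
  # so we negate it: return 0 means the port is free.
  ! (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}
if port_free 8443; then
  echo "port 8443 is free"
else
  echo "port 8443 is still in use; repeat the engine and network restart"
fi
```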
  11. Re-create the containers:
    • If you performed a standard deployment, perform the following substeps:
      1. Pull, create, and initialize the Db2 Warehouse container on all node hosts (for MPP) or the single node host (for SMP) by issuing a docker run or podman run command.
        Note:
        • If necessary, use one or more -e flags to set configuration options during deployment. For example, if you plan to set up HADR after completing an SMP redeployment, include the following option setting in your docker run or podman run command:
          -e HADR_ENABLED='YES'
          For more information, see Configuring IBM Db2 Warehouse.
        • If necessary, replace the container version in the docker run or podman run command with the version of the container that you want to deploy. The container versions are described in IBM Db2 Warehouse containers.
        The basic docker run or podman run command is:
        docker run -d -it --privileged=true --net=host --name=Db2wh -v /mnt/clusterfs:/mnt/bludata0 -v /mnt/clusterfs:/mnt/blumeta0 <tag>
        podman run -d -it --privileged=true --net=host --name=Db2wh -v /mnt/clusterfs:/mnt/bludata0 -v /mnt/clusterfs:/mnt/blumeta0 <tag>
        where <tag> represents one of the following values:
        • For a container for POWER® LE hardware:
          icr.io/obs/hdm/db2wh_ee:v11.5.6.0-db2wh-ppcle
        • For a container for z Systems® hardware:
          icr.io/obs/hdm/db2wh_ee:v11.5.6.0-db2wh-s390x
        • For a container for x86 hardware:
          icr.io/obs/hdm/db2wh_ee:v11.5.6.0-db2wh-linux
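The three tags above map one-to-one to hardware architectures as reported by uname -m. A sketch of selecting the matching tag automatically; the helper function is an assumption for illustration, not part of the product:

```shell
# Map `uname -m` output to the matching Db2 Warehouse container tag.
arch_to_tag() {
  case "$1" in
    ppc64le) echo "icr.io/obs/hdm/db2wh_ee:v11.5.6.0-db2wh-ppcle" ;;
    s390x)   echo "icr.io/obs/hdm/db2wh_ee:v11.5.6.0-db2wh-s390x" ;;
    x86_64)  echo "icr.io/obs/hdm/db2wh_ee:v11.5.6.0-db2wh-linux" ;;
    *)       echo "no Db2 Warehouse container for architecture: $1" >&2
             return 1 ;;
  esac
}
# The tag would then be substituted into the docker run command:
TAG=$(arch_to_tag "$(uname -m)")
```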
      2. Check whether the redeployment is progressing successfully or is complete by issuing the following command. For an MPP deployment, issue the command on the head node.
        docker logs --follow Db2wh
        podman logs --follow Db2wh
        After the deployment finishes, you should see a message that tells you that you successfully deployed Db2 Warehouse, along with the console URL and login information.
      3. Exit the Docker or Podman logs by pressing Ctrl+C.
      4. If the port availability check fails even though you performed step 10, perform the following steps:
        1. Remove the container by issuing the following command:
          docker rm -f Db2wh
          podman rm -f Db2wh
        2. Remove the contents of the /mnt/clusterfs cluster file system directory by issuing the following command:
          rm -rf /mnt/clusterfs/*
        3. Reboot the host.
        4. Restart step 11 from the beginning.
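Instead of watching the logs interactively as in substep 2, the completion check can be scripted by scanning the log output for the success message. A minimal sketch against a simulated log; the exact wording of the message is an assumption here, so match it against what your deployment actually prints:

```shell
# Simulated output of `docker logs Db2wh`; the message text is an example.
LOG=$(mktemp)
printf 'Starting services...\nSuccessfully deployed IBM Db2 Warehouse\n' > "$LOG"
if grep -q 'Successfully deployed' "$LOG"; then
  echo "deployment complete"
else
  echo "still deploying"
fi
```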
  12. On the head node host, log in to the web console by using the URL that was provided with the successful completion messages. The URL is https://head_node_IPaddress:8443.
  13. Remove the old container that you renamed in step 6 of this procedure.