Deploying IBM Db2 Warehouse MPP on Google Cloud Platform

You can deploy Db2® Warehouse on Google Cloud Platform.

Before you begin

Ensure that you have a Google Cloud account with billing enabled.

Procedure

  1. In the Google Cloud console:
    1. Create a new project.
    2. In the new project, use Compute Engine to create several VM instances:
      • One instance for the head node
      • Two or more additional instances for the data nodes
      For each instance, specify 4 vCPUs, 16 GB of memory, and a 30 GB persistent boot disk. Each instance corresponds to one node in your cluster.
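If you prefer the gcloud CLI to the console, the instances could be created with something like the following sketch. The instance names, zone, and image are assumptions; e2-standard-4 is one machine type that provides 4 vCPUs and 16 GB of memory.

```shell
# Sketch only: instance names, zone, and image family are placeholders
# for your own project's values.
for node in head-node data-node1 data-node2; do
  gcloud compute instances create "$node" \
    --zone=us-central1-a \
    --machine-type=e2-standard-4 \
    --boot-disk-size=30GB \
    --image-family=rhel-8 \
    --image-project=rhel-cloud
done
```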
    3. Create a Google Filestore instance with a 1 TB file share. This provides an NFS server that you can use to mount a shared volume on each of the nodes in your cluster.
    4. Attach an additional storage disk to each VM:
      • To the VM of the head node and each VM of a data node, attach a disk with a size of at least 200 GB. The database is distributed equally among these nodes, so choose a disk size that, when multiplied by the total number of nodes, is equal to or greater than the size of the database.
      • To the VM that is to serve as the NFS server, attach a disk with a size of at least 500 GB. This disk houses the cluster file system that is shared by the nodes. For a list of file system requirements, see IBM Db2 Warehouse prerequisites for Linux and x86 hardware.
      For more information, see Creating and Starting a VM Instance.
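With the gcloud CLI, creating and attaching one such disk might look like the following sketch; the disk name, instance name, and zone are placeholders, and the same pattern repeats for each node.

```shell
# Sketch only: names and zone are assumptions. Size follows the
# 200 GB-per-node guidance above; use at least 500 GB for the NFS server's disk.
gcloud compute disks create head-node-data \
  --size=200GB \
  --zone=us-central1-a
gcloud compute instances attach-disk head-node \
  --disk=head-node-data \
  --zone=us-central1-a
```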
    5. In the file system that the nodes share, create a nodes configuration file with the name /mnt/clusterfs/nodes.
      This file specifies, for each node, the node’s type, host name, and IP address in the form node_type=node_hostname:node_IP_address. For the host name, specify the short name that is returned by the hostname -s command, not the fully qualified domain name. For example, the following file defines a three-node cluster:
      head_node=test27:10.0.0.27
      data_node1=test28:10.0.0.28
      data_node2=test29:10.0.0.29
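A quick way to catch typos in this file is to check each line against the expected node_type=node_hostname:node_IP_address pattern. The function below is an illustration only, not part of Db2 Warehouse.

```shell
# Minimal sketch: succeed only if every line of the given nodes file
# matches node_type=node_hostname:node_IP_address.
check_nodes_file() {
  # grep -Evq exits 0 if any line does NOT match; invert that result.
  ! grep -Evq '^[A-Za-z0-9_]+=[A-Za-z0-9.-]+:[0-9]{1,3}(\.[0-9]{1,3}){3}$' "$1"
}
```

For example, `check_nodes_file /mnt/clusterfs/nodes && echo "format OK"` prints the confirmation only when every line is well formed.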
  2. On each of the VMs that you created earlier for the head node and data nodes:
    1. Mount its attached data disk at the mount point /mnt/diskbludata0.
    2. Mount, at the mount point /mnt/clusterfs, the shared cluster file system that is exported by the NFS server.
    For more information, see Google Cloud Filestore.
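On each node, the two mounts might look like the following sketch. The device name /dev/sdb and the Filestore address and share name are assumptions; verify the device with lsblk and look up the Filestore IP address and share name in the console before running anything.

```shell
# Sketch only: /dev/sdb, <filestore_ip>, and <share_name> are placeholders.
sudo mkdir -p /mnt/diskbludata0 /mnt/clusterfs

# Format the attached data disk (first use only; this erases the disk),
# then mount it locally.
sudo mkfs.ext4 /dev/sdb
sudo mount /dev/sdb /mnt/diskbludata0

# Mount the shared Filestore export over NFS.
sudo mount -t nfs <filestore_ip>:/<share_name> /mnt/clusterfs
```

To make both mounts survive reboots, add corresponding entries to /etc/fstab.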
  3. Log in to Docker using your API key:
    echo <apikey> | docker login -u iamapikey --password-stdin icr.io
    where <apikey> is the API key that you created as a prerequisite in Getting container images.
  4. To deploy Db2 Warehouse in an MPP cluster, issue the following command:
    docker run -d -it --privileged=true --net=host --name=Db2wh -v /mnt/diskbludata0:/mnt/bludata0 -v /mnt/clusterfs:/mnt/blumeta0 icr.io/obs/hdm/db2wh_ee:v11.5.6.0-db2wh-linux
    In this example:
    • The path /mnt/blumeta0 inside the container maps to the shared NFS cluster file system that was mounted as /mnt/clusterfs on the head node VM.
    • The path /mnt/bludata0 inside the container maps to the locally attached disk that was mounted as /mnt/diskbludata0 on the head node VM.
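Initialization continues inside the container for some time after docker run returns. One way to follow progress, assuming the container name Db2wh used above, is:

```shell
# Follow the container's initialization output (standard Docker command).
docker logs --follow Db2wh
```

Recent Db2 Warehouse images also provide a status command inside the container (for example, docker exec -it Db2wh status); check the documentation for your image version.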