Deploying the Docker images through Helm

Before you begin

  • The IBM Financial Crimes Insight platform must be installed and configured.
  • IBM Surveillance Insight for Financial Services must be installed and configured.
  • The Hortonworks Data Platform (HDP) must be set up.
  • The Kubernetes cluster must be set up.

Procedure

  1. Tag the three images (API node, backend, and Streams) with the 2.0.3 tag by running the following command for each image:
    docker tag <ftr image name>:1.0.3  <ftr image name>:2.0.3

    You must change the name of one image, fci-cat-node-api, to fci-cat-node-ftr by using the following command:

    docker tag fcidev-si-docker-registry.fss.ibm.com:5000/ibmcom/fci-cat-node-api:1.0.3 fcidev-si-docker-registry.fss.ibm.com:5000/ibmcom/fci-cat-node-ftr:2.0.3
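
    If the backend and Streams images follow the same naming pattern (the image names fci-cat-nodejs and fci-cat-streams are illustrative; check your registry for the actual names), the remaining two tag commands look like the following:

    docker tag fcidev-si-docker-registry.fss.ibm.com:5000/ibmcom/fci-cat-nodejs:1.0.3 fcidev-si-docker-registry.fss.ibm.com:5000/ibmcom/fci-cat-nodejs:2.0.3
    docker tag fcidev-si-docker-registry.fss.ibm.com:5000/ibmcom/fci-cat-streams:1.0.3 fcidev-si-docker-registry.fss.ibm.com:5000/ibmcom/fci-cat-streams:2.0.3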
  2. Use the following commands to purge the cats-proxy release:
    helm ls

    If the cats-proxy charts are shown, delete them by using the following command:

    helm del --purge cats-proxy
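
    For example, because this kit uses the Helm v2 client (the helm del --purge syntax above), you can pass a filter to helm ls to check whether the release exists before you delete it:

    helm ls cats-proxy          # list only releases whose names match cats-proxy
    helm del --purge cats-proxy # remove the release and its stored history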
  3. Run the following helm install command to deploy the charts:
    helm install --set "managerIPAddresses={10.65.6.40}" \
    --set "forwards.3001.serviceReleaseName=cats" \
    --set "forwards.3001.serviceName=ftr-nodejs" \
    --set "forwards.3001.servicePort=3001" \
    --set "forwards.3003.serviceReleaseName=cats" \
    --set "forwards.3003.serviceName=ftr-apinodejs" \
    --set "forwards.3003.servicePort=3003" \
    /fcimedia/ftr/archives/fci-charts-1.0.3/charts/nginx-ingress-controller-1.0.3.tgz
    

    If the Kubernetes system variables are not defined in the values.yaml file, you can define them by appending the following options to the helm install command:

    --set global.managerFQDN=$(hostname -f) --set global.nfsServer=$(hostname -f),global.dockerRepository=" "

    This helm install command deploys both of the Node images. However, for Streams, the persistent volume must be initialized first (see step 5).
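
    For example, the complete install command with the additional global options appended looks like the following; the IP address, service names, and chart archive path are the example values used in this procedure and must be replaced with your own:

    helm install --set "managerIPAddresses={10.65.6.40}" \
    --set "forwards.3001.serviceReleaseName=cats" \
    --set "forwards.3001.serviceName=ftr-nodejs" \
    --set "forwards.3001.servicePort=3001" \
    --set "forwards.3003.serviceReleaseName=cats" \
    --set "forwards.3003.serviceName=ftr-apinodejs" \
    --set "forwards.3003.servicePort=3003" \
    --set global.managerFQDN=$(hostname -f) \
    --set global.nfsServer=$(hostname -f),global.dockerRepository=" " \
    /fcimedia/ftr/archives/fci-charts-1.0.3/charts/nginx-ingress-controller-1.0.3.tgz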

  4. Start the Node application (a summary sketch follows these substeps):
    1. Run the following Kubernetes commands to open a shell in the Node application pod:
      kubectl get pods
      kubectl exec -it <pod ID> bash
    2. Open the .env file by using the following command:
      vi .env
    3. In the .env file, change the API node image address to the following value:
      https://localhost:<port of api node image>
    4. Restart the application:
      pm2 restart app
    5. Open the apinodejs image pod and change the value of the HBASEADDRESS variable in the /opt/codebase/.env file to the current HBase server.
    6. Restart the index:
      pm2 restart index
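
    The following is a minimal sketch of this step. The pod names are illustrative, and port 3003 is taken from the helm install example in step 3; adjust all of them to your deployment:

      # Open a shell in the Node application pod (pod name is illustrative)
      kubectl get pods
      kubectl exec -it cats-ftr-nodejs-<pod suffix> bash
      vi .env                  # set the API node address, for example https://localhost:3003
      pm2 restart app
      exit

      # Open a shell in the apinodejs pod and point HBASEADDRESS at the current HBase server
      kubectl exec -it cats-ftr-apinodejs-<pod suffix> bash
      vi /opt/codebase/.env    # set HBASEADDRESS to your HBase server
      pm2 restart index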
  5. Initialize the volume for Streams:
    1. Go to the following path on the Kubernetes manager: /fcimedia/ftr/cats-install-kit-2.0.3/helm
    2. Run the following command:
      initialize-pv -p $(kubectl get pod -l app=<ftr-streams>,release=cats -o jsonpath='{.items[*].metadata.name}') -i init-pv -t <location of streams data tar>

      This step deploys the Streams image.

    3. Copy the Hadoop configuration directories (hadoop and hadoop-hdfs) from the Hadoop installation to the config directory in the volume.
    4. An ingest directory exists in the volume mount point. Copy the new trade data files that are to be ingested by the various streams into this directory.
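
    A minimal sketch of this step; the tar file location, the Hadoop configuration path, and the volume mount point are illustrative placeholders that must be replaced with the paths in your environment:

      cd /fcimedia/ftr/cats-install-kit-2.0.3/helm

      # Initialize the persistent volume in the Streams pod (tar path is illustrative)
      initialize-pv -p $(kubectl get pod -l app=<ftr-streams>,release=cats -o jsonpath='{.items[*].metadata.name}') \
        -i init-pv -t /fcimedia/ftr/streams-data.tar

      # Copy the Hadoop configuration directories into the config directory of the volume
      cp -r <hadoop configuration path>/hadoop <hadoop configuration path>/hadoop-hdfs <volume mount point>/config/

      # Copy new trade data files for ingestion into the ingest directory
      cp <new trade data files> <volume mount point>/ingest/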
  6. Start the Streams application:
    1. Identify all of the containers that are deployed in this cluster:
      kubectl get pods
    2. Get the container name of the FTR Streams container from the output.
    3. Run the following command to access the container:
      kubectl exec -it <pod name> bash
    4. Update the following configuration:
      1. In the data/sifs-jaas.conf file, update the Kafka Kerberos principal:
        principal="kafka/hdp1secondary.fss.ibm.com";
      2. In data/producer.properties, update the Kafka server details:
        bootstrap.servers=hdp1secondary.fss.ibm.com:6667
      3. In data/application.properties, update the HDFS and Configuration Server locations:
        hdfs_url=hdfs://hdp1master.fss.ibm.com:8020/
        config_server=https://fcidev-si-kmaster.fss.ibm.com:3001/SIFSServices/ftr
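
      If you prefer to script these three configuration edits, the following is a minimal sketch that uses sed; the hostnames and URLs are the example values shown above and must be replaced with the values for your environment:

        sed -i 's|principal=.*|principal="kafka/hdp1secondary.fss.ibm.com";|' data/sifs-jaas.conf
        sed -i 's|^bootstrap.servers=.*|bootstrap.servers=hdp1secondary.fss.ibm.com:6667|' data/producer.properties
        sed -i 's|^hdfs_url=.*|hdfs_url=hdfs://hdp1master.fss.ibm.com:8020/|' data/application.properties
        sed -i 's|^config_server=.*|config_server=https://fcidev-si-kmaster.fss.ibm.com:3001/SIFSServices/ftr|' data/application.properties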
        
    5. Go to the application directory and start the Streams application:
      cd /opt/codebase
      ./app.sh
    6. Log in as the streamsadmin user, and verify that the IBM FTR Streams jobs are running in the container:
      streamtool lsjobs -d StreamsDomain -i SIInstance

      The results appear as:

      Instance: SIInstance
      Id State   Healthy User         Date                     Name                                      Group
      77 Running yes     streamsadmin 2018-07-05T13:46:08+0000 application::CCSIngestFileRenaming_v1_77  default
      78 Running yes     streamsadmin 2018-07-05T13:46:27+0000 application::CCSIngestFileRenaming_v1_78  default