Installing

After reviewing the system requirements and other planning information, you can proceed to install IBM Sterling Connect:Direct Web Services Container.

The following tasks represent the typical task flow for performing the installation:

Setting up your registry server

To install IBM Sterling Connect:Direct Web Services Container, you must have a registry server where you can host the image required for installation.

Using the existing registry server

If you have an existing registry server, you can use it, provided that it is in close proximity to the cluster where you will deploy IBM Sterling Connect:Direct Web Services Container. If your registry server is not in close proximity to your cluster, you might notice performance issues.

Before installation, ensure that the required pull secrets are created in the namespace or project and are associated with the appropriate service accounts. Proper management of these pull secrets is required. The pull secret can be referenced in the values.yaml file under image.imageSecrets.
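For reference, a minimal sketch of how the pull secret can be referenced in values.yaml is shown below. The secret name is an example only; use the name of the secret you created in your namespace or project.

```yaml
# Sketch of the relevant values.yaml fragment; "ibm-entitlement-key"
# is an example secret name, not a required value.
image:
  imageSecrets: ibm-entitlement-key
```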

Using Docker registry

Kubernetes does not provide a registry solution out of the box. However, you can create your own registry server to host your images. Refer to the Docker documentation on deploying a registry server.

Setting up Namespace or project

To install IBM Sterling Connect:Direct Web Services Container, you must use an existing namespace or project, or create a new one if required.

You can either use an existing namespace or create a new one in a Kubernetes cluster. Similarly, you can either use an existing project or create a new one in an OpenShift cluster. A namespace or project is a cluster resource, so it can only be created by a Cluster Administrator. Refer to the following links for more details:

For Kubernetes - Namespaces

For Red Hat OpenShift - Working with projects

IBM Sterling Connect:Direct Web Services Container is integrated with the IBM Licensing and Metering service through an Operator. You must install this service. For more information, refer to License Service deployment without an IBM Cloud Pak.

Installing and configuring IBM Licensing and Metering service

License Service is required for monitoring and measuring license usage of IBM Sterling Connect:Direct Web Services Container in accordance with the pricing rule for containerized environments. Manual license measurements are not allowed. Deploy License Service on all clusters where IBM Sterling Connect:Direct Web Services Container is installed.

IBM Sterling Connect:Direct Web Services Container contains an integrated service for measuring the license usage at the cluster level for license evidence purposes.

Overview

The integrated licensing solution collects and stores the license usage information which can be used for audit purposes and for tracking license consumption in cloud environments. The solution works in the background and does not require any configuration. Only one instance of the License Service is deployed per cluster regardless of the number of containerized products that you have installed on the cluster.

Deploying License Service

Deploy License Service on each cluster where IBM Sterling Connect:Direct Web Services Container is installed. License Service can be deployed on any Kubernetes-based orchestration cluster. For more information about License Service, how to install and use it, see the License Service documentation.

Validating if License Service is deployed on the cluster

To ensure license reporting continuity for license compliance purposes, make sure that License Service is successfully deployed. It is recommended to verify periodically that it is active. To validate whether License Service is deployed and running on the cluster, you can, for example, log in to the Kubernetes or Red Hat OpenShift cluster and run the following command:
For Kubernetes
kubectl get pods --all-namespaces | grep ibm-licensing | grep -v operator
For Red Hat OpenShift
oc get pods --all-namespaces | grep ibm-licensing | grep -v operator

The following response is a confirmation of successful deployment:

1/1 Running
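To illustrate what the pipeline filters, the following sketch runs the same grep chain over sample kubectl get pods output. The pod names and namespaces are illustrative, not from a real cluster.

```shell
# Sample "kubectl get pods --all-namespaces" output; illustrative only.
out=$(cat <<'EOF' | grep ibm-licensing | grep -v operator
ibm-licensing   ibm-licensing-operator-7d4f5c9b8d-abcde        1/1   Running   0   2d
ibm-licensing   ibm-licensing-service-instance-6c9f7b5d4-xyz   1/1   Running   0   2d
EOF
)
# Only the service instance pod remains; the operator pod is filtered out.
echo "$out"
```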

Archiving license usage data

Remember to archive the license usage evidence before you decommission the cluster where IBM Sterling Connect:Direct Web Services Container was deployed. Retrieve the audit snapshot for the period when IBM Sterling Connect:Direct Web Services Container was on the cluster and store it in case of audit.

For more information about the licensing solution, see License Service documentation.

Downloading IBM Sterling Connect:Direct Web Services Container

Before you install IBM Sterling Connect:Direct Web Services Container, ensure that the installation files are available on your client system.

Depending on whether the cluster has internet access, one of the following procedures applies. Choose the one that best fits your environment.

Online Cluster

A cluster that has access to the internet is called an online cluster. If your Kubernetes or OpenShift cluster has internet access, the process to get the required installation files consists of two steps:
  1. Create the entitled registry secret: Complete the following steps to create a secret with the entitled registry key value:
    1. Ensure that you have obtained the entitlement key that is assigned to your ID.
      1. Log in to My IBM Container Software Library by using the IBM ID and password that are associated with the entitled software.
      2. In the Entitlement keys section, under Activation Keys, select Copy to copy the entitlement key to the clipboard.
      3. Save the entitlement key to a safe location for later use.
        To confirm that your entitlement key is valid, click Container software library on the left of the page to view the list of products that you are entitled to. If Connect:Direct Web Services is not listed, or if the Container software library link is disabled, the identity you used to log in to the container library does not have an entitlement for IBM Connect:Direct Web Services, and the entitlement key is not valid for installing the software.
      Note: For assistance with the Container software library (e.g. product not available in the library; problem accessing your entitlement registry key), contact MyIBM Order Support.
    2. Set the entitled registry information by completing the following steps:
      1. Log on to the machine from which the cluster is accessible
      2. export ENTITLED_REGISTRY=cp.icr.io
      3. export ENTITLED_REGISTRY_USER=cp
      4. export ENTITLED_REGISTRY_KEY=<entitlement_key>
    3. This step is optional. Log on to the entitled registry with the following docker login command:
      docker login "$ENTITLED_REGISTRY" -u "$ENTITLED_REGISTRY_USER" -p "$ENTITLED_REGISTRY_KEY"
      
    4. Create a Docker-registry secret:
       kubectl create secret docker-registry <any_name_for_the_secret> \
         --docker-username=$ENTITLED_REGISTRY_USER \
         --docker-password=$ENTITLED_REGISTRY_KEY \
         --docker-server=$ENTITLED_REGISTRY -n <your namespace/project name>
      
    5. Update the service account or Helm chart image pull secret configuration by setting the `image.imageSecrets` parameter to the name of the secret created above.
  2. Download the Helm chart: You can follow the steps below to download the helm chart from the repository.
    1. Make sure that the Helm client (CLI) is present on your machine. Run the helm command; it should display the CLI usage.
      helm
    2. Check the ibm-helm repository in your helm CLI.
      helm repo list
      If the ibm-helm repository already exists with URL https://raw.githubusercontent.com/IBM/charts/master/repo/ibm-helm, update the local repository; otherwise, add the repository.
    3. Update the local repository, if ibm-helm repository already exists on helm CLI.
      helm repo update
    4. Add the helm chart repository to local helm CLI if it does not exist.
      helm repo add ibm-helm https://raw.githubusercontent.com/IBM/charts/master/repo/ibm-helm
    5. List the ibm-cdws helm charts available in the repository.
      helm search repo -l ibm-cdws
    6. Download the latest helm chart.
      helm pull ibm-helm/ibm-cdws 
      At this stage, ensure that the Helm chart configuration references the entitled registry secret so that the required container image for the IBM Connect:Direct Web Services chart can be pulled during deployment. Both the Helm chart and the entitled registry secret must be present on the system where the deployment is performed.

Offline (Airgap) Cluster

If you have a Kubernetes or OpenShift cluster that is private, it does not have internet access. Depending on the cluster type, follow the procedures below to get the installation files.

For Kubernetes Cluster

Because your Kubernetes cluster is private and does not have internet access, you cannot download the required installation files directly from the server. Follow the steps below to get the required files.
  1. Get an RHEL machine that has:
    • internet access
    • Docker CLI (docker) or Podman CLI (podman)
    • kubectl
    • helm
  2. Download the Helm chart by following the steps mentioned in the Online installation section.
  3. Extract the downloaded helm chart.
    tar -zxf <ibm-cdws-helm chart-name>
  4. Get the container image detail:
    erRepo=$(grep -w "repository:" ibm-cdws/values.yaml |cut -d '"' -f 2)
    erTag=$(grep -w "tag:" ibm-cdws/values.yaml | cut -d '"' -f 2)
    erImgTag=$erRepo:$erTag
  5. Create a Docker registry on this machine by following Setting up your registry server. This step is optional if you already have a Docker registry running on this machine.
  6. Get the Entitled registry entitlement key by following steps a and b explained in Online Cluster under Create the entitled registry section.
  7. Get the container image downloaded in docker registry:
    docker login "$ENTITLED_REGISTRY" -u "$ENTITLED_REGISTRY_USER" -p
    "$ENTITLED_REGISTRY_KEY"
    docker pull $erImgTag
    Note: Skip steps 8, 9, and 10 if the cluster where the deployment will be performed is accessible from this machine and the cluster can fetch container images from the registry running on this machine.
  8. Save the container image.
    docker save -o <container image file name.tar> $erImgTag
  9. Copy or transfer the installation files to your cluster. At this point, you have both the downloaded container image and the Helm chart for IBM Connect:Direct Web Services. Transfer these two files to a machine from which you can access your cluster and its registry.
  10. After transferring the files, load the container image into your registry.
    docker load -i <container image file name.tar>
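The repository/tag parsing in step 4 can be sanity-checked against a sample values.yaml, as sketched below. The repository and tag values are hypothetical; your chart's values.yaml is the source of truth.

```shell
# Create a sample values.yaml with the fields the step 4 commands expect.
mkdir -p /tmp/ibm-cdws
cat > /tmp/ibm-cdws/values.yaml <<'EOF'
image:
  repository: "cp.icr.io/cp/ibm-cdws"
  tag: "1.0.0"
EOF
# Same parsing as step 4: pull the quoted values out of the file.
erRepo=$(grep -w "repository:" /tmp/ibm-cdws/values.yaml | cut -d '"' -f 2)
erTag=$(grep -w "tag:" /tmp/ibm-cdws/values.yaml | cut -d '"' -f 2)
erImgTag=$erRepo:$erTag
echo "$erImgTag"
```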

For Red Hat OpenShift Cluster

If your cluster is not connected to the internet, the deployment can be done in your cluster via connected or disconnected mirroring.

If you have a host that can access both the internet and your mirror registry, but not your cluster nodes, you can directly mirror the content from that machine. This process is referred to as connected mirroring. If you have no such host, you must mirror the images to a file system and then bring that host or removable media into your restricted environment. This process is referred to as disconnected mirroring.

Before you begin

You must complete the steps in the following sections before you begin generating mirror manifests:
Important: If you intend to install using a private container registry, your cluster must support ImageContentSourcePolicy (ICSP).
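For context, an ImageContentSourcePolicy redirects image pulls from a source registry to your mirror. A minimal hypothetical sketch is shown below; in practice the manifest is generated for you as image-content-source-policy.yaml by oc ibm-pak, so you do not write it by hand.

```yaml
# Hypothetical example only; oc ibm-pak generates the real manifest.
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: ibm-cdws-mirror
spec:
  repositoryDigestMirrors:
  - mirrors:
    - myregistry.com:5000/cp   # your private mirror registry
    source: cp.icr.io/cp       # original registry the cluster redirects from
```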

Prerequisites

Regardless of whether you plan to mirror the images with a bastion host or to the file system, you must satisfy the following prerequisites:
  • Red Hat® OpenShift® Container Platform requires you to have cluster admin access to run the deployment.
  • A Red Hat® OpenShift® Container Platform cluster must be installed.

Prepare a host

If you are in an air-gapped environment, you must be able to connect a host to both the internet and the mirror registry (for connected mirroring), or mirror the images to a file system that can be brought into the restricted environment (for disconnected mirroring). For information on the latest supported operating systems, see the ibm-pak plugin install documentation.

The following table explains the software requirements for mirroring the IBM Cloud Pak images:
Table 1. Software requirements for mirroring the IBM Cloud Pak images
Software                     Purpose
Docker                       Container management
Podman                       Container management
Red Hat OpenShift CLI (oc)   Red Hat OpenShift Container Platform administration
Complete the following steps on your host:
  1. Install Docker or Podman.
    To install Docker (for example, on Red Hat® Enterprise Linux®), run the following commands:
    Note: If you are installing as a non-root user you must use sudo. For more information, refer to the Podman or Docker documentation for installing as a non-root user.
    yum check-update
    yum install docker
    

    To install Podman, see Podman Installation Instructions.

  2. Install the oc Red Hat® OpenShift® Container Platform CLI tool.
  3. Download and install the most recent version of IBM Catalog Management Plug-in for IBM Cloud Paks from the IBM/ibm-pak. Extract the binary file by entering the following command:
    tar -xf oc-ibm_pak-linux-amd64.tar.gz
    Run the following command to move the file to the /usr/local/bin directory:
    Note: If you are installing as a non-root user you must use sudo. For more information, refer to the Podman or Docker documentation for installing as a non-root user.
    mv oc-ibm_pak-linux-amd64 /usr/local/bin/oc-ibm_pak
    Note: Download the plug-in based on the host operating system. You can confirm that the plug-in is installed by running the following command:
    oc ibm-pak --help

    The plug-in usage is displayed.

    For more information on plug-in commands, see command-help.

Your host is now configured and you are ready to mirror your images.

Creating registry namespaces

Top-level namespaces are namespaces that appear at the root path of a private registry.

For example, if your registry is hosted at:

myregistry.com:5000

then mynamespace in:

myregistry.com:5000/mynamespace

is a top-level namespace. You can have multiple top-level namespaces.

When images are mirrored to your private registry, the top-level namespace where the images are mirrored must already exist or be automatically created during the image push.

If your registry does not allow automatic creation of top-level namespaces, you must create them manually.

Specify a namespace during mirror manifest generation

When generating mirror manifests, you can specify the top-level namespace by setting:

TARGET_REGISTRY=myregistry.com:5000/mynamespace

This approach requires creating only one namespace (mynamespace) in your registry if automatic namespace creation is not supported.

You can also provide top-level namespaces in the final registry using the --final-registry option.
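As an illustration of what counts as the top-level namespace, the following sketch pulls the first path segment out of a mirrored image reference. The registry and image names are hypothetical.

```shell
# A mirrored image reference: registry host:port, then the namespace path.
image="myregistry.com:5000/mynamespace/cp/ibm-cdws:1.0.0"
# The top-level namespace is the first path segment after the registry.
ns=$(echo "$image" | cut -d '/' -f 2)
echo "$ns"
```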

If you do not specify a namespace

If you do not specify your own top-level namespace, the mirroring process uses the namespaces defined by the CASE files.

For example, it will try to mirror images to:

myregistry.com:5000/cp

Manual namespace creation

If your registry does not allow automatic creation of top-level namespaces and you do not specify your own namespace during mirror manifest generation, you must create the following namespace at the root of your registry:

cp

There may be additional top-level namespaces you need to create.

See Generate mirror manifests for details on using the oc ibm-pak describe command to list all required top-level namespaces.

Set Environment Variables and Download CASE Files

If your host must connect to the internet through a proxy, you must set proxy environment variables on the machine that accesses the internet through the proxy server.

If you are mirroring via connected mirroring, set the following environment variables on that machine:
export https_proxy=http://proxy-server-hostname:port
export http_proxy=http://proxy-server-hostname:port

# Example:
export https_proxy=http://server.proxy.xyz.com:5018
export http_proxy=http://server.proxy.xyz.com:5018
Before mirroring your images, you can set the environment variables on your mirroring device, and connect to the internet so that you can download the corresponding CASE files. To finish preparing your host, complete the following steps:
Note: Save a copy of your environment variable values to a text editor. You can use that file as a reference to cut and paste from when you finish mirroring images to your registry.
  1. Create the following environment variables with the installer image name and the version.
    export CASE_NAME=ibm-cdws
    export CASE_VERSION=<case_version>

    To find the CASE name and version, see IBM: Product CASE to Application Version.

  2. Connect your host to the internet.
  3. The plug-in can detect the locale of your environment and provide textual help and messages accordingly. You can optionally set the locale by running the following command:
    oc ibm-pak config locale -l LOCALE

    where LOCALE can be one of de_DE, en_US, es_ES, fr_FR, it_IT, ja_JP, ko_KR, pt_BR, zh_Hans, zh_Hant.

  4. Configure the plug-in to download CASEs as OCI artifacts from IBM Cloud Container Registry (ICCR).
    oc ibm-pak config repo 'IBM Cloud-Pak OCI registry' -r oci:cp.icr.io/cpopen --enable
  5. Enable color output (optional with v1.4.0 and later)
    oc ibm-pak config color --enable true
  6. Download the image inventory for your IBM Cloud Pak to your host.
    Tip: If you do not specify the CASE version, it will download the latest CASE.
    oc ibm-pak get \
    $CASE_NAME \
    --version $CASE_VERSION
    

By default, the root directory used by plug-in is ~/.ibm-pak. This means that the preceding command will download the CASE under ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION. You can configure this root directory by setting the IBMPAK_HOME environment variable. Assuming IBMPAK_HOME is set, the preceding command will download the CASE under $IBMPAK_HOME/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION.

The logs files will be available at $IBMPAK_HOME/.ibm-pak/logs/oc-ibm_pak.log.
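The directory layout described above can be sketched as follows; the IBMPAK_HOME value, CASE name, and version here are examples.

```shell
# With IBMPAK_HOME set, the CASE is stored under this path.
export IBMPAK_HOME=/opt/ibmpak          # example root directory
CASE_NAME=ibm-cdws
CASE_VERSION=1.0.0                       # example version; use the one you downloaded
case_dir="$IBMPAK_HOME/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION"
echo "$case_dir"
```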

Your host is now configured and you are ready to mirror your images.

Mirroring images to your private container registry

The process of mirroring images copies the images from the internet to your host, and then from your host to your private container registry. After you mirror your images, you can configure your cluster and complete the air-gapped installation.

Complete the following steps to mirror your images from your host to your private container registry:
  1. Generate mirror manifests
  2. Authenticating the registry
  3. Mirror images to final location
  4. Configure the cluster
  5. Install IBM Cloud® Paks by way of Red Hat OpenShift Container Platform
Generate mirror manifests

Note:
  • If you want to install subsequent updates to your air-gapped environment, you must do a CASE get to get the image list when performing those updates. A registry namespace suffix can optionally be specified on the target registry to group mirrored images.

  • Define the environment variable $TARGET_REGISTRY by running the following command:
    export TARGET_REGISTRY=<target-registry>
    

    The <target-registry> refers to the registry (hostname and port) where your images will be mirrored to and accessed by the oc cluster. For example, setting TARGET_REGISTRY to myregistry.com:5000/mynamespace creates manifests such that images are mirrored to the top-level namespace mynamespace.

  • Run the following commands to generate mirror manifests to be used when mirroring from a bastion host (connected mirroring):
    oc ibm-pak generate mirror-manifests \
       $CASE_NAME \
       $TARGET_REGISTRY \
       --version $CASE_VERSION
    
    Example ~/.ibm-pak directory structure for connected mirroring
    The ~/.ibm-pak directory structure is built over time as you save CASEs and mirror. The following tree shows an example of the ~/.ibm-pak directory structure for connected mirroring:
    tree ~/.ibm-pak
    /root/.ibm-pak
    ├── config
    │   └── config.yaml
    ├── data
    │   ├── cases
    │   │   └── YOUR-CASE-NAME
    │   │       └── YOUR-CASE-VERSION
    │   │           ├── XXXXX
    │   │           ├── XXXXX
    │   └── mirror
    │       └── YOUR-CASE-NAME
    │           └── YOUR-CASE-VERSION
    │               ├── catalog-sources.yaml
    │               ├── image-content-source-policy.yaml
    │               └── images-mapping.txt
    └── logs
       └── oc-ibm_pak.log
    

    Notes: A new directory ~/.ibm-pak/mirror is created when you issue the oc ibm-pak generate mirror-manifests command. This directory holds the image-content-source-policy.yaml, images-mapping.txt, and catalog-sources.yaml files.

    Tip: If you are using a Red Hat® Quay.io registry and need to mirror images to a specific organization in the registry, you can target that organization by specifying:
       export ORGANIZATION=<your-organization>
   oc ibm-pak generate mirror-manifests \
      $CASE_NAME \
      $TARGET_REGISTRY/$ORGANIZATION \
      --version $CASE_VERSION
    
You can also generate manifests to mirror images to an intermediate registry server and then mirror them to a final registry server. To do this, pass the final registry server as an argument to --final-registry:
   oc ibm-pak generate mirror-manifests \
      $CASE_NAME \
      $INTERMEDIATE_REGISTRY \
      --version $CASE_VERSION \
      --final-registry $FINAL_REGISTRY

In this case, in place of a single mapping file (images-mapping.txt), two mapping files are created.

  1. images-mapping-to-registry.txt
  2. images-mapping-from-registry.txt
  • Run the following commands to generate mirror manifests to be used when mirroring from a file system (disconnected mirroring):
    oc ibm-pak generate mirror-manifests \
       $CASE_NAME \
       file://local \
       --final-registry $TARGET_REGISTRY
    
    Example ~/.ibm-pak directory structure for disconnected mirroring
    The following tree shows an example of the ~/.ibm-pak directory structure for disconnected mirroring:
    tree ~/.ibm-pak
    /root/.ibm-pak
    ├── config
    │   └── config.yaml
    ├── data
    │   ├── cases
    │   │   └── ibm-cp-common-services
    │   │       └── 1.9.0
    │   │           ├── XXXX
    │   │           ├── XXXX
    │   └── mirror
    │       └── ibm-cp-common-services
    │           └── 1.9.0
    │               ├── catalog-sources.yaml
    │               ├── image-content-source-policy.yaml
    │               ├── images-mapping-to-filesystem.txt
    │               └── images-mapping-from-filesystem.txt
    └── logs
       └── oc-ibm_pak.log
    
    Note: A new directory ~/.ibm-pak/mirror is created when you issue the oc ibm-pak generate mirror-manifests command. This directory holds the image-content-source-policy.yaml, images-mapping-to-filesystem.txt, images-mapping-from-filesystem.txt, and catalog-sources.yaml files.
Tip: Some products support the ability to generate mirror manifests for only a subset of images by using the --filter argument and image grouping. The --filter argument lets you customize which images are mirrored during an air-gapped installation. As an example of this functionality, the ibm-cloud-native-postgresql CASE contains groups that allow mirroring a specific variant of ibm-cloud-native-postgresql (Standard or Enterprise). Use the --filter argument to target a variant of ibm-cloud-native-postgresql to mirror rather than the entire library. The filtering can be applied to groups and architectures. Consider the following command:
   oc ibm-pak generate mirror-manifests \
      ibm-cloud-native-postgresql \
      file://local \
      --final-registry $TARGET_REGISTRY \
      --filter $GROUPS

The command above adds the --filter argument. For example, with $GROUPS equal to ibmEdbStandard, mirror manifests are generated only for the images associated with the Standard variant of ibm-cloud-native-postgresql. The resulting image group consists of the images in the ibm-cloud-native-postgresql image group as well as any images that are not associated with any group. This lets products include common images while reducing the number of images that you need to mirror.

Note: You can use the following command to list all the images that will be mirrored and the publicly accessible registries from where those images will be pulled from:
   oc ibm-pak describe $CASE_NAME --version $CASE_VERSION --list-mirror-images
Tip: The output of the preceding command will have two sections:
  1. Mirroring Details from Source to Target Registry
  2. Mirroring Details from Target to Final Registry. A connected mirroring path that does not involve an intermediate registry will only have the first section.

    Note down the Registries found subsections in the preceding command output. You will need to authenticate against those registries so that the images can be pulled and mirrored to your local registry. See the next steps on authentication. The Top level namespaces found section shows the list of namespaces under which the images will be mirrored. These namespaces must be created manually at the root path of your registry (shown in the Destination column of the command output) if your registry does not allow automatic creation of namespaces.

Authenticating the registry

Complete the following steps to authenticate your registries:

  1. Store authentication credentials for all source Docker registries.

    Your product might require one or more authenticated registries. The following registries require authentication:

    • cp.icr.io
    • registry.redhat.io
    • registry.access.redhat.com

    You must run the following command to configure credentials for all target registries that require authentication. Run the command separately for each registry:

    Note: The export REGISTRY_AUTH_FILE command only needs to run once.
    export REGISTRY_AUTH_FILE=<path to the file which will store the auth credentials generated on podman login>
    podman login <TARGET_REGISTRY>
    
    Important: When you log in to cp.icr.io, you must specify the user as cp and the password which is your Entitlement key from the IBM Cloud Container Registry. For example:
    podman login cp.icr.io
    Username: cp
    Password:
    Login Succeeded!
    

For example, if you export REGISTRY_AUTH_FILE=~/.ibm-pak/auth.json, then after performing podman login, you can see that the file is populated with registry credentials.

If you use docker login, the authentication file is typically located at $HOME/.docker/config.json on Linux or %USERPROFILE%/.docker/config.json on Windows. After docker login, you should export REGISTRY_AUTH_FILE to point to that location. For example, on Linux you can issue the following command:
export REGISTRY_AUTH_FILE=$HOME/.docker/config.json
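Whichever login command you use, the resulting auth file is JSON of roughly the following shape. The auth value is a base64 encoding of user:password; the entry below is illustrative only.

```json
{
  "auths": {
    "cp.icr.io": {
      "auth": "<base64 of cp:your-entitlement-key>"
    }
  }
}
```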
Table 2. Directory description
Directory                 Description
~/.ibm-pak/config         Stores the default configuration of the plug-in and has information about the public GitHub URL from where the cases are downloaded.
~/.ibm-pak/data/cases     Stores the CASE files when they are downloaded by issuing the oc ibm-pak get command.
~/.ibm-pak/data/mirror    Stores the image-mapping files, the ImageContentSourcePolicy manifest in image-content-source-policy.yaml, and the CatalogSource manifest in one or more catalog-sourcesXXX.yaml files. The files images-mapping-to-filesystem.txt and images-mapping-from-filesystem.txt are input to the oc image mirror command, which copies the images to the file system and from the file system to the registry, respectively.
~/.ibm-pak/data/logs      Contains the oc-ibm_pak.log file, which captures all the logs generated by the plug-in.
Mirror images to final location

Complete the steps in this section on your host that is connected to both the local Docker registry and the Red Hat® OpenShift® Container Platform cluster.

  1. Mirror images to the final location.

    • For mirroring from a bastion host (connected mirroring):

      Mirror images to the TARGET_REGISTRY:
       oc image mirror \
         -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping.txt \
         --filter-by-os '.*'  \
         -a $REGISTRY_AUTH_FILE \
         --insecure  \
         --skip-multiple-scopes \
         --max-per-registry=1 \
         --continue-on-error=true
      

      If you generated manifests in the previous steps to mirror images to an intermediate registry server followed by a final registry server, run the following commands:

      1. Mirror images to the intermediate registry server:
        oc image mirror \
          -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-to-registry.txt \
          --filter-by-os '.*'  \
          -a $REGISTRY_AUTH_FILE \
          --insecure  \
          --skip-multiple-scopes \
          --max-per-registry=1 \
          --continue-on-error=true
        
      2. Mirror images from the intermediate registry server to the final registry server:
        oc image mirror \
          -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-from-registry.txt \
          --filter-by-os '.*'  \
          -a $REGISTRY_AUTH_FILE \
          --insecure  \
          --skip-multiple-scopes \
          --max-per-registry=1 \
          --continue-on-error=true
        

        The oc image mirror --help command can be run to see all the options available on the mirror command. Note that we use continue-on-error to indicate that the command should try to mirror as much as possible and continue on errors.

        oc image mirror --help
        
        Note: Depending on the number and size of the images to be mirrored, the oc image mirror command might take a long time. If you are issuing the command on a remote machine, it is recommended that you run the command in the background with nohup, so that the mirroring continues even if the network connection to your remote machine is lost or you close the terminal. For example, the following command starts the mirroring process in the background and writes the log to my-mirror-progress.txt.
        nohup oc image mirror \
        -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping.txt \
        -a $REGISTRY_AUTH_FILE \
        --filter-by-os '.*' \
        --insecure \
        --skip-multiple-scopes \
        --max-per-registry=1 \
        --continue-on-error=true > my-mirror-progress.txt  2>&1 &
        
        You can view the progress of the mirror by issuing the following command on the remote machine:
        tail -f my-mirror-progress.txt
        
    • For mirroring from a file system (disconnected mirroring):

      Mirror images to your file system:
       export IMAGE_PATH=<image-path>
       oc image mirror \
         -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-to-filesystem.txt \
         --filter-by-os '.*'  \
         -a $REGISTRY_AUTH_FILE \
         --insecure  \
         --skip-multiple-scopes \
         --max-per-registry=1 \
         --continue-on-error=true \
         --dir "$IMAGE_PATH"
      

      The <image-path> refers to the local path where the images are stored. For example, if you provided file://local as input during generate mirror-manifests in the previous section, the preceding command creates a subdirectory v2/local inside the directory referred to by <image-path> and copies the images under it.

    The following command can be used to see all the options available on the mirror command. Note that continue-on-error is used to indicate that the command should try to mirror as much as possible and continue on errors.

    oc image mirror --help
    
    Note: Sometimes based on the number and size of images to be mirrored, the oc image mirror might take longer. If you are issuing the command on a remote machine, it is recommended that you run the command in the background with nohup so that even if you lose network connection to your remote machine or you close the terminal, the mirroring will continue. For example, the following command will start the mirroring process in the background and write the log to my-mirror-progress.txt.
     export IMAGE_PATH=<image-path>
     nohup oc image mirror \
       -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-to-filesystem.txt \
       --filter-by-os '.*' \
       -a $REGISTRY_AUTH_FILE \
       --insecure \
       --skip-multiple-scopes \
       --max-per-registry=1 \
       --continue-on-error=true \
       --dir "$IMAGE_PATH" > my-mirror-progress.txt  2>&1 &
    

    You can view the progress of the mirror by issuing the following command on the remote machine:

    tail -f my-mirror-progress.txt
    
  2. For disconnected mirroring only: Continue to move the following items to your file system:

    • The <image-path> directory you specified in the previous step
    • The auth file referred by $REGISTRY_AUTH_FILE
    • ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-from-filesystem.txt
  3. For disconnected mirroring only: Mirror images to the target registry from file system

    Complete the steps in this section to copy the images from the file system to the $TARGET_REGISTRY. Your file system must be connected to the target Docker registry.

    Important: If you used the placeholder value of TARGET_REGISTRY as a parameter to --final-registry at the time of generating mirror manifests, then before running the following command, find and replace the placeholder value of TARGET_REGISTRY in the file images-mapping-from-filesystem.txt with the actual registry where you want to mirror the images. For example, if you want to mirror images to myregistry.com/mynamespace, replace TARGET_REGISTRY with myregistry.com/mynamespace.
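As an illustration of this find-and-replace, here is a hedged sketch using sed on a sample mapping line (the temporary file, image name, and registry value are stand-ins; in practice, operate on the real images-mapping-from-filesystem.txt under ~/.ibm-pak):

```shell
# Sample mapping file standing in for images-mapping-from-filesystem.txt.
MAPPING_FILE=$(mktemp)
echo 'file://local/cp/ibm-cdws@sha256:abc=TARGET_REGISTRY/cp/ibm-cdws' > "$MAPPING_FILE"

# Replace the TARGET_REGISTRY placeholder with the actual registry/namespace.
sed -i 's|TARGET_REGISTRY|myregistry.com/mynamespace|g' "$MAPPING_FILE"
cat "$MAPPING_FILE"
```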
    1. Run the following command to copy the images (referred in the images-mapping-from-filesystem.txt file) from the directory referred by <image-path> to the final target registry:
      export IMAGE_PATH=<image-path>
      oc image mirror \
        -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-from-filesystem.txt \
        -a $REGISTRY_AUTH_FILE \
        --from-dir "$IMAGE_PATH" \
        --filter-by-os '.*' \
        --insecure \
        --skip-multiple-scopes \
        --max-per-registry=1 \
        --continue-on-error=true
Configure the cluster

  1. Update the global image pull secret for your Red Hat OpenShift cluster. Follow the steps in Updating the global cluster pull secret.

    The documented steps ensure that your cluster has the proper authentication credentials in place to pull images from your TARGET_REGISTRY, as specified in the image-content-source-policy.yaml that you will apply to your cluster in the next step.

  2. Create ImageContentSourcePolicy

    Important:
    • Before you run the command in this step, you must be logged in to your OpenShift cluster. Using the oc login command, log in to the Red Hat OpenShift Container Platform cluster where your final location resides. You can identify your specific oc login by clicking the user drop-down menu in the Red Hat OpenShift Container Platform console, then clicking Copy Login Command.

    • If you used the placeholder value of TARGET_REGISTRY as a parameter to --final-registry at the time of generating mirror manifests, then before running the following command, find and replace the placeholder value of TARGET_REGISTRY in the file ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/image-content-source-policy.yaml with the actual registry where you want to mirror the images. For example, replace TARGET_REGISTRY with myregistry.com/mynamespace.

    Run the following command to create ImageContentSourcePolicy:

       oc apply -f  ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/image-content-source-policy.yaml
    

    If you are using Red Hat OpenShift Container Platform version 4.7 or earlier, this step might cause your cluster nodes to drain and restart sequentially to apply the configuration changes.

  3. Verify that the ImageContentSourcePolicy resource is created.

    oc get imageContentSourcePolicy
    
  4. Verify your cluster node status and wait for all the nodes to be restarted before proceeding.

    oc get MachineConfigPool
    
    $ oc get MachineConfigPool -w
    NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
    master   rendered-master-53bda7041038b8007b038c08014626dc   True      False      False      3              3                   3                     0                      10d
    worker   rendered-worker-b54afa4063414a9038958c766e8109f7   True      False      False      3              3                   3                     0                      10d
    

    After the ImageContentSourcePolicy and global image pull secret are applied, the configuration of your nodes is updated sequentially. Wait until all MachineConfigPools report the UPDATED=True status before proceeding.
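The readiness check can also be scripted. The following hedged sketch parses sample oc get MachineConfigPool output captured as a string (substitute the live command output in practice) and reports whether every pool shows UPDATED=True:

```shell
# Sample output of 'oc get MachineConfigPool' (columns trimmed for brevity);
# in practice use: mcp_output=$(oc get MachineConfigPool)
mcp_output='NAME     UPDATED   UPDATING   DEGRADED
master   True      False     False
worker   True      False     False'

# Count data rows where the UPDATED column is not True.
not_ready=$(echo "$mcp_output" | awk 'NR > 1 && $2 != "True"' | wc -l)
if [ "$not_ready" -eq 0 ]; then
  echo "All MachineConfigPools updated"
else
  echo "$not_ready pool(s) still updating"
fi
```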

  5. Go to the project where the deployment is to be done:

    Note: You must be logged into a cluster before performing the following steps.
    export NAMESPACE=<YOUR_NAMESPACE>
    
    oc new-project $NAMESPACE
    
  6. Optional: If you use an insecure registry, you must add the target registry to the cluster insecureRegistries list.

    oc patch image.config.openshift.io/cluster --type=merge \
    -p '{"spec":{"registrySources":{"insecureRegistries":["'${TARGET_REGISTRY}'"]}}}'
    
  7. Verify your cluster node status and wait for all the nodes to be restarted before proceeding.

    oc get MachineConfigPool -w
    

    After the ImageContentSourcePolicy and global image pull secret are applied, the configuration of your nodes is updated sequentially. Wait until all MachineConfigPools are updated.

    At this point your cluster is ready for the IBM Connect:Direct Web Services deployment. The Helm chart is present at ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION/charts/ibm-cdws-1.0.x.tgz. Use it for the deployment by copying it to the current directory.

    cp ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION/charts/ibm-cdws-1.0.x.tgz .
    Note: Replace the version placeholder (1.0.x) with the actual version information in the above command.
  8. Configuration required in the Helm chart: To use image mirroring in an OpenShift cluster, the Helm chart must be configured to refer to the container image by its digest. Set image.digest.enabled to true in the values.yaml file or pass this parameter using the Helm CLI.
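For reference, a minimal values.yaml fragment enabling digest-based image references might look like this (a sketch; verify the exact structure against the values.yaml shipped in the chart):

```yaml
image:
  digest:
    # Refer to the container image by digest rather than tag,
    # so the ImageContentSourcePolicy mirroring takes effect.
    enabled: true
```

The equivalent Helm CLI form is --set image.digest.enabled=true.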
Setting up a repeatable mirroring process

Once you complete a CASE save, you can mirror the CASE as many times as you want to. This approach allows you to mirror a specific version of the IBM Cloud Pak into development, test, and production stages using a private container registry.

Follow the steps in this section if you want to save the CASE to multiple registries (per environment) once and be able to run the CASE in the future without repeating the CASE save process.

  1. Run the following command to save the CASE to ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION which can be used as an input during the mirror manifest generation:
    oc ibm-pak get \
    $CASE_NAME \
    --version $CASE_VERSION
    
  2. Run the oc ibm-pak generate mirror-manifests command to generate the images-mapping.txt file:
    oc ibm-pak generate mirror-manifests \
    $CASE_NAME \
    $TARGET_REGISTRY \
    --version $CASE_VERSION
    
    Then pass the images-mapping.txt file to the oc image mirror command:
    oc image mirror \
      -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping.txt \
      --filter-by-os '.*'  \
      -a $REGISTRY_AUTH_FILE \
      --insecure  \
      --skip-multiple-scopes \
      --max-per-registry=1 \
      --continue-on-error=true
    

If you want to make this repeatable across environments, you can reuse the same saved CASE cache (~/.ibm-pak/$CASE_NAME/$CASE_VERSION) instead of executing a CASE save again in other environments. You do not have to worry about updated versions of dependencies being brought into the saved cache.

Applying Pod Security Standard for Kubernetes Cluster

The Pod Security Standard should be applied to the Kubernetes namespace. This Helm chart has been certified against the baseline Pod Security Standard at the enforce level. For more details, refer to Pod Security Standards.
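As an illustration, namespace labels that apply the baseline standard at the enforce level could look like the following (a sketch; the namespace name is a placeholder):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: cdws-namespace        # placeholder namespace name
  labels:
    # Enforce the baseline Pod Security Standard on this namespace
    pod-security.kubernetes.io/enforce: baseline
```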

SecurityContextConstraints (SCC) for IBM Sterling Connect:Direct Web Services Container

The IBM Connect:Direct Web Services Helm chart requires a SecurityContextConstraints (SCC) resource to be tied to the target namespace prior to deployment.

Configure UID and GID ranges for OpenShift

When deploying on OpenShift, configure the namespace with UID and GID ranges that match the values defined in values.yaml:
storageSecurity:
  fsGroup: 45678
  supplementalGroups: [65534]
  runAsUser: 45678
  runAsGroup: 45678

This range (40000–49999) covers the user and group IDs used in the chart.

Verify the namespace configuration:
oc describe ns <namespace_name>

Ensure that the UID and GID ranges include runAsUser, runAsGroup, and fsGroup values from values.yaml.
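On OpenShift, the UID and GID ranges assigned to a namespace appear as annotations in the oc describe ns output. A hypothetical example whose ranges cover the chart's values:

```yaml
# Example namespace annotations; the 40000/10000 range (start/size)
# covers the runAsUser/runAsGroup/fsGroup value 45678 from values.yaml.
metadata:
  annotations:
    openshift.io/sa.scc.uid-range: 40000/10000
    openshift.io/sa.scc.supplemental-groups: 40000/10000
```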

Creating storage for Data Persistence

Containers are ephemeral entities; all data inside a container is lost when the container is destroyed or removed, so data must be saved to a storage volume by using a Persistent Volume. A persistent volume is recommended for storing Connect:Direct Web Services application data files. A Persistent Volume (PV) is a piece of storage in the cluster that is provisioned by an administrator or by a dynamic provisioner using storage classes.
IBM Sterling Connect:Direct Web Services Container supports:
  • Dynamic Provisioning using storage classes
  • Pre-created Persistent Volume
  • Pre-created Persistent Volume Claim
  • The only supported access mode is `ReadWriteOnce`

Dynamic Provisioning

Dynamic provisioning is supported using storage classes. To enable dynamic provisioning, use the following configuration for the Helm chart:
  • persistence.useDynamicProvisioning- Must be set to true. By default, it is set to false, which means dynamic provisioning is disabled.
  • pvClaim.storageClassName- The storage class is blank by default. Update this parameter with a valid storage class. Consult your cluster administrator for the storage classes available for this chart.
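A hedged values.yaml fragment enabling dynamic provisioning might look like this (the storage class name is an example; use one available in your cluster):

```yaml
persistence:
  enabled: true
  useDynamicProvisioning: true
pvClaim:
  storageClassName: "managed-nfs-storage"   # example storage class name
```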

Non-Dynamic Provisioning

Non-Dynamic Provisioning is supported using pre-created Persistent Volume and pre-created Persistent Volume Claim.

Using pre-created Persistent Volume- When creating the Persistent Volume, make a note of the storage class and metadata labels; they are required to configure the Persistent Volume Claim's storage class and label selector during deployment. This ensures that the claim is bound to the Persistent Volume based on a label match. These labels can be passed to the Helm chart either by the --set flag or by a custom values.yaml file. The parameters defined in values.yaml for the label name and its value are pvClaim.selector.label and pvClaim.selector.value, respectively.

Refer to the following YAML template for Persistent Volume creation and customize it as per your requirements. Example: Create a Persistent Volume using an NFS server.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: <persistent volume name> 
  labels:
    app.kubernetes.io/name: <persistent volume name>
    app.kubernetes.io/instance: <release name>
    app.kubernetes.io/managed-by: <service name>
    helm.sh/chart: <chart name>
    release: <release name>
    purpose: cdwsconfig
spec:
  storageClassName: <storage classname>
  capacity:
    storage: <storage size>
  accessModes:
    - ReadWriteOnce
  nfs:
    server: <NFS server IP address>
    path: <mount path>
Invoke the following command to create a Persistent Volume:
Kubernetes:
kubectl create -f <persistentVolume yaml file>
OpenShift:
oc create -f <persistentVolume yaml file>

Using pre-created Persistent Volume Claim (PVC)- An existing PVC can also be used for the deployment. The parameter for a pre-created PVC is pvClaim.existingClaimName. Pass a valid PVC name to this parameter; otherwise, the deployment fails.

Apart from the required Persistent Volume, you can bind extra storage mounts using the parameters provided in values.yaml. The parameters under persistentVolumeExtra need to be configured for this purpose.

The deployment mounts the following configuration/resource directories on the Persistent Volume:
  • INSTALLATION_DIR/JSONFileSystem
  • INSTALLATION_DIR/RestLogs
  • INSTALLATION_DIR/mftws/BOOT-INF/classes
When the deployment is upgraded or a pod is re-created in a Kubernetes-based cluster, only the data in the above directories is persisted on the Persistent Volume.

In the INSTALLATION_DIR/mftws/BOOT-INF/classes directory, only the following required files are saved/persisted: application.properties, .hiddenFile, ssl-server.jks, trustedkeystore.jks, and log4j2.yaml.

Setting permission on storage

When shared storage is mounted on a container, it is mounted with the same POSIX ownership and permissions present on the exported NFS directory. The mounted directories in the container may not have the correct owner and permissions needed to execute scripts/binaries or write to them. This situation can be handled as follows:
  • Option A: The easiest, but least desirable, solution is to set open permissions on the NFS exported directories.
     chmod -R 777 <path-to-directory>
  • Option B: Alternatively, permissions can be controlled at the group level by leveraging the supplementalGroups and fsGroup settings. For example, a GID can be added to supplementalGroups or fsGroup by setting storageSecurity.supplementalGroups or storageSecurity.fsGroup.
Apart from the above recommendations, during deployment a default Connect:Direct Web Services user cdwsuser with group cdwsuser is created. The default UID and GID of cdwsuser is 1010.
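A minimal sketch of tightening permissions at the group level (Option B) instead of world-writable access, using a temporary directory as a stand-in for the real NFS export path:

```shell
# Temporary directory standing in for the exported NFS directory.
EXPORT_DIR=$(mktemp -d)

# Grant full access to owner and group only, instead of chmod 777.
chmod -R 770 "$EXPORT_DIR"

# Show the resulting mode bits (GNU stat).
stat -c '%a' "$EXPORT_DIR"
```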

Root Squash NFS support

A root squash NFS share is a secured NFS share on which root privileges are reduced to those of an unprivileged user, mapped to the nfsnobody or nobody user on the system. As a result, operations such as changing the ownership of files/directories are not possible.
The Connect:Direct Web Services Helm chart can be deployed on root squash NFS. Because the files/directories mounted in the container are owned by nfsnobody or nobody, the POSIX group ID of the root squash NFS share should be added to the statefulset's supplemental group list using storageSecurity.supplementalGroups in the values.yaml file. Similarly, if an extra NFS share is mounted, proper read/write permission can be provided to the container user using supplemental groups only.
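For example, if the root squash NFS export's group ID is 65534 (nobody/nfsnobody on many systems), the values.yaml fragment could look like this (a sketch; use the GID of your actual export):

```yaml
storageSecurity:
  # GID of the root-squashed NFS export; gives the container user
  # group-level read/write access to the share.
  supplementalGroups: [65534]
```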

Creating secret

Passwords for the KeyStore, TrustStore, and CA-signed key certificate are used by the Administrator to connect to Connect:Direct Web Services.

To separate application secrets from the Helm Release, a Kubernetes secret must be created based on the examples given below and be referenced in the Helm chart as secret.secretName value.

To create Secrets using the command line, follow the steps below:
  1. Create a template file with Secret defined as described in the example below:
    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret name>
    type: Opaque
    data:
      trustStorePassword: <base64 encoded password>
      keyStorePassword: <base64 encoded password>
      caCertPassword: <base64 encoded password>
    Here:
    • trustStorePassword refers to the Trust Store password.
    • keyStorePassword refers to the Key Store password.
    • caCertPassword refers to the CA-signed certificate password. This parameter is required when a user wants to configure a CA-signed key certificate in web services.
    • After the secret is created, delete the YAML file for security reasons.
    Note: Base64 encoded passwords need to be generated manually by invoking the following command:
    echo -n "<your desired password>" | base64
    Use the output of this command in the <secret yaml file>.
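For example, encoding a sample password (changeit is only an illustration) produces the value to paste into the secret YAML:

```shell
# -n suppresses the trailing newline so it is not included in the encoding.
echo -n 'changeit' | base64
```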
  2. Run the following command to create the Secret:
    Kubernetes:
    kubectl create -f <secret yaml file>
    OpenShift
    oc create -f <secret yaml file>
    To check the secret created invoke the following command:
    kubectl get secrets

    For more details see, Secrets.

    Default Kubernetes secrets management has certain security risks as documented here, Kubernetes Security.

    Users should evaluate Kubernetes secrets management based on their enterprise policy requirements and should take steps to harden security.

  3. Secrets need to be created to configure the desired CA-Signed Key Certificate and Trusted Certificate. They can be created using the below examples, as required:
    Kubernetes
    kubectl create secret generic cdws-ca-cert-secret --from-file=/path/to/certificate_file1
    
    kubectl create secret generic cdws-trust-cert-secret --from-file=/path/to/certificate_file2
    
    OpenShift
    oc create secret generic cdws-ca-cert-secret --from-file=/path/to/certificate_file1
    
    oc create secret generic cdws-trust-cert-secret --from-file=/path/to/certificate_file2
    
    Note:
    • Ensure that the CA-Signed key certificate contains the complete certificate chain.

Configuring- Understanding values.yaml

The following table describes configuration parameters listed in the values.yaml file in Helm charts used to complete the installation.

Parameter Description Default
affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution k8s PodSpec.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution. Refer section "Affinity"  
affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution k8s PodSpec.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution. Refer section "Affinity"  
affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution k8s PodSpec.podAffinity.preferredDuringSchedulingIgnoredDuringExecution. Refer section "Affinity"  
affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution k8s PodSpec.podAffinity.requiredDuringSchedulingIgnoredDuringExecution. Refer section "Affinity"  
affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution k8s PodSpec.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution. Refer section "Affinity"  
affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution k8s PodSpec.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution. Refer section "Affinity"  
arch Node Architecture amd64
autoscaling.enabled Autoscaling is enabled or not true
autoscaling.maxReplicas Maximum pod replica 2
autoscaling.minReplicas Minimum pod replica 1
autoscaling.targetCPUUtilizationPercentage Target CPU Utilization 70
autoscaling.targetMemoryUtilizationPercentage Target Memory Utilization 70
cdwsParams.certificateExpiryTime Self-signed certificate - Enter the certificate expiration time in days  
cdwsParams.certificateLabel Certificate label for CA-signed Certificate/Self-signed certificate  
cdwsParams.commonName Self-signed certificate - Identifies the host name associated with the certificate  
cdwsParams.country Self-signed certificate - The two-letter ISO code for the country where your organization is located.  
cdwsParams.dnsName Self-signed certificate - Identifies the domain name associated with the certificate.  
cdwsParams.emailId Self-signed certificate - An email address used to contact your organization.  
cdwsParams.ipAddress Self-signed certificate - Identifies the IP Address associated with the certificate.  
cdwsParams.locality Self-signed certificate - The city where your organization is located.  
cdwsParams.organization Self-signed certificate - The legal name of your organization. Should not be abbreviated and should include suffixes (Inc, Corp, LLC).  
cdwsParams.state Self-signed certificate - The state/region where your organization is located.  
cdwsParams.restOnly Flag to deploy only the REST services interface  
dashboard.enabled Enable the monitoring dashboard  
defaultPodDisruptionBudget.enabled Enable the pod disruption budget false
defaultPodDisruptionBudget.minAvailable Minimum replicas required for the pod disruption budget 1

hostAliases.enabled Enable hostname and IP mapping for DNS resolution false
hostAliases.hostEntries For providing IP and hostname mapping []
image.digest.enabled Use the image digest instead of the tag to refer to the container image false
image.imageSecrets Image pull secrets  
image.pullPolicy Image pull policy IfNotPresent
image.repository Image full name including repository  
image.tag Image tag  
ingress.annotations Annotation for ingress resource []
ingress.controller Ingress controller name  
ingress.enabled Flag to enable or disable ingress false
ingress.host Ingress hostname  
ingress.tls.enabled TLS is enabled or disabled for ingress resource false
ingress.tls.secretName TLS secret name if enabled  
initResources.limits.cpu Init Container CPU limit 500m
initResources.limits.memory Init Container memory limit 1Gi
initResources.requests.cpu Init Container CPU requested 250m
initResources.requests.memory Init Container Memory requested 1Gi
license License agreement. Set true to accept the license. false
licenseType Specify prod or non-prod for production or non-production license type respectively prod
livenessProbe.initialDelaySeconds Initial delays for liveness 15
livenessProbe.periodSeconds Time period for liveness 15
livenessProbe.timeoutSeconds Timeout for liveness 10
networkPolicy.egress Network Policy egress rules {}
networkPolicy.ingress Network Policy ingress rules {}
persistence.enabled To use persistent volume true
persistence.useDynamicProvisioning To use storage classes to dynamically create PV false
pvClaim Specify the existing PV claim name to be used for deployment  
pvClaim.accessMode Access mode for PV Claim ReadWriteOnce
pvClaim.existingClaimName Provide name of existing PV claim to be used  
pvClaim.selector.label PV label key to bind this PVC  
pvClaim.selector.value PV label value to bind this PVC  
pvClaim.size Size of PVC volume 500Mi
pvClaim.storageClassName Storage class of the PVC  
persistentVolumeExtra.accessMode PV accessMode ReadWriteOnce
persistentVolumeExtra.claimName Already created PVC name  
persistentVolumeExtra.enabled Persistent volume for user input false
persistentVolumeExtra.selector.label Label name for attaching PV  
persistentVolumeExtra.selector.value Label value for attaching PV  
persistentVolumeExtra.size Size of PVC volume 100Mi
persistentVolumeExtra.storageClassName Storage class of the PVC manual
readinessProbe.initialDelaySeconds Initial delays for readiness 15
readinessProbe.periodSeconds Time period for readiness 15
readinessProbe.timeoutSeconds Timeout for readiness 10
replicaCount Number of deployment replicas 1
resources.limits.cpu Container CPU limit 1500m
resources.limits.memory Container memory limit 1Gi
resources.requests.cpu Container CPU requested 1000m
resources.requests.memory Container Memory requested 1Gi
route.enabled Route for OpenShift Enabled/Disabled false
secret.caCertSecretName CA Certificate file to be imported at the time of install  
secret.secretName Secret name for Secure Parameters  
secret.trustCertSecretName Trusted Certificate file to be imported at the time of install  
secComp.profile seccomp profile filepath  
secComp.type seccomp profile type RuntimeDefault
serviceAccount.create Enable/disable service account creation true
serviceAccount.name Name of Service Account to use for container  
service.annotations Add metadata to the Service object to support integration with external tools or controllers.  
service.allowIngressTraffic Allowing Ingress traffic for Web Console true
service.externalIP External IP for service discovery  
service.externalTrafficPolicy For passing external Traffic Policy Local
service.loadBalancerIP For passing load balancer IP  
service.loadBalancerSourceRanges Load Balancer sources []
service.port Web Console port number 9443
service.protocol Web Console Protocol for service TCP
service.sessionAffinity Session Affinity ClientIP
service.type Kubernetes service type exposing ports LoadBalancer
service.webConsoleName Web Console name cdws-web-console
storageSecurity.fsGroup Used for controlling access to block storage  
storageSecurity.supplementalGroups Groups IDs used for controlling access 65534
storageSecurity.runAsGroup Specify the group ID under which the containerized process runs.  
storageSecurity.runAsUser Run apps in a container under a nondefault user account. 1010
timeZone This flag is used for setting TimeZone of container Asia/Calcutta

To override configuration parameters during Helm installation, choose one of the following methods:

Method 1: Override Parameters Directly with CLI Using --set

This approach uses the --set argument to specify each parameter that needs to be overridden at the time of installation.

Example for Helm Version 3:

helm install <release-name> \
--set service.port=9443 \
... 
ibm-cdws-1.0.x.tgz

Method 2: Use a YAML File with Configured Parameters

Alternatively, specify configurable parameters in a values.yaml file and use it during installation. This approach can be helpful for managing multiple configurations in one place.

  • To obtain the values.yaml template from the Helm chart:

    • For Online Cluster:

      helm inspect values ibm-helm/ibm-cdws > my-values.yaml
      
    • For Offline Cluster:

      helm inspect values <path-to-ibm-cdws-helm-chart> > my-values.yaml
      
  • Edit the my-values.yaml file to include your desired configuration values and use it with the Helm installation command:

Example for Helm Version 3:

helm install <release-name> -f my-values.yaml ... ibm-cdws-1.0.x.tgz
To mount extra volumes, use the following method:
YAML Configuration:
persistentVolumeExtra:
  enabled: true
  claimName: ""
  # if claim name is not given and enabled is true, then the next 3 properties are required
  storageClassName: "manual"
  size: 100Mi
  accessMode: "ReadWriteOnce"
  selector:
    label: ""
    value: ""

An extra volume must be mounted when web services running inside the container require access to files by using an absolute path. For example, in the process control API, the processFile parameter requires the full file path. This requirement can be met by using an extra volume mount.

The mount path for the additional persistent volume claim is /opt/process. Therefore, a corresponding processFile value would be:
/opt/process/process_file.cdp

Affinity

The chart provides node affinity, pod affinity, and pod anti-affinity options to configure advanced pod scheduling in Kubernetes. See the Kubernetes documentation for details.

Note: For the exact parameters, their values, and their descriptions, refer to the values.yaml file present in the Helm chart itself. Untar the Helm chart package to see this file inside the chart directory.

Network Policy Change

Out of the box Network Policies

IBM Sterling Connect:Direct Web Services Container comes with predefined network policies based on mandatory security guidelines. By default, all outbound communication is restricted, permitting only intra-cluster communication.

Out-of-the-box Egress Policies:
  1. Deny all Egress Traffic

  2. Allow Egress Traffic within the Cluster

Defining Custom Network Policy

During the deployment of the Helm chart, you have the flexibility to enable or disable network policies. If policies are enabled, a custom egress network policy is essential for communication outside the cluster; it can be defined in values.yaml under the networkPolicy.egress spec. Similarly, a custom ingress policy can be defined in values.yaml under the networkPolicy.ingress spec. The following can serve as a reference during Helm chart deployment:
ingress: {}
# - from:
#   ports:
#   - protocol: TCP
#     port: 9443  # port should be the same as defined for service.port

egress: {}

Pod Disruption Budget

IBM Sterling Connect:Direct Web Services Container supports a pod disruption budget to ensure zero downtime during maintenance or upgrade activity. To configure the pod disruption budget, set defaultPodDisruptionBudget.enabled to true in values.yaml:
defaultPodDisruptionBudget:
  enabled: false
  minAvailable: 1

For more information, refer to Pod Disruption Budget.

Autoscaling

This chart provides methods to configure horizontal as well as vertical scaling.

For vertical scaling, update resources.limits in values.yaml and set the values for CPU and memory according to your requirements and resource availability.

resources:
  limits:
    cpu: 3000m
    memory: 2Gi
    ephemeral-storage: "3Gi"
The HorizontalPodAutoscaler is used to scale the application horizontally. To scale the application horizontally, set autoscaling.enabled to true in values.yaml:
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 2
  targetCPUUtilizationPercentage: 70
  targetMemoryUtilizationPercentage: 70

For more information, refer to Horizontal Pod Autoscaling.

Installing IBM Connect:Direct Web Services using Helm chart

After completing all the steps in the Installing section, deploy the IBM Sterling Connect:Direct Web Services Container by invoking the following command:
helm install my-release --set license=true,image.repository=<reponame>,image.tag=<image tag>,image.imageSecrets=<image pull secret>,secret.secretName=<CDWS secret name> ibm-cdws-1.0.x.tgz
or
helm install my-release ibm-cdws-1.0.x.tgz -f my-values.yaml

This command deploys the ibm-cdws-1.0.x.tgz chart on the Kubernetes cluster using the default configuration. Configuring- Understanding values.yaml lists the parameters that can be configured at deployment.

Mandatory parameters required for the helm install command:
Table 3.
Parameter Description Default Value
license License agreement for IBM Certified Container Software false
image.repository Image full name including repository  
image.tag Image tag  
image.imageSecrets Image pull secrets  
secret.secretName Secret name for Connect:Direct Web Services password store  

Validating the Installation

After the deployment procedure is complete, you should validate the deployment to ensure that everything is working according to your needs. The deployment may take approximately 4-5 minutes to complete.

To validate whether the IBM Sterling Connect:Direct Web Services Container deployment using Helm charts is successful, invoke the following commands for a Helm chart with release my-release and namespace my-namespace.
  • Check the Helm chart release status by invoking the following command and verify that the STATUS is DEPLOYED:
    helm status my-release
  • Wait for the pod to be ready. To verify the pod status (READY), use the dashboard or the command line interface by invoking the following command:
    kubectl get pods -l release=my-release -n my-namespace -o wide
  • To view the services and ports exposed to enable communication with a pod, invoke the following command:
    kubectl get svc -l release=my-release -n my-namespace -o wide

    The screen output displays the external IP and exposed ports under the EXTERNAL-IP and PORT(S) columns respectively. If an external LoadBalancer is not present, use the Master node IP as the external IP.

Exposed Services

IBM Connect:Direct Web Services Admin and User functions can be accessed using the LoadBalancer or external IP and the mapped server port. If an external LoadBalancer is not present, use the Master node IP for communication.

In addition, on an OpenShift cluster, services can also be accessed through a route. For configuring a route in the OpenShift cluster, route.enabled should be set to true in the values.yaml file. You can then get the route using the following command:
oc get route -l release=my-release -n my-namespace -o wide

From the output of the above command, extract the HOST/PORT value and use it to access Connect:Direct Web Services (https://<HOST/PORT>/cdws-ui/index.html).

Similarly, for accessing the application in a Kubernetes cluster, an ingress resource can be configured. For configuring the ingress resource, the following parameters have to be updated in values.yaml:
ingress:
  enabled: false
  host: ""
  controller: "nginx"
  annotations: {}
  tls:
    enabled: false
    secretName: ""
Set ingress.enabled to true and update ingress.host with the hostname that will be used to access the web services application. To use TLS, enable ingress.tls and update ingress.tls.secretName with the TLS secret name. Create a Kubernetes TLS secret using the following command:
kubectl create secret tls ibm-cdws-tls --key=<key file path> --cert=<cert file path>
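Putting these settings together, a values.yaml fragment for a TLS-enabled ingress might look like this (the hostname is an example; the secret name matches the TLS secret created above):

```yaml
ingress:
  enabled: true
  host: "cdws.example.com"      # example hostname
  controller: "nginx"
  annotations: {}
  tls:
    enabled: true
    secretName: "ibm-cdws-tls"
```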
Utilize the following command to verify the configuration of the Ingress resource:
kubectl get ingress -l release=my-release -n my-namespace -o wide
From the above output, extract the hostname from the HOSTS column and use it to access the web services, for example: https://hostname/cdws-ui/index.html.
Note: The NodePort service type is not recommended due to additional security concerns and complexities in management, both from an application and networking infrastructure standpoint.

DIME and DARE Security Considerations

This topic provides security recommendations for setting up Data In Motion Encryption (DIME) and Data At Rest Encryption (DARE). It is intended to help you create a secure implementation of the application.

  1. All sensitive application data at rest is stored in binary format, so a user cannot read it directly. This chart does not support encryption of user data at rest by default. An administrator can configure storage encryption to encrypt all data at rest.
  2. Data in motion is encrypted using transport layer security (TLS 1.3).