Installing
After reviewing the system requirements and other planning information, you can proceed to install IBM Sterling Connect:Direct Web Services Container.
The following tasks represent the typical task flow for performing the installation:
Setting up your registry server
To install IBM Sterling Connect:Direct Web Services Container, you must have a registry server where you can host the image required for installation.
Using an existing registry server
If you have an existing registry server, you can use it, provided that it is in close proximity to the cluster where you will deploy IBM Sterling Connect:Direct Web Services Container. If your registry server is not in close proximity to your cluster, you might notice performance issues.
Before installation, ensure that the required pull secrets are created in the namespace or project and are associated with the appropriate service accounts. Proper management of these pull secrets is required. The pull secret can be referenced in the values.yaml file under image.imageSecrets.
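For example, assuming a pull secret named cdws-pull-secret (an illustrative name), a minimal sketch of the reference in values.yaml might look like this:
image:
  imageSecrets: cdws-pull-secret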
Using a Docker registry
Kubernetes does not provide a registry solution out of the box. However, you can create your own registry server and host your images there. Refer to the Docker documentation for deploying a registry server.
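As a minimal sketch, a self-hosted registry can be started with the open-source registry image; this is illustrative only and not a hardened, production-ready setup:
docker run -d -p 5000:5000 --restart=always --name registry registry:2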
Setting up a namespace or project
To install IBM Sterling Connect:Direct Web Services Container, you must have an existing namespace or project, or create a new one if required.
You can either use an existing namespace or create a new one in a Kubernetes cluster. Similarly, you can either use an existing project or create a new one in an OpenShift cluster. A namespace or project is a cluster resource, so it can only be created by a Cluster Administrator. Refer to the following links for more details:
For Kubernetes - Namespaces
For Red Hat OpenShift - Working with projects
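For example, assuming a namespace or project named cdws (an illustrative name), it can be created as follows:
kubectl create namespace cdws
oc new-project cdws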
IBM Sterling Connect:Direct Web Services Container is integrated with the IBM Licensing and Metering service using an Operator. You need to install this service. For more information, refer to License Service deployment without an IBM Cloud Pak.
Installing and configuring IBM Licensing and Metering service
License Service is required for monitoring and measuring license usage of IBM Sterling Connect:Direct Web Services Container in accordance with the pricing rule for containerized environments. Manual license measurements are not allowed. Deploy License Service on all clusters where IBM Sterling Connect:Direct Web Services Container is installed.
IBM Sterling Connect:Direct Web Services Container contains an integrated service for measuring the license usage at the cluster level for license evidence purposes.
Overview
The integrated licensing solution collects and stores the license usage information which can be used for audit purposes and for tracking license consumption in cloud environments. The solution works in the background and does not require any configuration. Only one instance of the License Service is deployed per cluster regardless of the number of containerized products that you have installed on the cluster.
Deploying License Service
Deploy License Service on each cluster where IBM Sterling Connect:Direct Web Services Container is installed. License Service can be deployed on any Kubernetes-based orchestration cluster. For more information about License Service, and how to install and use it, see the License Service documentation.
Validating if License Service is deployed on the cluster
kubectl get pods --all-namespaces | grep ibm-licensing | grep -v operator
oc get pods --all-namespaces | grep ibm-licensing | grep -v operator
The following response is a confirmation of successful deployment:
1/1 Running
Archiving license usage data
Remember to archive the license usage evidence before you decommission the cluster where IBM Sterling Connect:Direct Web Services Container was deployed. Retrieve the audit snapshot for the period when IBM Sterling Connect:Direct Web Services Container was on the cluster and store it in case of audit.
For more information about the licensing solution, see License Service documentation.
Downloading IBM Sterling Connect:Direct Web Services Container
Before you install IBM Sterling Connect:Direct Web Services Container, ensure that the installation files are available on your client system.
Depending on whether the cluster has internet access, follow one of the procedures below. Choose the one that best applies to your environment.
Online Cluster
- Create the entitled registry secret: Complete the following steps to create a secret with the entitled registry key value:
- Ensure that you have obtained the entitlement key that is assigned to your ID.
- Log in to My IBM Container Software Library by using the IBM ID and password that are associated with the entitled software.
- In the Entitlement keys section, under Activation Keys, select Copy to copy the entitlement key to the clipboard.
- Save the entitlement key to a safe location for later use. To confirm that your entitlement key is valid, click the Container software library link provided on the left of the page. You can view the list of products that you are entitled to. If Connect:Direct Web Services is not listed, or if the Container software library link is disabled, the identity with which you are logged in to the container library does not have an entitlement for IBM Connect:Direct Web Services. In this case, the entitlement key is not valid for installing the software.
Note: For assistance with the Container software library (for example, a product not available in the library, or a problem accessing your entitlement registry key), contact MyIBM Order Support.
- Set the entitled registry information by completing the following steps:
- Log on to a machine from which the cluster is accessible
- export ENTITLED_REGISTRY=cp.icr.io
- export ENTITLED_REGISTRY_USER=cp
- export ENTITLED_REGISTRY_KEY=<entitlement_key>
- This step is optional. Log on to the entitled registry with the following docker login command:
docker login "$ENTITLED_REGISTRY" -u "$ENTITLED_REGISTRY_USER" -p "$ENTITLED_REGISTRY_KEY"
- Create a Docker-registry secret:
kubectl create secret docker-registry <any_name_for_the_secret> --docker-username=$ENTITLED_REGISTRY_USER --docker-password=$ENTITLED_REGISTRY_KEY --docker-server=$ENTITLED_REGISTRY -n <your namespace/project name>
- Update the service account or Helm chart image pull secret configuration using the image.imageSecrets parameter with the above secret name, as shown in the example below.
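For example, if the secret was created with the illustrative name ibm-entitlement-key, it can be passed at install time:
helm install <release-name> --set image.imageSecrets=ibm-entitlement-key ... ibm-cdws-1.0.x.tgz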
- Download the Helm chart: Follow the steps below to download the Helm chart from the repository.
- Make sure that the Helm client (CLI) is present on your machine. Run the helm command on the machine; it should display the Helm CLI usage:
helm
- Check for the ibm-helm repository in your Helm CLI by running helm repo list. If the ibm-helm repository already exists with the URL https://raw.githubusercontent.com/IBM/charts/master/repo/ibm-helm, update the local repository; otherwise, add the repository.
- Update the local repository if the ibm-helm repository already exists in your Helm CLI:
helm repo update
- Add the Helm chart repository to the local Helm CLI if it does not exist:
helm repo add ibm-helm https://raw.githubusercontent.com/IBM/charts/master/repo/ibm-helm
- List the ibm-cdws Helm charts available in the repository:
helm search repo -l ibm-cdws
- Download the latest Helm chart. At this stage, ensure that the Helm chart configuration references the entitled registry secret so that the required container image for the IBM Connect:Direct Web Services chart can be pulled during deployment. Both the Helm chart and the entitled registry secret must be present on the system where the deployment is performed.
helm pull ibm-helm/ibm-cdws
Offline (Airgap) Cluster
You have a Kubernetes or OpenShift cluster, but it is a private cluster, which means it does not have internet access. Depending on the cluster type, follow the procedures below to obtain the installation files.
For Kubernetes Cluster
- Get a RHEL machine that has internet access.
- Download the Helm chart by following the steps mentioned in the Online installation section.
- Extract the downloaded Helm chart:
tar -zxf <ibm-cdws-helm chart-name>
- Get the container image details:
erRepo=$(grep -w "repository:" ibm-cdws/values.yaml | cut -d '"' -f 2)
erTag=$(grep -w "tag:" ibm-cdws/values.yaml | cut -d '"' -f 2)
erImgTag=$erRepo:$erTag
- This step is optional if you already have a Docker registry running on this machine. Otherwise, create a Docker registry on this machine by following Setting up your registry server.
- Get the entitled registry entitlement key by following steps a and b explained in the Online Cluster section under Create the entitled registry secret.
- Download the container image into the Docker registry:
docker login "$ENTITLED_REGISTRY" -u "$ENTITLED_REGISTRY_USER" -p "$ENTITLED_REGISTRY_KEY"
docker pull $erImgTag
Note: Skip steps 8, 9, and 10 if the cluster where the deployment will be performed is accessible from this machine and the cluster can fetch container images from the registry running on this machine.
- Save the container image:
docker save -o <container image file name.tar> $erImgTag
- Copy or transfer the installation files to your cluster. At this point, you have both the downloaded container image and the Helm chart for IBM Connect:Direct Web Services. Transfer these two files to a machine from which you can access your cluster and its registry.
- After transferring the files, load the container image into your registry:
docker load -i <container image file name.tar>
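After the image is loaded, it typically must be re-tagged for your local registry and pushed so that the cluster can pull it; a sketch, assuming a registry reachable at localhost:5000 and an illustrative image name:
docker tag $erImgTag localhost:5000/ibm-cdws:<tag>
docker push localhost:5000/ibm-cdws:<tag>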
For Red Hat OpenShift Cluster
If your cluster is not connected to the internet, the deployment can be done in your cluster via connected or disconnected mirroring.
If you have a host that can access both the internet and your mirror registry, but not your cluster nodes, you can directly mirror the content from that machine. This process is referred to as connected mirroring. If you have no such host, you must mirror the images to a file system and then bring that host or removable media into your restricted environment. This process is referred to as disconnected mirroring.
Before you begin
Prerequisites
- Red Hat® OpenShift® Container Platform requires you to have cluster admin access to run the deployment.
- A Red Hat® OpenShift® Container Platform cluster must be installed.
Prepare a host
If you are in an air-gapped environment, you must be able to connect a host to the internet and your mirror registry for connected mirroring, or mirror images to a file system that can be brought into a restricted environment for disconnected mirroring. For information on the latest supported operating systems, see the ibm-pak plugin install documentation.
| Software | Purpose |
|---|---|
| Docker | Container management |
| Podman | Container management |
| Red Hat OpenShift CLI (oc) | Red Hat OpenShift Container Platform administration |
- Install Docker or Podman. To install Docker (for example, on Red Hat® Enterprise Linux®), run the following commands:
Note: If you are installing as a non-root user, you must use sudo. For more information, refer to the Podman or Docker documentation for installing as a non-root user.
yum check-update
yum install docker
To install Podman, see Podman Installation Instructions.
- Install the oc Red Hat® OpenShift® Container Platform CLI tool.
- Download and install the most recent version of IBM Catalog Management Plug-in for IBM Cloud Paks from IBM/ibm-pak. Extract the binary file by entering the following command:
tar -xf oc-ibm_pak-linux-amd64.tar.gz
Run the following command to move the file to the /usr/local/bin directory:
Note: If you are installing as a non-root user, you must use sudo. For more information, refer to the Podman or Docker documentation for installing as a non-root user.
mv oc-ibm_pak-linux-amd64 /usr/local/bin/oc-ibm_pak
Note: Download the plug-in based on the host operating system. You can confirm that oc ibm-pak is installed by running the following command:
oc ibm-pak --help
The plug-in usage is displayed.
For more information on plug-in commands, see command-help.
Your host is now configured and you are ready to mirror your images.
Creating registry namespaces
Top‑level namespaces are namespaces that appear at the root path of a private registry.
For example, if your registry is hosted at:
myregistry.com:5000
then mynamespace in:
myregistry.com:5000/mynamespace
is a top-level namespace. You can have multiple top-level namespaces.
When images are mirrored to your private registry, the top-level namespace where the images are mirrored must already exist or be automatically created during the image push.
If your registry does not allow automatic creation of top-level namespaces, you must create them manually.
Specify a namespace during mirror manifest generation
When generating mirror manifests, you can specify the top-level namespace by setting:
TARGET_REGISTRY=myregistry.com:5000/mynamespace
This approach requires creating only one namespace (mynamespace) in your
registry if automatic namespace creation is not supported.
You can also provide top-level namespaces in the final registry using the
--final-registry option.
If you do not specify a namespace
If you do not specify your own top-level namespace, the mirroring process uses the namespaces defined by the CASE files.
For example, it will try to mirror images to:
myregistry.com:5000/cp
Manual namespace creation
If your registry does not allow automatic creation of top-level namespaces and you do not specify your own namespace during mirror manifest generation, you must create the following namespace at the root of your registry:
cp
There may be additional top-level namespaces you need to create.
See Generate mirror manifests for details on using the
oc ibm-pak describe command to list all required top-level
namespaces.
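For reference, the command takes the following form (the CASE name and version environment variables are set in the next section):
oc ibm-pak describe $CASE_NAME --version $CASE_VERSION --list-mirror-images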
Set Environment Variables and Download CASE Files
If your host must connect to the internet via a proxy, you must set environment variables on the machine that accesses the internet via the proxy server.
export https_proxy=http://proxy-server-hostname:port
export http_proxy=http://proxy-server-hostname:port
# Example:
export https_proxy=http://server.proxy.xyz.com:5018
export http_proxy=http://server.proxy.xyz.com:5018
- Create the following environment variables with the installer image name and the version:
export CASE_NAME=ibm-cdws
export CASE_VERSION=<case version>
To find the CASE name and version, see IBM: Product CASE to Application Version.
- Connect your host to the internet.
- The plug-in can detect the locale of your environment and provide help text and messages accordingly. You can optionally set the locale by running the following command:
oc ibm-pak config locale -l LOCALE
where LOCALE can be one of de_DE, en_US, es_ES, fr_FR, it_IT, ja_JP, ko_KR, pt_BR, zh_Hans, zh_Hant.
- Configure the plug-in to download CASEs as OCI artifacts from IBM Cloud Container Registry (ICCR):
oc ibm-pak config repo 'IBM Cloud-Pak OCI registry' -r oci:cp.icr.io/cpopen --enable
- Enable color output (optional with v1.4.0 and later):
oc ibm-pak config color --enable true
- Download the image inventory for your IBM Cloud Pak to your host.
Tip: If you do not specify the CASE version, the latest CASE is downloaded.
oc ibm-pak get \
  $CASE_NAME \
  --version $CASE_VERSION
By default, the root directory used by plug-in is ~/.ibm-pak. This means
that the preceding command will download the CASE under
~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION. You can configure this root
directory by setting the IBMPAK_HOME environment variable. Assuming
IBMPAK_HOME is set, the preceding command will download the CASE under
$IBMPAK_HOME/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION.
The log files will be available at $IBMPAK_HOME/.ibm-pak/logs/oc-ibm_pak.log.
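For example, to relocate the plug-in root directory (the path shown is illustrative):
export IBMPAK_HOME=/opt/ibm-pak-home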
Your host is now configured and you are ready to mirror your images.
Mirroring images to your private container registry
The process of mirroring images takes the image from the internet to your host, then effectively copies that image to your private container registry. After you mirror your images, you can configure your cluster and complete air-gapped installation.
- Generate mirror manifests
- Authenticating the registry
- Mirror images to final location
- Configure the cluster
- Install IBM Cloud® Paks by way of Red Hat OpenShift Container Platform
Generate mirror manifests
- If you want to install subsequent updates to your air-gapped environment, you must do a CASE get to get the image list when performing those updates. A registry namespace suffix can optionally be specified on the target registry to group mirrored images.
- Define the environment variable $TARGET_REGISTRY by running the following command:
export TARGET_REGISTRY=<target-registry>
The <target-registry> refers to the registry (hostname and port) where your images will be mirrored to and accessed by the oc cluster. For example, setting TARGET_REGISTRY to myregistry.com:5000/mynamespace will create manifests such that images will be mirrored to the top-level namespace mynamespace.
- Run the following commands to generate mirror manifests to be used when mirroring from a bastion host (connected mirroring). Example:
oc ibm-pak generate mirror-manifests \
  $CASE_NAME \
  $TARGET_REGISTRY \
  --version $CASE_VERSION
The ~/.ibm-pak directory structure is built over time as you save CASEs and mirror. The following tree shows an example of the ~/.ibm-pak directory structure for connected mirroring:
tree ~/.ibm-pak
/root/.ibm-pak
├── config
│   └── config.yaml
├── data
│   ├── cases
│   │   └── YOUR-CASE-NAME
│   │       └── YOUR-CASE-VERSION
│   │           ├── XXXXX
│   │           ├── XXXXX
│   └── mirror
│       └── YOUR-CASE-NAME
│           └── YOUR-CASE-VERSION
│               ├── catalog-sources.yaml
│               ├── image-content-source-policy.yaml
│               └── images-mapping.txt
└── logs
    └── oc-ibm_pak.log
Note: A new directory ~/.ibm-pak/mirror is created when you issue the oc ibm-pak generate mirror-manifests command. This directory holds the image-content-source-policy.yaml, images-mapping.txt, and catalog-sources.yaml files.
Tip: If you are using a Red Hat® Quay.io registry and need to mirror images to a specific organization in the registry, you can target that organization by specifying:
export ORGANIZATION=<your-organization>
oc ibm-pak generate mirror-manifests $CASE_NAME $TARGET_REGISTRY/$ORGANIZATION --version $CASE_VERSION
If you want to mirror images to an intermediate registry server followed by a final registry server, generate the mirror manifests with the --final-registry option:
oc ibm-pak generate mirror-manifests \
  $CASE_NAME \
  $INTERMEDIATE_REGISTRY \
  --version $CASE_VERSION \
  --final-registry $FINAL_REGISTRY
In this case, in place of a single mapping file (images-mapping.txt), two mapping files are created:
- images-mapping-to-registry.txt
- images-mapping-from-registry.txt
- Run the following commands to generate mirror manifests to be used when mirroring from a file system (disconnected mirroring). Example:
oc ibm-pak generate mirror-manifests \
  $CASE_NAME \
  file://local \
  --final-registry $TARGET_REGISTRY
The following tree shows an example of the ~/.ibm-pak directory structure for disconnected mirroring:
tree ~/.ibm-pak
/root/.ibm-pak
├── config
│   └── config.yaml
├── data
│   ├── cases
│   │   └── ibm-cp-common-services
│   │       └── 1.9.0
│   │           ├── XXXX
│   │           ├── XXXX
│   └── mirror
│       └── ibm-cp-common-services
│           └── 1.9.0
│               ├── catalog-sources.yaml
│               ├── image-content-source-policy.yaml
│               ├── images-mapping-to-filesystem.txt
│               └── images-mapping-from-filesystem.txt
└── logs
    └── oc-ibm_pak.log
Note: A new directory ~/.ibm-pak/mirror is created when you issue the oc ibm-pak generate mirror-manifests command. This directory holds the image-content-source-policy.yaml, images-mapping-to-filesystem.txt, images-mapping-from-filesystem.txt, and catalog-sources.yaml files.
Note: --filter argument and image grouping. The --filter argument provides the ability to customize which images are mirrored during an air-gapped installation. As an example of this functionality, the ibm-cloud-native-postgresql CASE can be used, which contains groups that allow mirroring a specific variant of ibm-cloud-native-postgresql (Standard or Enterprise). Use the --filter argument to target a variant of ibm-cloud-native-postgresql to mirror rather than the entire library. The filtering can be applied for groups and architectures. Consider the following command:
oc ibm-pak generate mirror-manifests \
  ibm-cloud-native-postgresql \
  file://local \
  --final-registry $TARGET_REGISTRY \
  --filter $GROUPS
The command includes a --filter argument. For example, for $GROUPS equal to ibmEdbStandard, the mirror manifests will be generated only for the images associated with ibm-cloud-native-postgresql in its Standard variant. The resulting image group consists of images in the ibm-cloud-native-postgresql image group as well as any images that are not associated with any groups. This allows products to include common images while reducing the number of images that you need to mirror.
To list all the images that will be mirrored and the registries involved, run:
oc ibm-pak describe $CASE_NAME --version $CASE_VERSION --list-mirror-images
The output contains the following sections:
- Mirroring Details from Source to Target Registry
- Mirroring Details from Target to Final Registry
A connected mirroring path that does not involve an intermediate registry will only have the first section.
Note down the Registries found subsections in the preceding command output. You will need to authenticate against those registries so that the images can be pulled and mirrored to your local registry. See the next steps on authentication. The Top level namespaces found section shows the list of namespaces under which the images will be mirrored. These namespaces should be created manually in the root path of your registry (the registry that appears in the Destination column of the command output) if your registry does not allow automatic creation of namespaces.
Authenticating the registry
Complete the following steps to authenticate your registries:
- Store authentication credentials for all source Docker registries.
Your product might require one or more authenticated registries. The following registries require authentication:
cp.icr.ioregistry.redhat.ioregistry.access.redhat.com
You must run the following command to configure credentials for all target registries that require authentication. Run the command separately for each registry:
Note: The export REGISTRY_AUTH_FILE command only needs to be run once.
export REGISTRY_AUTH_FILE=<path to the file which will store the auth credentials generated on podman login>
podman login <TARGET_REGISTRY>
Important: When you log in to cp.icr.io, you must specify the user as cp and the password, which is your Entitlement key from the IBM Cloud Container Registry. For example:
podman login cp.icr.io
Username: cp
Password:
Login Succeeded!
For example, if you export REGISTRY_AUTH_FILE=~/.ibm-pak/auth.json, then after performing podman login, you can see that the file is populated with registry credentials.
If you use docker login, the authentication file is typically located at $HOME/.docker/config.json on Linux or %USERPROFILE%/.docker/config.json on Windows. After docker login, you should export REGISTRY_AUTH_FILE to point to that location. For example, on Linux you can issue the following command:
export REGISTRY_AUTH_FILE=$HOME/.docker/config.json
The following table describes the directories used by the plug-in:
| Directory | Description |
|---|---|
| ~/.ibm-pak/config | Stores the default configuration of the plug-in and has information about the public GitHub URL from where the CASEs are downloaded. |
| ~/.ibm-pak/data/cases | This directory stores the CASE files when they are downloaded by issuing the oc ibm-pak get command. |
| ~/.ibm-pak/data/mirror | This directory stores the image-mapping files, the ImageContentSourcePolicy manifest in image-content-source-policy.yaml, and the CatalogSource manifest in one or more catalog-sourcesXXX.yaml files. The files images-mapping-to-filesystem.txt and images-mapping-from-filesystem.txt are input to the oc image mirror command, which copies the images to the file system and from the file system to the registry, respectively. |
| ~/.ibm-pak/data/logs | This directory contains the oc-ibm_pak.log file, which captures all the logs generated by the plug-in. |
Mirror images to final location
Complete the steps in this section on your host that is connected to both the local Docker registry and the Red Hat® OpenShift® Container Platform cluster.
- Mirror images to the final location.
- For mirroring from a bastion host (connected mirroring): Mirror images to the TARGET_REGISTRY:
oc image mirror \
  -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping.txt \
  --filter-by-os '.*' \
  -a $REGISTRY_AUTH_FILE \
  --insecure \
  --skip-multiple-scopes \
  --max-per-registry=1 \
  --continue-on-error=true
If you generated manifests in the previous steps to mirror images to an intermediate registry server followed by a final registry server, run the following commands:
- Mirror images to the intermediate registry server:
oc image mirror \
  -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-to-registry.txt \
  --filter-by-os '.*' \
  -a $REGISTRY_AUTH_FILE \
  --insecure \
  --skip-multiple-scopes \
  --max-per-registry=1 \
  --continue-on-error=true
- Mirror images from the intermediate registry server to the final registry server:
oc image mirror \
  -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-from-registry.txt \
  --filter-by-os '.*' \
  -a $REGISTRY_AUTH_FILE \
  --insecure \
  --skip-multiple-scopes \
  --max-per-registry=1 \
  --continue-on-error=true
The oc image mirror --help command can be run to see all the options available on the mirror command. Note that continue-on-error is used to indicate that the command should try to mirror as much as possible and continue on errors.
oc image mirror --help
Note: Depending on the number and size of images to be mirrored, the oc image mirror command might take a long time. If you are issuing the command on a remote machine, it is recommended that you run the command in the background with nohup, so that the mirroring continues even if the network connection to your remote machine is lost or you close the terminal. For example, the following command starts the mirroring process in the background and writes the log to my-mirror-progress.txt:
nohup oc image mirror \
  -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping.txt \
  -a $REGISTRY_AUTH_FILE \
  --filter-by-os '.*' \
  --insecure \
  --skip-multiple-scopes \
  --max-per-registry=1 \
  --continue-on-error=true > my-mirror-progress.txt 2>&1 &
You can view the progress of the mirror by issuing the following command on the remote machine:
tail -f my-mirror-progress.txt
- For mirroring from a file system (disconnected mirroring): Mirror images to your file system:
export IMAGE_PATH=<image-path>
oc image mirror \
  -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-to-filesystem.txt \
  --filter-by-os '.*' \
  -a $REGISTRY_AUTH_FILE \
  --insecure \
  --skip-multiple-scopes \
  --max-per-registry=1 \
  --continue-on-error=true \
  --dir "$IMAGE_PATH"
The <image-path> refers to the local path where the images are stored. For example, if you provided file://local as input during generate mirror-manifests, the preceding command creates a subdirectory v2/local inside the directory referred to by <image-path> and copies the images under it.
The following command can be used to see all the options available on the mirror command. Note that continue-on-error is used to indicate that the command should try to mirror as much as possible and continue on errors.
oc image mirror --help
Note: Depending on the number and size of images to be mirrored, the oc image mirror command might take a long time. If you are issuing the command on a remote machine, it is recommended that you run the command in the background with nohup, so that the mirroring continues even if you lose the network connection to your remote machine or you close the terminal. For example, the following command starts the mirroring process in the background and writes the log to my-mirror-progress.txt:
export IMAGE_PATH=<image-path>
nohup oc image mirror \
  -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-to-filesystem.txt \
  --filter-by-os '.*' \
  -a $REGISTRY_AUTH_FILE \
  --insecure \
  --skip-multiple-scopes \
  --max-per-registry=1 \
  --continue-on-error=true \
  --dir "$IMAGE_PATH" > my-mirror-progress.txt 2>&1 &
You can view the progress of the mirror by issuing the following command on the remote machine:
tail -f my-mirror-progress.txt
- For disconnected mirroring only: Continue to move the following items to your file system:
  - The <image-path> directory you specified in the previous step
  - The auth file referred to by $REGISTRY_AUTH_FILE
  - ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-from-filesystem.txt
- For disconnected mirroring only: Mirror images to the target registry from the file system.
Complete the steps in this section on the host that holds your file system, to copy the images from the file system to the $TARGET_REGISTRY. Your file system must be connected to the target Docker registry.
Important: If you used the placeholder value of TARGET_REGISTRY as a parameter to --final-registry at the time of generating mirror manifests, then before running the following command, find and replace the placeholder value of TARGET_REGISTRY in the file images-mapping-from-filesystem.txt with the actual registry where you want to mirror the images. For example, if you want to mirror images to myregistry.com/mynamespace, then replace TARGET_REGISTRY with myregistry.com/mynamespace.
  - Run the following command to copy the images (referred to in the images-mapping-from-filesystem.txt file) from the directory referred to by <image-path> to the final target registry:
export IMAGE_PATH=<image-path>
oc image mirror \
  -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-from-filesystem.txt \
  -a $REGISTRY_AUTH_FILE \
  --from-dir "$IMAGE_PATH" \
  --filter-by-os '.*' \
  --insecure \
  --skip-multiple-scopes \
  --max-per-registry=1 \
  --continue-on-error=true
Configure the cluster
- Update the global image pull secret for your Red Hat OpenShift cluster. Follow the steps in Updating the global cluster pull secret. The documented steps enable your cluster to have the proper authentication credentials in place to pull images from your TARGET_REGISTRY, as specified in the image-content-source-policy.yaml that you will apply to your cluster in the next step.
- Create the ImageContentSourcePolicy.
Important:
  - Before you run the command in this step, you must be logged in to your OpenShift cluster. Using the oc login command, log in to the Red Hat OpenShift Container Platform cluster where your final location resides. You can identify your specific oc login by clicking the user drop-down menu in the Red Hat OpenShift Container Platform console, then clicking Copy Login Command.
  - If you used the placeholder value of TARGET_REGISTRY as a parameter to --final-registry at the time of generating mirror manifests, then before running the following command, find and replace the placeholder value of TARGET_REGISTRY in the file ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/image-content-source-policy.yaml with the actual registry where you want to mirror the images. For example, replace TARGET_REGISTRY with myregistry.com/mynamespace.
Run the following command to create the ImageContentSourcePolicy:
oc apply -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/image-content-source-policy.yaml
If you are using Red Hat OpenShift Container Platform version 4.7 or earlier, this step might cause your cluster nodes to drain and restart sequentially to apply the configuration changes.
- Verify that the ImageContentSourcePolicy resource is created:
oc get imageContentSourcePolicy
- Verify your cluster node status and wait for all the nodes to be restarted before proceeding:
oc get MachineConfigPool -w
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-53bda7041038b8007b038c08014626dc   True      False      False      3              3                   3                     0                      10d
worker   rendered-worker-b54afa4063414a9038958c766e8109f7   True      False      False      3              3                   3                     0                      10d
After the ImageContentSourcePolicy and global image pull secret are applied, the configuration of your nodes will be updated sequentially. Wait until all MachineConfigPools are in the UPDATED=True status before proceeding.
- Go to the project where the deployment has to be done. Note: You must be logged in to a cluster before performing the following steps.
export NAMESPACE=<YOUR_NAMESPACE>
oc new-project $NAMESPACE
- Optional: If you use an insecure registry, you must add the target registry to the cluster insecureRegistries list:
oc patch image.config.openshift.io/cluster --type=merge \
  -p '{"spec":{"registrySources":{"insecureRegistries":["'${TARGET_REGISTRY}'"]}}}'
- Verify your cluster node status and wait for all the nodes to be restarted before proceeding:
oc get MachineConfigPool -w
After the ImageContentSourcePolicy and global image pull secret are applied, the configuration of your nodes will be updated sequentially. Wait until all MachineConfigPools are updated. At this point, your cluster is ready for the IBM Connect:Direct Web Services deployment. The Helm chart is present in the ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION/charts/ibm-cdws-1.0.x.tgz directory. Use it for deployment by copying it into the current directory:
cp ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION/charts/ibm-cdws-1.0.x.tgz .
Note: Replace the version information in the above command as appropriate.
- Configuration required in the Helm chart: To use image mirroring in an OpenShift cluster, the Helm chart should be configured to use the digest value to refer to the container image. Set image.digest.enabled to true in the values.yaml file or pass this parameter using the Helm CLI, as shown in the example below.
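For example, the digest setting can be passed on the Helm CLI at install time (the release name shown is illustrative):
helm install my-release --set image.digest.enabled=true ibm-cdws-1.0.x.tgz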
Setting up a repeatable mirroring process
Once you complete a CASE save, you can mirror the CASE as
many times as you want to. This approach allows you to mirror a specific version of the IBM
Cloud Pak into development, test, and production stages using a private container
registry.
Follow the steps in this section if you want to save the CASE to multiple
registries (per environment) once and be able to run the CASE in the future
without repeating the CASE save process.
- Run the following command to save the CASE to ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION, which can be used as an input during the mirror manifest generation:
oc ibm-pak get \
  $CASE_NAME \
  --version $CASE_VERSION
- Run the oc ibm-pak generate mirror-manifests command to generate the images-mapping.txt:
oc ibm-pak generate mirror-manifests \
  $CASE_NAME \
  $TARGET_REGISTRY \
  --version $CASE_VERSION
Then add the images-mapping.txt to the oc image mirror command:
oc image mirror \
  -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping.txt \
  --filter-by-os '.*' \
  -a $REGISTRY_AUTH_FILE \
  --insecure \
  --skip-multiple-scopes \
  --max-per-registry=1 \
  --continue-on-error=true
If you want to make this repeatable across environments, you can reuse the same saved CASE cache (~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION) instead of executing a CASE save again in other environments. You do not have to worry about updated versions of dependencies being brought into the saved cache.
Applying Pod Security Standard for Kubernetes Cluster
The Pod Security Standard should be applied to the Kubernetes namespace. This Helm chart has been certified with the baseline Pod Security Standard at the enforce level. For more details, refer to Pod Security Standards.
SecurityContextConstraints (SCC) for IBM Sterling Connect:Direct Web Services Container
The IBM Connect:Direct Web Services Helm chart requires a SecurityContextConstraints (SCC) object to be bound to the target namespace prior to deployment.
- This chart supports the restricted SCC. For more details, refer to https://docs.openshift.com/container-platform/4.19/authentication/managing-security-context-constraints.html.
Configure UID and GID ranges for OpenShift
Configure the following storage security settings in values.yaml:
storageSecurity:
  fsGroup: 45678
  supplementalGroups: [65534]
  runAsUser: 45678
  runAsGroup: 45678
This range (40000–49999) covers the user and group IDs used in the chart.
To check the UID and GID ranges assigned to the target namespace, run:
oc describe ns <namespace_name>
Ensure that the UID and GID ranges include the runAsUser, runAsGroup, and fsGroup values from values.yaml.
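In the oc describe ns output, the assigned ranges appear as annotations similar to the following illustrative excerpt, where 40000/10000 denotes a range starting at 40000 with 10000 IDs:
openshift.io/sa.scc.supplemental-groups: 40000/10000
openshift.io/sa.scc.uid-range: 40000/10000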
Creating storage for Data Persistence
For more information about persistent volumes, see:
- Kubernetes - Persistent Volumes
- Red Hat OpenShift - Persistent Volume Overview
The chart supports the following storage options:
- Dynamic Provisioning using storage classes
- Pre-created Persistent Volume
- Pre-created Persistent Volume Claim
Note: The only supported access mode is `ReadWriteOnce`.
Dynamic Provisioning
- persistence.useDynamicProvisioning - Must be set to true. By default, it is set to false, which means dynamic provisioning is disabled.
- pvClaim.storageClassName - The storage class is blank by default. Update this parameter with a valid storage class. Consult your cluster administrator for the storage classes available to this chart.
Non-Dynamic Provisioning
Non-Dynamic Provisioning is supported using pre-created Persistent Volume and pre-created Persistent Volume Claim.
Using a pre-created Persistent Volume - When creating the Persistent Volume, make a note of the storage class and metadata labels that are required to configure the Persistent Volume Claim's storage class and label selector during deployment. This ensures that the claims are bound to the Persistent Volume based on label match. These labels can be passed to the Helm chart either by the --set flag or a custom values.yaml file. The parameters defined in values.yaml for the label name and its value are pvClaim.selector.label and pvClaim.selector.value, respectively.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: <persistent volume name>
  labels:
    app.kubernetes.io/name: <persistent volume name>
    app.kubernetes.io/instance: <release name>
    app.kubernetes.io/managed-by: <service name>
    helm.sh/chart: <chart name>
    release: <release name>
    purpose: cdwsconfig
spec:
  storageClassName: <storage classname>
  capacity:
    storage: <storage size>
  accessModes:
    - ReadWriteOnce
  nfs:
    server: <NFS server IP address>
    path: <mount path>
Create the Persistent Volume:
kubectl create -f <persistentVolume yaml file>
oc create -f <persistentVolume yaml file>
Using a pre-created Persistent Volume Claim (PVC) - An existing PVC can also be used for deployment. The parameter for a pre-created PVC is pvClaim.existingClaimName. Pass a valid PVC name to this parameter; otherwise, the deployment fails.
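For example, to bind the claim to the Persistent Volume shown above by label match, the pvClaim parameters in values.yaml could be set as follows (the values mirror the sample PV labels):
pvClaim:
  storageClassName: <storage classname>
  selector:
    label: "purpose"
    value: "cdwsconfig"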
Apart from the required Persistent Volume, you can bind extra storage mounts using the parameters provided in values.yaml. Configure the parameters under persistentVolumeExtra for this purpose.
The following directories are persisted:
- INSTALLATION_DIR/JSONFileSystem
- INSTALLATION_DIR/RestLogs
- INSTALLATION_DIR/mftws/BOOT-INF/classes
In the INSTALLATION_DIR/mftws/BOOT-INF/classes directory, only the following required files are persisted: application.properties, .hiddenFile, ssl-server.jks, trustedkeystore.jks, and log4j2.yaml.
Setting permission on storage
- Option A: The easiest, though undesirable, solution is to set open permissions on the NFS exported directories:
chmod -R 777 <path-to-directory>
- Option B: Alternatively, the permissions can be controlled at the group level by leveraging the supplementalGroups and fsGroup settings. For example, if you want to add a GID to supplementalGroups or fsGroup, it can be done using storageSecurity.supplementalGroups or storageSecurity.fsGroup.
Root Squash NFS support
For root squash NFS, read/write permission should be granted to the container user through the supplemental groups configured in the values.yaml file. Similarly, if an extra NFS share is mounted, proper read/write permission can be provided to the container user using supplemental groups only.
Creating secret
Passwords are used for the KeyStore, TrustStore, and CA-signed key certificate by the Administrator to connect to Connect:Direct Web Services.
To separate application secrets from the Helm release, a Kubernetes secret must be created based on the examples given below and referenced in the Helm chart through the secret.secretName value.
- Create a template file with the Secret defined as described in the example below:
apiVersion: v1
kind: Secret
metadata:
  name: <secret name>
type: Opaque
data:
  trustStorePassword: <base64 encoded password>
  keyStorePassword: <base64 encoded password>
  caCertPassword: <base64 encoded password>
Here:
- trustStorePassword refers to the Trust Store password.
- keyStorePassword refers to the Key Store password.
- caCertPassword refers to the CA-signed certificate password. This parameter is required when the user wants to configure a CA-signed key certificate in Web Services.
After the secret is created, delete the yaml file for security reasons.
Note: Base64 encoded passwords need to be generated manually by invoking the command below. Use the output of this command in the <secret yaml file>:
echo -n "<your desired password>" | base64
- Run the following command to create the Secret:
Kubernetes:
kubectl create -f <secret yaml file>
OpenShift:
oc create -f <secret yaml file>
To check the secret created, invoke the following command:
kubectl get secrets
For more details, see Secrets.
Default Kubernetes secrets management has certain security risks as documented here, Kubernetes Security.
Users should evaluate Kubernetes secrets management based on their enterprise policy requirements and should take steps to harden security.
- Secrets need to be created to configure the desired CA-signed key certificate and trusted certificate. They can be created using the examples below, as required.
Kubernetes:
kubectl create secret generic cdws-ca-cert-secret --from-file=/path/to/certificate_file1
kubectl create secret generic cdws-trust-cert-secret --from-file=/path/to/certificate_file2
OpenShift:
oc create secret generic cdws-ca-cert-secret --from-file=/path/to/certificate_file1
oc create secret generic cdws-trust-cert-secret --from-file=/path/to/certificate_file2
Note: Ensure that the CA-signed key certificate contains the complete certificate chain.
Configuring
Understanding values.yaml
The following table describes configuration parameters listed in the
values.yaml file in Helm charts used to complete the installation.
| Parameter | Description | Default |
|---|---|---|
| affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution. Refer section "Affinity" | |
| affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution. Refer section "Affinity" | |
| affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAffinity.preferredDuringSchedulingIgnoredDuringExecution. Refer section "Affinity" | |
| affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAffinity.requiredDuringSchedulingIgnoredDuringExecution. Refer section "Affinity" | |
| affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution. Refer section "Affinity" | |
| affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution. Refer section "Affinity" | |
| arch | Node Architecture | amd64 |
| autoscaling.enabled | Autoscaling is enabled or not | true |
| autoscaling.maxReplicas | Maximum pod replica | 2 |
| autoscaling.minReplicas | Minimum pod replica | 1 |
| autoscaling.targetCPUUtilizationPercentage | Target CPU Utilization | 70 |
| autoscaling.targetMemoryUtilizationPercentage | Target Memory Utilization | 70 |
| cdwsParams.certificateExpiryTime | Self-signed certificate - Enter the certificate expiration time in days | |
| cdwsParams.certificateLabel | Certificate label for CA-signed Certificate/Self-signed certificate | |
| cdwsParams.commonName | Self-signed certificate - Identifies the host name associated with the certificate | |
| cdwsParams.country | Self-signed certificate - The two-letter ISO code for the country where your organization is located. | |
| cdwsParams.dnsName | Self-signed certificate - Identifies the domain name associated with the certificate. | |
| cdwsParams.emailId | Self-signed certificate - An email address used to contact your organization. | |
| cdwsParams.ipAddress | Self-signed certificate - Identifies the IP Address associated with the certificate. | |
| cdwsParams.locality | Self-signed certificate - The city where your organization is located. | |
| cdwsParams.organization | Self-signed certificate - The legal name of your organization. Should not be abbreviated and should include suffixes (Inc, Corp, LLC). | |
| cdwsParams.state | Self-signed certificate - The state/region where your organization is located. | |
| cdwsParams.restOnly | Flag to deploy Web Services with REST APIs only | |
| dashboard.enabled | Enable the monitoring dashboard | |
| defaultPodDisruptionBudget | Minimum replicas required for pod disruption budget | enabled: false, minAvailable: 1 |
| hostAliases.enabled | Enable hostname and IP mapping for DNS resolution | false |
| hostAliases.hostEntries | For providing IP and hostname mapping | [] |
| image.digest.enabled | Use the image digest instead of the tag to refer to the container image | |
| image.imageSecrets | Image pull secrets | |
| image.pullPolicy | Image pull policy | IfNotPresent |
| image.repository | Image full name including repository | |
| image.tag | Image tag | |
| ingress.annotations | Annotation for ingress resource | [] |
| ingress.controller | Ingress controller name | |
| ingress.enabled | Flag to enable or disable ingress | false |
| ingress.host | Ingress hostname | |
| ingress.tls.enabled | TLS is enabled or disabled for ingress resource | false |
| ingress.tls.secretName | TLS secret name if enabled | |
| initResources.limits.cpu | Init Container CPU limit | 500m |
| initResources.limits.memory | Init Container memory limit | 1Gi |
| initResources.requests.cpu | Init Container CPU requested | 250m |
| initResources.requests.memory | Init Container Memory requested | 1Gi |
| license | License agreement. Set true to accept the license. | false |
| licenseType | Specify prod or non-prod for production or non-production license type respectively | prod |
| livenessProbe.initialDelaySeconds | Initial delays for liveness | 15 |
| livenessProbe.periodSeconds | Time period for liveness | 15 |
| livenessProbe.timeoutSeconds | Timeout for liveness | 10 |
| networkPolicy.egress | Network Policy egress rules | {} |
| networkPolicy.ingress | Network Policy ingress rules | {} |
| persistence.enabled | To use persistent volume | true |
| persistence.useDynamicProvisioning | To use storage classes to dynamically create PV | false |
| pvClaim | Specify the existing PV claim name to be used for deployment | |
| pvClaim.accessMode | Access mode for PV Claim | ReadWriteOnce |
| pvClaim.existingClaimName | Provide name of existing PV claim to be used | |
| pvClaim.selector.label | PV label key to bind this PVC | |
| pvClaim.selector.value | PV label value to bind this PVC | |
| pvClaim.size | Size of PVC volume | 500Mi |
| pvClaim.storageClassName | Storage class of the PVC | |
| persistentVolumeExtra.accessMode | PV accessMode | ReadWriteOnce |
| persistentVolumeExtra.claimName | Already created PVC name | |
| persistentVolumeExtra.enabled | Persistent volume for user input | false |
| persistentVolumeExtra.selector.label | Label name for attaching PV | |
| persistentVolumeExtra.selector.value | Label value for attaching PV | |
| persistentVolumeExtra.size | Size of PVC volume | 100Mi |
| persistentVolumeExtra.storageClassName | Storage class of the PVC | manual |
| readinessProbe.initialDelaySeconds | Initial delays for readiness | 15 |
| readinessProbe.periodSeconds | Time period for readiness | 15 |
| readinessProbe.timeoutSeconds | Timeout for readiness | 10 |
| replicaCount | Number of deployment replicas | 1 |
| resources.limits.cpu | Container CPU limit | 1500m |
| resources.limits.memory | Container memory limit | 1Gi |
| resources.requests.cpu | Container CPU requested | 1000m |
| resources.requests.memory | Container Memory requested | 1Gi |
| route.enabled | Route for OpenShift Enabled/Disabled | false |
| secret.caCertSecretName | CA Certificate file to be imported at the time of install | |
| secret.secretName | Secret name for Secure Parameters | |
| secret.trustCertSecretName | Trusted Certificate file to be imported at the time of install | |
| secComp.profile | seccomp profile filepath | |
| secComp.type | seccomp profile type | RuntimeDefault |
| serviceAccount.create | Enable/disable service account creation | true |
| serviceAccount.name | Name of Service Account to use for container | |
| service.annotations | Add metadata to the Service object to support integration with external tools or controllers. | |
| service.allowIngressTraffic | Allowing Ingress traffic for Web Console | true |
| service.externalIP | External IP for service discovery | |
| service.externalTrafficPolicy | For passing external Traffic Policy | Local |
| service.loadBalancerIP | For passing load balancer IP | |
| service.loadBalancerSourceRanges | Load Balancer sources | [] |
| service.port | Web Console port number | 9443 |
| service.protocol | Web Console Protocol for service | TCP |
| service.sessionAffinity | Session Affinity | ClientIP |
| service.type | Kubernetes service type exposing ports | LoadBalancer |
| service.webConsoleName | Web Console name | cdws-web-console |
| storageSecurity.fsGroup | Used for controlling access to block storage | |
| storageSecurity.supplementalGroups | Groups IDs used for controlling access | 65534 |
| storageSecurity.runAsGroup | Specify the group ID under which the containerized process runs. | |
| storageSecurity.runAsUser | Run apps in a container under a nondefault user account. | 1010 |
| timeZone | This flag is used for setting TimeZone of container | Asia/Calcutta |
Use the following steps to complete this action:
Method 1: Override Parameters Directly with CLI Using --set
This approach uses the --set argument to specify each parameter that needs to be overridden at the time of installation.
Example for Helm Version 3:
helm install <release-name> \
--set service.port=9443 \
...
ibm-cdws-1.0.x.tgz
Method 2: Use a YAML File with Configured Parameters
Alternatively, specify configurable parameters in a
values.yaml file and use it during installation. This approach can be
helpful for managing multiple configurations in one place.
- To obtain the values.yaml template from the Helm chart:
  - For Online Cluster:
helm inspect values ibm-helm/ibm-cdws > my-values.yaml
  - For Offline Cluster:
helm inspect values <path-to-ibm-cdws-helm-chart> > my-values.yaml
- Edit the my-values.yaml file to include your desired configuration values and use it with the Helm installation command.
Example for Helm Version 3:
helm install <release-name> -f my-values.yaml ... ibm-cdws-1.0.x.tgz
The persistentVolumeExtra parameters in values.yaml are as follows:
persistentVolumeExtra:
  enabled: true
  claimName: ""
  # if claim name is not given and enabled is true, then the next 3 properties are required
  storageClassName: "manual"
  size: 100Mi
  accessMode: "ReadWriteOnce"
  selector:
    label: ""
    value: ""
An extra volume must be mounted when web services running inside the container require access to files by using an absolute path. For example, in the process control API, the processFile parameter requires the full file path. This requirement can be met by using an extra volume mount. If the extra volume is mounted at /opt/process, a corresponding processFile value would be: /opt/process/process_file.cdp
Affinity
The chart provides node affinity, pod affinity, and pod anti-affinity options to configure advanced pod scheduling in Kubernetes. See the Kubernetes documentation for details.
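As an illustrative sketch, a node affinity rule restricting scheduling to amd64 nodes (matching the chart's default arch) could be supplied in values.yaml as follows:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values:
                - amd64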
Network Policy Change
Out of the box Network Policies
IBM Sterling Connect:Direct Web Services Container comes with predefined network policies based on mandatory security guidelines. By default, all outbound communication is restricted, permitting only intra-cluster communication.
- Deny all Egress Traffic
- Allow Egress Traffic within the Cluster
Defining Custom Network Policy
A custom egress policy can be defined in values.yaml under the networkPolicy.egress spec. Similarly, a custom ingress policy can be defined in values.yaml under the networkPolicy.ingress spec.
The following can serve as a reference during Helm chart deployment:
ingress: {}
# - from:
#     ports:
#       - protocol: TCP
#         port: 9443 # port should be the same as defined for service.port
egress: {}
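For example, a custom egress rule permitting outbound TCP traffic to a subnet and port (both values are illustrative) could look like the following sketch:
egress:
  - to:
      - ipBlock:
          cidr: 10.1.0.0/16
    ports:
      - protocol: TCP
        port: 1363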
Pod Disruption Budget
A pod disruption budget can be configured through the following parameters in values.yaml:
defaultPodDisruptionBudget:
  enabled: false
  minAvailable: 1
For more information, refer Pod Disruption Budget.
Autoscaling
This chart provides methods to configure both horizontal and vertical scaling.
For vertical scaling, update resources.limits in values.yaml and set the values for CPU and memory according to your requirements and resource availability.
resources:
  limits:
    cpu: 3000m
    memory: 2Gi
    ephemeral-storage: "3Gi"
HorizontalPodAutoscaler is used to scale the application horizontally. To scale the application horizontally, set autoscaling.enabled to true in values.yaml:
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 2
  targetCPUUtilizationPercentage: 70
  targetMemoryUtilizationPercentage: 70
For more information, refer to Horizontal Pod Autoscaling.
Installing IBM Connect:Direct Web Services using Helm chart
helm install my-release --set license=true,image.repository=<reponame>,image.tag=<image tag>,image.imageSecrets=<image pull secret>,secret.secretName=<CDWS secret name> ibm-cdws-1.0.x.tgz
or
helm install my-release ibm-cdws-1.0.x.tgz -f my-values.yaml
This command deploys the ibm-cdws-1.0.x.tgz chart on the Kubernetes cluster using the default configuration. The section Creating storage for Data Persistence lists parameters that can be configured at deployment.
| Parameter | Description | Default Value |
|---|---|---|
| license | License agreement for IBM Certified Container Software | false |
| image.repository | Image full name including repository | |
| image.tag | Image tag | |
| image.imageSecrets | Image pull secrets | |
| secret.secretName | Secret name for Connect:Direct Web Services password store |
Validating the Installation
After the deployment procedure is complete, you should validate the deployment to ensure that everything is working according to your needs. The deployment may take approximately 4-5 minutes to complete.
- Check the Helm chart release status by invoking the following command and verify that the STATUS is DEPLOYED:
helm status my-release
- Wait for the pod to be ready. To verify the pod status (READY), use the dashboard or the command line interface by invoking the following command:
kubectl get pods -l release=my-release -n my-namespace -o wide
- To view the service and ports exposed to enable communication in a pod, invoke the following command:
kubectl get svc -l release=my-release -n my-namespace -o wide
The output displays the external IP and exposed ports under the EXTERNAL-IP and PORT(S) columns, respectively. If an external LoadBalancer is not present, refer to the Master node IP as the external IP.
Exposed Services
IBM Connect:Direct Web Services Admin and User functions can be accessed using the LoadBalancer or external IP and the mapped server port. If an external LoadBalancer is not present, refer to the Master node IP for communication.
For OpenShift, retrieve the route details:
oc get route -l release=my-release -n my-namespace -o wide
From the output of the above command, extract the HOST/PORT value and use it to access Connect:Direct Web Services (https://<HOST/PORT>/cdws-ui/index.html).
The ingress parameters in values.yaml are as follows:
ingress:
  enabled: false
  host: ""
  controller: "nginx"
  annotations: {}
  tls:
    enabled: false
    secretName: ""
To use ingress, set ingress.enabled to true and update the hostname in ingress.host, which will be used to access the web services application. Enable ingress.tls and update ingress.tls.secretName with the TLS secret name. Create a Kubernetes TLS secret using the following command:
kubectl create secret tls ibm-cdws-tls --key=<key file path> --cert=<cert file path>
To view the ingress resource, run:
kubectl get ingress -l release=my-release -n my-namespace -o wide
Extract the hostname from the HOSTS column and use it to access web services, for example: https://hostname/cdws-ui/index.html.
DIME and DARE Security Considerations
This topic provides security recommendations for setting up Data In Motion Encryption (DIME) and Data At Rest Encryption (DARE). It is intended to help you create a secure implementation of the application.
- All sensitive application data at rest is stored in binary format, so users cannot decrypt it. This chart does not support encryption of user data at rest by default. The administrator can configure storage encryption to encrypt all data at rest.
- Data in motion is encrypted using transport layer security (TLS 1.3).