
Configuring your cluster to pull Cloud Pak for Data images

To ensure that your cluster can pull Cloud Pak for Data software images, you must update your cluster configuration.

Permissions you need for this task
You must be a cluster administrator.
When you need to complete this task
You must complete this task the first time you install Cloud Pak for Data.

The tasks that you must complete depend on whether your cluster pulls images directly from the IBM® Entitled Registry or from a private container registry.

Task                                            IBM Entitled Registry   Private container registry
1. Configuring the global image pull secret     Required                Required
2. Configuring an image content source policy   Not applicable          Required
3. Creating the catalog source                  Required                Required

1. Configuring the global image pull secret

The global image pull secret ensures that your cluster has the necessary credentials to pull images.

The credentials that you need to specify depend on where you want to pull images from:

IBM Entitled Registry
If you are pulling images from the IBM Entitled Registry, the global image pull secret must contain your IBM entitlement API key.
Private container registry
If you are pulling images from a private container registry, the global image pull secret must contain the credentials of an account that can pull images from the registry.

If you have already configured the global image pull secret with the necessary credentials, you can skip this task.

Important: When you change the global image pull secret, each node in the cluster is automatically restarted so that the Machine Config Operator can apply the changes. This restart process happens one node at a time. The cluster will wait for a node to restart before starting the process on the next node. In some situations, it takes more than 30 minutes for all of the nodes to be restarted. During this process, you might notice that resources are temporarily unavailable.

If your deployment is on IBM Cloud, you must manually reload the worker nodes in your cluster for the changes to take effect.

To configure the global image pull secret:

  1. Determine whether there is an existing global image pull secret:
    oc extract secret/pull-secret -n openshift-config

    This command generates a JSON file called .dockerconfigjson in the current directory.
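To check which registries already have credentials before you edit anything, you can list the entries in the extracted file. This is a minimal sketch, assuming python3 is available on the workstation; it prints nothing if the file is missing or empty:

```shell
# List the registry entries already present in the extracted pull secret.
# .dockerconfigjson is the file created by the oc extract command above.
python3 - <<'PY'
import json, os

if os.path.exists(".dockerconfigjson") and os.path.getsize(".dockerconfigjson") > 0:
    with open(".dockerconfigjson") as f:
        cfg = json.load(f)
    for registry in sorted(cfg.get("auths", {})):
        print(registry)
PY
```

If cp.icr.io (or your private registry) already appears in the output, the secret might already contain the credentials that you need.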

  2. Take the appropriate action based on the contents of the .dockerconfigjson file:
    Pull secret status    Action
    The file is empty
    1. Set the following environment variables based on the container registry that OpenShift® is going to pull from:
      IBM Entitled Registry
      export REGISTRY_USER=cp
      export REGISTRY_PASSWORD=entitlement-key
      export REGISTRY_SERVER=cp.icr.io

      Replace entitlement-key with your entitlement key. For details, see IBM entitlement API key.

      Private container registry
      export REGISTRY_USER=username
      export REGISTRY_PASSWORD=password
      export REGISTRY_SERVER=registry-location
      Replace the following values:
      username
      The username of a user that can pull images from the private container registry
      password
      The password for the specified user.
      registry-location
      The location of the private container registry. For example, private-registry.example.com.
    2. Run the following command to create the pull secret:
      oc create secret docker-registry \
          --docker-server=${REGISTRY_SERVER} \
          --docker-username=${REGISTRY_USER} \
          --docker-password=${REGISTRY_PASSWORD} \
          --docker-email=${REGISTRY_USER} \
          -n openshift-config pull-secret
    There is an existing pull secret
    1. Encode the username and password using Base64 encoding:
      IBM Entitled Registry
      echo -n "cp:entitlement-key" | base64 -w0

      Replace entitlement-key with your entitlement key. For details, see IBM entitlement API key.

      Private container registry
      echo -n "username:password" | base64 -w0
      Replace the following values:
      username
      The username of a user that can pull images from the private container registry
      password
      The password for the specified user.
    2. Add an entry for the container registry to the auths section in the JSON file. In the following example, the registry-location entry is the new entry and the myregistry.example.com entry is an existing entry:
      {
         "auths":{
            "registry-location":{
               "auth":"base64-encoded-credentials",
               "email":"not-used"
            },
            "myregistry.example.com":{
               "auth":"b3Blbg==",
               "email":"not-used"
            }
         }
      }
      Replace the following values:
      registry-location
      If you are pulling images from the IBM Entitled Registry, the value is cp.icr.io.

      If you are pulling images from a private container registry, specify the location of the private container registry. For example, private-registry.example.com.

      base64-encoded-credentials
      The encoded credentials that you generated in the previous step. For example, cmVnX3VzZXJuYW1lOnJlZ19wYXNzd29yZA==.
    3. Apply the new configuration:
      oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=.dockerconfigjson
    Important: For deployments on IBM Cloud, you must reload the worker nodes in your cluster for the changes to take effect. For details, see Adding a private registry to the global pull secret.

    If you have a VPC Gen2 cluster and you use Portworx storage, see Portworx storage limitations before you reload your worker nodes.
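Instead of editing the JSON by hand, the merge in step 2 can be scripted. The following is a sketch, assuming python3 is available on the workstation; the REGISTRY_* values shown are examples to replace with your own, and the final oc command (shown as a comment) applies the result exactly as in the step above:

```shell
# Example values for illustration; replace with your registry and credentials.
export REGISTRY_SERVER=cp.icr.io
export REGISTRY_USER=cp
export REGISTRY_PASSWORD=entitlement-key
export B64_CREDS=$(echo -n "${REGISTRY_USER}:${REGISTRY_PASSWORD}" | base64 -w0)

# Merge the new entry into .dockerconfigjson, creating the file if it is missing.
python3 - <<'PY'
import json, os

try:
    with open(".dockerconfigjson") as f:
        cfg = json.load(f)
except (FileNotFoundError, ValueError):
    cfg = {}
cfg.setdefault("auths", {})[os.environ["REGISTRY_SERVER"]] = {
    "auth": os.environ["B64_CREDS"],
    "email": "not-used",
}
with open(".dockerconfigjson", "w") as f:
    json.dump(cfg, f, indent=2)
PY

# Then apply the updated file as in step 2:
#   oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=.dockerconfigjson
```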

  3. Get the status of the nodes:
    oc get node
    Wait until all the nodes are Ready before you proceed to the next step. For example, if you see Ready,SchedulingDisabled, wait for the process to complete:
    NAME                           STATUS                     ROLES    AGE     VERSION
    master0                        Ready                      master   5h57m   v1.20.0
    master1                        Ready                      master   5h57m   v1.20.0
    master2                        Ready                      master   5h57m   v1.20.0
    worker0                        Ready,SchedulingDisabled   worker   5h48m   v1.20.0
    worker1                        Ready                      worker   5h48m   v1.20.0
    worker2                        Ready                      worker   5h48m   v1.20.0
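The wait in step 3 can be scripted. The sketch below runs the check against a captured sample of the oc get node output so that it is self-contained; on a live cluster, replace the node_status function body with the real command shown in the comment:

```shell
# Returns the node table. On a live cluster, use instead:
#   node_status() { oc get node --no-headers; }
node_status() {
  cat <<'EOF'
master0   Ready                      master   5h57m   v1.20.0
worker0   Ready,SchedulingDisabled   worker   5h48m   v1.20.0
worker1   Ready                      worker   5h48m   v1.20.0
EOF
}

# A node is still restarting if its STATUS column is anything other than "Ready".
if node_status | awk '$2 != "Ready" { exit 1 }'; then
  echo "all nodes Ready"
else
  echo "nodes still restarting"   # this sample includes SchedulingDisabled
fi
```

On a live cluster you could wrap the check in a loop, for example: until node_status | awk '$2 != "Ready" { exit 1 }'; do sleep 60; done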

2. Configuring an image content source policy

If you are pulling images directly from the IBM Entitled Registry on a connected cluster, you can skip this step.

If you mirrored images to a private container registry, you must tell your cluster where to find the software images. (For more information about how Red Hat® OpenShift Container Platform locates images in a mirrored repository, see Configuring image registry repository mirroring in the Red Hat OpenShift Container Platform documentation.)

Important: This process will temporarily disable scheduling on each node in the cluster, so you might notice that resources are temporarily unavailable. However, this process happens on one node at a time. The cluster will temporarily disable scheduling on a node, apply the configuration change, and then re-enable scheduling before starting the process on the next node.

To configure an image content source policy:

  1. Set the following environment variable to point to the location of the private container registry:
    export PRIVATE_REGISTRY=private-registry-location
  2. Create an image content source policy. The contents of the policy depend on whether you have an existing policy for IBM Cloud Pak® foundational services.
    Option: IBM Cloud Pak foundational services is already installed on the cluster
    If IBM Cloud Pak foundational services is already installed, it is likely that you already have an image content source policy for quay.io/opencloudio. Therefore, you do not need to create a mirroring policy for those images.
    cat <<EOF |oc apply -f -
    apiVersion: operator.openshift.io/v1alpha1
    kind: ImageContentSourcePolicy
    metadata:
      name: cloud-pak-for-data-mirror
    spec:
      repositoryDigestMirrors:
      - mirrors:
        - ${PRIVATE_REGISTRY}/cp
        source: cp.icr.io/cp
      - mirrors:
        - ${PRIVATE_REGISTRY}/cp/cpd
        source: cp.icr.io/cp/cpd
      - mirrors:
        - ${PRIVATE_REGISTRY}/cpopen
        source: icr.io/cpopen
    EOF
    Option: IBM Cloud Pak foundational services is not installed on the cluster
    If IBM Cloud Pak foundational services is not installed, it is unlikely that you have an image content source policy for quay.io/opencloudio, so you should create a mirroring policy for those images.
    cat <<EOF |oc apply -f -
    apiVersion: operator.openshift.io/v1alpha1
    kind: ImageContentSourcePolicy
    metadata:
      name: cloud-pak-for-data-mirror
    spec:
      repositoryDigestMirrors:
      - mirrors:
        - ${PRIVATE_REGISTRY}/opencloudio
        source: quay.io/opencloudio
      - mirrors:
        - ${PRIVATE_REGISTRY}/cp
        source: cp.icr.io/cp
      - mirrors:
        - ${PRIVATE_REGISTRY}/cp/cpd
        source: cp.icr.io/cp/cpd
      - mirrors:
        - ${PRIVATE_REGISTRY}/cpopen
        source: icr.io/cpopen
    EOF
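Because the policy is generated with a here-document, you can preview the substituted ${PRIVATE_REGISTRY} value before applying anything by rendering the document with cat instead of piping it to oc apply. A shortened sketch with an example registry location:

```shell
export PRIVATE_REGISTRY=private-registry.example.com   # example value for illustration

# Render (but do not apply) the first mirror entry to verify the substitution.
cat <<EOF
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: cloud-pak-for-data-mirror
spec:
  repositoryDigestMirrors:
  - mirrors:
    - ${PRIVATE_REGISTRY}/cp
    source: cp.icr.io/cp
EOF
```

Once the output shows the registry location you expect, run the full command with oc apply -f - as shown above.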
  3. Verify that the image content source policy was created:
    oc get imageContentSourcePolicy
  4. Get the status of the nodes:
    oc get node
    Wait until all the nodes are Ready before you proceed to the next step. For example, if you see Ready,SchedulingDisabled, wait for the process to complete:
    NAME                           STATUS                     ROLES    AGE     VERSION
    master0                        Ready                      master   5h57m   v1.20.0
    master1                        Ready                      master   5h57m   v1.20.0
    master2                        Ready                      master   5h57m   v1.20.0
    worker0                        Ready,SchedulingDisabled   worker   5h48m   v1.20.0
    worker1                        Ready                      worker   5h48m   v1.20.0
    worker2                        Ready                      worker   5h48m   v1.20.0

3. Creating the catalog source

Operator Lifecycle Manager (OLM) uses an Operator catalog to discover and install Operators and their dependencies.

A catalog source is a repository of cluster service versions (CSVs), custom resource definitions (CRDs), and packages that comprise an application. To ensure that OLM can use the Cloud Pak for Data operators to install the software, you must create the appropriate catalog sources for your environment. (For more information about these terms, see the Operator Framework glossary of common terms in the Red Hat OpenShift Container Platform documentation.)

Important: If you are using a private container registry, the steps in this task assume that you are using the latest CASE package to create the catalog source. Older CASE packages are listed in Operator and operand versions.

To create the catalog source, complete the appropriate steps for your environment:

IBM Entitled Registry

If you are pulling images from the IBM Entitled Registry, create the following catalog sources:
  1. Create the catalog source for the IBM Operator Catalog:
    This catalog source is used for:
    • IBM Cloud Pak foundational services
    • IBM Cloud Pak for Data platform operator
    • Service operators
      Important: If you are pulling images from the IBM Entitled Registry, all of the services are included in the IBM Operator Catalog. This means that you do not need to create catalog source objects for each service that you plan to install.
    1. Check whether the IBM Operator Catalog already exists on your cluster:
      oc get catalogsource -n openshift-marketplace

      Review the output to determine whether there is an entry called ibm-operator-catalog.

    2. If the IBM Operator Catalog does not exist, create it:
      cat <<EOF |oc apply -f -
      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      metadata:
        name: ibm-operator-catalog
        namespace: openshift-marketplace
      spec:
        displayName: "IBM Operator Catalog"
        publisher: IBM
        sourceType: grpc
        image: icr.io/cpopen/ibm-operator-catalog:latest
        updateStrategy:
          registryPoll:
            interval: 45m
      EOF
    3. Verify that the IBM Operator Catalog was successfully created:
      oc get catalogsource -n openshift-marketplace

      Review the output to ensure that there is an entry called ibm-operator-catalog.

  2. Create the Db2U catalog source if you plan to install one of the following services:
    • Data Virtualization
    • Db2®
    • Db2 Big SQL
    • Db2 Warehouse
    • OpenPages® (required only if you want OpenPages to automatically provision a Db2 database)
    1. Check whether the IBM Db2U Catalog already exists on your cluster:
      oc get catalogsource -n openshift-marketplace

      Review the output to determine whether there is an entry called ibm-db2uoperator-catalog.

    2. If the IBM Db2U Catalog does not exist, create it:
      cat <<EOF |oc apply -f -
      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      metadata:
        name: ibm-db2uoperator-catalog
        namespace: openshift-marketplace
      spec:
        sourceType: grpc
        image: docker.io/ibmcom/ibm-db2uoperator-catalog:latest
        imagePullPolicy: Always
        displayName: IBM Db2U Catalog
        publisher: IBM
        updateStrategy:
          registryPoll:
            interval: 45m
      EOF
    3. Verify that the IBM Db2U Catalog was successfully created:
      oc get catalogsource -n openshift-marketplace

      Review the output to ensure that there is an entry called ibm-db2uoperator-catalog.

Private container registry

The following steps assume that you have the CASE packages on your local file system from mirroring the images to your private container registry.

If you are running the commands on a different machine, you must download the necessary packages before you create the catalog source.

If you are pulling images from a private container registry, create the following catalog sources:

  1. Create the IBM Cloud Pak foundational services catalog source:

    Skip this step if a supported version of IBM Cloud Pak foundational services is already installed on your cluster.

    1. Run the following command to create the IBM Cloud Pak foundational services catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-cp-common-services-1.6.0.tgz \
        --inventory ibmCommonServiceOperatorSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
          --args "--registry ${PRIVATE_REGISTRY} --inputDir ${OFFLINEDIR} --recursive"
    2. Verify that opencloud-operators is READY:
      oc get catalogsource -n openshift-marketplace opencloud-operators \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
  2. Create the scheduling service catalog source:

    Skip this step if you are not installing the scheduling service.

    1. Run the following command to create the scheduling service catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-cpd-scheduling-1.2.2.tgz \
        --inventory schedulerSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
          --args "--inputDir ${OFFLINEDIR} --recursive"
    2. Verify that ibm-cpd-scheduling-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-cpd-scheduling-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
  3. Create the IBM Cloud Pak for Data catalog source:
    1. Run the following command to create the IBM Cloud Pak for Data catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-cp-datacore-2.0.3.tgz \
        --inventory cpdPlatformOperator \
        --namespace openshift-marketplace \
        --action install-catalog \
          --args "--inputDir ${OFFLINEDIR} --recursive"
    2. Verify that cpd-platform is READY:
      oc get catalogsource -n openshift-marketplace cpd-platform \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
  4. Create the Db2U catalog source if you plan to install one of the following services:
    • Data Virtualization
    • Db2
    • Db2 Big SQL
    • Db2 Warehouse
    • OpenPages (required only if you want OpenPages to automatically provision a Db2 database)
    1. Install the following Python software on the system where you are running the cloudctl commands:
      1. Python 2
        To install Python 2, run the following command:
        yum install -y python2
        alternatives --set python /usr/bin/python2
      2. pyyaml
        To install pyyaml, run the following command:
        pip2 install pyyaml
    2. Run the following command to create the Db2U catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-db2uoperator-4.0.3.tgz \
        --inventory db2uOperatorSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
          --args "--inputDir ${OFFLINEDIR} --recursive"
    3. Verify that ibm-db2uoperator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-db2uoperator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
  5. Create the catalog source for each service that you mirrored to the private container registry. For details, see Service catalog source.

Service catalog source

If you are using a private container registry, create the catalog source for each service that you plan to install.

Remember: The steps assume that you are using the latest CASE package to create the catalog source. Older CASE packages are listed in Operator and operand versions.
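The per-service steps that follow all use the same cloudctl pattern, so they lend themselves to scripting. The sketch below only prints the commands rather than running them, so it works without cloudctl installed; the case-archive and inventory names are copied from the steps below, and the OFFLINEDIR default is a placeholder:

```shell
OFFLINEDIR=${OFFLINEDIR:-/tmp/cpd-offline}   # placeholder default for illustration

# Print the install-catalog command for each (CASE archive, inventory) pair.
while read -r case_archive inventory; do
  echo "cloudctl case launch" \
       "--case ${OFFLINEDIR}/${case_archive}" \
       "--inventory ${inventory}" \
       "--namespace openshift-marketplace" \
       "--action install-catalog" \
       "--args \"--inputDir ${OFFLINEDIR} --recursive\""
done <<'EOF'
ibm-analyticsengine-4.0.1.tgz analyticsengineOperatorSetup
ibm-cognos-analytics-prod-4.0.3.tgz ibmCaOperatorSetup
ibm-cde-2.0.1.tgz cdeOperatorSetup
EOF
```

Review the printed commands against the individual steps below, then run them once cloudctl and the mirrored CASE packages are in place.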
    1. Run the following command to create the Analytics Engine Powered by Apache Spark catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-analyticsengine-4.0.1.tgz \
        --inventory analyticsengineOperatorSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
          --args "--inputDir ${OFFLINEDIR} --recursive"
    2. Verify that ibm-cpd-ae-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-cpd-ae-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
    1. Run the following command to create the Cognos Analytics catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-cognos-analytics-prod-4.0.3.tgz \
        --inventory ibmCaOperatorSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
          --args "--inputDir ${OFFLINEDIR} --recursive"
    2. Verify that ibm-ca-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-ca-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
    1. Run the following command to create the Cognos Dashboards catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-cde-2.0.1.tgz \
        --inventory cdeOperatorSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
          --args "--inputDir ${OFFLINEDIR} --recursive"
    2. Verify that ibm-cde-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-cde-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
  • The catalog source for Data Refinery is automatically created when you create the catalog source for either Watson™ Knowledge Catalog or Watson Studio.

    1. Data Virtualization has a dependency on Db2U. Verify that the Db2U catalog source (ibm-db2uoperator-catalog) is READY:
      oc get catalogsource -n openshift-marketplace ibm-db2uoperator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
    2. Run the following command to create the Data Virtualization catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-dv-case-1.7.1.tgz \
        --inventory dv \
        --namespace openshift-marketplace \
        --action install-catalog \
          --args "--inputDir ${OFFLINEDIR} --recursive"
    3. Verify that ibm-dv-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-dv-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
  • Create the appropriate catalog source for your environment:

    DataStage Enterprise
    1. Run the following command to create the DataStage Enterprise catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-datastage-enterprise-4.0.2.tgz \
        --inventory datastageOperatorSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
          --args "--inputDir ${OFFLINEDIR} --recursive"
    2. Verify that ibm-datastage-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-datastage-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
    DataStage Enterprise Plus
    1. Run the following command to create the DataStage Enterprise Plus catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-datastage-4.0.2.tgz \
        --inventory datastageOperatorSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
          --args "--inputDir ${OFFLINEDIR} --recursive"
    2. Verify that ibm-datastage-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-datastage-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
    1. Db2 has a dependency on Db2U. Verify that the Db2U catalog source (ibm-db2uoperator-catalog) is READY:
      oc get catalogsource -n openshift-marketplace ibm-db2uoperator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
    2. Run the following command to create the Db2 catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-db2oltp-4.0.1.tgz \
        --inventory db2oltpOperatorSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
          --args "--inputDir ${OFFLINEDIR} --recursive"
    3. Verify that ibm-db2oltp-cp4d-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-db2oltp-cp4d-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
    1. Db2 Big SQL has a dependency on Db2U. Verify that the Db2U catalog source (ibm-db2uoperator-catalog) is READY:
      oc get catalogsource -n openshift-marketplace ibm-db2uoperator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
    2. Run the following command to create the Db2 Big SQL catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-bigsql-case-7.2.1+20210819.120000.tgz \
        --inventory bigsql \
        --namespace openshift-marketplace \
        --action install-catalog \
          --args "--inputDir ${OFFLINEDIR} --recursive"
    3. Verify that ibm-bigsql-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-bigsql-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
    1. Run the following command to create the Db2 Data Gate catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-datagate-prod-4.0.1.tgz \
        --inventory datagateOperatorSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
          --args "--inputDir ${OFFLINEDIR} --recursive"
    2. Verify that ibm-datagate-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-datagate-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
    1. Run the following command to create the Db2 Data Management Console catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-dmc-4.0.1.tgz \
        --inventory dmcOperatorSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
          --args "--inputDir ${OFFLINEDIR} --recursive"
    2. Verify that ibm-dmc-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-dmc-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
  • Not applicable. Contact IBM Software support if you plan to install this service.

    1. Db2 Warehouse has a dependency on Db2U. Verify that the Db2U catalog source (ibm-db2uoperator-catalog) is READY:
      oc get catalogsource -n openshift-marketplace ibm-db2uoperator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
    2. Run the following command to create the Db2 Warehouse catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-db2wh-4.0.1.tgz \
        --inventory db2whOperatorSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
          --args "--inputDir ${OFFLINEDIR} --recursive"
    3. Verify that ibm-db2wh-cp4d-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-db2wh-cp4d-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
    1. Run the following command to create the Decision Optimization catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-dods-4.0.1.tgz \
        --inventory dodsOperatorSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
          --args "--inputDir ${OFFLINEDIR} --recursive"
    2. Verify that ibm-cpd-dods-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-cpd-dods-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
    1. Run the following command to create the EDB Postgres catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-cpd-edb-4.0.1.tgz \
        --inventory ibmCPDEDBSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
          --args "--inputDir ${OFFLINEDIR} --recursive"
    2. Verify that ibm-cpd-edb-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-cpd-edb-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
    1. Run the following command to create the Execution Engine for Apache Hadoop catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-hadoop-4.0.1.tgz \
        --inventory hadoopSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
          --args "--inputDir ${OFFLINEDIR} --recursive"
    2. Verify that ibm-cpd-hadoop-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-cpd-hadoop-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
  • Not applicable. For details, see the Financial Services Workbench documentation.

    1. Run the following command to create the IBM Match 360 with Watson catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-mdm-1.0.48.tgz \
        --inventory mdmOperator \
        --namespace openshift-marketplace \
        --action install-catalog \
          --args "--inputDir ${OFFLINEDIR} --recursive"
    2. Verify that ibm-mdm-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-mdm-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
  • See Watson Studio Runtimes.

  • See Watson Studio Runtimes.

    1. Run the following command to create the MongoDB catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-cpd-mongodb-4.0.1.tgz \
        --inventory ibmCPDMongodbSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
          --args "--inputDir ${OFFLINEDIR} --recursive"
    2. Verify that ibm-cpd-mongodb-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-cpd-mongodb-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
    1. Run the following command to create the OpenPages catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-openpages-2.0.1+20210715.004828.82030714.tgz \
        --inventory operatorSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
          --args "--inputDir ${OFFLINEDIR} --recursive"
    2. Verify that ibm-cpd-openpages-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-cpd-openpages-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
    If you want OpenPages to automatically provision a Db2 database, you must also create the following catalog sources:
    Db2U
    Verify that the Db2U catalog source (ibm-db2uoperator-catalog) is READY:
    oc get catalogsource -n openshift-marketplace ibm-db2uoperator-catalog \
    -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
    Db2 as a service
    1. Run the following command to create the Db2 as a service catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-db2aaservice-4.0.1.tgz \
        --inventory db2aaserviceOperatorSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
          --args "--inputDir ${OFFLINEDIR} --recursive"
    2. Verify that ibm-db2aaservice-cp4d-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-db2aaservice-cp4d-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
    1. Run the following command to create the Planning Analytics catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-planning-analytics-4.0.1.tgz \
        --inventory ibmPlanningAnalyticsOperatorSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
          --args "--inputDir ${OFFLINEDIR} --recursive"
    2. Verify that ibm-planning-analytics-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-planning-analytics-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
    1. Run the following command to create the Product Master catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-productmaster-1.0.0.tgz \
        --inventory productmasterOperatorSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
          --args "--inputDir ${OFFLINEDIR} --recursive"
    2. Verify that ibm-productmaster-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-productmaster-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
    1. Run the following command to create the RStudio Server with R 3.6 catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-rstudio-1.0.1.tgz \
        --inventory rstudioSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
          --args "--inputDir ${OFFLINEDIR} --recursive"
    2. Verify that ibm-cpd-rstudio-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-cpd-rstudio-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
    1. Run the following command to create the SPSS Modeler catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-spss-1.0.1.tgz \
        --inventory spssSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
          --args "--inputDir ${OFFLINEDIR} --recursive"
    2. Verify that ibm-cpd-spss-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-cpd-spss-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
    1. Run the following command to create the Voice Gateway catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-voice-gateway-1.0.2.tgz \
        --inventory voiceGatewayOperatorSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
        --args "--inputDir ${OFFLINEDIR} --recursive"
    2. Verify that ibm-voice-gateway-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-voice-gateway-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
  • Important: If you plan to install both Watson Discovery and Watson Assistant, you must create the catalog source for Watson Discovery first, and then create the catalog source for Watson Assistant.
    1. Run the following command to create the Watson Assistant catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR_WA}/ibm-watson-assistant-4.0.0.tgz \
        --inventory assistantOperator \
        --namespace openshift-marketplace \
        --action install-catalog \
        --args "--registry ${PRIVATE_REGISTRY} --inputDir ${OFFLINEDIR_WA} --recursive"
    2. Verify that ibm-watson-assistant-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-watson-assistant-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
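Each verification command in this section queries one catalog source at a time. After you have created several catalog sources, a single `jsonpath` range expression can report the state of all of them at once. This is a convenience sketch, not part of the documented procedure; the `list_catalog_states` helper name is our own:

```shell
# Print every catalog source in openshift-marketplace with its last observed state.
list_catalog_states() {
  oc get catalogsource -n openshift-marketplace \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.connectionState.lastObservedState}{"\n"}{end}'
}
```

Any catalog source that does not report READY can then be inspected individually with the commands shown in this section.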
  • Watson Assistant for Voice Interaction comprises the following services:
    • Voice Gateway
    • Watson Assistant
    • Watson Speech to Text
    • Watson Text to Speech
    1. Run the following command to create the Watson Discovery catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-watson-discovery-4.0.0.tgz \
        --inventory discoveryOperators \
        --namespace openshift-marketplace \
        --action install-catalog \
        --args "--inputDir ${OFFLINEDIR} --recursive"
    2. Verify that ibm-watson-discovery-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-watson-discovery-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
    1. Run the following command to create the Watson Knowledge Catalog catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-wkc-4.0.1.tgz \
        --inventory wkcOperatorSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
        --args "--inputDir ${OFFLINEDIR} --recursive"
    2. Verify that ibm-cpd-wkc-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-cpd-wkc-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
    1. Run the following command to create the Watson Machine Learning catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-wml-cpd-4.0.2.tgz \
        --inventory wmlOperatorSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
        --args "--inputDir ${OFFLINEDIR} --recursive"
    2. Verify that ibm-cpd-wml-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-cpd-wml-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
    1. Run the following command to create the Watson Machine Learning Accelerator catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-wml-accelerator-2.3.1.tgz \
        --inventory wmla_operator_deploy \
        --namespace openshift-marketplace \
        --action install-catalog \
        --args "--inputDir ${OFFLINEDIR} --recursive"
    2. Verify that ibm-cpd-wml-accelerator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-cpd-wml-accelerator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
    1. Run the following command to create the Watson OpenScale catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-watson-openscale-2.1.0.tgz \
        --inventory ibmWatsonOpenscaleOperatorSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
        --args "--inputDir ${OFFLINEDIR} --recursive"
    2. Verify that ibm-openscale-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-openscale-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
  • The same operator is used for the Watson Speech to Text service and the Watson Text to Speech service. You only need to create the catalog source once.

    1. Run the following command to create the Watson Speech to Text catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-watson-speech-4.0.0.tgz \
        --inventory speechOperatorSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
        --args "--registry ${PRIVATE_REGISTRY} --inputDir ${OFFLINEDIR} --recursive"
    2. Verify that ibm-watson-speech-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-watson-speech-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
    1. Run the following command to create the Watson Studio catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-wsl-2.0.1.tgz \
        --inventory wslSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
        --args "--inputDir ${OFFLINEDIR} --recursive"
    2. Verify that ibm-cpd-ws-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-cpd-ws-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
  • The same operator is used for the Jupyter Notebooks with Python 3.7 for GPU service and the Jupyter Notebooks with R 3.6 service. You only need to create the catalog source once.

    1. Run the following command to create the Watson Studio Runtimes catalog source:
      cloudctl case launch \
        --case ${OFFLINEDIR}/ibm-wsl-runtimes-1.0.1.tgz \
        --inventory runtimesOperatorSetup \
        --namespace openshift-marketplace \
        --action install-catalog \
        --args "--inputDir ${OFFLINEDIR} --recursive"
    2. Verify that ibm-cpd-ws-runtimes-operator-catalog is READY:
      oc get catalogsource -n openshift-marketplace ibm-cpd-ws-runtimes-operator-catalog \
      -o jsonpath='{.status.connectionState.lastObservedState} {"\n"}'
  • The same operator is used for the Watson Speech to Text service and the Watson Text to Speech service. You only need to create the catalog source once.

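A newly created catalog source can take a few minutes to reach READY. Rather than rerunning the verification command by hand, you can poll until the state is READY or a timeout is reached. The following is an illustrative sketch, not part of the documented procedure; the `wait_for_catalog` helper name is our own, and the catalog source names come from the examples in this section:

```shell
# Poll a catalog source until its lastObservedState is READY.
# Usage: wait_for_catalog <catalog-source-name> [timeout-seconds]
wait_for_catalog() {
  name=$1
  timeout=${2:-300}
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    state=$(oc get catalogsource -n openshift-marketplace "$name" \
      -o jsonpath='{.status.connectionState.lastObservedState}' 2>/dev/null)
    if [ "$state" = "READY" ]; then
      echo "$name is READY"
      return 0
    fi
    sleep 10
    elapsed=$((elapsed + 10))
  done
  echo "Timed out waiting for $name (last state: ${state:-unknown})" >&2
  return 1
}
```

For example, `wait_for_catalog ibm-watson-speech-operator-catalog` blocks until the Watson Speech catalog source reports READY or five minutes pass.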