Installing the Cloud Pak for Data Operator on clusters connected to the internet

A Red Hat® OpenShift® cluster administrator can install the IBM Cloud Pak® for Data Operator on a cluster that is connected to the internet.

Before you begin

Required role
To complete this task, you must be a Red Hat OpenShift cluster administrator.

Before you install the Cloud Pak for Data Operator, ensure that:

Procedure

  1. Complete the appropriate steps to install the Cloud Pak for Data Operator on your environment:
    Environment                                        Versions         Installation options
    Environments with Operator Lifecycle Manager       4.6, 4.5         Command-line interface or OpenShift Console
    Environments without Operator Lifecycle Manager    4.6, 4.5, 3.11   Command-line interface
  2. Complete the appropriate steps in What to do next.

Installing from the command-line interface on environments with Operator Lifecycle Manager

Use the following steps if you are installing on a Red Hat OpenShift 4.5 cluster where Operator Lifecycle Manager is installed.

From your installation node:

  1. Install the IBM Cloud Pak CLI (cloudctl):
    1. Download the cloudctl software from the IBM/cloud-pak-cli repository on GitHub. Download the appropriate package for your installation node.
    2. Extract the contents of the tar.gz:
      tar -xf cloudctl-architecture.tar.gz

      The value of architecture depends on the file that you downloaded.

    3. Change to the directory where you extracted the file and make the file executable:
      chmod 755 cloudctl-architecture

      The value of architecture depends on the file that you downloaded.

    4. Move the file to the /usr/local/bin directory:
      mv cloudctl-architecture /usr/local/bin/cloudctl
    5. Confirm that cloudctl is installed:
      cloudctl --help
  2. Install the IBM Cloud Paks Container Application Software for Enterprises (CASE) software:
    1. Download the IBM Cloud Paks CASE software from the IBM/cloud-pak repository on GitHub.
    2. Extract the contents of the ibm-cp-datacore-1.3.12.tgz file:
      tar -xf ibm-cp-datacore-1.3.12.tgz
  3. Log in to your Red Hat OpenShift cluster as an administrator:
    oc login OpenShift_URL:port
  4. Create the project (namespace) where you plan to install the Cloud Pak for Data Operator:
    oc new-project Operator_namespace

    Replace Operator_namespace with the name of the namespace that you want to create, for example cpd-meta-ops.

  5. Set the following environment variables on your command-line session:
    export CPD_REGISTRY=cp.icr.io/cp/cpd
    export CPD_REGISTRY_USER=cp
    export CPD_REGISTRY_PASSWORD=API_key
    export NAMESPACE=Operator_namespace

    Replace the following values:

    Variable Replace with
    API_key Your entitlement license API key.
    Operator_namespace The project you created in the previous step. This is the project where you plan to install the Cloud Pak for Data Operator.
  6. Run the following command to install the catalog and the operator:
    cloudctl case launch                 \
        --case ibm-cp-datacore                    \
        --namespace ${NAMESPACE}                  \
        --inventory cpdMetaOperatorSetup          \
        --action install-operator                 \
        --tolerance=1                             \
        --args "--entitledRegistry ${CPD_REGISTRY} --entitledUser ${CPD_REGISTRY_USER} --entitledPass ${CPD_REGISTRY_PASSWORD}"
  7. Confirm that the ibm-cp-data-operator was successfully deployed:
    oc get pods -n ${NAMESPACE} -l name=ibm-cp-data-operator
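Steps 3 - 7 above can be collected into one reviewable script. The sketch below only composes the cloudctl case launch command from the environment variables that this procedure defines and prints it for inspection; it does not contact the cluster. The values cpd-meta-ops and API_key are the example placeholders from this procedure, and build_launch_cmd is a hypothetical helper, not part of cloudctl.

```shell
#!/bin/sh
# Sketch: compose the install command from the variables set in step 5.
# The placeholder values below must be replaced before use.
CPD_REGISTRY=cp.icr.io/cp/cpd
CPD_REGISTRY_USER=cp
CPD_REGISTRY_PASSWORD=API_key   # replace with your entitlement license API key
NAMESPACE=cpd-meta-ops          # replace with your operator project

# Build the command as one string so it can be reviewed (or piped to sh)
# before anything touches the cluster.
build_launch_cmd() {
  printf 'cloudctl case launch --case ibm-cp-datacore --namespace %s --inventory cpdMetaOperatorSetup --action install-operator --tolerance=1 --args "--entitledRegistry %s --entitledUser %s --entitledPass %s"' \
    "$NAMESPACE" "$CPD_REGISTRY" "$CPD_REGISTRY_USER" "$CPD_REGISTRY_PASSWORD"
}

build_launch_cmd
echo
```

Printing the command first is a review step, not a requirement; on the installation node you would run the composed command directly, then verify the deployment with the oc get pods check in step 7.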

Installing from the OpenShift Console on environments with Operator Lifecycle Manager

Use the following steps if you are installing on a Red Hat OpenShift 4.5 cluster where Operator Lifecycle Manager is installed.

From your installation node:

  1. Log in to the OpenShift Console as a cluster administrator.
  2. From the command-line interface, create a secret to store your entitlement license API key:
    oc create secret docker-registry ibm-entitlement-key \
    --docker-server=cp.icr.io \
    --docker-username=cp \
    --docker-password=API_key \
    --namespace Operator_namespace

    Replace the following values:

    Variable Replace with
    API_key Your entitlement license API key.
    Operator_namespace The project (namespace) where you plan to install the Cloud Pak for Data Operator.
  3. From the OpenShift Console, create a catalog source for the Cloud Pak for Data Operator:
    1. From the menu, click Administration > Cluster Settings > Global Configuration > OperatorHub > Sources.
    2. Click Create Catalog Source and specify the following information:
      • For the name, specify ibm-cp-data-operator-catalog.
      • For the display name, specify Cloud Pak for Data.
      • For the image, specify icr.io/cpopen/ibm-cp-data-operator-catalog:latest.
      • For the publisher, specify IBM.
    3. Select Cluster-wide catalog source for the default availability.
    4. Click Create.
    5. Wait until there is at least one operator associated with the ibm-cp-data-operator-catalog before continuing.

      You can see the number of operators associated with the catalog source on the Sources page.

  4. Install the Cloud Pak for Data Operator:
    1. From the menu, click Operators > OperatorHub.

      It might take a few minutes for the Cloud Pak for Data Operator to appear as a custom provider in the OperatorHub catalog.

    2. Select the IBM Cloud Pak for Data Operator and click Install.
    3. On the Install page, ensure that the default options are set:
      • For the Update Channel, select v1.0.
      • For the Installation Mode, select All namespaces on the cluster.
      • For the Installed Namespace, select cpd-meta-ops.
      • For the Approval Strategy, select Automatic.
    4. Click Install.
  5. To verify the operator is running properly, run the following command:
    oc get pods -n cpd-meta-ops -l name=ibm-cp-data-operator
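The catalog source created in the console in step 3 can also be expressed as a manifest, which is useful for scripting or review. The sketch below mirrors the values entered in the console (name, image, display name, publisher) in a standard Operator Lifecycle Manager CatalogSource; it is an assumed command-line equivalent of the console steps, not part of the documented procedure, and the oc apply call is guarded so the manifest can be generated and reviewed on a machine without cluster access.

```shell
#!/bin/sh
# Sketch: generate a CatalogSource manifest equivalent to the console
# entries in step 3, then apply it if oc is available.
cat > ibm-cp-data-operator-catalog.yaml <<'EOF'
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: ibm-cp-data-operator-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: icr.io/cpopen/ibm-cp-data-operator-catalog:latest
  displayName: Cloud Pak for Data
  publisher: IBM
EOF

# Apply only when oc is installed and you are logged in as a cluster admin.
if command -v oc >/dev/null 2>&1; then
  oc apply -f ibm-cp-data-operator-catalog.yaml
else
  echo "oc not found; review ibm-cp-data-operator-catalog.yaml and apply it from your installation node."
fi
```

Placing the CatalogSource in the openshift-marketplace namespace corresponds to the Cluster-wide catalog source availability selected in the console.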

Installing on environments without Operator Lifecycle Manager

Use the following steps if you are installing on one of the following environments:

  • A Red Hat OpenShift 3.11 cluster
  • A Red Hat OpenShift 4.5 cluster where Operator Lifecycle Manager is not installed

From your installation node:

  1. Install the IBM Cloud Pak CLI (cloudctl):
    1. Download the cloudctl software from the IBM/cloud-pak-cli repository on GitHub. Download the appropriate package for your installation node.
    2. Extract the contents of the tar.gz:
      tar -xf cloudctl-architecture.tar.gz

      The value of architecture depends on the file that you downloaded.

    3. Change to the directory where you extracted the file and make the file executable:
      chmod 755 cloudctl-architecture

      The value of architecture depends on the file that you downloaded.

    4. Move the file to the /usr/local/bin directory:
      mv cloudctl-architecture /usr/local/bin/cloudctl
    5. Confirm that cloudctl is installed:
      cloudctl --help
  2. Install the IBM Cloud Paks Container Application Software for Enterprises (CASE) software:
    1. Download the IBM Cloud Paks CASE software from the IBM/cloud-pak repository on GitHub.
    2. Extract the contents of the ibm-cp-datacore-1.3.12.tgz file:
      tar -xf ibm-cp-datacore-1.3.12.tgz
  3. Log in to your Red Hat OpenShift cluster as an administrator:
    oc login OpenShift_URL:port
  4. Create the project (namespace) where you plan to install the Cloud Pak for Data Operator:
    oc new-project Operator_namespace

    Replace Operator_namespace with the name of the namespace that you want to create, for example cpd-meta-ops.

  5. Set the following environment variables on your command-line session:
    export CPD_REGISTRY=cp.icr.io/cp/cpd
    export CPD_REGISTRY_USER=cp
    export CPD_REGISTRY_PASSWORD=API_key
    export NAMESPACE=Operator_namespace

    Replace the following values:

    Variable Replace with
    API_key Your entitlement license API key.
    Operator_namespace The project you created in the previous step. This is the project where you plan to install the Cloud Pak for Data Operator.
  6. (3.11 environments only) Create a service account named ibm-cp-data-operator-serviceaccount and assign it the cluster administrator role:
    cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: ibm-cp-data-operator-serviceaccount
    EOF
    
    oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:${NAMESPACE}:ibm-cp-data-operator-serviceaccount
  7. Run the following command to install the catalog and the operator:
    cloudctl case launch                 \
        --case ibm-cp-datacore                    \
        --namespace ${NAMESPACE}                  \
        --inventory cpdMetaOperatorSetup          \
        --action install-operator-native          \
        --tolerance=1                             \
        --args "--entitledRegistry ${CPD_REGISTRY} --entitledUser ${CPD_REGISTRY_USER} --entitledPass ${CPD_REGISTRY_PASSWORD}"
  8. Confirm that the ibm-cp-data-operator was successfully deployed:
    oc get pods -n ${NAMESPACE} -l name=ibm-cp-data-operator
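The oc get pods check in step 8 reports pod status, but you still have to read the READY and STATUS columns yourself. The sketch below shows one way to automate that reading; pod_is_ready is a hypothetical helper, not part of oc or cloudctl, and it assumes the default oc get pods line shape (NAME READY STATUS RESTARTS AGE).

```shell
#!/bin/sh
# Sketch: decide from one line of `oc get pods` output whether a pod is up.
# Example line:  ibm-cp-data-operator-7d9f...  1/1  Running  0  2m
pod_is_ready() {
  line=$1
  # shellcheck disable=SC2086  # intentional word splitting of the line
  set -- $line
  ready=$2 status=$3
  [ "$status" = "Running" ] || return 1
  # READY has the form n/m; require that all containers are ready (n = m).
  [ "${ready%/*}" = "${ready#*/}" ]
}

# On a live cluster (requires oc and an admin login session):
#   oc get pods -n "$NAMESPACE" -l name=ibm-cp-data-operator --no-headers \
#     | while read -r l; do pod_is_ready "$l" || echo "not ready: $l"; done
pod_is_ready "ibm-cp-data-operator-abc 1/1 Running 0 2m" && echo "ready"
```

A loop like the commented one can be polled until it produces no "not ready" lines before moving on to What to do next.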

What to do next