Installing the Cloud Pak for Data Operator on clusters connected to the internet
A Red Hat® OpenShift® cluster administrator can install the IBM Cloud Pak® for Data Operator on a cluster that is connected to the internet.
Before you begin
- Required role
- To complete this task, you must be a Red Hat OpenShift cluster administrator.
Procedure
- Complete the appropriate steps to install the Cloud Pak for Data Operator on your environment:
  | Environment | Versions | Installation options |
  |---|---|---|
  | Environments with Operator Lifecycle Manager | 4.6, 4.5 | Installing from the command-line interface; Installing from the OpenShift Console |
  | Environments without Operator Lifecycle Manager | 4.6, 4.5, 3.11 | Installing on environments without Operator Lifecycle Manager |
- Complete the appropriate steps in What to do next.
Installing from the command-line interface on environments with Operator Lifecycle Manager
Use the following steps if you are installing on a Red Hat OpenShift 4.5 cluster where Operator Lifecycle Manager is installed.
From your installation node:
- Install the IBM Cloud Pak CLI (cloudctl):
  - Download the cloudctl software from the IBM/cloud-pak-cli repository on GitHub. Download the appropriate package for your installation node.
  - Extract the contents of the tar.gz file:

    ```
    tar -xf cloudctl-architecture.tar.gz
    ```

    The value of architecture depends on the file that you downloaded.
  - Change to the directory where you extracted the file and make the file executable:

    ```
    chmod 755 cloudctl-architecture
    ```

    The value of architecture depends on the file that you downloaded.
  - Move the file to the /usr/local/bin directory:

    ```
    mv cloudctl-architecture /usr/local/bin/cloudctl
    ```

  - Confirm that cloudctl is installed:

    ```
    cloudctl --help
    ```

- Install the IBM Cloud Paks Container Application Software for Enterprises (CASE) software:
  - Download the IBM Cloud Paks CASE software from the IBM/cloud-pak repository on GitHub.
  - Extract the contents of the ibm-cp-datacore-1.3.12.tgz file:

    ```
    tar -xf ibm-cp-datacore-1.3.12.tgz
    ```
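The download step above leaves architecture as a placeholder. The following sketch derives a likely asset name from `uname`, assuming the `cloudctl-<os>-<arch>.tar.gz` naming used on the IBM/cloud-pak-cli releases page; verify the name against the actual release before downloading.

```shell
# Derive a candidate cloudctl release asset name for this node.
# Assumption: assets are named cloudctl-<os>-<arch>.tar.gz, as on the
# IBM/cloud-pak-cli GitHub releases page; confirm before downloading.
os=$(uname -s | tr '[:upper:]' '[:lower:]')   # e.g. linux or darwin
arch=$(uname -m)
case "$arch" in
  x86_64)  arch=amd64 ;;
  aarch64) arch=arm64 ;;
esac
asset="cloudctl-${os}-${arch}.tar.gz"
echo "$asset"
```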
- Log in to your Red Hat OpenShift cluster as an administrator:

  ```
  oc login OpenShift_URL:port
  ```

- Create the project (namespace) where you plan to install the Cloud Pak for Data Operator:

  ```
  oc new-project Operator_namespace
  ```

  Replace Operator_namespace with the name of the namespace that you want to create, such as cpd-meta-ops.
- Set the following environment variables in your command-line session:

  ```
  export CPD_REGISTRY=cp.icr.io/cp/cpd
  export CPD_REGISTRY_USER=cp
  export CPD_REGISTRY_PASSWORD=API_key
  export NAMESPACE=Operator_namespace
  ```

  Replace the following values:

  | Variable | Replace with |
  |---|---|
  | API_key | Your entitlement license API key. |
  | Operator_namespace | The project that you created in the previous step. This is the project where you plan to install the Cloud Pak for Data Operator. |

- Run the following command to install the catalog and the operator:

  ```
  cloudctl case launch \
    --case ibm-cp-datacore \
    --namespace ${NAMESPACE} \
    --inventory cpdMetaOperatorSetup \
    --action install-operator \
    --tolerance=1 \
    --args "--entitledRegistry ${CPD_REGISTRY} --entitledUser ${CPD_REGISTRY_USER} --entitledPass ${CPD_REGISTRY_PASSWORD}"
  ```

- Confirm that the ibm-cp-data-operator was successfully deployed:

  ```
  oc get pods -n ${NAMESPACE} -l name=ibm-cp-data-operator
  ```
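If any of the exported variables is empty, the `--args` string that is passed to `cloudctl case launch` is malformed and the install can fail in a confusing way. A minimal guard sketch using the variable names from the steps above; the values here are placeholders, so replace API_key and the namespace with your own.

```shell
# Placeholder values; replace API_key and the namespace with your own.
export CPD_REGISTRY=cp.icr.io/cp/cpd
export CPD_REGISTRY_USER=cp
export CPD_REGISTRY_PASSWORD=API_key
export NAMESPACE=cpd-meta-ops

# Fail fast if any required variable is unset or empty.
for v in CPD_REGISTRY CPD_REGISTRY_USER CPD_REGISTRY_PASSWORD NAMESPACE; do
  eval "val=\${$v:-}"
  if [ -z "$val" ]; then
    echo "ERROR: $v is not set" >&2
    exit 1
  fi
done
echo "registry variables OK"
```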
Installing from the OpenShift Console on environments with Operator Lifecycle Manager
Use the following steps if you are installing on a Red Hat OpenShift 4.5 cluster where Operator Lifecycle Manager is installed.
From your installation node:
- Log in to the OpenShift Console as a cluster administrator.
- From the command-line interface, create a secret to store your entitlement license API key:

  ```
  oc create secret docker-registry ibm-entitlement-key \
    --docker-server=cp.icr.io \
    --docker-username=cp \
    --docker-password=API_key \
    --namespace Operator_namespace
  ```

  Replace the following values:

  | Variable | Replace with |
  |---|---|
  | API_key | Your entitlement license API key. |
  | Operator_namespace | The project where you plan to install the Cloud Pak for Data Operator. |

- From the OpenShift Console, create a catalog source for the Cloud Pak for Data Operator:
- From the menu, click .
- Click Create Catalog Source and specify the following information:
- For the name, specify ibm-cp-data-operator-catalog.
- For the display name, specify Cloud Pak for Data.
- For the image, specify icr.io/cpopen/ibm-cp-data-operator-catalog:latest.
- For the publisher, specify IBM.
- Select Cluster-wide catalog source for the default availability.
- Click Create.
- Wait until there is at least one operator associated with the ibm-cp-data-operator-catalog catalog source before continuing. You can see the number of operators that are associated with the catalog source on the Sources page.
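The console fields above map onto an Operator Lifecycle Manager CatalogSource resource. The following is a sketch of the equivalent YAML, assuming the standard operators.coreos.com/v1alpha1 schema and the default openshift-marketplace namespace for cluster-wide catalog sources; verify both against your cluster before applying.

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: ibm-cp-data-operator-catalog
  namespace: openshift-marketplace   # assumed namespace for cluster-wide catalogs
spec:
  sourceType: grpc
  image: icr.io/cpopen/ibm-cp-data-operator-catalog:latest
  displayName: Cloud Pak for Data
  publisher: IBM
```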
- Install the Cloud Pak for Data Operator:
- From the menu, click .
  It might take a few minutes for the Cloud Pak for Data Operator to appear as a custom provider in the Operator Hub Catalog.
- Select the IBM Cloud Pak for Data Operator and click Install.
- On the Install page, ensure that the default options are set:
  - For the Update Channel, select v1.0.
  - For the Installation Mode, select All namespaces on the cluster.
  - For the Installed Namespace, select cpd-meta-ops.
- For the Approval Strategy, select Automatic.
- Click Install.
- From the menu, click .
- To verify that the operator is running properly, run the following command:

  ```
  oc get pods -n cpd-meta-ops -l name=ibm-cp-data-operator
  ```
Installing on environments without Operator Lifecycle Manager
Use the following steps if you are installing on a:
- Red Hat OpenShift 3.11 cluster
- Red Hat OpenShift 4.5 cluster where Operator Lifecycle Manager is not installed
From your installation node:
- Install the IBM Cloud Pak CLI (cloudctl):
  - Download the cloudctl software from the IBM/cloud-pak-cli repository on GitHub. Download the appropriate package for your installation node.
  - Extract the contents of the tar.gz file:

    ```
    tar -xf cloudctl-architecture.tar.gz
    ```

    The value of architecture depends on the file that you downloaded.
  - Change to the directory where you extracted the file and make the file executable:

    ```
    chmod 755 cloudctl-architecture
    ```

    The value of architecture depends on the file that you downloaded.
  - Move the file to the /usr/local/bin directory:

    ```
    mv cloudctl-architecture /usr/local/bin/cloudctl
    ```

  - Confirm that cloudctl is installed:

    ```
    cloudctl --help
    ```

- Install the IBM Cloud Paks Container Application Software for Enterprises (CASE) software:
  - Download the IBM Cloud Paks CASE software from the IBM/cloud-pak repository on GitHub.
  - Extract the contents of the ibm-cp-datacore-1.3.12.tgz file:

    ```
    tar -xf ibm-cp-datacore-1.3.12.tgz
    ```
- Log in to your Red Hat OpenShift cluster as an administrator:

  ```
  oc login OpenShift_URL:port
  ```

- Create the project (namespace) where you plan to install the Cloud Pak for Data Operator:

  ```
  oc new-project Operator_namespace
  ```

  Replace Operator_namespace with the name of the namespace that you want to create, such as cpd-meta-ops.
- Set the following environment variables in your command-line session:

  ```
  export CPD_REGISTRY=cp.icr.io/cp/cpd
  export CPD_REGISTRY_USER=cp
  export CPD_REGISTRY_PASSWORD=API_key
  export NAMESPACE=Operator_namespace
  ```

  Replace the following values:

  | Variable | Replace with |
  |---|---|
  | API_key | Your entitlement license API key. |
  | Operator_namespace | The project that you created in the previous step. This is the project where you plan to install the Cloud Pak for Data Operator. |

- 3.11 environments only: Create a service account called ibm-cp-data-operator-serviceaccount and assign it the cluster administrator role:

  ```
  cat <<EOF | oc apply -f -
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: ibm-cp-data-operator-serviceaccount
  EOF

  oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:${NAMESPACE}:ibm-cp-data-operator-serviceaccount
  ```

- Run the following command to install the catalog and the operator:

  ```
  cloudctl case launch \
    --case ibm-cp-datacore \
    --namespace ${NAMESPACE} \
    --inventory cpdMetaOperatorSetup \
    --action install-operator-native \
    --tolerance=1 \
    --args "--entitledRegistry ${CPD_REGISTRY} --entitledUser ${CPD_REGISTRY_USER} --entitledPass ${CPD_REGISTRY_PASSWORD}"
  ```

- Confirm that the ibm-cp-data-operator was successfully deployed:

  ```
  oc get pods -n ${NAMESPACE} -l name=ibm-cp-data-operator
  ```
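The cluster-admin binding in the 3.11-only step targets a fully qualified service account subject. This sketch assembles and prints that string so you can double-check the namespace before running `oc adm policy`; the namespace value here is a placeholder.

```shell
# Placeholder namespace; use the value of ${NAMESPACE} from the earlier step.
NAMESPACE=cpd-meta-ops
subject="system:serviceaccount:${NAMESPACE}:ibm-cp-data-operator-serviceaccount"
echo "$subject"
```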
What to do next
- If you're planning to use Portworx storage, see Creating Portworx storage classes.
- If you're planning to install Cloud Pak for Data from the command-line interface, see:
- If you're planning to install Cloud Pak for Data by creating a CRD, see either:
The cluster meets the minimum requirements for installing