Installing the required components for an instance of IBM Software Hub
To install an instance of IBM Software Hub, you must install the required operators and custom resources for the instance.
Installation phase
- Setting up a client workstation
- Setting up a cluster
- Collecting required information
- Preparing to run installs in a restricted network
- Preparing to run installs from a private container registry
- Preparing the cluster for IBM Software Hub
- Preparing to install an instance of IBM Software Hub
- Installing an instance of IBM Software Hub
  - Setting up the control plane
  - Installing solutions and services

Who needs to complete this task?
Instance administrator: An instance administrator can complete this task.

When do you need to complete this task?
Repeat as needed: If you plan to install multiple instances of IBM Software Hub, you must repeat this task for each instance that you plan to install.
Before you begin
Ensure that you source the environment variables before you run the commands in this task.
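As a hedged illustration of what such an environment variables script might contain: the variable names below are the ones used by the commands in this topic, but every value is a placeholder that you must replace with the values for your environment.

```shell
# Hypothetical excerpt of an environment variables script for this task.
# The variable names match the commands in this topic; the values are
# placeholders only -- substitute the values for your environment.
export VERSION=5.1.0                                    # placeholder release
export PROJECT_CPD_INST_OPERATORS=cpd-inst-operators    # placeholder project
export PROJECT_CPD_INST_OPERANDS=cpd-inst-operands      # placeholder project
export STG_CLASS_BLOCK=ocs-storagecluster-ceph-rbd      # placeholder class
export STG_CLASS_FILE=ocs-storagecluster-cephfs         # placeholder class

# Confirm that every required variable is set before running cpd-cli commands
for v in VERSION PROJECT_CPD_INST_OPERATORS PROJECT_CPD_INST_OPERANDS \
         STG_CLASS_BLOCK STG_CLASS_FILE; do
  if [ -z "${!v}" ]; then
    echo "WARNING: $v is not set" >&2
  else
    echo "$v=${!v}"
  fi
done
```

Sourcing a file like this in the same shell session where you run cpd-cli ensures that the `${...}` references in the commands below expand to real values.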
About this task
Use the setup-instance command to install the required operators and custom resources for an instance of IBM Software Hub.

The setup-instance commands in this topic include the --run_storage_tests option. It is strongly recommended that you run the command with this option to ensure that the storage in your environment meets the minimum requirements for performance and functionality. If your storage does not meet the minimum requirements, you can remove the --run_storage_tests option to continue the installation. However, your environment is likely to encounter problems because of issues with your storage.
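The procedure notes that running setup-instance with the --preview=true option writes the oc commands to the preview.sh file in the work directory instead of changing the cluster. As a rough sketch of how you might review such a file before applying anything, the preview.sh content below is a fabricated stand-in, not real cpd-cli output:

```shell
# Create a stand-in preview.sh to illustrate the review step; a real file
# is generated by running: cpd-cli manage setup-instance ... --preview=true
mkdir -p work
cat > work/preview.sh <<'EOF'
oc new-project example-operators
oc apply -f example-subscription.yaml
EOF

# List which oc verbs the install would run, before executing anything
grep -o 'oc [a-z-]*' work/preview.sh | sort | uniq -c
```

Reviewing the generated script this way lets a cluster administrator confirm what will change before the real setup-instance run.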
Procedure
- Log the cpd-cli in to the Red Hat® OpenShift® Container Platform cluster:

  ${CPDM_OC_LOGIN}

  Remember: CPDM_OC_LOGIN is an alias for the cpd-cli manage login-to-ocp command.

- Review the license terms for the software that you plan to install. The licenses are available online. Run the appropriate commands based on the license that you purchased:
IBM Cloud Pak for Data Enterprise Edition

  cpd-cli manage get-license \
    --release=${VERSION} \
    --license-type=EE

IBM Cloud Pak for Data Standard Edition

  cpd-cli manage get-license \
    --release=${VERSION} \
    --license-type=SE

IBM Data Gate for watsonx

  cpd-cli manage get-license \
    --release=${VERSION} \
    --component=datagate \
    --license-type=DGWXD

IBM Data Product Hub Cartridge

  cpd-cli manage get-license \
    --release=${VERSION} \
    --component=dataproduct \
    --license-type=DPH
Data Replication

Run the appropriate command based on the license that you purchased:

- IBM Data Replication Cartridge

  cpd-cli manage get-license \
    --release=${VERSION} \
    --component=replication \
    --license-type=IDRC

- IBM InfoSphere® Data Replication Cartridge

  cpd-cli manage get-license \
    --release=${VERSION} \
    --component=replication \
    --license-type=IIDRC

- IBM Data Replication Modernization

  cpd-cli manage get-license \
    --release=${VERSION} \
    --component=replication \
    --license-type=IDRM

- IBM InfoSphere Data Replication Modernization

  cpd-cli manage get-license \
    --release=${VERSION} \
    --component=replication \
    --license-type=IIDRM

- IBM Data Replication for Db2® z/OS® Cartridge

  cpd-cli manage get-license \
    --release=${VERSION} \
    --component=replication \
    --license-type=IDRZOS

- IBM InfoSphere Data Replication for watsonx.data™ Cartridge

  cpd-cli manage get-license \
    --release=${VERSION} \
    --component=replication \
    --license-type=IIDRWXTO

- IBM InfoSphere Data Replication Cartridge Add-on for IBM watsonx.data

  cpd-cli manage get-license \
    --release=${VERSION} \
    --component=replication \
    --license-type=IIDRWXAO
Db2

Run the appropriate command based on the license that you purchased:

- IBM Db2 Standard Edition Cartridge for IBM Cloud Pak for Data

  cpd-cli manage get-license \
    --release=${VERSION} \
    --component=db2oltp \
    --license-type=DB2SE

- IBM Db2 Advanced Edition Cartridge for IBM Cloud Pak for Data

  cpd-cli manage get-license \
    --release=${VERSION} \
    --component=db2oltp \
    --license-type=DB2AE
IBM Knowledge Catalog Premium

  cpd-cli manage get-license \
    --release=${VERSION} \
    --component=ikc_premium \
    --license-type=IKCP

IBM Knowledge Catalog Standard

  cpd-cli manage get-license \
    --release=${VERSION} \
    --component=ikc_standard \
    --license-type=IKCS

Synthetic Data Generator

  cpd-cli manage get-license \
    --release=${VERSION} \
    --component=syntheticdata \
    --license-type=WXAI

IBM watsonx.ai

  cpd-cli manage get-license \
    --release=${VERSION} \
    --component=watsonx_ai \
    --license-type=WXAI

IBM watsonx.data

  cpd-cli manage get-license \
    --release=${VERSION} \
    --component=watsonx_data \
    --license-type=WXD
- Install the required components for an instance of IBM Software Hub:

  Tip: Before you run this command against your cluster, you can preview the oc commands that this command will issue on your behalf by running the command with the --preview=true option. The oc commands are saved to the preview.sh file in the work directory.

  The command that you run depends on the storage on your cluster and whether you plan to use tethered projects:
Red Hat OpenShift Data Foundation storage

- Instances without tethered projects

  cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true

- Instances with tethered projects

  cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --tethered_ns=${PROJECT_CPD_INSTANCE_TETHERED_LIST} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true
IBM Fusion Data Foundation storage

- Instances without tethered projects

  cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true

- Instances with tethered projects

  cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --tethered_ns=${PROJECT_CPD_INSTANCE_TETHERED_LIST} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true
IBM Fusion Global Data Platform storage

When you use IBM Fusion storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class, typically ibm-spectrum-scale-sc or ibm-storage-fusion-cp-sc.

- Instances without tethered projects

  cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true

- Instances with tethered projects

  cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --tethered_ns=${PROJECT_CPD_INSTANCE_TETHERED_LIST} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true
IBM Storage Scale Container Native storage

When you use IBM Storage Scale Container Native storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class, typically ibm-spectrum-scale-sc.

- Instances without tethered projects

  cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true

- Instances with tethered projects

  cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --tethered_ns=${PROJECT_CPD_INSTANCE_TETHERED_LIST} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true
Portworx storage

- Instances without tethered projects

  cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --storage_vendor=portworx \
    --run_storage_tests=true

- Instances with tethered projects

  cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --tethered_ns=${PROJECT_CPD_INSTANCE_TETHERED_LIST} \
    --storage_vendor=portworx \
    --run_storage_tests=true
NFS storage

When you use NFS storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class, typically managed-nfs-storage.

- Instances without tethered projects

  cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true

- Instances with tethered projects

  cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --tethered_ns=${PROJECT_CPD_INSTANCE_TETHERED_LIST} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true
AWS EFS storage only

When you use only EFS storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class, typically efs-nfs-client.

- Instances without tethered projects

  cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true

- Instances with tethered projects

  cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --tethered_ns=${PROJECT_CPD_INSTANCE_TETHERED_LIST} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true
AWS EFS and EBS storage

- Instances without tethered projects

  cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true

- Instances with tethered projects

  cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --tethered_ns=${PROJECT_CPD_INSTANCE_TETHERED_LIST} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true
NetApp Trident storage

When you use NetApp Trident storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class, typically ontap-nas.

- Instances without tethered projects

  cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true

- Instances with tethered projects

  cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --tethered_ns=${PROJECT_CPD_INSTANCE_TETHERED_LIST} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true
Nutanix storage

- Instances without tethered projects

  cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true

- Instances with tethered projects

  cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --tethered_ns=${PROJECT_CPD_INSTANCE_TETHERED_LIST} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true
  Wait for the cpd-cli to return the following message before proceeding to the next step:

  [SUCCESS] ... The setup-instance command ran successfully.

- Confirm that the status of the operands is Completed:

  cpd-cli manage get-cr-status \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS}

- Check the health of the resources in the operators project:
  cpd-cli health operators \
    --operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --control_plane_ns=${PROJECT_CPD_INST_OPERANDS}

  Confirm that the health check report returns the expected results:

  - Pod Healthcheck: For pods in the operators project, the status of each required pod is Running. Expected result: [SUCCESS]
  - Pod Usage Healthcheck: For pods in the operators project, the resource use for each pod is within the CPU and memory limits. Expected result: [SUCCESS]
  - Cluster Service Versions Healthcheck: For cluster service versions (CSVs) in the operators project, the phase of each CSV is Succeeded. Expected result: [SUCCESS]
  - Catalog Source Healthcheck: For catalog sources in the operators project, the last observed state of each catalog source is Ready. Expected result: [SUCCESS]
  - Install Plan Healthcheck: For operators in the operators project, the install plan approval for each operator is Automatic. Expected result: [SUCCESS]
  - Subscriptions Healthcheck: For subscriptions in the operators project, there is an installed CSV for each subscription. Expected result: [SUCCESS]
  - Persistent Volume Claim Healthcheck: For persistent volume claims (PVCs) in the operators project, each PVC is bound. Note: There should not be any PVCs in the operators project, so the test should be skipped. Expected result: [SKIP...]
  - Deployment Healthcheck: For deployments in the operators project, each deployment has the desired number of replicas. Expected result: [SUCCESS]
  - Namespace Scopes Healthcheck: For the NamespaceScope operator in the operators project, the projects that are specified in the members list exist. Expected result: [SUCCESS]
  - Stateful Set Healthcheck: For stateful sets in the operators project, the stateful sets have the desired number of replicas. Note: There should not be any stateful sets in the operators project, so the test should be skipped. Expected result: [SKIP...]
  - Common Services Healthcheck: For the common-service commonservice custom resource in the operators project, the phase of the custom resource is Succeeded. Expected result: [SUCCESS]
  - Custom Resource Healthcheck: For any other custom resources in the operators project, the phase of each custom resource is Succeeded. Note: There should not be any other custom resources in the operators project, so the test should be skipped. Expected result: [SKIP...]
  - Operand Requests Healthcheck: For operand requests in the operators project, the phase of each operand request is Running. Expected result: [SUCCESS]

- Check the health of the resources in the operands project:
  cpd-cli health operands \
    --control_plane_ns=${PROJECT_CPD_INST_OPERANDS}

  Confirm that the health check report returns the expected results:

  - Pod Healthcheck: For pods in the operands project, the status of each pod is Running. Expected result: [SUCCESS]
  - Pod Usage Healthcheck: For pods in the operands project, the resource use for each pod is within the CPU and memory limits. Expected result: [SUCCESS]
  - EDB Cluster Healthcheck: For EDB Postgres clusters in the operands project, the status of each cluster is Cluster in healthy state. Expected result: [SUCCESS]
  - Persistent Volume Claim Healthcheck: For persistent volume claims (PVCs) in the operands project, each PVC is bound. Expected result: [SUCCESS]
  - Deployment Healthcheck: For deployments in the operands project, each deployment has the desired number of replicas. Expected result: [SUCCESS]
  - Stateful Set Healthcheck: For stateful sets in the operands project, the stateful sets have the desired number of replicas. Expected result: [SUCCESS]
  - Common Services Healthcheck: For the common-service commonservice custom resource in the operands project, the phase of the custom resource is Succeeded. Expected result: [SUCCESS]
  - Operand Requests Healthcheck: For operand requests in the operands project, the phase of each operand request is Running. Expected result: [SUCCESS]
  - Monitor Events Healthcheck: The platform monitors are not generating any Critical events. Expected result: [SUCCESS]
  - Custom Resource Healthcheck: For custom resources in the operands project, the phase of each custom resource is Succeeded. Expected result: [SUCCESS]
  - Platform Healthcheck: The pods for required platform microservices are Running. Expected result: [SUCCESS]

- Get the URL and default credentials of the web client:

  cpd-cli manage get-cpd-instance-details \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --get_admin_initial_credentials=true
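The health checks in the previous steps print one tagged result line per test. As a hedged sketch of how you might scan a saved report for anything other than [SUCCESS] or [SKIP...], the report content below is a fabricated example using the tags documented above:

```shell
# Save a fabricated health check report; a real report comes from
# redirecting the output of cpd-cli health operators / operands.
cat > /tmp/health-report.txt <<'EOF'
[SUCCESS] Pod Healthcheck
[SUCCESS] Deployment Healthcheck
[SKIP...] Persistent Volume Claim Healthcheck
EOF

# Any line that is neither [SUCCESS] nor [SKIP...] needs investigation
if grep -v -e '^\[SUCCESS\]' -e '^\[SKIP' /tmp/health-report.txt; then
  echo "Unexpected health check results found"
else
  echo "All health checks passed or were skipped"
fi
```

A filter like this can be useful when you run the health checks repeatedly, for example from a cron job, so that only unexpected results surface.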
What to do next
If you want to tether projects to this instance of IBM Software Hub, complete Tethering projects to the IBM Software Hub control plane.