Installing the required components for an instance of IBM Software Hub

To install an instance of IBM Software Hub, you must install the required operators and custom resources for the instance.

Installation phase
  • Setting up a client workstation
  • Setting up a cluster
  • Collecting required information
  • Preparing to run installs in a restricted network
  • Preparing to run installs from a private container registry
  • Preparing the cluster for IBM Software Hub
  • Preparing to install an instance of IBM Software Hub
  • You are here: Installing an instance of IBM Software Hub
  • Setting up the control plane
  • Installing solutions and services
Who needs to complete this task?

Instance administrator: An instance administrator can complete this task.

When do you need to complete this task?

Repeat as needed: If you plan to install multiple instances of IBM Software Hub, you must repeat this task for each instance that you plan to install.

Before you begin

Best practice: You can run the commands in this task exactly as written if you set up environment variables. For instructions, see Setting up installation environment variables.

Ensure that you source the environment variables before you run the commands in this task.
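
For example, a minimal environment variables script might look like the following sketch. The variable names are the ones that this task uses; the values are placeholders that you replace with the values for your environment:

# cpd_vars.sh (hypothetical file name): a minimal sketch with placeholder values
export VERSION=5.1.0                                  # placeholder release number
export PROJECT_CPD_INST_OPERATORS=cpd-inst-operators  # project for the instance operators
export PROJECT_CPD_INST_OPERANDS=cpd-inst-operands    # project for the instance operands
export PROJECT_CPD_INSTANCE_TETHERED_LIST=cpd-tether  # comma-separated tethered projects, if any
export STG_CLASS_BLOCK=ocs-storagecluster-ceph-rbd    # block storage class on your cluster
export STG_CLASS_FILE=ocs-storagecluster-cephfs       # file storage class on your cluster

Then source the script in each shell session:

source ./cpd_vars.sh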

About this task

Use the setup-instance command to install the required operators and custom resources for an instance of IBM Software Hub.

Important: The setup-instance commands in this topic include the --run_storage_tests option. It is strongly recommended that you run the command with this option to ensure that the storage in your environment meets the minimum requirements for performance and functionality.

If your storage does not meet the minimum requirements, you can remove the --run_storage_tests option to continue the installation. However, your environment is likely to encounter problems that are caused by your storage.
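
If the oc CLI is installed on your workstation, you can confirm that the storage classes that your environment variables reference exist on the cluster before you run the setup-instance command:

# List all storage classes, then confirm the two classes that this task uses
oc get sc
oc get sc "${STG_CLASS_BLOCK}" "${STG_CLASS_FILE}"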

Procedure

  1. Log the cpd-cli in to the Red Hat® OpenShift® Container Platform cluster:
    ${CPDM_OC_LOGIN}
    Remember: CPDM_OC_LOGIN is an alias for the cpd-cli manage login-to-ocp command.
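    For reference, the CPDM_OC_LOGIN alias is typically defined in your environment variables script along these lines (a sketch; OCP_USERNAME, OCP_PASSWORD, and OCP_URL are placeholders for your cluster details):

    export CPDM_OC_LOGIN="cpd-cli manage login-to-ocp --username=${OCP_USERNAME} --password=${OCP_PASSWORD} --server=${OCP_URL}"

    If the oc CLI is installed, you can confirm that you are logged in to the intended cluster with oc whoami --show-server.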
  2. Review the license terms for the software that you plan to install.
The licenses are available online. Run the appropriate commands for the licenses that you purchased:
    IBM Cloud Pak for Data Enterprise Edition
    cpd-cli manage get-license \
    --release=${VERSION} \
    --license-type=EE

    IBM Cloud Pak for Data Standard Edition
    cpd-cli manage get-license \
    --release=${VERSION} \
    --license-type=SE

    IBM Data Gate for watsonx
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=datagate \
    --license-type=DGWXD

    IBM Data Product Hub Cartridge
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=dataproduct \
    --license-type=DPH

    Data Replication

    Run the appropriate command based on the license that you purchased:

    IBM Data Replication Cartridge
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=replication \
    --license-type=IDRC

    IBM InfoSphere® Data Replication Cartridge
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=replication \
    --license-type=IIDRC

    IBM Data Replication Modernization
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=replication \
    --license-type=IDRM

    IBM InfoSphere Data Replication Modernization
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=replication \
    --license-type=IIDRM

    IBM Data Replication for Db2® z/OS® Cartridge
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=replication \
    --license-type=IDRZOS

    IBM InfoSphere Data Replication for watsonx.data™ Cartridge
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=replication \
    --license-type=IIDRWXTO

    IBM InfoSphere Data Replication Cartridge Add-on for IBM watsonx.data
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=replication \
    --license-type=IIDRWXAO

    Db2

    Run the appropriate command based on the license that you purchased:

    IBM Db2 Standard Edition Cartridge for IBM Cloud Pak for Data
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=db2oltp \
    --license-type=DB2SE

    IBM Db2 Advanced Edition Cartridge for IBM Cloud Pak for Data
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=db2oltp \
    --license-type=DB2AE

    IBM Knowledge Catalog Premium
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=ikc_premium \
    --license-type=IKCP

    IBM Knowledge Catalog Standard
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=ikc_standard \
    --license-type=IKCS

    Synthetic Data Generator
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=syntheticdata \
    --license-type=WXAI

    IBM watsonx.ai
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=watsonx_ai \
    --license-type=WXAI

    IBM watsonx.data
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=watsonx_data \
    --license-type=WXD
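
    For example, if you purchased IBM Cloud Pak for Data Enterprise Edition and IBM watsonx.ai, review both licenses by running the two corresponding commands from this list:

    cpd-cli manage get-license --release=${VERSION} --license-type=EE
    cpd-cli manage get-license --release=${VERSION} --component=watsonx_ai --license-type=WXAI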

  3. Install the required components for an instance of IBM Software Hub:
    Tip: Before you run this command against your cluster, you can preview the oc commands that this command will issue on your behalf by running the command with the --preview=true option.

    The oc commands are saved to the preview.sh file in the work directory.
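
    For example, one way to preview a Red Hat OpenShift Data Foundation installation without tethered projects is to take the first command in this step and replace --run_storage_tests=true with --preview=true (a sketch; all of the options shown are the ones used later in this step):

    cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --preview=true

    Inspect the generated preview.sh file before you run the command for real.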

    The command that you run depends on the storage on your cluster and whether you plan to use tethered projects:

    Red Hat OpenShift Data Foundation storage
    Instances without tethered projects
    cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true
    Instances with tethered projects
    cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --tethered_ns=${PROJECT_CPD_INSTANCE_TETHERED_LIST} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true

    IBM Fusion Data Foundation storage
    Instances without tethered projects
    cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true
    Instances with tethered projects
    cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --tethered_ns=${PROJECT_CPD_INSTANCE_TETHERED_LIST} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true

    IBM Fusion Global Data Platform storage

    When you use IBM Fusion storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class, typically ibm-spectrum-scale-sc or ibm-storage-fusion-cp-sc.

    Instances without tethered projects
    cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true
    Instances with tethered projects
    cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --tethered_ns=${PROJECT_CPD_INSTANCE_TETHERED_LIST} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true

    IBM Storage Scale Container Native storage

    When you use IBM Storage Scale Container Native storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class, typically ibm-spectrum-scale-sc.

    Instances without tethered projects
    cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true
    Instances with tethered projects
    cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --tethered_ns=${PROJECT_CPD_INSTANCE_TETHERED_LIST} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true

    Portworx storage
    Instances without tethered projects
    cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --storage_vendor=portworx \
    --run_storage_tests=true
    Instances with tethered projects
    cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --tethered_ns=${PROJECT_CPD_INSTANCE_TETHERED_LIST} \
    --storage_vendor=portworx \
    --run_storage_tests=true

    NFS storage

    When you use NFS storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class, typically managed-nfs-storage.

    Instances without tethered projects
    cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true
    Instances with tethered projects
    cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --tethered_ns=${PROJECT_CPD_INSTANCE_TETHERED_LIST} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true

    AWS EFS storage only

    When you use only EFS storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class, typically efs-nfs-client.

    Instances without tethered projects
    cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true
    Instances with tethered projects
    cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --tethered_ns=${PROJECT_CPD_INSTANCE_TETHERED_LIST} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true

    AWS EFS and EBS storage
    Instances without tethered projects
    cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true
    Instances with tethered projects
    cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --tethered_ns=${PROJECT_CPD_INSTANCE_TETHERED_LIST} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true

    NetApp Trident storage

    When you use NetApp Trident storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class, typically ontap-nas.

    Instances without tethered projects
    cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true
    Instances with tethered projects
    cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --tethered_ns=${PROJECT_CPD_INSTANCE_TETHERED_LIST} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true

    Nutanix storage
    Instances without tethered projects
    cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true
    Instances with tethered projects
    cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --tethered_ns=${PROJECT_CPD_INSTANCE_TETHERED_LIST} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true

    Wait for the cpd-cli to return the following message before proceeding to the next step:
    [SUCCESS] ... The setup-instance command ran successfully.
  4. Confirm that the status of the operands is Completed:
    cpd-cli manage get-cr-status \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS}
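    Tip: If the component list is long, one way to spot entries that are not yet Completed is to filter the output with standard grep (a sketch; the exact column layout of the output can vary by release):

    cpd-cli manage get-cr-status \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} | grep -v Completed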
  5. Check the health of the resources in the operators project:
    cpd-cli health operators \
    --operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --control_plane_ns=${PROJECT_CPD_INST_OPERANDS}
    Confirm that the health check report returns the expected results:
    • Pod Healthcheck: For pods in the operators project, the status of each required pod is Running. Expected result: [SUCCESS]
    • Pod Usage Healthcheck: For pods in the operators project, the resource use for each pod is within the CPU and memory limits. Expected result: [SUCCESS]
    • Cluster Service Versions Healthcheck: For cluster service versions (CSVs) in the operators project, the phase of each CSV is Succeeded. Expected result: [SUCCESS]
    • Catalog Source Healthcheck: For catalog sources in the operators project, the last observed state of each catalog source is Ready. Expected result: [SUCCESS]
    • Install Plan Healthcheck: For operators in the operators project, the install plan approval for each operator is Automatic. Expected result: [SUCCESS]
    • Subscriptions Healthcheck: For subscriptions in the operators project, there is an installed CSV for each subscription. Expected result: [SUCCESS]
    • Persistent Volume Claim Healthcheck: For persistent volume claims (PVCs) in the operators project, each PVC is bound. Expected result: [SKIP...] Note: There should not be any PVCs in the operators project, so the test should be skipped.
    • Deployment Healthcheck: For deployments in the operators project, each deployment has the desired number of replicas. Expected result: [SUCCESS]
    • Namespace Scopes Healthcheck: For the NamespaceScope operator in the operators project, the projects that are specified in the members list exist. Expected result: [SUCCESS]
    • Stateful Set Healthcheck: For stateful sets in the operators project, the stateful sets have the desired number of replicas. Expected result: [SKIP...] Note: There should not be any stateful sets in the operators project, so the test should be skipped.
    • Common Services Healthcheck: For the common-service commonservice custom resource in the operators project, the phase of the custom resource is Succeeded. Expected result: [SUCCESS]
    • Custom Resource Healthcheck: For any other custom resources in the operators project, the phase of each custom resource is Succeeded. Expected result: [SKIP...] Note: There should not be any other custom resources in the operators project, so the test should be skipped.
    • Operand Requests Healthcheck: For operand requests in the operators project, the phase of each operand request is Running. Expected result: [SUCCESS]
  6. Check the health of the resources in the operands project:
    cpd-cli health operands \
    --control_plane_ns=${PROJECT_CPD_INST_OPERANDS}
    Confirm that the health check report returns the expected results:
    • Pod Healthcheck: For pods in the operands project, the status of each pod is Running. Expected result: [SUCCESS]
    • Pod Usage Healthcheck: For pods in the operands project, the resource use for each pod is within the CPU and memory limits. Expected result: [SUCCESS]
    • EDB Cluster Healthcheck: For EDB Postgres clusters in the operands project, the status of each cluster is Cluster in healthy state. Expected result: [SUCCESS]
    • Persistent Volume Claim Healthcheck: For persistent volume claims (PVCs) in the operands project, each PVC is bound. Expected result: [SUCCESS]
    • Deployment Healthcheck: For deployments in the operands project, each deployment has the desired number of replicas. Expected result: [SUCCESS]
    • Stateful Set Healthcheck: For stateful sets in the operands project, the stateful sets have the desired number of replicas. Expected result: [SUCCESS]
    • Common Services Healthcheck: For the common-service commonservice custom resource in the operands project, the phase of the custom resource is Succeeded. Expected result: [SUCCESS]
    • Operand Requests Healthcheck: For operand requests in the operands project, the phase of each operand request is Running. Expected result: [SUCCESS]
    • Monitor Events Healthcheck: The platform monitors are not generating any Critical events. Expected result: [SUCCESS]
    • Custom Resource Healthcheck: For custom resources in the operands project, the phase of each custom resource is Succeeded. Expected result: [SUCCESS]
    • Platform Healthcheck: The pods for required platform microservices are Running. Expected result: [SUCCESS]
  7. Get the URL and default credentials of the web client:
    cpd-cli manage get-cpd-instance-details \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --get_admin_initial_credentials=true

What to do next

If you want to tether projects to this instance of IBM Software Hub, complete Tethering projects to the IBM Software Hub control plane.

If you don't want to tether projects to this instance of IBM Software Hub, complete Setting up IBM Software Hub.