Installing Red Hat OpenShift AI

Several IBM® Software Hub services require Red Hat OpenShift AI. If you plan to install services that run inference against foundation models, you must install Red Hat OpenShift AI to start and serve those models.

Installation phase
  • Setting up a client workstation
  • Setting up a cluster
  • Collecting required information
  • Preparing to run installs in a restricted network
  • Preparing to run installs from a private container registry
  • Preparing the cluster for IBM Software Hub (you are here)
  • Preparing to install an instance of IBM Software Hub
  • Installing an instance of IBM Software Hub
  • Setting up the control plane
  • Installing solutions and services
Who needs to complete this task?

A cluster administrator must complete this task.

When do you need to complete this task?

One-time setup. Complete this task if you plan to install one or more of the following services:

  • IBM Knowledge Catalog Premium *
  • IBM Knowledge Catalog Standard *
  • Watson Speech services *
  • watsonx.ai™
  • watsonx Assistant *
  • watsonx BI
  • watsonx Code Assistant™
  • watsonx Code Assistant for Red Hat Ansible® Lightspeed
  • watsonx Code Assistant for Z
  • watsonx Code Assistant for Z Agentic
  • watsonx Code Assistant for Z Code Explanation
  • watsonx Code Assistant for Z Code Generation
  • watsonx.data™ Premium
  • watsonx.data intelligence
  • watsonx™ Orchestrate *

An asterisk (*) indicates that the service requires Red Hat OpenShift AI only in some situations, as described in the following section.

About this task

The commands in this task install Red Hat OpenShift AI with the minimum components needed to support IBM Software Hub services.

Review the following list to determine whether you need to install Red Hat OpenShift AI based on the services that you plan to install:

  • IBM Knowledge Catalog Premium: Required if you run the models on GPU. Not required if you run the models on CPU or on a remote instance of watsonx.ai.
  • IBM Knowledge Catalog Standard: Required if you run the models on GPU. Not required if you run the models on CPU or on a remote instance of watsonx.ai.
  • Watson Speech services: Required only if you want to enable enrichment.
  • watsonx.ai: Always required.
  • watsonx Assistant: Required only if you want to use features that require GPU.
  • watsonx BI: Always required.
  • watsonx Code Assistant: Always required.
  • watsonx Code Assistant for Red Hat Ansible Lightspeed: Always required.
  • watsonx Code Assistant for Z: Always required.
  • watsonx Code Assistant for Z Agentic: Always required.
  • watsonx Code Assistant for Z Code Explanation: Always required.
  • watsonx Code Assistant for Z Code Generation: Always required.
  • watsonx.data Premium: Always required.
  • watsonx.data intelligence: Always required.
  • watsonx Orchestrate: Required if you run models locally. Not required if you use the AI gateway to access third-party models.

Use the following information to determine which version of Red Hat OpenShift AI you need to install:

  • IBM Software Hub 5.3.0 (December 2025): Red Hat OpenShift AI 2.25
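
Tip: If the Red Hat OpenShift AI Operator is already installed on the cluster, you can check which version is present before you continue. A minimal check, assuming that the operator is installed in the redhat-ods-operator project:

oc get csv -n redhat-ods-operator

The installed version appears in the NAME and VERSION columns of the rhods-operator cluster service version (CSV).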

Procedure

  1. Log in to Red Hat OpenShift Container Platform as a cluster administrator.
    ${OC_LOGIN}
    Remember: OC_LOGIN is an alias for the oc login command.
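    Tip: The exact oc login command that the alias runs depends on how your client workstation was set up. A typical definition, assuming hypothetical OCP_URL, OCP_USERNAME, and OCP_PASSWORD environment variables that point to your cluster and credentials:

    oc login ${OCP_URL} --username=${OCP_USERNAME} --password=${OCP_PASSWORD}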
  2. Create the redhat-ods-operator project:
    oc new-project redhat-ods-operator

    The command returns the following response:

    namespace/redhat-ods-operator created
  3. Create the rhods-operator operator group in the redhat-ods-operator project:
    cat <<EOF |oc apply -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: rhods-operator
      namespace: redhat-ods-operator
    EOF

    The command returns the following response:

    operatorgroup.operators.coreos.com/rhods-operator created
  4. Set the CHANNEL_VERSION environment variable based on the version of Red Hat OpenShift AI that you are installing:

    Version 2.25
    export CHANNEL_VERSION=stable-2.25
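
    Tip: Before you create the subscription, you can confirm that the channel exists in the catalog. A quick check, assuming that the rhods-operator package is served by the redhat-operators catalog source:

    oc get packagemanifest rhods-operator -n openshift-marketplace -o jsonpath='{.status.channels[*].name}{"\n"}'

    The output lists the available channels and should include the channel that you set in CHANNEL_VERSION.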

  5. Create the rhods-operator operator subscription in the redhat-ods-operator project:
    cat <<EOF |oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: rhods-operator
      namespace: redhat-ods-operator
    spec:
      name: rhods-operator
      channel: ${CHANNEL_VERSION}
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      config:
         env:
            - name: "DISABLE_DSC_CONFIG"
    EOF

    The command returns the following response:

    subscription.operators.coreos.com/rhods-operator created
  6. Check the status of the rhods-operator-* pod in the redhat-ods-operator project:
    oc get pods -n redhat-ods-operator

    Confirm that the pod is Running. The command returns a response with the following format:

    NAME                              READY   STATUS    RESTARTS   AGE
    rhods-operator-56c85d44c9-vtk74   1/1     Running   0          3h57m
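
    Tip: Instead of repeatedly checking the pod status, you can wait for the operator rollout to finish. A sketch, assuming that the operator Deployment is named rhods-operator:

    oc rollout status deployment/rhods-operator -n redhat-ods-operator --timeout=300s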
  7. Create a DSC Initialization (DSCInitialization) object named default-dsci, which configures monitoring in the redhat-ods-monitoring project:
    cat <<EOF |oc apply -f -
    apiVersion: dscinitialization.opendatahub.io/v1
    kind: DSCInitialization
    metadata:
      name: default-dsci
    spec:
      applicationsNamespace: redhat-ods-applications
      monitoring:
        managementState: Managed
        namespace: redhat-ods-monitoring
      serviceMesh:
        managementState: Removed
      trustedCABundle:
        managementState: Managed
        customCABundle: ""
    EOF
  8. Check the phase of the DSC Initialization (DSCInitialization) object:
    oc get dscinitialization

    Confirm that the object is Ready. The command returns a response with the following format:

    NAME           AGE     PHASE
    default-dsci   4d18h   Ready
  9. Create a Data Science Cluster (DataScienceCluster) object named default-dsc:
    cat <<EOF |oc apply -f -
    apiVersion: datasciencecluster.opendatahub.io/v1
    kind: DataScienceCluster
    metadata:
      name: default-dsc
    spec:
      components:
        codeflare:
          managementState: Removed
        dashboard:
          managementState: Removed
        datasciencepipelines:
          managementState: Removed
        kserve:
          managementState: Managed
          defaultDeploymentMode: RawDeployment
          serving:
            managementState: Removed
            name: knative-serving
        kueue:
          managementState: Removed
        modelmeshserving:
          managementState: Removed
        ray:
          managementState: Removed
        trainingoperator:
          managementState: Managed
        trustyai:
          managementState: Removed
        workbenches:
          managementState: Removed
    EOF

    The Red Hat OpenShift AI Operator installs and manages the services that are listed as Managed. Services that are Removed are not installed.
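
    Tip: To review which components the operator reports as installed, you can inspect the status of the object. A sketch that assumes the operator populates the installedComponents status field (otherwise, use oc describe datasciencecluster default-dsc):

    oc get datasciencecluster default-dsc -o jsonpath='{.status.installedComponents}{"\n"}'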

  10. Wait for the Data Science Cluster object to be Ready.
    To check the status of the object, run:
    oc get datasciencecluster default-dsc -o jsonpath='{.status.phase}{"\n"}'
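
    Alternatively, you can block until the object reports the Ready phase. A sketch, assuming that your oc client supports JSONPath conditions for the wait command:

    oc wait datasciencecluster default-dsc --for=jsonpath='{.status.phase}'=Ready --timeout=600s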
  11. Confirm that the following pods in the redhat-ods-applications project are in the Running state:
    • kserve-controller-manager-* pod
    • kubeflow-training-operator-* pod
    • odh-model-controller-* pod
    oc get pods -n redhat-ods-applications

    The command returns a response with the following format:

    NAME                                         READY   STATUS      RESTARTS   AGE
    kserve-controller-manager-57796d5b44-sh9n5   1/1     Running     0          4m57s
    kubeflow-training-operator-7b99d5584c-rh5hb  1/1     Running     0          4m57s
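
    Tip: To quickly identify pods in the project that are not yet Running, you can filter on the pod phase. For example:

    oc get pods -n redhat-ods-applications --field-selector=status.phase!=Running

    Pods that completed successfully are also listed because their phase is Succeeded rather than Running.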
  12. Edit the inferenceservice-config configuration map in the redhat-ods-applications project:
    1. Log in to the Red Hat OpenShift Container Platform web console as a cluster administrator.
    2. From the navigation menu, select Workloads > Configmaps.
    3. From the Project list, select redhat-ods-applications.
    4. Click the inferenceservice-config resource. Then, open the YAML tab.
    5. In the metadata.annotations section of the file, add opendatahub.io/managed: 'false':
      metadata:
        annotations:
          internal.config.kubernetes.io/previousKinds: ConfigMap
          internal.config.kubernetes.io/previousNames: inferenceservice-config
          internal.config.kubernetes.io/previousNamespaces: opendatahub
          opendatahub.io/managed: 'false'
    6. Find the following entry in the file:
      "domainTemplate": "{{ .Name }}-{{ .Namespace }}.{{ .IngressDomain }}",
    7. Update the value of the domainTemplate field to "example.com":
      "domainTemplate": "example.com",
    8. Click Save.
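
    If you prefer to make these changes from the command line instead of the web console, the following sketch shows one way to do it. It assumes that the domainTemplate value is stored in the ingress JSON document under the data section of the config map:

    oc annotate configmap inferenceservice-config -n redhat-ods-applications opendatahub.io/managed='false' --overwrite
    oc get configmap inferenceservice-config -n redhat-ods-applications -o jsonpath='{.data.ingress}' > ingress.json
    # Edit ingress.json so that "domainTemplate" is set to "example.com", then apply the change:
    oc set data configmap/inferenceservice-config -n redhat-ods-applications --from-file=ingress=ingress.json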

What to do next

Now that you've installed Red Hat OpenShift AI, you're ready to complete Installing and setting up Multicloud Object Gateway for IBM Software Hub.