Specifying installation options for services

Some services have required or optional settings that you must specify when you install them. Create an install-options.yml file to specify the installation options for the services that you plan to install.

Installation phase
  • Setting up a client workstation
  • Setting up a cluster
  • Collecting required information
  • Preparing to run installs in a restricted network
  • Preparing to run installs from a private container registry
  • Preparing the cluster for IBM Software Hub
  • Preparing to install an instance of IBM Software Hub
  • Installing an instance of IBM Software Hub
  • Setting up the control plane
  • You are here: Installing solutions and services
Who needs to complete this task?

Instance administrator An instance administrator can complete this task.

When do you need to complete this task?

Repeat as needed Complete this task for each instance of IBM Software Hub on your cluster. The options that you specify depend on the services you install in each instance.

Which services have additional installation options?

Service Installation options Required?
AI Factsheets No installation options.  
Analytics Engine powered by Apache Spark Yes. See Analytics Engine powered by Apache Spark parameters. Optional.
Cognos Analytics No installation options.  
Cognos Dashboards No installation options.  
Data Gate No installation options.  
Data Privacy No installation options.  
Data Product Hub No installation options.  
Data Refinery No. Data Refinery cannot be separately installed.  
Data Replication Yes. See Data Replication parameters. Required.
DataStage No installation options.  
Data Virtualization No installation options.  
Db2 No installation options.  
Db2 Big SQL No installation options.  
Db2 Data Management Console No installation options.  
Db2 Warehouse No installation options.  
Decision Optimization No installation options.  
EDB Postgres No installation options.  
Execution Engine for Apache Hadoop No installation options.  
IBM Knowledge Catalog Yes. See IBM Knowledge Catalog parameters. Optional.
IBM Knowledge Catalog Premium Yes. See IBM Knowledge Catalog parameters. Optional.
IBM Knowledge Catalog Standard Yes. See IBM Knowledge Catalog parameters. Optional.
IBM Manta Data Lineage Yes. See IBM Knowledge Catalog parameters. Required.
IBM Match 360 Yes. See IBM Match 360 parameters. Optional.
IBM StreamSets No installation options.  
Informix Yes. See Informix parameters. Optional.
MANTA Automated Data Lineage No installation options.  
MongoDB No installation options.  
OpenPages No installation options.  
Orchestration Pipelines Yes. See Orchestration Pipelines parameters. Optional. Required only if you use OpenSSH and Db2 in Bash scripts in pipelines.
Planning Analytics No installation options.  
Product Master No installation options.  
RStudio® Server Runtimes No installation options.  
SPSS Modeler No installation options.  
Synthetic Data Generator No installation options.  
Unstructured Data Integration No installation options.  
Voice Gateway Yes. See Voice Gateway parameters. Optional.
Watson Discovery Yes. See Watson Discovery parameters. Optional.
Watson Machine Learning No installation options.  
Watson OpenScale No installation options.  
Watson Speech services Yes. See Watson Speech services parameters. Optional.
Watson Studio No installation options.  
Watson Studio Runtimes No installation options.  
watsonx.ai™ Yes. See watsonx.ai parameters. Optional.
watsonx Assistant Yes. See watsonx Assistant parameters. Optional.
watsonx™ BI Yes. See watsonx BI parameters. Required.
watsonx Code Assistant™ No installation options.  
watsonx Code Assistant for Red Hat® Ansible® Lightspeed No installation options.  
watsonx Code Assistant for Z No installation options.  
watsonx Code Assistant for Z Agentic No installation options.  
watsonx Code Assistant for Z Code Explanation No installation options.  
watsonx Code Assistant for Z Code Generation No installation options.  
watsonx.data™ Yes. See watsonx.data parameters. Required.
watsonx.data Premium Yes. See watsonx.data Premium parameters. Required.
watsonx.data intelligence Yes. See watsonx.data intelligence parameters. Optional.
watsonx.governance™ Yes. See watsonx.governance parameters. Required.
watsonx Orchestrate Yes. See watsonx Orchestrate parameters. Required.

Installation parameter file

  1. Create a file called install-options.yml in the cpd-cli work directory.

    Depending on your settings, the work directory is in one of the following locations:

    Default location
    • If you made the cpd-cli executable from any directory, the path to the directory is:

      <current-directory>/cpd-cli-workspace/olm-utils-workspace/work

    • If you did not make the cpd-cli executable from any directory, the path to the directory is:

      <cli-install-directory>/cpd-cli-workspace/olm-utils-workspace/work

    Custom location
    If you set the CPD_CLI_MANAGE_WORKSPACE environment variable, the path to the directory is:

    ${CPD_CLI_MANAGE_WORKSPACE}/work

  2. Review the following sections and add parameters to the install-options.yml file based on the services that you plan to install:
  3. Save your changes.
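The steps above can be sketched as shell commands. This is a minimal sketch that assumes the default work-directory location; if you set the CPD_CLI_MANAGE_WORKSPACE environment variable, use ${CPD_CLI_MANAGE_WORKSPACE}/work instead.

```shell
# Create an empty install-options.yml in the default cpd-cli work directory.
# The path assumes you run cpd-cli from the current directory.
WORK_DIR="./cpd-cli-workspace/olm-utils-workspace/work"
mkdir -p "${WORK_DIR}"
touch "${WORK_DIR}/install-options.yml"
echo "Created ${WORK_DIR}/install-options.yml"
```

You can then open the file in a text editor and add the parameter blocks for the services that you plan to install.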

Analytics Engine powered by Apache Spark parameters

If you plan to install Analytics Engine powered by Apache Spark, you can specify the following installation options in a file named install-options.yml in the cpd-cli work directory (For example: cpd-cli-workspace/olm-utils-workspace/work).

The parameters are optional. If you do not set these installation parameters, the default values are used. Uncomment the parameters that you want to override and update the values appropriately.

The sample YAML content uses the default values.

################################################################################
# Analytics Engine powered by Apache Spark parameters
################################################################################

# ------------------------------------------------------------------------------
# Analytics Engine powered by Apache Spark service configuration parameters
# ------------------------------------------------------------------------------
#analyticsengine_spark_adv_enabled: true
#analyticsengine_job_auto_delete_enabled: true
#analyticsengine_kernel_cull_time: 30
#analyticsengine_image_pull_parallelism: "40"
#analyticsengine_image_pull_completions: "20"
#analyticsengine_kernel_cleanup_schedule: "*/30 * * * *"
#analyticsengine_job_cleanup_schedule: "*/30 * * * *"
#analyticsengine_skip_selinux_relabeling: false
#analyticsengine_mount_customizations_from_cchome: false

# ------------------------------------------------------------------------------
# Spark runtime configuration parameters
# ------------------------------------------------------------------------------
#analyticsengine_max_driver_cpu_cores: 5          # The number of CPUs to allocate to the Spark jobs driver. The default is 5.  
#analyticsengine_max_executor_cpu_cores: 5        # The number of CPUs to allocate to the Spark jobs executor. The default is 5.
#analyticsengine_max_driver_memory: "50g"         # The amount of memory, in gigabytes, to allocate to the driver. The default is 50g.
#analyticsengine_max_executor_memory: "50g"       # The amount of memory, in gigabytes, to allocate to the executor. The default is 50g.
#analyticsengine_max_num_workers: 50              # The number of workers (also called executors) to allocate to Spark jobs. The default is 50.
#analyticsengine_local_dir_scale_factor: 10       # The number that is used to calculate the temporary disk size on Spark nodes. The formula is temp_disk_size = number_of_cpu * local_dir_scale_factor. The default is 10.
Analytics Engine powered by Apache Spark service configuration parameters

The service configuration parameters determine how the Analytics Engine powered by Apache Spark service behaves.

Property Description
analyticsengine_spark_adv_enabled Specify whether to display the job UI.
Default value
true
Valid values
false
Do not display the job UI.
true
Display the job UI.
analyticsengine_job_auto_delete_enabled Specify whether to automatically delete jobs after they reach a terminal state, such as FINISHED or FAILED.
Default value
true
Valid values
true
Delete jobs after they reach a terminal state.
false
Retain jobs after they reach a terminal state.
analyticsengine_kernel_cull_time The amount of time, in minutes, that idle kernels are kept.
Default value
30
Valid values
An integer greater than 0.
analyticsengine_image_pull_parallelism The number of pods that are scheduled to pull the Spark image in parallel.

For example, if you have 100 nodes in the cluster, set:

  • analyticsengine_image_pull_completions: "100"
  • analyticsengine_image_pull_parallelism: "150"

In this example, at least 100 nodes will pull the image successfully with 150 pods pulling the image in parallel.

Default value
"40"
Valid values
An integer greater than or equal to 1.

Increase this value only if you have a very large cluster and you have sufficient network bandwidth and disk I/O to support more pulls in parallel.

analyticsengine_image_pull_completions The number of pods that must complete successfully for the image pull job to be complete.

For example, if you have 100 nodes in the cluster, set:

  • analyticsengine_image_pull_completions: "100"
  • analyticsengine_image_pull_parallelism: "150"

In this example, at least 100 nodes will pull the image successfully with 150 pods pulling the image in parallel.

Default value
"20"
Valid values
An integer greater than or equal to 1.

Increase this value only if you have a very large cluster and you have sufficient network bandwidth and disk I/O to support more pulls in parallel.

analyticsengine_kernel_cleanup_schedule Override the analyticsengine_kernel_cull_time setting for the kernel cleanup CronJob.

By default, the kernel cleanup CronJob runs every 30 minutes.

Default value
"*/30 * * * *"
Valid values
A string that uses the CronJob schedule syntax.
analyticsengine_job_cleanup_schedule Override the analyticsengine_kernel_cull_time setting for the job cleanup CronJob.

By default, the job cleanup CronJob runs every 30 minutes.

Default value
"*/30 * * * *"
Valid values
A string that uses the CronJob schedule syntax.
analyticsengine_skip_selinux_relabeling Specify whether to skip the SELinux relabeling.

To use this feature, you must create the required MachineConfig and RuntimeClass definitions. For more information, see Enabling MachineConfig and RuntimeClass definitions for certain properties.

Default value
false
Valid values
false
Do not skip the SELinux relabeling.
true
Skip the SELinux relabeling.
analyticsengine_mount_customizations_from_cchome Specify whether you want to enable custom drivers. These drivers must be mounted from the cc-home-pvc directory.

Common core services This feature is available only when the Cloud Pak for Data common core services are installed.

Default value
false
Valid values
false
You do not want to use custom drivers.
true
You want to enable custom drivers.
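For example, a minimal install-options.yml that retains job records, hides the job UI, and keeps idle kernels longer might look like the following sketch. The values are chosen for illustration; any parameter that you leave commented out keeps its default.

```yaml
################################################################################
# Analytics Engine powered by Apache Spark parameters
################################################################################
analyticsengine_spark_adv_enabled: false        # Do not display the job UI
analyticsengine_job_auto_delete_enabled: false  # Retain jobs in a terminal state
analyticsengine_kernel_cull_time: 60            # Keep idle kernels for 60 minutes
```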
Spark runtime configuration parameters

The runtime configuration parameters determine how the Spark runtimes generated by the Analytics Engine powered by Apache Spark service behave.

Property Description
analyticsengine_max_driver_cpu_cores The number of CPUs to allocate to the Spark jobs driver.
Default value
5
Valid values
An integer greater than or equal to 1.
analyticsengine_max_executor_cpu_cores The number of CPUs to allocate to the Spark jobs executor.
Default value
5
Valid values
An integer greater than or equal to 1.
analyticsengine_max_driver_memory The amount of memory, in gigabytes, to allocate to the driver.
Default value
"50g"
Valid values
An integer greater than or equal to 1.
analyticsengine_max_executor_memory The amount of memory, in gigabytes, to allocate to the executor.
Default value
"50g"
Valid values
An integer greater than or equal to 1.
analyticsengine_max_num_workers The number of workers (also called executors) to allocate to Spark jobs.
Default value
50
Valid values
An integer greater than or equal to 1.
analyticsengine_local_dir_scale_factor The number that is used to calculate the temporary disk size on Spark nodes.

The formula is:

temp_disk_size = number_of_cpu * local_dir_scale_factor
Default value
10
Valid values
An integer greater than or equal to 1.
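As a worked example of the formula, assume a Spark node with 4 CPUs and the default scale factor of 10 (the CPU count is an assumed value for illustration):

```python
# Worked example of the documented temporary disk size formula:
#   temp_disk_size = number_of_cpu * local_dir_scale_factor
number_of_cpu = 4            # assumed CPUs on a Spark node
local_dir_scale_factor = 10  # the documented default
temp_disk_size = number_of_cpu * local_dir_scale_factor
print(temp_disk_size)  # 40
```

Raising analyticsengine_local_dir_scale_factor therefore increases the temporary disk that is provisioned per CPU on each Spark node.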

Data Replication parameters

If you plan to install Data Replication, you must specify the following installation option in a file named install-options.yml in the cpd-cli work directory (For example: cpd-cli-workspace/olm-utils-workspace/work).

The parameter is required.

Replace <license> with the appropriate value for your environment.

################################################################################
# Data Replication parameters
################################################################################
replication_license_type: <license>
Parameter Description
replication_license_type Specify the license that you purchased.
Valid values:
IDRC
Specify this option if you purchased the IBM Data Replication Cartridge for IBM Software Hub.
IIDRC
Specify this option if you purchased the IBM InfoSphere Data Replication Cartridge for IBM Software Hub.
IDRM
Specify this option if you purchased IBM Data Replication Modernization.
IIDRM
Specify this option if you purchased IBM InfoSphere Data Replication Modernization.
IDRZOS
Specify this option if you purchased IBM Data Replication for Db2® z/OS® Cartridge.
IIDRWXTO
Specify this option if you purchased IBM InfoSphere® Data Replication for watsonx.data Cartridge.
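For example, if you purchased the IBM Data Replication Cartridge for IBM Software Hub, the completed entry would be:

```yaml
################################################################################
# Data Replication parameters
################################################################################
replication_license_type: IDRC
```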

IBM Knowledge Catalog parameters

If you plan to install IBM Knowledge Catalog, IBM Knowledge Catalog Premium, or IBM Knowledge Catalog Standard, you can specify installation options in a file named install-options.yml in the cpd-cli work directory (For example: cpd-cli-workspace/olm-utils-workspace/work).

The parameters are optional. If you do not set these installation parameters, the default values are used. Uncomment the parameters that you want to override and update the values appropriately.

The applicable parameters and default values depend on which service you install:

IBM Knowledge Catalog

The sample YAML content uses the default values.

################################################################################
# IBM Knowledge Catalog parameters
################################################################################
custom_spec:
  wkc:
#    enableDataQuality: False
#    enableKnowledgeGraph: False
#    useFDB: False
IBM Knowledge Catalog Premium

The sample YAML content uses the default values.

################################################################################
# IBM Knowledge Catalog parameters
################################################################################
custom_spec:
  wkc:
#    enableDataQuality: False
#    enableKnowledgeGraph: False
#    useFDB: False
#    enableAISearch: False
#    enableSemanticAutomation: False
#    enableSemanticEnrichment: True
#    enableSemanticEmbedding: False
#    enableTextToSql: False
#    enableModelsOn: 'cpu'
#    customModelTextToSQL: granite-3-3-8b-instruct
IBM Knowledge Catalog Standard

The sample YAML content uses the default values.

################################################################################
# IBM Knowledge Catalog parameters
################################################################################
custom_spec:
  wkc:
#    enableKnowledgeGraph: False
#    useFDB: False
#    enableAISearch: False
#    enableSemanticAutomation: False
#    enableSemanticEnrichment: True
#    enableSemanticEmbedding: False
#    enableTextToSql: False
#    enableModelsOn: 'cpu'
#    customModelTextToSQL: granite-3-3-8b-instruct
Property Description
enableDataQuality Specify whether to enable data quality features in projects.
Important: If you enable this feature, DataStage, specifically DataStage Enterprise, is automatically installed.

If you did not purchase a DataStage license, use of DataStage Enterprise is limited to creating, managing, and running data quality rules. For examples of accepted use, see Enabling optional features after installation or upgrade for IBM Knowledge Catalog.

Editions the setting applies to
  • IBM Knowledge Catalog
  • IBM Knowledge Catalog Premium
Default value
False
Valid values
False
Do not enable the data quality feature.
True
Enable the data quality feature.
enableKnowledgeGraph Specify whether to enable the knowledge graph feature. The knowledge graph provides the following capabilities:
  • Relationship explorer and business term relationship search
  • Lineage
    Important: Lineage requires IBM Manta Data Lineage or MANTA Automated Data Lineage.
Editions the setting applies to
  • IBM Knowledge Catalog
  • IBM Knowledge Catalog Premium
  • IBM Knowledge Catalog Standard
Default value
False
Valid values
False
Do not enable the knowledge graph feature.
True
Enable the knowledge graph feature.

If you set enableKnowledgeGraph: True, review useFDB.

useFDB Specify which database to use to store the data that is generated by the knowledge graph.
The database depends on which service you use for lineage:
  • For IBM Manta Data Lineage, use Neo4j:
    useFDB: false
  • For MANTA Automated Data Lineage, use FoundationDB:
    useFDB: true
Default value
False
Valid values
False
Do not use FoundationDB. Use Neo4j.

Required if you use IBM Manta Data Lineage.

True
Use FoundationDB.

Required if you use MANTA Automated Data Lineage.

enableAISearch Specify whether to enable LLM-based semantic search for assets and artifacts across all workspaces.
Default value
False
Valid values
False
Do not enable LLM-based semantic search.
True
Enable LLM-based semantic search.
enableSemanticAutomation Specify whether to enable gen AI features.
Editions the setting applies to
  • IBM Knowledge Catalog Premium
  • IBM Knowledge Catalog Standard
Default value
False
Valid values
False
Do not enable gen AI based features.
True
Enable gen AI based features.
enableSemanticEnrichment Specify whether to enable gen AI metadata expansion. Metadata expansion includes:
  • Table name expansion
  • Column name expansion
  • Description generation
Editions the setting applies to
  • IBM Knowledge Catalog Premium
  • IBM Knowledge Catalog Standard
Prerequisite
This feature requires semantic automation. You must set enableSemanticAutomation: true.
Default value
True
Valid values
False
Do not enable gen AI metadata expansion.
True
Enable gen AI metadata expansion.
enableSemanticEmbedding

5.2.1 and later This parameter is available starting in IBM Software Hub Version 5.2.1.

Specify whether to enable semantic embedding.

You must enable semantic embedding if you plan to use the following features:
  • Text to SQL
Editions the setting applies to
  • IBM Knowledge Catalog Premium
  • IBM Knowledge Catalog Standard
Prerequisite

This feature requires a GPU. You cannot run the required model on CPU.

In addition, this feature requires gen AI capabilities. You must set enableSemanticAutomation: true.

Default value
false
Valid values
false
Do not enable semantic embedding.
true
Enable semantic embedding.
enableTextToSql

5.2.1 and later This parameter is available starting in IBM Software Hub Version 5.2.1.

Specify whether to generate SQL queries from natural language input. Text-to-SQL capabilities can be used to create query-based data assets, which can be used for data products or in searches.

Editions the setting applies to
  • IBM Knowledge Catalog Premium
  • IBM Knowledge Catalog Standard
Prerequisite

This feature requires a GPU. You can choose where to run the required models:

  • To run the required models locally, set enableModelsOn: gpu
  • To run the required models on a remote instance of watsonx.ai, set enableModelsOn: remote

In addition, this feature requires the following settings:

  • Semantic embedding.

    You must set enableSemanticEmbedding: true.

Default value
false
Valid values
false
Do not convert natural language queries to SQL queries.
true
Convert natural language queries to SQL queries.
enableModelsOn Specify where you want the models that are used with the gen AI capabilities to run.
Editions the setting applies to
  • IBM Knowledge Catalog Premium
  • IBM Knowledge Catalog Standard
Prerequisite
This feature requires semantic automation. You must set enableSemanticAutomation: true.
Default value
'cpu'
Valid values
'cpu'
Run the foundation model on CPU.
Restriction: This option can be used only for expanding metadata and term assignment when enriching metadata (enableSemanticEnrichment: true).

This option is not supported for converting natural language queries to SQL queries (enableTextToSql: true).

'gpu'
Run the foundation model on GPU.

If you are upgrading the service and you want to continue to run the model on GPU, you must specify enableModelsOn: 'gpu'.

Important: If you use this setting, the inference foundation models component (watsonx_ai_ifm) is automatically installed.

This option requires at least one GPU. For information about supported GPUs, see GPU requirements for models.

'remote'
Run the foundation model on a remote instance of watsonx.ai. The instance can be running on:
  • Another on-premises instance of IBM Software Hub
  • IBM watsonx as a Service
Important: If you use this setting, you must:
  1. Ensure that the foundation model is available and running on the remote instance.
  2. Create a connection to the remote instance.

    For more information, see Enabling users to connect to an external IBM watsonx.ai foundation model in the Cloud Pak for Data documentation.

If the preceding requirements are not met, any tasks that rely on the model will fail.

customModelTextToSQL Specify a custom model for Text-To-SQL conversions.
Default model

By default, the Text-To-SQL feature uses the granite-3-8b-instruct model (ID: granite-3-8b-instruct).

Recommended model for better accuracy

You can improve the accuracy of results when converting plain text queries to SQL queries if you use the llama-3-3-70b-instruct model (ID: llama-3-3-70b-instruct).

However, this model requires significantly more resources than the granite-3-8b-instruct model. For more information about the resources required for each model, see GPU requirements for models.

Using other models

If you choose to use a different model, the accuracy of the results might vary.

Prerequisite

This option applies only to environments with local GPUs (enableModelsOn: gpu).

If you want to use a custom model on a remote instance of watsonx.ai (enableModelsOn: remote), see Enabling users to connect to an external IBM watsonx.ai foundation model in the Data Fabric documentation.

In addition, this feature requires the following settings:

  • Text-To-SQL conversions.

    You must set enableTextToSql: true.

Default value
granite-3-8b-instruct
Valid values
Specify the ID of the model that you want to use. The IDs of the recommended models are:
  • granite-3-8b-instruct
  • llama-3-3-70b-instruct
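Putting the preceding prerequisites together, the following sketch shows the custom_spec entries that enable Text-to-SQL on local GPUs for IBM Knowledge Catalog Premium. The values are illustrative; verify them against the parameter descriptions above before you install.

```yaml
################################################################################
# IBM Knowledge Catalog parameters
################################################################################
custom_spec:
  wkc:
    enableSemanticAutomation: True  # Gen AI features (prerequisite)
    enableSemanticEmbedding: True   # Required for Text to SQL
    enableTextToSql: True
    enableModelsOn: 'gpu'           # Text to SQL is not supported on CPU
```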

IBM Match 360 parameters

If you plan to install IBM Match 360, you can specify the following installation options in a file named install-options.yml in the cpd-cli work directory (For example: cpd-cli-workspace/olm-utils-workspace/work).

The parameters are optional. If you do not set these installation parameters, the default values are used. Uncomment the parameters that you want to override and update the values appropriately.

The sample YAML content uses the default values.

################################################################################
# IBM Match 360 with Watson parameters
################################################################################
#match360_scale_config: small
#match360_onboard_timeout: 300
#match360_ccs_http_timeout: 2000
Parameter Description
match360_scale_config Specify the size of the service.
Default value
small
Valid values
  • x-small
    Restriction: This size is valid only for proof of concept and demonstration installations.
  • small_mincpureq
  • small
  • medium
  • large

For detailed information about each size, refer to the component scaling guidance PDF.

match360_onboard_timeout The length of time, in seconds, before the onboarding process times out.

If your cluster is slow, increase this setting.

Default value
300
Valid values
An integer greater than or equal to 1.
match360_ccs_http_timeout The length of time, in seconds, before the connection to the Cloud Pak for Data common core services times out.

If your cluster is slow, increase this setting.

Default value
2000
Valid values
An integer greater than or equal to 1.
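For example, to install a medium-sized service on a cluster where onboarding is slow, you might override the defaults as follows (values chosen for illustration):

```yaml
################################################################################
# IBM Match 360 with Watson parameters
################################################################################
match360_scale_config: medium
match360_onboard_timeout: 600    # Double the default onboarding timeout
```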

Informix parameters

If you plan to install Informix, you can specify the following installation option in a file named install-options.yml in the cpd-cli work directory (For example: cpd-cli-workspace/olm-utils-workspace/work).

The parameter is optional. If you do not set this installation parameter, the default value is used. To override the default value, uncomment the parameter and update the value appropriately.

The sample YAML content uses the default values.

################################################################################
# Informix parameters
################################################################################
#informix_cp4d_edition: EE
Parameter Description
informix_cp4d_edition Specify the license that you purchased.
Default value
EE
Valid values:
AEE
Specify this option if you purchased Advanced Enterprise Edition.
EE
Specify this option if you purchased Enterprise Edition.
WE
Specify this option if you purchased Workgroup Edition.

Orchestration Pipelines parameters

If you plan to install Orchestration Pipelines, you can specify the following installation option in a file named install-options.yml in the cpd-cli work directory (For example: cpd-cli-workspace/olm-utils-workspace/work).

The parameter is optional. If you do not set this installation parameter, the default value is used. To override the default value, uncomment the parameter and update the value appropriately.

The sample YAML content uses the default values.

################################################################################
# Orchestration pipelines parameters
################################################################################
custom_spec:
  ws_pipelines:
#    rbsimage: rbs-ext
#    pythonImage: wsp-ts
Parameter Description
rbsimage Specify which image to use when running Bash scripts in Orchestration Pipelines.
Default value
rbs-ext
Valid values:
rbs-ext
Use an image that contains OpenSSH and Db2 binaries.

This image enables you to use secured channel communication with tools such as scp, ssh, and sftp.

run-bash-script
Use an image that does not include OpenSSH or Db2 binaries.
Important: If you use a private container registry, you must explicitly mirror the run-bash-script image to the private container registry.
pythonImage Specify which Python image to use in pipelines.
Default value
wsp-ts
Valid values:
wsp-ts
Use an image that contains a basic Python installation. The image does not include additional libraries.
pipelines-python-runtime
Use an image that contains additional Python libraries.

This option is required if you want to create pipelines that contain Python code that interacts directly with watsonx.ai.

Important: If you use a private container registry, you must explicitly mirror the pipelines-python-runtime image to the private container registry.
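For example, to use the Python image that includes additional libraries while keeping the default Bash script image, you might set the following (a sketch using the valid values listed above):

```yaml
################################################################################
# Orchestration pipelines parameters
################################################################################
custom_spec:
  ws_pipelines:
    pythonImage: pipelines-python-runtime
```

Remember that if you use a private container registry, the pipelines-python-runtime image must be explicitly mirrored to it.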

Voice Gateway parameters

If you plan to install Voice Gateway, you can specify the following installation options in a file named install-options.yml in the cpd-cli work directory (For example: cpd-cli-workspace/olm-utils-workspace/work).

The parameters are optional. If you do not set these installation parameters, the default values are used. Uncomment the parameters that you want to override and update the values appropriately.

The sample YAML content uses the default values.

################################################################################
# Voice Gateway parameters
################################################################################
voice_gateway_spec:
#
# ------------------------------------------------------------------------------
# Node selector parameters
# ------------------------------------------------------------------------------
#  nodeSelector:
#    key1: value
#    key2: value
# ------------------------------------------------------------------------------
# Toleration parameters
# ------------------------------------------------------------------------------
#  tolerations:
#  - key: "key-name" 
#    operator: "operator" 
#    value: "value" 
#    effect: "effect" 
# ------------------------------------------------------------------------------
# SSL configuration parameters
# ------------------------------------------------------------------------------
#  sslConfig:
#    disableSslCertValidation: false
#    mediaRelay:
#      enableSsl: false
#      sslClientCACertSecret: client-ca-cert-secret
#      enableMutualAuth: false
#      sslClientPkcs12FileSecret: ssl-client-pkcs12-file-secret
#      sslClientPassphraseSecret: ssl-client-passphrase-secret
#    sipOrchestrator:
#      enableSslorMutualAuth: false
#      sslKeyTrustStoreSecret: trust-store-file-secret
#      sslFileType: "JKS"
#      sslPassphraseSecret: ssl-passphrase-secret
# ------------------------------------------------------------------------------
# Port parameters
# ------------------------------------------------------------------------------
#  ports:
#    sipSignalingPortUdp: 5060
#    sipSignalingPortTcp: 5060
#    sipSignalingPortTls: 5061
#    sipOrchestratorHttpPort: 9086
#    sipOrchestratorHttpsPort: 9446
#    mediaRelayWsPort: 8080
#    rtpUdpPortRange: "16384-16394"
# ------------------------------------------------------------------------------
# Environment variable parameters
# ------------------------------------------------------------------------------
#  env:
#    sipOrchestrator:
#      - name: variable-name
#        value: "value"
#    mediaRelay:
#      - name: variable-name
#        value: "value"
# ------------------------------------------------------------------------------
# Storage parameters
# ------------------------------------------------------------------------------
#  storage:
#    recordings:
#      enablePersistentRecordings: false
#      storageClassName: ""
#      size: 15Gi
#    logs:
#      enablePersistentLogs: false
#      storageClassName: ""
#      size: 10Gi
# ------------------------------------------------------------------------------
# Container resource parameters
# ------------------------------------------------------------------------------
#  resources:
#    sipOrchestrator:
#      requests:
#        cpu: "1.0"
#        memory: 1Gi
#      limits:
#        cpu: "2.0"
#        memory: 2Gi
#    mediaRelay:
#      requests:
#        cpu: "1"
#        memory: 1Gi
#      limits:
#        cpu: "4"
#        memory: 4Gi
#    g729Codec:
#      requests:
#        cpu: "0.5"
#        memory: 0.5Gi
#      limits:
#        cpu: "1"
#        memory: 1Gi
# ------------------------------------------------------------------------------
# G729 Codec Service parameters
# ------------------------------------------------------------------------------
#  g729Codec:
#    enabled: false
#    logLevel: "INFO"
#    webSocketServerPort: 9001
# ------------------------------------------------------------------------------
# Media Resource Control Protocol parameters
# ------------------------------------------------------------------------------
#  mrcp:
#    enableMrcp: false
#    unimrcpConfigSecretName: unimrcp-config-secret
#    mrcpv2SipPort: 5555
Node selector parameters
Parameter Description
nodeSelector If you want Voice Gateway pods to run on specific nodes, you can add one or more node selectors to the nodeSelector block.

To use this feature, the nodes on your cluster must be labeled. For more information on node labels, see the Red Hat OpenShift® Container Platform documentation.

Default value
No default value. Node selectors are optional and are user-defined.
Valid values
Specify one or more node labels using key-value pairs with the format key: value.

Enter each key-value pair on a new line. The sample YAML file includes a formatting example.

Including this parameter
Ensure that you uncomment the nodeSelector line and any key-value pairs that you want to include. For example:
  nodeSelector:
    <key-name1>: <value1>
    <key-name2>: <value2>
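For example, if the nodes that should run Voice Gateway pods carry a hypothetical label such as voicegateway=true (applied with a command like oc label node <node-name> voicegateway=true), the matching node selector would be:
  nodeSelector:
    voicegateway: "true"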
Taint toleration parameters
Parameter Description
tolerations If you use taints to prevent pods from being scheduled on specific nodes, you can add one or more taint tolerations to the tolerations block.

To use this feature, the nodes on your cluster must be tainted. For more information on taints and taint tolerations, see the Red Hat OpenShift Container Platform documentation.

Default value
No default value. Taint tolerations are optional and are user-defined.
Valid values
Specify one or more taint tolerations in list format. The sample YAML file includes a formatting example.

Enter each taint toleration as a new list item.

A toleration typically includes a key, an operator, a value, and an effect. The sample YAML file includes a formatting example.

Including this parameter
Ensure that you uncomment the tolerations line and any list items that you want to include. For example:
  tolerations:
  - key: "<key-name>" 
    operator: "<operator>" 
    value: "<value>" 
    effect: "<effect>"
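For example, if the dedicated nodes carry a hypothetical taint such as dedicated=voicegateway:NoSchedule, a toleration that allows Voice Gateway pods onto those nodes would be:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "voicegateway"
    effect: "NoSchedule"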
SSL configuration parameters
Parameter Description
disableSslCertValidation Specify whether Voice Gateway should disable SSL certificate validation.
Default value
false
Valid values
false
Do not disable SSL certificate validation. The service will validate SSL certificates.

Set disableSslCertValidation: false if you plan to replace the default, self-signed TLS certificate that is shipped with Cloud Pak for Data with a certificate that is signed by a certificate authority.

true
Disable SSL certificate validation. The service will not validate SSL certificates.

Set disableSslCertValidation: true if you plan to use a self-signed TLS certificate.

Including this parameter
Ensure that you uncomment the following lines:
  sslConfig:
    disableSslCertValidation: true
enableSsl (Media Relay) Specify whether you want the Media Relay microservice to establish SSL connections to Watson Speech to Text and Watson Text to Speech.
Default value
false
Valid values
false
Do not enable SSL connections to the servers.
true
Enable SSL connections to the servers.
If you want to use this option, you must create the following secret in the PROJECT_CPD_INST_OPERANDS project:
  • A secret that contains the CA certificate file. Specify the name of the secret in the sslClientCACertSecret parameter.
Including this parameter
Ensure that you uncomment the following lines and specify the appropriate value for the sslClientCACertSecret parameter:
  sslConfig:
    mediaRelay:
      enableSsl: true
      sslClientCACertSecret: <my-secret-name>
sslClientCACertSecret (Media Relay) Specify the name of the secret that contains the CA certificate file that is used when enableSsl is set to true.
To create the secret that contains the CA certificate file, run the following commands:
  1. Set the RELAY_CA_FILE environment variable to the name of the CA certificate PEM file:
    RELAY_CA_FILE=<fully-qualified-pem-file-name>
  2. Create the secret. The following command uses the recommended name, client-ca-cert-secret. You can change the name if it will conflict with another secret in your environment.
    oc create secret generic client-ca-cert-secret \
    --from-file=clientCaCertFile=${RELAY_CA_FILE} \
    --namespace=${PROJECT_CPD_INST_OPERANDS}
Default value
client-ca-cert-secret

If you don't specify the sslClientCACertSecret parameter, the default secret name is used.

Valid values
The name of the secret that contains the CA certificate file.
Including this parameter
Ensure that you uncomment the following lines and specify the appropriate value for the sslClientCACertSecret parameter:
  sslConfig:
    mediaRelay:
      enableSsl: true
      sslClientCACertSecret: <my-secret-name>
enableMutualAuth (Media Relay) Specify whether to enable mutual authentication between the client server and the Media Relay microservice.
Default value
false
Valid values
false
Do not enable mutual authentication.
true
Enable mutual authentication.
If you want to use this option, you must create the following secrets in the PROJECT_CPD_INST_OPERANDS project:
  • A secret that contains the SSL keystore. Specify the name of the secret in the sslClientPkcs12FileSecret parameter.
  • A secret that contains the SSL passphrase. Specify the name of the secret in the sslClientPassphraseSecret parameter.
Including this parameter
Ensure that you uncomment the following lines and specify the appropriate values for the parameters:
  sslConfig:
    mediaRelay:
      enableMutualAuth: true
      sslClientPkcs12FileSecret: <my-secret-name>
      sslClientPassphraseSecret: <my-secret-name>
sslClientPkcs12FileSecret (Media Relay) Specify the name of the secret that contains the SSL keystore for mutual authentication.

The keystore can be a PKCS12 file, a JKS file, or a JCEKS file.

To create the secret that contains the SSL keystore file, run the following commands:
  1. Set the RELAY_KEYSTORE_FILE environment variable to the name of the SSL keystore file:
    RELAY_KEYSTORE_FILE=<fully-qualified-file-name>
  2. Create the secret. The following command uses the recommended name, ssl-client-pkcs12-file-secret. You can change the name if it will conflict with another secret in your environment.
    oc create secret generic ssl-client-pkcs12-file-secret \
    --from-file=clientPkcs12File=${RELAY_KEYSTORE_FILE} \
    --namespace=${PROJECT_CPD_INST_OPERANDS}
Default value
ssl-client-pkcs12-file-secret

If you don't specify the sslClientPkcs12FileSecret parameter, the default secret name is used.

Valid values
The name of the secret that contains the SSL keystore file.
Including this parameter
Ensure that you uncomment the following lines and specify the appropriate values for the parameters:
  sslConfig:
    mediaRelay:
      enableMutualAuth: true
      sslClientPkcs12FileSecret: <my-secret-name>
      sslClientPassphraseSecret: <my-secret-name>
sslClientPassphraseSecret (Media Relay) Specify the name of the secret that contains the SSL passphrase for mutual authentication.
To create the secret that contains the SSL passphrase, run the following commands:
  1. Set the RELAY_SSL_PASSPHRASE environment variable to the SSL passphrase:
    RELAY_SSL_PASSPHRASE=<passphrase>
  2. Create the secret. The following command uses the recommended name, ssl-client-passphrase-secret. You can change the name if it will conflict with another secret in your environment.
    oc create secret generic ssl-client-passphrase-secret \
    --from-literal=sslClientPassphrase=${RELAY_SSL_PASSPHRASE} \
    --namespace=${PROJECT_CPD_INST_OPERANDS}
Default value
ssl-client-passphrase-secret

If you don't specify the sslClientPassphraseSecret parameter, the default secret name is used.

Valid values
The name of the secret that contains the SSL passphrase.
Including this parameter
Ensure that you uncomment the following lines and specify the appropriate values for the parameters:
  sslConfig:
    mediaRelay:
      enableMutualAuth: true
      sslClientPkcs12FileSecret: <my-secret-name>
      sslClientPassphraseSecret: <my-secret-name>
enableSslorMutualAuth (SIP Orchestrator) Specify whether to enable SSL for the SIP Orchestrator microservice.

Depending on the certificate that you provide, this enables either:

  • SSL connections to watsonx Assistant
  • SSL connections to watsonx Assistant and mutual authentication between the client server and the SIP Orchestrator microservice
Default value
false
Valid values
false
Do not enable SSL.
true
Enable SSL.
If you want to use this option, you must create the following secrets in the PROJECT_CPD_INST_OPERANDS project:
  • A secret that contains the SSL keystore. Specify the name of the secret in the sslKeyTrustStoreSecret parameter.
  • A secret that contains the SSL passphrase. Specify the name of the secret in the sslPassphraseSecret parameter.
Including this parameter
Ensure that you uncomment the following lines and specify the appropriate values for the parameters:
  sslConfig:
    sipOrchestrator:
      enableSslorMutualAuth: true
      sslKeyTrustStoreSecret: <my-secret-name>
      sslFileType: "<file-type>"
      sslPassphraseSecret: <my-secret-name>
sslKeyTrustStoreSecret (SIP Orchestrator) Specify the name of the secret that contains the SSL keystore.

The keystore can be a PKCS12 file, a JKS file, or a JCEKS file.

To create the secret that contains the SSL keystore file, run the following commands:
  1. Set the SIP_KEYSTORE_FILE environment variable to the name of the SSL keystore file:
    SIP_KEYSTORE_FILE=<fully-qualified-file-name>
  2. Create the secret. The following command uses the recommended name, trust-store-file-secret. You can change the name if it will conflict with another secret in your environment.
    oc create secret generic trust-store-file-secret \
    --from-file=trustStoreFile=${SIP_KEYSTORE_FILE} \
    --namespace=${PROJECT_CPD_INST_OPERANDS}
Default value
trust-store-file-secret

If you don't specify the sslKeyTrustStoreSecret parameter, the default secret name is used.

Valid values
The name of the secret that contains the SSL keystore file.
Including this parameter
Ensure that you uncomment the following lines and specify the appropriate values for the parameters:
  sslConfig:
    sipOrchestrator:
      enableSslorMutualAuth: true
      sslKeyTrustStoreSecret: <my-secret-name>
      sslFileType: "<file-type>"
      sslPassphraseSecret: <my-secret-name>
sslFileType (SIP Orchestrator) Specify the format of the SSL keystore file.

The keystore can be a PKCS12 file, a JKS file, or a JCEKS file.

Default value
JKS

If you don't specify the sslFileType parameter, the default file type is used.

Valid values
  • JCEKS
  • JKS
  • PKCS12
Including this parameter
Ensure that you uncomment the following lines and specify the appropriate values for the parameters:
  sslConfig:
    sipOrchestrator:
      enableSslorMutualAuth: true
      sslKeyTrustStoreSecret: <my-secret-name>
      sslFileType: "<file-type>"
      sslPassphraseSecret: <my-secret-name>
sslPassphraseSecret (SIP Orchestrator) Specify the name of the secret that contains the SSL passphrase.
To create the secret that contains the SSL passphrase, run the following commands:
  1. Set the SIP_SSL_PASSPHRASE environment variable to the SSL passphrase:
    SIP_SSL_PASSPHRASE=<passphrase>
  2. Create the secret. The following command uses the recommended name, ssl-passphrase-secret. You can change the name if it will conflict with another secret in your environment.
    oc create secret generic ssl-passphrase-secret \
    --from-literal=sslPassphrase=${SIP_SSL_PASSPHRASE} \
    --namespace=${PROJECT_CPD_INST_OPERANDS}
Default value
ssl-passphrase-secret

If you don't specify the sslPassphraseSecret parameter, the default secret name is used.

Valid values
The name of the secret that contains the SSL passphrase.
Including this parameter
Ensure that you uncomment the following lines and specify the appropriate values for the parameters:
  sslConfig:
    sipOrchestrator:
      enableSslorMutualAuth: true
      sslKeyTrustStoreSecret: <my-secret-name>
      sslFileType: "<file-type>"
      sslPassphraseSecret: <my-secret-name>
Port parameters
Parameter Description
sipSignalingPortUdp Override the default UDP port for the SIP signaling protocol if the default port number will conflict with an existing port.
Default value
5060
Valid values
An available port on the server.
Including this parameter
Ensure that you uncomment the following lines and specify the appropriate value for the <port-number>:
  ports:
    sipSignalingPortUdp: <port-number>
sipSignalingPortTcp Override the default TCP port for the SIP signaling protocol if the default port number will conflict with an existing port.
Default value
5060
Valid values
An available port on the server.
Including this parameter
Ensure that you uncomment the following lines and specify the appropriate value for the <port-number>:
  ports:
    sipSignalingPortTcp: <port-number>
sipSignalingPortTls Override the default TLS port for the SIP signaling protocol if the default port number will conflict with an existing port.
Default value
5061
Valid values
An available port on the server.
Including this parameter
Ensure that you uncomment the following lines and specify the appropriate value for the <port-number>:
  ports:
    sipSignalingPortTls: <port-number>
sipOrchestratorHttpPort Override the default HTTP port for the SIP Orchestrator microservice if the default port number will conflict with an existing port.
Default value
9086
Valid values
An available port on the server.
Including this parameter
Ensure that you uncomment the following lines and specify the appropriate value for the <port-number>:
  ports:
    sipOrchestratorHttpPort: <port-number>
sipOrchestratorHttpsPort Override the default HTTPS port for the SIP Orchestrator microservice if the default port number will conflict with an existing port.
Default value
9446
Valid values
An available port on the server.
Including this parameter
Ensure that you uncomment the following lines and specify the appropriate value for the <port-number>:
  ports:
    sipOrchestratorHttpsPort: <port-number>
mediaRelayWsPort Override the default web socket port for the Media Relay microservice if the default port number will conflict with an existing port.
Default value
8080
Valid values
An available port on the server.
Including this parameter
Ensure that you uncomment the following lines and specify the appropriate value for the <port-number>:
  ports:
    mediaRelayWsPort: <port-number>
rtpUdpPortRange Adjust the number of RTP ports based on the number of concurrent calls that you want to support.
Default value
"16384-16394"

By default, Voice Gateway supports 10 concurrent calls.

Valid values
A range of available ports on the server.
Including this parameter
Ensure that you uncomment the following lines and specify the appropriate value for each instance of <port-number>:
  ports:
    rtpUdpPortRange: "<port-number>-<port-number>"
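As a rough sizing sketch, the default range 16384-16394 spans 11 ports and supports 10 concurrent calls. Assuming that ratio scales linearly (roughly one port per call, plus one), a deployment that must handle about 50 concurrent calls would need a range such as:
  ports:
    rtpUdpPortRange: "16384-16434"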
Environment variable parameters
Parameter Description
sipOrchestrator environment variables Specify any SIP Orchestrator environment variables that you want to use to configure the SIP Orchestrator microservice.
Default value
No default. Environment variables are optional.
Valid values
Specify one or more SIP Orchestrator environment variables in list format. The sample YAML file includes a formatting example.

Enter each environment variable as a new list item.

An environment variable includes the variable name and the value to use.

Including this parameter
Ensure that you uncomment the following lines and specify the appropriate values for the parameters:
  env:
    sipOrchestrator:
      - name: <variable-name>
        value: "<value>"
mediaRelay environment variables Specify any Media Relay environment variables that you want to use to configure the Media Relay microservice.
Default value
No default. Environment variables are optional.
Valid values
Specify one or more Media Relay environment variables in list format. The sample YAML file includes a formatting example.

Enter each environment variable as a new list item.

An environment variable includes the variable name and the value to use.

Including this parameter
Ensure that you uncomment the following lines and specify the appropriate values for the parameters:
  env:
    mediaRelay:
      - name: <variable-name>
        value: "<value>"
Storage parameters
Parameter Description
enablePersistentRecordings (Recordings) Specify whether to save recordings to persistent storage.
Default value
false
Valid values
false
Do not store recordings.
true
Save recordings to persistent storage.

If you want to use this option, you must specify the name of the file storage class to use in the storageClassName parameter.

Including this parameter
Ensure that you uncomment the following lines and specify the appropriate values for the parameters:
  storage:
    recordings:
      enablePersistentRecordings: true
      storageClassName: "<storage-class-name>"
storageClassName (Recordings) Specify the name of the storage class that points to file storage.
Default value
No default. The name depends on the storage classes that are defined on your cluster.
Valid values
The name of a file storage class on your cluster.

If you sourced the installation environment variables script, run the following command to determine the file storage class that is used by other Cloud Pak for Data services:

echo $STG_CLASS_FILE
Including this parameter
Ensure that you uncomment the following lines and specify the appropriate values for the storageClassName parameter:
  storage:
    recordings:
      enablePersistentRecordings: true
      storageClassName: "<storage-class-name>"
size (Recordings) Specify the size of the persistent volume.
Default value
15Gi

Assuming 1000 calls per day, you need approximately 1.92 Gi of disk space to store recordings for that day. If you want to keep recordings for a week, you need at least 15 Gi of disk space because recordings are not automatically cleaned up.

Valid values
Specify the amount of storage, in gibibytes (Gi), to allocate to the volume. Ensure that there is sufficient space on your storage device.
Including this parameter
Ensure that you uncomment the following lines and specify the appropriate values for the parameters:
  storage:
    recordings:
      enablePersistentRecordings: true
      storageClassName: "<storage-class-name>"
      size: <integer>Gi
enablePersistentLogs (Logs) Specify whether to save logs to persistent storage.
Default value
false
Valid values
false
Do not store logs.
true
Save logs to persistent storage.

If you want to use this option, you must specify the name of the file storage class to use in the storageClassName parameter.

Including this parameter
Ensure that you uncomment the following lines and specify the appropriate values for the parameters:
  storage:
    logs:
      enablePersistentLogs: true
      storageClassName: "<storage-class-name>"
storageClassName (Logs) Specify the name of the storage class that points to file storage.
Default value
No default. The name depends on the storage classes that are defined on your cluster.
Valid values
The name of a file storage class on your cluster.

If you sourced the installation environment variables script, run the following command to determine the file storage class that is used by other Cloud Pak for Data services:

echo $STG_CLASS_FILE
Including this parameter
Ensure that you uncomment the following lines and specify the appropriate values for the storageClassName parameter:
  storage:
    logs:
      enablePersistentLogs: true
      storageClassName: "<storage-class-name>"
size (Logs) Specify the size of the persistent volume.
Default value
10Gi

This default assumes approximately 1000 calls per day.

Valid values
Specify the amount of storage, in gibibytes (Gi), to allocate to the volume. Ensure that there is sufficient space on your storage device.
Including this parameter
Ensure that you uncomment the following lines and specify the appropriate values for the parameters:
  storage:
    logs:
      enablePersistentLogs: true
      storageClassName: "<storage-class-name>"
      size: <integer>Gi
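Putting the storage parameters together, a configuration that persists both recordings and logs to file storage might look like the following example, where ocs-storagecluster-cephfs stands in for whatever file storage class is defined on your cluster:
  storage:
    recordings:
      enablePersistentRecordings: true
      storageClassName: "ocs-storagecluster-cephfs"
      size: 15Gi
    logs:
      enablePersistentLogs: true
      storageClassName: "ocs-storagecluster-cephfs"
      size: 10Gi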
Container resource parameters
Parameter Description
sipOrchestrator Specify the amount of vCPU and memory to allocate to the SIP Orchestrator microservice container.

It is recommended that you assign 20% of the resources to the SIP Orchestrator microservice container.

Default value
vCPU
  • Requests: 1 vCPU ("1.0")
  • Limits: 2 vCPU ("2.0")
Memory
  • Requests: 1 Gi RAM (1Gi)
  • Limits: 2 Gi RAM (2Gi)
Valid values
Specify the amount of vCPU and memory to allocate to the microservice. Ensure that you have sufficient resources on the worker nodes in the cluster.
Including this parameter
Ensure that you uncomment the following lines:
  resources:
    sipOrchestrator:

Then uncomment the resource allocations that you want to override and specify the appropriate values:

#      requests:
#        cpu: "1.0"
#        memory: 1Gi
#      limits:
#        cpu: "2.0"
#        memory: 2Gi
mediaRelay Specify the amount of vCPU and memory to allocate to the Media Relay microservice container.

It is recommended that you assign 80% of the resources to the Media Relay microservice container.

Default value
vCPU
  • Requests: 1 vCPU ("1")
  • Limits: 4 vCPU ("4")
Memory
  • Requests: 1 Gi RAM (1Gi)
  • Limits: 4 Gi RAM (4Gi)
Valid values
Specify the amount of vCPU and memory to allocate to the microservice. Ensure that you have sufficient resources on the worker nodes in the cluster.
Including this parameter
Ensure that you uncomment the following lines:
  resources:
    mediaRelay:

Then uncomment the resource allocations that you want to override and specify the appropriate values:

#      requests:
#        cpu: "1"
#        memory: 1Gi
#      limits:
#        cpu: "4"
#        memory: 4Gi
g729Codec Specify the amount of vCPU and memory to allocate to the G729 Codec container, if the service is enabled.
Default value
vCPU
  • Requests: 0.5 vCPU ("0.5")
  • Limits: 1 vCPU ("1")
Memory
  • Requests: 0.5 Gi RAM (0.5Gi)
  • Limits: 1 Gi RAM (1Gi)
Valid values
Specify the amount of vCPU and memory to allocate to the microservice. Ensure that you have sufficient resources on the worker nodes in the cluster.
Including this parameter
Ensure that you uncomment the following lines:
  resources:
    g729Codec:

Then uncomment the resource allocations that you want to override and specify the appropriate values:

#      requests:
#        cpu: "0.5"
#        memory: 0.5Gi
#      limits:
#        cpu: "1"
#        memory: 1Gi
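Taken together, a fully uncommented resources block that keeps the default SIP Orchestrator allocations but doubles the Media Relay allocations (an illustrative choice for heavier call volumes, not a tested recommendation) might look like this:
  resources:
    sipOrchestrator:
      requests:
        cpu: "1.0"
        memory: 1Gi
      limits:
        cpu: "2.0"
        memory: 2Gi
    mediaRelay:
      requests:
        cpu: "2"
        memory: 2Gi
      limits:
        cpu: "8"
        memory: 8Gi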
G729 Codec Service parameters
Parameter Description
enabled Specify whether you want to enable the G729 Codec service.
Default value
false
Valid values
false
Do not enable the G729 Codec service.
true
Enable the G729 Codec service.
Including this parameter
Ensure that you uncomment the following lines:
  g729Codec:
    enabled: true
logLevel Specify the level of detail to include in the G729 Codec logs.
Default value
"INFO"
Valid values
  • "DEBUG"
  • "INFO"
  • "WARN"
  • "ERROR"
Including this parameter
Ensure that you uncomment the following lines and specify the appropriate values for the <log-level>:
  g729Codec:
    enabled: true
    logLevel: "<log-level>"
webSocketServerPort Override the default web socket port for the G729 Codec service if the default port number will conflict with an existing port.
Default value
9001
Valid values
An available port on the server.
Including this parameter
Ensure that you uncomment the following lines and specify the appropriate value for the <port-number>:
  g729Codec:
    enabled: true
    webSocketServerPort: <port-number>
Media Resource Control Protocol parameters
Parameter Description
enableMrcp Specify whether you want to enable Media Resource Control Protocol Version 2 (MRCPv2) connections to enable the service to integrate with third-party speech to text and text to speech services.
Default value
false
Valid values
false
Do not enable MRCPv2 connections.
true
Enable MRCPv2 connections.
If you want to use this option, you must create the following secret in the PROJECT_CPD_INST_OPERANDS project:
  • A secret that contains the unimrcpclient.xml file. Specify the name of the secret in the unimrcpConfigSecretName parameter.
Including this parameter
Ensure that you uncomment the following lines:
  mrcp:
    enableMrcp: true
unimrcpConfigSecretName Specify the name of the secret that includes the unimrcpclient.xml file.
To create the secret that contains the unimrcpclient.xml file, run the following commands:
  1. Set the MRCP_FILE environment variable to the fully qualified name of the unimrcpclient.xml file:
    MRCP_FILE=<fully-qualified-file-name>
  2. Create the secret. The following command uses the recommended name, unimrcp-config-secret. You can change the name if it will conflict with another secret in your environment.
    oc create secret generic unimrcp-config-secret \
    --from-file=unimrcpConfig=${MRCP_FILE} \
    --namespace=${PROJECT_CPD_INST_OPERANDS}
Default value
unimrcp-config-secret

If you don't specify the unimrcpConfigSecretName parameter, the default secret name is used.

Valid values
The name of the secret that contains the unimrcpclient.xml file.
Including this parameter
Ensure that you uncomment the following lines and specify the appropriate values for the <secret-name>:
  mrcp:
    enableMrcp: true
    unimrcpConfigSecretName: <secret-name>
mrcpv2SipPort Override the default SIP port for the MRCPv2 connection if the default port number will conflict with an existing port.
Default value
5555
Valid values
An available port on the server.
Including this parameter
Ensure that you uncomment the following lines and specify the appropriate value for the <port-number>:
  mrcp:
    enableMrcp: true
    mrcpv2SipPort: <port-number>

Watson Discovery parameters

If you plan to install Watson Discovery, you can specify the following installation option in a file named install-options.yml in the cpd-cli work directory (for example, cpd-cli-workspace/olm-utils-workspace/work).

The parameter is optional. If you do not set this installation parameter, the default value is used. Uncomment the parameter and update the value appropriately.

The sample YAML content uses the default value.

################################################################################
# Watson Discovery parameters
################################################################################
#discovery_deployment_type: Production
Property Description
discovery_deployment_type The deployment type for Watson Discovery.

The deployment type determines the number of resources allocated to Watson Discovery.

Default value
Production
Valid values
Production
A production deployment has at least two replicas of each pod to support production-scale workloads. For the production deployment, the deployment size is the small scaleConfig setting.
Starter
A starter deployment has fewer resources and less computing power than a production deployment. For the starter deployment, the deployment size is the xsmall scaleConfig setting.

In previous releases, this deployment type was called the development deployment type.

Watson Speech services parameters

If you plan to install the Watson Speech services, you can specify the following installation options in a file named install-options.yml in the cpd-cli work directory (for example, cpd-cli-workspace/olm-utils-workspace/work).

The parameters are optional. If you do not set these installation parameters, the default values are used. Uncomment the parameters that you want to override and update the values appropriately.

The sample YAML content uses the default values.

################################################################################
# Watson Speech services parameters
################################################################################

# ------------------------------------------------------------------------------
# Watson Speech to Text parameters
# ------------------------------------------------------------------------------
#watson_speech_enable_stt_async: false
#watson_speech_enable_stt_customization: false
#watson_speech_enable_stt_runtime: true
#watson_speech_stt_scale_config: xsmall

# ------------------------------------------------------------------------------
# Watson Text to Speech parameters
# ------------------------------------------------------------------------------
#watson_speech_enable_tts_customization: false
#watson_speech_enable_tts_runtime: true
#watson_speech_tts_scale_config: xsmall

# ------------------------------------------------------------------------------
# Watson Speech to Text models
# ------------------------------------------------------------------------------
#watson_speech_models: ["enUsBroadbandModel","enUsNarrowbandModel","enUsShortFormNarrowbandModel","enUsTelephony","enUsMultimedia"]

# ------------------------------------------------------------------------------
# Watson Text to Speech enhanced neural voices
# ------------------------------------------------------------------------------
#watson_speech_voices: ["enUSAllisonV3Voice","enUSLisaV3Voice","enUSMichaelV3Voice"]
Watson Speech to Text parameters

The following options apply only if you install the Watson Speech to Text service.

Property Description
watson_speech_enable_stt_async Specify whether to enable asynchronous HTTP requests. For example, enable this feature if you have large requests that you want to process asynchronously.
Default value
false
Valid values
false
Do not enable asynchronous HTTP requests.
true
Enable asynchronous HTTP requests.

When you set this property to true, it enables the /v1/recognitions interface.

watson_speech_enable_stt_customization Specify whether to enable Watson Speech to Text customizations:
  • Language model customization, which enables the service to more accurately recognize domain-specific terms.
  • Acoustic model customization, which enables the service to adapt to environmental noise, audio quality, and the accent or cadence of the speakers.
Default value
false
Valid values
false
Do not enable Watson Speech to Text customizations.
true
Enable Watson Speech to Text customizations.
When you set this property to true, it enables the following interfaces:
  • /v1/customizations for language model customization.
  • /v1/acoustic_customizations for acoustic model customization.
watson_speech_enable_stt_runtime Specify whether to enable the microservice for speech recognition. You must enable this microservice if you install the Watson Speech to Text service.
Default value
true
Valid values
false
Do not enable the microservice for speech recognition.
Important: This microservice is automatically enabled if you set either of the following properties to true:
  • watson_speech_enable_stt_customization
  • watson_speech_enable_stt_async
true
Enable the microservice for speech recognition.

When you set this property to true, it enables the /v1/recognize interface.

watson_speech_stt_scale_config Specify the size of the service.
Default value
xsmall
Valid values
  • xsmall
  • small
  • medium
  • large
  • custom

For detailed information about each size, refer to the component scaling guidance PDF.

Watson Text to Speech parameters

The following options apply only if you install the Watson Text to Speech service.

Property Description
watson_speech_enable_tts_customization Specify whether to enable Watson Text to Speech customizations, which enables the service to create a dictionary of words and their translations for a specific language.
Default value
false
Valid values
false
Do not enable Watson Text to Speech customizations.
true
Enable Watson Text to Speech customizations.

When you set this property to true, it enables the /v1/customizations interface for customization.

watson_speech_enable_tts_runtime Specify whether to enable the microservice for speech synthesis. You must enable this microservice if you install the Watson Text to Speech service.
Default value
true
Valid values
false
Do not enable the microservice for speech synthesis.
Important: This microservice is automatically enabled if you set watson_speech_enable_tts_customization to true.
true
Enable the microservice for speech synthesis.

When you set this property to true, it enables the /v1/synthesize interface.

watson_speech_tts_scale_config Specify the size of the service.
Default value
xsmall
Valid values
  • xsmall
  • small
  • medium
  • large
  • custom

For detailed information about each size, refer to the component scaling guidance PDF.
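
Based on the properties in the preceding table, an install-options.yml entry for Watson Text to Speech might look like the following sketch. The values shown (customization enabled, small size) are illustrative, not required.

########################################################################
# Watson Text to Speech parameters
########################################################################
watson_speech_enable_tts_customization: true
watson_speech_enable_tts_runtime: true
watson_speech_tts_scale_config: small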

Watson Speech to Text models

The following options apply only if you install the Watson Speech to Text service.

Property Description
watson_speech_models Specify which Watson Speech to Text models are installed.

Specify the models as a comma-separated array. For example:

["enUsBroadbandModel","enUsNarrowbandModel","enUsShortFormNarrowbandModel",...]
Default value
By default, the following models are installed:
  • enUsBroadbandModel (US English (en-US) Broadband model)
  • enUsNarrowbandModel (US English (en-US) Narrowband model)
  • enUsShortFormNarrowbandModel (US English (en-US) Short-Form Narrowband model)
  • enUsMultimedia (US English (en-US) Multimedia model)
  • enUsTelephony (US English (en-US) Telephony model)
Valid values
Previous-generation models
  • enUsBroadbandModel (US English (en-US) Broadband model)
  • enUsNarrowbandModel (US English (en-US) Narrowband model)
  • enUsShortFormNarrowbandModel (US English (en-US) Short-Form Narrowband model)
  • arMsBroadbandModel (Modern Standard Arabic (ar-MS) Broadband model)
  • deDeBroadbandModel (German (de-DE) Broadband model)
  • deDeNarrowbandModel (German (de-DE) Narrowband model)
  • enAuBroadbandModel (Australian English (en-AU) Broadband model)
  • enAuNarrowbandModel (Australian English (en-AU) Narrowband model)
  • enGbBroadbandModel (UK English (en-GB) Broadband model)
  • enGbNarrowbandModel (UK English (en-GB) Narrowband model)
  • esEsBroadbandModel (Castilian Spanish (es-ES, es-AR, es-CL, es-CO, es-MX, and es-PE) Broadband models)
  • esEsNarrowbandModel (Castilian Spanish (es-ES, es-AR, es-CL, es-CO, es-MX, and es-PE) Narrowband models)
  • frCaBroadbandModel (Canadian French (fr-CA) Broadband model)
  • frCaNarrowbandModel (Canadian French (fr-CA) Narrowband model)
  • frFrBroadbandModel (French (fr-FR) Broadband model)
  • frFrNarrowbandModel (French (fr-FR) Narrowband model)
  • itItBroadbandModel (Italian (it-IT) Broadband model)
  • itItNarrowbandModel (Italian (it-IT) Narrowband model)
  • jaJpBroadbandModel (Japanese (ja-JP) Broadband model)
  • jaJpNarrowbandModel (Japanese (ja-JP) Narrowband model)
  • koKrBroadbandModel (Korean (ko-KR) Broadband model)
  • koKrNarrowbandModel (Korean (ko-KR) Narrowband model)
  • nlNlBroadbandModel (Dutch (nl-NL) Broadband model)
  • nlNlNarrowbandModel (Dutch (nl-NL) Narrowband model)
  • ptBrBroadbandModel (Brazilian Portuguese (pt-BR) Broadband model)
  • ptBrNarrowbandModel (Brazilian Portuguese (pt-BR) Narrowband model)
  • zhCnBroadbandModel (Mandarin Chinese (zh-CN) Broadband model)
  • zhCnNarrowbandModel (Mandarin Chinese (zh-CN) Narrowband model)
Next-generation models
  • enUsMultimedia (US English (en-US) Multimedia model)
  • enUsTelephony (US English (en-US) Telephony model)
  • arMsTelephony (Modern Standard Arabic (ar-MS) Telephony model)
  • csCZTelephony (Czech (cs-CZ) Telephony model)
  • deDeMultimedia (German (de-DE) Multimedia model)
  • deDeTelephony (German (de-DE) Telephony model)
  • enAuMultimedia (Australian English (en-AU) Multimedia model)
  • enAuTelephony (Australian English (en-AU) Telephony model)
  • enGbMultimedia (UK English (en-GB) Multimedia model)
  • enGbTelephony (UK English (en-GB) Telephony model)
  • enInTelephony (Indian English (en-IN) Telephony model)
  • enWwMedicalTelephony (English (all supported dialects) Medical Telephony model)
  • esEsMultimedia (Castilian Spanish (es-ES) Multimedia model)
  • esEsTelephony (Castilian Spanish (es-ES) Telephony model)
  • esLaTelephony (Latin American Spanish (es-LA) Telephony model)
  • frCaMultimedia (Canadian French (fr-CA) Multimedia model)
  • frCaTelephony (Canadian French (fr-CA) Telephony model)
  • frFrMultimedia (French (fr-FR) Multimedia model)
  • frFrTelephony (French (fr-FR) Telephony model)
  • hiInTelephony (Indian Hindi (hi-IN) Telephony model)
  • itItMultimedia (Italian (it-IT) Multimedia model)
  • itItTelephony (Italian (it-IT) Telephony model)
  • jaJpMultimedia (Japanese (ja-JP) Multimedia model)
  • jaJpTelephony (Japanese (ja-JP) Telephony model)
  • koKrMultimedia (Korean (ko-KR) Multimedia model)
  • koKrTelephony (Korean (ko-KR) Telephony model)
  • nlBeTelephony (Belgian Dutch (nl-BE) Telephony model)
  • nlNlMultimedia (Netherlands Dutch (nl-NL) Multimedia model)
  • nlNlTelephony (Netherlands Dutch (nl-NL) Telephony model)
  • ptBrMultimedia (Brazilian Portuguese (pt-BR) Multimedia model)
  • ptBrTelephony (Brazilian Portuguese (pt-BR) Telephony model)
  • svSeTelephony (Swedish (sv-SE) Telephony model)
  • zhCnTelephony (Mandarin Chinese (zh-CN) Telephony model)
Large speech models
  • deDe (German (de-DE) model)
  • enUs (US English (en-US) model)
  • enAu (Australian English (en-AU) model)
  • enGb (UK English (en-GB) model)
  • enIn (Indian English (en-IN) model)
  • esAR (Argentinian Spanish (es-AR) model)
  • esCl (Chilean Spanish (es-CL) model)
  • esCo (Colombian Spanish (es-CO) model)
  • esEs (Castilian Spanish (es-ES) model)
  • esMx (Mexican Spanish (es-MX) model)
  • esPe (Peruvian Spanish (es-PE) model)
  • frCa (Canadian French (fr-CA) model)
  • frFr (French (fr-FR) model)
  • jaJp (Japanese (ja-JP) model)
  • ptBr (Brazilian Portuguese (pt-BR) model)
  • ptPt (Portugal Portuguese (pt-PT) model)
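
For example, to install the default US English models plus the Canadian French Telephony model, you might specify the following entry in install-options.yml. The model IDs come from the preceding lists; the selection itself is illustrative.

watson_speech_models: ["enUsBroadbandModel","enUsNarrowbandModel","enUsShortFormNarrowbandModel","enUsMultimedia","enUsTelephony","frCaTelephony"]
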
Watson Text to Speech voices

The following options apply only if you install the Watson Text to Speech service.

Property Description
watson_speech_voices Specify which Watson Text to Speech voices are installed.

Specify the voices as a comma-separated array. For example:

["enUSAllisonV3Voice","enUSLisaV3Voice","enUSMichaelV3Voice",...]
Default value
By default, the following voices are installed:
  • enUSAllisonV3Voice (US English (en-US) Allison enhanced neural voice)
  • enUSLisaV3Voice (US English (en-US) Lisa enhanced neural voice)
  • enUSMichaelV3Voice (US English (en-US) Michael enhanced neural voice)
Valid values
Enhanced neural voices
  • enUSAllisonV3Voice (US English (en-US) Allison enhanced neural voice)
  • enUSLisaV3Voice (US English (en-US) Lisa enhanced neural voice)
  • enUSMichaelV3Voice (US English (en-US) Michael enhanced neural voice)
  • enUSEmilyV3Voice (US English (en-US) Emily enhanced neural voice)
  • enUSHenryV3Voice (US English (en-US) Henry enhanced neural voice)
  • enUSKevinV3Voice (US English (en-US) Kevin enhanced neural voice)
  • enUSOliviaV3Voice (US English (en-US) Olivia enhanced neural voice)
  • deDEBirgitV3Voice (German (de-DE) Birgit enhanced neural voice)
  • deDEDieterV3Voice (German (de-DE) Dieter enhanced neural voice)
  • deDEErikaV3Voice (German (de-DE) Erika enhanced neural voice)
  • enGBCharlotteV3Voice (UK English (en-GB) Charlotte enhanced neural voice)
  • enGBJamesV3Voice (UK English (en-GB) James enhanced neural voice)
  • enGBKateV3Voice (UK English (en-GB) Kate enhanced neural voice)
  • esESEnriqueV3Voice (Castilian Spanish (es-ES) Enrique enhanced neural voice)
  • esESLauraV3Voice (Castilian Spanish (es-ES) Laura enhanced neural voice)
  • esLASofiaV3Voice (Latin American Spanish (es-LA) Sofia enhanced neural voice)
  • esUSSofiaV3Voice (North American Spanish (es-US) Sofia enhanced neural voice)
  • frCALouiseV3Voice (French Canadian (fr-CA) Louise enhanced neural voice)
  • frFRNicolasV3Voice (French (fr-FR) Nicolas enhanced neural voice)
  • frFRReneeV3Voice (French (fr-FR) Renee enhanced neural voice)
  • itITFrancescaV3Voice (Italian (it-IT) Francesca enhanced neural voice)
  • jaJPEmiV3Voice (Japanese (ja-JP) Emi enhanced neural voice)
  • koKRJinV3Voice (Korean (ko-KR) Jin enhanced neural voice)
  • nlNLMerelV3Voice (Netherlands Dutch (nl-NL) Merel enhanced neural voice)
  • ptBRIsabelaV3Voice (Brazilian Portuguese (pt-BR) Isabela enhanced neural voice)
Expressive neural voices
  • enAUHeidiExpressive (Australian English (en-AU) Heidi expressive neural voice)
  • enAUJackExpressive (Australian English (en-AU) Jack expressive neural voice)
  • enGBGeorgeExpressive (GB English (en-GB) George expressive neural voice)
  • enUSAllisonExpressive (US English (en-US) Allison expressive neural voice)
  • enUSEmmaExpressive (US English (en-US) Emma expressive neural voice)
  • enUSLisaExpressive (US English (en-US) Lisa expressive neural voice)
  • enUSMichaelExpressive (US English (en-US) Michael expressive neural voice)
  • ptBRLucasExpressive (Brazilian Portuguese (pt-BR) Lucas expressive neural voice)
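
For example, to install the default US English voices plus one expressive voice, you might specify the following entry in install-options.yml. The voice IDs come from the preceding lists; the selection itself is illustrative.

watson_speech_voices: ["enUSAllisonV3Voice","enUSLisaV3Voice","enUSMichaelV3Voice","enUSAllisonExpressive"]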

watsonx.ai parameters

You can specify the following installation options for watsonx.ai in a file named install-options.yml in the cpd-cli work directory (for example, cpd-cli-workspace/olm-utils-workspace/work).

The parameters are optional. If you do not set these installation parameters, the default values are used.

The sample YAML content uses the default values.

########################################################################
# watsonx.ai parameters
########################################################################
custom_spec:  
 watsonx_ai:
  tuning_disabled: false
  lite_install: false
Property Description
tuning_disabled Specify whether prompt-tuning is available in the Tuning Studio tool.
When prompt-tuning is enabled, more resources must be allocated to support prompt-tuning in watsonx.ai.
Tip: If you don't plan to use prompt-tuning immediately, you can install with prompt-tuning disabled and enable it later when you are ready.
Default value
false
Valid values
false
Enable prompt-tuning.
true
Disable prompt-tuning.
lite_install Specify whether you want to install the full watsonx.ai service or the watsonx.ai lightweight engine. For more information, see Choosing an IBM watsonx.ai installation mode.
Default value
false
Valid values
false
Install the full service.
true
Install the lightweight engine.
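
For example, to install the watsonx.ai lightweight engine instead of the full service, you would override the lite_install default. This sketch uses the same structure as the preceding sample; the combination of values is illustrative.

########################################################################
# watsonx.ai parameters
########################################################################
custom_spec:
 watsonx_ai:
  tuning_disabled: false
  lite_install: true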

watsonx Assistant parameters

If you plan to install watsonx Assistant, you can specify the following installation options in a file named install-options.yml in the cpd-cli work directory (for example, cpd-cli-workspace/olm-utils-workspace/work).

The parameters are optional. If you do not set these installation parameters, the default values are used. Uncomment the parameters that you want to override and update the values appropriately.

The sample YAML content uses the default values.

################################################################################
# watsonx Assistant parameters
################################################################################
#watson_assistant_size: Production
#watson_assistant_bigpv: false
#watson_assistant_analytics_enabled: true
#watson_assistant_watsonx_ai_type: embedded
#watson_assistant_syom_models: []
#watson_assistant_ootb_models: []
Property Description
watson_assistant_size The deployment size for watsonx Assistant.

The deployment size determines the number of resources allocated to watsonx Assistant.

Default value
Production
Valid values
large
A large deployment has at least 3 replicas of each pod to support production-scale workloads with a large number of concurrent API calls. large is equivalent to the large scaleConfig setting.
Production
A production deployment has at least two replicas of each pod to support production-scale workloads. Production is equivalent to the medium scaleConfig setting.
Starter
A starter deployment has fewer resources and less computing power than a production deployment. Starter is equivalent to the small scaleConfig setting.

In previous releases, this deployment type was called the development deployment type.

watson_assistant_bigpv Specify whether to create larger physical volumes to improve IOPS performance.

Create larger physical volumes if your storage class IOPS performance depends on the size of the physical volume.

Important: You cannot change this setting after you install watsonx Assistant.

You do not need to create larger physical volumes if you use the following storage:

  • Red Hat OpenShift Data Foundation
  • IBM Fusion Data Foundation
  • IBM Fusion Global Data Platform
  • IBM Storage Scale Container Native
  • Portworx
  • IBM Cloud Block Storage
Default value
false
Valid values
false
Create physical volumes with the default size.
true
Create larger physical volumes to improve IOPS performance.
watson_assistant_analytics_enabled Specify whether to store chat logs and analytics.
Default value
true
Valid values
false
Do not store chat logs and analytics.
true
Store chat logs and analytics.
watson_assistant_watsonx_ai_type Specify this option if you want to install Inference foundation models (watsonx_ai_ifm) to enable features that require GPUs.

Omit this option if you do not want to enable these features.

For more information about supported GPUs, see the GPU requirements for models.

Default value
The default value depends on whether you are installing or upgrading watsonx Assistant:
  • For installations, the default value is none.

    If you omit this option, the GPU features are not enabled.

  • For upgrades, the existing value is used as the default value.

    If you omit this option, the current configuration is used.

Valid values
embedded
Install Inference foundation models (watsonx_ai_ifm) to enable features that require GPUs.
none
Do not install Inference foundation models (watsonx_ai_ifm).

The GPU features will not be enabled.

watson_assistant_syom_models Specify whether you want to use a specialized model that is specifically tuned for use with watsonx Assistant.
Important: The following models will be automatically installed if you install Inference foundation models (watson_assistant_watsonx_ai_type: embedded) and you do not specify a value for watson_assistant_syom_models or watson_assistant_ootb_models:
  • ibm-granite-8b-unified-api-model-v2
  • granite-3-8b-instruct
Default value
[]
Valid values
[]
Do not install a specialized model.
ibm-granite-8b-unified-api-model-v2
Install the ibm-granite-8b-unified-api-model-v2 specialized model. This model enables assistants to:
  • Rewrite user questions to an understood format for conversational search
  • Gather information to fill in variables in a conversational skill
Including this parameter
Ensure that you uncomment the following line and specify the model name as a list item on a new line:
watson_assistant_syom_models:
  - ibm-granite-8b-unified-api-model-v2
watson_assistant_ootb_models Specify whether you want to use a general model with watsonx Assistant.
Important: The following models will be automatically installed if you install Inference foundation models (watson_assistant_watsonx_ai_type: embedded) and you do not specify a value for watson_assistant_syom_models or watson_assistant_ootb_models:
  • ibm-granite-8b-unified-api-model-v2
  • granite-3-8b-instruct
Default value
[]
Valid values
[]
Do not install a general model.
granite-3-8b-instruct
Install the granite-3-8b-instruct general model. This model enables assistants to:
  • Answer conversational search questions
Important: If you use a private container registry, you must explicitly mirror the granite-3-8b-instruct image to the private container registry.
llama-3-1-70b-instruct
Install the llama-3-1-70b-instruct general model. This model enables assistants to:
  • Rewrite user questions to an understood format for conversational search
  • Answer conversational search questions.
  • Gather information to fill in variables in a conversational skill
Important: If you use a private container registry, you must explicitly mirror the llama-3-1-70b-instruct image to the private container registry.
Including this parameter
Ensure that you uncomment the following line and specify the model name as a list item on a new line.
Install the granite-3-8b-instruct model
watson_assistant_ootb_models:
  - granite-3-8b-instruct
Install the llama-3-1-70b-instruct model
watson_assistant_ootb_models:
  - llama-3-1-70b-instruct
Install both models
watson_assistant_ootb_models:
  - granite-3-8b-instruct
  - llama-3-1-70b-instruct
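
Putting these options together, a Production deployment that installs the embedded inference foundation models and explicitly lists both recommended models might look like the following sketch. The property names come from the preceding table; the combination of values is illustrative.

################################################################################
# watsonx Assistant parameters
################################################################################
watson_assistant_size: Production
watson_assistant_analytics_enabled: true
watson_assistant_watsonx_ai_type: embedded
watson_assistant_syom_models:
  - ibm-granite-8b-unified-api-model-v2
watson_assistant_ootb_models:
  - granite-3-8b-instruct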

watsonx BI parameters

If you plan to install watsonx BI, you must specify the following installation option in a file named install-options.yml in the cpd-cli work directory (for example, cpd-cli-workspace/olm-utils-workspace/work).

The parameter is required.

Replace <license> with the appropriate value for your environment.

########################################################################
# watsonx BI parameters
########################################################################
wxbi_license_type: <license>
Property Description
wxbi_license_type Specify the watsonx BI license you purchased.
Status
Required.
Valid values
Premium
Specify this option if you purchased IBM watsonx BI Premium.
Premium_NonProd
Specify this option if you purchased IBM watsonx BI Premium Non-Production.
Premium_AddOn_CA
Specify this option if you purchased IBM watsonx BI Premium Add-On for Cognos® Analytics.
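
For example, if you purchased IBM watsonx BI Premium, the completed file would contain:

########################################################################
# watsonx BI parameters
########################################################################
wxbi_license_type: Premium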

watsonx.data parameters

5.2.0 5.2.1 If you plan to install watsonx.data on IBM Software Hub Version 5.2.0 or 5.2.1, you must specify installation options. Some parameters are required; others are optional. If you do not set the optional parameters, the default values are used.

5.2.2 and later If you plan to install watsonx.data on IBM Software Hub Version 5.2.2 or later, you can specify installation options. The parameters are optional. If you do not set these installation parameters, the default values are used.

Specify installation options in a file named install-options.yml in the cpd-cli work directory (for example, cpd-cli-workspace/olm-utils-workspace/work).

Parameters for Version 5.2.0 or 5.2.1

The sample YAML content uses the default values, where appropriate. You must replace <license> with the appropriate value for your environment.

Uncomment the optional parameters that you want to override and update the values appropriately.

########################################################################
# watsonx.data parameters
########################################################################
license_type: <license>
#wxd_lite_milvus_enabled: false
#watsonx_data_scale_config: small
Parameters for Version 5.2.2

The sample YAML content uses the default values, where appropriate. Uncomment the optional parameters that you want to override and update the values appropriately.

########################################################################
# watsonx.data parameters
########################################################################
#wxd_lite_milvus_enabled: false
#watsonx_data_scale_config: small
Property Description
license_type

5.2.0 5.2.1 This parameter applies only to IBM Software Hub Version 5.2.0 and 5.2.1.

Specify the watsonx.data license you purchased.

Status
Required for installations on IBM Software Hub Version 5.2.0 or 5.2.1.
Valid values
standard
Specify this option if you purchased IBM watsonx.data.
standard_non_prod
Specify this option if you purchased IBM watsonx.data Non-Production.
standard_reserved
Specify this option if you purchased IBM watsonx.data Reserved.
standard_reserved_non_prod
Specify this option if you purchased IBM watsonx.data Reserved Non-Production.
wxd_lite_milvus_enabled Specify whether you want to install the full watsonx.data service or the watsonx.data lightweight engine.
Status
Optional.
Default value
false

If you omit this option, the default value is used.

Valid values
false
Install the full service.
true
Install the lightweight engine.
watsonx_data_scale_config Specify the scaling configuration based on the value that you set for the wxd_lite_milvus_enabled parameter.
  • If you plan to install the full watsonx.data service (wxd_lite_milvus_enabled: false), set the scaling configuration to small.
  • If you plan to install the watsonx.data lightweight engine (wxd_lite_milvus_enabled: true), set the scaling configuration to lightweight.
Status
Optional.
Default value
small

If you omit this option, the default value is used.

Valid values
small
Use the small scaling configuration.
lightweight
Use the lightweight scaling configuration.
Important: You must set watsonx_data_scale_config to lightweight if you want to install the watsonx.data lightweight engine.
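
For example, to install the watsonx.data lightweight engine on Version 5.2.2 or later, set both options together, as the preceding restriction requires. (On Version 5.2.0 or 5.2.1, also include the required license_type entry.)

########################################################################
# watsonx.data parameters
########################################################################
wxd_lite_milvus_enabled: true
watsonx_data_scale_config: lightweight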

watsonx.data Premium parameters

If you plan to install watsonx.data Premium, you must specify installation options in a file named install-options.yml in the cpd-cli work directory (for example, cpd-cli-workspace/olm-utils-workspace/work).

The parameters are required and depend on whether you already installed watsonx.ai on this instance of IBM Software Hub.

watsonx.ai is not installed

watsonx.data Premium requires the full version of watsonx.ai, but does not include entitlement to Tuning Studio.

The sample YAML content uses the required values, where appropriate. You must replace <license> with the appropriate value for your environment.

################################################################################
# watsonx.data premium parameters
################################################################################
wxd_premium_enable_models_on: gpu
license_type: <license>
custom_spec: 
 watsonx_ai:
  tuning_disabled: true
  lite_install: false
watsonx.ai lightweight engine is installed

watsonx.data Premium requires the full version of watsonx.ai, but does not include entitlement to Tuning Studio.

The sample YAML content uses the required values.

################################################################################
# watsonx.data premium parameters
################################################################################
wxd_premium_enable_models_on: gpu
license_type: <license>
custom_spec: 
 watsonx_ai:
  tuning_disabled: true
  lite_install: false
watsonx.ai is installed

The sample YAML content uses the required values.

################################################################################
# watsonx.data premium parameters
################################################################################
wxd_premium_enable_models_on: gpu
license_type: <license>
Property Description
wxd_premium_enable_models_on Install watsonx.data Premium on GPU.
Required value
gpu
license_type Specify the watsonx.data Premium license you purchased.
Valid values
premium
Specify this option if you purchased IBM watsonx.data Premium Edition.
premium_non_prod
Specify this option if you purchased IBM watsonx.data Premium Edition Non-Production.
premium_reserved
Specify this option if you purchased IBM watsonx.data Premium Edition Reserved.
premium_reserved_non_prod
Specify this option if you purchased IBM watsonx.data Premium Edition Reserved Non-Production.
tuning_disabled watsonx.data Premium does not include entitlement to Tuning Studio.

If you are installing watsonx.ai as part of watsonx.data Premium and did not purchase separate entitlement to watsonx.ai, you must install watsonx.ai without installing Tuning Studio.

If you previously installed the full watsonx.ai service, omit this option.

Required value
true
lite_install watsonx.data Premium requires the full version of watsonx.ai.

If you previously installed the full watsonx.ai service, omit this option.

Required value
false

watsonx.data intelligence parameters

If you plan to install watsonx.data intelligence, you can specify installation options in a file named install-options.yml in the cpd-cli work directory (for example, cpd-cli-workspace/olm-utils-workspace/work).

The parameters are optional. If you do not set these installation parameters, the default values are used. Uncomment the parameters that you want to override and update the values appropriately.

The sample YAML content uses the default values.

################################################################################
# watsonx.data intelligence parameters
################################################################################
#custom_spec:
#  watsonx_dataintelligence:
#    enableAISearch: false
#    enableDataGovernanceCatalog: true
#    enableKnowledgeGraph: true
#    enableDataQuality: false
#    enableDataLineage: true
#    enableDataProduct: true
#    enableGenerativeAICapabilities: true
#    enableSemanticEnrichment: true
#    enableSemanticEmbedding: false
#    enableTextToSql: false
#    enableModelsOn: cpu
#    customModelTextToSQL: granite-3-3-8b-instruct
Property Description
enableAISearch Specify whether to enable LLM-based semantic search for assets and artifacts across all workspaces.
Default value
false
Valid values
false
Do not enable LLM-based semantic search.
true
Enable LLM-based semantic search.
If you set enableAISearch: true, you must set at least one of the following options to true:
  • enableDataGovernanceCatalog

    By default, enableDataGovernanceCatalog is set to true.

  • enableDataProduct

    By default, enableDataProduct is set to true.

enableDataGovernanceCatalog Specify whether to enable data governance and catalog features.
Default value
true
Valid values
false
Do not enable the data governance and catalog features.
true
Enable the data governance and catalog features.
You must set enableDataGovernanceCatalog: true if you plan to use the following features:
  • Data quality (enableDataQuality: true)
  • Knowledge graph (enableKnowledgeGraph: true)
Tip: You can enable LLM-based search on the assets and artifacts in the catalog by setting enableAISearch: true.
enableKnowledgeGraph Specify whether to enable the knowledge graph feature. The knowledge graph provides the following capabilities:
  • Relationship explorer
  • Business term relationship search
Prerequisite
This feature requires the data governance catalog. You must set enableDataGovernanceCatalog: true.
Default value
true
Valid values
false
Do not enable the knowledge graph feature.
true
Enable the knowledge graph feature.
enableDataQuality Specify whether to enable data quality features in projects so that you can measure, monitor, and maintain the quality of your data to ensure the data meets your expectations and standards for specific use cases.
Important: When you enable the data quality feature, DataStage Enterprise is automatically installed.

If you did not purchase a separate DataStage license, use of DataStage Enterprise is limited to creating, managing, and running data quality rules. For examples of accepted use, see Enabling additional features after installation or upgrade for watsonx.data intelligence.

Prerequisite
This feature requires the data governance catalog. You must set enableDataGovernanceCatalog: true.
Default value
false
Valid values
false
Do not enable the data quality feature.
true
Enable the data quality feature.
enableDataLineage

Specify whether to enable data lineage features.

Data lineage is the process of tracking data as it is moved and used by different software tools. Lineage tracks where data came from, how it was transformed, and where the data was moved to.

Default value
true
Valid values
false
Do not enable data lineage features.
true
Enable data lineage features.
enableDataProduct

Specify whether to enable data sharing features.

When you enable data sharing, data producers can package data and data-related assets into data products so that data consumers have access to secure, high-quality data.

Default value
true
Valid values
false
Do not enable data sharing features.
true
Enable data sharing features.
Tip: You can enable LLM-based search on data products by setting enableAISearch: true.
enableGenerativeAICapabilities Specify whether to enable gen AI capabilities.
Enable the gen AI capabilities if you plan to use the following features:
  • Semantic enrichment
  • Text to SQL 5.2.1 and later
Default value
true
Valid values
false
Do not enable generative AI capabilities.
true
Enable generative AI capabilities.
enableSemanticEnrichment Specify whether to enable gen AI metadata expansion. Metadata expansion includes:
  • Table name expansion
  • Column name expansion
  • Description generation
Prerequisite
This feature requires gen AI capabilities. You must set enableGenerativeAICapabilities: true.
Default value
true
Valid values
false
Do not enable gen AI metadata expansion.
true
Enable gen AI metadata expansion.
enableSemanticEmbedding

5.2.1 and later This parameter is available starting in IBM Software Hub Version 5.2.1.

Specify whether to enable semantic embedding.

You must enable semantic embedding if you plan to use the following features:
  • Text to SQL
Prerequisite

This feature requires GPU. You cannot run the required model on CPU.

In addition, this feature requires gen AI capabilities. You must set enableGenerativeAICapabilities: true.

Default value
false
Valid values
false
Do not enable semantic embedding.
true
Enable semantic embedding.
enableTextToSql

5.2.1 and later This parameter is available starting in IBM Software Hub Version 5.2.1.

Specify whether to generate SQL queries from natural language input. Text-to-SQL capabilities can be used to create query-based data assets, which can be used for data products or in searches.

Prerequisite

This feature requires GPU. You can choose where to run the required models:

  • To run the required models locally, set enableModelsOn: gpu
  • To run the required models on a remote instance of watsonx.ai, set enableModelsOn: remote

In addition, this feature requires the following settings:

  • Semantic embedding.

    You must set enableSemanticEmbedding: true.

Default value
false
Valid values
false
Do not convert natural language queries to SQL queries.
true
Convert natural language queries to SQL queries.
enableModelsOn Specify where you want the models that are used with the gen AI capabilities to run.
Prerequisite
This feature requires gen AI capabilities. You must set enableGenerativeAICapabilities: true.
Default value
'cpu'
Valid values
'cpu'
Run the foundation model on CPU.
Restriction: This option can be used only for expanding metadata and term assignment when enriching metadata (enableSemanticEnrichment: true).
This option is not supported for:
  • Converting natural language queries to SQL queries (enableTextToSql: true)
'gpu'
Run the foundation model on GPU.
Note: If you use this setting, the inference foundation models component (watsonx_ai_ifm) is automatically installed.

This option requires at least one GPU. For information about supported GPUs, see GPU requirements for models.

'remote'
Run the foundation model on a remote instance of watsonx.ai. The instance can be running on:
  • Another on-premises instance of IBM Software Hub
  • IBM watsonx as a Service
Important: If you use this setting, you must:
  1. Ensure that the foundation model is available and running on the remote instance.
  2. Create a connection to the remote instance.

    For more information, see Enabling users to connect to an external IBM watsonx.ai foundation model in the Data Fabric documentation.

If the preceding requirements are not met, any tasks that rely on the model will fail.

customModelTextToSql Specify a custom model for Text-To-SQL conversions.
Default model

By default, the Text-To-SQL feature uses the granite-3-8b-instruct model (ID: granite-3-8b-instruct).

Recommended model for better accuracy

You can improve the accuracy of results when converting plain text queries to SQL queries if you use the llama-3-3-70b-instruct model (ID: llama-3-3-70b-instruct).

However, this model requires significantly more resources than the granite-3-8b-instruct model. For more information about the resources required for each model, see GPU requirements for models.

Using other models

If you choose to use a different model, the accuracy of the results might vary.

Prerequisite

This option applies only to environments with local GPUs (enableModelsOn: gpu).

If you want to use a custom model on a remote instance of watsonx.ai (enableModelsOn: remote), see Enabling users to connect to an external IBM watsonx.ai foundation model in the Data Fabric documentation.

In addition, this feature requires Text-To-SQL conversions to be enabled (enableTextToSql: true).

Default value
granite-3-8b-instruct
Valid values
Specify the ID of the model that you want to use. The IDs of the recommended models are:
  • granite-3-8b-instruct
  • llama-3-3-70b-instruct
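
For example, to use the recommended larger model for more accurate Text-To-SQL conversions, your install-options.yml might include the following fragment. This is an illustrative sketch that assumes local GPUs with enough capacity for the llama-3-3-70b-instruct model:

```yaml
# Illustrative fragment only: larger model for more accurate
# Text-To-SQL conversions. Requires significantly more GPU resources
# than the default granite-3-8b-instruct model.
enableGenerativeAICapabilities: true
enableTextToSql: true
enableModelsOn: 'gpu'
customModelTextToSql: llama-3-3-70b-instruct
```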

watsonx.governance parameters

If you plan to install watsonx.governance, you must specify the following installation options in a file named install-options.yml in the cpd-cli work directory (for example, cpd-cli-workspace/olm-utils-workspace/work).

The parameters are required.

The sample YAML content uses the default values, where defaults exist.

################################################################################
# watsonx.governance parameters
################################################################################
custom_spec:
  watsonx_governance:
    installType: <install-type>
    enableFactsheet: true
    enableOpenpages: true
    enableOpenscale: true
#   openpagesInstanceCR: "op-wxgov-instance"
#   openPages:
#     databaseType: internal
#     database: Db2
#     dbSecretName: <secret-name>
#     enableGlobalSearch: false

#override_components_meta:
#  watsonx_governance:
#    status_max_retries: 240
Property Description
installType Specify which watsonx.governance entitlement you purchased.
Default value
all

If you omit this option, the default value is used.

Valid values
all
Specify this option if you purchased both the Model Management and Risk and Compliance Foundation entitlements.
mm
Specify this option if you purchased Model Management entitlement.
rcf
Specify this option if you purchased Risk and Compliance Foundation entitlement.
enableFactsheet Specify whether to install AI Factsheets. This service enables you to track assets and record facts in AI use cases.

AI Factsheets is available with the Model Management entitlement.

If you purchased only the Risk and Compliance Foundation entitlement, you cannot install AI Factsheets.

Default value
true

If you omit this option, the default value is used.

Valid values
false
Do not install AI Factsheets.
true
Install AI Factsheets.
enableOpenpages Specify whether to install OpenPages. This service enables you to design workflows and view AI lifecycle activity from a dashboard to aid in meeting compliance and regulatory goals.

OpenPages is available with the Risk and Compliance Foundation entitlement.

If you purchased only the Model Management entitlement, you cannot install OpenPages.

Default value
true

If you omit this option, the default value is used.

Valid values
false
Do not install OpenPages.
true
Install OpenPages.
enableOpenscale Specify whether to install Watson OpenScale. This service enables you to evaluate and monitor generative AI prompts or machine learning assets for dimensions relating to fairness, quality, and drift.
Watson OpenScale is available with the following entitlements:
  • Model Management only
  • Model Management and Risk and Compliance Foundation

If you purchased only the Risk and Compliance Foundation entitlement, you cannot install Watson OpenScale.

Default value
true

If you omit this option, the default value is used.

Valid values
false
Do not install Watson OpenScale.
true
Install Watson OpenScale.
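
As the entitlement descriptions above indicate, if you purchased only the Risk and Compliance Foundation entitlement, you cannot install AI Factsheets or Watson OpenScale. A sketch of the corresponding settings might look like the following fragment (illustrative only; the remaining parameters are omitted):

```yaml
# Illustrative fragment: Risk and Compliance Foundation entitlement only.
custom_spec:
  watsonx_governance:
    installType: rcf
    enableFactsheet: false   # not available with rcf only
    enableOpenpages: true
    enableOpenscale: false   # not available with rcf only
```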
openpagesInstanceCR Specify the name of an existing OpenPages service instance.

This option applies only if you set enableOpenpages: true and you have an existing OpenPages service instance that you want to use with watsonx.governance.

Default value
"openpagesinstance-cr"

If you omit this option, the default value is used.

Restriction: If you do not want to use an existing OpenPages service instance, do not override the default value.
Valid values
The name of an existing OpenPages service instance. Ensure that the value has the following format:
"existing-cr-name"
databaseType Specify whether you want to use an existing external database or use an automatically created internal database.

This option applies only if you set enableOpenpages: true.

Restriction: Do not use this option if you plan to use an existing OpenPages service instance with watsonx.governance.
Default value
internal

If you omit this option, the default value is used.

Valid values
external
Use an existing external database.

If you specify this option, you must also specify the dbSecretName parameter.

Important: If you want to use an external database, you must ensure that it's properly configured before you create the OpenPages service instance. For more information, see Setting up an external Db2 database for OpenPages.
internal
Use an automatically created internal database.
Including this parameter
Ensure that you uncomment the following lines and specify the appropriate values for the databaseType parameter:
   openPages:
     databaseType: <database-type>
database Specify whether you want to use Db2 or Oracle as your OpenPages database. If you're using an external database, specify the vendor.
Default value
Db2

If you omit this option, the default value is used.

Valid values
Db2
Use Db2 as the database.
Oracle
Use Oracle as the database.
Including this parameter
Ensure that you uncomment the following lines and specify the appropriate values for the database parameter:
   openPages:
     database: <database>
dbSecretName

If you want to use an existing external database, you must specify the name of the OpenShift secret that references the database credential secrets in the vault.

This option applies only if you set enableOpenpages: true.

Restriction: Do not use this option if you plan to use an existing OpenPages service instance with watsonx.governance.
Valid values
The name of the OpenShift secret that you created when you completed Setting up an external Db2 database for OpenPages.
Including this parameter
Ensure that you uncomment the following lines and specify the appropriate values for the dbSecretName parameter:
   openPages:
     databaseType: external
     dbSecretName: <secret-name>
enableGlobalSearch

Specify whether to enable global search across all object types to find records relevant to the specified search terms.

Default value
false

If you omit this option, the default value is used.

Valid values
false
Do not enable global search.
true
Enable global search.
status_max_retries Specify whether you want to override the default timeout value when installing components of watsonx.governance.
Default value
150 (minutes)

If you omit this option, the default value is used.

Valid values
An integer that specifies the number of minutes before the installation times out.
Including this parameter
Ensure that you uncomment the following lines and specify the appropriate values for the status_max_retries parameter:

override_components_meta:
  watsonx_governance:
    status_max_retries: <maximum minutes before timeout>
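
As a worked example, the following install-options.yml sketch combines the parameters described in this topic for a deployment that uses an existing external Db2 database for OpenPages and extends the installation timeout. The secret name is a placeholder; replace it with the name of the secret that you created when you set up the external database:

```yaml
# Illustrative example only: full watsonx.governance entitlement with
# OpenPages backed by an existing external Db2 database.
custom_spec:
  watsonx_governance:
    installType: all
    enableFactsheet: true
    enableOpenpages: true
    enableOpenscale: true
    openPages:
      databaseType: external
      database: Db2
      dbSecretName: openpages-db-credentials   # placeholder secret name
      enableGlobalSearch: true

override_components_meta:
  watsonx_governance:
    status_max_retries: 240
```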

watsonx Orchestrate parameters

If you plan to install watsonx Orchestrate, specify the appropriate installation options in a file named install-options.yml in the cpd-cli work directory (for example, cpd-cli-workspace/olm-utils-workspace/work).

The installation options that you specify depend on several factors. The first factor that you must consider is where you want to install the foundation models for watsonx Orchestrate. You can use foundation models on:
The same cluster as watsonx Orchestrate
Choosing a model
You must use one of the models provided by IBM.

The features that you plan to use determine the model or models that you must install.

GPU requirements
You must have sufficient GPU on the cluster where you plan to install watsonx Orchestrate.
A remote or external cluster by using AI gateway
Choosing a model
You can choose whether to use:
  • One of the models provided by IBM

    If you use the models provided by IBM, the features that you plan to use determine the models that you must install.

  • A custom model

    If you use a custom model, you must register the external model through AI gateway.

GPU requirements
Local GPU is not required. Remote GPU might be required:
  • If you plan to host models on a remote cluster, you must have sufficient GPU on the cluster where you plan to install the foundation models.

    For more information about GPU requirements, consult the documentation from the model provider.

  • If you plan to use models hosted by a third party, you don't need GPU.
After you decide where you will install the foundation models, you can decide which features you want to install. You can install:
  • Only the agentic AI features
  • The agentic AI features and legacy features, such as conversational search and conversational skills.
Models provided by IBM

Review the following table to determine which model or models provide the features that you need:

Model | Domain agents (agentic AI) | Tool and API orchestration (agentic AI) | Answer generation (conversational search) | Query rewrite (conversational search) | Custom actions information gathering (conversational skills)
granite-3-8b-instruct | No | No | Yes | No | No
ibm-granite-8b-unified-api-model-v2 | No | No | No | Yes | Yes
llama-3-1-70b-instruct | No | Yes | Yes | Yes | Yes
llama-3-2-90b-vision-instruct | Yes | Yes | Yes | Yes | Yes
Important: The llama-3-2-90b-vision-instruct model is recommended over the llama-3-1-70b-instruct model. The llama-3-2-90b-vision-instruct model offers:
  • Better performance
  • More accurate results
Private container registry users: You must mirror the images for the models that you plan to use to the private container registry. For more information, see Determining which models to mirror to your private container registry.

Choose the appropriate YAML based on where you plan to install the foundation models:

Install foundation models on the same cluster as watsonx Orchestrate
Choose the appropriate YAML based on the features that you want to install:
Agentic features only
Choose the appropriate YAML based on the model that you want to install.
  • To install the llama-3-1-70b-instruct model, choose the appropriate YAML based on the version of IBM Software Hub you installed:
    Version 5.2.0 or Version 5.2.1
    ################################################################################
    # watsonx Orchestrate parameters
    ################################################################################
    watson_orchestrate_install_mode: lite
    watson_orchestrate_watsonx_ai_type: true
    watson_orchestrate_ootb_models:
      - llama-3-1-70b-instruct
      - ibm-slate-30m-english-rtrvr
    Version 5.2.2
    ################################################################################
    # watsonx Orchestrate parameters
    ################################################################################
    watson_orchestrate_watsonx_ai_type: true
    watson_orchestrate_ootb_models:
      - llama-3-1-70b-instruct
      - ibm-slate-30m-english-rtrvr
  • To install the llama-3-2-90b-vision-instruct model, choose the appropriate YAML file based on the version of IBM Software Hub you installed:
    Version 5.2.0 or Version 5.2.1
    ################################################################################
    # watsonx Orchestrate parameters
    ################################################################################
    watson_orchestrate_install_mode: lite
    watson_orchestrate_watsonx_ai_type: true
    watson_orchestrate_ootb_models:
      - llama-3-2-90b-vision-instruct
      - ibm-slate-30m-english-rtrvr
    Version 5.2.2
    ################################################################################
    # watsonx Orchestrate parameters
    ################################################################################
    watson_orchestrate_watsonx_ai_type: true
    watson_orchestrate_ootb_models:
      - llama-3-2-90b-vision-instruct
      - ibm-slate-30m-english-rtrvr
Legacy features and agentic AI features
The parameters that you specify depend on the models that you want to install:
  • To install the granite-3-8b-instruct model, choose the appropriate YAML based on the version of IBM Software Hub you installed:
    Version 5.2.0 or Version 5.2.1
    ################################################################################
    # watsonx Orchestrate parameters
    ################################################################################
    watson_orchestrate_watsonx_ai_type: true
    watson_orchestrate_ootb_models:
      - granite-3-8b-instruct
      - ibm-slate-30m-english-rtrvr
    Version 5.2.2
    ################################################################################
    # watsonx Orchestrate parameters
    ################################################################################
    watson_orchestrate_watsonx_ai_type: true
    watson_orchestrate_install_mode: agentic_skills_assistant
    watson_orchestrate_ootb_models:
      - granite-3-8b-instruct
      - ibm-slate-30m-english-rtrvr
  • To install the ibm-granite-8b-unified-api-model-v2 model, choose the appropriate YAML based on the version of IBM Software Hub you installed:
    Version 5.2.0 or Version 5.2.1
    ################################################################################
    # watsonx Orchestrate parameters
    ################################################################################
    watson_orchestrate_watsonx_ai_type: true
    watson_orchestrate_syom_models:
      - ibm-granite-8b-unified-api-model-v2
    watson_orchestrate_ootb_models:
      - ibm-slate-30m-english-rtrvr
    Version 5.2.2
    ################################################################################
    # watsonx Orchestrate parameters
    ################################################################################
    watson_orchestrate_watsonx_ai_type: true
    watson_orchestrate_install_mode: agentic_skills_assistant
    watson_orchestrate_syom_models:
      - ibm-granite-8b-unified-api-model-v2
    watson_orchestrate_ootb_models:
      - ibm-slate-30m-english-rtrvr
  • To install the llama-3-1-70b-instruct model, choose the appropriate YAML based on the version of IBM Software Hub you installed:
    Version 5.2.0 or Version 5.2.1
    ################################################################################
    # watsonx Orchestrate parameters
    ################################################################################
    watson_orchestrate_watsonx_ai_type: true
    watson_orchestrate_ootb_models:
      - llama-3-1-70b-instruct
      - ibm-slate-30m-english-rtrvr
    Version 5.2.2
    ################################################################################
    # watsonx Orchestrate parameters
    ################################################################################
    watson_orchestrate_watsonx_ai_type: true
    watson_orchestrate_install_mode: agentic_skills_assistant
    watson_orchestrate_ootb_models:
      - llama-3-1-70b-instruct
      - ibm-slate-30m-english-rtrvr
  • To install the llama-3-2-90b-vision-instruct model, choose the appropriate YAML based on the version of IBM Software Hub you installed:
    Version 5.2.0 or Version 5.2.1
    ################################################################################
    # watsonx Orchestrate parameters
    ################################################################################
    watson_orchestrate_watsonx_ai_type: true
    watson_orchestrate_ootb_models:
      - llama-3-2-90b-vision-instruct
      - ibm-slate-30m-english-rtrvr
    Version 5.2.2
    ################################################################################
    # watsonx Orchestrate parameters
    ################################################################################
    watson_orchestrate_watsonx_ai_type: true
    watson_orchestrate_install_mode: agentic_skills_assistant
    watson_orchestrate_ootb_models:
      - llama-3-2-90b-vision-instruct
      - ibm-slate-30m-english-rtrvr
Using foundation models on a remote or external cluster by using AI gateway
Important: If you choose this option, you must register the models through AI gateway after you install watsonx Orchestrate.

Choose the appropriate YAML based on the features that you want to install:

Agentic features only
Choose the appropriate YAML based on the version of IBM Software Hub you installed:
Version 5.2.0 or Version 5.2.1
################################################################################
# watsonx Orchestrate parameters
################################################################################
watson_orchestrate_install_mode: lite

The ibm-slate-30m-english-rtrvr model, which does not require GPU, is automatically installed when you install watsonx Orchestrate.

Version 5.2.2
################################################################################
# watsonx Orchestrate parameters
################################################################################
watson_orchestrate_watsonx_ai_type: false
Legacy features and agentic AI features
Choose the appropriate YAML based on the version of IBM Software Hub you installed:
Version 5.2.0 or Version 5.2.1
################################################################################
# watsonx Orchestrate parameters
################################################################################
watson_orchestrate_watsonx_ai_type: false

The ibm-slate-30m-english-rtrvr model, which does not require GPU, is automatically installed when you install watsonx Orchestrate.

Version 5.2.2
################################################################################
# watsonx Orchestrate parameters
################################################################################
watson_orchestrate_watsonx_ai_type: false
watson_orchestrate_install_mode: agentic_skills_assistant
Property Description
watson_orchestrate_watsonx_ai_type

Specify whether to install Inference foundation models (watsonx_ai_ifm) based on where you plan to install the foundation models.

You can install the foundation models on:
  • The same cluster as watsonx Orchestrate

    In this situation, you must install Inference foundation models.

    If you choose this option, you must have sufficient GPU on the cluster where you plan to install watsonx Orchestrate.

  • A remote cluster

    If you choose this option, you must have sufficient GPU on the cluster where you plan to install the foundation models.

    After you install watsonx Orchestrate, you must use the AI gateway to connect to the foundation models on the remote cluster.

Default value
true

If you omit this option, the default value is used.

Valid values
false
Do not install Inference foundation models.

Specify this option if you plan to install the foundation models on a remote cluster.

true
Install the Inference foundation models.

Specify this option if you plan to install the foundation models on the cluster where you plan to install watsonx Orchestrate.

watson_orchestrate_install_mode

Specify the features that you plan to install.

The usage of this option depends on the version of IBM Software Hub you installed:

  • 5.2.0 and 5.2.1: If you plan to use only agentic AI features in watsonx Orchestrate, use the watson_orchestrate_install_mode parameter to install only the features that are needed for agentic AI.

    If you plan to use conversational search or conversational skills, do not specify this parameter.

  • 5.2.2: If you plan to use only agentic AI features in watsonx Orchestrate, do not specify this parameter. If you plan to use conversational search or conversational skills, use the watson_orchestrate_install_mode parameter to install agentic AI features, conversational search, and conversational skills.
Valid values
lite
5.2.0 5.2.1 Specify this option only if both of the following statements are true:
  • IBM Software Hub Version 5.2.0 or Version 5.2.1 is installed
  • You want to install only the agentic AI features
If you set watson_orchestrate_watsonx_ai_type: true to install the foundation models on the same cluster as watsonx Orchestrate, specify which model you want to install:
llama-3-2-90b-vision-instruct
To use the llama-3-2-90b-vision-instruct model (ID: llama-3-2-90b-vision-instruct), specify:
watson_orchestrate_install_mode: lite
watson_orchestrate_ootb_models:
  - llama-3-2-90b-vision-instruct
  - ibm-slate-30m-english-rtrvr
llama-3-1-70b-instruct
To use the llama-3-1-70b-instruct model (ID: llama-3-1-70b-instruct), specify:
watson_orchestrate_install_mode: lite
watson_orchestrate_ootb_models:
  - llama-3-1-70b-instruct
  - ibm-slate-30m-english-rtrvr
agentic_skills_assistant
5.2.2 Specify this option only if both of the following statements are true:
  • IBM Software Hub Version 5.2.2 is installed
  • You want to install legacy features, such as conversational search or conversational skills.
If you set watson_orchestrate_watsonx_ai_type: true to install the foundation models on the same cluster as watsonx Orchestrate, specify which model you want to install:
granite-3-8b-instruct
To use the granite-3-8b-instruct model (ID: granite-3-8b-instruct), specify:
watson_orchestrate_watsonx_ai_type: true
watson_orchestrate_install_mode: agentic_skills_assistant
watson_orchestrate_ootb_models:
  - granite-3-8b-instruct
  - ibm-slate-30m-english-rtrvr
ibm-granite-8b-unified-api-model-v2
To use the ibm-granite-8b-unified-api-model-v2 model (ID: ibm-granite-8b-unified-api-model-v2), specify:
watson_orchestrate_watsonx_ai_type: true
watson_orchestrate_install_mode: agentic_skills_assistant
watson_orchestrate_syom_models:
  - ibm-granite-8b-unified-api-model-v2
watson_orchestrate_ootb_models:
  - ibm-slate-30m-english-rtrvr
llama-3-2-90b-vision-instruct
To use the llama-3-2-90b-vision-instruct model (ID: llama-3-2-90b-vision-instruct), specify:
watson_orchestrate_watsonx_ai_type: true
watson_orchestrate_install_mode: agentic_skills_assistant
watson_orchestrate_ootb_models:
  - llama-3-2-90b-vision-instruct
  - ibm-slate-30m-english-rtrvr
llama-3-1-70b-instruct
To use the llama-3-1-70b-instruct model (ID: llama-3-1-70b-instruct), specify:
watson_orchestrate_watsonx_ai_type: true
watson_orchestrate_install_mode: agentic_skills_assistant
watson_orchestrate_ootb_models:
  - llama-3-1-70b-instruct
  - ibm-slate-30m-english-rtrvr
watson_orchestrate_ootb_models

This option is valid only if you install the foundation models on the same cluster as watsonx Orchestrate (watson_orchestrate_watsonx_ai_type: true).

Specify whether to install one or more general models.

Install models based on the features that you want to enable.

Default value
[]
Valid values
[]
Do not install a general model.
granite-3-8b-instruct
Install the granite-3-8b-instruct model (ID: granite-3-8b-instruct).

To install this model, include the following parameters in your install-options.yml file:

watson_orchestrate_watsonx_ai_type: true
watson_orchestrate_ootb_models:
  - granite-3-8b-instruct
  - ibm-slate-30m-english-rtrvr
llama-3-1-70b-instruct
Install the llama-3-1-70b-instruct model (ID: llama-3-1-70b-instruct).

To install this model, include the following parameters in your install-options.yml file:

watson_orchestrate_watsonx_ai_type: true
watson_orchestrate_ootb_models:
  - llama-3-1-70b-instruct
  - ibm-slate-30m-english-rtrvr
llama-3-2-90b-vision-instruct
Install the llama-3-2-90b-vision-instruct model (ID: llama-3-2-90b-vision-instruct).

To install this model, include the following parameters in your install-options.yml file:

watson_orchestrate_watsonx_ai_type: true
watson_orchestrate_ootb_models:
  - llama-3-2-90b-vision-instruct
  - ibm-slate-30m-english-rtrvr
Installing multiple general models
If you want to install more than one general model, add the ID of each model that you want to install as a list item. For example, to install both the granite-3-8b-instruct and llama-3-1-70b-instruct models:
watson_orchestrate_watsonx_ai_type: true
watson_orchestrate_ootb_models:
  - granite-3-8b-instruct
  - llama-3-1-70b-instruct
  - ibm-slate-30m-english-rtrvr
watson_orchestrate_syom_models

This option is valid only if:

  • You install the foundation models on the same cluster as watsonx Orchestrate (watson_orchestrate_watsonx_ai_type: true)
  • You install the legacy features
    Restriction: Do not specify this parameter if you specify watson_orchestrate_install_mode: lite.

Specify whether to install a specialized model that is specifically tuned for use with watsonx Orchestrate.

Default value
[]
Valid values
[]
Do not install a specialized model.
ibm-granite-8b-unified-api-model-v2
Install the ibm-granite-8b-unified-api-model-v2 model (ID: ibm-granite-8b-unified-api-model-v2).

To install this model, include the following parameters in your install-options.yml file:

watson_orchestrate_watsonx_ai_type: true
watson_orchestrate_syom_models:
  - ibm-granite-8b-unified-api-model-v2

What to do next

Now that you've specified installation options for services, you're ready to complete Specifying the privileges that Db2U runs with.