Connecting to a remote watsonx.ai instance

To enable users to connect to a remote watsonx.ai model from IBM watsonx.data intelligence and use generative AI capabilities, you must set up a connection to the remote watsonx.ai instance so that the service can communicate with your remote foundation models. In addition, you can manage which foundation models are used with the enabled gen AI capabilities.

The option to select the foundation models is available starting in watsonx.data intelligence 2.2.1.

Prerequisites

The following prerequisites must be met:

  • Generative AI capabilities must be enabled in your IBM watsonx.data intelligence deployment. This setup can be done during installation, upgrade, or at any time later. For more information about the deployment modes, see Preparing to install IBM watsonx.data intelligence in the IBM Software Hub documentation.

  • The use of remote foundation models must be enabled in your IBM watsonx.data intelligence deployment. This setup can also be done during installation, upgrade, or at any time later. If you set up your system with models running on CPU and want to switch to using remote models without changing any other configuration settings, follow the instructions in Can't work with a remote watsonx.ai instance if the system is configured to run models on CPU in the IBM Software Hub documentation.

  • A remote watsonx.ai instance with appropriate foundation models must exist.

    Before you set up a connection to a remote on-premises watsonx.ai instance, complete these additional steps to avoid connection errors.

    1. Check which models are available on the remote cluster. In watsonx.ai, open the Prompt Lab and check the list of available foundation models. For more information, see Provided foundation models that are ready to use. For a command-line alternative, see the example after this list.

    2. If the list contains the default granite-3-8b-instruct model, no action is required. If the default model is not available, complete one of these steps:

      • Recommended action: On the cluster where you set up the connection, update the semanticautomation-cr custom resource with one of the models that are available in the remote watsonx.ai instance. Select a model from the list that you retrieved in step 1.

        oc patch semanticautomation semanticautomation-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"customModelSemanticEnrichment": "<AvailableModelFromStep1>"}}'
        
      • Alternative action: On the remote cluster, update the watsonxaiifm-cr custom resource to start the default granite-3-8b-instruct model.

  • You must have your user API key for authenticating to the remote system available.
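
As a command-line alternative to checking the Prompt Lab in step 1, you can query the watsonx.ai REST API for the foundation model specifications. The following is a minimal sketch, not taken from the product documentation: it assumes that the remote instance exposes the standard /ml/v1/foundation_model_specs endpoint on its route, that a bearer token is required (which is typical for on-premises instances), and that jq is installed. The version date and the <route> and <bearer-token> values are placeholders.

    # List the model IDs that the remote watsonx.ai instance reports as available.
    curl -sk "https://<route>/ml/v1/foundation_model_specs?version=2024-03-14" \
      -H "Authorization: Bearer <bearer-token>" | jq -r '.resources[].model_id'

If the default granite-3-8b-instruct model appears in the output, no further action is required; otherwise, continue with one of the actions in step 2.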

Required permissions

An IBM watsonx.data intelligence user with the Administrator role who has access to the watsonx.ai instance can set up the connection.

IBM watsonx.data intelligence users with any user role who need access to generative AI features must be collaborators in the deployment space that is selected in the setup. Users need to authenticate to the remote watsonx.ai instance with their user API key when they want to use any generative AI features in IBM watsonx.data intelligence.

Setting up the connection to the remote watsonx.ai instance

To enable users to connect to a remote watsonx.ai model:

  1. Go to Administration > Configurations and settings and click Generative AI setup.

  2. To configure the connection, click Add connection.

  3. Provide the connection details. Select one of these options:

    Cloud service provider

    Select a cloud service provider. Before you can proceed, you must acknowledge that inferencing with remote foundation models incurs additional costs and that data samples are sent to the remote model as additional context for the prompts.

    On premises (IP/URL)

    Provide the URL or IP address of the remote watsonx.ai instance and specify a port. For enhanced security, use an SSL-enabled port. To ensure that the origin cluster can trust the certificate of the remote cluster, provide the certificate chain of the remote cluster.

    To obtain this information, run the openssl command with the -showcerts option. Do not include the protocol in the <route> value.

    openssl s_client -showcerts -connect <route>:443 </dev/null 2>/dev/null | awk '/BEGIN/,/END/'
    

    The output of the command should have this format:

    -----BEGIN CERTIFICATE-----
    <first certificate>
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    <second certificate>
    -----END CERTIFICATE-----
    

    Copy the complete certificate chain into the SSL certificate field. To optionally verify the certificates that you copied, see the example after this procedure.

  4. Provide the credentials for authenticating to the remote system. Enter your personal user API key for use with the remote watsonx.ai instance. The API key is stored as a secret in the internal vault for future authentication. For authentication to a remote IBM Software Hub (Cloud Pak for Data) cluster, also specify the username that is associated with the provided user API key.

  5. Select a deployment space in your watsonx.ai instance that you can access.

  6. Click Add connection.

Your connection setup is complete.
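
If you configured an on-premises connection, you can optionally verify the certificate chain that you copied in step 3 before you rely on it. The following is a minimal sketch that uses only standard openssl commands; <route> is the same host that you used when you retrieved the chain.

    # Show the subject, issuer, and expiry date of the certificate that the remote
    # cluster presents, so you can confirm that the chain you copied is current.
    openssl s_client -connect <route>:443 </dev/null 2>/dev/null \
      | openssl x509 -noout -subject -issuer -enddate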

Selecting the gen AI models

Configure which foundation models are used for the generative AI capabilities. You can work with the default models or select from the foundation models that are supported in watsonx.ai.

The selections that you make in the generative AI setup UI override the model configuration that was applied through installation options. The selected models must be available in the deployment space that is configured in the connection to the watsonx.ai instance.

Important: Changing a model can impact the accuracy of the results and can incur additional cost for inferencing.

In general, you can set the models for enrichment, Text2SQL, and Text2SQL content linking:

Enrichment
The selected model is used for generating descriptions and display names for tables and table columns, and for creating new business terms based on the content of tables and table columns.
Text2SQL
The selected model is used for transforming user-provided natural language queries into SQL queries that can be consumed by the service.

Depending on your deployment configuration for the generative AI capabilities, not all options might be available. For more information about the deployment modes, see Preparing to install IBM watsonx.data intelligence in the IBM Software Hub documentation.
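
To illustrate why the selected models must be available in the configured deployment space, the following sketch sends a Text2SQL-style prompt to the watsonx.ai text generation API with an explicitly selected model and space. This is an illustrative example only, not the exact request that IBM watsonx.data intelligence issues internally; the host, bearer token, space ID, and model ID are placeholders, and the model ID format can differ between IBM Cloud (for example, ibm/granite-3-8b-instruct) and on-premises instances. If the model is not available in that deployment space, a request like this fails.

    # Hypothetical text generation request that uses the selected model and the
    # deployment space from the connection setup.
    curl -sk -X POST "https://<watsonx-host>/ml/v1/text/generation?version=2023-05-29" \
      -H "Authorization: Bearer <bearer-token>" \
      -H "Content-Type: application/json" \
      -d '{
            "model_id": "ibm/granite-3-8b-instruct",
            "space_id": "<deployment-space-id>",
            "input": "Generate an SQL query that returns all customers from the CUSTOMERS table who live in Canada.",
            "parameters": { "max_new_tokens": 200 }
          }'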

Deleting the configuration

If you want to delete the configuration and start over, click Remove connection. However, remember that this connection is used by any service that connects to the remote watsonx.ai instance to provide generative AI capabilities. If you remove the connection, all jobs that currently use the connection will fail. The model configuration is reset to using the default models.

Authenticating with the remote watsonx.ai instance

To authenticate to the remote system as a user of generative AI capabilities such as metadata expansion in metadata enrichment, complete these steps:

  1. Go to Administration > Configurations and settings and click Generative AI setup.

  2. Check the connection details for information about the remote system, the deployment space in which the models run, and your authentication status.

  3. Click Authenticate.

  4. Provide the credentials for authenticating to the remote system. Enter your personal user API key for use with the remote watsonx.ai instance. The API key is stored as a secret in the internal vault for future authentication. For authentication to a remote IBM Software Hub cluster, also specify the username that is associated with the provided user API key.

To update your credentials, for example, because your API key expired, click the Edit icon in the Authentication section of the connection details.
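
To understand what happens with the user API key that you provide, the following sketch shows how such a key is typically exchanged for a bearer token, depending on where the remote watsonx.ai instance runs. Both commands are illustrative: the IAM endpoint applies to an instance on IBM Cloud, the authorize endpoint applies to a remote IBM Software Hub (Cloud Pak for Data) cluster, and <cpd-route>, <username>, and <your-api-key> are placeholders.

    # Remote instance on IBM Cloud: exchange the user API key for an IAM bearer token.
    curl -s -X POST "https://iam.cloud.ibm.com/identity/token" \
      -H "Content-Type: application/x-www-form-urlencoded" \
      -d "grant_type=urn:ibm:params:oauth:grant-type:apikey&apikey=<your-api-key>"

    # Remote IBM Software Hub cluster: the username that is associated with the API
    # key is also required, which is why the authentication dialog asks for it.
    curl -sk -X POST "https://<cpd-route>/icp4d-api/v1/authorize" \
      -H "Content-Type: application/json" \
      -d '{"username": "<username>", "api_key": "<your-api-key>"}'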
