Uploading JDBC drivers from the OpenShift command-line interface

A cluster administrator can upload JDBC driver files (JAR files) from the OpenShift command-line interface (oc CLI).

Who needs to complete this task?
To complete this task, you must be either:
  • A cluster administrator
  • An instance administrator

When do you need to complete this task?
Complete this task if the following statements are true:
  • You want to use a connector that requires a JDBC driver.
  • An administrator disabled the option to upload drivers through the web client.

Common core services: This feature is available only when the Cloud Pak for Data common core services are installed. To determine whether the common core services are installed, run the following command:
    oc get ccs --namespace ${PROJECT_CPD_INST_OPERANDS}
  • If the common core services are installed, the command returns information about the common core services custom resource.
  • If the common core services are not installed, the command returns an empty response.
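
For example, you can script the check. The following is a minimal sketch that assumes only the oc CLI and the ${PROJECT_CPD_INST_OPERANDS} variable from your installation setup; it treats an empty response as meaning that the common core services are not installed:
    # Print whether the common core services custom resource exists
    if [ -n "$(oc get ccs --namespace ${PROJECT_CPD_INST_OPERANDS} --no-headers 2>/dev/null)" ]; then
      echo "Common core services are installed."
    else
      echo "Common core services are not installed."
    fi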

Before you begin

Best practice: You can run many of the commands in this task exactly as written if you set up environment variables for your installation. For instructions, see Setting up installation environment variables.

Ensure that you source the environment variables before you run the commands in this task.
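
For example, if you saved the variables in a script when you set them up (the file name cpd_vars.sh is an assumption; use whatever name you chose), source the script in the shell session where you plan to run the commands:
    # Load the installation environment variables into the current shell session
    source ./cpd_vars.sh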

To complete this task, you must have the following permissions in Cloud Pak for Data:

  • Administer platform
  • Manage configurations

About this task

You can manually upload JDBC drivers to the persistent volume that is used by the wdp-connect-connection pods. The procedure shows each command step by step; a combined, scripted example appears after the procedure.

Procedure

  1. Log in to Red Hat® OpenShift Container Platform as a user with sufficient permissions to complete the task.
    oc login ${OCP_URL}
  2. Get the full name of the wdp-connect-connection pod:
    oc get pods -n=${PROJECT_CPD_INST_OPERANDS} | grep wdp-connect-connection
  3. Set the following environment variables:
    1. Set the POD_NAME environment variable to the name of the wdp-connect-connection pod:
      export POD_NAME=<full-pod-name>
    2. Set the JAR_FILE environment variable to the fully qualified file name of the JAR file that you want to upload:
      export JAR_FILE=<fully-qualified-file-name>
  4. Upload the JAR file to the /tmp directory on the pod:
    oc cp ${JAR_FILE} ${POD_NAME}:/tmp \
    -n=${PROJECT_CPD_INST_OPERANDS}
  5. Open a remote shell on the pod:
    oc rsh -n=${PROJECT_CPD_INST_OPERANDS} \
    ${POD_NAME}
  6. Set the following environment variables in the remote shell on the pod:
    1. Set the JAR_FILE environment variable to the name of the JAR file. Do not specify any directory names.
      export JAR_FILE=<file-name>
    2. Set the CPD_USER environment variable to the username of a Cloud Pak for Data user with the Manage configurations permission:
      export CPD_USER=<username>
    3. Set the CPD_PASSWORD environment variable to the password for the user that you specified in the CPD_USER environment variable:
      export CPD_PASSWORD=<password>
  7. Run the following command to upload the JAR file to the persistent volume that is used by the wdp-connect-connection pods:
    sh /opt/ibm/wlp/usr/servers/defaultServer/apps/expanded/wdp-connect-connection.war/resources/files-api.sh \
    -u ${CPD_USER} \
    -p ${CPD_PASSWORD} \
    -action upload \
    -f /tmp/${JAR_FILE}
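
The following sketch combines steps 2 through 5 into a single sequence that you can run from your workstation. It is illustrative only: it assumes that exactly one wdp-connect-connection pod is running, that your installation environment variables are set, and it uses a placeholder path for the JAR file. Steps 6 and 7 still run interactively in the remote shell:
    # Capture the name of the wdp-connect-connection pod (assumes a single match)
    export POD_NAME=$(oc get pods -n=${PROJECT_CPD_INST_OPERANDS} --no-headers | grep wdp-connect-connection | head -n 1 | awk '{print $1}')
    # Fully qualified name of the JDBC driver JAR file on your workstation (placeholder)
    export JAR_FILE=/path/to/driver.jar
    # Copy the JAR file to the /tmp directory on the pod
    oc cp ${JAR_FILE} ${POD_NAME}:/tmp -n=${PROJECT_CPD_INST_OPERANDS}
    # Open a remote shell on the pod; continue with steps 6 and 7 inside the shell
    oc rsh -n=${PROJECT_CPD_INST_OPERANDS} ${POD_NAME}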

What to do next

Best practice: Test the JDBC drivers that you upload to ensure that they are compatible with the tools you use to connect to data sources. Example tools include Jupyter Notebooks, SPSS® Modeler, and DataStage®.

After you test the drivers, users can create a Generic JDBC connection to a data source from the Platform connections page, catalogs, or projects.
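
When users create the Generic JDBC connection, they typically select the uploaded JAR file and provide a driver class name and JDBC URL. The values depend on the driver; for example, for the PostgreSQL JDBC driver (illustrative values, confirm with your driver's documentation):
    Driver class name: org.postgresql.Driver
    JDBC URL: jdbc:postgresql://<hostname>:5432/<database>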