Known issues on FIPS-enabled clusters

General issues

Curl commands fail to connect to FIPS clusters because of TLS errors

Applies to: 5.0.0 and later

When you connect to a FIPS-enabled cluster, client programs such as curl defer to OpenSSL to obtain the cipher list that they use to establish connections. Some versions of OpenSSL do not come with approved TLS 1.3 ciphers, such as TLS_AES_128_GCM_SHA256.

If you see TLS errors when you use curl commands to connect to a FIPS-enabled cluster, rerun the command with the curl --tls-max 1.2 option, or update your client's OpenSSL version to one that supports the required TLS 1.3 ciphers.
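
For example, a minimal sketch of the workaround; <cpd-route> is a placeholder for your cluster's route host name:

# Cap the negotiated TLS version at 1.2 so that curl does not attempt a
# TLS 1.3 cipher that the local OpenSSL build does not provide.
curl --tls-max 1.2 "https://<cpd-route>/"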

You cannot connect to external SMB storage volumes on FIPS-enabled clusters

Applies to: 5.0.0 and later

The SMB CSI Driver for Kubernetes (csi-smb-driver), which is required to connect to external SMB storage volumes, is not supported on FIPS-enabled clusters.

Common core services

Not all connectors are supported in a FIPS-enabled environment. See the information for the individual connectors at Connectors for projects and catalogs.

Services that use the Flight service

Applies to: 5.0.0 and later

On a FIPS-enabled cluster, the Flight service blocks the connection to any data source that does not support FIPS.

Data Refinery

Data Refinery flow job fails in a FIPS cluster for an SAV target file with encryption

Applies to: 5.0.0 and later

On a FIPS-enabled cluster, if you run a Data Refinery flow job where the target is an SAV file and you enter an encryption key, the job fails.

DataStage

Cannot run a DataStage job with data from certain connections

Applies to: 5.0.0 and later

DataStage does not support the Elasticsearch connection in a FIPS-enabled environment.

DataStage pods might fail to start

Applies to: 5.0.0 and later

When you deploy DataStage on a FIPS-enabled OpenShift cluster, the ds-px-default-ibm-datastage-px-compute pods might fail to start because the FIPS_MODE environment variable is not set to true.

For example, this excerpt from ds-px-default-ibm-datastage-px-compute-0_px-compute-previous.log shows the failure:

2024-10-23T16:11:05.704147993Z + '[' -e /proc/sys/crypto/fips_enabled ']'
2024-10-23T16:11:05.704195382Z + grep 1 /proc/sys/crypto/fips_enabled
2024-10-23T16:11:05.706675750Z 1
2024-10-23T16:11:05.706881035Z Cluster is FIPS enabled but FIPS_MODE is not set to true
2024-10-23T16:11:05.706896001Z + '[' false '!=' true ']'
2024-10-23T16:11:05.706896001Z + echo 'Cluster is FIPS enabled but FIPS_MODE is not set to true'
To fix this problem, complete the following steps:
  1. In the OpenShift® Container Platform web console, go to Workloads, then StatefulSets.
  2. Find and open ds-px-default-ibm-datastage-px-compute.
  3. On the Environment tab, set FIPS_MODE to true, then save the settings.
  4. Delete the ds-px-default-ibm-datastage-px-compute pods so that they restart with the new setting. Alternatively, you can apply the same change from the command line, as shown after these steps.
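
A minimal command-line sketch of the same fix; <namespace> is a placeholder for your Cloud Pak for Data instance namespace, and the pod name assumes the default StatefulSet replica naming:

# Set FIPS_MODE=true on the compute StatefulSet.
oc set env statefulset/ds-px-default-ibm-datastage-px-compute FIPS_MODE=true -n <namespace>

# Delete each compute pod so that it restarts with the new setting
# (replicas follow the StatefulSet naming, for example -0, -1, and so on).
oc delete pod ds-px-default-ibm-datastage-px-compute-0 -n <namespace>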

Execution Engine for Apache Hadoop

You cannot connect to a JDBC data source on your CDH Cluster without configuring the database to support FIPS encryption

Cloudera does not support FIPS for JDBC drivers. On a FIPS-enabled cluster, you cannot establish a connection to a Cloudera cluster with the JDBC driver that is provided by Cloudera.

You cannot use Livy to connect to a Spark cluster without loading the digest package

Applies to: 5.0.0 and later

If you need to use Livy to connect to a Spark cluster or use any other packages that depend on the digest package, you must load the digest package from a non-FIPS-compliant library. To load the digest package, run the following commands:
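# Load digest explicitly from the non-FIPS-compliant library path before
# loading packages that depend on it, such as sparklyr.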
library(digest, lib.loc='/opt/not-FIPS-compliant/R/library') 
library(sparklyr)
Note: If you load the digest package, Execution Engine for Apache Hadoop will no longer be FIPS-compliant.

You cannot connect to Impala via Execution Engine for Hadoop or Hive via Execution Engine for Hadoop data sources

Applies to: 5.0.0 and later

In FIPS-enabled clusters, you cannot connect to Impala via Execution Engine for Hadoop or Hive via Execution Engine for Hadoop data sources.

IBM Knowledge Catalog

Communication with external Kafka does not work in a FIPS-enabled cluster.

RStudio® Server Runtimes

If you need to use Livy to connect to a Spark cluster or use any other packages that depend on the digest package, such as sparklyr, Shiny®, arulesViz, or htmltools, you must load the digest package from a non-FIPS-compliant library. See Using Livy to connect to a Spark cluster.

You cannot connect to a database with JDBC when the remote server does not support secure connections that use TLS 1.3 or TLS 1.2 with Extended Master Secret

Applies to: 5.0.0 and later

When Cloud Pak for Data is running on a FIPS-enabled cluster, you cannot connect from a notebook to a database with JDBC if the remote server does not support secure connections that use TLS 1.3 or TLS 1.2 with Extended Master Secret.

To work around this problem, you can create a custom notebook image that disables FIPS for Java. Follow these steps to create and modify the custom notebook image.

  1. Follow the documented steps to create a custom image for notebooks.
  2. Modify the image by setting security.useSystemPropertiesFile to false in $JAVA_HOME/conf/security/java.security, as follows:
    1. Review the example code for Adding customizations to images. The sample code shows where to insert the following statement, between USER root:root and USER wsbuild:wsbuild.
    2. Add this statement to the Dockerfile so that the modification runs as root during the image build (a sketch of the resulting Dockerfile section follows these steps).
      RUN sed -i.orig -e '/security.useSystemPropertiesFile=/s/true/false/' $JAVA_HOME/conf/security/java.security
      For more information about the effects of this change, see Configure OpenJDK 17 in FIPS mode.
  3. Run your notebook with the custom image that you created.
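
For reference, a minimal sketch of the relevant Dockerfile section; the base image name is a placeholder, and only the RUN line comes from the documented workaround:

# <base-notebook-image> is a placeholder for the custom notebook image
# that you created in step 1.
FROM <base-notebook-image>

USER root:root

# Disable the system-wide crypto policy lookup so that the JVM does not
# enforce FIPS mode inside the notebook runtime.
RUN sed -i.orig -e '/security.useSystemPropertiesFile=/s/true/false/' $JAVA_HOME/conf/security/java.security

USER wsbuild:wsbuild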

Watson OpenScale

You cannot upload training data with Cloud Object Storage

If you're using Watson OpenScale on a FIPS-enabled cluster, you cannot upload training data to Cloud Object Storage. To work around this issue, you must upload training data to Db2 to enable model evaluations.

Watson Studio

You cannot use the Visual Studio Code extension when the Cloud Pak for Data route uses reencrypt termination

The Visual Studio Code extension does not work on a FIPS-enabled cluster when the Cloud Pak for Data route uses reencrypt termination.

You cannot connect to a database with JDBC when the remote server does not support secure connections that use TLS 1.3 or TLS 1.2 with Extended Master Secret

Applies to: 5.0.0 and later

When Cloud Pak for Data is running on a FIPS-enabled cluster, you cannot connect from a notebook to a database with JDBC if the remote server does not support secure connections that use TLS 1.3 or TLS 1.2 with Extended Master Secret.

To work around this problem, you can create a custom notebook image that disables FIPS for Java. Follow these steps to create and modify the custom notebook image.

  1. Follow the documented steps to create a custom image for notebooks.
  2. Modify the image by setting security.useSystemPropertiesFile to false in $JAVA_HOME/conf/security/java.security, as follows:
    1. Review the example code for Adding customizations to images. The sample code shows where to insert the following statement, between USER root:root and USER wsbuild:wsbuild.
    2. Add this statement to the Dockerfile so that the modification runs as root during the image build. You can verify the change after the build, as shown after these steps.
      RUN sed -i.orig -e '/security.useSystemPropertiesFile=/s/true/false/' $JAVA_HOME/conf/security/java.security

      For more information about the effects of this change, see Configure OpenJDK 17 in FIPS mode.

  3. Run your notebook with the custom image that you created.
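
After you build the image, a quick check, assuming a hypothetical image tag and that $JAVA_HOME is set in the image as the Dockerfile step requires, is to confirm that the property was changed:

# my-custom-notebook:latest is a hypothetical tag for the image that you built.
docker run --rm my-custom-notebook:latest \
  sh -c 'grep security.useSystemPropertiesFile "$JAVA_HOME/conf/security/java.security"'
# Expected output: security.useSystemPropertiesFile=false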

Watson Machine Learning

Deployments with certain constricted software specifications fail after an upgrade

Applies to: 5.0.0

If you run IBM Cloud Pak for Data in FIPS mode, upgrade to a more recent version, and deploy an R Shiny application asset that was created by using a constricted software specification, the deployment fails. Deployments that use the rstudio_r4.2 or shiny-r3.6 software specifications fail after you upgrade to IBM Cloud Pak for Data version 5.0. You might receive the error message Error 502 - Bad Gateway.

To prevent your deployment from failing, update the constricted specification for your deployed asset to use the latest software specification. For more information, see Managing outdated software specifications or frameworks. You can also delete your application deployment if you no longer need it.