Known issues on FIPS-enabled clusters
The following known issues apply to services on FIPS-enabled clusters.
- General issues
- Service-specific issues
General issues
- Curl commands fail to connect to FIPS clusters because of TLS errors
-
Applies to: 5.0.0 and later
When you connect to a FIPS-enabled cluster, client programs such as curl defer to OpenSSL to obtain the cipher list that they use to establish connections. Some versions of OpenSSL do not come with approved TLS 1.3 ciphers, such as TLS_AES_128_GCM_SHA256.
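A quick way to see whether a cipher list contains the approved suite is to check it directly. The helper below is a sketch (the function name and the literal cipher lists are illustrative); on a real client you would feed it the output of `openssl ciphers -s -tls1_3`:

```shell
# has_approved_cipher: report whether a colon-separated cipher list
# (the format printed by `openssl ciphers`) contains the approved
# TLS 1.3 suite TLS_AES_128_GCM_SHA256.
has_approved_cipher() {
  printf '%s\n' "$1" | tr ':' '\n' | grep -qx 'TLS_AES_128_GCM_SHA256'
}

# Illustrative lists; substitute the real output of `openssl ciphers -s -tls1_3`.
if has_approved_cipher "TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256"; then
  echo "approved TLS 1.3 cipher available"
else
  echo "missing TLS_AES_128_GCM_SHA256; consider curl --tls-max 1.2"
fi
```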
If you see TLS errors when you use curl commands to connect to a FIPS-enabled cluster, rerun the command with the `--tls-max 1.2` curl parameter, or update your client's OpenSSL version to one that supports TLS 1.3 ciphers.
- You cannot connect to external SMB storage volumes on FIPS-enabled clusters
-
Applies to: 5.0.0 and later
The SMB CSI Driver for Kubernetes (`csi-smb-driver`), which is required to connect to external SMB storage volumes, is not supported on FIPS-enabled clusters.
Common core services
Not all connectors are supported in a FIPS-enabled environment. See the information for the individual connectors at Connectors for projects and catalogs.
- Services that use the Flight service
-
Applies to: 5.0.0 and later
On a FIPS-enabled cluster, the Flight service blocks the connection to any data source that does not support FIPS.
Data Refinery
- Data Refinery flow job fails in a FIPS cluster for an SAV target file with encryption
-
Applies to: 5.0.0 and later
On a FIPS-enabled cluster, if you run a Data Refinery flow job where the target is an SAV file and you enter an encryption key, the job fails.
DataStage
- Cannot run a DataStage job with data from certain connections
-
Applies to: 5.0.0 and later
DataStage does not support the Elasticsearch connection in a FIPS-enabled environment.
- DataStage pods may have problems when FIPS_MODE does not match the cluster's FIPS setting
-
Applies to: 5.0.0 and later
When deploying DataStage on FIPS-enabled OpenShift clusters, the `ds-px-default-ibm-datastage-px-compute` pods may have problems. For example, this excerpt is from ds-px-default-ibm-datastage-px-compute-0_px-compute-previous.log:

```
2024-10-23T16:11:05.704147993Z + '[' -e /proc/sys/crypto/fips_enabled ']'
2024-10-23T16:11:05.704195382Z + grep 1 /proc/sys/crypto/fips_enabled
2024-10-23T16:11:05.706675750Z 1
2024-10-23T16:11:05.706881035Z Cluster is FIPS enabled but FIPS_MODE is not set to true
2024-10-23T16:11:05.706896001Z + '[' false '!=' true ']'
2024-10-23T16:11:05.706896001Z + echo 'Cluster is FIPS enabled but FIPS_MODE is not set to true'
```
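The check that appears in the log can be reproduced outside a pod. The sketch below mirrors that startup logic; the function name and the flag-file parameter are illustrative, so the logic can be tried against a test file instead of `/proc`:

```shell
# fips_mode_matches: mirror of the startup check shown in the log excerpt.
# Takes the path of a fips_enabled flag file and the FIPS_MODE value.
fips_mode_matches() {
  flag_file=$1
  fips_mode=$2
  if [ -e "$flag_file" ] && grep -q 1 "$flag_file"; then
    # Node is FIPS enabled: FIPS_MODE must be "true".
    [ "$fips_mode" = "true" ]
  else
    # Node is not FIPS enabled: nothing to reconcile.
    true
  fi
}

# On a real node you would run:
#   fips_mode_matches /proc/sys/crypto/fips_enabled "$FIPS_MODE" ||
#     echo "Cluster is FIPS enabled but FIPS_MODE is not set to true"
```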
Execution Engine for Apache Hadoop
- You cannot connect to a JDBC data source on your CDH Cluster without configuring the database to support FIPS encryption
-
Cloudera doesn't support FIPS for JDBC drivers. If you are using a FIPS-enabled cluster, you cannot establish a connection with a Cloudera cluster with the JDBC driver that is provided by Cloudera.
- You cannot use Livy to connect to a Spark cluster without loading the digest package
-
Applies to: 5.0.0 and later
If you need to use Livy to connect to a Spark cluster or use any other packages that depend on the digest package, you must load the digest package from a non-FIPS-compliant library. To load the digest package, run the following commands:

```R
library(digest, lib.loc='/opt/not-FIPS-compliant/R/library')
library(sparklyr)
```

Note: If you load the digest package, Execution Engine for Apache Hadoop is no longer FIPS-compliant.
- You cannot connect to Impala via Execution Engine for Hadoop or Hive via Execution Engine for Hadoop data sources
-
Applies to: 5.0.0 and later
In FIPS-enabled clusters, you cannot connect to Impala via Execution Engine for Hadoop or Hive via Execution Engine for Hadoop data sources.
IBM Knowledge Catalog
- Communication with external Kafka does not work in a FIPS-enabled cluster.
RStudio® Server Runtimes
If you need to use Livy to connect to a Spark cluster or use any other packages that depend on the digest package, such as sparklyr, Shiny®, arulesViz, or htmltools packages, you must load the digest package from a non-FIPS compliant library. See Using Livy to connect to a Spark cluster.
- You cannot connect to a database with JDBC when the remote server does not support secure connections that use TLS 1.3 or TLS 1.2 with Extended Master Secret
-
Applies to: 5.0.0 and later
When Cloud Pak for Data is running on a FIPS-enabled cluster, you cannot connect from a notebook to a database with JDBC if the remote server does not support secure connections that use TLS 1.3 or TLS 1.2 with Extended Master Secret.
To work around this problem, you can create a custom notebook image that disables FIPS for Java. Follow these steps to create and modify the custom notebook image.
- Follow the documented steps to create a custom image for notebooks.
- Modify the image by setting `security.useSystemPropertiesFile` to `false` in `$JAVA_HOME/conf/security/java.security`, as follows:
  - Review the example code for Adding customizations to images. The sample code shows where to insert the following statement, between `USER root:root` and `USER wsbuild:wsbuild`.
  - Add this statement to the Dockerfile so that it runs the modification from root when the file runs:

    ```
    RUN sed -i.orig -e /security.useSystemPropertiesFile=/s/true/false/ $JAVA_HOME/conf/security/java.security
    ```

  For more information about the effects of this change, see Configure OpenJDK 17 in FIPS mode.
- Run your notebook with the custom image that you created.
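To see what the `RUN sed` line actually changes, you can try the same substitution against a one-line sample file (the file name here is illustrative; in the image the target is `$JAVA_HOME/conf/security/java.security`):

```shell
# Create a sample fragment of java.security with the property enabled.
printf 'security.useSystemPropertiesFile=true\n' > java.security.sample

# Same substitution the Dockerfile runs; -i.orig keeps a backup copy.
sed -i.orig -e /security.useSystemPropertiesFile=/s/true/false/ java.security.sample

cat java.security.sample       # security.useSystemPropertiesFile=false
cat java.security.sample.orig  # security.useSystemPropertiesFile=true
```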
Watson OpenScale
- You cannot upload training data with Cloud Object Storage
- If you're using Watson OpenScale on a FIPS-enabled cluster, you cannot upload training data to Cloud Object Storage. To work around this issue, you must upload training data to Db2 to enable model evaluations.
Watson Studio
- You cannot use the Visual Studio Code extension when the Cloud Pak for Data route uses reencrypt termination
-
The Visual Studio Code extension does not work on a FIPS-enabled cluster when the Cloud Pak for Data route uses `reencrypt` termination.
- You cannot connect to a database with JDBC when the remote server does not support secure connections that use TLS 1.3 or TLS 1.2 with Extended Master Secret
-
Applies to: 5.0.0 and later
When Cloud Pak for Data is running on a FIPS-enabled cluster, you cannot connect from a notebook to a database with JDBC if the remote server does not support secure connections that use TLS 1.3 or TLS 1.2 with Extended Master Secret.
To work around this problem, you can create a custom notebook image that disables FIPS for Java. Follow these steps to create and modify the custom notebook image.
- Follow the documented steps to create a custom image for notebooks.
- Modify the image by setting `security.useSystemPropertiesFile` to `false` in `$JAVA_HOME/conf/security/java.security`, as follows:
  - Review the example code for Adding customizations to images. The sample code shows where to insert the following statement, between `USER root:root` and `USER wsbuild:wsbuild`.
  - Add this statement to the Dockerfile so that it executes the modification from root when the file runs:

    ```
    RUN sed -i.orig -e /security.useSystemPropertiesFile=/s/true/false/ $JAVA_HOME/conf/security/java.security
    ```

  For more information about the effects of this change, see Configure OpenJDK 17 in FIPS mode.
- Run your notebook with the custom image that you created.
Watson Machine Learning
- Deployments with certain constricted software specifications fail after an upgrade
- Applies to: 5.0.0