Known issues and limitations for Data Refinery
- Known issues
  - Tokenize GUI operation does not work for Data Refinery on R 4.3 with Spark 3.4 or on Default Spark 3.4 + R 4.3 environment on IBM Power
  - Data Refinery might fail when upgrading IBM Software Hub to version 5.1.2
  - Error opening a Data Refinery flow
  - Target table loss and job failure when you use the Update option in a Data Refinery flow
  - Concatenate operation does not allow you to put the new column next to the original column
  - Google BigQuery connection: TRUNCATE TABLE statement fails in Data Refinery flow jobs
- Limitations
  - Error opening a Data Refinery flow with a connection with personal credentials
  - Data Refinery does not support the Satellite Connector
  - Data column headers cannot contain special characters
  - Unable to use masked data in visualizations from data assets imported from version 4.8 or earlier
  - Tokenize GUI operation might not work on large data assets
  - Data Refinery does not support Kerberos authentication
Known issues
See also Known issues for Common core services for issues in other services in Cloud Pak for Data that might affect Data Refinery.
- Tokenize GUI operation does not work for Data Refinery on R 4.3 with Spark 3.4 or on Default Spark 3.4 + R 4.3 environment on IBM Power
Applies to: 5.1.2 and later
Data Refinery flow jobs that include the Tokenize GUI operation do not work in the Spark and R environment.
Workaround: Use the Default Data Refinery XS environment for small data sets.
- Data Refinery might fail when upgrading IBM® Software Hub to version 5.1.2
Applies to: 5.1.2 and later
When you upgrade Watson Studio or IBM Knowledge Catalog to IBM Software Hub version 5.1.2 from an earlier version, the Data Refinery upgrade might fail to complete.
Workaround: Apply this patch to ensure that Data Refinery completes the upgrade successfully:
oc patch pvc volumes-datarefinerylibvol-pvc -n <CPD_INSTANCE_NAMESPACE> -p '{"metadata":{"finalizers":null}}' --type=merge
or in Segregation of Duty (SoD) mode:
oc patch pvc volumes-datarefinerylibvol-pvc -n <DATAPLANE_NAMESPACE> -p '{"metadata":{"finalizers":null}}' --type=merge
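Optionally, before you retry the upgrade, you can confirm that the finalizers were removed. This check is a suggestion, not part of the documented workaround; it uses the standard jsonpath output option of the oc CLI, and empty output means that the patch succeeded:
oc get pvc volumes-datarefinerylibvol-pvc -n <CPD_INSTANCE_NAMESPACE> -o jsonpath='{.metadata.finalizers}'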
- Error opening a Data Refinery flow
Applies to: 5.1.0 and later
When you open the Data Refinery user interface, you might see the following error:
The selected data set wasn't loaded. Error occurred while launching the container (retry attempts exceeded).
Workaround: Delete the existing interactive RuntimeAssembly (RTA) as follows:
oc -n <CPD_INSTANCE_NAMESPACE> delete rta -l type=service,component=shaper
or in Segregation of Duty (SoD) mode:
oc -n <DATAPLANE_NAMESPACE> delete rta -l type=service,component=shaper
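To check that the RuntimeAssembly was deleted, you can list any remaining resources that match the same label selector. This verification step is a suggestion, not part of the documented workaround:
oc -n <CPD_INSTANCE_NAMESPACE> get rta -l type=service,component=shaper
If no resources are returned, the deletion succeeded and you can reopen the Data Refinery flow.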
- Target table loss and job failure when you use the Update option in a Data Refinery flow
Applies to: 5.1.0 and later
Using the Update option for the Write mode target property for relational data sources (for example, Db2) replaces the original target table, and the Data Refinery job might fail.
Workaround: Use the Merge option as the Write mode and Append as the Table action.
- Concatenate operation does not allow you to put the new column next to the original column
Applies to: 5.1.0 and later
When you add a step with the Concatenate operation to your Data Refinery flow, and you select Keep original columns and also select Next to original column for the new column position, the step fails with an error.
Workaround: Select Right-most column in the data set for the new column position.
- Google BigQuery connection: TRUNCATE TABLE statement fails in Data Refinery flow jobs
Applies to: 5.1.0 and later
If you run a Data Refinery flow job with data from a Google BigQuery connection and the DDL includes a TRUNCATE TABLE statement, the job fails.
Limitations
- Error opening a Data Refinery flow with a connection with personal credentials
When you open a Data Refinery flow that uses a data asset that is based on a connection with personal credentials, you might see an error.
Workaround: To open a Data Refinery flow that has assets that use connections with personal credentials, you must first unlock the connection. You can unlock the connection either by editing the connection and entering your personal credentials, or by previewing the asset in the project, where you are prompted to enter your personal credentials. After you unlock the connection, you can open the Data Refinery flow.
- Data Refinery does not support the Satellite Connector
You cannot use a Satellite Connector to connect to a database with Data Refinery.
- Data column headers cannot contain special characters
- Unable to use masked data in visualizations from data assets imported from version 4.8 or earlier
Applies to: 5.1.0 and later
- Tokenize GUI operation might not work on large data assets
- Data Refinery does not support Kerberos authentication
Data Refinery does not support connecting to data with Kerberos authentication.