Important:

IBM Cloud Pak® for Data Version 4.7 will reach end of support (EOS) on 31 July 2025. For more information, see the Discontinuance of service announcement for IBM Cloud Pak for Data Version 4.X.

Upgrade to IBM Software Hub Version 5.1 before IBM Cloud Pak for Data Version 4.7 reaches end of support. For more information, see Upgrading IBM Software Hub in the IBM Software Hub Version 5.1 documentation.

Known issues for Common core services

The following known issues and limitations apply to Common core services.

After upgrade, Teradata connections do not work

Applies to: Upgrades from 4.6 to 4.7

If you applied a patch to the common core services custom resource before you upgraded to Cloud Pak for Data Version 4.7, the patch is not removed during upgrade. The patch prevents Teradata connections from working after upgrade to 4.7.

Diagnosing the problem
Get the contents of the common core services custom resource:

oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} -o yaml

If you see the following contents in the output, proceed to Resolving the problem:

portal_projects_image:
    name: portal-projects@sha256
    tag: ef00faeb724212ddf9b5270237bddd51d16fd91b265284502dfed9f01d75319d
    tag_metadata: 4.6.5.1.590-amd64
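As a sketch, you can also check for the stale entry non-interactively instead of reading the full YAML output; the `grep` pattern below matches the `tag_metadata` value shown in the diagnosis output:

```shell
# Check whether the leftover 4.6.x patch entry is still present in the
# CCS custom resource (non-interactive variant of the diagnosis above).
if oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} -o yaml \
    | grep -q 'tag_metadata: 4.6.5.1.590-amd64'; then
  echo "Stale portal_projects_image patch found; follow Resolving the problem"
else
  echo "No stale patch entry found"
fi
```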



Resolving the problem

  1. Edit the common core services custom resource:
    oc edit ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}

  2. Remove the following entry from the custom resource:

    portal_projects_image:
        name: portal-projects@sha256
        tag: ef00faeb724212ddf9b5270237bddd51d16fd91b265284502dfed9f01d75319d
        tag_metadata: 4.6.5.1.590-amd64
    
  3. Save your changes to the custom resource.

  4. Wait for the custom resource status to be Completed.
    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}

If the status is InProgress, wait a few minutes before rerunning the command.
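As an alternative to editing the custom resource interactively, the stale entry can be removed in one step with a JSON patch. This is a sketch that assumes the `portal_projects_image` override sits directly under `.spec` of the `ccs-cr` resource; verify its location in the YAML output first:

```shell
# Remove the leftover 4.6.x image override in a single command
# (assumes the entry is at .spec.portal_projects_image).
oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type json \
  -p '[{"op": "remove", "path": "/spec/portal_projects_image"}]'

# Then confirm that the reconciliation reaches Completed:
oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
```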

Write job fails for Netezza Performance Server and Netezza Performance Server (optimized) connections

Applies to: 4.7.3 and later

If you run a job that uses data from the Netezza Performance Server or the Netezza Performance Server (optimized) connection with a batch mode greater than 1, the data is not inserted into the target table and the job fails. The log indicates a failure to create the external table.

Workaround: The user who runs the job must have EXTERNAL TABLE permission in the database's ADMIN schema. The Netezza Performance Server database administrator can grant the user permission with these commands:

GRANT EXTERNAL TABLE IN .<schema_name> TO <user_name>;
GRANT TABLE IN .<schema_name> TO <user_name>;

Alternatively, you can change the batch mode to 1. However, for an insert with a large amount of data, this batch mode can negatively affect performance.

Creating a project takes over 30 seconds to complete

Applies to: 4.7.0

Fixed in: 4.7.1

Creating a project might take over 30 seconds to complete when the system is under load.

Workaround: You can create project indexes asynchronously, instead of synchronously, to mitigate this problem. If the indexes are created asynchronously, some project actions might temporarily fail until the index creation is complete.

To use the asynchronous method, set the following property on the Common Core Services custom resource spec:

oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec": {
"catalog_api_properties_synchronously_create_design_docs_for_new_catalogs": "false"
}}'
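To confirm that the patch was applied, you can read the property back from the custom resource; this is an assumed verification step, not part of the documented workaround:

```shell
# Read back the property; the expected value after the patch is "false".
oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} \
  -o jsonpath='{.spec.catalog_api_properties_synchronously_create_design_docs_for_new_catalogs}'
```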

Project members of LDAP user groups might not receive platform notifications sent from tools

Applies to: 4.7.0, 4.7.1, 4.7.2, and 4.7.3

Project members of user groups configured with identity providers, such as LDAP, might not receive platform notifications sent from tools to projects. Project collaborators added directly or added as direct members of the user group still receive notifications.

rabbitmq pod fails to start during an upgrade to 4.7.0, 4.7.1, or 4.7.2

Applies to: 4.7.0, 4.7.1, and 4.7.2

Fixed in: 4.7.3

When you upgrade to Cloud Pak for Data 4.7.0, 4.7.1, or 4.7.2, the rabbitmq pod logs contain an error:

required feature flag not enabled! It must be enabled before upgrading RabbitMQ.

For example:

2022-07-13 11:29:28.366877+02:00 [error] <0.232.0> Feature flags: `implicit_default_bindings`: required feature flag not enabled! It must be enabled before upgrading RabbitMQ.
2022-07-13 11:29:28.366905+02:00 [error] <0.232.0> Failed to initialize feature flags registry: {disabled_required_feature_flag,
2022-07-13 11:29:28.366905+02:00 [error] <0.232.0>                                               implicit_default_bindings}
2022-07-13 11:29:28.372830+02:00 [error] <0.232.0>
2022-07-13 11:29:28.372830+02:00 [error] <0.232.0> BOOT FAILED
2022-07-13 11:29:28.372830+02:00 [error] <0.232.0> ===========
2022-07-13 11:29:28.372830+02:00 [error] <0.232.0> Error during startup: {error,failed_to_initialize_feature_flags_registry}
2022-07-13 11:29:28.372830+02:00 [error] <0.232.0>

Workaround:

  1. Scale down rabbitmq by running:

    oc scale sts/rabbitmq-ha -n ${PROJECT_CPD_INST_OPERANDS} --replicas 0
    
  2. Delete the rabbitmq persistent volume claims (PVCs) by running:

    oc delete pvc data-rabbitmq-ha-0 data-rabbitmq-ha-1 data-rabbitmq-ha-2 -n ${PROJECT_CPD_INST_OPERANDS}
    

Common core services will reconcile and recover.
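To confirm the recovery, you can watch the statefulset scale back up and the pods reach the Running state; this is a sketch using the rabbitmq-ha names from the steps above:

```shell
# Check that the statefulset is scaled back up and its pods are Running.
oc get sts rabbitmq-ha -n ${PROJECT_CPD_INST_OPERANDS}
oc get pods -n ${PROJECT_CPD_INST_OPERANDS} | grep rabbitmq-ha
```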

Prevention:

You can prevent this issue from occurring by running the following command before you upgrade Cloud Pak for Data:

oc exec rabbitmq-ha-0 -- rabbitmqctl enable_feature_flag all
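You can optionally verify that no required feature flag remains disabled before starting the upgrade; `rabbitmqctl list_feature_flags` prints each flag with its state:

```shell
# List all feature flags and their state; every flag should be "enabled"
# before the upgrade begins.
oc exec rabbitmq-ha-0 -- rabbitmqctl list_feature_flags
```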

Parent topic: Limitations and known issues in IBM Cloud Pak for Data