Limitations and known issues for IBM Match 360 with Watson
The following limitations and known issues apply to IBM® Match 360 with Watson™.
- Installation, upgrade, and setup
  - After upgrading from Cloud Pak for Data 4.6.0 to 4.6.4 or later, the RabbitMQ pod can enter an error state
  - When upgrading from Cloud Pak for Data 4.0.x to 4.6, the IBM Match 360 FoundationDB resource can remain in a Pending state
  - After upgrading from Cloud Pak for Data 4.5.x to 4.6, IBM Match 360 shutdown operations will not work unless you manually delete Elasticsearch jobs
- Backup and restore
- IBM Match 360 connection
- General
- Limitations
- Resolved
See the Resolved section to view the resolved issues.
For additional solutions to help you to resolve common problems that you might encounter with IBM Match 360, see Troubleshooting IBM Match 360 with Watson.
Installation, upgrade, and setup
- After upgrading from Cloud Pak for Data 4.6.0 to 4.6.4 or later, the RabbitMQ pod can enter an error state
- When upgrading from Cloud Pak for Data 4.0.x to 4.6, the IBM Match 360 FoundationDB resource can remain in a Pending state
- After upgrading from Cloud Pak for Data 4.5.x to 4.6, IBM Match 360 shutdown operations will not work unless you manually delete Elasticsearch jobs
After upgrading from Cloud Pak for Data 4.6.0 to 4.6.4 or later, the RabbitMQ pod can enter an error state
For instances of the IBM Match 360 service that have been upgraded from Cloud Pak for Data 4.6.0 directly to 4.6.4 or later, the RabbitMQ pod can enter an error state. Deleting the pod forces RabbitMQ to be re-created so that IBM Match 360 can function correctly.
- Applies to
- Upgrades from Cloud Pak for Data 4.6.0 to 4.6.4 or later
- Resolving the problem
- After completing an upgrade from Cloud Pak for Data 4.6.0, manually delete the RabbitMQ pod.
Required role: To complete this task, you must be a cluster administrator.
- Run the following commands to remove the RabbitMQ pod:
  mdm_rabbitmq_name=$(kubectl get rabbitmqcluster -n $PROJECT_CPD_OPS -l app.kubernetes.io/component=mdm-rabbitmq -o jsonpath='{.items[0].metadata.name}')
  mdm_rabbitmq_helm_secrets=$(kubectl get secrets -n $PROJECT_CPD_OPS | grep -E "sh.helm.release.v[0-9]+.${mdm_rabbitmq_name}.v[0-9]+" | awk '{printf $1 " "}')
  kubectl delete secrets ${mdm_rabbitmq_helm_secrets} -n $PROJECT_CPD_OPS
  kubectl delete RabbitMQCluster ${mdm_rabbitmq_name} -n $PROJECT_CPD_OPS
- Wait for the pod to be recreated before starting to use the IBM Match 360 service.
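The final wait step can be scripted as a simple poll. A minimal sketch with a generic retry helper; the `wait_until_ready` name and the probe are illustrative, and in practice the check would be an `oc get pods` or `kubectl wait` call against the new RabbitMQ pod:

```shell
# Generic polling helper (illustrative; not part of the documented procedure).
# check_cmd is any command that exits 0 once the pod is ready, for example:
#   kubectl get pods -n $PROJECT_CPD_OPS -l app.kubernetes.io/component=mdm-rabbitmq
wait_until_ready() {
  local check_cmd=$1 max_tries=${2:-30} delay=${3:-10}
  local i
  for ((i = 1; i <= max_tries; i++)); do
    if eval "$check_cmd" > /dev/null 2>&1; then
      echo "ready after $i tries"
      return 0
    fi
    sleep "$delay"
  done
  echo "timed out waiting for readiness" >&2
  return 1
}
```

The helper retries the given check with a fixed delay and gives up after the configured number of attempts.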
When upgrading from Cloud Pak for Data 4.0.x to 4.6, the IBM Match 360 FoundationDB resource can remain in a Pending state
When upgrading the IBM Match 360 service on Cloud Pak for Data 4.0.3 to the version delivered with Cloud Pak for Data 4.6, the IBM FoundationDB resource gets stuck in a Pending state that does not change to Ready. This occurs because certain settings that were not provided in the earlier version are required in the current version.
- Applies to
- Upgrades from Cloud Pak for Data 4.0.x to Cloud Pak for Data 4.6.0 and later
- Resolving the problem
- To avoid encountering this issue, complete the following steps before upgrading the application.
Required role: To complete this task, you must be a cluster administrator.
- Download and extract one of the following archive files, depending on your deployment version:
  - For Cloud Pak for Data 4.6.0, 4.6.1, or 4.6.2, download match360-fdb-crds.zip.
  - For Cloud Pak for Data 4.6.3 and later, download match360-fdb-crds-463.zip.
  The archive contains the following custom resource definition files:
  - foundationdb.opencontent.ibm.com_fdbclusters.yaml
  - foundationdb.opencontent.ibm.com_fdbcontrollerconfigs.yaml
- Refresh the custom resource definitions by using the files from the appropriate archive.
  oc replace -f foundationdb.opencontent.ibm.com_fdbclusters.yaml
  oc replace -f foundationdb.opencontent.ibm.com_fdbcontrollerconfigs.yaml
- Get the updated FoundationDB CR:
  oc get crd/fdbclusters.foundationdb.opencontent.ibm.com -o yaml | grep -i mirror
- Confirm that the following parameter is set in the CR:
  stage_mirror:
  - stage_mirror
- Complete the IBM Match 360 service upgrade procedure as documented in Upgrading IBM Match 360 with Watson from Version 4.0 to Version 4.6.
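The confirmation step boils down to a text check over the CRD output; a minimal sketch, with an illustrative sample excerpt standing in for a live `oc get crd` call:

```shell
# Sample CRD excerpt standing in for real "oc get crd ... -o yaml" output.
crd_excerpt='properties:
  stage_mirror:
    type: array'
if printf '%s\n' "$crd_excerpt" | grep -qi 'stage_mirror'; then
  echo "stage_mirror parameter found"
else
  echo "stage_mirror parameter missing" >&2
fi
```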
After upgrading from Cloud Pak for Data 4.5.x to 4.6, IBM Match 360 shutdown operations will not work unless you manually delete Elasticsearch jobs
For instances of the IBM Match 360 service that have been upgraded from Cloud Pak for Data 4.5.x to 4.6, the service shutdown operations will not work unless you manually delete all existing Elasticsearch jobs.
- Applies to
- Cloud Pak for Data 4.6.1 and later
- Resolving the problem
- After completing an upgrade from Cloud Pak for Data 4.5.x, manually delete all existing Elasticsearch jobs before starting to use the IBM Match 360 service.
Required role: To complete this task, you must be a cluster administrator.
Run the following commands to delete the existing Elasticsearch jobs:
oc delete pod mdm-INSTANCE_ID-ibm-elasticsearch-create-snapshot -n ${PROJECT_CPD_OPS}
oc delete job mdm-INSTANCE_ID-ibm-elasticsearch-create-snapshot-repo-job -n ${PROJECT_CPD_OPS}
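In these commands, INSTANCE_ID is the ID of your IBM Match 360 service instance. A minimal sketch of how the resource names are assembled (the instance ID shown is illustrative):

```shell
# Illustrative instance ID; substitute the ID of your own service instance.
INSTANCE_ID=1646253770937264
snapshot_pod="mdm-${INSTANCE_ID}-ibm-elasticsearch-create-snapshot"
snapshot_job="mdm-${INSTANCE_ID}-ibm-elasticsearch-create-snapshot-repo-job"
echo "$snapshot_pod"
echo "$snapshot_job"
```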
Backup and restore
A timeout error can occur in successful offline backup and restore operations
During offline backup and restore operations, timeout errors can appear in the logs. These timeout errors are inconsequential and do not affect the success of the backup or restore operations. For example:
time=2023-02-07T23:20:15.126981Z level=info msg=resource: mdm.cpd.ibm.com/v1, Resource=masterdatamanagements mdm-cr, mdmStatus: InMaintenance
time=2023-02-07T23:20:15.127038Z level=error msg=failed to wait for resource status mdmStatus: timed out waiting for the condition
time=2023-02-07T23:20:15.127063Z level=info msg=exit executeBuiltin
time=2023-02-07T23:20:15.127069Z level=error msg=1 error occurred: * timed out waiting for the condition
No workaround is required. You can safely ignore the timeout errors.
- Applies to
- Cloud Pak for Data 4.6.3 and later
IBM Match 360 connection
Exported IBM Match 360 relationship data is missing some columns
When you export IBM Match 360 relationship data by using DataStage® or Data Refinery, certain columns are missing from the exported data. This occurs even though the export job appears to be successful. The following columns are affected:
- created_user
- relationship_number
- to_record_number
- from_record_number
- relationship_type
- last_updated_user
- created_date
- last_updated_date
- Applies to
- Cloud Pak for Data 4.6.4 and later
- Resolving the problem
- To work around this problem, remove the empty columns by using Data Refinery.
- Open the data asset in Data Refinery.
- Click Prepare data. Wait while Data Refinery reads and processes a sample of the data.
- Create a Data Refinery job to remove the columns.
  - Select one of the empty columns, then click the overflow menu in the column header. Select Remove column to add the Remove column step to your Data Refinery job.
  - Repeat this step for each of the empty columns:
    - created_user
    - relationship_number
    - to_record_number
    - from_record_number
    - relationship_type
    - last_updated_user
    - created_date
    - last_updated_date
- Click the Jobs icon, then select Save and create a job. Define your job details and configure the job including optional schedules and notifications, then click Create and run.
Required role: To complete this task, you must have the Admin or Editor role to create and run the Data Refinery job.
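As an alternative sketch (not the documented Data Refinery flow), the empty columns could also be stripped from a delimited export directly; the `drop_columns` helper and its sample input below are illustrative:

```shell
# Illustrative helper: drop columns whose header matches a regex from a CSV.
# Not part of the documented workaround; shown only to clarify the effect.
drop_columns() {
  local drop_re=$1
  awk -F, -v OFS=, -v drop="$drop_re" '
    NR == 1 {
      n = 0
      for (i = 1; i <= NF; i++) if ($i !~ drop) keep[++n] = i
    }
    {
      line = ""
      for (j = 1; j <= n; j++) line = line (j > 1 ? OFS : "") $(keep[j])
      print line
    }'
}
# Sample export with an empty created_user column.
printf 'id,created_user,relationship_type\n1,,manages\n2,,knows\n' \
  | drop_columns '^(created_user|last_updated_user)$'
```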
General
IBM Match 360 operations can fail due to an Elasticsearch connection timeout issue
You can encounter failures from IBM Match 360 jobs and operations that involve data and matching. The root cause of these failures is an issue that causes Elasticsearch instances on some clusters to perform slower than usual. This slowness can result in connection timeout errors between IBM Match 360 and Elasticsearch. In this scenario, you might see an error message such as failed to connect to ES with error.
The following operations can be affected:
- Onboarding and deboarding
- Most operations and jobs that involve the Data microservice
- Most operations and jobs that involve the Matching microservice
- Applies to
- Cloud Pak for Data 4.6.5 and later
- Resolving the problem
- There is no confirmed workaround for this intermittent issue. If you encounter this problem, retry the failed operation.
The order of parameters in the IBM Match 360 SDK can change without warning
When the IBM Match 360 SDK is generated, the order of parameters can change. The OpenAPI YAML content is generated in a different order each time.
- Applies to
- Cloud Pak for Data 4.5.x, 4.6.0 and later
- Resolving the problem
- If you use the IBM Match 360 SDK, use the builder method for each parameter instead of the constructor method that includes multiple parameters. Because the order of the parameters cannot be guaranteed, using the constructor method can negatively affect backwards compatibility.
Limitations
This section describes limitations with the IBM Match 360 service. Where possible, these items provide instructions about how to work around the limitations.
The default Elasticsearch configuration supports data volumes up to 5,000,000 entities
By default, the Elasticsearch cluster that is installed with IBM Match 360 is configured to support data volumes up to 5,000,000 entities. This configuration does not limit the core IBM Match 360 load, derive, match, or report jobs in any way.
- Applies to
- All releases of IBM Match 360
- Resolving the problem
- If your enterprise requires support for a higher data volume, edit the value of the search.max_buckets parameter in the Elasticsearch cluster configuration. For example:
  name: search.max_buckets
  value: "5000000"
Required role: To complete this task, you must be a cluster administrator.
Resolved
- Resolved in Cloud Pak for Data 4.6.4
- On FIPS-enabled clusters, you can encounter timeout errors for some API calls
- When importing a configuration snapshot into a newly created master data configuration instance, the matching settings cannot be imported
- Data profile creation fails for data that was added to a catalog by using an IBM Match 360 platform connection
- Resolved in Cloud Pak for Data 4.6.3
- The IBM Match 360 CR can show an incorrect status of Completed
- Successful bulk export jobs from DataStage to the IBM Match 360 connection can suddenly terminate and write an unclear error to the log
- After being restored, the IBM Match 360 service remains in an InProgress state for longer than expected
- Resolved in Cloud Pak for Data 4.6.1
- IBM Match 360 does not support offline backup or service shutdowns in Cloud Pak for Data 4.6.0
- After restoring IBM Match 360 from backup, some IBM Redis objects are omitted from the owner reference
- Some IBM Match 360 pods might not function correctly and continuously restart on clusters that have resource constraints
- On AWS ROSA, the IBM Match 360 instance remains in a Pending state, making user access assignment impossible from the dashboard
- The FoundationDB cluster can become unavailable
- Import operations using the IBM Match 360 platform connection can fail
- Resolved in Cloud Pak for Data 4.6.0
- When uninstalling a service instance that was restored from a backup, you must manually remove some objects
- When you update MCP on the cluster, some worker nodes can become stuck
- When the Cloud Pak for Data cluster is restarted, FoundationDB backup pods can erroneously get evicted
- The backup prehooks command fails with a timeout error
- Extra steps required to create or restore a backup of IBM Match 360 when it is shut down
- When a delimited data file with a header row is added to IBM Match 360 through the Cloud Pak for Data platform, the first row of data is not loaded into the service
- You cannot view IBM Match 360 jobs on the screen
The following items describe resolved known issues and limitations.
On FIPS-enabled clusters, you can encounter timeout errors for some API calls
IBM Match 360 API calls can result in timeout errors when Cloud Pak for Data is installed on a FIPS-enabled cluster. This issue affects API calls, but does not affect IBM Match 360 data jobs such as loading, matching, and exporting data. Depending on your Cloud Pak for Data configuration and system load, you might not encounter this issue.
This issue does not affect Cloud Pak for Data deployments running on non-FIPS-enabled environments.
This issue does not have a workaround.
- Fixed in
- Cloud Pak for Data 4.6.4
- Applied to
- Cloud Pak for Data 4.6.3
When importing a configuration snapshot into a newly created master data configuration instance, the matching settings cannot be imported
When you set up a new master data configuration instance of IBM Match 360 and then try to import a configuration snapshot from a previously existing instance, you are unable to import the matching settings. The system displays a message saying that the match settings are incompatible.
- Fixed in
- Cloud Pak for Data 4.6.4
- Applied to
- Cloud Pak for Data 4.6.3
- Resolving the problem
- To work around this issue, load the snapshot in two stages.
Required role: To complete this task, you must belong to the IBM Match 360 DataEngineer user group.
- After seeing the Match settings are incompatible message, complete a partial snapshot load, including only the data model portion of the snapshot. For more information about loading snapshots, see Loading a snapshot.
- Publish the imported data model settings to ensure that they are compatible with your master data configuration.
- Load the snapshot again to separately import the match settings.
Data profile creation fails for data that was added to a catalog by using an IBM Match 360 platform connection
When you add the IBM Match 360 connection from the platform connections page and then use it to add entity and record data to a catalog, IBM Watson® Knowledge Catalog cannot profile the data. Instead, you are shown a Data profile creation failed error.
In addition to problems with profiling, this issue can also prevent you from applying data protection rules (data masking) on connected IBM Match 360 entity assets.
This issue does not have a workaround.
- Fixed in
- Cloud Pak for Data 4.6.4
- Applied to
- Cloud Pak for Data 4.6.1, 4.6.2, and 4.6.3
The IBM Match 360 CR can show an incorrect status of Completed
The IBM Match 360 custom resource can display a status of Completed before all related deployments have actually finished coming up. This is an intermittent problem, typically seen when network or performance issues cause service pods to enter a Ready state later than is typical.
- Fixed in
- Cloud Pak for Data 4.6.3
- Applied to
- Cloud Pak for Data 4.6.1 and 4.6.2
- Resolving the problem
- When all deployments are ready and available, the IBM Match 360 service is ready to use. To confirm that the service is ready for use, run the following command to get the statuses of all IBM Match 360 related deployments:
oc get deployments -n ${PROJECT_CPD_OPS} -l icpdsupport/addOnName=mdm-cr
Required role: To complete this task, you must be a cluster administrator.
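The command above lists one deployment per line with a READY column such as 2/2; a minimal sketch of checking that output programmatically, using illustrative sample lines in place of a live oc call:

```shell
# Treat the deployments as ready when every READY cell reads n/n.
# The sample lines stand in for real "oc get deployments" output.
all_ready() {
  awk 'NR > 1 { split($2, a, "/"); if (a[1] != a[2]) exit 1 }'
}
sample='NAME READY UP-TO-DATE AVAILABLE AGE
mdm-data 2/2 2 2 5m
mdm-matching 1/1 1 1 5m'
if printf '%s\n' "$sample" | all_ready; then
  echo "all deployments ready"
fi
```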
Successful bulk export jobs from DataStage to the IBM Match 360 connection can suddenly terminate and write an unclear error to the log
Bulk export jobs from DataStage to the IBM Match 360 connection can suddenly terminate, even when the job is successful. This issue causes the following error message to be written to the DataStage job logs: "ERROR IIS-SCAPI-BRIDGE-00023 <IBM_Match_360_1> Inside WRAPUP Phase"
No workaround is required for this issue. The bulk export job completes successfully. You can safely ignore the logged error message.
- Fixed in
- Cloud Pak for Data 4.6.3
- Applied to
- Cloud Pak for Data 4.6.1 and 4.6.2
After being restored, the IBM Match 360 service remains in an InProgress state for longer than expected
After restoring the IBM Match 360 service from a backup, the mdm service status remains as InProgress for an extended period of time, even after other services reach a Completed state. The mdm status eventually does resolve to Completed, but it can take up to 24 hours.
- Fixed in
- Cloud Pak for Data 4.6.3
- Applied to
- Cloud Pak for Data 4.6.1 and 4.6.2
- Resolving the problem
- To force the mdm service's status to show as Completed, run the following command:
  oc delete job mdm-post-translations-job -n ${PROJECT_CPD_INSTANCE}
Required role: To complete this task, you must be a cluster administrator.
IBM Match 360 does not support offline backup or service shutdown
In the 4.6.0 release, IBM Match 360 does not support the use of the shutdown setting to manually shut down and restart the service. This means that it also cannot support the creation of offline backups.
- Fixed in
- Cloud Pak for Data 4.6.1
- Applied to
- Cloud Pak for Data 4.6.0
- Resolving the problem
- Instead of completing offline backups and restores, use the procedure for online backups and restores. For details, see Cloud Pak for Data online backup and restore.
Required role: To complete this task, you must be a cluster administrator.
After restoring IBM Match 360 from backup, some IBM Redis objects are omitted from the owner reference
After restoring an IBM Match 360 service instance from an online or offline backup, you must add certain IBM Redis objects to the service's owner reference. If you do not, then these objects will not be removed when the service is later uninstalled. If the objects remain on the cluster and then you try to reinstall IBM Match 360 on the same cluster, you will encounter installation problems.
- Fixed in
- Cloud Pak for Data 4.6.1
- Applied to
- Cloud Pak for Data 4.6.0
- Resolving the problem
- After restoring an IBM Match 360 service instance from backup, run the following script to ensure that missing Redis objects are included in the owner reference. Update the value defined for MDM_INSTANCE_NAME if it is different than the default, mdm-cr.
  # Update MDM instance name if different than mdm-cr
  export MDM_INSTANCE_NAME=mdm-cr
  REDIS_SENTINEL_CR=$(oc get redissentinels --no-headers -n ${PROJECT_CPD_OPS} | awk '{print $1}')
  MDM_UID=$(oc get masterdatamanagements.mdm.cpd.ibm.com $MDM_INSTANCE_NAME -n ${PROJECT_CPD_OPS} -o json | jq '.metadata.uid')
  oc patch redissentinels $REDIS_SENTINEL_CR -n ${PROJECT_CPD_OPS} --type=merge -p '{"metadata":{"ownerReferences":[{"apiVersion": "mdm.cpd.ibm.com/v1", "kind": "MasterDataManagement", "name": "'$MDM_INSTANCE_NAME'", "uid": '$MDM_UID'}]}}'
  FORMATION_CR=$(oc get formation.redis --no-headers -n ${PROJECT_CPD_OPS} | awk '{print $1}')
  SENTINEL_UID=$(oc get redissentinels $REDIS_SENTINEL_CR -n ${PROJECT_CPD_OPS} -o json | jq '.metadata.uid')
  oc patch formation.redis $FORMATION_CR -n ${PROJECT_CPD_OPS} --type=merge -p '{"metadata":{"ownerReferences":[{"apiVersion": "redis.databases.cloud.ibm.com/v1", "blockOwnerDeletion":true, "controller":true, "kind": "RedisSentinel", "name": "'${REDIS_SENTINEL_CR}'", "uid": '${SENTINEL_UID}'}]}}'
Required role: To complete this task, you must be a cluster administrator.
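The merge patch that the script applies can be seen in isolation; a minimal sketch with an illustrative name and uid (the `owner_ref_patch` helper is hypothetical):

```shell
# Hypothetical helper: build the ownerReferences merge patch that the script
# above passes to "oc patch redissentinels" (values here are illustrative).
owner_ref_patch() {
  local name=$1 uid=$2
  printf '{"metadata":{"ownerReferences":[{"apiVersion":"mdm.cpd.ibm.com/v1","kind":"MasterDataManagement","name":"%s","uid":"%s"}]}}' "$name" "$uid"
}
owner_ref_patch mdm-cr 0f1e2d3c
```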
Some IBM Match 360 pods might not function correctly and continuously restart on clusters that have resource constraints
On resource-constrained clusters, such as those that are scaled to use the Small t-shirt size, some IBM Match 360 (mdm) pods can repeatedly restart because there isn't enough CPU capacity for the container to run correctly. This issue has been reported in the mdm_job pod, but could also occur on other IBM Match 360 pods such as mdm_config, mdm_data, or mdm_matching.
To check whether a pod is affected, complete the following steps:
- Get the detailed information of your pod.
  oc describe pod/<POD_NAME>
- Check the value of the pod container's Limits: cpu property, such as in the following example:
  Containers:
    mdm-job:
      Limits:
        cpu: 500m
        ephemeral-storage: 350Mi
        memory: 2Gi
  If the CPU limit is 500m, then you might need to increase the value. For details, see the workaround section.
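Reading the limit out of the describe output can be scripted; a minimal sketch over illustrative sample text (a live check would pipe oc describe pod into it):

```shell
# Extract the first cpu value under a Limits: block from "describe"-style text.
# The sample here is illustrative, not real cluster output.
cpu_limit() {
  awk '/Limits:/ { in_limits = 1; next }
       in_limits && /cpu:/ { print $2; exit }'
}
limit=$(printf 'Limits:\n  cpu: 500m\n  ephemeral-storage: 350Mi\n' | cpu_limit)
echo "cpu limit is $limit"
```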
- Fixed in
- Cloud Pak for Data 4.6.1
- Applied to
- Cloud Pak for Data 4.6.0
- Resolving the problem
- To resolve this issue, increase the CPU limit from 500m to 1000m in the IBM Match 360 custom resource (mdm-cr).
  Required role: To complete this task, you must be a cluster administrator.
  Update the CR to increase the resource allocation for the affected pods. For example, to increase resources for the mdm_job pod, update the CR as follows:
  - Edit the IBM Match 360 custom resource (mdm-cr).
    oc edit mdm mdm-cr --namespace ${PROJECT_CPD_OPS}
  - Add or edit the following properties in the CR's Spec section to increase the CPU limit:
    mdm_job:
      deployment:
        replicas: 3
        resources:
          limits:
            cpu: 1000m
            ephemeral-storage: 350Mi
            memory: 2Gi
          requests:
            cpu: 250m
            ephemeral-storage: 250Mi
            memory: 1Gi
On AWS ROSA, the IBM Match 360 instance remains in a Pending state, making user access assignment impossible from the dashboard
In the Amazon Web Services ROSA environment, IBM Match 360 service instances can remain in a Pending state in the Cloud Pak for Data dashboard, regardless of the actual deployment completion status. This issue means that administrators cannot use the dashboard to assign new IBM Match 360 user access and permissions.
- Fixed in
- Cloud Pak for Data 4.6.1
- Applied to
- Cloud Pak for Data 4.5.3 and 4.6.0
- Resolving the problem
- Instead of using the Cloud Pak for Data dashboard to manage IBM Match 360 users and permissions, use the REST API. For details about using the API to manage IBM Match 360 user access, see Giving users access to IBM Match 360 with Watson.
Required role: To complete this task, you must be a Cloud Pak for Data administrator.
The FoundationDB cluster can become unavailable
In certain scenarios when the IBM FoundationDB (FDB) storage pods are restarted, such as after service upgrades, operating system upgrades, or cluster restarts, the corresponding IP addresses might change. Because of this, IBM Match 360 service pods are unable to connect to FDB. This issue is due to a limitation with the FDB operator. To determine whether this issue is the cause of your problem, run the following script:
#!/usr/bin/env bash
set -e
DATA_POD_NAME=$(oc get pods -lapp.kubernetes.io/component=mdm-data | awk '{ print $1}' | tail -n 1)
echo -e "[INFO] Checking for FDB coordinator error using Match 360 pod '${DATA_POD_NAME}'"
STATUS=$(oc exec $DATA_POD_NAME -- fdbcli --exec "status json" | jq .client.coordinators)
QUORUM_REACHABLE=$(echo -e $STATUS | jq .quorum_reachable)
if [ $QUORUM_REACHABLE == "false" ]; then
echo -e "[INFO] Coordinator fix is required! Quorum is not reachable."
exit
fi
COORDINATORS=$(echo -e $STATUS | jq -c .coordinators[])
for C in $COORDINATORS; do
REACHABLE=$(echo -e $C | jq .reachable)
if [ $REACHABLE == "false" ]; then
echo -e "[INFO] Coordinator fix is required! One or more coordinators are not reachable."
exit
fi
done
echo -e "[INFO] Coordinator fix is NOT required!"
If this issue is the cause of your problem, the script output includes "Coordinator fix is required!" When this problem occurs, the FDB cluster is unable to recover on its own. Complete the workaround steps to recover FDB and keep IBM Match 360 services up-to-date.
- Fixed in
- Cloud Pak for Data 4.6.1
- Applied to
- All IBM Match 360 versions before Cloud Pak for Data 4.6.1
- Resolving the problem
- Complete the following steps to resolve this issue.
Required role: To complete this task, you must be a cluster administrator.
- Set the following environment variables:
  - Set CPD_OPERAND_NAMESPACE to the namespace of the project where IBM Match 360 is installed.
  - Set OS to the operating system type for your environment, either linux or macos.
- Run the following script to repair the issue.
  #!/usr/bin/env bash
  set -e
  if [ -z "$CPD_OPERAND_NAMESPACE" ]; then
    echo -e "[ERROR] Please set environment variable 'CPD_OPERAND_NAMESPACE' to MDM operand namespace."
    exit 1
  fi
  if [ -z "$OS" ]; then
    echo -e "[ERROR] Please set environment variable 'OS' to 'linux' or 'macos' depending on environment."
    exit 1
  fi
  oc project "${CPD_OPERAND_NAMESPACE}"
  kubectl proxy &
  proxy_pid=$!
  echo -e "[INFO] Started kubectl proxy at pid ${proxy_pid}"
  FDB_CLUSTER_NAME=$(oc get pods -l app.kubernetes.io/name=mdm-foundationdb,fdb-instance-id=storage-1 -o jsonpath='{.items[0].metadata.labels.fdb-cluster-name}')
  FDB_CONFIGMAP=$(oc get cm -l foundationdb.org/fdb-cluster-name=$FDB_CLUSTER_NAME | awk '{ print $1 }' | tail -1)
  CLUSTER_PREFIX=$(oc get cm "${FDB_CONFIGMAP}" -o jsonpath={.data.cluster-file} | cut -d "@" -f 1)
  STORAGE_IPS=($(oc get pods -l foundationdb.org/fdb-process-class=storage -o wide | awk '{ print $6 }' | tail -3))
  CONNECTION_STRING="${CLUSTER_PREFIX}@${STORAGE_IPS[0]}:4500:tls,${STORAGE_IPS[1]}:4500:tls,${STORAGE_IPS[2]}:4500:tls"
  echo -e "[INFO] New connection string is '${CONNECTION_STRING}'. Updating FdbCluster..."
  curl localhost:8001/apis/apps.foundationdb.org/v1beta1/namespaces/${CPD_OPERAND_NAMESPACE}/foundationdbclusters/${FDB_CLUSTER_NAME}/status --header "Content-Type: application/json-patch+json" --request PATCH --data "[{\"op\": \"replace\", \"path\": \"/status/connectionString\", \"value\": \"${CONNECTION_STRING}\"}]"
  function cleanup {
    echo -e "[INFO] Killing kubectl proxy" >&2
    kill $proxy_pid
  }
  trap cleanup EXIT
  echo -e "\n[INFO] FdbCluster updated. Patching MDM configmap '${FDB_CONFIGMAP}'..."
  oc patch cm "${FDB_CONFIGMAP}" -p "{\"data\": {\"cluster-file\": \"${CONNECTION_STRING}\"}}" --type=merge
  echo -e "[INFO] Configmap patched. Executing FDB 'fix-coordinator-ips'..."
  VERSION="v0.43.0"
  curl -sLo kubectl-fdb "https://github.com/FoundationDB/fdb-kubernetes-operator/releases/download/${VERSION}/kubectl-fdb-${VERSION}-${OS}"
  chmod +x kubectl-fdb
  BIN_DIR=$(dirname $(which kubectl))
  sudo mv ./kubectl-fdb ${BIN_DIR}
  kubectl-fdb fix-coordinator-ips -c ${FDB_CLUSTER_NAME}
  # for older versions we need to manually restart the fdb servers because pkill is not available
  pods=$(oc get pods -l app.kubernetes.io/name=mdm-foundationdb,fdb-process-class --no-headers -o custom-columns=NAME:.metadata.name)
  for pod in $pods; do
    oc exec $pod -c foundationdb -t -- bash -O extglob -c 'all_procs=$(cd /proc && echo -e +([0-9])); procs=($all_procs); for proc in "${procs[@]}"; do (ls -l /proc/$proc/exe 2>/dev/null | grep fdbserver) && kill $proc && echo -e "Killed fdbserver proc $proc" && break; done'
  done
  echo -e "[INFO] FDB connection details were corrected. Restarting all MDM service pods..."
  oc delete pods --wait -lapp.kubernetes.io/component=mdm-data
  oc delete pods --wait -lapp.kubernetes.io/component=mdm-model
  oc delete pods --wait -lapp.kubernetes.io/component=mdm-configuration
  oc delete pods --wait -lapp.kubernetes.io/component=mdm-em-ui
  oc delete pods --wait -lapp.kubernetes.io/component=mdm-config-ui
  oc delete pods --wait -lapp.kubernetes.io/component=mdm-job
  oc delete pods --wait -lapp.kubernetes.io/component=mdm-matching
  echo -e "[INFO] Old pods have been deleted. Waiting for new pods to be ready for up to 120s..."
  oc wait --for=condition=ready --timeout=120s pod -lapp.kubernetes.io/component=mdm-data
  oc wait --for=condition=ready --timeout=120s pod -lapp.kubernetes.io/component=mdm-model
  oc wait --for=condition=ready --timeout=120s pod -lapp.kubernetes.io/component=mdm-configuration
  oc wait --for=condition=ready --timeout=120s pod -lapp.kubernetes.io/component=mdm-em-ui
  oc wait --for=condition=ready --timeout=120s pod -lapp.kubernetes.io/component=mdm-config-ui
  oc wait --for=condition=ready --timeout=120s pod -lapp.kubernetes.io/component=mdm-job
  oc wait --for=condition=ready --timeout=120s pod -lapp.kubernetes.io/component=mdm-matching
  echo -e "[INFO] All MDM service pods were restarted. Connection to FDB should be restored."
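The key step in the repair script is rebuilding the FDB connection string from the cluster prefix and the three storage pod IPs; a minimal sketch of that assembly, with illustrative values (the `build_connection_string` helper is hypothetical):

```shell
# Rebuild a FoundationDB connection string of the form
# <prefix>@<ip1>:4500:tls,<ip2>:4500:tls,<ip3>:4500:tls
# Prefix and IPs below are illustrative placeholders.
build_connection_string() {
  local prefix=$1; shift
  local parts=() ip
  for ip in "$@"; do
    parts+=("${ip}:4500:tls")
  done
  local joined
  joined=$(IFS=,; printf '%s' "${parts[*]}")
  printf '%s@%s\n' "$prefix" "$joined"
}
build_connection_string "mdm_fdb:abcd1234" 10.128.0.11 10.128.0.12 10.128.0.13
```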
Import operations using the IBM Match 360 platform connection can fail
IBM Match 360 connection imports can fail due to changes in how Cloud Pak for Data handles PVC storage volumes.
- Fixed in
- Cloud Pak for Data 4.6.1
- Applied to
- Cloud Pak for Data 4.6.0
- Resolving the problem
- To resolve this issue, add the label zen_storage_volume_include: "true" to the IBM Match 360 PVC mdm-shared-persistence-<MATCH360_INSTANCE_ID>.
  Required role: To complete this task, you must be a cluster administrator.
  - Switch to the namespace where IBM Match 360 is installed, ${PROJECT_CPD_OPS}.
  - Run the following command to add the required label:
    oc patch pvc/$(oc get pvc |grep mdm-shared-persistence|awk '{print $1}') -p '{"metadata":{"labels":{"zen_storage_volume_include":"true"}}}'
When uninstalling a service instance that was restored from a backup, you must manually remove some objects
If you uninstall an IBM Match 360 service instance that was previously restored from a backup, you must manually remove some objects to fully complete the uninstallation. If you fail to remove these objects and then try to reinstall IBM Match 360 on the same cluster, you will encounter installation problems.
- Fixed in
- Cloud Pak for Data 4.6.0
- Applied to
- Cloud Pak for Data 4.5.x
When you update MCP on the cluster, some worker nodes can become stuck
When you attempt to update machine configuration pools (MCP) on the Cloud Pak for Data cluster, you can encounter an issue that causes some worker nodes to become stuck. This occurs when pod disruption budget (PDB) resources are in place that prevent certain pods from being evicted during the MCP update process.
- Fixed in
- Cloud Pak for Data 4.6.0
- Applied to
- Cloud Pak for Data 4.0.0 to 4.5.3
When the Cloud Pak for Data cluster is restarted, FoundationDB backup pods can erroneously get evicted
After a cluster restart, the FoundationDB backup pod can show a status of Evicted. When this happens, the pod will recreate itself on the cluster. This issue does not affect the health of the cluster or the operation of the IBM Match 360 service, and no backup snapshot data is lost from the FoundationDB backup pods.
- Fixed in
- Cloud Pak for Data 4.6.0
- Applied to
- Cloud Pak for Data 4.5.3
The backup prehooks command fails with a timeout error
When running the backup prehooks command in a Cloud Pak for Data cluster, the command can fail intermittently with a timeout error related to the IBM Match 360 resource.
When reconciliation of the IBM Match 360 operator gets triggered shortly after entering an InMaintenance state, the CR status can switch to InProgress and become stuck in that state. The operator then attempts to fetch the IBM Match 360 service instance ID before checking the CR status. The fetch action always fails if the status is already InMaintenance.
- Fixed in
- Cloud Pak for Data 4.6.0
- Applied to
- Cloud Pak for Data 4.5.0 to 4.5.3
Extra steps required to create or restore a backup of IBM Match 360 when it is shut down
Extra steps are required for the following operations:
- Creating a backup of the IBM Match 360 service when the status is shutdown: true.
- Restoring a backup of the IBM Match 360 service that was taken when the status was shutdown: true.
- Fixed in
- Cloud Pak for Data 4.5.2 and 4.6.0
- Applied to
- Cloud Pak for Data 4.5.2 and 4.5.3
When a delimited data file with a header row is added to IBM Match 360 through the Cloud Pak for Data platform, the first row of data is not loaded into the service
When IBM Match 360 retrieves a delimited (CSV, TSV, or PSV) file for bulk load from the Cloud Pak for Data platform, the first row of data can be erroneously removed. This occurs when there is a header row. The header row is removed upon retrieval by Cloud Pak for Data, but then IBM Match 360 also erroneously strips the first row, under the assumption that it is removing the header row. As a result, the first row of data in a delimited file containing a header row is not loaded into the IBM Match 360 service.
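The double strip described above can be illustrated in miniature; the sample file and the two tail calls below are illustrative (the first strip stands in for the platform's header removal, the second for the service's):

```shell
# A 3-line sample file: header plus two data rows.
# Stripping the "header" twice loses the first data row.
sample=$'id,name\n1,Alice\n2,Bob'
after_platform=$(printf '%s\n' "$sample" | tail -n +2)        # header removed
after_service=$(printf '%s\n' "$after_platform" | tail -n +2) # row 1 lost too
printf '%s\n' "$after_service"
```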
- Fixed in
- Cloud Pak for Data 4.6.0
- Applied to
- Cloud Pak for Data 4.0 to 4.5.3
You cannot view IBM Match 360 jobs on the screen
IBM Match 360 job details do not display on the Cloud Pak for Data screen. Even when filtering by the Master Data Management asset type in the Jobs screen, you cannot see any job information associated with IBM Match 360 service instances.
- Fixed in
- Cloud Pak for Data 4.6.0
- Applied to
- Cloud Pak for Data 4.5.3