Limitations and known issues for Watson Knowledge Catalog
These known issues apply to Watson Knowledge Catalog.
- General
- Upgrade to 3.5.x fails because of a problem with post-install processing
- Caption for permission does not display correctly
- Shared services and sample-data-job pods fail during the installation or upgrade process
- The wkc-post-upgrade-init pod fails during upgrade installation
- User groups not supported in certain areas
- Categories might not show the latest updates
- Exporting Custom Attribute definitions and data lineage reports does not work in Chrome browser
- Users in time zones behind GMT see their selected date value shown as one day before the actual value
- Migrated legacy HDFS connections default to SSL
- Execution history cannot be migrated because request entity is too large
- When the admin user is disabled, upgrading from 3.5.x to 3.5.y fails with ccs-post-install job failure
- Can’t add or preview a data asset with over 5,000 columns in a catalog
- Searching platform connections might not return results
- Catalogs
- After you upgrade from Cloud Pak for Data 3.0.1 to 3.5.x, the password is cleared for connections in a Watson Knowledge Catalog catalog
- The default “all_users” group is missing from the Platform assets catalog
- Business Analyst role does not have the Access catalog permission
- Reference connections do not sync correctly
- Blank page produced when you hover over an asset
- Missing previews
- Add collaborators with lowercase email addresses
- Multiple concurrent connection operations might fail
- After upgrade, you can’t add or test connections during metadata import or discovery
- Can’t enable enforcing data protection rules after catalog creation
- Assets are blocked if evaluation fails
- Asset owner might be displayed as “Unavailable”
- Synchronizing information assets
- You can’t remove information assets from the default catalog or a project
- Log-in prompt is displayed in Organize section
- Default catalog is missing
- Quick scan jobs remain in status QUEUED for a long time after restart of the quick scan pod
- An error occurs when you import metadata
- An error occurs when you retrieve the catalog details
- Update profiling triggered by a non-owner of the asset in a governed catalog fails
- Business terms can be edited by all admin and editor users regardless of asset membership
- Adding related assets in the UI is not supported for users with Viewer access
- Cannot see data types in profiling results after you publish to a catalog from NGQS
- Unable to preview connected assets
- Can’t preview image and text assets from an IBM Cloud Object Storage connection with service credentials
- Can’t preview assets of the file type xlsm
- Special or double-byte characters in the data asset name are truncated on download
- Governance artifacts
- Import process times out
- Use of recent version of Chrome might cause problems with viewing asset details
- Related term is not displayed
- Data protection rule does not show in related content tab
- Data classes might not have the correct data class hierarchy
- Async publish redirection to published asset not working properly
- When you add a governance artifact to a category on the Category details page, the search feature does not work
- Unable to create a rule
- Cannot view glossary artifacts that have a custom attribute of type “numeric” if the value is set to 0
- Asset sync issues are produced when you try to publish terms
- An error occurs when the user attempts to add a custom attribute of text type for a category
- Delay in showing artifacts after upgrade
- Query strings longer than 15 characters of a single token can miss results
- Reindexing might fail on an upgraded environment
- Business term relationships to reference data values are not shown
- Limitations to automatic processing in data protection rules
- Can’t reactivate inactive artifacts
- Can’t export top-level categories
- Data classes display multiple primary categories
- Predefined data classes are missing
- Reimport might fail to publish when previous import request is not finished
- After you import and publish many data classes, the Profile page is not updated and refreshed
- You can’t publish data classes with the same name in different categories
- Clicking a data class with a special character in its name causes an exception
- Child categories are listed as root categories when import is in progress
- Import of categories fails
- Assigning “All users” as a Data Steward in governance artifacts
- Upgrading to Cloud Pak for Data 3.5 might cause permissions to be removed from custom roles
- After discovery, users cannot drill down into data quality details
- After an upgrade, users with the Data Quality Analyst role cannot see governance artifacts
- After upgrading to version 3.5.8 from an earlier version, governance artifacts aren’t synced
- Some relationships between catalog assets and governance artifacts might not be retained during the upgrade
- Projects
- Governance artifact workflows
- Task inbox might not display all task details
- When you filter by artifact type on the “Workflow configuration” tab, an error is produced
- Null pointer exception is produced when you click “Send for approval” for a “Multiple approvals” workflow template
- When you select multiple tasks from the middle of the task list, items disappear and you cannot select other tasks
- When a task is displayed in the task inbox, the result is displayed instead of the artifact type and workflow status
- The Activities pane is not getting loaded for single asset tasks
- If a language other than English is used, a condition that is set for all categories can’t be removed
- Workflow types do not load, causing an error
- Users might not be able to view their task after they start to act on the task
- Fields are not automatically filled in
- After you complete all workflow tasks, the title and the buttons from the last completed task are still displayed
- The Any Workflow Status filter option in the Governance Workflows Draft Overview page does not list the user’s workflows
- If a workflow task is overdue for a user, the user does not get an overdue notification through email and pop-up notification
- Adding artifact types to an inactive default workflow
- Limitation for draft workflows on Firefox
- Incorrect notifications for completed workflow tasks
- Task details are displayed even after the task is completed
- Workflow details unavailable after upgrade
- If you enable notifications in your workflow configurations, you must also add collaborators
- Users or permissions in overview cannot be added
- Tasks are not being generated correctly for workflows started pre-upgrade
- Workflow stalls for user tasks without an assignee
- You might not be able to discard a draft artifact marked for deletion
- Custom workflows
- URLs that contain fewer than 20 characters cause errors in the Workflow management page
- A processing error is produced when the “deliver” step is selected in a workflow configuration
- The conditions matrix doesn’t show the correct content
- Activation modal does not show the correct conditions
- The conditions that are already set in a workflow configuration are not shown as selected in the conditions matrix
- The first custom workflow configuration can’t have conditions
- Task fields for custom workflows might show information for previously viewed tasks
- Completed tasks view doesn’t show last action for custom workflows
- Selections in tasks disappear in subsequent steps
- Date fields in custom workflow templates don’t work
- Some radio button fields in custom workflow templates don’t work
- URL task fields for custom workflows incorrectly show as editable and cause an error
- Activate a default workflow for a new workflow type before you add users or other workflow configurations
- Data curation
- Unable to overwrite term assignments when you publish quick scan results
- Regular expression fails validation in the UI
- Personal connection assets results from profiling are viewable without credentials
- Publication of quick scan results from the schema filter view does not work
- When the Solr pod stays offline for a long time, quick scan jobs are not restarted automatically
- Unable to resume quick scan jobs that are in a paused state
- Publishing an asset fails if the iis-service pod is restarted
- Data quality dashboard reports data quality threshold chart incorrectly
- Data quality columns tab view might not properly display term assignments
- Project sometimes does not appear in the UI
- Not able to scroll all data assets in a data quality project in tile view
- Data sets remain in loading state and do not publish
- Updates to the database name in platform connections are not propagated to discovery connections
- Quick scan fails on a Hive connection if the JDBC URL has an embedded space
- Quick scan asset preview encounters an error after publishing and is not visible
- Unable to browse the connection in the discover page
- Incorrect connections are associated with connected data assets after automated discovery
- Data discovery fails when started by a Data Steward
- Data Stewards can’t create automation rules
- Discovery on a Teradata database fails
- Changes to platform-level connections aren’t propagated for discovery
- Approving tables in quick scan results fails
- Virtual tables are not supported for BigQuery connections
- Column analysis fails if system resources or the Java heap size are not sufficient
- Quick scan hangs when it is analyzing a Hive table that is defined incorrectly
- Automated discovery might fail when the data source contains a large amount of data
- Discovery jobs fail due to an issue with connecting to the Kafka service
- Settings for discovery or analysis might be lost after a pod restart or upgrade
- Discovery jobs are not resumed on restore
- If the quick scan (NGQS) is run for many tables and takes more than 13 hours, the authentication token times out and the scan fails
- Quick scan Approve assets or Publish assets operation fails
- Connection that was made by using HDFS through the Execution Engine cannot be used with auto discovery
- After you publish several assets in quick scan, only one of the duplicate assets is published, while other duplicates fail
- Quick scan dashboard generates a long URL, which causes the dashboard load to fail
- Quick scan jobs run with pre-3.5 quick scan are not shown
- On Firefox, no details are shown for assets that are affected by an automation rule
- Known issues with Hive and HDFS connections for data discovery
- When you add data files to the data quality project, the Tree view doesn’t show data files
- Run analysis button cannot be found
- The owner of the table-assets that are synced to the default catalog is shown as an administrator instead of an asset owner
- Loading assets to workspace fails with a Solr query limit error (Sql Error: 414)
- Data quality score tab in list of data sets doesn’t show correct results after first click
- Password change is not propagated to the copy handled by auto discovery and quick scan
- Platform connections with encoded certificates cannot be used for discovery
- Without data sampling, analysis or discovery jobs fail when run on very large tables
- Can’t filter quick scan results by data classes or terms
- Can’t use connections that use the Amazon S3 connector
- Quick scan or automated discovery might not work for generic JDBC platform connections with values in the JDBC properties field
- Db2 SSL/TLS connections can’t be used in discovery
- Some connection names aren’t valid for automated discovery
- Can’t view the results of a data rule run
- Overriding rule or rule set runtime settings at the columns level causes an error
- Global search
Also see:
- Known issues for Data Refinery
- Troubleshooting Watson Knowledge Catalog
- Cluster imbalance overwhelms worker nodes
General issues
You might encounter these known issues and restrictions when you work with the Watson Knowledge Catalog service.
Upgrade to 3.5.x fails because of a problem with post-install processing
Upgrade to 3.5.x fails with the following symptoms:
When you run the command oc get pods | grep wkc-post-upgrade-init, you see results similar to the following example:
NAME READY STATUS RESTARTS AGE
wkc-post-upgrade-init-994p2 0/1 Error 0 4m11s
wkc-post-upgrade-init-j5vq7 0/1 Error 0 3m3s
wkc-post-upgrade-init-v4cpf 0/1 Error 0 4m21s
wkc-post-upgrade-init-vmz7p 0/1 Error 0 3m31s
wkc-post-upgrade-init-vp72j 0/1 Error 0 8m49s
You also see, in the wkc-post-upgrade-init-* pod logs, information similar to the following example:
[...] INFO: Waiting for wkc-glossary-service pod to be ready
error: .status.conditions accessor error: Failure is of the type string, expected map[string]interface{}
or
[...] INFO: Waiting for wdp-policy-engine pod to be ready
error: .status.conditions accessor error: Failure is of the type string, expected map[string]interface{}
Workaround:
Run the post-install processing manually:
- Identify the pods:
[root@ocp469px2-inf ~]# oc get pods | grep wkc-post-install
wkc-post-install-init-9z2bt 0/1 Error 0 125m
wkc-post-install-init-t7khf 0/1 Error 0 123m
wkc-post-install-init-tl5mk 0/1 Error 0 125m
wkc-post-install-init-xb48v 0/1 Error 0 125m
- Using the name of one of the pods, run the following command:
# oc debug wkc-post-install-init-xb48v
sh-4.4$ /bin/sh /wkc/post-install.sh
Applies to: 3.5.x
Caption for permission does not display correctly
In User management, when you edit a role, the caption for the “Access advanced governance capabilities” permission in the Data Governance section is displayed as “no value”.
Workaround:
The checkbox for this permission still enables and disables the permission as expected.
Applies to: 3.5.6 and later
Shared services and sample-data-job pods fail during the installation or upgrade process
During a fresh installation of or an upgrade to Cloud Pak for Data 3.5.4, shared services such as Cassandra, Kafka, Solr, and Zookeeper and sample-data-job pods fail with the following error:
Error: container has runAsNonRoot and image has non-numeric user (cassandra), cannot verify user is non-root
Workaround: Remove the default service account from the anyuid SCC:
- Edit the anyuid SCC:
oc edit scc anyuid
- Remove the line - system:serviceaccount:[NAMESPACE]:default under users: in the editor. For NAMESPACE, use the namespace where Watson Knowledge Catalog was installed.
- Save the content.
- Retry the failed installation or upgrade.
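Alternatively, a single command can detach the SCC from the service account; a minimal sketch, assuming the placeholder <namespace> stands for the namespace where Watson Knowledge Catalog was installed:
# Remove the anyuid SCC from the default service account in one step
oc adm policy remove-scc-from-user anyuid -z default -n <namespace>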
Applies to: Versions prior to 3.5.x
The wkc-post-upgrade-init pod fails during upgrade installation
The wkc-glossary-service pod and the wkc-post-upgrade-init pod are started at the same time during an upgrade installation. During the upgrade, the wkc-glossary-service pod requires much more time to start than during a fresh installation because it performs a schema upgrade. Because the wkc-glossary-service pod is not finished with the schema upgrade, the wkc-post-upgrade-init pod fails.
Workaround:
Run wkc-post-upgrade-init manually after the upgrade completes:
$ oc debug [wkc-post-install-init-pod-name-xxxxx]
$ /bin/sh /wkc/post-install.sh
See the following example:
[root@ocp469px2-inf ~]# oc get po | grep wkc-post
wkc-post-install-init-9z2bt 0/1 Error 0 125m
wkc-post-install-init-t7khf 0/1 Error 0 123m
wkc-post-install-init-tl5mk 0/1 Error 0 125m
wkc-post-install-init-xb48v 0/1 Error 0 125m
[root@ocp469px2-inf ~]# oc debug wkc-post-install-init-xb48v
Starting pod/wkc-post-install-init-xb48v-debug, command was: /bin/sh /wkc/post-install.sh
Pod IP: 10.254.18.247
If you don't see a command prompt, try pressing enter.
sh-4.4$ /bin/sh /wkc/post-install.sh
[24-February-2021 10:34 UTC] INFO: Check status of categories bootstrap process...
Applies to: 3.5.4
User groups not supported in certain areas
These areas do not support user groups:
- Data discovery
- Data quality
- Information assets
- Watson Knowledge Catalog workflow
- Categories (except the All users group, which in categories represents all users who have permission to access governance artifacts)
Applies to: 3.5.0 and later
Categories might not show the latest updates
Categories and their contents can be processed in different areas of Watson Knowledge Catalog. For this reason, the contents of categories that are currently being viewed might not show the latest updates.
To ensure that you are viewing the latest updates, manually refresh the view.
Exporting Custom Attribute definitions and data lineage reports does not work in Chrome browser
Use Firefox browser to export Custom Attribute definitions and data lineage reports.
Applies to: 3.5.0, 3.5.1
Fixed in: 3.5.2
Users in time zones behind GMT see their selected date value shown as one day before the actual value
When you edit custom attributes of type date and you are in a time zone behind GMT, your selected date value is shown as one day before the actual value. For example, if you choose 11/03/2020 in the DatePicker, the value is displayed as 11/02/2020 after it is selected. When the selection is saved, the correct date is displayed in the UI. This issue does not affect users in time zones at or ahead of GMT.
Workaround: Restart the wkc-glossary-service pod.
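One way to restart it is to delete the pod so that its controller re-creates it; a minimal sketch, assuming the pod name contains wkc-glossary-service and that you are in the correct project:
# Find the glossary service pod name
oc get pods | grep wkc-glossary-service
# Delete the pod; it is re-created automatically
oc delete pod <wkc-glossary-service-pod-name>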
Applies to: 3.5.0, 3.5.1
Fixed in: 3.5.2
Migrated legacy HDFS connections default to SSL
After migrating legacy WebHDFS connections, you might receive the following error from the migrated Apache HDFS connection:
The assets request failed: CDICO0100E: Connection failed: SCAPI error: An invalid custom URL (https://www.example.com) was specified. Specify a valid HTTP or HTTPS URL.
Workaround: Modify your WebHDFS URL protocol from https to http.
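For example, with a hypothetical WebHDFS host (your URL, port, and path will differ):
# Before (migrated connection, fails):
https://webhdfs-host.example.com:50070/webhdfs/v1
# After (protocol changed to http):
http://webhdfs-host.example.com:50070/webhdfs/v1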
Applies to: 3.5.0 and later
Execution history cannot be migrated because request entity is too large
For larger projects, the migration command runs for several hours and shows a message that says the execution history cannot be migrated because the request entity is too large.
Workaround:
- Log in to your cluster.
- Edit the ugi-addons configmap:
oc edit cm ugi-addon
- Update icpdata_addon_version: 3.0.1 to icpdata_addon_version: 3.5.0.
Applies to: 3.5.0 and later
When the admin user is disabled, upgrading from 3.5.x to 3.5.y fails with ccs-post-install job failure
If you disabled the admin user by using this procedure and then started the upgrade from a 3.5.x version to a 3.5.y version, the ccs-post-install job fails and causes the upgrade to fail.
Workaround:
Before you start the upgrade, enable the admin user again and assign it to the platform catalog as a collaborator. When the upgrade is complete, you can disable the admin user again.
Applies to: 3.5.0 and later
Can’t add or preview a data asset with over 5,000 columns in a catalog
Assets with more than 5,000 columns cannot be added to a catalog or previewed.
Applies to: 3.5.0 and later
Searching platform connections might not return results
Searching for a connection on the Platform connections page might not return results because only the displayed connections are searched, although there might be more connections.
Workaround: Click Show more until all connections are loaded, and rerun your search.
Applies to: 3.5.9 and later
Catalog issues
You might encounter these known issues and restrictions when you use catalogs.
After you upgrade from Cloud Pak for Data 3.0.1 to 3.5.x, the password is cleared for connections in a Watson Knowledge Catalog catalog
Workaround: Edit the connection in the catalog and reapply the password.
Applies to: 3.5.5
The default “all_users” group is missing from the Platform assets catalog
The “all_users” group is normally part of the Platform assets catalog by default, but in some cases the group might be missing.
Workaround: Use the Asset API to add the “all_users” group to the Platform assets catalog. The token of the user who can add collaborators, which is usually the administrator, must be used.
POST /v2/catalogs/{catalog_id}/members
Request body:
{
"members": [
{
"access_group_id": "10000",
"role": "viewer"
}
]
}
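For reference, a curl invocation of this endpoint might look like the following sketch; the host, catalog ID, and bearer token are placeholders that you must supply:
curl -k -X POST "https://<host>/v2/catalogs/<catalog_id>/members" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{"members": [{"access_group_id": "10000", "role": "viewer"}]}'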
Applies to: 3.5.1, 3.5.2
Business Analyst role does not have the Access catalog permission
A user with the Business Analyst role cannot access the catalog, even though the Business Analyst role supports access to the catalog.
Workaround: An administrator can add the Access Catalog permission to the Business Analyst role. For more information, see Predefined roles and permissions and Managing roles.
Applies to: 3.5.2
Reference connections do not sync correctly
Reference connections that are created from the platform catalog do not sync properly from Watson Knowledge Catalog into Information Assets.
Workaround: Create the connections as part of running a quick scan, and the connection syncs correctly. Another option is to create the connection directly in the catalog where the connection needs to be used.
Applies to: 3.5.2
Blank page produced when you hover over an asset
If you are looking at the Activities tab of a model and you hover over an asset, then a blank page is produced.
Applies to: 3.5.0 and later
Missing previews
You might not see previews of assets in these circumstances:
- In a catalog or project, you might not see previews or profiles of connected data assets that are associated with connections that require personal credentials. You are prompted to enter your personal credentials to start the preview or profiling of the connection asset.
- In a catalog, you might not see previews of JSON, text, or image files that were published from a project.
- In a catalog, the previews of JSON and text files that are accessed through a connection might not be formatted correctly.
- In a project, you cannot view the preview of image files that are accessed through a connection.
Applies to: 3.5.0 and later
Add collaborators with lowercase email addresses
When you add collaborators to the catalog, enter email addresses with all lowercase letters. Mixed-case email addresses are not supported.
Applies to: 3.5.0 and later
Multiple concurrent connection operations might fail
An error might be encountered when multiple users are running connection operations concurrently. The error message can vary.
Applies to: 3.5.0 and later
After upgrade, you can’t add or test connections during metadata import or discovery
You upgraded to Watson Knowledge Catalog. When you try to add new connections or test existing ones during metadata import or discovery, the operation might hang while waiting for a connection.
Workaround: Restart the agent in the is-en-conductor-0 pod. If the agent still does not serve any requests, delete the conductor pod. This way, you create a new instance of the pod and you can add or test connections again.
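A minimal sketch of deleting the conductor pod, assuming you run the command in the namespace where the pod is deployed; the pod is re-created automatically:
# Delete the conductor pod to force a new instance
oc delete pod is-en-conductor-0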
Applies to: 3.5.0 and later
Can’t enable enforcing data protection rules after catalog creation
You cannot enable the enforcement of data protection rules after you create a catalog. To apply data protection rules to the assets in a catalog, you must enable enforcement during catalog creation.
Applies to: 3.5.0 and later
Assets are blocked if evaluation fails
The following restrictions apply to data assets in a catalog with policies enforced: File-based data assets that have a header can’t have duplicate column names, a period (.), or a single quotation mark (‘) in a column name.
If evaluation fails, the asset is blocked to all users except the asset owner. All other users see an error message that the data asset cannot be viewed because evaluation failed and the asset is blocked.
Applies to: 3.5.0 and later
Asset owner might be displayed as “Unavailable”
Applies to: 3.5.0 and later
Synchronizing information assets
In general, the following types of information assets are synchronized:
- Tables and their associated columns
- Files and their associated columns
- Connections (limited to specific types)
Data assets that are discovered from Amazon S3 are currently not synchronized from the Information assets view to the default catalog.
Workaround: Add the respective data assets directly to the default catalog.
Applies to: 3.5.0 and later
You can’t remove information assets from the default catalog or a project
You can’t remove information assets from data quality projects or the default catalog. These assets are still available in the default catalog.
Workaround: To remove an information asset from the default catalog or data quality projects, you first have to remove it from Information assets view. The synchronization process propagates the delete from Information assets view into the default catalog. However, you can remove assets from the default catalog or projects if they are not synchronized.
Applies to: 3.5.0 and later
Log-in prompt is displayed in Organize section
When you’re working in the Organize section, a log-in prompt might be displayed, even though you’re active.
Workaround: Provide the same credentials that you used to log in to Cloud Pak for Data.
Applies to: 3.5.0 and later
Missing default catalog and predefined data classes
The automatic creation of the default catalog after installation of the Watson Knowledge Catalog service can fail. If it does, the predefined data classes are not automatically loaded and published as governance artifacts.
Workaround: Ask someone with the Administrator role to follow the instructions for creating the default catalog manually.
Applies to: 3.5.0 and later
Quick scan jobs remain in status QUEUED for a long time after restart of the quick scan pod
When you run large automated discovery jobs, or if large jobs were run recently, and you then restart the quick scan pod, quick scan jobs might remain in status QUEUED for a long time before they are processed. This delay is due to the number of messages that must be skipped during pod startup.
To reduce the amount of time until quick scan jobs are started, run the following steps:
- Pause the quick scan jobs in status QUEUED.
- Edit the deployment of the quick scan pod:
oc edit deploy odf-fast-analyzer
- Locate the lines that contain:
name: ODF_PROPERTIES
value: -Dcom.ibm.iis.odf.kafka.skipmessages.older.than.secs=43200
- Replace 43200 with a smaller value such as 3600 to limit the number of messages that need to be skipped.
- Saving the update triggers pod re-creation. Wait until the quick scan pod is in status RUNNING.
- Resume the quick scan job paused in step 1. Its status changes to RUNNING within a short period.
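Alternatively, if ODF_PROPERTIES contains only this flag, steps 2 through 4 can be applied with a single command; a sketch, assuming the variable carries no other flags (if it does, edit the deployment instead so that you don't drop them):
# Update the env var; the deployment change triggers pod re-creation
oc set env deploy/odf-fast-analyzer ODF_PROPERTIES="-Dcom.ibm.iis.odf.kafka.skipmessages.older.than.secs=3600"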
Applies to: 3.5.0 and later
An error occurs when you import metadata
When you import metadata, you might encounter an error message. Wait a moment, then retry the metadata import.
Make sure that you have the required permissions; see Metadata import.
Applies to: 3.5.0 and later
An error occurs while you retrieve the catalog details
You might occasionally receive the message, “An error occurred while retrieving the catalog details”, with options to View logs and Retry.
Workaround:
You can:
- Refresh the browser; or
- Click another tab, such as Overview, Asset, or Access, then return to the Profile tab.
Applies to: 3.5.0 and later
Non-owner cannot update profile of an asset in a governed catalog
Updating the profile of an asset in a governed catalog fails when the update is triggered by a user who does not own the asset.
Workaround: Profiling fails because the asset is protected by policies. The owner of the asset must update the profile.
Applies to: 3.5.0 and later
Business terms can be edited by all admin and editor users regardless of asset membership
For a catalog asset, business terms can be edited by all admin and editor users regardless of asset membership. This will be corrected in a patch to allow only admin users or editor users who are asset members to edit the business terms of an asset.
Applies to: 3.5.0 and later
Viewer access restrictions
Adding related assets in the UI is not supported for users who have only the Viewer role. This option will be properly hidden for such users in a patch.
Applies to: 3.5.0 and later
Cannot see data types in profiling results after you publish to a catalog from NGQS
To view data types:
- Check the data types under the Access tab; or
- Update the profile by clicking Update profile.
Applies to: 3.5.0 and later
Unable to preview connected assets
When you want to preview some of the connected assets, the error ‘This file type may not be supported.’ is displayed. The issue occurs when the connection path for the data contains the dot character ‘.’. The dot character causes an incorrect file extension to be detected. The issue is related to connections that store data in a tabular format.
Applies to: 3.5.0 and later
Can’t preview image and text assets from an IBM Cloud Object Storage connection with service credentials
You can’t preview connected image and text assets from an IBM Cloud Object Storage connection if that connection is configured to use service credentials for authentication.
Workaround: To be able to preview such assets, edit the connection asset and explicitly specify values for Resource instance ID, API key, Access key, and Secret key.
Applies to: 3.5.0, 3.5.1, 3.5.2, 3.5.3, 3.5.4, 3.5.5, and 3.5.6
Fixed in: 3.5.7
Can’t preview assets of the file type xlsm
You can’t preview Microsoft Excel documents of the file type xlsm that were uploaded from the local file system.
Workaround: Download the asset to your local file system and save it as a CSV file. Then, upload the CSV file to the catalog. This new data asset can be previewed.
Applies to: 3.5.11
Special or double-byte characters in the data asset name are truncated on download
When you download a data asset with a name that contains special or double-byte characters from a catalog, these characters might be truncated from the name. For example, a data asset named special chars!&@$()テニス.csv is downloaded as specialchars!().csv.
The following character sets are supported:
- Alphanumeric characters: 0-9, a-z, A-Z
- Special characters: ! - _ . * ' ( )
Applies to: 3.5
Governance artifacts
You might encounter these known issues and restrictions when you use governance artifacts.
Import process times out
The initial import operation of a file that has 1000 or more artifacts sometimes produces a time-out error and doesn’t fully complete in the background.
Workaround: Import the same file again and the import completes successfully. Each import creates separate drafts, so you can publish drafts from the second attempt and discard the drafts from the first attempt.
Applies to: 3.5.5 and later
Use of recent version of Chrome might cause problems with viewing asset details
Viewing asset details by using a recent version of Chrome (v87+) might cause the asset details view to scroll out of view when you click asset details checkboxes.
Workaround: Use Firefox, or revert to Chrome v84 or earlier.
Applies to: 3.5.5
Fixed in: 3.5.6
Related term is not displayed
For an asset in a catalog with a related term, clicking the link for the related term produces an error, and the term is not displayed.
Workaround: View the term details in the business terms governance view.
Applies to: 3.5.5
Data protection rule does not show in related content tab
A related data protection rule does not show in the related content tab of a classification. This issue is intermittent.
The data protection rule eventually displays in the related content tab after a short period.
Applies to: 3.5.5
Fixed in: 3.5.6
Data classes might not have the correct data class hierarchy
After Watson Knowledge Catalog is installed, some of the data classes that are included by default might not have the correct data class hierarchy.
This issue impacts only Watson Knowledge Catalog 3.5.5 fresh installations. Watson Knowledge Catalog 3.5.5 upgrade installations do not have this problem.
A script is available for you to run on Watson Knowledge Catalog 3.5.5 fresh installation environments. The script checks for data classes that have an incorrect hierarchy and provides the option to fix those incorrect data classes. For more information, see the following technote: https://www.ibm.com/support/pages/node/6454221
Applies to: 3.5.5
Fixed in: 3.5.6
Async publish redirection to published asset not working properly
Reference data async publish redirection to the published asset does not work properly when an approval step is set up in the workflow.
Workaround: Go to the Published tab and open the published asset.
Applies to: 3.5.4 and later
When you add a governance artifact to a category on the Category details page, the search feature does not work
The search feature does not work during the process of adding a governance artifact to a category on the Category details page.
Workaround: Add artifacts to categories from the artifact detail pages.
Applies to: 3.5.4
Fixed in: 3.5.5
Unable to create a rule
When you create a new governance artifact and try to use it to create a rule, the creation of the rule appears to fail.
Workaround: The cache might take up to 10 minutes to rebuild before you can see the artifact in the drop-down list. Wait for this period to elapse, then check again.
Applies to: 3.5.2 and later
Cannot view glossary artifacts that have a custom attribute of type “numeric” if the value is set to 0
If a glossary artifact has a custom attribute of the type “numeric” and the value of the type is set to 0, the UI does not display the artifact.
Applies to: 3.5.3 and later
Asset sync issues are produced when you try to publish terms
If you upgrade from Cloud Pak for Data version 3.0.1 directly to version 3.5.2 or version 3.5.3, asset sync issues are produced when you try to publish terms. To avoid these issues, you must upgrade from Cloud Pak for Data version 3.0.1 to version 3.5.1, then upgrade from version 3.5.1 to version 3.5.2 or 3.5.3.
Workaround: Instead of upgrading from version 3.0.1 to 3.5.2 or 3.5.3, upgrade from 3.0.1 to 3.5.4.
Applies to: 3.5.2, 3.5.3
An error occurs when the user attempts to add a custom attribute of text type for a category
An error occurs when the user attempts to add a custom attribute of text type for a category.
Applies to: 3.5.2
Fixed in: 3.5.3
Delay in showing artifacts after upgrade
When you first log in after an upgrade, the appearance of artifacts is delayed.
Workaround: Wait a few minutes for the artifacts to appear. If you experience this issue consistently, set the following environment variable on the wkc-search pod:
oc set env deploy/wkc-search dps_retry_timeout=300000
You must reset this variable every time the pod is restarted, and the UI might take up to 5 minutes to respond in some cases.
Applies to: 3.5.0 and later
Query strings longer than 15 characters of a single token can miss results
Query strings longer than 15 characters of a single token can miss results that contain an exact match of those characters within a longer token.
For example, a query for “abcdefghijklmn” finds “abcdefghijklmnopqrstuv”, but a query for “abcdefghijklmnop” does not find the right result.
Workaround: Return to the drafts list page, reopen the asset, then try publishing again.
Applies to: 3.5.0 and later
Reindexing might fail on an upgraded environment
After an upgrade, reindexing might not work, resulting in not being able to use data quality, discovery, or quick scan. You also cannot view information assets, and if you import any assets by using istools, the assets are not visible.
Workaround:
Re-create all the custom SQL views under the CMVIEWS, IGVIEWS, IAVIEWS, CEFVIEWS, and REMVIEWS schemas by running the following commands:
cd <IS_INSTALL_DIR>/ASBServer/bin
./xmetaAdmin.sh processSqlViews -createViews -nsuri http:///5.3/ASCLModel.ecore -artifact GovernanceViews -allowReservedWords -dbfile ../conf/database.properties
./xmetaAdmin.sh processSqlViews -createViews -nsuri http:///2.3/ASCLCustomAttribute.ecore -artifact GovernanceViews -allowReservedWords -dbfile ../conf/database.properties
./xmetaAdmin.sh processSqlViews -createViews -nsuri http:///ASCLModel/5.2/ASCLAnalysis.ecore -artifact GovernanceViews -allowReservedWords -dbfile ../conf/database.properties
./xmetaAdmin.sh processSqlViews -createViews -nsuri http:///4.3/GlossaryExtensions.ecore -artifact GovernanceViews -allowReservedWords -dbfile ../conf/database.properties
./xmetaAdmin.sh processSqlViews -createViews -nsuri http:///ASCLModel/6.4/investigate.ecore -artifact GovernanceViews -allowReservedWords -dbfile ../conf/database.properties
./xmetaAdmin.sh processSqlViews -createViews -nsuri http:///1.0/CommonEvent.ecore -artifact GovernanceViews -allowReservedWords -dbfile ../conf/database.properties
./xmetaAdmin.sh processSqlViews -createViews -nsuri http:///ASCLModel/1.1/EMRemediation.ecore -artifact GovernanceViews -allowReservedWords -dbfile ../conf/database.properties
Applies to: 3.5.0 and later
Business term relationships to reference data values are not shown
You can add a related business term to a value in a reference data artifact. However, the related content for a business term does not show the related reference data value.
Applies to: 3.5.0 and later
Limitations to automatic processing in data protection rules
Data protection rules do not automatically process artifacts that are closely related to artifacts that are explicitly specified in these cases:
- Synonyms to business terms in conditions are not automatically processed. Only terms that are explicitly specified in conditions are considered.
- Dependent data classes of data classes in conditions or actions are not automatically considered. For example, if you specify the data class “Drivers License” in a condition, dependent data classes such as New York State Driver’s License are not processed by the rule.
Applies to: 3.5.0 and later
Can’t reactivate inactive artifacts
When an effective end date for a governance artifact passes, the artifact becomes inactive. You can’t reset an effective end date that’s in the past. You can’t reactivate an inactive artifact. Instead, you must re-create the artifact.
Applies to: 3.5.0 and later
Can’t export top-level categories
You can’t export top-level categories. You can export only lower-level categories. However, you can export a top-level category if it is included with an export of its subcategories.
Applies to: 3.5.0 and later
Data classes display multiple primary categories
When you assign a subcategory to a data class to be the primary category, all higher-level categories of the selected subcategory are also displayed in the details of this data class as primary categories. However, they are not assigned.
Applies to: 3.5.0 and later
Reimport might fail to publish when previous import request is not yet finished
Reimport might fail to publish governance artifacts such as business terms when called immediately after a previous import request.
Importing and publishing many governance artifacts is done in the background and might take some time. If you republish artifacts when the initial publishing process of the same artifacts isn’t finished yet, the second publishing request fails and the status of the governance artifact drafts shows Publish failed.
Make sure that the publishing process is finished before you try to import and publish the same artifacts again.
Applies to: 3.5.0 and later
After you import and publish many data classes, the Profile page is not updated and refreshed
If you create and publish a data class, the Profile page of an asset is updated and refreshed. However, if you import and publish many data classes (for example, more than 50 data classes) by using a CSV file, the Profile page is not updated and refreshed for these imported data classes.
Workaround: If you must import and publish many data classes and you notice that the Profile page is not updated and refreshed, wait a moment, then edit just one data class (for example, add a blank character to it) and publish it. As a result, the Profile page is updated and refreshed to show all data classes that you published, including the large number of imported data classes.
Applies to: 3.5.0 and later
You can’t publish data classes with the same name in different categories
The names of data classes must be unique. Don’t create data classes with the same name in different categories.
Note: Use globally unique names for data classes if you want to process data quality or data discovery assets.
Applies to: 3.5.0 and later
Clicking a data class with a special character in its name causes an exception
If you run an NGQS scan and then click a data class with an apostrophe (or other special characters), the data class details are not opened and an unexpected exception is produced.
Workaround: Change the name of the data class and remove the special character from the name.
Applies to: 3.5.0 and later
Child categories are listed as root categories when import is in progress
If users refresh or open the categories list while an import of categories is in progress, some child categories might be listed as root categories.
Workaround: Reload the categories page after the import completes successfully to see the correct hierarchy.
Applies to: 3.5.0 and later
Import of categories fails
Import of categories fails if the category hierarchy path, including the path separators “ » ”, is longer than 1,020 bytes.
Applies to: 3.5.0 and later
Assigning “All users” as a Data Steward in governance artifacts
Do not select All users in the Data Stewards list when you assign users to the steps in workflow configuration. Assigning All users as a Data Steward for a data protection rule causes issues when you open the rule.
Do not select All users in the Add stewards modal in all asset types as doing so also causes issues with the UI.
Applies to: 3.5.0 and later
Upgrading to Cloud Pak for Data 3.5 might cause permissions to be removed from custom roles
Due to permission changes, upgrading to Cloud Pak for Data 3.5 might cause permissions to be removed from custom roles.
Workaround: An administrator must log in to user management, review the permissions of any custom roles they defined, and adjust the permissions as needed.
Applies to: 3.5.0 and later
After discovery, users cannot drill down into data quality details
Workaround: Restart the XMETA pod. You must wait until the restart is complete, then restart IS services.
If you are unable to exec or rsh to the XMETA pod:
- Run the following command:
oc get pods | grep xmeta
- Then, run:
oc delete pod <pod_name>
Applies to: 3.5.0 and later
After an upgrade, users with the Data Quality Analyst role cannot see governance artifacts
Workaround: The following two permissions need to be granted to the Data Quality Analyst role:
- Access governance artifacts; and
- Manage data protection rules.
Applies to: 3.5.0 and later
After upgrading to version 3.5.8 from an earlier version, governance artifacts aren’t synced
When you upgrade to assembly version 3.5.8 or version 4.0.0 through 4.0.5, addition, update, or deletion of governance artifacts might not be synced to the classic Information Governance Catalog.
This can happen in these cases:
- If the initially installed version is 3.5.0 or 3.0.x and you upgraded to version 3.5.8 or 4.0.0 through 4.0.5 (directly or indirectly). Upgrades to version 3.5.1 through 3.5.7 are not affected.
- If you upgraded from version 3.5.7 to version 3.5.8 or 4.0.0 through 4.0.5.
Workaround: To enable synchronization, complete the following steps:
- Download the required script to the cluster. The script is attached to the following technote: Enable synchronization of governance artifacts after upgrading.
- Run the following command:
bg_omrs_config_v1.sh <wkc_hostname> <username> <password> <namespace> <release>
Replace <release> with one of these values: 3.5.8, 4.0.0, 4.0.1, 4.0.2, 4.0.3, 4.0.4, 4.0.5
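An example invocation might look like the following sketch; the hostname, user, and namespace (zen) are hypothetical placeholders for your environment, and <password> stays a placeholder:
bg_omrs_config_v1.sh cpd.example.com admin '<password>' zen 3.5.8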
If there are still some artifacts not synced to the classic Information Governance Catalog after you run the script, you can sync them manually by following these instructions:
- Obtain a bearer token as described in IBM Cloud Pak for Data Platform API: Get authorization token.
- Resync governance artifacts by running the following command for each artifact type individually. Do not use the default option All. Always start with category, then continue with glossary_term, classification, data_class, reference_data, policy, and rule.
curl -k -X GET "https://<hostname>/v3/glossary_terms/admin/resync?artifact_type=<type>" -H "accept: application/json" -H "Authorization: bearer <token>"
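Because the types must be resynced in this exact order, a small shell loop can issue the calls one type at a time; a sketch, assuming a bash shell and a valid bearer token exported as TOKEN:
# Resync each artifact type sequentially, in the required order
for type in category glossary_term classification data_class reference_data policy rule; do
  curl -k -X GET "https://<hostname>/v3/glossary_terms/admin/resync?artifact_type=$type" \
    -H "accept: application/json" -H "Authorization: bearer $TOKEN"
done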
Applies to: 3.0.0, 3.0.1, 3.5.0, 3.5.11 (assembly version 3.5.8)
Fixed in: 3.5.12 (assembly version 3.5.9)
Some relationships between catalog assets and governance artifacts might not be retained during the upgrade
Relationships between catalog assets and data classifications or business terms might not be retained during an upgrade. This applies to upgrades to any version of 3.5 including version 3.5.11.
Applies to: 3.5.0
Fixed in: 3.5.12
Analytics projects
You might encounter these known issues and restrictions when you use analytics projects.
Data is not masked in some analytics project tools
When you add a connected data asset that contains masked columns from a catalog to a project, the columns remain masked when you view the data and when you refine the data in the Data Refinery tool. However, other tools in projects do not preserve masking when they access data through a connection. For example, when you load connected data in a Notebook, you access the data through a direct connection and bypass masking.
Workaround: To retain masking of connected data, create a new asset with Data Refinery:
- Open the connected data asset and click Refine. Data Refinery automatically includes the data masking steps in the Data Refinery flow that transforms the full data set into the new target asset.
- If necessary, adjust the target name and location.
- Click the Run button, and then click Save and Run. The new connected data asset is ready to use.
- Remove the original connected data asset from your project.
Applies to: 3.5.0 and later
You can’t use platform-level connections in metadata imports
When you create a metadata import asset, you cannot use platform-level connections. Although you can select a platform connection, it is not added as a connection.
As a workaround, create an identical connection when you create the metadata import asset.
Applies to: 3.5.0 and later
Governance artifact workflows
You might encounter these known issues and restrictions when you use governance workflows.
Task inbox might not display all task details
After you upgrade to Watson Knowledge Catalog 3.5.5, some workflow tasks in the task inbox might not display all task details.
No workaround is available, but you can still act on the task to reject, approve, publish, or delete it.
Applies to: 3.5.5
When you filter by artifact type on the “Workflow configuration” tab, an error is produced
When you filter by artifact type on the “Workflow configuration” tab, an error is produced that says “Something went wrong. Contact your system administrator.”
Applies to: 3.5.4
Fixed in: 3.5.5
Null pointer exception is produced when you click “Send for approval” for a “Multiple approvals” workflow template
A null pointer exception is produced when you are using a “Multiple approvals” workflow template with category roles and a data steward role and you click Send for approval.
This issue has no workaround. If you want to use a “Multiple approvals” workflow template, do not select the category role (Owner/Admin/Reviewer/Editor), the artifact role, or the data steward role in the configuration. Sending for approval works only for individual users and the workflow requester.
Applies to: 3.5.4, 3.5.5
Fixed in: 3.5.6
When you select multiple tasks from the middle of the task list, items disappear and you cannot select other tasks
When you select multiple tasks from the middle of the task list, items disappear and you cannot select other tasks.
Applies to: 3.5.3
Fixed in 3.5.4
When a task is displayed in the task inbox, the result is displayed instead of the artifact type and workflow status
When a task is displayed in the task inbox, the result is displayed instead of the artifact type and workflow status.
Workaround: The task can be selected and worked on as usual.
Applies to: 3.5.4
In the task inbox, the addition of a comment in the Activities panel of the task shows an error message
In the task inbox, the addition of a comment in the Activities panel of the task shows an error message.
Applies to: 3.5.4
Fixed in: 3.5.5
The Activities pane is not getting loaded for single asset tasks
When only a single asset is in a workflow inbox task, the Activities pane is not loaded.
Applies to: 3.5.4
Fixed in: 3.5.5
If a language other than English is used, a condition that is set for all categories can’t be removed
If a language other than English is used, a condition that is set for all categories can’t be removed.
Workaround:
Set your browser language to “English.”
Applies to: 3.5.3
Fixed in: 3.5.4
Workflow types do not load, causing an error
When you open the workflow management page, the workflow types do not load, which causes an error.
Workaround: Refresh your browser page.
Applies to: 3.5.3
Users might not be able to view their task after they start to act on the task
After a system restart, the internal configuration cache might be corrupted. As a consequence, when a user with the ability to act on a workflow acts on a task, the generated follow-up tasks might not be assigned properly to the appropriate users.
Workaround: Either provide permission to manage governance workflows to these users, or let an admin apply any change to the applicable workflow configuration to force a reload of the configuration cache whenever the issue is observed and after each system restart.
Applies to: 3.5.3
Fixed in: 3.5.4
Fields are not automatically filled in
In the steps of some workflows, some fields that are automatically filled with information from previous steps are not passed when you confirm the workflow action. This behavior results in an error.
Workaround: Enter the necessary information into the unfilled fields yourself to avoid the error.
Applies to: 3.5.2
Fixed in: 3.5.3
After you complete all workflow tasks, the title and the buttons from the last completed task are still displayed
After you complete all workflow tasks, the title and the buttons from the last completed task are still displayed.
Applies to: 3.5.2
The Any Workflow Status filter option in the Governance Workflows Draft Overview page does not list the user’s workflows
The Any Workflow Status filter option in the Governance Workflows Draft Overview page does not list the user’s workflows.
Applies to: 3.5.2, 3.5.3
Fixed in: 3.5.4
If a workflow task is overdue for a user, the user does not get an overdue notification through email and pop-up notification
If a workflow task (for example, publishing) is overdue for a user, the user is supposed to receive a one-time overdue notification through email and a pop-up notification. However, neither the email nor the pop-up notification appears.
Applies to: 3.5.2
Fixed in: 3.5.3
Adding artifact types to an inactive default workflow
You can’t move artifact types to the default workflow by deleting another workflow while the default workflow is inactive. Instead, you must deactivate the other workflow and then manually activate the default workflow.
To move artifact types to the default workflow:
- Click Administer > Workflow configuration.
- Open an active workflow by clicking its name.
- Click Deactivate and then confirm by clicking Deactivate. The artifact types for the workflow are moved to the default workflow automatically.
- Open the “default workflow configuration” workflow.
- Click Activate.
Applies to: 3.5.0 and later
Limitation for draft workflows on Firefox
You can’t select any artifact types when you view workflow drafts in the Firefox web browser version 60. Use a different browser.
Applies to: 3.5.0 and later
Incorrect notifications for completed workflow tasks
If a workflow task has multiple assignees, and one person completes the task, then the other assignees see an incorrect notification. The notification states that the task couldn’t be loaded, instead of stating that the task was already completed by another assignee.
Applies to: 3.5.0 and later
Task details are displayed even after the task is completed
When you complete a workflow task, its details are still displayed. The issue occurs when there are fewer than 10 tasks in the list.
Workaround: Select the task from the list to refresh the details.
Applies to: 3.5.0 and later
Workflow details unavailable after upgrade
After an upgrade from Cloud Pak for Data 2.5 to 3.0, the details of workflow configuration show “Unavailable” for the user who created or modified the workflow.
Applies to: 3.5.0 and later
If you enable notifications in your workflow configurations, you must also add at least one collaborator
When you configure a workflow, you can select tasks and enable notifications. If you enable notifications for a task, you also must add at least one collaborator, who can be the same as one of the assignees. Otherwise, the checkbox of the task you selected is cleared with the next refresh.
To enable notifications:
- Add the assignees in the Details section.
- Scroll down to the Notifications section. Then, select the required action and add at least one collaborator or assignee to be notified.
Applies to: 3.5.0 and later
Users or permissions in “Overview” cannot be added
Users or permissions in Overview cannot be added.
Workaround:
Create a default workflow configuration, activate it, and go to Overview to add users.
Applies to: 3.5.0 and later
Tasks are not being generated correctly for workflows started pre-upgrade
For workflows that were started pre-upgrade, tasks are not being generated correctly.
Workaround: The legacy data needs to be patched by using a workflow script.
Two HTTP requests need to be sent to the Workflow Service “Early adopter” API to deploy a custom workflow and to run it.
Insert the following values:
- <host>: Target system hostname
- <token>: Bearer token for a user with the authority to manage workflows
- <path>: Path to the patch workflow file from the wkc-workflow-service repository at subprojects/app/src/main/resources/processes/ApplyLegacyWorkflowPatch.bpmn
curl --location --request POST 'https://<host>/v3/workflows/flowable-rest/service/repository/deployments' \
--header 'Content-Type: multipart/form-data' \
--header 'Authorization: Bearer <token>' \
--form 'files=@<path>/ApplyLegacyWorkflowPatch.bpmn'
curl --location --request POST 'https://<host>/v3/workflows/flowable-rest/service/runtime/process-instances' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <token>' \
--data-raw '{
"processDefinitionKey": "applyLegacyWorkflowPatch"
}'
Applies to: 3.5.0 and later
Workflow stalls for user tasks without an assignee
A user task might have no assignee because, for example, the category role selected for that task isn’t assigned to any user or an empty user group was added. In this case, the workflow stalls.
Workaround: None for individual category roles. Avoid assigning empty category roles to a workflow configuration. The workaround for user groups is to add users to that group.
Applies to: 3.5.0 and later
You might not be able to discard a draft artifact marked for deletion
If you want to cancel the deletion process of an artifact that is subject to a workflow with at least one approval step, the draft marked for deletion cannot be discarded.
Workaround: Unlock the draft manually and then delete it:
- Get the artifact ID and the version ID of the draft from the URL of the draft artifact details page. A URL for a draft artifact typically has the format https://hostname/gov/<artifact_type>/<artifact_id>/<version_id>.
- Obtain a bearer token as described in IBM Cloud Pak for Data Platform API: Get authorization token.
- Run the following command, replacing these values:
- <artifact_type> with the appropriate artifact type
- <host> with the hostname or IP address of the Cloud Pak for Data cluster
- <artifact_ID> and <version_ID> with the values obtained in step 1
- <token> with the token obtained in step 2
- <user_id> with the ID of the user who unlocks the draft
export ARTIFACT_TYPE=<artifact_type>
curl -k -X POST "https://<host>/v3/governance_artifact_types/$ARTIFACT_TYPE/<artifact_ID>/versions/<version_ID>/logs" -H "accept: application/json" -H 'Content-Type: application/json' -H "Authorization: Bearer <token>" -d '{"new_wf_status":"unlocked", "action":"unlock by api", "allow_edits":true, "user_id": "<user_id>" }'
- Delete the draft by using the following API call. Replace values as described in step 3.
curl -k -X DELETE "https://<host>/v3/glossary_terms/<artifact_ID>/versions/<version_ID>" -H "Content-Type: application/json" -H "accept: application/json" -H "Authorization: Bearer <token>"
Applies to: 3.5.9 and later
Custom workflows
You might encounter these known issues and restrictions when you use custom workflows.
URLs that contain fewer than 20 characters cause errors in the Workflow management page
Creating a workflow HTTP task fails when the URL contains fewer than 20 characters.
Applies to: 3.5.3
Fixed in: 3.5.4
A processing error is produced when the “deliver” step is selected in a workflow configuration
If a custom template was uploaded with a step that has no description, the step can’t be displayed in the UI.
Workaround: Add a description in the template for the step where the description is missing. If a template with an empty step description was already uploaded, upload a new version with the fixed template to override the old one.
Applies to: 3.5.4
Fixed in: 3.5.5
The conditions matrix doesn’t show the correct content
The conditions matrix shows the wrong names for the workflows that currently handle the conditions. Although the displayed information is wrong, functionality is not affected.
Applies to: 3.5.3
Activation modal does not show the correct conditions
The activation modal doesn’t show the correct conditions. This issue has no functional impact.
Applies to: 3.5.3
The conditions that are already set in a workflow configuration are not shown as selected in the conditions matrix
The conditions that are already set in a workflow configuration are not shown as selected in the conditions matrix. As a result, those conditions are removed if other conditions are set for this specific category.
Workaround: Select those conditions again when you modify the conditions for a category.
Applies to: 3.5.3
The first custom workflow configuration can’t have conditions
When you create the first custom workflow configuration for a new workflow type and you attempt to add conditions, you see a matrix load error. Therefore, the first workflow configuration for a custom workflow type must be the default workflow configuration; it can’t have any conditions.
Applies to: 3.5.0 and later
Task fields for custom workflows might show information for previously viewed tasks
If the Task inbox contains multiple tasks from custom requests that use the same text fields (such as title or summary) and you switch from one such task (task A) to another (task B), the contents of those text fields might not be refreshed for task B and still show the values from task A.
Workaround: Refresh your browser page, then open the task that you want to view.
Applies to: 3.5.0 - 3.5.2
Fixed in: 3.5.3
Completed tasks view doesn’t show last action for custom workflows
The Completed by me tab of the Task inbox shows tasks that were completed by the current user in their original version, from before the action was submitted. The values or entries in text fields that were selected for this action are not shown.
Applies to: 3.5.0 - 3.5.2
Fixed in: 3.5.3
Selections in tasks for custom workflows disappear in subsequent steps
Tasks that are generated by custom workflows don’t show the following types of selections that were made in previous steps of the workflow:
- Values in fields that allow multiple selections
- Values in category fields
- Values in single-select lists
Applies to: 3.5.0, 3.5.1
Fixed in: 3.5.2
Date fields in custom workflow templates don’t work
For tasks from custom requests in Task inbox that use a date field (a calendar selection), the value of the selected date is ignored.
Workaround: Do not use custom workflow templates with date fields.
Applies to: 3.5.0, 3.5.1
Fixed in: 3.5.2
Some radio button fields in custom workflow templates don’t work
For tasks from custom requests that use a radio field, selecting one of the choices can cause an error when you submit the form. An error occurs if the definition of the field in the workflow template has a mismatch between the "id" and the "name" of the field, for example, id="high" and name="High".
Workaround: Use only custom workflow templates that have radio fields for which the ID and the name are defined as identical strings.
Applies to: 3.5.0, 3.5.1
Fixed in: 3.5.2
URL task fields for custom workflows incorrectly show as editable and cause an error
When a workflow task contains a URL field that should be read-only in the task, this field is incorrectly displayed as an editable field. If you edit the URL field, an error occurs and you can’t continue the task.
Workaround: After the error, refresh the page in the browser and continue with the task without editing the URL field.
Applies to: 3.5.0, 3.5.1
Fixed in: 3.5.2
Activate a default workflow for a new workflow type before you add users or other workflow configurations
After you create a new workflow type, you must first create and activate a default workflow configuration for this workflow type before you can complete these tasks:
- Add users and category roles to the Access section on the Overview tab of the workflow type. Doing so ensures a proper configuration of the workflow type before the request is shown to regular users. Initially, only the person who created the workflow type can select it in the New Request dialog.
- Create other workflow configurations for that workflow type that have conditions. If you attempt to create a new workflow configuration before you define a default workflow configuration, the Add conditions window remains empty and prevents you from adding conditions.
Applies to: 3.5.0 and later
Data curation
You might encounter these known issues and restrictions when you use advanced data curation tools.
Unable to overwrite term assignments when you publish quick scan results
You can’t overwrite term assignments when you publish quick scan results and the asset already exists in the default catalog.
Workaround:
Run the following command to allow changing or overwriting term assignments on republish of quick scan results when published assets are already in the catalog:
oc set env deploy ia-analysis REPUBLISH_TERM_ASSIGNMENT=true
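To confirm that the variable was set on the deployment, you can list its environment variables (standard oc usage; the grep filter is only for readability):
oc set env deploy ia-analysis --list | grep REPUBLISH_TERM_ASSIGNMENT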
Applies to: 3.5.6
Regular expression fails validation in the UI
When you configure data class matching, the regular expression might fail validation in the UI.
Workaround:
Although the regular expression fails validation in the UI, the data classification can still be applied successfully for automatic term assignment when you run analysis jobs.
Applies to: 3.5.6
Personal connection assets results from profiling are viewable without credentials
Any user can view profiling results for assets from a personal connection without providing credentials. But updating the profile fails for the user who does not provide personal credentials.
Workaround:
Provide the personal credentials in the Asset tab of the Asset details page.
Applies to: 3.5.6
Publication of quick scan results from the schema filter view does not work
Publishing quick scan results from the schema filter view does not work.
Applies to: 3.5.4
Fixed in: 3.5.5
When the Solr pod stays offline for a long time, quick scan jobs are not restarted automatically
If the Solr pod goes down for a long time when a quick scan is running, the quick scan remains in the state of “Analyzing” in the UI and cannot be reset.
Workaround:
Start a new quick scan job to reanalyze the assets.
Applies to: 3.5.4
Unable to resume quick scan jobs that are in a paused state
Scenario: You start multiple quick scan jobs, and only one job runs at a time (by default, the odf-fast-analyzer pod has replicas=1). All jobs in the queue are cancelled, either manually or because of an iis-services pod restart. If you then try to resume the cancelled quick scan jobs, they are automatically restarted by the restart mechanism for the iis-services pod restart scenario, and a permission error is produced.
Workaround:
First, delete the cancelled quick scan jobs and initiate new quick scan jobs. Then, open a remote shell session to the iis-services container:
oc get pods | grep iis-services
oc rsh <iis services pod name>
Next, set the feature flag to skip the workplace permissions check:
/opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -s -k com.ibm.iis.odf.check.ialight.workspace.access -value false
Applies to: 3.5.4
Fixed in: 3.5.5
Publishing an asset fails if the iis-services pod is restarted
Publishing an asset fails if the iis-services pod is restarted. The review status of that asset remains in the “Loading” state.
Workaround:
Run the command d -r <quickscan job id> in ODFAdmin to reset all “Loading” tables to “Loading error.” Then, rerun the publication manually.
Applies to: 3.5.4
Fixed in: 3.5.5
Data quality dashboard reports data quality threshold chart incorrectly
The data quality dashboard reports the data quality threshold chart incorrectly. For all the analyzed data sets, the benchmark status is marked as ‘No Threshold.’
Applies to: 3.5.3
Fixed in: 3.5.4
Data quality columns tab view might not properly display term assignments
When a business term with a long name is assigned to a column, the name is truncated in the data quality columns tab view and other term assignments cannot be seen.
Workaround: Use the governance tab to view term assignments and term details, including the full term name.
Applies to: 3.5.3
Fixed in: 3.5.4
Project sometimes does not appear in the UI
After you create a project from the UI, sometimes the project does not appear in the UI later.
Workaround: Run the following command in the iis-services pod. Replace <userId> with the ID of the user who needs access to the workspace, and <workspace-name> with the name of the workspace or project.
curl -k -u isadmin:$ISADMIN_PASSWORD -H 'Content-Type: application/json' -X POST -d '{"users":[{"id":"<userId>","role":"SorcererBusinessAnalyst,SorcererOperator"}]}' https://localhost:9446/ibm/iis/ia/api/configuration/project/<workspace-name>/addUser
Applies to: 3.5.1, 3.5.2, 3.5.3
Not able to scroll all data assets in a data quality project in tile view
You might not be able to scroll all data assets in a data quality project in the tile view under certain conditions. This issue is specific to browsers on Windows that are zoomed in or that have a low screen resolution in the height dimension.
Workaround: Zoom out (Ctrl + -) in the browser window.
Applies to: 3.5.3
Data sets remain in loading state and do not publish
When you try to publish data sets from the quick scan discovery UI, the status of the data sets remains in the “Loading…” state. This issue occurs in particular after some pods, especially the Solr pod, are restarted during the publishing process.
Workaround: Complete the following steps to publish the data sets again.
- Run the following command, where qs_123456789 is the ID of the discovery job:
oc exec is-en-conductor-0 -- /opt/IBM/InformationServer/ASBNode/bin/ODFAdmin.sh d -r qs_123456789
All data sets that were previously in the “Loading” state are now set to “Loading error.”
- Publish the data sets with “Loading error” again. You can find these data sets by using the UI to filter for Asset type “Table” and Status “Loading error.”
Applies to: 3.5.3
Updates to the database name in platform connections are not propagated to discovery connections
When you use a platform connection in discovery jobs, a copy of the connection is created. If you later change the database name in the platform-level connection, this change is not propagated to the discovery connection. Due to this mismatch, publishing assets from quick scan results fails.
Workaround: Do not update the database name in platform connections that are used for discovery with quick scan.
Applies to: 3.5.0 and later
Quick scan fails on a Hive connection if the JDBC URL has an embedded space
Quick scan runs can fail if a space is embedded in the URL of a JDBC driver for a Hive connection.
Workaround: Correct the JDBC URL by removing any extra spaces that are inside of it.
Applies to: 3.5.0 and later
Quick scan asset preview encounters an error after publishing and is not visible
After you run a quick scan and publish the results, the asset might not be visible and you might get an error message that says a connection is required to view the asset.
This issue applies only to pre-3.5 quick scan (cases where quick scan loads results into Information Governance Catalog). It does not apply to next generation quick scan, which loads results into Watson Knowledge Catalog catalogs.
Workaround: The process of publishing the results eventually finishes even though the error message appears. You do not need to rerun any other process. When the connection metadata arrives, each data asset updates with the attachment information. Check again later to see whether it completed.
Applies to: 3.5.0 and later
Incorrect connections that are associated with connected data assets after automated discovery
When you add connected data assets through automated discovery, the associated connection assets might be incorrect. Connections that have the same database and hostnames are indistinguishable to automated discovery, despite different credentials and table names. For example, many Db2 databases on IBM Cloud have the same database and hostnames. An incorrect connection with different credentials might be assigned and then the data asset can’t be previewed or accessed.
Applies to: 3.5.0 and later
Unable to browse the connection in the discover page
After you complete a backup or restore and then scale up the pods, you can’t browse the connection in the discover page.
Workaround: The connection becomes accessible about 15 minutes after you scale up the pods, even though the conductor status shows as running with 1/1 status.
Applies to: 3.5.0 and later
Data discovery fails when started by a Data Steward
Users with the Data Steward role can start a data discovery, even though they don’t have sufficient permissions to run the discovery. As a result, the discovery fails. You must have the Data Quality Analyst role to run discovery.
Applies to: 3.5.0 and later
Data Stewards can’t create automation rules
Users with the Data Steward role can start creating an automation rule, even though they don’t have sufficient permissions to manage automation rules. As a result, the automation rule is not saved and an error is displayed. You must have the Data Quality Analyst role to create automation rules.
Applies to: 3.5.0 and later
Discovery on a Teradata database fails
When you run a data discovery on a Teradata database by using JDBC connector, and the CHARSET is set to UTF8, the analysis fails with an error.
Example error content: The connector detected character data truncation for the link column C3. The length of the value is 12 and the length of the column is 6.
Workaround: When a database has Unicode characters in the schemas or tables, set the CHARSET attribute to UTF16 when you create a data connection.
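For illustration, a Teradata JDBC URL with this attribute might look as follows (the hostname and database are placeholders; the comma-separated name=value syntax is the Teradata JDBC driver convention):
jdbc:teradata://<host>/DATABASE=<database>,CHARSET=UTF16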
Applies to: 3.5.0 and later
Changes to platform-level connections aren’t propagated for discovery
After you add a platform-level connection to the data discovery area, any subsequent edit to or deletion of the platform-level connection is not propagated to the connection information in the data discovery area and is not effective.
Workaround: Delete the discovery connection manually. You must have the Access advanced governance capabilities permission to be able to complete the required steps:
- Go to Governance > Metadata import.
- Go to the Repository Management tab.
- In the Navigation pane, select Browse assets > Data connections.
- Select the connection that you want to remove and click Delete.
Add the updated platform-level connections to the data discovery area again as appropriate.
Applies to: 3.5.0 and later
Approving tables in quick scan results fails
When a table name contains a special character, its results cannot be loaded to a project. When you click Approve assets, an error occurs.
Also, when you select more than one table to approve and one of them fails to load, the rest of the tables fail as well. The only way to approve the assets is to run the quick scan discovery again.
Applies to: 3.5.0 and later
Virtual tables are not supported for BigQuery connections
You cannot create SQL virtual tables for data assets that were added from Google BigQuery connections.
Applies to: 3.5.0 and later
Column analysis fails if system resources or the Java heap size are not sufficient
Column analysis might fail due to insufficient system resources or insufficient Java heap size. In this case, modify your workload management system policies as follows:
- Open the Information Server operations console by entering its URL in your browser:
https://<server>/ibm/iis/ds/console/
- Go to Workload Management > System Policies. Check the following settings and adjust them if necessary:
- Job Count setting: If the Java heap size is not sufficient, reduce the number to 5. The default setting is 20.
- Job Start setting: Reduce the maximum number of jobs that can start within the specified timeframe from 100 in 10 seconds (the default) to 1 in 5 seconds.
Applies to: 3.5.0 and later
Quick scan hangs when it is analyzing a Hive table that was defined incorrectly
When analyzing a schema that contains an incorrectly defined Hive table, quick scan starts looping when it tries to access the table. Make sure that the table definition for all Hive tables is correct.
Applies to: 3.5.0 and later
Automated discovery might fail when the data source contains a large amount of data
When the data source contains a large amount of data, automated discovery can fail. The error message indicates that the buffer file systems ran out of file space.
Workaround: To have the automated discovery complete successfully, use one of these workarounds:
- Use data sampling to reduce the number of records that are analyzed. For example, set the sample size to 10% of the total number of records.
- Have an administrator increase the amount of scratch space for the engine that runs the analysis process; see the sketch after this list. The administrator needs to use the Red Hat OpenShift cluster tools to increase the size of the volume where the scratch space is located, typically /mnt/dedicated_vol/Engine in the is-en-conductor pod. Depending on the storage class that is used, the scratch space might be on a different volume. The size requirements for scratch space depend on the workload. As a rule, make sure to have enough scratch space to fit the largest data set that is processed. Then, multiply this amount by the number of similar analyses that you want to run concurrently. For more information about expanding volumes, see the instructions in the OpenShift Container Platform documentation.
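If the scratch space is backed by a persistent volume claim and the storage class supports volume expansion, the administrator might expand it as in this sketch (the PVC name and target size are hypothetical):
oc patch pvc engine-scratch-pvc -p '{"spec":{"resources":{"requests":{"storage":"500Gi"}}}}'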
Applies to: 3.5.0 and later
Discovery jobs fail due to an issue with connecting to the Kafka service
Automated discovery and quick scan jobs fail if no connection to the Kafka service can be established. The iis-services and odf-fast-analyzer deployment logs show error messages similar to the following example:
org.apache.kafka.common.KafkaException: Failed create new KafkaAdminClient
at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:338)
at org.apache.kafka.clients.admin.AdminClient.create(AdminClient.java:52)
at com.ibm.iis.odf.core.messaging.kafka.KafkaQueueConsumer.createTopicIfNotExistsNew(KafkaQueueConsumer.java:184)
at com.ibm.iis.odf.core.messaging.kafka.KafkaQueueConsumer.createTopicIfNotExists(KafkaQueueConsumer.java:248)
at com.ibm.iis.odf.core.messaging.kafka.KafkaQueueConsumer.startConsumption(KafkaQueueConsumer.java:327)
at com.ibm.iis.odf.core.messaging.kafka.KafkaQueueConsumer.run(KafkaQueueConsumer.java:260)
at java.lang.Thread.run(Thread.java:811)
To resolve the issue, an administrator must restart Kafka manually by running the following command:
oc delete pod kafka-0
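To confirm that the pod was re-created and is ready before you restart any discovery jobs, you can watch its status (standard oc usage):
oc get pod kafka-0 -w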
Applies to: 3.5.0 and later
Settings for discovery or analysis might be lost after a pod restart or upgrade
After a pod restart or upgrade, settings might be lost or reverted to their default values, such as RHEL-level properties on the pod in the nproc file or MaximumHeapSize in ASBNode/conf/proxy.xml. For more information about these settings, see Analysis or discovery jobs fail with an out-of-memory error.
Workaround:
Check your settings before you start the upgrade. Most settings are retained, but some might be reverted to their default values. Check /etc/security/limits.conf on every compute node in the cluster and add or edit the required settings as follows:
- The parameters from the is-en-conductor-0 pod:
/opt/IBM/InformationServer/Server/DSEngine/bin/dsadmin -listenv ANALYZERPROJECT | grep DEFAULT_TRANSPORT_BLOCK
APT_DEFAULT_TRANSPORT_BLOCK_SIZE=3073896
com.ibm.iis.odf.datastage.max.concurrent.requests=4 (contained in odf.properties)
- The parameters from the iis-services pod:
/opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -d -k com.ibm.iis.ia.max.columns.inDQAOutputTable
com.ibm.iis.ia.max.columns.inDQAOutputTable=500
/opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -d -k com.ibm.iis.ia.server.jobs.postprocessing.timeout
com.ibm.iis.ia.server.jobs.postprocessing.timeout=84600000
/opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -d -k com.ibm.iis.events.kafkaEventConsumer.timeout
com.ibm.iis.events.kafkaEventConsumer.timeout=10000
/opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -d -key com.ibm.iis.ia.jdbc.connector.heapSize
com.ibm.iis.ia.jdbc.connector.heapSize=2048
/opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -d -key com.ibm.iis.ia.engine.javaStage.heapSize
com.ibm.iis.ia.engine.javaStage.heapSize=1024
/opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -d -k com.ibm.iis.ia.server.useSingleFDTable
com.ibm.iis.ia.server.useSingleFDTable=true
- The limits that are defined in /etc/security/limits.conf:
root soft nofile 65000
root hard nofile 500000
* soft nproc 65000
* soft nofile 65000
* hard nofile 500000
dsadm soft nproc 65000
dsadm soft nofile 65000
hdfs soft nproc 65000
root soft nproc 65000
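If a displayed value no longer matches the expected value that is listed above, it can be set again with the same iisAdmin.sh pattern that is used elsewhere in this document, for example (a sketch using the documented property and value):
/opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -s -k com.ibm.iis.ia.jdbc.connector.heapSize -value 2048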
Applies to: 3.5.0 and later
Discovery jobs are not resumed on restore
When you restore the system from a snapshot, discovery jobs that were in progress at the time the snapshot was taken are not automatically resumed. You must explicitly stop and restart them.
Applies to: 3.5.0 and later
Quick scan times out and fails
If you run quick scan (NGQS) for many tables and the scan takes more than 13 hours, the authentication token times out and the scan fails.
Workaround: Set the IIS property com.ibm.iis.ia.server.accessAllProjects to true. Doing so allows the quick scan to complete even if the token that is used for writing results to IALight was generated by the isadmin user.
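The property can presumably be set on the iis-services pod with the same iisAdmin.sh pattern that is used for other properties in this document (a sketch):
oc rsh <iis-services pod name>
/opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -s -k com.ibm.iis.ia.server.accessAllProjects -value true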
Applies to: 3.5.0, 3.5.1
Fixed in: 3.5.2
Quick scan Approve assets or Publish assets operation fails
If the quick scan Approve assets or Publish assets operation fails for some assets, reattempt Approve or Publish for the failed assets.
Applies to: 3.5.0 and later
A connection that was made by using HDFS through the Execution Engine cannot be used with auto discovery
In Global connections, you can make an HDFS connection in two ways: by using HDFS through the Execution Engine or by using Apache HDFS. A connection that was made by using HDFS through the Execution Engine cannot be used with auto discovery.
Workaround: Use the Apache HDFS option to make an HDFS connection for auto discovery.
Applies to: 3.5.0 and later
After you publish several assets in quick scan only one of the duplicate assets is published, while other duplicates fail
When you publish several assets that have the same name but come from different database schemas in quick scan (versions later than 3.5), only one of the duplicate assets (tables) is published, while the other duplicates fail.
Workaround: You can publish the assets on a per-schema basis only. Publish the assets of schema A first, then schema B, and so on.
Applies to: 3.5.0 and later
Quick scan dashboard generates a long URL, which causes the dashboard load to fail
If a user opens the WKC quick scan dashboard and has access to many Information Analyzer projects, the generated URL exceeds the request header size limit, which causes the dashboard to fail to load.
Workaround: An administrator can increase the default request header size for queries that are issued against Solr from 8192 to 65535 by running the following command on the OpenShift cluster. A header size of 65535 solves this problem in most cases; if not, increase the value further.
oc patch sts solr --patch '{"spec": {"template": {"spec": {"containers": [{"env": [{"name": "SOLR_OPTS","value": "-Dsolr.jetty.request.header.size=65535"}],"name":"solr"}]}}}}'
You can also avoid using a single “global Quick Scan superuser” account that has access to more than 100 projects, and use several business area-specific accounts instead.
Applies to: 3.5.0 and later
Quick scan jobs that were run with pre-3.5 quick scan are not shown
After an upgrade, quick scan jobs that were run with the pre-3.5 quick scan are not shown. To make them visible, follow these steps:
- Log in to Watson Knowledge Catalog as a user with admin privileges.
- Open another tab in the browser and open the following URL:
https://<CloudPakforData_URL>/ibm/iis/odf/v1/discovery/fastanalyzer/monitor/reindex
You should now be able to see all previously run quick scan jobs.
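If you prefer the command line, the same endpoint can presumably be called with curl and a bearer token (a sketch; obtain the token as described in IBM Cloud Pak for Data Platform API: Get authorization token):
curl -k --request GET 'https://<CloudPakforData_URL>/ibm/iis/odf/v1/discovery/fastanalyzer/monitor/reindex' \
--header 'Authorization: Bearer <token>'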
Applies to: 3.5.0 and later
On Firefox, no details are shown for assets that are affected by an automation rule
When you save an automation rule in the Firefox web browser, the details of the affected asset might not be displayed when you click Show details. In this case, the message “No details to display” is shown.
Workaround: Use a different browser.
Applies to: 3.5.0 and later
When you add data files to the data quality project, the Tree view doesn’t show data files
Workaround: Use the Search view to find data files to be added to a project.
Applies to: 3.5.0 and later
The owner of the table-assets that are synced to the default catalog is shown as an administrator instead of an asset owner
After a pre-3.5 quick scan is run and the results are published to the default catalog, the owner of the table assets that are synced to the default catalog is shown as an administrator instead of the asset owner.
Workaround: On the iis-services pod, run the following command:
/opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -s -k com.ibm.iis.odf.qsPublishAsTask -value true
Applies to: 3.5.0, 3.5.1
Fixed in: 3.5.2
The Run analysis button cannot be found
When you work in the Relationships tab in your data quality project, you cannot see the Run analysis button if there are many data assets in your project.
Workaround: Make the font size smaller in your browser until the button becomes visible.
Applies to: 3.5.0 and later
Password change is not propagated to the copy handled by auto discovery and quick scan
When you use auto discovery or quick scan, connections are created in Global Connections and copied to auto discovery and quick scan. If the password is later changed in the connection in Global Connections, the change is not propagated to the copy that is handled by auto discovery and quick scan.
Workaround:
- Open IMAM by selecting Metadata import in the Catalogs menu. You must have the “Access advanced governance” permission to open IMAM.
- Locate the connection and update the password.
Applies to: 3.5.0, 3.5.1
Fixed in: 3.5.2
Platform connections with encoded certificates cannot be used for discovery
SSL-enabled platform connections that use a base64 encoded certificate cannot be used in discovery jobs. Connections that use decoded (plain-text) certificates work.
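If you only have the base64 encoded form of a certificate, a decoded copy can be produced with the standard base64 utility before you define the connection (a sketch; the file names are placeholders):
base64 -d encoded_cert.b64 > cert.pem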
Applies to: 3.5.0 and later
Loading assets to workspace fails with Solr query limit error (Sql Error: 414)
If you approve multiple assets at the same time, the loading of the workspace might fail with a Solr query error similar to the following example:
ibm.iis.discovery.fastanalyzer.impl.FastAnalyzerSolrWrapper E Solr query returned with errors 414
Workaround:
- Find the iis-services pod and open a remote shell session to it:
oc get pods | grep iis-services
oc rsh <iis-services pod name>
- Disable the following feature flag:
/opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -s -k com.ibm.iis.odf.quickscan.publish.load.with.where -value false
Applies to: 3.5.0 and later
Quality score tab in list of data sets doesn’t show correct results after first click
Within your data quality project, clicking the Quality score tab in the list of data sets doesn’t sort the results correctly on the first click, and no error is displayed.
Workaround: Click the Quality score tab a second time to sort the results correctly.
Applies to: 3.5.0 and later
Without data sampling, analysis or discovery jobs fail when run on very large tables
Data quality analysis or automated discovery jobs that run without data sampling fail on very large tables. The is-en-conductor-0 pod stops unexpectedly and restarts during the job run.
Workaround: Disable the Suspect values data quality dimension for all or just a specific project if you must run the jobs without data sampling.
Applies to: 3.5.6 and later
Can’t filter quick scan results by data classes or terms
Filtering quick scan results by data classes or terms doesn’t work. An error message is displayed instead of any results.
This issue occurs because the filter cache no longer works with the latest version of Solr.
Workaround: An administrator can change the filter cache by completing the following steps on any computer that has the oc command-line interface installed:
- Log in to the Solr pod:
oc rsh solr-0
- On the Solr pod, download the configuration of the analysis collection:
solr zk -downconfig -z zookeeper:2181/solr -n analysis -d /tmp/analysis
- Edit the /tmp/analysis/conf/solrconfig.xml file and look for the following XML tag:
<filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="0"/>
- Change the tag as follows, replacing FastLRUCache with CaffeineCache:
<filterCache class="solr.CaffeineCache" size="512" initialSize="512" autowarmCount="0"/>
- Upload and activate the changed configuration:
solr zk -upconfig -z zookeeper:2181/solr -n analysis -d /tmp/analysis
curl "http://solr:8983/solr/admin/collections?action=RELOAD&name=analysis&wt=json"
Applies to: 3.5.8
Fixed in: 3.5.9
Can’t use connections using the Amazon S3 connector
No connections to Amazon S3 can be established by using the Amazon S3 connector. Therefore, no such Amazon S3 connections are available for read, write, or metadata import operations.
Applies to: 3.5.7 and 3.5.8
Fixed in: 3.5.9
Quick scan or automated discovery might not work for generic JDBC platform connections with values in the JDBC properties field
Quick scan or automated discovery doesn’t work with a Generic JDBC connection that was created in the Platform connections UI and that has JDBC property information in the JDBC properties field.
Workaround: Append all JDBC properties to the URL in the JDBC url field instead.
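As a purely hypothetical illustration (the driver URL and property name below are invented for this sketch and are not from the product documentation), moving a property from the JDBC properties field into the JDBC url field might look like this:
Before (sketch):
JDBC url: jdbc:exampledb://dbhost.example.com:50000/SALES
JDBC properties: characterEncoding=UTF-8
After (sketch):
JDBC url: jdbc:exampledb://dbhost.example.com:50000/SALES;characterEncoding=UTF-8
JDBC properties: (empty)
The separator for appended properties (for example, a semicolon or question mark) depends on the specific JDBC driver.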
Applies to: 3.5.0 and later
Db2 SSL/TLS connections can’t be used in discovery
When you create a discovery job, you can’t add a Db2 platform connection that is configured with SSL and uses a custom TLS certificate. When you try to add such a platform connection to an automated discovery or quick scan job, the following error occurs:
Failed to add connection. No connection could be created for discovery. Try again later
The request createDataConnection could not be processed because the following error occurred in the server: The connector failed to connect to the data source. The reported error is: com.ibm.db2.jcc.am.SqlInvalidAuthorizationSpecException: [jcc][t4][201][11237][4.28.11] Connection authorization failure occurred. Reason: Security mechanism not supported. ERRORCODE=-4214, SQLSTATE=28000. Transaction ID: 4899157575.
This error occurs because the security mechanism that is set by default for all SSL connections when a discovery connection is added doesn’t match the security mechanism defined at the Db2 server level.
Workaround: Create two connections with the same name:
- A platform connection in Data > Platform connections
- A connection for use in discovery in Catalogs > Metadata import
When you set up a discovery job, use the metadata import connection.
Applies to: 3.5.0 and later
Some connection names aren’t valid for automated discovery
If the name of a connection in the default catalog contains special characters, this connection can’t be used in automated discovery jobs.
Workaround: Do not use any special characters in connection names.
Applies to: 3.5.2 and later
Can’t view the results of a data rule run
In a data quality project, the following error occurs when you try to view the run details of a data rule.
CDICO0100E: Connection failed: Disconnect non transient connection error: [jcc][t4][2043][11550][4.26.14] Exception No route to host error: Error opening socket to server /172.30.217.104 on port 50,000 with message: No route to host (Host unreachable). ERRORCODE=-4499, SQLSTATE=08001
Workaround: Complete the following steps:
- Log in to the iis-services pod by running the following command:
oc rsh `oc get pods | grep -i "iis-services" | awk -F' ' '{print $1}'` bash
- Run the following command for each data quality project where data rules are defined. Replace <user> and <password> with the credentials of an Information Server administration user:
/opt/IBM/InformationServer/ASBServer/bin/IAAdmin.sh -user <user> -password <password> -migrateXmetaMetadataToMicroservice -projectName <DQ-PROJECT_NAME> -forceUpdate true
Applies to: 3.5
Overriding rule or rule set runtime settings at the columns level causes an error
In a data quality project, if you try to override runtime settings for a rule or a rule set when you run the rule or rule set on the Rules tab in the Column page, an error occurs. Instead of the Data quality user interface, an error message is displayed.
Workaround: Override the rule or rule set runtime settings only when you run the rule or rule set on the Rules tab of the Project or Data asset page.
Applies to: 3.5
Global search
You might encounter these known issues and restrictions when you use global search.
Missing path details for some results in global search
Governance artifacts and information assets don’t display path details in global search results.
Applies to: 3.5.0 and later