Limitations and known issues for Watson Knowledge Catalog

These known issues apply to Watson Knowledge Catalog.

General issues

You might encounter these known issues and restrictions when you work with the Watson Knowledge Catalog service.

Sync configuration of Watson Knowledge Catalog does not happen

If common core services (CCS) is installed and then uninstalled before Watson Knowledge Catalog is installed, the Watson Knowledge Catalog sync configuration does not run, and some functions do not work properly.

Do not install Watson Knowledge Catalog without having CCS already installed.

Applies to: 4.0.1

Unable to assign users as stewards

You are unable to assign Cloud Pak for Data users as information assets stewards.

Workaround:
Manually add the users as stewards in the information assets view. To do so, access the information assets administration view page directly by entering the following URL in your browser, where <hostname> is your Cloud Pak for Data hostname: https://<hostname>/ibm/iis/igc/#manageStewards

Then, manually add stewards from the drop-down list.

Applies to: 4.0.1

User groups not supported in certain areas

These areas do not support user groups:

  • Data discovery
  • Data quality
  • Information assets
  • Watson Knowledge Catalog workflow
  • Categories (except the All users group, which in categories represents all users who have permission to access governance artifacts)

Applies to: 3.5.0 and later

Categories might not show the latest updates

Categories and their contents can be processed in different areas of Watson Knowledge Catalog. For this reason, the contents of categories that are currently being viewed might not show the latest updates.

To ensure that you are viewing the latest updates, it is recommended that you manually refresh the view.

Migration of legacy HDFS connections defaults to SSL

After migrating legacy WebHDFS connections, you might receive the following error from the migrated Apache HDFS connection: The assets request failed: CDICO0100E: Connection failed: SCAPI error: An invalid custom URL (https://www.example.com) was specified. Specify a valid HTTP or HTTPS URL.

Workaround: Modify your WebHDFS URL protocol from https to http.
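As a sketch, the workaround amounts to rewriting the URL scheme; the host and port below are made-up examples:

```shell
# Swap the URL scheme from https to http (host and port are made-up examples)
url='https://hdfs-host.example.com:50070/webhdfs/v1'
printf '%s\n' "$url" | sed 's|^https://|http://|'
# → http://hdfs-host.example.com:50070/webhdfs/v1
```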

Applies to: 3.5.0 and later

Execution history cannot be migrated because request entity is too large

For larger projects, the migration command runs for several hours and shows a message that says the execution history cannot be migrated because the request entity is too large.

Workaround:

  1. Log in to your cluster.
  2. Edit the ugi-addon configmap:
    oc edit cm ugi-addon
    
  3. Update icpdata_addon_version: 3.0.1 to icpdata_addon_version: 3.5.0.
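For reference, the changed entry in the configmap looks like this after the edit (a fragment only; surrounding keys are omitted):

```yaml
# Fragment of the ugi-addon configmap; only this value changes
icpdata_addon_version: 3.5.0   # was 3.0.1
```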

Applies to: 3.5.0 and later

Catalog issues

You might encounter these known issues and restrictions when you use catalogs.

Catalog UI produces error if the Redis data store connection is dropped

If the Redis connection is dropped, the catalog UI becomes inoperable and displays “unknown error”. The catalog UI continues to try to reconnect to Redis for up to 1 hour. 

If Redis resumes within an hour, the catalog UI recovers and no further action is required. However, if Redis does not become available within an hour, restart the catalog UI pods (dc-main and portal-catalog) when Redis is running again so the pods can reestablish the Redis connection.

Applies to: 4.0.1

Downloading data asset in catalog fails after you switch tabs

Downloading a data asset in a catalog fails after you switch tabs several times while you work with a single asset.

Workaround:
Refresh the page or click Refresh in the preview table.

Applies to: 4.0.1

“Create catalog” permission does not work

If you create a role that has the “create catalog” permission, the permission does not take effect, even though that permission appears to be available.

Workaround:
Use the “manage catalogs” permission instead.

Applies to: 4.0
Fixed in: 4.0.1

The default “all_users” group is missing from the Platform assets catalog

The “all_users” group is normally part of the Platform assets catalog by default, but in some cases the group might be missing.

Workaround: Use the Asset API to add the “all_users” group to the Platform assets catalog. Use the token of a user who can add collaborators, which is usually the administrator.

POST /v2/catalogs/{catalog_id}/members
Request body:
{
  "members": [
    {
      "access_group_id": "10000",
      "role": "viewer"
    }
  ]
}
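For illustration, the call might be issued with curl as follows; the hostname, catalog ID, and token are placeholders, and the flags mirror the resync example elsewhere in this document:

```shell
# Hypothetical invocation; <hostname>, <catalog_id>, and <token> are placeholders
curl -k -X POST "https://<hostname>/v2/catalogs/<catalog_id>/members" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <token>" \
  -d '{"members": [{"access_group_id": "10000", "role": "viewer"}]}'
```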

Applies to: 3.5.1, 3.5.2 and later

Business Analyst role does not have the Access catalog permission

A user with the Business Analyst role cannot access the catalog, even though the Business Analyst role supports access to the catalog.

Workaround: An administrator can add the Access Catalog permission to the Business Analyst role. For more information, see Predefined roles and permissions and Managing roles.

Applies to: 3.5.2 and later

Reference connections do not sync correctly

Reference connections that are created from the platform catalog do not sync properly from Watson Knowledge Catalog into Information Assets.

Workaround: Create the connections as part of running a quick scan, and the connection syncs correctly. Another option is to create the connection directly in the catalog where the connection needs to be used.

Applies to: 3.5.2 and later

Blank page produced when you hover over an asset

If you are looking at the Activities tab of a model and you hover over an asset, a blank page is displayed.

Applies to: 3.5.0 and later

Missing previews

You might not see previews of assets in these circumstances:

  • In a catalog or project, you might not see previews or profiles of connected data assets that are associated with connections that require personal credentials. You are prompted to enter your personal credentials to start the preview or profiling of the connection asset.
  • In a catalog, you might not see previews of JSON, text, or image files that were published from a project.
  • In a catalog, the previews of JSON and text files that are accessed through a connection might not be formatted correctly.
  • In a project, you cannot view the preview of image files that are accessed through a connection.

Applies to: 3.5.0 and later

Add collaborators with lowercase email addresses

When you add collaborators to the catalog, enter email addresses with all lowercase letters. Mixed-case email addresses are not supported.
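A quick way to normalize an address before you add the collaborator (the address shown is a made-up example):

```shell
# Lowercase a mixed-case email address (made-up example address)
printf '%s\n' 'Jane.Doe@Example.COM' | tr '[:upper:]' '[:lower:]'
# → jane.doe@example.com
```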

Applies to: 3.5.0 and later

Multiple concurrent connection operations might fail

An error might be encountered when multiple users are running connection operations concurrently. The error message can vary.

Applies to: 3.5.0 and later

After upgrade, you can’t add or test connections during metadata import or discovery

You upgraded to Watson Knowledge Catalog. When you try to add new or test existing connections during metadata import or discovery, the connection might remain in a waiting state.

Workaround: Restart the agent in the is-en-conductor-0 pod. If the agent still does not serve any requests, delete the conductor pod. This way, you create a new instance of the pod and you can add or test connections again.

Applies to: 3.5.0 and later

Can’t enable enforcing data protection rules after catalog creation

You cannot enable the enforcement of data protection rules after you create a catalog. To apply data protection rules to the assets in a catalog, you must enable enforcement during catalog creation.

Applies to: 3.5.0 and later

Assets are blocked if evaluation fails

The following restrictions apply to data assets in a catalog with policies enforced: File-based data assets that have a header can’t have duplicate column names, a period (.), or single quotation mark (‘) in a column name.

If evaluation fails, the asset is blocked to all users except the asset owner. All other users see an error message that the data asset cannot be viewed because evaluation failed and the asset is blocked.

Applies to: 3.5.0 and later

Asset owner might be displayed as “Unavailable”

When data assets are created in advanced data curation tools, the owner name for those assets in the default catalog is displayed as “Unavailable”.

Applies to: 3.5.0 and later

Synchronizing information assets

In general, the following types of information assets are synchronized:

  • Tables and their associated columns
  • Files and their associated columns
  • Connections (limited to specific types)

Data assets that are discovered from Amazon S3 are currently not synchronized from the Information assets view to the default catalog.

Workaround: Add the respective data assets directly to the default catalog.

Applies to: 3.5.0 and later

You can’t remove information assets from the default catalog or a project

You can’t remove information assets from data quality projects or the default catalog. These assets are still available in the default catalog.

Workaround: To remove an information asset from the default catalog or data quality projects, you first have to remove it from Information assets view. The synchronization process propagates the delete from Information assets view into the default catalog. However, you can remove assets from the default catalog or projects if they are not synchronized.

Applies to: 3.5.0 and later

Log-in prompt is displayed in Organize section

When you’re working in the Organize section, a log-in prompt might be displayed, even though you’re active.

Workaround: Provide the same credentials that you used to log in to Cloud Pak for Data.

Applies to: 3.5.0 and later

Missing default catalog and predefined data classes

The automatic creation of the default catalog after installation of the Watson Knowledge Catalog service can fail. If it does, the predefined data classes are not automatically loaded and published as governance artifacts.

Workaround: Ask someone with the Administrator role to follow the instructions for creating the default catalog manually.

Applies to: 3.5.0 and later

Quick scan jobs remain in status QUEUED for a long time after restart of the quick scan pod

If you run large automated discovery jobs, or large jobs were run recently, and you then restart the quick scan pod, quick scan jobs might remain in the QUEUED status for a long time because of the number of messages that must be skipped during pod startup.

To reduce the amount of time until quick scan jobs are started, run the following steps:

  1. Pause the quick scan jobs in status QUEUED.
  2. Edit the deployment of the quick scan pod:
    oc edit deploy odf-fast-analyzer
    
  3. Locate the lines that contain:
    name: ODF_PROPERTIES
    value: -Dcom.ibm.iis.odf.kafka.skipmessages.older.than.secs=43200
    
  4. Replace 43200 with a smaller value such as 3600 to limit the number of messages that need to be skipped.
  5. Save the update, which triggers re-creation of the pod. Wait until the quick scan pod is in the RUNNING status.
  6. Resume the quick scan job paused in step 1. Its status changes to RUNNING within a short period.
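After step 4, the relevant part of the odf-fast-analyzer deployment looks similar to this fragment (other fields omitted):

```yaml
# Fragment of the odf-fast-analyzer deployment spec after the edit
env:
  - name: ODF_PROPERTIES
    value: -Dcom.ibm.iis.odf.kafka.skipmessages.older.than.secs=3600
```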

Applies to: 3.5.0 and later

An error occurs when you import metadata

When you import metadata, you might encounter an error message. Wait a moment, then try the import again.

Make sure that you have the required permissions. See Metadata import.

Applies to: 3.5.0 and later

An error occurs while you retrieve the catalog details

You might occasionally receive the message, “An error occurred while retrieving the catalog details”, with options to View logs and Retry.

Workaround:

You can:

  • Refresh the browser; or
  • Click another tab, such as Overview, Asset, or Access, then return to the Profile tab.

Applies to: 3.5.0 and later

Non-owner cannot update profile of an asset in a governed catalog

A user who does not own an asset in a governed catalog cannot update the asset's profile. Profiling fails because the asset is protected by policies.

Workaround: The owner of the asset must update the profile.

Applies to: 3.5.0 and later

Business terms can be edited by all admin and editor users regardless of asset membership

For a catalog asset, business terms can be edited by all admin and editor users regardless of asset membership. This will be corrected in a patch to allow only admin users or editor users who are asset members to edit the business terms of an asset.

Applies to: 3.5.0 and later

Viewer access restrictions

Adding related assets in the UI is not supported for users who have only the Viewer role. This option will be hidden in a future patch.

Applies to: 3.5.0 and later

Cannot see data types in profiling results after you publish to a catalog from NGQS

After you publish an asset to a catalog from NGQS, data types are not shown in the profiling results.

To view data types:

  • Check the data types under the Access tab; or
  • Update the profile by clicking Update profile.

Applies to: 3.5.0 and later

Flickering scroll bars on the profile tab

For some browser resolution settings, the horizontal and vertical scroll bars on the profile tab flicker. This happens for Mozilla Firefox and for Google Chrome web browsers.

Workaround: Click somewhere on the profiling page and change the browser zoom level. On a Mac keyboard, press Command+plus sign key (+) to zoom in or Command+minus sign key (-) to zoom out. On a Windows keyboard, press Ctrl+plus sign key (+) or Ctrl+minus sign key (-).

Applies to: 4.0
Fixed in: 4.0.1

Governance artifacts

You might encounter these known issues and restrictions when you use governance artifacts.

Export to a compressed file times out

Exporting data to a compressed file fails with a timeout after 1 minute.

Applies to: 4.0.1

CSV file does not contain all categories that are viewable

When you export a CSV file, the file does not contain all the categories that you are able to see in the UI.

Workaround:
Users must be assigned as viewers in categories directly, not by using only the “All users” group.

Applies to: 4.0.1

You cannot delete relationships for non-term assets

You cannot delete relationships for non-term assets, such as policies, data classes, categories, and rules.

Workaround: Use the API to delete relationships from non-term assets.

Applies to: 4.0.1

After upgrade, the InfoSphere Information Server glossary migration service is unavailable

After you upgrade from 3.5.x to 4.0.1, the InfoSphere Information Server glossary migration service is unavailable.

Workaround: Restart the wkc-glossary-service pod:

oc get pods | grep wkc-glossary-service
oc delete pod <wkc-glossary-service-pod>

Applies to: 4.0.1

Update to glossary asset properties does not sync

If you update a term property, such as the description, the update is not synced.

Applies to: 4.0

Artifacts are not synced to Information Assets view

Glossary artifacts that were created on the core version of Watson Knowledge Catalog before installing the full version of Watson Knowledge Catalog are not synced to the Information Assets view. This problem occurs even after a manual reindex.

Workaround: Run a batch load command to sync the artifacts. See the following example:

curl -k -X GET "https://<hostname>/v3/glossary_terms/admin/resync?artifact_type=<artifact_type>" -H "accept: application/json" -H "Authorization: bearer <token>"

where <artifact_type> can be all, category, glossary_term, classification, data_class, reference_data, policy, or rule.

Applies to: 4.0

Async publish redirection to published asset not working properly

Reference data async publish redirection to the published asset does not work properly when an approval process is set up in the workflow.

Workaround: Go to the Published tab and open the published asset.

Applies to: 3.5.4 and later

When you add a governance artifact to a category on the Category details page, the search feature does not work

The search feature does not work during the process of adding a governance artifact to a category on the Category details page.

Workaround: Add artifacts to categories from the artifact detail pages.

Applies to: 3.5.4 and later

Unable to create a rule

When you create a new governance artifact and try to use it to create a rule, the creation of the rule appears to fail.

Workaround: The cache might take up to 10 minutes to rebuild before you can see the artifact in the drop-down list. Wait for this period to elapse, then check again.

Applies to: 3.5.2, 3.5.3 and later

Cannot view glossary artifacts that have a custom attribute of type “numeric” if the value is set to 0

If a glossary artifact has a custom attribute of the type “numeric” and the value is set to 0, the UI does not display the artifact.

Applies to: 3.5.3 and later

Asset sync issues are produced when you try to publish terms

If you upgrade from Cloud Pak for Data version 3.0.1 directly to version 3.5.2 or version 3.5.3, asset sync issues are produced when you try to publish terms. To avoid these issues, you must upgrade from Cloud Pak for Data version 3.0.1 to version 3.5.1, then upgrade from version 3.5.1 to version 3.5.2 or 3.5.3.

Workaround: Instead of upgrading from version 3.0.1 to 3.5.2 or 3.5.3, upgrade from 3.0.1 to 3.5.4.

Applies to: 3.5.2, 3.5.3 and later

Delay in showing artifacts after upgrade

When you first log in after an upgrade, the appearance of artifacts is delayed.

Workaround: Wait a few minutes for the artifacts to appear. If you experience this issue consistently, set the following environment variable on the wkc-search pod:

oc set env deploy/wkc-search dps_retry_timeout=300000

You must reset this variable every time the pod is restarted, and the UI might take up to 5 minutes to respond in some cases.

Applies to: 3.5.0 and later

Query strings longer than 15 characters of a single token can miss results

Query strings longer than 15 characters of a single token can miss results that have the exact match of those characters in a longer token.

For example, a query for “abcdefghijklmn” finds “abcdefghijklmnopqrstuv”, but a query for “abcdefghijklmnop” does not find the right result.

Workaround: Return to the drafts list page, reopen the asset, then try publishing again.

Applies to: 3.5.0 and later

Reindexing might fail on an upgraded environment

After an upgrade, reindexing might not work, resulting in not being able to use data quality, discovery, or quick scan. You also cannot view information assets, and if you import any assets by using istools, the assets are not visible.

Workaround:

Re-create all the custom SQL views under the CMVIEWS, IGVIEWS, IAVIEWS, CEFVIEWS, and REMVIEWS schemas by running the following command:

cd <IS_INSTALL_DIR>/ASBServer/bin

./xmetaAdmin.sh processSqlViews -createViews -nsuri http:///5.3/ASCLModel.ecore -artifact GovernanceViews -allowReservedWords -dbfile ../conf/database.properties

./xmetaAdmin.sh processSqlViews -createViews -nsuri http:///2.3/ASCLCustomAttribute.ecore -artifact GovernanceViews -allowReservedWords -dbfile ../conf/database.properties

./xmetaAdmin.sh processSqlViews -createViews -nsuri http:///ASCLModel/5.2/ASCLAnalysis.ecore -artifact GovernanceViews -allowReservedWords -dbfile ../conf/database.properties

./xmetaAdmin.sh processSqlViews -createViews -nsuri http:///4.3/GlossaryExtensions.ecore -artifact GovernanceViews -allowReservedWords -dbfile ../conf/database.properties

./xmetaAdmin.sh processSqlViews -createViews -nsuri http:///ASCLModel/6.4/investigate.ecore -artifact GovernanceViews -allowReservedWords -dbfile ../conf/database.properties

./xmetaAdmin.sh processSqlViews -createViews -nsuri http:///1.0/CommonEvent.ecore -artifact GovernanceViews -allowReservedWords -dbfile ../conf/database.properties

./xmetaAdmin.sh processSqlViews -createViews -nsuri http:///ASCLModel/1.1/EMRemediation.ecore -artifact GovernanceViews -allowReservedWords -dbfile ../conf/database.properties

Applies to: 3.5.0 and later

Business term relationships to reference data values are not shown

You can add a related business term to a value in a reference data artifact. However, the related content for a business term does not show the related reference data value.

Applies to: 3.5.0 and later

Limitations to automatic processing in data protection rules

Data protection rules do not automatically process artifacts that are closely related to artifacts that are explicitly specified in these cases:

  • Synonyms to business terms in conditions are not automatically processed. Only terms that are explicitly specified in conditions are considered.
  • Dependent data classes of data classes in conditions or actions are not automatically considered. For example, if you specify the data class “Drivers License” in a condition, the dependent data classes, such as New York State Driver’s License, are not processed by the rule.

Applies to: 3.5.0 and later

Can’t reactivate inactive artifacts

When an effective end date for a governance artifact passes, the artifact becomes inactive. You can’t reset an effective end date that’s in the past. You can’t reactivate an inactive artifact. Instead, you must re-create the artifact.

Applies to: 3.5.0 and later

Can’t export top-level categories

You can’t export top-level categories. You can export only lower-level categories. However, you can export a top-level category if it is included with an export of its subcategories.

Applies to: 3.5.0 and later

Data classes display multiple primary categories

When you assign a subcategory to a data class to be the primary category, all higher-level categories of the selected subcategory are also displayed in the details of this data class as primary categories. However, they are not assigned.

Applies to: 3.5.0 and later

Reimport might fail to publish when previous import request is not yet finished

Reimport might fail to publish governance artifacts such as business terms when called immediately after a previous import request.

Importing and publishing many governance artifacts is done in the background and might take some time. If you republish artifacts when the initial publishing process of the same artifacts isn’t finished yet, the second publishing request fails and the status of the governance artifact drafts shows Publish failed.

Make sure that the publishing process is finished before you try to import and publish the same artifacts again.

Applies to: 3.5.0 and later

After you import and publish many data classes, the Profile page is not updated and refreshed

If you create and publish a data class, the Profile page of an asset is updated and refreshed. However, if you import and publish many data classes (for example, more than 50 data classes) by using a CSV file, the Profile page is not updated and refreshed for these imported data classes.

Workaround: If you must import and publish many data classes and you notice that the Profile page is not updated and refreshed, wait a moment, then edit just one data class, for example by adding a blank to its description, and publish it. The Profile page is then updated and refreshed to show all data classes that you published, including the imported ones.

Applies to: 3.5.0 and later

You can’t publish data classes with the same name in different categories

The names of data classes must be unique. Don’t create data classes with the same name in different categories.

Note: Use globally unique names for data classes if you want to process data quality or data discovery assets.

Applies to: 3.5.0 and later

Clicking a data class with a special character in its name causes an exception

If you run an NGQS scan and then click a data class with an apostrophe (or other special characters), the data class details are not opened and an unexpected exception is produced.

Workaround: Change the name of the data class and remove the special character from the name.

Applies to: 3.5.0 and later

Child categories are listed as root categories when import is in progress

If users refresh or open the categories list while an import of categories is in progress, some child categories might be listed as root categories.

Workaround: Reload the categories page after the import completed successfully to see the correct hierarchy.

Applies to: 3.5.0 and later

Import of categories fails

Import of categories fails if the category hierarchy path, including the path separators “ » ”, is longer than 1,020 bytes.
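To check whether a hierarchy path is within the limit, count its bytes; the path below is a made-up example, and note that the » separator takes two bytes in UTF-8:

```shell
# Count the bytes (not characters) of a category hierarchy path
path='Top category » Child category » Grandchild category'
printf '%s' "$path" | wc -c
```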

Applies to: 3.5.0 and later

Assigning “All users” as a Data Steward in governance artifacts

Do not select All users in the Data Stewards list when you assign users to the steps in workflow configuration. Assigning All users as a Data Steward for a data protection rule causes issues when you open the rule.

Do not select All users in the Add stewards modal in all asset types as doing so also causes issues with the UI.

Applies to: 3.5.0 and later

Upgrading to Cloud Pak for Data 3.5 might cause permissions to be removed from custom roles

Due to permission changes, upgrading to Cloud Pak for Data 3.5 might cause permissions to be removed from custom roles.

Workaround: An administrator must log in to user management, review the permissions of any custom roles they defined, and adjust the permissions as needed.

Applies to: 3.5.0 and later

After discovery, a user is not able to drill down into any data quality details

Workaround: Restart the XMETA pod. You must wait until the restart is complete, then restart IS services.

If you are unable to exec or rsh to the XMETA pod:

  1. Run the following command:
    oc get pods | grep xmeta
    
  2. Then, run:
    oc delete pod <pod_name>
    

Applies to: 3.5.0 and later

After an upgrade, users with the Data Quality Analyst role cannot see governance artifacts

Workaround: The following two permissions need to be granted to the Data Quality Analyst role:

  1. Access governance artifacts; and
  2. Manage data protection rules.

Applies to: 3.5.0 and later

Governance artifacts: custom relationships

You might encounter these known issues and restrictions when you use governance artifacts custom relationships.

Only five values are visible

When a custom relationship in a governance artifact has more than five values, only five are visible.

Applies to: 4.0
Fixed in: 4.0.1

More entries than expected are added when a relationship is edited

When you edit a reverse custom relationship, more entries are added than expected.

Workaround: Edit the relationship in the “forward” direction and do not use the reverse relationship.

Applies to: 4.0
Fixed in: 4.0.1

Reverse custom relationships are not shown

Reverse custom relationships are not shown in the target governance artifact when the source and target of the relationship are of different artifact types.

Applies to: 4.0

Adding new entries might override existing relationship values

When you are editing custom relationships on a category or a governance rule, adding new entries might override existing values.

Workaround: Select both existing and new entries when you edit the relationship values.

Applies to: 4.0

Analytics projects

You might encounter these known issues and restrictions when you use analytics projects.

Data is not masked in some analytics project tools

When you add a connected data asset that contains masked columns from a catalog to a project, the columns remain masked when you view the data and when you refine the data in the Data Refinery tool. However, other tools in projects do not preserve masking when they access data through a connection. For example, when you load connected data in a Notebook, you access the data through a direct connection and bypass masking.

Workaround: To retain masking of connected data, create a new asset with Data Refinery:

  1. Open the connected data asset and click Refine. Data Refinery automatically includes the data masking steps in the Data Refinery flow that transforms the full data set into the new target asset.
  2. If necessary, adjust the target name and location.
  3. Click the Run button, and then click Save and Run. The new connected data asset is ready to use.
  4. Remove the original connected data asset from your project.

Applies to: 3.5.0 and later

Metadata import from Box fails for some Excel files

Importing a Microsoft Excel file from a Box data source fails if the Excel file contains one or more empty sheets.

Applies to: 4.0
Fixed in: 4.0.1

Governance artifact workflows

You might encounter these known issues and restrictions when you use governance workflows.

Subsequent tasks in a list cannot be completed until the first task is completed

In the task inbox, if a custom workflow request task is at the beginning of a list of assigned tasks, then subsequent tasks in the list, such as import, approve, or publish, cannot be completed until the first task is completed.

Applies to: 4.0

Task inbox still shows a task after an action is completed

The task inbox continues to show the task after the action on it (for example, approval) is completed and the action buttons remain visible. Clicking the action button again produces an error message. 

Applies to: 4.0

Comment not visible until a manual refresh

In the task inbox, a comment that is added to the activity panel is not visible until the panel is refreshed manually. The same issue applies to editing or deleting a comment.

Applies to: 4.0

Comment is not shown after being refreshed

When an asset is created that matches the configuration of one of the current workflow types, a user who was assigned to that configuration gets assigned a task that shows in the task inbox. When the assigned user tries to add a comment in the activity panel, the comment is not shown when the comment is reloaded.

Applies to: 4.0

An override warning during the template import process is not valid 

When you import a template file that you previously imported but then deleted, you get a warning that this template will override the existing template. If you ignore the warning and proceed, the template is imported successfully.

Applies to: 4.0

Task link from an email notification does not show the right task

The task link from an email notification does not show the right task in the task inbox. Only the task inbox itself is shown, rather than a specific task.

Applies to: 4.0

When you filter by artifact type on the “Workflow configuration” tab, an error is produced

When you filter by artifact type on the “Workflow configuration” tab, an error is produced that says “Something went wrong. Contact your system administrator.”

Applies to: 3.5.4 and later

Null pointer exception is produced when you click “Send for approval” for a “Multiple approvals” workflow template

A null pointer exception is produced when you are using a “Multiple approvals” workflow template with category roles and a data steward role and you click Send for approval.

This issue has no workaround. If you want to use a “Multiple approvals” workflow template, do not select the category role (Owner/Admin/Reviewer/Editor), the artifact role, or the data steward role in the configuration. Sending for approval works only for individual users and the workflow requester.

Applies to: 3.5.4 and later

When a task is displayed in the task inbox, the result is displayed instead of the artifact type and workflow status

When a task is displayed in the task inbox, the result is displayed instead of the artifact type and workflow status.

Workaround: The task can be selected and worked on as usual.

Applies to: 3.5.4 and later

Workflow types do not load, causing an error

When you open the workflow management page, the workflow types do not load, which causes an error.

Workaround: Refresh your browser page.

Applies to: 3.5.3 and later

After you complete all workflow tasks, the title and the buttons from the last completed task are still displayed

After you complete all workflow tasks, the title and the buttons from the last completed task are still displayed.

Applies to: 3.5.2 and later

Adding artifact types to an inactive default workflow

You can’t move artifact types to the default workflow by deleting another workflow while the default workflow is inactive. Instead, you must deactivate the other workflow and then manually activate the default workflow.

To move artifact types to the default workflow:

  1. Click Administer > Workflow configuration.
  2. Open an active workflow by clicking its name.
  3. Click Deactivate and then confirm by clicking Deactivate. The artifact types for the workflow are moved to the default workflow automatically.
  4. Open the “default workflow configuration” workflow.
  5. Click Activate.

Applies to: 3.5.0 and later

Limitation for draft workflows on Firefox

You can’t select any artifact types when you view workflow drafts in the Firefox web browser version 60. Use a different browser.

Applies to: 3.5.0 and later

Incorrect notifications for completed workflow tasks

If a workflow task has multiple assignees, and one person completes the task, then the other assignees see an incorrect notification. The notification states that the task couldn’t be loaded, instead of that the task is already completed by another assignee.

Applies to: 3.5.0 and later

Task details are displayed even after the task is completed

When you complete a workflow task, its details are still displayed. The issue occurs when there are fewer than 10 tasks in the list.

Workaround: Select the task from the list to refresh the details.

Applies to: 3.5.0 and later

Workflow details unavailable after upgrade

After an upgrade from Cloud Pak for Data 2.5 to 3.0, the details of the workflow configuration show “Unavailable” for the user who created or modified the workflow.

Applies to: 3.5.0 and later

If you enable notifications in your workflow configurations, you must also add at least one collaborator

When you configure a workflow, you can select tasks and enable notifications. If you enable notifications for a task, you also must add at least one collaborator, who can be the same as one of the assignees. Otherwise, the checkbox of the task you selected is cleared with the next refresh.

To enable notifications:

  1. Add the assignees in the Details section.
  2. Scroll down to the Notifications section. Then, select the required action and add at least one collaborator or assignee to be notified.

Applies to: 3.5.0 and later

Users or permissions in “Overview” cannot be added

You cannot add users or permissions on the Overview page.

Workaround:
Create a default workflow configuration, activate it, and go to Overview to add users.

Applies to: 3.5.0 and later

Tasks are not being generated correctly for workflows started pre-upgrade

For workflows that were started pre-upgrade, tasks are not being generated correctly.

Workaround: The legacy data needs to be patched by using a workflow script.

To deploy the patch workflow and to run it, two HTTP requests must be sent to the Workflow Service “Early adopter” API.

Insert the following values:

  • <host>: Target system hostname
  • <token>: Bearer token for a user with the authority to manage workflows
  • <path>: Path to the directory that contains the patch workflow file from the wkc-workflow-service repository at subprojects/app/src/main/resources/processes/ApplyLegacyWorkflowPatch.bpmn
curl --location --request POST 'https://<host>/v3/workflows/flowable-rest/service/repository/deployments' \
--header 'Content-Type: multipart/form-data' \
--header 'Authorization: Bearer <token>' \
--form 'files=@<path>/ApplyLegacyWorkflowPatch.bpmn'

curl --location --request POST 'https://<host>/v3/workflows/flowable-rest/service/runtime/process-instances' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <token>' \
--data-raw '{
    "processDefinitionKey": "applyLegacyWorkflowPatch"
}'

Applies to: 3.5.0 and later

Incorrect due date might be shown in task inbox

The due date of a workflow task is saved in UTC format, which can lead to a different due date being shown in the task inbox in some time zones.
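A minimal sketch of the effect, assuming GNU date and hypothetical values for the due date and time zone:

```shell
# A due date stored as midnight UTC on May 1 (hypothetical value)
due_utc="2024-05-01T00:00:00Z"

# Rendered in a time zone behind UTC, the same instant falls on April 30,
# so the task inbox can show a date that differs from the stored due date.
TZ="America/Los_Angeles" date -d "$due_utc" +%Y-%m-%d    # prints 2024-04-30
```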

Applies to: 4.0

Data curation

You might encounter these known issues and restrictions when you use advanced data curation tools.

Column analysis/auto discovery analysis generates “data out of range” error

A “data out of range” error is produced when you run a column analysis/auto discovery analysis.

Workaround:
Complete the following steps to support persisting values of more than 20 characters in the DATACLASSIFICATION column of the frequency distribution (FD) table.

  1. Log on to the Db2U pod:
    oc rsh c-db2oltp-iis-db2u-0 bash
    
  2. Connect to the Information Analyzer database where the FD tables are persisted:
    db2 connect to iadb
    
  3. Generate a script that contains an ALTER TABLE command for each of the FD tables that are in the Information Analyzer database:
    db2 -x "select trim('alter table ' || trim(tabschema) || '.' || trim(tabname) || ' alter column ' || colname || ' SET DATA TYPE VARGRAPHIC(100);') from syscat.columns where tabschema like 'IAUSER%' and tabname like '%FD' and colname='DATACLASSIFICATION'" > /tmp/MODIFY_FD_TABLES.sql
    
  4. Run the script that contains the ALTER TABLE command:
    db2 -tvf /tmp/MODIFY_FD_TABLES.sql
    

Applies to: 4.0.1

Cannot see data rule execution history details

You are not able to see the data rule execution history details in a data quality project that was created before migration.

Workaround:
Sync all the data quality projects in the XMETA repository with the ia-analysis pod:

oc exec -it $(oc get pod -l app=iis-services --no-headers -o custom-columns="Name:metadata.name") -- /opt/IBM/InformationServer/ASBServer/bin/IAAdmin.sh -user admin -password password -migrateXmetaMetadataToMicroservice -tableName dummy -columnName dummy -ruleName dummy -forceUpdate true

You must run this command as is, including the literal dummy values. How long the command runs depends on how many data quality projects you have; processing one data quality project takes about 3 - 5 seconds. When the command finishes, the output in the command shell looks similar to the following example:

<?xml version="1.0" encoding="UTF-8"?><iaapi:Warnings xmlns:iaapi="http://www.ibm.com/investigate/api/iaapi">
  <Summary>
    <Message>Migration terminated:
1 Connection
127 Workspaces
127 WorkspaceSettingss</Message>
  </Summary>
</iaapi:Warnings>

Applies to: 4.0.1

Error is produced when you add connection to quick scan

When you add a connection to quick scan discovery, an “Unknown error” message is produced.

Workaround:
Restart the redis pod:

oc get pods | grep redis
oc delete pod <redis pod>

Applies to: 4.0.1

Data classes cannot be applied

If the name of the data quality project that is selected for a quick scan job contains spaces or non-ASCII characters, the data classes that are defined in that data quality project cannot be applied. Instead, the default data classes are used.

Applies to: 4.0
Fixed in: 4.0.1

Personal connection assets results from profiling are viewable without credentials

Any user can view profiling results for assets from a personal connection without providing credentials. But updating the profile fails for the user who does not provide personal credentials.

Workaround:
Provide the personal credentials in the Asset tab of the Asset details page.

Applies to: 4.0
Fixed in: 4.0.1

Term assignment on the columns tab is cleared

In a data quality project, when you switch across columns in a data asset, the term assignment on the columns tab is cleared.

Workaround: Refresh the column details page.

Applies to: 4.0
Fixed in: 4.0.1

Connections that use Cloud Pak for Data credentials for authentication can’t be used in discovery jobs

When you create a discovery job, you cannot add a platform connection that is configured to use Cloud Pak for Data credentials for authentication.

Workaround: Modify the platform connection to use other supported authentication options or select a different connection.

Applies to: 4.0 and later

When the Solr pod stays offline for a long time, quick scan jobs are not restarted automatically

If the Solr pod goes down for a long time when a quick scan is running, the quick scan remains in the state of “Analyzing” in the UI and cannot be reset.

Workaround:

Start a new quick scan job to reanalyze the assets.

Applies to: 3.5.4 and later

Unable to resume quick scan jobs that are in a paused state

Scenario: You start multiple quick scan jobs and only one job is running at a time (odf-fast-analyzer pod replicas=1 by default). All the jobs in the queue are either manually cancelled or cancelled because of an iis-services pod restart. Now, if you try to resume the cancelled quick scan jobs, they are automatically restarted because of the restart mechanism for the iis-services pod restart scenario. A permission error is produced.

Workaround:

First, delete the cancelled quick scan jobs and initiate new quick scan jobs. Then, open a remote shell session to the iis-services container:

oc get pods | grep iis-services
oc rsh <iis services pod name>

Next, set the feature flag to skip the workplace permissions check:

/opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -s -k com.ibm.iis.odf.check.ialight.workspace.access -value false

Applies to: 3.5.4 and later

Publishing an asset fails if the iis-service pod is restarted

Publishing an asset fails if the iis-service pod is restarted. The review status of that asset remains in the “Loading” state.

Workaround:

Run the command d -r <quickscan job id> in ODFAdmin to reset all tables in the “Loading” state to “Loading error.” Then, rerun the publication manually.

Applies to: 3.5.4 and later

Project sometimes does not appear in the UI

After you create a project from the UI, sometimes the project does not appear in the UI later.

Workaround: Run the following command in the iis-services pod. Use the ID of the user who needs access to the workspace and the name of the workspace or project.

curl -k -u isadmin:$ISADMIN_PASSWORD -H 'Content-Type: application/json' -X POST -d '{"users":[{"id":"<userId>","role":"SorcererBusinessAnalyst,SorcererOperator"}]}' https://localhost:9446/ibm/iis/ia/api/configuration/project/<workspace-name>/addUser

Applies to: 3.5.1, 3.5.2, 3.5.3 and later

Not able to scroll all data assets in a data quality project in tile view

You might not be able to scroll through all data assets in a data quality project in the tile view under certain conditions. This issue is specific to browsers on Windows that are zoomed in or that have a low vertical screen resolution.

Workaround: Zoom out (Ctrl + ‘-‘) in the browser window.

Applies to: 3.5.3 - 4.0.0
Fixed in: 4.0.1

Data sets remain in loading state and do not publish

When you try to publish data sets from the quickscan discovery UI, the status of the data sets remains in the “Loading…” state. This issue occurs in particular after some pods, especially the Solr pod, are restarted during the publishing process.

Workaround: Complete the following steps to publish the data sets again.

  1. Run the following command:
    oc exec is-en-conductor-0 -- /opt/IBM/InformationServer/ASBNode/bin/ODFAdmin.sh d -r qs_123456789 
    

    where qs_123456789 is the ID of the discovery job. Now all data sets that were previously set to the “Loading” state are set to “Loading error.”

  2. Publish the data sets with “Loading error” again. You can find these data sets by using the UI to filter for all Asset type “Table” and Status “Loading error.”

Applies to: 3.5.3 and later

Updates to the database name in platform connections are not propagated to discovery connections

When you use a platform connection in discovery jobs, a copy of the connection is created. If you later change the database name in the platform-level connection, this change is not propagated to the discovery connection. Due to this mismatch, publishing assets from quick scan results fails.

Workaround: Make sure that, in platform connections used for discovery with quick scan, the database name is not updated.

Applies to: 3.5.0 and later

Quick scan fails on a Hive connection if the JDBC URL has an embedded space

Quick scan runs can fail if a space is embedded in the URL of a JDBC driver for a Hive connection.

Workaround: Correct the JDBC URL by removing any extra spaces that are inside of it. 
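For example, a sketch of stripping embedded spaces from a Hive JDBC URL (the URL and its parameters are hypothetical):

```shell
# A Hive JDBC URL with an embedded space after the semicolon (hypothetical)
url="jdbc:hive2://myhost:10000/default; transportMode=http"

# Remove all spaces so the URL is valid for the connection definition
fixed_url=$(printf '%s' "$url" | tr -d ' ')
echo "$fixed_url"    # prints jdbc:hive2://myhost:10000/default;transportMode=http
```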

Applies to: 3.5.0 and later

Quick scan asset preview encounters an error after publishing and is not visible

After you run a quick scan and publish the results, the asset might not be visible and you might get an error message that says a connection is required to view the asset.

This issue applies only to pre-3.5 quick scan, that is, cases where quick scan loads results into Information Governance Catalog. The issue does not apply to next-generation quick scan, which loads results into Watson Knowledge Catalog catalogs.

Workaround: The process of publishing the results eventually finishes even though the error message appears. You do not need to rerun any other process. When the connection metadata arrives, each data asset updates with the attachment information. Check again later to see whether it completed.

Applies to: 3.5.0 and later

Incorrect connections that are associated with connected data assets after automated discovery

When you add connected data assets through automated discovery, the associated connection assets might be incorrect. Connections that have the same database and hostnames are indistinguishable to automated discovery, despite different credentials and table names. For example, many Db2 databases on IBM Cloud have the same database and hostnames. An incorrect connection with different credentials might be assigned and then the data asset can’t be previewed or accessed.

Applies to: 3.5.0 and later

Unable to browse the connection in the discover page

After you complete a backup or restore, then scale up pods, you are unable to browse the connection in the discover page.

Workaround: The connection is accessible 15 minutes after you scale up the pods, even though the conductor status shows as running with 1/1 status. 

Applies to: 3.5.0 and later

Data discovery fails when started by a Data Steward

Users with the Data Steward role can start a data discovery, even though they don’t have sufficient permissions to run the discovery. As a result, the discovery fails. You must have the Data Quality Analyst role to run discovery.

Applies to: 3.5.0 and later

Data Stewards can’t create automation rules

Users with the Data Steward role can start creating an automation rule, even though they don’t have sufficient permissions to manage automation rules. As a result, the automation rule is not saved and an error is displayed. You must have the Data Quality Analyst role to create automation rules.

Applies to: 3.5.0 and later

Discovery on a Teradata database fails

When you run a data discovery on a Teradata database by using JDBC connector, and the CHARSET is set to UTF8, the analysis fails with an error.

Example error content: The connector detected character data truncation for the link column C3. The length of the value is 12 and the length of the column is 6.

Workaround: When a database has Unicode characters in the schemas or tables, set the CHARSET attribute to UTF16 when you create a data connection.
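As an illustration, a Teradata JDBC connection URL with the CHARSET attribute set might look like the following (host and database names are placeholders; parameter placement follows the Teradata JDBC URL convention):

```
jdbc:teradata://<host>/DATABASE=<database>,CHARSET=UTF16
```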

Applies to: 3.5.0 and later

Changes to platform-level connections aren’t propagated for discovery

After you add a platform-level connection to the data discovery area, any subsequent edit to or deletion of the platform-level connection is not propagated to the connection information in the data discovery area and is not effective.

Workaround: Delete the discovery connection manually. You must have the Access advanced governance permission to be able to complete the required steps:

  1. Go to Governance > Metadata import
  2. Go to the Repository Management tab.
  3. In the Navigation pane, select Browse assets > Data connections.
  4. Select the connection that you want to remove and click Delete.

Re-add updated platform-level connections to the data discovery area as appropriate.

Applies to: 3.5.0 and later

Approving tables in quick scan results fails

When a table name contains a special character, its results cannot be loaded to a project. When you click Approve assets, an error occurs.

Also, when you select more than one table to approve, and one of them fails to be loaded, the rest of the tables fail. The only way to approve the assets is to rediscover the quick scan job.

Applies to: 3.5.0 and later

Virtual tables are not supported for BigQuery connections

You cannot create SQL virtual tables for data assets that were added from Google BigQuery connections.

Applies to: 3.5.0 and later

Column analysis fails if system resources or the Java heap size are not sufficient

Column analysis might fail due to insufficient system resources or insufficient Java heap size. In this case, modify your workload management system policies as follows:

  1. Open the Information Server operations console by entering its URL in your browser: https://<server>/ibm/iis/ds/console/

  2. Go to Workload Management > System Policies. Check the following settings and adjust them if necessary:

    Job Count setting: If the Java Heap size is not sufficient, reduce the number to 5. The default setting is 20.

    Job Start setting: Reduce the maximum number of jobs that can start within the specified timeframe from 100 in 10 seconds (which is the default) to 1 in 5 seconds.

Applies to: 3.5.0 and later

Quick scan hangs when it is analyzing a Hive table that was defined incorrectly

When analyzing a schema that contains an incorrectly defined Hive table, quick scan starts looping when it tries to access the table. Make sure that the table definition for all Hive tables is correct.

Applies to: 3.5.0 and later

Automated discovery might fail when the data source contains a large amount of data

When the data source contains a large amount of data, automated discovery can fail. The error message indicates that the buffer file systems ran out of file space.

Workaround: To have the automated discovery complete successfully, use one of these workarounds:

  • Use data sampling to reduce the number of records that are being analyzed. For example, set the sample size to 10% of the total number of records.
  • Have an administrator increase the amount of scratch space for the engine that is running the analysis process. The administrator needs to use the Red Hat OpenShift cluster tools to increase the size of the volume where the scratch space is, typically /mnt/dedicated_vol/Engine in the is-en-conductor pod. Depending on the storage class that is used, the scratch space might be on a different volume.

    The size requirements for scratch space depend on the workload. As a rule, make sure to have enough scratch space to fit the largest data set that is processed. Then, multiply this amount by the number of similar analyses that you want to run concurrently. For more information about expanding volumes, see the instructions in the OpenShift Container Platform documentation.
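The sizing rule above reduces to simple arithmetic; this sketch uses hypothetical workload figures:

```shell
# Hypothetical workload figures
largest_dataset_gb=50     # largest data set that is processed
concurrent_analyses=3     # similar analyses that run at the same time

# Rule of thumb: largest data set size times the number of concurrent analyses
required_scratch_gb=$((largest_dataset_gb * concurrent_analyses))
echo "Provision at least ${required_scratch_gb} GB of scratch space"
# prints: Provision at least 150 GB of scratch space
```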

Applies to: 3.5.0 and later

Discovery jobs fail due to an issue with connecting to the Kafka service

Automated discovery and quick scan jobs fail if no connection to Kafka service can be established. The iis-services and odf-fast-analyzer deployment logs show error messages similar to the following examples:

org.apache.kafka.common.KafkaException: Failed create new KafkaAdminClient
at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:338)
at org.apache.kafka.clients.admin.AdminClient.create(AdminClient.java:52)
at com.ibm.iis.odf.core.messaging.kafka.KafkaQueueConsumer.createTopicIfNotExistsNew(KafkaQueueConsumer.java:184)
at com.ibm.iis.odf.core.messaging.kafka.KafkaQueueConsumer.createTopicIfNotExists(KafkaQueueConsumer.java:248)
at com.ibm.iis.odf.core.messaging.kafka.KafkaQueueConsumer.startConsumption(KafkaQueueConsumer.java:327)
at com.ibm.iis.odf.core.messaging.kafka.KafkaQueueConsumer.run(KafkaQueueConsumer.java:260)
at java.lang.Thread.run(Thread.java:811)

To resolve the issue, an administrator must restart Kafka manually by running the following command:

oc delete pod kafka-0

Applies to: 3.5.0 and later

Settings for discovery or analysis might be lost after a pod restart or upgrade

After a pod restart or an upgrade, settings might be lost or reverted to their default values, such as operating-system-level properties on the pod (for example, in the nproc file) or MaximumHeapSize in ASBNode/conf/proxy.xml. See Analysis or discovery jobs fail with an out-of-memory error for more information about these settings.

Workaround: Check your settings before you start upgrading. Most settings are retained, but some settings might be reverted to their default settings. Check your /etc/security/limits.conf on every compute node in the cluster and add or edit the required settings as follows:

  • The parameters from is-en-conductor-0 pod:

    /opt/IBM/InformationServer/Server/DSEngine/bin/dsadmin -listenv ANALYZERPROJECT | grep DEFAULT_TRANSPORT_BLOCK
    APT_DEFAULT_TRANSPORT_BLOCK_SIZE=3073896
    
    com.ibm.iis.odf.datastage.max.concurrent.requests=4 contained in odf.properties
    
  • The parameters from iis-services pod:

    /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -d -k com.ibm.iis.ia.max.columns.inDQAOutputTable
    com.ibm.iis.ia.max.columns.inDQAOutputTable=500
    
    /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -d -k com.ibm.iis.ia.server.jobs.postprocessing.timeout
    com.ibm.iis.ia.server.jobs.postprocessing.timeout=84600000
    
    /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -d -k com.ibm.iis.events.kafkaEventConsumer.timeout
    com.ibm.iis.events.kafkaEventConsumer.timeout=10000
    
    /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -d -key com.ibm.iis.ia.jdbc.connector.heapSize
    com.ibm.iis.ia.jdbc.connector.heapSize=2048
    
    /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -d -key com.ibm.iis.ia.engine.javaStage.heapSize
    com.ibm.iis.ia.engine.javaStage.heapSize=1024
    
    /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -d -k com.ibm.iis.ia.server.useSingleFDTable
    com.ibm.iis.ia.server.useSingleFDTable=true
    
  • The limits (in limits.conf) are defined here:

    root soft nofile 65000
    root hard nofile 500000
    * soft nproc 65000
    * soft nofile 65000
    * hard nofile 500000
    dsadm soft nproc 65000
    dsadm soft nofile 65000
    hdfs soft nproc 65000
    root soft nproc 65000
    

Applies to: 3.5.0 and later

Discovery jobs are not resumed on restore

When you restore the system from a snapshot, discovery jobs that were in progress at the time the snapshot was taken are not automatically resumed. You must explicitly stop and restart them.

Applies to: 3.5.0 and later

Quick scan Approve assets or Publish assets operation fails

If the quick scan Approve assets or Publish assets operation fails for some assets, reattempt Approve or Publish for the failed assets.

Applies to: 3.5.0 and later

A connection that was made by using HDFS through the Execution Engine cannot be used with auto discovery

When you create a global connection, you can make an HDFS connection in two ways: by using HDFS through the Execution Engine or by using Apache HDFS. A connection that was made by using HDFS through the Execution Engine cannot be used with auto discovery.

Workaround: Use the Apache HDFS option to create an HDFS connection for use with auto discovery.

Applies to: 3.5.0 and later

After you publish several assets in quick scan only one of the duplicate assets is published, while other duplicates fail

When you publish several assets in quick scan (post-3.5 quick scan) that have the same name but come from different database schemas, only one of the duplicate assets (tables) is published, while the other duplicates fail.

Workaround: You can publish the assets on a “per schema” basis only. Publish the assets of schema A first, then schema B, and so on.

Applies to: 3.5.0 and later

Quick scan dashboard generates a long URL, which causes the dashboard load to fail

If a user who has access to many Information Analyzer projects opens the WKC quick scan dashboard, a long URL is generated that exceeds the request header size, which causes the loading of the dashboard to fail.

Workaround: An administrator can increase the default request header size for queries that are issued against Solr by running the following command on the OpenShift cluster. The command increases the header size from 8192 to 65535, which solves this problem in most cases. If the problem persists, increase the value further.

oc patch sts solr --patch '{"spec": {"template": {"spec": {"containers": [{"env": [{"name": "SOLR_OPTS","value": "-Dsolr.jetty.request.header.size=65535"}],"name":"solr"}]}}}}'

You can also avoid the use of a single “global Quick Scan superuser” account with access to >100 projects and use several business area-specific accounts instead.

Applies to: 3.5.0 and later

Quick scan jobs that were run with pre-3.5 quick scan are not shown

After an upgrade, quick scan jobs that were run with pre-3.5 quick scan are not shown. To make them visible again, follow these steps:

  1. Log in to Watson Knowledge Catalog as a user with admin privileges.
  2. Open another tab in the browser and go to https://<CloudPakforData_URL>/ibm/iis/odf/v1/discovery/fastanalyzer/monitor/reindex.

You should now be able to see all previously run quick scan jobs.

Applies to: 3.5.0 and later

On Firefox, no details are shown for assets that are affected by an automation rule

When you save an automation rule in the Firefox web browser, the details of the affected asset might not be displayed when you click Show details. In this case, the message No details to display is shown. As a workaround, use a different browser.

Applies to: 3.5.0 and later

When you add data files to the data quality project, the Tree view doesn’t show data files

Use the Search view to find data files to be added to a project.

Applies to: 3.5.0 and later

The Run analysis button cannot be found

When you work in the Relationships tab in your data quality project, you cannot see the Run analysis button if there are many data assets in your project.

Workaround: Make the font size smaller in your browser until the button becomes visible.

Applies to: 3.5.0 and later

Platform connections with encoded certificates cannot be used for discovery

SSL-enabled platform connections that use a base64 encoded certificate cannot be used in discovery jobs. Connections that use decoded certificates will work.
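Until this is fixed, a base64-encoded certificate can be decoded to plain PEM text before the connection is defined. This sketch uses a single PEM header line as a stand-in for a full certificate, and the file name is hypothetical:

```shell
# Create a sample encoded certificate file for illustration only
printf '%s\n' "-----BEGIN CERTIFICATE-----" | base64 > encoded-cert.b64

# Decode it back to plain PEM text and use that text in the connection
base64 -d encoded-cert.b64    # prints -----BEGIN CERTIFICATE-----
```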

Applies to: 4.0 and later

Data curation: automation rules

You might encounter these known issues and restrictions when you use automation rules.

Automation rules menu entry is not visible

The Automation rules menu entry is not visible for users with the Manage data quality permission.

Workaround: Assign the Manage asset discovery permission.

Applies to: 4.0
Fixed in: 4.0.1

Global search

You might encounter these known issues and restrictions when you use global search.

Path details are not shown in global search results

Governance artifacts and information assets don’t display path details in global search results.

Applies to: 3.5.0 and later

Loading assets to workspace fails with Solr query limit error (Sql Error: 414)

If you approve multiple assets at the same time, the loading of the workspace might fail with a Solr query error similar to the following example:

ibm.iis.discovery.fastanalyzer.impl.FastAnalyzerSolrWrapper E Solr query returned with errors 414

Workaround:

  1. Find the iis-services pod and open a remote shell session to it:
    oc get pods | grep services
    
    oc rsh <iis-services pod name>
    
  2. Disable the following feature flag:
    /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -s -k com.ibm.iis.odf.quickscan.publish.load.with.where -val false
    

Applies to: 3.5.0 and later

Quality score tab in list of data sets doesn’t show correct results after first click

Within your data quality project, when you click the Quality score tab in the list of data sets, the results are not sorted correctly and no error is produced.

Workaround: Click the Quality score tab a second time to sort the results correctly.

Applies to: 3.5.0 and later

Migration from InfoSphere Information Server

Associations between stewards and migrated assets are lost

Cloud Pak for Data users with the Data Steward role must be available as stewards in Information Governance Catalog before you migrate any governance artifacts or assets. Otherwise, any associations between stewards and the migrated artifacts or assets are lost.

Workaround: To preserve the associations between stewards and migrated artifacts or assets, add Cloud Pak for Data users with the Data Steward role manually as stewards in Information Governance Catalog before import and migration. To do so, access the Information Governance Catalog administration page for managing stewards directly by entering the URL in your browser:

https://<CloudPakforData_URL>/ibm/iis/igc/#manageStewards

Search for users by first or last name, by username, or by email address and add them one by one.

After the stewards are available in Information Governance Catalog, you can import the governance artifacts and other assets from the source InfoSphere Information Server system and continue with migration. 

Applies to: 4.0