You might encounter these known issues when you migrate data from InfoSphere Information Server.
Migration process
- Progress of import isn't continuously updated in job status
- Applies to: 5.1
- The job status for a running import job doesn't change continuously but in increments because
the status is updated only after each individual import step. So the status might move from 0% to
9%, then to 15%, and so on until the import is complete.
- Collecting logs during the migration cleanup might fail
- Applies to: 5.1
- If you try to collect logs during the migration cleanup but before the migration is complete,
the log collection fails.
- Workaround:
If you want to collect logs during the migration cleanup while the migration is still
ongoing, manually delete the ug-plan-config-map ConfigMap.
- Removing assets from the catalog trash bin
- Applies to: 5.1
- Before you rehome assets from the default catalog to the IBM Cloud Pak for Data system, you must remove any assets from the
catalog trash bin. Otherwise, the rehoming job might get stuck.
- Note that assets that you remove from a catalog are placed in that catalog's trash bin.
- For more information on how to remove assets from the catalog trash bin, see Removing assets from the catalog
trash bin.
User migration
- Importing users or user groups might not be possible if the LDAP integration is provided by the
IBM Cloud Pak foundational services
Identity Management Service
- Applies to: 5.1
- If your IBM Cloud Pak for Data cluster is set up with
the LDAP integration provided by the IBM Cloud Pak foundational services
Identity Management Service, you might not be able to import
the users and user groups that you exported from IBM
InfoSphere Information
Server.
- Workaround:
Configure IBM Cloud Pak for Data
to use the embedded LDAP integration. For more information, see Configuring Cloud Pak
for Data to use the embedded LDAP integration.
Automated discovery jobs
- Metadata imports created during migration might not be functional
- Applies to: 5.1
- You might not be able to run a metadata import that is created by the migration process even though
the metadata import and its related job are available in the project.
- Workaround: Update the metadata import with the job ID manually by using a Watson Data API PATCH call, and then run the import.
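A rough sketch of composing that PATCH call follows. The endpoint path, the `job_id` field name, the host, and all IDs below are assumptions for illustration, not confirmed by this document; verify them against the Watson Data API reference before use.

```python
import json

# Hypothetical placeholders; replace with values from your environment.
host = "https://cpd.example.com"
project_id = "11111111-2222-3333-4444-555555555555"
metadata_import_id = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
job_id = "99999999-8888-7777-6666-000000000000"

# Assumed endpoint and payload shape for re-attaching the job ID
# to the metadata import asset.
url = f"{host}/v2/metadata_imports/{metadata_import_id}?project_id={project_id}"
patch_body = json.dumps({"job_id": job_id})

# Send with your preferred HTTP client, for example:
#   curl -X PATCH "$URL" -H "Authorization: Bearer $TOKEN" \
#        -H "Content-Type: application/json" -d "$BODY"
print(url)
print(patch_body)
```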
- Metadata enrichments created during migration might not be functional
- Applies to: 5.1
- Metadata enrichment assets that are created when you migrate automated discovery jobs retain the
IDs of the categories that were associated with the automated discovery job in the source
system. The actual category information is not available when the metadata enrichment asset is created
in the project, so these category IDs can't be mapped to categories in the target system.
Any matches are coincidental, and the metadata enrichment might or might not run successfully.
- Workaround: Edit the metadata enrichment asset, remove all selected categories, and then
select from the available categories as required.
Data quality projects
- Certain special characters in data quality project names are not preserved
- Applies to: 5.1
- During migration, certain special characters in the names of data quality projects are replaced
as follows:
| Special character | Replacement string |
| < | LT |
| > | GT |
| \ or \\ | BS |
| " | DQ |
| % | PC |
- Import of data quality projects with names longer than 100 characters fails
- Applies to: 5.1
- If the name of a data quality project exceeds 100 characters after the name is checked for
unsupported characters, a new name with the following pattern is created during migration:
79 characters from the name with unsupported special characters replaced + _ + a hash
code of 8 to 10 alphanumeric characters that is generated from the original project name hash code +
- + suffix
suffix is the value of the CONTAINER_SUFFIX parameter in the import_params.yaml file. If that
parameter is omitted, the default value migration is used.
Example:
Original data quality project name:
ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789abcdefghijklmnopqrstuvwxyz012345678890ABCDEFGHIJKLMNOPQRSTUVWXYZ123
Modified project name in IBM Knowledge Catalog:
ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789abcdefghijklmnopqrstuvwxyz012345678890ABCDEFG_793521066-migration
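The renaming scheme above can be sketched as follows. The character replacements and the 79-character/hash/suffix pattern come from this section; the hash algorithm itself is not documented, so CRC32 is used here purely as a stand-in and will not reproduce the documented hash value:

```python
import zlib

# Replacement strings from the table in this section.
REPLACEMENTS = {"<": "LT", ">": "GT", "\\": "BS", '"': "DQ", "%": "PC"}

def migrated_name(name: str, container_suffix: str = "migration") -> str:
    # Step 1: replace unsupported special characters.
    sanitized = "".join(REPLACEMENTS.get(ch, ch) for ch in name)
    # Step 2: names of 100 characters or less keep the sanitized form.
    if len(sanitized) <= 100:
        return sanitized
    # Step 3: 79 characters + "_" + hash code + "-" + suffix.
    # The real hash algorithm is undocumented; CRC32 is a stand-in.
    hash_code = str(zlib.crc32(name.encode("utf-8")))
    return f"{sanitized[:79]}_{hash_code}-{container_suffix}"
```

With the 103-character example name from this section, the result keeps the first 79 characters and appends an underscore, the hash code, and the -migration suffix.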
- For some data rules, output configuration settings are different after migration
- Applies to: 5.1
- Legacy data rules can be configured to write only statistics to the output table. This option is
not available for the new data quality rules. Therefore, the output configuration is set to include
all records when such data rules are migrated.
- Project settings
- Applies to: 5.1
- Not all data quality project settings have an equivalent setting in the new project. For more
information, see Migrated data quality projects.
Connections
- After migration, users need to manually update certain data source properties
- Applies to: 5.1
- This includes the following connections:
- For JDBC Amazon DynamoDB connections, update the jar path connection
property.
- For JDBC SAP HANA connections, update the jar path connection
property.
- During migration, JDBC Driver for Cassandra connections fail to import
- Applies to: 5.1
- Connections with JDBC drivers for Cassandra fail to import during the migration process because
the legacy migration export currently fails to retrieve the keyspace attribute,
which is a mandatory connection property in IBM Knowledge
Catalog.
- During migration, ODBC connections might not be migrated properly if source connections do not
work
- Applies to: 5.1
- The source connections need to be working for migration to run successfully.
- If the source ODBC connections do not work, ensure that the corresponding DSN entries in the
odbc.ini file are valid before the migration process starts.
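For reference, a valid DSN entry in the odbc.ini file generally follows the shape sketched below. All values here (driver path, host, port, database) are assumptions for illustration; match them to your actual driver and data source:

```ini
; Hypothetical DSN entry; verify the Driver path and connection values
; against your environment before the migration starts.
[MY_SOURCE_DSN]
Driver=/usr/lib64/libodbcdriver.so
Description=Source connection used by the migration
Hostname=db.example.com
Port=50000
Database=SAMPLEDB
```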
- For Hive connections, the JDBC URL might
be missing the database name
- Applies to: 5.1
- If the database name of a Hive connection
to be migrated is not included in the JDBC URL but only in the AssetsToImport parameter, the
database name is not included in the migrated connection information.
Migrated assets
- Import failure due to long
DataField names with custom relationships
- Applies to: 5.1
- If the combined length of a
DataField (table column or data file field) name
and the IIS host name exceeds 159 characters, the migration import will fail if a custom
relationship value is assigned to that DataField.
- How to identify problematic DataFields: Run the following command in
the source IIS environment to list
DataField names whose length exceeds 120
characters:
cd <ISHOME>/ASBServer/bin
./xmetaAdmin.sh query -expr "select rid(x), x.name, length(x.name) from x in DataField where length(x.name) > 120" \
http:///5.3/ASCLModel.ecore -dbfile ../conf/database.properties
For each DataField name identified, calculate the effective length: effective length =
DataField name length + IIS host name length.
If the effective length exceeds
159, check whether a custom relationship value is assigned to that DataField. If yes, the import will
fail.
- Before starting the export, edit the affected DataFields and remove the custom relationship values.
After migration, reassign the custom relationship values in IBM® Knowledge Catalog.
- Workaround: Do not assign custom relationship values to
DataFields whose
combined length (DataField name + IIS host name) is greater than 159
characters.
- If such DataFields already exist with custom relationship values assigned and an export has
already been performed, remove the assigned custom relationship values and rerun the export. If
you do not want to rerun the export, delete the problematic JSON file from the exported archive
with the help of IBM Support before starting the import.
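The effective-length check described above can be scripted over the names returned by the xmetaAdmin query. The host name below is a placeholder; substitute your IIS host name:

```python
IIS_HOST = "iis-host.example.com"  # placeholder: your IIS host name
LIMIT = 159

def needs_attention(datafield_name: str, host: str = IIS_HOST) -> bool:
    # effective length = DataField name length + IIS host name length
    return len(datafield_name) + len(host) > LIMIT

# Example: filter names returned by the xmetaAdmin query.
names = ["col_" + "x" * 150, "short_column"]
flagged = [n for n in names if needs_attention(n)]
print(flagged)
```

DataFields in the flagged list only cause an import failure if a custom relationship value is assigned to them, so check each flagged name for such values.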
- Certain migrated assets can't be profiled
- Applies to: 5.1
- Migrated data files can't be profiled.
- Export creates data lineage assets for
matchByName references of OpenIGC flow
assets although not supported by migration
- Applies to: 5.1
- OpenIGC flow assets can have
matchByName references to external assets that are
established through a fully qualified resource name (FQRN) structure. Migration of such assets is
not supported. When you run the inspection tool, such references are identified and listed in the
inspection report as cannot be migrated. However, the export tool still creates
such data lineage assets, but they do not reconcile properly after the import.
Migrated import areas
- General limitations and known issues for migrated import areas
- Applies to: 5.1
- When migrating import areas based on business intelligence bridges, you might encounter the
following issues and limitations:
- You must update the passwords of any migrated connection to a reporting tool (for importing
business intelligence assets).
- Business lineage import areas are migrated to metadata imports with the goal
Discovery.
- You cannot rerun a metadata import created from an import area because the referenced catalog ID
is invalid.
- You cannot duplicate a metadata import created from an import area from within the metadata
import asset.
- You can create new metadata import assets to import data from migrated connections.