Preparing for migration in InfoSphere Information Server (Linux, AIX)

Before you export any data from InfoSphere Information Server to Cloud Pak for Data, complete a set of setup tasks.

Before you begin

To complete these tasks, you must be logged in to the InfoSphere Information Server node as root.

Also, make sure that the prerequisites listed in Prerequisites to migrating data are met.

Tasks to complete on the InfoSphere Information Server system before migrating

Setting environment variables

Complete the following steps:

  1. Log in to the InfoSphere Information Server node as root.
  2. Open a bash shell:
    bash
  3. Set the following environment variables.
    IIS_INSTALL_PATH=<IIS installation path>
    IIS_HOST=<IIS host>
    IIS_PORT=<IIS port>
    IIS_USERNAME=<IIS username>
    IIS_PASSWORD=<IIS password>
    TOOLKIT_PATH=<directory for storing the toolkit content; the directory must not be under the /root path>

    IIS_INSTALL_PATH example: If InfoSphere Information Server is installed in the default location, set the IIS_INSTALL_PATH variable to the value /opt/IBM/InformationServer.
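Before you continue, the variables can be sanity-checked. A minimal sketch in shell, using placeholder values (replace them with your real settings); the check that TOOLKIT_PATH is not under /root mirrors the requirement above:

```shell
# Placeholder values; substitute the real settings for your environment.
IIS_INSTALL_PATH=/opt/IBM/InformationServer
IIS_HOST=iis.example.com
IIS_PORT=9443
IIS_USERNAME=isadmin
IIS_PASSWORD=secret
TOOLKIT_PATH=/home/migration/toolkit

# Fail fast if any required variable is empty.
MISSING=""
for v in IIS_INSTALL_PATH IIS_HOST IIS_PORT IIS_USERNAME IIS_PASSWORD TOOLKIT_PATH; do
  eval "val=\${$v}"
  [ -n "$val" ] || MISSING="$MISSING $v"
done

# The toolkit directory must not be under the /root path.
case "$TOOLKIT_PATH" in
  /root/*|/root) TOOLKIT_OK=no ;;
  *)             TOOLKIT_OK=yes ;;
esac
```

If MISSING is non-empty or TOOLKIT_OK is "no", fix the variables before you proceed.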

Installing required tools

Download and install the required tools for your operating system as root user. Complete the steps that apply to your operating system.

Red Hat® Enterprise Linux

Install the jq utility.

  1. Change to the ${TOOLKIT_PATH} directory.
    cd ${TOOLKIT_PATH} 
  2. To install the utility, run the following commands:
    curl -LO https://github.com/jqlang/jq/releases/download/jq-1.7.1/jq-linux-i386
    chmod +x ./jq-linux-i386
    cp jq-linux-i386 jq
AIX

Install the wget, curl, jq, and dos2unix utilities. Then, run the dos2unix tool to convert the database.properties file into the required format.

  1. Change to the ${TOOLKIT_PATH} directory.
    cd ${TOOLKIT_PATH} 
  2. To install the utilities, run the following commands:
    dnf install wget -y
    dnf install curl -y
    dnf install jq -y
    dnf install dos2unix -y
    dos2unix ${IIS_INSTALL_PATH}/ASBServer/conf/database.properties
SUSE Linux and SUSE Linux on System z

Install the jq utility.

  1. Change to the ${TOOLKIT_PATH} directory.
    cd ${TOOLKIT_PATH} 
  2. To install the utility, run the following command:
    zypper install jq
Red Hat Linux on System z

Install the jq utility.

  1. Change to the ${TOOLKIT_PATH} directory.
    cd ${TOOLKIT_PATH} 
  2. To install the utility, run the following commands:
    curl -LO https://github.com/jqlang/jq/releases/download/jq-1.7/jq-linux-s390x
    chmod +x ./jq-linux-s390x
    cp jq-linux-s390x jq
    

Increasing the expiry time for the CSRF token

Increase the expiry time for the CSRF token. Run the commands that apply to your environment.

  1. Set the expiry time to 600 seconds by running the following command:
    ${IIS_INSTALL_PATH}/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.isf.security.CsrfTokenExpiryTime -value 600
  2. Confirm the setting by running the following command:
    ${IIS_INSTALL_PATH}/ASBServer/bin/iisAdmin.sh -d | grep com.ibm.iis.isf.security.CsrfTokenExpiryTime

Removing invalid users

Remove all invalid users from the user registry. Run the commands that apply to your environment.

  1. Get the list of invalid users.
    ${IIS_INSTALL_PATH}/ASBServer/bin/DirectorySync.sh -url https://${IIS_HOST}:${IIS_PORT} -user ${IIS_USERNAME} -password ${IIS_PASSWORD} -giu
    
  2. Delete all users that were returned in the previous step. Pass the usernames as a tilde-delimited list to the DirectorySync.sh script. If the entries to be deleted are long full DN names, enclose each username in double quotation marks (").
    ${IIS_INSTALL_PATH}/ASBServer/bin/DirectorySync.sh -user ${IIS_USERNAME} -password ${IIS_PASSWORD} -url https://${IIS_HOST}:${IIS_PORT} -delete_user_ids user1~user2~…userN
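The tilde-delimited argument in step 2 can be built from the output of step 1. A minimal sketch, assuming the invalid user names have been collected one per line; the names here are placeholders:

```shell
# Hypothetical list of invalid users, one per line (as returned by -giu).
INVALID_USERS="user1
user2
cn=old user,ou=people,dc=example,dc=com"

# Join the names with '~'; enclose long full DN names (which contain
# commas or spaces) in double quotation marks, as required by the script.
DELETE_LIST=""
while IFS= read -r u; do
  case "$u" in
    *,*|*" "*) u="\"$u\"" ;;
  esac
  if [ -z "$DELETE_LIST" ]; then DELETE_LIST="$u"; else DELETE_LIST="$DELETE_LIST~$u"; fi
done <<EOF
$INVALID_USERS
EOF
```

You can then pass the value of DELETE_LIST to DirectorySync.sh after the -delete_user_ids option.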

Assigning the Suite User role to users with inherited roles

Assign the Suite User role to all users that do not have any security roles assigned directly but inherit the roles from the groups that they are part of.

Run the commands that apply to your environment.

  1. Get the list of users without direct role assignments:
    ${IIS_INSTALL_PATH}/ASBServer/bin/UsersSync.sh -url https://${IIS_HOST}:${IIS_PORT} -user ${IIS_USERNAME} -password ${IIS_PASSWORD} -list USERS
  2. Assign the Suite User role to the users returned in the previous step:
    ${IIS_INSTALL_PATH}/ASBServer/bin/UsersSync.sh -url https://${IIS_HOST}:${IIS_PORT} -user ${IIS_USERNAME} -password ${IIS_PASSWORD} -list USERS -sync

Granting access to all data quality projects

Grant access to all data quality projects. Run the commands that apply to your environment.

${IIS_INSTALL_PATH}/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.ia.server.accessAllProjects -value true
${IIS_INSTALL_PATH}/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.ismigration -value true

Improving export performance

To improve the performance of the migration during the export, complete these steps as root.

  1. Create additional indexes in the metadata repository. Complete the following steps depending on where your metadata repository is hosted. These steps must be completed on the InfoSphere Information Server services tier:
    Metadata repository on Db2
    Run these xmetaAdmin commands:
    cd ${IIS_INSTALL_PATH}/ASBServer/bin
    ./xmetaAdmin.sh addIndex -model ASCLModel -class DataFileFolder importedVia_DataConnection ASC -dbfile ../conf/database.properties
    ./xmetaAdmin.sh addIndex -model ASCLModel -class DataConnection accesses_DataStore ASC -dbfile ../conf/database.properties
    ./xmetaAdmin.sh addIndex -model DataStageX -class DSDataConnection accesses_DataStore ASC -dbfile ../conf/database.properties
    ./xmetaAdmin.sh addIndex -model ASCLModel -class DataCollection of_PhysicalModel ASC -dbfile ../conf/database.properties
    ./xmetaAdmin.sh addIndex -model ASCLLogicalModel -class Relationship of_LogicalModel ASC -dbfile ../conf/database.properties
    ./xmetaAdmin.sh addIndex -model ASCLModel -class HostSystem name ASC -dbfile ../conf/database.properties
    ./xmetaAdmin.sh addIndex -model ASCLModel -class Connector hostedBy_HostSystem ASC -dbfile ../conf/database.properties
    ./xmetaAdmin.sh addIndex -model ASCLModel -class Connector connectionType ASC -dbfile ../conf/database.properties
    ./xmetaAdmin.sh addIndex -model ASCLModel -class DataConnection usedBy_Connector ASC -dbfile ../conf/database.properties
    ./xmetaAdmin.sh addIndex -model DataStageX -class DSDataConnection usedBy_Connector ASC -dbfile ../conf/database.properties
    Metadata repository on Oracle
    Connect to the metadata repository schema and run the following SQL statements:
    CREATE INDEX IDX2102100719410 ON investigateDtQltyDmnsn (OFDATAQUALITYCONFIGURATIONXMET ASC, XMETA_REPOS_OBJECT_ID_XMETA ASC, HAS_BENCHMARK_XMETA ASC, IGNORED_XMETA ASC, WEIGHT_XMETA ASC);
    CREATE INDEX IDX2102100926320 ON issMstrDtFldrfFrmDtFld (DATAFIELD_XMETA ASC);
    CREATE INDEX IDX2102100927200 ON investigateDatQltyRslt (FROM_EXECUTIONHISTORY_XMETA ASC, NBRECORDSTESTED_XMETA ASC, XMETA_REPOS_OBJECT_ID_XMETA ASC);
    CREATE INDEX IDX2102100927180 ON ASCLAnalysisQultyPrblm (FROM_DATAQUALITYRESULT_XMETA ASC, NBOCCURRENCES_XMETA ASC, XMETA_REPOS_OBJECT_ID_XMETA ASC);
    CREATE INDEX IDX2102100927560 ON investigateExectnHstry (OF_QUALITYCOMPONENT_XMETA ASC, STATUS_XMETA ASC, STARTTIME_XMETA ASC, XMETA_REPOS_OBJECT_ID_XMETA ASC, HAS_EXECUTIONRESULT_XMETA ASC);
    CREATE INDEX IDX2102100928040 ON investigatQltyPrblmTyp (CODE_XMETA ASC, DESCRIPTION_XMETA ASC, NAME_XMETA ASC);
    CREATE INDEX IDX2102100925570 ON investigateRuleCompnnt (OF_ANALYSISPROJECT_XMETA ASC, SHORTDESCRIPTION_XMETA ASC, NAME_XMETA ASC, XMETA_REPOS_OBJECT_ID_XMETA ASC);
    CREATE INDEX IDX2102100734290 ON ASCLAnalysisClassifctn (METHOD_XMETA ASC, STATE_XMETA DESC);
    CREATE INDEX IDX2102100734240 ON ASCLAnalysisClassifctn (STATE_XMETA ASC, METHOD_XMETA DESC);
    CREATE INDEX IDX2102100734490 ON investigtClmnnlyssMstr (PROJECTRID_XMETA ASC, XMETA_REPOS_OBJECT_ID_XMETA ASC, TABLEANALYSISMASTER_XMETA ASC);
    CREATE INDEX IDX2102100735000 ON ArfFrmrgntdFrmClssfctn (ORIGINATEDFROMCLASSIFICATINXMT ASC);
    CREATE INDEX IDX2102100735420 ON ASCLAnalysis_DataClass (NAME_XMETA ASC, XMETA_REPOS_OBJECT_ID_XMETA ASC);
    CREATE INDEX IDX2102100736020 ON ASCLAnalysisClassifctn (STATE_XMETA ASC, XMETA_REPOS_OBJECT_ID_XMETA ASC, VALUEFREQUENCY_XMETA ASC);
    CREATE INDEX IDX2102100735470 ON investigtClmnnlyssMstr (PROJECTRID_XMETA ASC, TABLEANALYSISMASTER_XMETA ASC, XMETA_REPOS_OBJECT_ID_XMETA ASC);
    CREATE INDEX IDX2102100738180 ON investigateExectnHstry ("OF_QUALITYCOMPONENT_XMETA" ASC, "STARTTIME_XMETA" DESC);
    CREATE INDEX IDX2102100736090 ON investigateRuleCompnnt ("OF_ANALYSISPROJECT_XMETA" ASC, "NAME_XMETA" ASC, "XMETA_REPOS_OBJECT_ID_XMETA" ASC);
    CREATE INDEX IDX2102100741130 ON investigateDatQltyRslt ("FROM_EXECUTIONHISTORY_XMETA" ASC, "XMETA_REPOS_OBJECT_ID_XMETA" ASC);
    CREATE INDEX IDX2102100744410 ON investgtClmnDtTypSmmry (COLUMNANALYSISRESULTS_XMETA ASC, RECORDPERCENT_XMETA ASC, RECORDCOUNT_XMETA ASC, DATATYPE_XMETA ASC);
    CREATE INDEX "IDX2102100653330" ON investgtClmnnlyssRslts("COLUMNANALYSISMASTER_XMETA" ASC, "RECORDCOUNT_XMETA" DESC);
    CREATE INDEX "IDX2102100653560" ON ASCLModel_DataFile("HOSTEDBY_HOSTSYSTEM_XMETA" ASC, "PATH_XMETA" ASC, "NAME_XMETA" ASC, "XMETA_REPOS_OBJECT_ID_XMETA" ASC);
    CREATE INDEX IDX2102100659300 ON investigtClmnnlyssMstr (TABLEANALYSISMASTER_XMETA ASC, COLUMNPROPERTIES_XMETA ASC, XMETA_REPOS_OBJECT_ID_XMETA ASC);
    CREATE INDEX "IDX2102100657460" ON investigateTblPKCnddts ("OF_TABLEANALYSISMASTER_XMETA" ASC, "XMETA_REPOS_OBJECT_ID_XMETA" ASC, "CANDIDATEFLAG_XMETA" ASC,"SELECTED_XMETA" ASC, "INFERRED_XMETA" ASC);
    CREATE INDEX "IDX2102100704290" ON investigatTblnlyssMstr ("ANALYSISMASTER_XMETA" ASC, "TABLEANALYSISSTATUS_XMETA" ASC, "XMETA_REPOS_OBJECT_ID_XMETA" ASC);
    CREATE INDEX "IDX2102100704280" ON iDtCllctnrfFrmDtCllctn("DATACOLLECTION_XMETA" ASC);
    CREATE INDEX "IDX2102100706510" ON investigateTblPKCnddts("OF_TABLEANALYSISMASTER_XMETA" ASC, "SELECTED_XMETA" ASC, "REJECTED_XMETA" ASC);
    CREATE INDEX "IDX2102100709420" ON ASCLModel_Annotation("NOTELABEL_XMETA" ASC, "OF_COMMONOBJECT_XMETA" DESC);
    CREATE INDEX "IDX2102100708160" ON ASCLRules_RuleVariable("FROM_RULE_XMETA" ASC, "XMETA_REPOS_OBJECT_ID_XMETA" ASC, "DEFAULT_RULEBINDING_XMETA" ASC);
    CREATE INDEX "IDX2102100708310" ON investigateRuleCompnnt ("OF_ANALYSISPROJECT_XMETA" ASC, "XMETA_REPOS_OBJECT_ID_XMETA" DESC);
    CREATE INDEX "IDX2102100713390" ON investigateAnalyssptns ("OF_ANALYSISSUITE_XMETA" ASC, "USEAUTOMATICDATAQLTYCNFGRTNXMT" DESC);
    CREATE INDEX "IDX2102100715420" ON investigateTblPKCnddts("SELECTED_XMETA" ASC, "COLUMNANALYSISMASTER_XMETA" ASC, "XMETA_REPOS_OBJECT_ID_XMETA" ASC);
    CREATE INDEX "IDX2102100716030" ON investigateKeyComponnt ("OF_TABLEPKCANDIDATE_XMETA" ASC, "USESCOLUMNANALYSISMASTERXMETA" DESC);
    CREATE INDEX "IDX2102100717450" ON investigateAnalyssptns ("UNIQUENESSTHRESHOLD_XMETA" ASC, "OF_ANALYSISSUITE_XMETA" DESC);
    CREATE INDEX "IDX2102100717410" ON investigateAnalyssptns("UNIQUENESSTHRESHOLD_XMETA" ASC, "OF_ANALYSISPROJECT_XMETA" DESC);
    CREATE INDEX "IDX2102100717390" ON investigateAnalyssptns("UNIQUENESSTHRESHOLD_XMETA" ASC, "OF_TABLEANALYSISMASTER_XMETA" DESC);
    CREATE INDEX "IDX2102100716190" ON investigateAnalyssptns("OF_COLUMNANALYSISMASTER_XMETA" ASC, "OF_TABLEANALYSISMASTER_XMETA" ASC, "OF_ANALYSISPROJECT_XMETA" ASC, "OF_ANALYSISMASTER_XMETA" ASC, "OF_ANALYSISSUITE_XMETA" ASC);
    CREATE INDEX "IDX2102100733120" ON ASCLRules_RuleBinding ("FROM_RULEEXECUTABLE_XMETA" ASC, "XMETA_REPOS_OBJECT_ID_XMETA" ASC, "BINDS_RULEVARIABLE_XMETA" ASC);
    CREATE INDEX "IDX2102100730040" ON investigatentgrDstrbtn ("VALUE_XMETA" ASC, "OFRULESETEXECUTIONRESULTXMETA" ASC, "ABSOLUTEFREQUENCY_XMETA" ASC, "FREQUENCY_XMETA" ASC) ;
    CREATE INDEX IDX2312060847540 ON investigatTblQltynlyss (OF_TABLEANALYSISMASTER_XMETA ASC, XMETA_REPOS_OBJECT_ID_XMETA DESC);
    CREATE INDEX IDX2312060848270 ON investigateExectnHstry (OF_QUALITYCOMPONENT_XMETA ASC, ENDTIME_XMETA ASC, STARTTIME_XMETA ASC, HAS_EXECUTIONRESULT_XMETA ASC);
    
  2. Complete the following steps depending on where your metadata repository is hosted. These steps must be completed on the InfoSphere Information Server metadata repository tier.
    Metadata repository on Db2
    1. Set the following environment variable:

      DB2_INSTANCE_NAME=<db2-instance-name>
    2. Change to the Db2 instance user, set environment variables, and connect to the metadata repository:
      su ${DB2_INSTANCE_NAME}
      . ~/sqllib/db2profile
      DB2_INSTANCE_NAME=<db2-instance-name>
      XMETA_SCHEMA_NAME=<xmeta-schema-name>
      db2 connect to xmeta
    3. Create the indexes:
      db2 "CREATE INDEX ${DB2_INSTANCE_NAME}.IDX2312060847540 ON ${XMETA_SCHEMA_NAME}.INVESTIGATE_TABLEQUALITYANALYSIS ( OF_TABLEANALYSISMASTER_XMETA ASC, XMETA_REPOS_OBJECT_ID_XMETA DESC) ALLOW REVERSE SCANS COLLECT SAMPLED DETAILED STATISTICS"
      db2 "CREATE INDEX ${DB2_INSTANCE_NAME}.IDX2312060848270 ON ${XMETA_SCHEMA_NAME}.INVESTIGATE_EXECUTIONHISTORY ( OF_QUALITYCOMPONENT_XMETA ASC, ENDTIME_XMETA ASC, STARTTIME_XMETA ASC, HAS_EXECUTIONRESULT_XMETA ASC) ALLOW REVERSE SCANS COLLECT SAMPLED DETAILED STATISTICS"
      db2 "CREATE UNIQUE INDEX ${DB2_INSTANCE_NAME}.IDX2312060848320 ON ${XMETA_SCHEMA_NAME}.INVESTIGATE_TABLEANALYSISMASTER ( XMETA_REPOS_OBJECT_ID_XMETA ASC ) INCLUDE ( TABLEANALYSISSTATUS_XMETA , ANALYSISMASTER_XMETA ) ALLOW REVERSE SCANS COLLECT SAMPLED DETAILED STATISTICS"
      db2 "CREATE INDEX ${DB2_INSTANCE_NAME}.IDX2312060848360 ON ${XMETA_SCHEMA_NAME}.INVESTIGATE_TABLEANALYSISMASTER_DATACOLLECTION_REFFROM_DATACOLLECTION (DATACOLLECTION_XMETA ASC) ALLOW REVERSE SCANS COLLECT SAMPLED DETAILED STATISTICS"
      db2 "CREATE UNIQUE INDEX ${DB2_INSTANCE_NAME}.IDX2312060848530 ON ${XMETA_SCHEMA_NAME}.INVESTIGATE_TABLEANALYSISSTATUS (XMETA_REPOS_OBJECT_ID_XMETA ASC) INCLUDE (DATAQUALITYANALYSISDATE_XMETA, DATAQUALITYANALYSISSTATUS_XMETA) ALLOW REVERSE SCANS COLLECT SAMPLED DETAILED STATISTICS"
      db2 "COMMIT"
    4. Update the DBMS statistics for all the tables in the metadata repository before you start the migration.
      db2 -x "SELECT 'runstats on table',substr(rtrim(tabschema)||'.'||rtrim(tabname),1,50),' and indexes all;' FROM SYSCAT.TABLES WHERE (type = 'T') AND (tabschema = '${XMETA_SCHEMA_NAME}')" > /tmp/runstats_xmeta.out
      db2 -tvf /tmp/runstats_xmeta.out
    5. Exit the Db2 instance owner account:
      exit
    Metadata repository on Oracle
    With the assistance of a DBA, update the DBMS statistics for your metadata repository after the indexes are created.
    1. Log in to the repository tier with root credentials.
    2. Connect to SQL*Plus as the Oracle system user:
      sqlplus xmeta-schema-name/password@oracle_sid
      If this command returns Command not found, complete the following steps before you continue:
      1. Locate the Oracle home directory and add it to the PATH variable.
        
        export ORACLE_HOME=<oracle home directory> 
        export PATH=$ORACLE_HOME/bin:$PATH
      2. Rerun the command for connecting to SQL*Plus.
    3. Define a substitution variable XMETA_SCHEMA_NAME. Replace <xmeta-schema-name> with the schema name of the metadata repository in your environment:
      DEFINE XMETA_SCHEMA_NAME=<xmeta-schema-name>
    4. Run the following command:
      EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => '&&XMETA_SCHEMA_NAME')
    Note: Update the DBMS statistics for all the tables in the metadata repository before you start the migration.
  3. Create extra indexes for data assets in data quality projects. Follow the instructions in the support document Indices for performance improvement in legacy migration from InfoSphere Information Server to IBM Knowledge Catalog.
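The Db2 runstats query above generates one RUNSTATS statement per table. The same transformation can be sketched in plain shell against a hypothetical table list, without a Db2 connection, to show the shape of the generated file:

```shell
# Placeholder schema name; the real value comes from XMETA_SCHEMA_NAME.
XMETA_SCHEMA_NAME=XMETA

# Hypothetical catalog rows, one table name per line, standing in for
# the SELECT against SYSCAT.TABLES.
TABLES="INVESTIGATE_EXECUTIONHISTORY
INVESTIGATE_TABLEANALYSISMASTER"

# Emit one runstats statement per table, matching the generated file's shape.
RUNSTATS=$(while IFS= read -r t; do
  printf 'runstats on table %s.%s and indexes all;\n' "$XMETA_SCHEMA_NAME" "$t"
done <<EOF
$TABLES
EOF
)
```

In the real flow, the db2 -x query writes these statements to /tmp/runstats_xmeta.out, which db2 -tvf then executes.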

Installing the migration toolkit

Install the migration toolkit.

  1. Download the migration toolkit for InfoSphere Information Server to the ${TOOLKIT_PATH} directory. Follow the instructions on this support page:

    Migration from IBM InfoSphere Server to IBM Knowledge Catalog: Applying patches and toolkit to the IBM InfoSphere Server 11.7.1.x installation (Part 1 of 2)

    This document is updated when a new version of the migration toolkit is released and also contains information about any prerequisite patches that you might need to install.

  2. Set the toolkit version and change to the ${TOOLKIT_PATH} directory.
    TOOLKIT_VERSION=<toolkit version>
    cd $TOOLKIT_PATH
  3. Extract the downloaded file to the ${TOOLKIT_PATH} directory.

    On Linux, run the following command:

    tar -zxvf iis-migration-toolkit-${TOOLKIT_VERSION}.tar.gz -C ${TOOLKIT_PATH}

    On AIX, run the following command:

    gunzip -c iis-migration-toolkit-${TOOLKIT_VERSION}.tar.gz | tar -xvf -

Running the initialization script

Run the init_migration_iis.sh script. The script is downloaded as part of the migration toolkit and is located in the ${TOOLKIT_PATH}/migration/iis directory.

  1. Run the script as root user:
    ${TOOLKIT_PATH}/migration/iis/init_migration_iis.sh "$IIS_INSTALL_PATH"
  2. Give the wkc user write and execute permission to the ${TOOLKIT_PATH} directory. Follow the instructions for your operating system:
    Red Hat Enterprise Linux

    Run the following command:

    setfacl -m u:wkc:rwx ${TOOLKIT_PATH}
    AIX
    Complete these steps:
    1. Set the editor for editing the access control information.
      export EDITOR=/usr/bin/vi
    2. Edit the access control information for the ${TOOLKIT_PATH} directory.
      acledit ${TOOLKIT_PATH}
      Add the following entry and save the information:
      extended permissions
          enabled
          permit rwx u:wkc
    3. Edit the access control information for the /tmp directory.
      acledit /tmp
      Add the following entry and save the information:
      extended permissions
          enabled
          permit rwx u:wkc
    SUSE Linux
    Run the following commands:
    zypper install acl
    setfacl -m u:wkc:rwx ${TOOLKIT_PATH}
    setfacl -m u:wkc:rwx /tmp
  3. Set the path to the export data directory and grant the wkc user write permission to that directory:
    1. Set the EXPORT_DATA_DIR environment variable:
      EXPORT_DATA_DIR=<path to the export data directory>
    2. Give the wkc user write and execute permission to the ${EXPORT_DATA_DIR} directory. Follow the instructions for your operating system:
      Red Hat Enterprise Linux

      Run the following command:

      setfacl -m u:wkc:rwx ${EXPORT_DATA_DIR}
      AIX
      Complete these steps:
      1. Set the editor for editing the access control information.
        export EDITOR=/usr/bin/vi
      2. Edit the access control information for the ${EXPORT_DATA_DIR} directory.
        acledit ${EXPORT_DATA_DIR}
        Add the following entry and save the information:
        extended permissions
            enabled
            permit rwx u:wkc
      SUSE Linux
      Run the following commands:
      setfacl -m u:wkc:rwx ${EXPORT_DATA_DIR}

Checking data integrity

To check the integrity of the data in InfoSphere Information Server, run the ISALite tool as root user.

Important: If InfoSphere Information Server is not installed in the default path, you must update the ${TOOLKIT_PATH}/migration/iis_scripts/isalite_adt_mr_response_11714.txt response file (for InfoSphere Information Server 11.7.1.4) or the ${TOOLKIT_PATH}/migration/iis_scripts/isalite_adt_mr_response.txt response file (for InfoSphere Information Server 11.7.1.5) before you run the ISALite tool. Make sure that fieldTask.IS.root is set to the correct installation path.
Run the following command:
${TOOLKIT_PATH}/migration/iis_scripts/run_IIS_ISALite.sh ${IIS_INSTALL_PATH}

The tool takes some time to process the data and generate a report. The report is stored in the ISA_XMetHC_localhost_EngServ_${timestamp} subdirectory of the current directory.

Open the XMETAHealthChecker.html report file in a browser, and review the results and instructions that it contains. Verify that no errors occurred and that all the probes show the status SUCCESS.
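Reviewing the report can be partly automated by searching for probe statuses other than SUCCESS. A minimal sketch; the sample report lines are invented, and the real report's layout may differ:

```shell
# Invented sample of probe result lines as they might appear in the report.
REPORT=$(mktemp)
cat > "$REPORT" <<'EOF'
Probe: OrphanCheck        Status: SUCCESS
Probe: ReferenceIntegrity Status: SUCCESS
EOF

# Count all probe status lines, and those whose status is not SUCCESS.
TOTAL=$(grep -c 'Status:' "$REPORT")
NOT_OK=$(grep 'Status:' "$REPORT" | grep -cv 'SUCCESS' || true)
rm -f "$REPORT"
```

A NOT_OK count of zero suggests all probes passed, but still open the HTML report and review the detailed instructions it contains.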

Increasing the timeout value for the LTPA token

To ensure that the session can refresh without any issues, increase the timeout value for the LTPA token.

For WebSphere Application Server Liberty
Complete these steps:
  1. Edit the ${IIS_INSTALL_PATH}/wlp/usr/servers/iis/server.xml file.
  2. Find the <ltpa expiration="795m"/> entry and increase the expiration value, for example to 1440m (24 hours) or 2880m (48 hours).
  3. Restart the application server:
    1. Stop the application server:
      ${IIS_INSTALL_PATH}/ASBServer/bin/MetadataServer.sh stop
    2. Start the application server:
      ${IIS_INSTALL_PATH}/ASBServer/bin/MetadataServer.sh run
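For Liberty, the edit in step 2 can be scripted with GNU sed. A minimal sketch that runs against an invented sample server.xml rather than the live ${IIS_INSTALL_PATH}/wlp/usr/servers/iis/server.xml file:

```shell
# Invented minimal server.xml standing in for the real Liberty config.
SERVER_XML=$(mktemp)
cat > "$SERVER_XML" <<'EOF'
<server>
    <ltpa expiration="795m"/>
</server>
EOF

# Raise the LTPA expiration to 24 hours (1440 minutes).
sed -i 's|<ltpa expiration="[0-9]*m"/>|<ltpa expiration="1440m"/>|' "$SERVER_XML"

# Read back the value to confirm the change.
NEW_VALUE=$(sed -n 's|.*expiration="\([0-9]*m\)".*|\1|p' "$SERVER_XML")
rm -f "$SERVER_XML"
```

Back up the real server.xml before editing it, and restart the application server afterward as described in step 3.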
For WebSphere Application Server Network Deployment
Complete these steps:
  1. Log in to the administrative console of the WebSphere application server.
  2. Go to Security > Global Security > LTPA > LTPA timeout.
  3. Increase the timeout value, for example to 1440 minutes (24 hours) or 2880 minutes (48 hours).
  4. Click Apply, OK, and Save.
  5. Restart the application server. Complete the steps that apply to your installation type.
    Stand-alone installation
    1. Stop the application server:
      ${IIS_INSTALL_PATH}/ASBServer/bin/MetadataServer.sh stop
    2. Start the application server:
      ${IIS_INSTALL_PATH}/ASBServer/bin/MetadataServer.sh run
    Clustered installation
    1. Stop the cluster as described in WebSphere Application Server Network Deployment: Stopping clusters.
    2. Start the cluster as described in WebSphere Application Server Network Deployment: Starting clusters.

Installing required software packages

Download and install the IBM Semeru Runtimes package. Follow the instructions for your operating system.

Red Hat Enterprise Linux

Complete these steps:

  1. Change to the wkc user and open a bash shell:
    su wkc
    bash
  2. Change to the directory where the toolkit content is stored:
    TOOLKIT_PATH=<toolkit_path>
    cd $TOOLKIT_PATH
  3. Download and install the IBM Semeru Runtimes OpenJDK 17 (jdk-17.0.9):
    curl -LO https://github.com/ibmruntimes/semeru17-binaries/releases/download/jdk-17.0.9%2B9_openj9-0.41.0/ibm-semeru-open-jdk_x64_linux_17.0.9_9_openj9-0.41.0.tar.gz
    tar -zxvf ibm-semeru-open-jdk_x64_linux_17.0.9_9_openj9-0.41.0.tar.gz
  4. Set the PATH variable to point to the JDK 17 installation from the previous step.
    export PATH=${TOOLKIT_PATH}/jdk-17.0.9+9/bin:${TOOLKIT_PATH}:$PATH
AIX

Complete these steps:

  1. Change to the wkc user:
    su wkc
  2. Change to the directory where the toolkit content is stored:
    TOOLKIT_PATH=<toolkit_path>
    cd $TOOLKIT_PATH
  3. Download and install the IBM Semeru Runtimes OpenJDK 17.
    curl -LO https://github.com/ibmruntimes/semeru17-binaries/releases/download/jdk-17.0.9%2B9_openj9-0.41.0/ibm-semeru-open-jdk_ppc64_aix_17.0.9_9_openj9-0.41.0.tar.gz
    gunzip -c ibm-semeru-open-jdk_ppc64_aix_17.0.9_9_openj9-0.41.0.tar.gz | tar -xvf -
  4. Set the PATH variable to point to the JDK 17 installation from the previous step.
    export PATH=${TOOLKIT_PATH}/jdk-17.0.9+9/bin:${TOOLKIT_PATH}:$PATH
SUSE Linux

Complete these steps:

  1. Change to the wkc user:
    su wkc
  2. Change to the directory where the toolkit content is stored:
    TOOLKIT_PATH=<toolkit_path>
    cd $TOOLKIT_PATH
  3. Download and install the IBM Semeru Runtimes OpenJDK 17.
    curl -LO https://github.com/ibmruntimes/semeru17-binaries/releases/download/jdk-17.0.9%2B9_openj9-0.41.0/ibm-semeru-open-jdk_x64_linux_17.0.9_9_openj9-0.41.0.tar.gz
    tar -zxvf ibm-semeru-open-jdk_x64_linux_17.0.9_9_openj9-0.41.0.tar.gz
  4. Set the PATH variable to point to the JDK 17 installation from the previous step.
    export PATH=${TOOLKIT_PATH}/jdk-17.0.9+9/bin:${TOOLKIT_PATH}:$PATH
Red Hat Linux on System z and SUSE Linux on System z

Complete these steps:

  1. Change to the wkc user:
    su wkc
  2. Change to the directory where the toolkit content is stored:
    TOOLKIT_PATH=<toolkit_path>
    cd $TOOLKIT_PATH
  3. Download and install the IBM Semeru Runtimes OpenJDK 17.
    curl -LO https://github.com/ibmruntimes/semeru17-binaries/releases/download/jdk-17.0.9%2B9_openj9-0.41.0/ibm-semeru-open-jdk_s390x_linux_17.0.9_9_openj9-0.41.0.tar.gz
    tar -zxvf ibm-semeru-open-jdk_s390x_linux_17.0.9_9_openj9-0.41.0.tar.gz
  4. Set the PATH variable to point to the JDK 17 installation from the previous step.
    export PATH=${TOOLKIT_PATH}/jdk-17.0.9+9/bin:${TOOLKIT_PATH}:$PATH
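The PATH update in step 4 can be verified by checking which java the shell now resolves. A minimal sketch that uses a stub java script in a temporary directory in place of a real Semeru JDK:

```shell
# Stand-in for ${TOOLKIT_PATH}/jdk-17.0.9+9/bin with a stub java executable.
FAKE_JDK_BIN=$(mktemp -d)
cat > "$FAKE_JDK_BIN/java" <<'EOF'
#!/bin/sh
echo "openjdk 17 (stub)"
EOF
chmod +x "$FAKE_JDK_BIN/java"

# Prepend the JDK bin directory to PATH, as in step 4.
export PATH="$FAKE_JDK_BIN:$PATH"

# The shell should now resolve java from the new directory.
RESOLVED=$(command -v java)
VERSION=$(java 2>&1)
rm -rf "$FAKE_JDK_BIN"
```

With a real JDK, run java -version after the export and confirm it reports an OpenJDK 17 runtime.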

Creating a db2dsdriver.cfg configuration file

  1. On the InfoSphere Information Server services tier, run the following command to list the Db2 connections:
    ${IIS_INSTALL_PATH}/ASBServer/bin/xmetaAdmin.sh query -expr "select dc.name as connection_name, dc.username as user_name, dc.connectionString as database_name from connector in Connector, dc in connector->uses_DataConnection where connector.name='DB2Connector'" -dbfile ${IIS_INSTALL_PATH}/ASBServer/conf/database.properties http:///5.3/ASCLModel.ecore
    1. Review the output and verify which Db2 connections are valid and required for migration.
    2. Proceed with the remaining steps only if Db2 connections must be migrated.
  2. Log in to the engine tier as the Db2 instance user.
  3. Verify that the Db2 client is installed on the engine tier.
  4. Create a db2dsdriver.cfg configuration file for the Db2 database on the InfoSphere Information Server engine tier host and make the configuration file available to the ASBNode agent and the Connector Access Service (CAS).
    1. Set the following environment variables:
      DB2_INSTANCE_NAME=<db2-instance-name>
      OUTPUT_FOLDER=<output folder>
    2. Create and populate the db2dsdriver.cfg configuration file by running the following command:
      db2dsdcfgfill -i ${DB2_INSTANCE_NAME} -o ${OUTPUT_FOLDER}
    3. Make sure that the group and other users have read permission on the generated db2dsdriver.cfg file. Run the following command:
      chmod 644 ${OUTPUT_FOLDER}/db2dsdriver.cfg
    4. Check the content of the generated db2dsdriver.cfg file. If you find any local database entries with the setting host="LOCALHOST" and port="0", replace LOCALHOST with the correct hostname and update the port entry with the correct Db2 port number. Save your changes.

      For some Db2 versions, the db2dsdcfgfill command might not create the db2dsdriver.cfg configuration file in the specified folder. If the file is missing, check your Db2 client version and upgrade to version 11.5.7.0 if necessary. For information about upgrading the Db2 client, see Upgrading your IBM Db2 client instance.

    5. Make the db2dsdriver.cfg configuration file available to the ASBNode agent and to CAS. As root user, complete these steps:
      1. Set the following environment variables.
        IIS_INSTALL_PATH=<IIS installation path>
        DB2_INSTANCE_NAME=<db2-instance-name>
        OUTPUT_FOLDER=<output folder>
      2. Add the following environment variable to the ${IIS_INSTALL_PATH}/ASBNode/bin/NodeAgents_env_DS.sh file:
        export CC_DB2_CONNECTION_MIGRATION_DB2DSDRIVER_CFG_${DB2_INSTANCE_NAME}=${OUTPUT_FOLDER}/db2dsdriver.cfg
      3. Restart the ASBNode agent by running the following commands. You must have read permission on the db2dsdriver.cfg configuration file.
        ${IIS_INSTALL_PATH}/ASBNode/bin/NodeAgents.sh stop
        ${IIS_INSTALL_PATH}/ASBNode/bin/NodeAgents.sh start

If you have multiple Db2 instances, complete these steps for each instance.
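The LOCALHOST fix-up described in step 4 can also be scripted with GNU sed. A minimal sketch against an invented sample db2dsdriver.cfg; the hostname and port are placeholders:

```shell
# Invented sample entry as db2dsdcfgfill might emit it for a local database.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
<configuration>
  <dsncollection>
    <dsn alias="XMETA" name="XMETA" host="LOCALHOST" port="0"/>
  </dsncollection>
</configuration>
EOF

DB2_HOST=db2.example.com   # placeholder hostname
DB2_PORT=50000             # placeholder Db2 port

# Replace the LOCALHOST/0 markers with the real hostname and port.
sed -i -e "s/host=\"LOCALHOST\"/host=\"${DB2_HOST}\"/" \
       -e "s/port=\"0\"/port=\"${DB2_PORT}\"/" "$CFG"
FIXED_LINE=$(grep 'host=' "$CFG")
rm -f "$CFG"
```

Review the whole file after the substitution; only entries that actually point at the local database should be rewritten.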

Determining the scope of export

Evaluate which data you want to migrate and remove all unnecessary data to avoid cluttering the new deployment.

What to do next

Complete the setup tasks for Cloud Pak for Data in Preparing for migration in IBM Cloud Pak for Data.