When you configure an IBM Content Analytics DB2 crawler to crawl a remote server, some databases might not be discovered if more than one DB2 copy (DB2COPY) is installed on the same DB2 Enterprise Server Edition Version 9.1, 9.5, or 9.7 server.
Resolving The Problem
Use one of the following methods to create a crawler for crawling DB2 databases in a multiple DB2COPY environment:
- Use the JDBC database crawler instead of the DB2 crawler. Specify the database URL (DBURL) for a database that is not discovered.
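The database URL for the JDBC crawler follows the IBM DB2 Type 4 JDBC driver format, jdbc:db2://host:port/database. As a minimal sketch, the URL for the example used in this technote (host example.ibm.com, port 50001, database SAMPLE1; substitute your own values) can be assembled like this:

```python
# Sketch: build the JDBC database URL (DBURL) for a database that the
# DB2 crawler did not discover. Host, port, and database name are the
# example values from this technote, not defaults.
host = "example.ibm.com"   # remote DB2 server
port = 50001               # port that DB2COPY1 uses for remote access
database = "SAMPLE1"       # database that was not discovered

# IBM DB2 Type 4 JDBC driver URL format: jdbc:db2://host:port/database
dburl = f"jdbc:db2://{host}:{port}/{database}"
print(dburl)  # → jdbc:db2://example.ibm.com:50001/SAMPLE1
```

Enter the resulting string as the database URL when you configure the JDBC database crawler.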
- Catalog a database that is not discovered on the crawler server so that the database can be accessed as if it were a local database.
For example, assume that the SAMPLE1 database on DB2COPY1 at example.ibm.com is not discovered by the DB2 crawler, and that DB2COPY1 uses port 50001 for remote access.
- Open a DB2 command line processor (CLP) session on the crawler server.
- Enter the following command to catalog the DB2COPY1 server:
CATALOG TCPIP NODE example REMOTE example.ibm.com SERVER 50001
- Enter the following command to catalog the SAMPLE1 database:
CATALOG DATABASE SAMPLE1 AT NODE example
- Log in to the administration console and create a DB2 crawler. Select Local or cataloged databases on the DB2 Database Type page, and then discover the database.
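The cataloging steps above can also be run from an operating-system shell on the crawler server, assuming the DB2 client's db2 command is on the PATH. The node name "example" and the user name "db2user" are placeholders from this technote's scenario:

```shell
# Catalog the DB2COPY1 server, then the SAMPLE1 database on it
db2 "CATALOG TCPIP NODE example REMOTE example.ibm.com SERVER 50001"
db2 "CATALOG DATABASE SAMPLE1 AT NODE example"
db2 "TERMINATE"   # refresh the directory cache so the new entries are visible

# Optional check before creating the crawler: connect to the database
db2 "CONNECT TO SAMPLE1 USER db2user"   # prompts for the password
```

If the CONNECT statement succeeds, the database is cataloged correctly and should appear when you discover local or cataloged databases in the administration console.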
23 June 2018