
Release Notes: problems and solutions in Watson Content Analytics Version 3.5

Release Notes

Requirements

Fix Pack 4. Watson Content Analytics Version 3.5 Fix Pack 4 extends support to the following resources:
  • Java update:
    - IBM Java 7.0 SR9 FP50 (7.0.9.50)
  • High availability platform:
    - Windows Server 2012 Enterprise Edition Failover Cluster
  • Browser:
    - Microsoft Edge Version 20 and future fix packs

Fix Pack 3. Watson Content Analytics Version 3.5 Fix Pack 3 extends support to the following resources:
  • Java update:
    - IBM Java 7.0 SR9 FP10 (7.0.9.10)

Fix Pack 2. Watson Content Analytics Version 3.5 Fix Pack 2 extended support to the following data sources:
  • IBM Connections Version 5.0
  • IBM InfoSphere BigInsights 3.0 Fix Pack 2

Fix Pack 1. Watson Content Analytics Version 3.5 Fix Pack 1 extended support to the following resources:
  • Data sources:
    - Microsoft Exchange 2013
    - Microsoft SharePoint 2010/2013 with Active Directory Federation Services (AD FS) 2.0
    - IBM Web Content Manager 8.5
    - IBM WebSphere Portal 8.5
  • Integration:
    - IBM Content Navigator plug-in
  • Browser:
    - Internet Explorer 11

For the most current information about all system requirements and supported data sources, including requirements for IBM Watson Content Analytics Studio, see Watson Content Analytics system requirements.


Release Notes for Version 3.5.0.4


What's new in Fix Pack 4?

High availability platform

Beginning with Fix Pack 4, Windows Server 2012 Enterprise Edition Failover Cluster is supported as a high availability platform.



Installing Fix Pack 4

For information about installing Version 3.5.0.4, download the readme file from Fix Central. For more information about connecting to Fix Central to obtain the readme file or fix pack, see the download document.



Review the following known issues before you install or uninstall the fix pack.

No known installation or uninstallation issues exist for this fix pack.



Known issues and workarounds in Fix Pack 4

Unless otherwise noted, known issues in Versions 3.5, 3.5.0.1, 3.5.0.2, and 3.5.0.3 also apply in Version 3.5.0.4.



Limitations and procedures for the flag function in a distributed server environment
The flag function does not support searcher backup sessions. When you use flags, remove the searcher backup session from the $ES_NODE_ROOT/<col_id>.indexservice/partition.properties file.

If you used the flag function in a multiple searcher node configuration on a version that is older than this version, you must initialize all flag data for the collection (you do not need to clear the flag definitions on the Admin page).

To initialize all flag data:
      1. Enter "$ES_INSTALL_ROOT/cloudscape/ij" to start the ij console.
      2. Enter "connect 'jdbc:derby://localhost:1527/flag;create=false';" to connect the database session.
      3. Enter "delete from "<col_id>".DOCFLAGS;" to delete the collection's flag data.
      4. Enter "exit;" to exit the ij console.
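
For reference, a complete ij session might look like the following minimal sketch (the collection ID col_12345 is a hypothetical placeholder for your collection ID):

      $ES_INSTALL_ROOT/cloudscape/ij
      ij> connect 'jdbc:derby://localhost:1527/flag;create=false';
      ij> delete from "col_12345".DOCFLAGS;
      ij> exit;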




Release Notes for Version 3.5.0.3


What's new in Fix Pack 3?

Configuring support for DFS


Beginning with Fix Pack 3, you can configure your Content Analytics system to support secure search of Microsoft Windows Distributed File System (DFS) files.

Prerequisite setup
  • The DFS environment must consist of a Domain Controller server (Windows 2008 R2 server) and at least one member server (Windows 2008 R2).
  • All servers must be configured with DFS and a common share directory, such as \\example.server.com\<main_shares>.
  • All domain users and groups must have permission to access the DFS share directories.
  • The Content Analytics server must be a domain member in the same domain.

Set up and configure Watson Content Analytics 3.5 Fix Pack 3 by using WebSphere Liberty Profile
  1. Install Watson Content Analytics V3.5, if it is not already installed.
  2. Install Watson Content Analytics V3.5 Fix Pack 3.
  3. Configure LDAP for Watson Content Analytics. Configure the system to use Active Directory LDAP (the domain controller).
  4. Create a collection and enable security for the collection. Create crawlers to crawl DFS files by specifying the DFS UNC path name and the credentials for a domain user that has permission to access the shared files.
  5. To test secure search, search the secure collection by logging in to the user application as various domain users.



Installing Fix Pack 3

For information about installing Version 3.5.0.3, download the readme file from Fix Central. For more information about connecting to Fix Central to obtain the readme file or fix pack, see the download document.



Review the following known issues before you install or uninstall the fix pack.

Rebuild Content Analytics Studio resources
After installing Content Analytics Studio Version 3.5.0.3, you must upgrade the workspace and rebuild all resources. Otherwise, you might see an "invalid type priorities" error when you analyze documents.

Ensure that the First Steps program is not running
You must stop the First Steps program before you run the fix pack installation program. When the installation program runs, some files are moved. If Watson Content Analytics processes are running, some files might be locked, which prevents them from being moved. The installation program can detect some processes and stop them as necessary, but it cannot stop the First Steps program.

Installing or Upgrading in a WebSphere Application Server Network Deployment environment
If you install Watson Content Analytics as a non-root user and select WebSphere Application Server Network Deployment as the application server, the Watson Content Analytics applications might not be available because the WebSphere Application Server cluster, ICACluster, has no members. If this situation occurs, run the following commands to add members to ICACluster and make the applications available:

startccl.sh (UNIX) or startccl.exe (Windows)
esadmin configmanager start
esadmin configmanager WASNDaddClusterMember

Upgrading in a distributed server WebSphere Application Server environment
If you installed Watson Content Analytics in a distributed server configuration, and you use WebSphere Application Server base edition as the application server, the fix pack installation program does not upgrade the search and analytics applications automatically.

To resolve this problem:

1. If you are upgrading from Version 3.5.0.2, update the following files before you run the Version 3.5.0.3 installation program.
  • Edit the ES_INSTALL_ROOT/bin/installer.properties file and change the WAS_DEPLOYED property.
    • from:
      WAS_DEPLOYED=
      to:
      WAS_DEPLOYED=true
  • Edit the ES_INSTALL_ROOT/nodeinfo/es.cfg file and change the value of the WASDeployed property.
    • from:
      WASDeployed=false
      to:
      WASDeployed=true

2. Regardless of whether you are upgrading from Version 3.5 or Version 3.5.0.2, edit the files described in Step 1 and make the same changes after you run the Version 3.5.0.3 installation program.
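
On UNIX, a minimal sketch of both edits by using sed (assuming GNU sed and the default file locations; otherwise, edit the files in a text editor):

sed -i 's/^WAS_DEPLOYED=.*/WAS_DEPLOYED=true/' $ES_INSTALL_ROOT/bin/installer.properties
sed -i 's/^WASDeployed=.*/WASDeployed=true/' $ES_INSTALL_ROOT/nodeinfo/es.cfg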

Upgrading in a multiple server WebSphere Application Server Network Deployment environment
If you installed Watson Content Analytics in a distributed server configuration or in an all-on-one configuration that includes additional servers, and you use WebSphere Application Server Network Deployment Version 8.0, 8.5, or 8.5.5 as the application server, the fix pack installation program does not upgrade the search and analytics applications automatically. Additional problems affect your ability to export documents from search results.

To resolve these problems:

1. If you are upgrading from Version 3.5.0.2, update the following files before you run the Version 3.5.0.3 installation program.
  • Edit the ES_INSTALL_ROOT/bin/installer.properties file and change the WAS_DEPLOYED property.
    • from:
      WAS_DEPLOYED=
      to:
      WAS_DEPLOYED=true
  • Edit the ES_INSTALL_ROOT/nodeinfo/es.cfg file and change the value of the WASDeployed property.
    • from:
      WASDeployed=false
      to:
      WASDeployed=true

2. Install Version 3.5.0.3 on each server. First upgrade the master node and then upgrade the additional nodes.
3. Regardless of whether you are upgrading from Version 3.5 or Version 3.5.0.2, edit the files described in Step 1 and make the same changes after you run the Version 3.5.0.3 installation program.
4. Start Deployment Manager on the master node.
5. Log in to the WebSphere Application Server administrative console on the master node.
6. On the All applications page, click the ESAdminRestServer20 application. Click the Manage Modules link on the application.
7. Review the ESAdminRestServer module's mapped servers. If the module is mapped incorrectly to ESAdminServer and some web server on a node that is configured to use the search role (for example, "WebSphere:cell=kn001Cell01,node=nodeSearch,server=webserver1" and "WebSphere:cell=kn001Cell01,node=nodeMaster,server=ESAdminServer"), fix the mapping by following these steps:
      7a. Click the check box for the ESAdminRestServer module.
      7b. Select ESAdminServer and the web server on the master node, for example,
        "WebSphere:cell=wca001Cell01,node=nodeMaster,server=webserver1" and "WebSphere:cell=wca001Cell01,node=nodeMaster,server=ESAdminServer"
      7c. Click Apply. Make sure that the selected servers are correctly mapped to the module.
8. After saving these changes, restart the ESAdminServer instance in the WebSphere Application Server administrative console.

Upgrading in silent installation mode and Websphere Application Server (base edition or Network Deployment edition) environment
If you install Watson Content Analytics and select WebSphere Application Server (base edition or Network Deployment edition) as the application server and install Fix Pack 3 in silent installation mode, you must manually replace the ES_INSTALL_ROOT/wlp directory after Fix Pack 3 is installed.

If you do not replace the wlp directory, errors like "FFQG0024E An internal error occurred. Exception message: java.lang.NoClassDefFoundError: javax.servlet.ServletException" might occur when the Watson Content Analytics processes are started.

To replace the wlp directory:
 
1. On the master server, log in as the Content Analytics administrator.
2. At the command prompt, enter "esadmin system stopall" to stop the system.
3. Rename the ES_INSTALL_ROOT/wlp directory to ES_INSTALL_ROOT/_wlp.
4. Extract all files from the wlp-core-embeddable-8.5.5.7.zip file in the ES_INSTALL_ROOT/bin directory. After you extract the files, the ES_INSTALL_ROOT/wlp directory should exist.
    • Example for UNIX: unzip $ES_INSTALL_ROOT/bin/wlp-core-embeddable-8.5.5.7.zip -d $ES_INSTALL_ROOT/
    • Example for Windows: Right-click %ES_INSTALL_ROOT%\bin\wlp-core-embeddable-8.5.5.7.zip and extract the contents to C:\Program Files\IBM\es.
5. Assign the wlp directory the same owner and permissions as the _wlp directory.
    • Example for UNIX: chown -R esadmin:esadmin $ES_INSTALL_ROOT/wlp
    • Example for Windows: Right-click %ES_INSTALL_ROOT%\wlp, select Properties, and on the Security tab grant Full control to the Administrators group.
6. At the command prompt, enter "esadmin system startall" to start the system.
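
On UNIX, the complete replacement procedure might look like the following sketch (assuming the esadmin administrator ID and the default installation paths):

esadmin system stopall
mv $ES_INSTALL_ROOT/wlp $ES_INSTALL_ROOT/_wlp
unzip $ES_INSTALL_ROOT/bin/wlp-core-embeddable-8.5.5.7.zip -d $ES_INSTALL_ROOT/
chown -R esadmin:esadmin $ES_INSTALL_ROOT/wlp
esadmin system startall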

Uninstalling the Fix Pack in a WebSphere Application Server environment
If you use WebSphere Application Server and want to remove the fix pack, the uninstallation program does not remove the Content Analytics applications from WebSphere Application Server. To completely remove the Watson Content Analytics fix pack and return to Version 3.5.0.0, complete the following steps:

1. On the master server, log in as the Content Analytics administrator.
2. At the command prompt, enter "esadmin system stopall" to stop the system.
3. Log in to the WebSphere Application Server administration console as the WebSphere Application Server administrator.
4. Go to Applications > Application Types > WebSphere enterprise applications.
5. Repeat steps 5.a - 5.h for each Content Analytics application (ESAdmin, ESAdminRestServer, ESAdminRestServer20, ESRestServer, and commonui):
      a) Select an application and click the "Update" button.
      b) Select "Replace the entire application" and "Remote file system", then click the "Browse" button.
      c) Select <node name>/ES_INSTALL_ROOT/bin/<application name>.ear (for example, commonui.ear).
      d) Follow the instructions in the wizard.
      e) Select "merge new and existing bindings" for the "Specify bindings to use" field.
      f) Leave the installation options at the default settings.
      g) Confirm that <application> is mapped to webserver1 (an HTTP server definition) and ESSearchServer.
      h) Finish and save the changes.
6. At the command prompt, enter "esadmin system startall" to start the Content Analytics system.
7. Confirm that the base versions of the Content Analytics applications are running:
      a) For commonui: Open the About dialog in the enterprise search application and content analytics miner. The version number 3.5.0.0 should be displayed.
      b) For ESAdmin, ESAdminRestServer, ESAdminRestServer20, and ESRestServer: Go to the ES_INSTALL_ROOT/installedApps/<application>.ear/<application>.war/WEB-INF directory. The bldinfo.txt file should contain the version number 3.5.0.0.
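
For example, to check the version of the ESAdmin application on UNIX (a sketch, assuming the default path layout described in step 7b):

cat $ES_INSTALL_ROOT/installedApps/ESAdmin.ear/ESAdmin.war/WEB-INF/bldinfo.txt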

Uninstalling Agent for Windows File Systems Fix Pack 3
If you uninstall Version 3.5.0.3 of the Agent for Windows File Systems, you might see service errors or other problems when you attempt to run or stop Version 3.5 of the Agent for Windows File Systems crawler. To resolve this problem, enter the command that is appropriate for your installation.
  • For a local user, enter the following command:
    sc create "CCA_Agent_Service" binpath= "AGENT_INSTALLATION_DIRECTORY/bin/esagentservice.exe" start= auto error= normal obj= ".\WIN_USER_NAME" password= PASSWORD displayname= "IBM Watson Content Analytics Agent for Windows File Systems"
  • For a domain user, enter the following command:
    sc create "CCA_Agent_Service" binpath= "AGENT_INSTALLATION_DIRECTORY/bin/esagentservice.exe" start= auto error= normal obj= "WIN_DOMAIN_NAME\WIN_USER_NAME" password= PASSWORD displayname= "IBM Watson Content Analytics Agent for Windows File Systems"

    Where:
    • AGENT_INSTALLATION_DIRECTORY is the directory where the Agent crawler is installed.
    • WIN_USER_NAME is the Windows user name that was used to install the Agent crawler.
    • WIN_DOMAIN_NAME is the Windows domain of that user.
    • PASSWORD is the password of the Windows user.
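
    For example, for a hypothetical domain user EXAMPLEDOM\crawladmin with the agent installed under C:\Program Files\IBM\es\agent (all values in this sketch are placeholders):

    sc create "CCA_Agent_Service" binpath= "C:\Program Files\IBM\es\agent\bin\esagentservice.exe" start= auto error= normal obj= "EXAMPLEDOM\crawladmin" password= myPassword displayname= "IBM Watson Content Analytics Agent for Windows File Systems"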

JVM version is not reverted after uninstalling the fix pack

When you uninstall the fix pack and select "Uninstall IBM Watson Content Analytics v3.5.0.3 and restore IBM Watson Content Analytics v3.5.0.x" to restore the system to the previous level, the JVM version remains at the latest version. To resolve this problem, restore the JVM directories manually by following these steps after you run the Version 3.5.0.3 uninstallation program:
  1. Log in to the system as the Content Analytics administrator and change to the ES_INSTALL_ROOT directory, such as /opt/IBM/es or C:\Program Files\IBM\es.
  2. Rename the _jvm_<Version_Number> directory to _jvm. If you upgraded from Version 3.5 Fix Pack 2, the directory name is _jvm_3.5.0.2.
  3. If you run in a 64-bit environment, rename the _jvm64_<Version_Number> directory to _jvm64. If you upgraded from Version 3.5 Fix Pack 2, the directory name is _jvm64_3.5.0.2.
  4. UNIX: If you installed the product in a UNIX environment, run the following commands to assign appropriate permissions to the files under the JVM directories:
    $ chmod 755 -R ES_INSTALL_ROOT/_jvm
    $ chmod 755 -R ES_INSTALL_ROOT/_jvm64
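
On UNIX, the restore might look like the following sketch (assuming an upgrade from Fix Pack 2 and /opt/IBM/es as the ES_INSTALL_ROOT directory):

cd /opt/IBM/es
mv _jvm_3.5.0.2 _jvm
mv _jvm64_3.5.0.2 _jvm64
chmod 755 -R _jvm _jvm64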

Known issues and workarounds in Fix Pack 3

Unless otherwise noted, known issues in Versions 3.5, 3.5.0.1, and 3.5.0.2 also apply in Version 3.5.0.3.

WebSphere Portal search bar integration
The WebSphere Portal search bar integration function is not supported if the Watson Content Analytics portlet is deployed to a page under a virtual portal.

List of collections to search is not displayed
If you are not logged in to the enterprise search application (or content analytics miner), and a secure collection contains one or more secure data sources, no collections are shown in the list of collections that are available to search. The workaround for this issue is to enable login security for the application:
  • If you use the embedded web application server, enable login security in the Watson Content Analytics administration console.
  • If you use WebSphere Application Server, enable application security in the WebSphere Application Server administrative console.

Issues with copying the Admin REST API security token from the administration console
Problem: The administrator cannot access the Security > Token roles > Edit a Token User page, and thus cannot copy the Admin REST security token. The security token is needed, for example, to use the IBM Content Navigator plug-in.
Workaround: Use one of the following methods:
  • Instead of accessing the Edit a Token User page, add a new security token on the Add a Token User page.
  • Edit the ES_NODE_ROOT/master_config/admin/tokenUser.properties file, and specify the token value.

Customizer link is unavailable on the distributed master node
If you want to enable the search or analytics application customizer on the distributed master node, set the following property as a JVM option:

ICA_CUSTOMIZER_SEARCHREST_URL=http://SEARCH_NODE_HOSTNAME:port (for example, http://searchnode:8393)

If you use the embedded web application server, insert the following property in the ES_INSTALL_ROOT/configurations/interfaces/admin__interface.ini file:

JVMOptions=-DICA_CUSTOMIZER_SEARCHREST_URL=http://searchnode:8393

If you use WebSphere Application Server, set the JVM option under Java Virtual Machine on the Custom properties page for the WebSphere Application Server instance on which the applications are running. For WebSphere Application Server base edition, the instance name is ESSearchServer. For WebSphere Application Server Network Deployment, the instance name is ICACluster-MemberNN.

Using the Notes crawler with NRPC to search secure collections
By default, search applications and the content analytics miner are configured to use DIIOP mode, which can process secure searches against crawled documents by using the Notes DIIOP protocol. If your secure collection uses the NRPC protocol to crawl Notes documents, an error message similar to the following message occurs:

FFQEP0002E An error occurred when processing a remote API. The reason is : FFQEP1101E The identity management component failed to validate the specified user user_name.

If you want to enable secure search for NRPC sources, follow these steps to modify the configuration:
  1. Open the config.properties file for the application in a text editor. The paths for the default applications are ES_NODE_ROOT/master_config/searchserver/repo/search/default and ES_NODE_ROOT/master_config/searchserver/repo/analytics/default. To specify a custom application, replace "default" in the path with the name of your application directory.
  2. Insert the line "preferences.useNRPCForSecureSearch=true" into the file and save the change. For example, add the line at the end of the file.
  3. Restart the Watson Content Analytics system.
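
For example, on UNIX you can append the property to the configuration of the default enterprise search application (a sketch, assuming the default application directory):

echo "preferences.useNRPCForSecureSearch=true" >> $ES_NODE_ROOT/master_config/searchserver/repo/search/default/config.properties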


Release Notes for Version 3.5.0.2


What's new in Fix Pack 2?

Importing and exporting field mappings


Beginning with Watson Content Analytics Version 3.5.0.2, you can run commands to export field mappings from one crawler, and then import the field mappings to configure a different crawler. For details, see Importing and exporting crawler field mapping configurations.



Installing Fix Pack 2

For information about installing Version 3.5.0.2, download the readme file from Fix Central. For more information about connecting to Fix Central to obtain the readme file or fix pack, see the download document.



Review the following known issues before you install or uninstall the fix pack.

Rebuild Content Analytics Studio resources
Version 3.5.0.2 fixed a problem with the type priorities list not being created properly in Content Analytics Studio. However, after installing Content Analytics Studio Version 3.5.0.2, you must upgrade the workspace and rebuild all resources. Otherwise, you might see an "invalid type priorities" error when you analyze documents.

Ensure that the First Steps program is not running
You must stop the First Steps program before you run the fix pack installation program. When the installation program runs, some files are moved. If Watson Content Analytics processes are running, some files might be locked, which prevents them from being moved. The installation program can detect some processes and stop them as necessary, but it cannot stop the First Steps program.

Upgrading in a distributed server WebSphere Application Server environment
If you installed Watson Content Analytics in a distributed server configuration, and you use WebSphere Application Server base edition as the application server, the fix pack installation program does not upgrade the search and analytics applications automatically.

To resolve this problem:

1. If you are upgrading from Version 3.5.0.1, update the following files before you run the Version 3.5.0.2 installation program.
  • Edit the ES_INSTALL_ROOT/bin/installer.properties file and change the WAS_DEPLOYED property.
    • from:
      WAS_DEPLOYED=
      to:
      WAS_DEPLOYED=true
  • Edit the ES_INSTALL_ROOT/nodeinfo/es.cfg file and change the value of the WASDeployed property.
    • from:
      WASDeployed=false
      to:
      WASDeployed=true

2. Regardless of whether you are upgrading from Version 3.5 or Version 3.5.0.1, edit the files described in Step 1 and make the same changes after you run the Version 3.5.0.2 installation program.

Upgrading in a multiple server WebSphere Application Server Network Deployment environment
If you installed Watson Content Analytics in a distributed server configuration or in an all-on-one configuration that includes additional servers, and you use WebSphere Application Server Network Deployment Version 8.0, 8.5, or 8.5.5 as the application server, the fix pack installation program does not upgrade the search and analytics applications automatically. Additional problems affect your ability to export documents from search results.

To resolve these problems:

1. If you are upgrading from Version 3.5.0.1, update the following files before you run the Version 3.5.0.2 installation program.
  • Edit the ES_INSTALL_ROOT/bin/installer.properties file and change the WAS_DEPLOYED property.
    • from:
      WAS_DEPLOYED=
      to:
      WAS_DEPLOYED=true
  • Edit the ES_INSTALL_ROOT/nodeinfo/es.cfg file and change the value of the WASDeployed property.
    • from:
      WASDeployed=false
      to:
      WASDeployed=true

2. Install Version 3.5.0.2 on each server. First upgrade the master node and then upgrade the additional nodes.
3. Regardless of whether you are upgrading from Version 3.5 or Version 3.5.0.1, edit the files described in Step 1 and make the same changes after you run the Version 3.5.0.2 installation program.
4. Start Deployment Manager on the master node.
5. Log in to the WebSphere Application Server administrative console on the master node.
6. On the All applications page, click the ESAdminRestServer20 application. Click the Manage Modules link on the application.
7. Review the ESAdminRestServer module's mapped servers. If the module is mapped incorrectly to ESAdminServer and some web server on a node that is configured to use the search role (for example, "WebSphere:cell=kn001Cell01,node=nodeSearch,server=webserver1" and "WebSphere:cell=kn001Cell01,node=nodeMaster,server=ESAdminServer"), fix the mapping by following these steps:
7a. Click the check box for the ESAdminRestServer module.
7b. Select ESAdminServer and the web server on the master node, for example, "WebSphere:cell=wca001Cell01,node=nodeMaster,server=webserver1" and "WebSphere:cell=wca001Cell01,node=nodeMaster,server=ESAdminServer"
7c. Click Apply. Make sure that the selected servers are correctly mapped to the module.
8. After saving these changes, restart the ESAdminServer instance in the WebSphere Application Server administrative console.

Uninstalling the Fix Pack in a WebSphere Application Server environment
If you use WebSphere Application Server and want to remove the fix pack, the uninstallation program does not remove the Content Analytics applications from WebSphere Application Server. To completely remove the Watson Content Analytics fix pack and return to Version 3.5.0.0, complete the following steps:

1. On the master server, log in as the Content Analytics administrator.
2. At the command prompt, enter "esadmin system stopall" to stop the system.
3. Log in to the WebSphere Application Server administration console as the WebSphere Application Server administrator.
4. Go to Applications > Application Types > WebSphere enterprise applications.
5. Repeat steps 5.a - 5.h for each Content Analytics application (ESAdmin, ESAdminRestServer, ESAdminRestServer20, ESRestServer, and commonui):
a) Select an application and click the "Update" button.
b) Select "Replace the entire application" and "Remote file system", then click the "Browse" button.
c) Select <node name>/ES_INSTALL_ROOT/bin/<application name>.ear (for example, commonui.ear).
d) Follow the instructions in the wizard.
e) Select "merge new and existing bindings" for the "Specify bindings to use" field.
f) Leave the installation options at the default settings.
g) Confirm that <application> is mapped to webserver1 (an HTTP server definition) and ESSearchServer.
h) Finish and save the changes.
6. At the command prompt, enter "esadmin system startall" to start the Content Analytics system.
7. Confirm that the base versions of the Content Analytics applications are running:
a) For commonui: Open the About dialog in the enterprise search application and content analytics miner. The version number 3.5.0.0 should be displayed.
b) For ESAdmin, ESAdminRestServer, ESAdminRestServer20, and ESRestServer: Go to the ES_INSTALL_ROOT/installedApps/<application>.ear/<application>.war/WEB-INF directory. The bldinfo.txt file should contain the version number 3.5.0.0.

Uninstalling Agent for Windows File Systems Fix Pack 2
If you uninstall Version 3.5.0.2 of the Agent for Windows File Systems, you might see service errors or other problems when you attempt to run or stop Version 3.5 of the Agent for Windows File Systems crawler. To resolve this problem, enter the command that is appropriate for your installation.
  • For a local user, enter the following command:
    sc create "CCA_Agent_Service" binpath= "AGENT_INSTALLATION_DIRECTORY/bin/esagentservice.exe" start= auto error= normal obj= ".\WIN_USER_NAME" password= PASSWORD displayname= "IBM Watson Content Analytics Agent for Windows File Systems"
  • For a domain user, enter the following command:
    sc create "CCA_Agent_Service" binpath= "AGENT_INSTALLATION_DIRECTORY/bin/esagentservice.exe" start= auto error= normal obj= "WIN_DOMAIN_NAME\WIN_USER_NAME" password= PASSWORD displayname= "IBM Watson Content Analytics Agent for Windows File Systems"

    Where:
    • AGENT_INSTALLATION_DIRECTORY is the directory where the Agent crawler is installed.
    • WIN_USER_NAME is the Windows user name that was used to install the Agent crawler.
    • WIN_DOMAIN_NAME is the Windows domain of that user.
    • PASSWORD is the password of the Windows user.

JVM version is not reverted after uninstalling the fix pack

When you uninstall the fix pack and select "Uninstall IBM Watson Content Analytics v3.5.0.2 and restore IBM Watson Content Analytics v3.5.0.x" to restore the system to the previous level, the JVM version remains at the latest version. To resolve this problem, restore the JVM directories manually by following these steps after you run the Version 3.5.0.2 uninstallation program:
  1. Log in to the system as the Content Analytics administrator and change to the ES_INSTALL_ROOT directory, such as /opt/IBM/es or C:\Program Files\IBM\es.
  2. Rename the _jvm_<Version_Number> directory to _jvm. If you upgraded from Version 3.5 Fix Pack 1, the directory name is _jvm_3.5.0.1.
  3. If you run in a 64-bit environment, rename the _jvm64_<Version_Number> directory to _jvm64. If you upgraded from Version 3.5 Fix Pack 1, the directory name is _jvm64_3.5.0.1.
  4. UNIX: If you installed the product in a UNIX environment, run the following commands to assign appropriate permissions to the files under the JVM directories:
    $ chmod 755 -R ES_INSTALL_ROOT/_jvm
    $ chmod 755 -R ES_INSTALL_ROOT/_jvm64

Known issues and workarounds in Fix Pack 2

Unless otherwise noted, known issues in Version 3.5 and Version 3.5.0.1 also apply in Version 3.5.0.2.




Release Notes for Version 3.5.0.1

Installing Fix Pack 1

For information about installing Version 3.5.0.1, download the readme file from Fix Central. For more information about connecting to Fix Central to obtain the readme file or fix pack, see the download document.



Review the following known issues before you install or uninstall the fix pack.

Installing IBM InfoSphere BigInsights V3.0
To install BigInsights V3.0 with IBM Watson Content Analytics V3.5 Fix Pack 1, two options are supported:
  • Install Hadoop Distributed File System (HDFS)
  • Use an existing shared directory space

The following options are not supported:
  • Install General Parallel File System (GPFS)
  • Use an existing General Parallel File System (GPFS)

IBM Content Navigator plug-in
By integrating a plug-in with Watson Content Analytics, you can enhance the search capabilities of IBM Content Navigator. New functions are available if you integrate a plug-in with Watson Content Analytics Version 3.5 Fix Pack 1. For more information and instructions on how to set up the plug-in, see http://www.ibm.com/support/docview.wss?uid=swg27043484.

Known issues and workarounds in Fix Pack 1

Unless otherwise noted, known issues in Version 3.5 also apply in Version 3.5.0.1.




Release Notes for Version 3.5


Deprecated functions

This section describes functions that are deprecated in Version 3.5.



User applications: User applications that you created in Watson Content Analytics Version 3.0 are supported in Version 3.5, but they are not migrated to the new application framework. The Version 3.0 content analytics miner and enterprise search application are deprecated and will not be supported in future releases. Applications that are based on the Version 3.0 framework cannot use the new features that are available in Version 3.5.

Crawlers: The following crawlers are deprecated. You cannot create these types of crawlers in Watson Content Analytics Version 3.5:

- Domino Document Manager
- Exchange Server 2000 and 2003
- Web Content Management
- WebSphere Portal

To collect content from these types of sources, use the following crawlers:
  • Use the Exchange Server crawler to collect content from Microsoft Exchange Server sources. See the system requirements for supported Exchange Server versions.
  • Use the Seed list crawler to collect content from IBM Connections, IBM Quickr for WebSphere Portal, IBM Web Content Manager, and IBM WebSphere Portal sources.

Installation

For information about installing Version 3.5, download the readme file from Fix Central. For more information about connecting to Fix Central to obtain the readme file or fix pack, see the download document for your operating system.



Review the following known issues before you install or uninstall Version 3.5.

WebSphere Application Server user: Fatal errors might be logged when Watson Content Analytics V3.5 is installed to use WebSphere Application Server on Windows and the WebSphere Application Server user is different from the Watson Content Analytics installation user. A sample error message is: Failed to get subkeys for: SOFTWARE\IBM\WebSphere Application Server\. You can ignore these errors because they occur when the installation program tries to create a default value for the WebSphere Application Server installation directory.

Silent installation of the Agent for Windows File Systems: To silently install the Agent for Windows File Systems, you must add the DATA_TRANSFER_PORT parameter to the response file and use it instead of the COMMUNICATION_PORT. For example:

#COMMUNICATION_PORT=8398
DATA_TRANSFER_PORT=8398

Administration

This section describes known issues and workarounds for administering the product.



Solution administration:
  • Solution templates cannot propagate field mappings from certain types of crawlers. For example, it is not possible to inherit field mappings from the Notes and Quickr for Domino crawlers.
  • Result groups that are associated with custom annotators in PEAR files are displayed on the "Converting a Collection to a Solution Template" page when the PEAR files are not installed in the system. (They are not displayed on the Search Quality Management page.) This situation occurs when a user does the following actions:
    1. Create result groups that are associated with custom annotators in PEAR files.
    2. Export the collection to a ZIP file.
    3. Import the ZIP file to the system where the PEAR files are not installed.
    4. On the Solutions view, create a solution package and try to convert the collection to a solution template. On the Advanced tab, the result groups are shown even though the corresponding PEAR files are not installed in the system.

RDF stores: If you use DB2 as the RDF store, you cannot use the "search" and "linkAnalysis" predicates. That is, you cannot run a text search by using the REST API to submit a SPARQL query with the "search" predicate, and you cannot submit SPARQL queries with the "linkAnalysis" predicate to analyze the statistical weight of links in the stored RDF graph.



IBM InfoSphere BigInsights:
  • If you upgrade from Watson Content Analytics V3.0, you must do a full recrawl of any BigInsights collection in which the document cache was enabled. The old cache is not available in V3.5 because of changes to the index structure.
  • When you create a BigInsights collection, you must configure it to use only one index server.


Integration

This section describes known issues and workarounds for integrating with other products.


Application development

This section describes known issues and workarounds for using the product's application programming interfaces.



User applications

This section describes known issues and workarounds for using the content analytics miner and enterprise search applications.



Query limitation: The enterprise search application can show the original query separately from facet values that are added as the query is refined. For example, if you submit a query and then subsequently add facet values from the facet tree, the query terms are shown in the query text box and the facet values are listed under the query text box. This separation of query inputs cannot be shown when the AND/OR query structure is deeper than 4 levels. In this case, all query inputs are shown in the query text box.

Application presentation: Some widgets do not show data correctly when the user specifies the default collection by addressable URL. This situation occurs when an administrative user clicks the application link on the Collections view of the administration console. The workaround is to reselect the collection from the banner of the user application.

IBM Watson Content Analytics Studio (ICA Studio)

This section describes known issues and workarounds for using ICA Studio.



Help for ICA Studio: Some links from the context-sensitive help point to topics in IBM Knowledge Center on a public IBM website. If you do not have access to the internet, you can install IBM Knowledge Center Customer Installed Edition to access the documentation on a local intranet server. To locate a specific topic that is linked from the context-sensitive help, navigate to the locally installed IBM Knowledge Center Customer Installed Edition in your browser and search for the link text. For instructions on how to install IBM Knowledge Center Customer Installed Edition, see Installing Watson Content Analytics documentation on a local server.

SSL: ICA Studio cannot connect to the Watson Content Analytics server via the SSL protocol.

Lexical Analysis limitations:
  • Decomposition of plural possessives such as houses' is not consistently supported.
  • Lexical analysis may appear inconsistent where a word occurs in text with a hyphen or apostrophe attached to the end. This inconsistency is only evident when the word is explicitly marked in the dictionary as being invalid stand-alone.
  • Lexical analysis only recognizes abbreviations that have internal punctuation. Abbreviations without punctuation are generally categorized as UPPERCASEWORD unless they are explicitly recognized in the dictionary.
  • Where there is an overlap of multi-word unit annotations, the matched text assigned to both annotations is the union of both spans.
  • There is a potential problem when using a mixed collection of lowercase and allcase dictionaries. It is possible that custom dictionary entries will not be annotated due to lookup ordering.
  • Dictionary entries:
    • When creating a Dictionary entry that contains multiple words, where one or more of the words are compounds, the compounds must be separated by a space in order for them to be recognized by Lexical Analysis. For example, the English term "won't pay" must be saved in a dictionary entry as "wo n't pay".
    • When you apply the breakrule option "Alphanumeric Sequences = Report separate numeric and alphanumeric tokens" and a custom dictionary has alphanumeric entries, the Token annotations and Dictionary annotations are affected. For both annotations to work correctly, the alphanumeric entry must include spaces. For example, specify "myfiles 2014" not "myfiles2014".
  • Part-of-Speech enhancements for French:

    After migrating a workspace from ICA Studio 3.0 to ICA Studio 3.5, UIMA pipeline configurations with Lexical Analysis stages for French may get the following error: "Missing lex dictionary ${lw_oov_dictionary_path:fr}". This error is caused by changes to the French default configuration and improvement of the quality of the Part-of-Speech tagging and lemmatization for French. To fix this limitation, open the Lexical Analysis stages in your UIMA pipeline configuration, and remove "Built in OOV dictionary" in the dictionary list for French.



    If you have rules that use Part-of-Speech as match criteria for tokens, or lemmatization results, then the change may result in some rules no longer working. It is possible to use the old Part-of-Speech tagging dictionary to generate the same Part-of-Speech tagging and lemmatization results generated by ICA Studio 3.0. However, there are pros and cons for using the old Part-of-Speech dictionary. Consider the following points before using the old dictionary:
    • If your parsing rules do not use Part-of-Speech values as match criteria, then you do not need to use the old dictionary. If you do use the old dictionary, you will not be able to benefit from the improvements in Part-of Speech tagging in any new rules you write.
    • If your parsing rules refer to Part-of-Speech values, then examine the impact of the changes to your analysis results. Generally the improvements in Part-of-Speech tagging should reduce the number of incorrect analysis results generated by parsing rules.

    If, after considering the preceding points, you decide to use the old Part-of-Speech tagging dictionary, then you must perform the following steps:

    1. Copy the files <ICA Studio installation path>/plugins/com.ibm.dltj.dictionaries.fr_8.5.0/fr-XX-TSimplified-7001.dic and <ICA Studio installation path>/plugins/com.ibm.dltj.dictionaries.fr_8.5.0/fr-XX-OOV-7002.dic to <your project>/Resources/Dictionaries.
    2. Start ICA Studio.
    3. Open the UIMA Pipeline Configuration file in the editor.
    4. Click Lexical Analysis Stage.
    5. Add the OOV Dictionaries: click Select, and select ${workspace_loc:/<your project>/Resources/Dictionaries}/fr-XX-OOV-7002.dic.
    6. Expand POS tagger Dictionary and click the Use the specified POS Tagger dictionary button.
    7. In the Dictionary field, click Select and select ${workspace_loc:/<your project>/Resources/Dictionaries}/fr-XX-TSimplified-7001.dic.


Parsing Rules limitations:
  • The parsing engine will freeze on unbounded repetitions of sequences that have a possible empty path, for example a repeating ordered group covering two optional annotations. To avoid this problem, make sure all repeating groups always have at least one required annotation.
  • An optional Token test followed by an ambiguous choice of annotations that includes a dictionary annotation (for example, where an annotation created by an earlier parsing stage exactly covers a dictionary annotation) may behave as a required test, causing the rule to fail on some phrases where it should apply. To work around this problem, change any rule that creates a new annotation exactly matching the span of an existing dictionary annotation to delete the covered annotation.
  • Phrases and Entities Rules cannot currently be created across sentence boundaries. A sentence boundary is usually determined by the presence of a dot (.) or a new line character in the text. For terms such as John A. Waters, the . will be detected as ending the sentence, unless A. is in a dictionary. The solution is to put any terms such as A., Cllr. (short for councillor) in a dictionary.
  • The Rules engine does not have good support for detecting changes to the annotation types required as input for rules. This can cause issues when running the annotator against a document. The most common issue is that rules will not be triggered because the rules are still trying to match the original type. The following changes can cause this issue:
    - Changing the definition of a dictionary type, such as changing the type for uima.tt.Day to be com.ibm.Day. The rules were based on uima.tt.Day as input, and do not automatically detect the change. You might need to delete these rules and recreate them.
    - Changing the names of annotations created by a rule. For example, one rule might create a type Org, and another rule might match Org and create a Company annotation on top of it. If Org is changed to Organization in the first rule, the second rule might not fire because it still expects Org. The solution is to recreate any rules that depended on the Org type so that they now have Organization as input.
  • By default, Dictionary entries covering Token Annotations are ignored by rules. This allows Token-based rules to match even when Dictionary Annotations are covering some of the Tokens. However, when a Dictionary Annotation covers more than one Token, it is not ignored in the current release.
  • When creating rules, the Constraints tab lets users specify dictionary types to be prioritized over Token Annotations. This means that the dictionary type will hide the token for the purposes of the rule activation. However, if a prioritized Dictionary type and a non-prioritized Dictionary type cover the same Token, then the Token will not be hidden.
  • In the Selection tab of the Rule Editor, if a word was found in more than one Custom dictionary, the word will be displayed with one of the custom dictionary types together with a drop-down icon to allow you to select an alternative type to represent this word. Sometimes when clicking the drop-down icon, nothing happens. To clear this problem, click on another Token in the tree to highlight it, and then click on the drop-down icon again.
  • There is an issue with writing regular expressions in Arabic. The regular expression editor does not handle right-to-left (Arabic) input well. While it is possible to write a regular expression correctly, it is unlikely to look correct on the screen.
  • This release added support for a new annotation type that sits above the Token annotation. It is called Annotation and covers any type of annotation irrespective of its type, such as token, dictionary, rule, punctuation, and so on. As such, using this annotation type in a rule is very aggressive as it will ingest all annotations until the next break point (sentence in the case of phrase/entity rules, and sentence/paragraph/document in the case of aggregates). Therefore use this annotation type with caution.

    There is one Use Case where this annotation type (and its aggressive behavior) can be extremely useful when used in combination with the custom break rules. If you have a semi-structured document with a special field delimiter, then you can use the custom break rules to define that delimiter by setting the character in question (such as a tab, double tab, pipe, colon, single carriage return, and so on) as the sentence breaker, and then use the annotation type within a rule to mark up the fields between these delimiters.

    For example:

    • ID: 1234-3434 23D D33
      Name: Marie Wallace
      Address: 1 Studio Road, Dublin, Ireland


      If you set a single carriage return as the sentence delimiter in the break rules, you can then create the following rules:

      {token = ID:} + {Annotation}* = {token = ID:} + [Document ID]
      {token = Name:} + {Annotation}* = {token = Name:} + [Person]
      {token = Address:} + {Annotation}* = {token = Address:} + [Address]

  • Parsing Rules have a feature to help in the logical grouping of annotations, called the Group function. This allows you to group patterns of annotations in order to create larger annotations and relationships across them. There is one limitation: it does not support the use of features associated with the components of a group. You can physically add features to any annotations that you create under a group; however, such a feature will incorporate the aggregate of all instances of the group that are matched in the rule.
  • If you click the icon to create a pre-condition, the Create Parsing Rules tab will indicate the rule should be saved. That is, an asterisk indicating the rule has been modified and not saved will be displayed even if you have cancelled the pre-condition dialog.
  • If you create an annotation over two or more tokens in a rule, and then go back to the Selection tab to delete one of the tokens that the annotation covered, this will cause the annotation to be deleted.
  • If a Parsing Rule has all of the annotations in the selection tree marked as optional (occurring zero or one time), then this rule will never be triggered. At least one annotation in the selection criteria must be a required annotation.
  • If you create a rule that creates a Normalized Feature by using the Covered Text of that rule, the rule will fail to build and will generate an error. You can work around this problem by creating two rules. The first rule matches the pattern and creates a single annotation over the entire span. The second rule matches the annotation created by the first rule and creates a feature that is the "value" of that annotation; this value will be the covered text of the first rule. You can then create a Normalized Feature in this second rule that uses the feature that you just created.
  • If you close a Rules database while there is an unsaved rule open in the Rule editor, you will be prompted to save the rule, but the saved rule will not be written to the CSV file used for Source Control. Therefore if you then commit your changes to a source control repository, the modified rule will not be included in those changes. To avoid this problem, always save your rule before closing the database.
  • The Parsing Rule editor allows you to add a Feature of type "String Array" to the list of tests in a Rule. ICA Studio does not support tests on this type of feature. Adding a test will cause errors in the UIMA pipeline.
  • In the Selection tab of the Parsing Rule editor, if you add an annotation over two or more selected annotations, then the results can be unpredictable. This problem will only occur if:
      • 1. The first selected annotation is the first node in the parse tree.
        2. The annotation that is added is the same type as the first selected annotation.
  • Annotation type names that match internal ICA Studio or UIMA types may be treated incorrectly in the current release. The following names are therefore reserved and should not be used in user dictionaries or as rule outputs:
      • Document
      • Paragraph
      • Sentence
      • Annotation
      • Token
      • Lemma
      • WordLikeToken
      • Alphabetic
      • UppercaseAlphabetic
      • TitlecaseAlphabetic
      • LowercaseAlphabetic
      • Arabic
      • Hebrew
      • Syllabic
      • Hiragana
      • Katakana
      • Hangul
      • Thai
      • Ideographic
      • Han
      • Numeric
      • ChineseNumeral
      • Punctuation
      • ClauseEndingPunctuation

Character Rules limitations:

  • Character rules will be unable to recognize entities starting and ending in punctuation when that punctuation character is adjacent to multiple punctuation characters. This is due to the fact that character rule matches must start and end on token boundaries, and Lexical Analysis interprets longer sequences of punctuation as one token. For example, the open bracket in C++</a> is not a token boundary, and thus a character rule finding HTML closing tags will not be triggered on this sequence.
  • Setting a character rules file to affect tokenization may cause the rules to fail to be triggered in situations where the match starts with the separating character of the previous match. For example, a tokenization-affecting rule for matching currency expressions will fail to match $13.5 in the string US$13.5 because the dollar symbol is the break that ends the token US. This is due to the fact that lexical analysis treats the text as a sequence of tokens and breaks. The former are handled by standard dictionaries and tokenization-affecting character rules, the latter are normally consumed by break rules. Character rules that do not affect tokenization do not exhibit this behavior.
  • Performing a Split operation with more than one Character class node selected results in Character class nodes being added in an indeterminate order.
  • When writing Character Rules in a Character Rules database, you will receive an error if you are using a UIMA Pipeline with a Cleanup stage that deletes SentenceAnnotation types. Change the Cleanup stage so that SentenceAnnotation types are not deleted and try again.
  • When creating Features in an Annotation generated by a Character Rule, the feature will be shown in the Rule Editor as a List, and multiple sub-annotations can be added to it, but when the Rule is used, the Feature is of type string (not list).

Break Rules limitations:

  • The option to report one letter abbreviations as one token may be overruled by dictionary entries even when dictionary entries are shorter. For example, in English, the word I will cause the abbreviation I. to be identified as a sequence of two tokens. The lexical analysis stage lacks the analysis depth required to decide whether I. is an abbreviation (as in I. B. M.) or a sequence of a valid English word and a sentence ending (as in So do I.). As the lexical analysis engine is designed to be simple, fast and completely data driven, in these cases it gives priority to dictionary data over break rules based on the assumption that a match for a dictionary word carries more information. This feature can also be used to ensure that problematic abbreviations are always kept together: if a user dictionary contains I., the other interpretation will be overridden and the sequence will be treated as an abbreviation.

Dictionary Editor limitations:

  • If you edit the values in a dictionary database by using the in-line editor of the database view, sometimes changes will not be actioned. If you change the value of one field, and then without pressing Enter, click a second field and change the value of that field, the second change will not be actioned. This is caused by a defect in Eclipse.

Semantic Analysis limitations:

  • The script to build a semantic dictionary (build.bat) cannot contain non-ASCII characters. For example, the names of the input RDF file and the output semantic dictionary file must contain only ASCII characters.
  • In the configuration of a Semantic stage of a UIMA Pipeline, there are two "Value" fields. The description of these fields in the online help is incorrect:
    - For a Semantic Type, the value specifies how likely it is that concepts of this type will be annotated by this Semantic stage. A value of 1 means that it is more likely that concepts of this type will be annotated, and a value of 0 means that it is unlikely.
    - For a Semantic Link, the value specifies how likely it is that the link will be traversed. A value of 1 means that the link is likely to be traversed, and a value of 0 means that it is unlikely to be traversed.
  • When exporting an ICA Studio pipeline that includes the Semantic Analysis stage for use with Watson Content Analytics, com.ibm.langware.semantic.Concept:ref, com.ibm.langware.semantic.Relation:subject, and com.ibm.langware.semantic.Relation:object are not selectable in the Add Index Field or Facet wizard because these features are not primitive. To work around this limitation, edit cas2index.xml located in an exported PEAR file and upload it to the Watson Content Analytics server manually.
    For example, if you set a mapping between the surface form of com.ibm.langware.semantic.Concept:ref and a Watson Content Analytics facet named concept_ref, you can add the following definition to cas2index.xml:

      <indexBuildItem>
        <name>com.ibm.langware.semantic.Concept</name>
        <indexRule>
          <style name="Facet">
            <attribute name="fixedName" value="$.concept_ref"/>
            <pathComponent name="feature" value="ref/coveredText()"/>
          </style>
        </indexRule>
      </indexBuildItem>


ICA Studio limitations:

  • If a project is renamed, any documents with an extension of .txt in the project will not be opened with the annotations editor until ICA Studio is restarted.
  • If you use source control to share an ICA Studio workspace between several users, and one user deletes a database and synchronizes that change to the source control repository, then when another user checks out those changes, they will find that the database has been correctly removed but is replaced by a folder containing a number of files. This folder and its contents should be deleted.
  • When installing ICA Studio on any drive that is not C:, the installation wizard will create a "temp" folder on the drive you select. This temp folder will not be deleted after the installation is completed. The same problem occurs if you uninstall ICA Studio.
  • If you export a UIMA pipeline to a Watson Content Analytics server and define an index field that maps to a facet, and then at a later time you export the same pipeline to the server and change or remove the index field to facet mapping, the original mapping will not be deleted on the server. To work around this problem, you must open the Watson Content Analytics administration console and manually delete the mapping.
  • When using the Type Catalog, the option to show the UIMA pipeline configuration and the Type System input files that use the selected type does not always show the full list of UIMA pipeline configuration files that output the selected type.
  • If you have an open document annotated with a UIMA annotator, and then close and re-open the Properties view, the view will fail with an exception. To work around this issue, close all document editors and re-open them.
  • If you import an ICA Studio Project into a workspace, you will be prompted whether you want to restore each Dictionary or Rule database from the CSV files. If you respond Yes, you will receive an error message indicating that the database table already exists. This error message can be ignored as the database is up to date and intact.
  • The Collection Analysis view shows an asterisk (*) in the title indicating that it has unsaved data even if the collection analysis data has been saved.

Japanese limitations:

  • The built-in Japanese Lexical Analysis dictionary must be included in the Lexical Analysis stage of the Annotator.
  • Custom break rules files are not supported. The Lexical Analysis stage of an annotator must use the Studio's default segmentation rules for Japanese.
  • Custom dictionary entries will only be annotated if the underlying token is classified as a "Noun" or "Unknown" part of speech.

Chinese limitations:

  • The built-in Chinese Lexical Analysis dictionary must be included in the Lexical Analysis stage of the Annotator.
  • Custom break rules files are not supported. The Lexical Analysis stage of an annotator must use the Studio's default segmentation rules for Chinese.

Korean limitations:

  • The built-in Korean Lexical Analysis dictionary must be included in the Lexical Analysis stage of the Annotator.
  • Custom break rules files are not supported. The Lexical Analysis stage of an annotator must use the Studio's default segmentation rules for Korean.
  • Custom dictionary entries will only be annotated if the underlying token is classified as a "Noun" or "Unknown" part of speech.

Arabic limitations:

  • There is no support for word boundary detection if more than one space is missing in a sentence fragment. Such a fragment will not be lexically analyzed and will be tagged as Unknown.

Documentation


This section describes corrections to the documentation.
  • Ambiguous references to high availability servers

    The documentation uses the term "high availability" to describe servers that are configured to serve as backup servers for disaster recovery and to describe servers that are added to a system to provide scalability and failover support.



    In a single-server installation, you can install one high availability server to support disaster recovery on Windows or AIX. In a distributed server installation, you can install two high availability servers to support disaster recovery on Windows or AIX; one server is dedicated to crawling and the other is dedicated to parsing and indexing.

    All other servers that you add to a Watson Content Analytics system are designed to support increased scalability, throughput, and failover. These additional servers, which are dedicated to search, document processing, or indexing, are not high availability servers for backup or disaster recovery purposes, and they can be installed on any of the supported operating systems.

    Important: The documentation in IBM Knowledge Center about configuring support for high availability was originally written for the Windows Server 2008 environment. For current procedures, see Configuring high availability in a Windows Server 2012 environment.
  • Two issues concern the "Other parameters" section in the Exporting a UIMA pipeline for domain adaptive search topic:
    • The reference to DisableStopWord is incorrect. The correct spelling is DisableStopword. This correction also applies when you click F2 on the Domain Adaptive Search window to view help in Watson Content Analytics Studio.
    • The following statement is incorrect: "You can also specify whether queries from the same UIMA type are merged to a single query, and whether the UIMA type is enabled only when the original query is a plain text query that does not contain any special characters." Actually, the UIMA type is enabled when the original query is a plain text query without special syntax, which includes query syntax such as "|" ("OR"), "IN", "WITHIN", "INORDER", "ANY", "samegroupas:", and so on, not just special characters such as +, -, ~, "", and so on.
  • Four help files about monitoring the BoardReader crawler and LDAP Entity Resolver were omitted from IBM Knowledge Center. You can read the content of the help files in this technote.
  • Disregard the information in the Configuring application user privileges topic. User privileges are now configured through application customizers, not the administration console.

[{"Product":{"code":"SS5RWK","label":"Content Analytics with Enterprise Search"},"Business Unit":{"code":"BU053","label":"Cloud & Data Platform"},"Component":"--","Platform":[{"code":"PF002","label":"AIX"},{"code":"PF016","label":"Linux"},{"code":"PF033","label":"Windows"}],"Version":"3.5","Edition":"","Line of Business":{"code":"LOB10","label":"Data and AI"}}]

Document Information

Modified date:
17 June 2018

UID

swg27037841