IBM Support

Add or upgrade microservices tier on an existing installation of Information Server or later

How To


While installing Information Server, you can install the microservices tier alongside other tiers by using the standard Information Server installer. This document presents an alternative approach: install Information Server without the microservices tier, and then add the microservices tier to the product instance afterward. This procedure is more advanced, but it also offers greater flexibility in how the microservices tier is configured.

This document also covers the scenario where you first upgrade all tiers of Information Server other than the microservices tier, and then upgrade the microservices tier separately to the same fix pack level. Most importantly, you must use this upgrade approach if you installed the microservices tier by following the steps in this document; otherwise your custom inventory file would be overwritten. In that case, skip upgrading the microservices tier with the Information Server Update installer, and instead follow the steps in this document to upgrade the microservices tier to the same fix pack level.


Before you begin
  • Ensure that at least the database and services tiers of Information Server are installed and operational. Refer to the Information Server installation documentation at
  • Plan your microservices tier architecture. Depending on your needs (for example, test versus production), you can install the microservices tier in a single-node setup or a three-node setup. Refer to the system requirements page for a detailed set of requirements for the microservices tier hardware and operating system.
  • Designate one host as the microservices tier control plane node. This node is also used to run the installation and to perform any future maintenance actions, including applying upgrades.
  • During the installation process, SSH communication must be possible from the control plane node to the remaining microservices tier hosts. It is advisable to test connectivity before you start the installation, and to ensure that all nodes are added to the known hosts file of the user who will run the installation on the control plane host.
  • Fast network communication between cluster nodes must be ensured both during and after installation. For a list of mandatory open ports, see
  • If you are installing as a non-root user, you must install and configure a root privilege escalation utility on all microservices tier hosts. The default privilege escalation method is sudo. To use a different method, additional steps are required during the installation. Read the instructions in the section "Using a custom privilege escalation method" before attempting the installation.
  • Ensure that the database tier and services tier fully qualified domain names (FQDNs) resolve properly in DNS on all microservices tier hosts. You can check that the names resolve by using the nslookup command.
  • To ensure proper non-root process ownership and local storage rights, the installer creates a set of system users and groups on all microservices tier nodes if they do not already exist. For a list of users, see the appendix at the bottom of this document.
  • Part of the microservices tier is installed from RPM packages by using a local YUM repository. The RPM packages are included in the installation bundle; however, they have a set of system-level dependencies that must either be preinstalled or be available as part of an active RHEL subscription or a local YUM repository. For a list of required system-level packages, see the system requirements page.
  • When performing an upgrade, ensure that both Kubernetes and the pods are running (or completed, in the case of job pods) by checking the kubelet service status (with the "systemctl status kubelet.service" command) and by running the "kubectl get pods --all-namespaces" command and reviewing the pod status.
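The DNS and upgrade-readiness checks above can be sketched as a short script run from the control plane node. The helper check_resolves is illustrative, not a product script; replace localhost with your services tier and database tier FQDNs.

```shell
# Illustrative pre-flight checks, run from the control plane node.
check_resolves() {
  # Succeeds when the name resolves through the system resolver
  # (the same sources that nslookup and /etc/hosts provide).
  if getent hosts "$1" > /dev/null; then echo "OK: $1"; else echo "FAIL: $1"; fi
}

check_resolves localhost      # replace with the services/database tier FQDNs

# Before an upgrade, also confirm cluster health on the control plane node:
#   systemctl status kubelet.service
#   kubectl get pods --all-namespaces
```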
Download and unpack the microservices tier installation bundle

The microservices installer is distributed as a compressed TAR archive. Download the Information Server microservices tier install bundle version that matches your Information Server version from Passport Advantage.
For Information Server, download is-enterprise-search- of part number CC77WML.
For Information Server, download is-ent-search- of part number CC7VEML.
Upload the installation media to the microservices tier control plane node and extract it to a directory of your choice. For example:
$ mkdir ~/installer
$ tar zxf is-enterprise-search- -C ~/installer
Throughout this document, the unpack directory is referred to as INSTALL_DIR. All commands in this document assume that the current working directory is INSTALL_DIR.

Important access rights information:
  • The user who runs the installation must have full access (read, write and execute) to the INSTALL_DIR directory and at least read and execute access to the INSTALL_DIR parent directory hierarchy.
  • Important: The INSTALL_DIR directory must not be globally writable. Otherwise, the installation behavior might become unpredictable, because Ansible stops reading the ansible.cfg configuration file when it resides in a world-writable directory. Refer to the Ansible configuration documentation for more details.
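A minimal sketch of verifying and tightening the INSTALL_DIR permissions; the ~/installer path is the example used earlier, not a fixed location:

```shell
# Sketch: ensure the installing user has full access and strip world-write
# so that Ansible keeps reading ansible.cfg from this directory.
INSTALL_DIR="${INSTALL_DIR:-$HOME/installer}"
mkdir -p "$INSTALL_DIR"
chmod u+rwx "$INSTALL_DIR"   # installing user needs read, write, execute
chmod o-w "$INSTALL_DIR"     # directory must not be globally writable
stat -c '%a' "$INSTALL_DIR"  # octal permissions for a quick visual check
```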

Install Ansible

Ansible is the main utility that is used by the microservices tier installer to run actions on all the microservices tier hosts. The installation directory contains all necessary files to install Ansible. To install Ansible:

1. When installing version, run the script:
$ ./
[INFO]  Using default become method: sudo -n
[INFO]  Checking for Ansible...
[INFO]  Copying local Ansible YUM repository...
[INFO]  Checking for YUM package: ansible-2.9.5...
[INFO]  Checking for YUM package: python-netaddr...
[INFO]  Checking for YUM package: python-dns...
[INFO]  Installing Ansible using local YUM repository...
[INFO]  Ansible is now available at /usr/bin/ansible
[INFO]  Ansible version is 2.9.5
[PASS]  Ansible version is supported
[INFO]  Ansible uses python2
[INFO]  Checking for required python2 libraries...
    OK: Module netaddr was found
    OK: Module dns was found
[PASS]  Required python2 libraries are installed
When upgrading, use the following command to upgrade the existing Ansible runtime installed with the previous product version:
# ./ -u

2. When installing version or later, Ansible is no longer shipped with the product; instead, it must be preinstalled. To install Ansible, follow the official Ansible documentation (suggested) or install it by using pip.

In addition, you must install the Python "dns" and "netaddr" modules. When installing through RPM/YUM, install the "python-dns" and "python-netaddr" system packages (or "python3-dns" and "python3-netaddr" when using Python 3, the default on RHEL 8). When installing through pip, install the dnspython and netaddr modules with the following command: "python -m pip install --user dnspython netaddr" (skip the --user flag to install globally).
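A quick, non-authoritative way to confirm the prerequisites are in place before continuing; check_cmd is an illustrative helper, and the pip commands are typical for a Python 3 environment:

```shell
# Illustrative availability check for the preinstalled tooling.
check_cmd() {
  if command -v "$1" > /dev/null 2>&1; then echo "found: $1"; else echo "missing: $1"; fi
}
check_cmd ansible
check_cmd python3
# If either is missing, a user-local install might look like:
#   python3 -m pip install --user ansible
#   python3 -m pip install --user dnspython netaddr
```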

Prepare the Ansible inventory file

The Ansible inventory file describes the microservices tier topology and defines all the installation parameters. The inventory file must be named "inventory.yaml" and placed directly in the INSTALL_DIR directory.

Important: The inventory file uses the YAML format and is very sensitive to the proper kind and number of whitespace characters used to represent the document's hierarchical structure. When you edit the inventory file, you must use two space characters per indentation level and refrain from using other whitespace characters, such as tab characters.

The easiest way to start creating an inventory file is to copy a default one:
$ cp defaults/default_inventory.yaml inventory.yaml
If the installation topology includes additional nodes, edit the inventory file and add the node connection details to the Kubernetes group. An example three-node setup that uses password-less SSH as a custom user might look like the following:
      ansible_host: localhost
      ansible_connection: local
              ansible_host: localhost
              ansible_connection: local
              ansible_user: installuser
              ansible_user: installuser
For a list of all available SSH connection customization options for the additional microservices tier nodes, see Ansible documentation:

The "all:vars:" section is the most basic place to define installation parameters. Every variable must be indented with exactly four space characters, as in the following example:
    image_registry_host: "{{ hostvars[groups.masters[0]].ansible_nodename|lower }}"
    image_registry_port: 5000
    image_registry_username: "{{ lookup('env', 'REGISTRY_USERNAME') | default('admin', true) }}"
    image_registry_password: "secret!"
    iis_server_host: ""
    iis_server_port: 9443
    iis_admin_user: "isadmin"
    iis_admin_password: "secret!"
    iis_db_type: "db2"
    iis_db_host: ""
    iis_db_port: 50000
    iis_db_user: "xmeta"
    iis_db_password: "secret!"
    iis_db_name: "xmeta"
    iis_db_driver: ""
    iis_db_url: "jdbc:db2://"
    iis_db_sr_type: "db2"
    iis_db_sr_host: ""
    iis_db_sr_port: 50000
    iis_db_sr_user: "xmetasr"
    iis_db_sr_password: "secret!"
    iis_db_sr_name: "xmeta"
    iis_db_sr_driver: ""
    iis_db_sr_url: "jdbc:db2://"
    ug_local_storage_dir: "/var/lib/ibm/ugdata"
    kube_pod_subnet: ""
    kube_service_subnet: ""
    finley_token: "secret!"
    zookeeper_sasl_enable: "yes"
    kafka_zookeeper_sasl_enable: "yes"
    kafka_sasl_enable: "yes"
    kafka_ssl_enable: "yes"
    solr_zookeeper_sasl_enable: "yes"
    solr_auth_enable: "yes"
      kafka: "secret!"
    solr_auth_basic_username: "solr"
    solr_auth_basic_password: "secret!"
The values "" and "" must be replaced with the FQDNs of the services tier and the database tier, respectively. Supported database connection types are: "db2" for IBM DB2, "oracle" for Oracle Database, and "sqlserver" for Microsoft SQL Server. See the appendix for more details on the variables related to connectivity to other tiers.

The value of the finley_token variable can be arbitrary text. A random secret can be generated in a couple of ways, for example:
$ openssl rand -base64 8
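Another option, assuming a Linux host with /dev/urandom available:

```shell
# Generate 12 random bytes and base64-encode them into a printable token.
head -c 12 /dev/urandom | base64
```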
The values provided to the kube_pod_subnet and kube_service_subnet variables must not clash with existing host subnets. Consult the product documentation page for more details:

Because some of the variables contain sensitive data, set the inventory file permissions to 0600 (read and write for the owner only).
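For example, assuming the inventory file sits in INSTALL_DIR:

```shell
# Lock down the inventory file, which holds passwords in plain text.
INSTALL_DIR="${INSTALL_DIR:-$HOME/installer}"
mkdir -p "$INSTALL_DIR"
cd "$INSTALL_DIR"
touch inventory.yaml            # no-op when the file already exists
chmod 0600 inventory.yaml       # read/write for the owner only
```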

Configure the JWT verification certificate

In addition to the inventory file, the microservices tier requires the Information Server JSON Web Token (JWT) certificate to validate authentication tokens.

To configure the JWT certificate, copy the IIS_INSTALL_DIR/lib/iis/tknproperties/tokenservicepublic.cer file from the Information Server services tier into the INSTALL_DIR/files directory on the microservices tier. The default location of the IIS_INSTALL_DIR on Linux is /opt/IBM/WebSphere/AppServer/profiles/InfoSphere for WebSphere Application Server Network Deployment and /opt/IBM/InformationServer/wlp/usr/servers/iis for WebSphere Liberty.
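A sketch of the copy step, assuming WebSphere Liberty (so IIS_INSTALL_DIR is the default /opt/IBM/InformationServer/wlp/usr/servers/iis) and an example hostname "services-host"; adjust both for your environment:

```shell
# Prepare the destination and pull the JWT public certificate from the
# services tier. The scp line is commented out because the hostname and
# user are placeholders.
INSTALL_DIR="${INSTALL_DIR:-$HOME/installer}"
mkdir -p "$INSTALL_DIR/files"
# scp user@services-host:/opt/IBM/InformationServer/wlp/usr/servers/iis/lib/iis/tknproperties/tokenservicepublic.cer \
#     "$INSTALL_DIR/files/"
```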

Run the microservices tier installation or upgrade

The installation is started with the script. The script reads the inventory file as input, so no additional parameters are needed. When started, the script performs a basic inventory sanity check and asks for confirmation:
$ ./
[INFO]  Console log output file: /opt/IBM/UGinstall/logs/ug_install_2020_05_25_19_53_07.log
[INFO]  Checking for Ansible...
[INFO]  Ansible version is 2.9.5
[PASS]  Ansible version is supported
[INFO]  Ansible uses python2
[INFO]  Checking for required python2 libraries...
    OK: Module netaddr was found
    OK: Module dns was found
[PASS]  Required python2 libraries are installed

[INFO]  Checking hosts connectivity...
master-1 | SUCCESS => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "ping": "dest=localhost"}
deployment_coordinator | SUCCESS => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "ping": "dest=localhost"}

Do you want to proceed? (yes/no):
When upgrading, use the following command instead:
$ ./
At this point, verify that the list of hosts is complete. The list should include all microservices tier hosts and an additional "deployment_coordinator" alias. To continue, type "yes" and press Enter.

The first tasks that the installer runs are the input parameter and prerequisite checks. If problems are found during the checks, they are printed to the console and the script continues. If any check failed, at the end the script prints an error message and a recap similar to the following example:
TASK [Summary of all prerequisite checks] *************************************************************************************************************************************************************************
Friday 22 May 2020  15:15:06 +0000 (0:00:00.053)       0:00:42.606 ************
ok: [deployment_coordinator] => {
    "changed": false,
    "msg": "All prerequisite checks passed"
fatal: [master-1]: FAILED! => {
    "assertion": "not (validate_kube_platform_prereqs_failed | default(omit) | bool)",
    "changed": false,
    "evaluated_to": false,
    "msg": "One or more prerequisite checks have failed, please check for more details in the log above."

NO MORE HOSTS LEFT ************************************************************************************************************************************************************************************************

PLAY RECAP ********************************************************************************************************************************************************************************************************
deployment_coordinator     : ok=14   changed=0    unreachable=0    failed=0    skipped=5    rescued=0    ignored=0  
localhost                  : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0  
master-1                   : ok=60   changed=0    unreachable=0    failed=1    skipped=8    rescued=4    ignored=0  
The complete console output of the prerequisite checks contains the detailed error messages. The same log is also written to the INSTALL_DIR/logs directory.

If the prerequisite checks are successful, the installation continues without any action required. The installation takes 20-40 minutes on average, but might take longer depending on machine performance and the number of nodes. A successful product installation finishes with a success message on the console and a PLAY RECAP that reports no failed tasks:
2020-05-26 02:13:54,926 p=32401 u=root n=ansible | TASK [com/ibm/ugi/iis/common : Summary of UG services rollout status check] ****
2020-05-26 02:13:54,926 p=32401 u=root n=ansible | Tuesday 26 May 2020  02:13:54 +0200 (0:00:00.549)       0:32:17.533 ***********
2020-05-26 02:13:54,960 p=32401 u=root n=ansible | ok: [deployment_coordinator] => {
    "changed": false,
    "msg": "All UG services have deployed successfully"
2020-05-26 02:13:54,965 p=32401 u=root n=ansible | PLAY [Create and upload version configmap to Kubernetes] ***********************
2020-05-26 02:13:54,976 p=32401 u=root n=ansible | TASK [com/ibm/ugi/kubeplatform/product_version : Deploy product version configmap] ***
2020-05-26 02:13:54,976 p=32401 u=root n=ansible | Tuesday 26 May 2020  02:13:54 +0200 (0:00:00.050)       0:32:17.583 ***********
2020-05-26 02:13:55,555 p=32401 u=root n=ansible | changed: [deployment_coordinator]
2020-05-26 02:13:55,558 p=32401 u=root n=ansible | PLAY RECAP *********************************************************************
2020-05-26 02:13:55,559 p=32401 u=root n=ansible | deployment_coordinator     : ok=354  changed=151  unreachable=0    failed=0    skipped=46   rescued=0    ignored=0  
2020-05-26 02:13:55,559 p=32401 u=root n=ansible | localhost                  : ok=4    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0  
2020-05-26 02:13:55,559 p=32401 u=root n=ansible | master-1                   : ok=248  changed=94   unreachable=0    failed=0    skipped=53   rescued=0    ignored=0  
2020-05-26 02:13:55,560 p=32401 u=root n=ansible | Tuesday 26 May 2020  02:13:55 +0200 (0:00:00.583)       0:32:18.167 ***********
2020-05-26 02:13:55,560 p=32401 u=root n=ansible | ===============================================================================
Configure the services tier to connect to the microservices tier

You must configure the services tier to set up the connection to common services that run on the microservices tier, such as Kafka and Solr.

Before you configure the services tier, obtain a Kafka CA certificate. Run the following command in the microservices tier shell:
$ INSTALL_DIR/ -y INSTALL_DIR/playbooks/shared_services/kafka_get_ca_crt.yaml -e kafka_ssl_ca_crt_file=/tmp/kafka_ca.pem
After the command finishes, copy the /tmp/kafka_ca.pem file to the same location on the services tier host. Then, run the following commands on the services tier machine to create a JKS truststore that will be used for Kafka client connections, replacing IS_INSTALL_HOME with the actual installation location:
$ IS_INSTALL_HOME/jdk/bin/keytool -import -alias kafka -file /tmp/kafka_ca.pem -keystore /tmp/ug-host-truststore.jks -storepass secret! -noprompt
$ mkdir -p IS_INSTALL_HOME/Kafka
$ chmod 755 IS_INSTALL_HOME/Kafka
$ cp /tmp/ug-host-truststore.jks IS_INSTALL_HOME/Kafka
$ chmod 644 IS_INSTALL_HOME/Kafka/ug-host-truststore.jks
Note: In a clustered WebSphere environment, the ug-host-truststore.jks file must be copied to IS_INSTALL_HOME/Kafka on all WebSphere nodes.

Next, encrypt the Kafka and Solr passwords as specified in the microservices tier inventory file, as well as the Kafka truststore password from the previous step (invoke the command multiple times if the passwords differ):
$ IS_INSTALL_HOME/ASBServer/bin/ secret!
Next, set the appropriate services tier configuration properties. Replace UG_HOST with the actual hostname of the microservices tier control plane host, and replace KAFKA_USERNAME, KAFKA_PASSWORD_ENCRYPTED, KAFKA_TRUSTSTORE_PASS_ENCRYPTED, SOLR_USERNAME, SOLR_PASSWORD_ENCRYPTED, FINLEY_TOKEN, and KAFKA_CA_PEM_VAL with the appropriate values:
$ IS_INSTALL_HOME/ASBServer/bin/ -set -key -value remote
$ IS_INSTALL_HOME/ASBServer/bin/ -set -key -value true
$ IS_INSTALL_HOME/ASBServer/bin/ -set -key -value UG_HOST:2181/kafka
$ IS_INSTALL_HOME/ASBServer/bin/ -set -key -value UG_HOST:9092
$ IS_INSTALL_HOME/ASBServer/bin/ -set -key -value true
$ IS_INSTALL_HOME/ASBServer/bin/ -set -key -value IS_INSTALL_HOME/Kafka/ug-host-truststore.jks
$ IS_INSTALL_HOME/ASBServer/bin/ -set -key -value "SASL_SSL"
$ IS_INSTALL_HOME/ASBServer/bin/ -set -key -value KAFKA_USERNAME
$ IS_INSTALL_HOME/ASBServer/bin/ -set -key -value "JKS"
$ IS_INSTALL_HOME/ASBServer/bin/ -set -key -value https://UG_HOST/solr
$ IS_INSTALL_HOME/ASBServer/bin/ -set -key -value SOLR_USERNAME
$ IS_INSTALL_HOME/ASBServer/bin/ -set -key -value FINLEY_TOKEN
$ IS_INSTALL_HOME/ASBServer/bin/ -set -key -value UG_HOST
$ IS_INSTALL_HOME/ASBServer/bin/ -set -key -value KAFKA_CA_PEM_VAL
The value of FINLEY_TOKEN must match the finley_token value defined in the inventory file.

The value of KAFKA_CA_PEM_VAL is the content of the generated /tmp/kafka_ca.pem file on the microservices tier host. Make sure that the first line (-----BEGIN CERTIFICATE-----), the last line (-----END CERTIFICATE-----), and all newline characters and spaces are removed before setting the KAFKA_CA_PEM_VAL value.
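The stripping can be done with a short shell pipeline; strip_pem is an illustrative helper, not a product script:

```shell
# Flatten the CA certificate for KAFKA_CA_PEM_VAL: drop the BEGIN/END
# lines, then delete every whitespace character (including newlines).
strip_pem() { grep -v -e '-----' "$1" | tr -d '[:space:]'; }
if [ -f /tmp/kafka_ca.pem ]; then
  strip_pem /tmp/kafka_ca.pem
fi
```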
Finally, restart WebSphere Application Server for the changes to become effective. Verify that no errors are reported upon startup in the WebSphere system output log.

Configure the engine tier to connect to the microservices tier

First, copy the IS_INSTALL_HOME/Kafka/ug-host-truststore.jks file, created according to the instructions in the previous section of this document, from the services tier to the same location on the engine tier. Note: this step is not required if (released June 2020) or later is used for installation.

Then, update the connection details by using the following command:
$ IS_INSTALL_HOME/ASBNode/odf/ "isadminuser" "isadminpwd" "isserverhost" isserverport
where isadminuser is the Information Server administrator username, isadminpwd is the Information Server administrator password, and isserverhost and isserverport are the hostname and port of the services tier, respectively.

Verify that the following properties exist in the file /opt/IBM/InformationServer/ASBNode/conf/ and the values match the iisAdmin properties in the services tier:
Finally, restart the ODFEngine:
$ service ODFEngine stop
$ service ODFEngine start

Using a custom privilege escalation method

When installed by a non-root user, the microservices tier installer requires root privilege escalation to set up crucial components, such as Docker and Kubernetes. The default privilege escalation method is sudo, however, it is possible to change it to a different technology supported by Ansible. The list of supported methods, called "become plugins", is available at

To use a custom privilege escalation method when installing Ansible, as described in the Install Ansible section of this document, export the BECOME_CMD environment variable before you run the script, for example:
$ export BECOME_CMD='ksu'
Alternatively, use the privilege escalation utility to launch the root session and run the script in it. The script detects that it is running as the root user and does not use any additional privilege escalation.

When Ansible is installed, it takes over privilege escalation for the entire remaining installation process. To configure Ansible for a custom privilege escalation method, edit the INSTALL_DIR/ansible.cfg INI file and add the necessary Ansible become plugin configuration entries. Follow the specific plugin's documentation for the details on how to configure it. For example:
[privilege_escalation]
become_method = ksu
APPENDIX: List of inventory configuration variables

I. Configuration variables related to users and groups.

The microservices tier installer creates a set of users and a related group on all of the microservices tier hosts. You can influence the UIDs and the GID of these users and the group by providing additional inventory variables, as shown in the following table:

Users with a given UID and GID can also be created before installing the product. The installer does not attempt to create or modify existing users or groups, provided that the configuration variables match the actual UID/GID values on all microservices tier host operating systems.
User name Default UID Configuration variable name
docker_registry 5000 docker_registry_run_as_user_uid
ug_default 2000 ug_default_user_uid
ug_elasticsearch 9200 elasticsearch_user_uid
ug_kibana 5601 kibana_user_uid
ug_prometheus 9090 prometheus_user_uid
ug_grafana 9091 grafana_user_uid
ug_zookeeper 2181 zookeeper_user_uid
ug_kafka 9092 kafka_user_uid
ug_solr 8983 solr_user_uid
ug_cassandra 9042 cassandra_user_uid
ug_redis 6379 redis_user_uid
Group name Default GID Configuration variable name
ugdata 2000 ug_local_storage_group_gid
II. Configuration variables related to connectivity to other tiers
Configuration variable name Description Default value
iis_server_host The FQDN (fully-qualified domain name) of the Services tier host None - mandatory variable
iis_server_port The port of the Services tier WebSphere Application Server 9443
iis_db_host The FQDN (fully-qualified domain name) of the XMETA database host Derived from iis_server_host variable
iis_db_port The port of the XMETA database service 50000
iis_db_type The type of the XMETA database; valid values are: "db2", "oracle" and "sqlserver" db2
iis_db_name The name of the XMETA database xmeta
iis_db_user The name of the XMETA database user xmeta
iis_db_password The password of the XMETA database user None - mandatory variable
iis_db_url The JDBC URL of the XMETA database Derived from other iis_db_* variables
iis_db_driver The JDBC driver class of the XMETA database; valid values are "", "" and ""
iis_db_oracle_type When the Database tier uses Oracle database, the kind of database connection; valid values are "SID" and "serviceName" serviceName
iis_db_sr_host The FQDN (fully-qualified domain name) of the XMETA staging repository database host Derived from iis_db_host variable
iis_db_sr_port The port of the XMETA staging repository database service Derived from iis_db_port variable
iis_db_sr_type The type of the XMETA staging repository database; valid values are: "db2", "oracle" and "sqlserver" Derived from iis_db_type variable
iis_db_sr_name The name of the XMETA staging repository database Derived from iis_db_name variable
iis_db_sr_user The name of the XMETA staging repository database user Derived from iis_db_user variable
iis_db_sr_password The password of the XMETA staging repository database user Derived from iis_db_password variable
iis_db_sr_url The JDBC URL of the XMETA staging repository database Derived from iis_db_url variable
iis_db_sr_driver The JDBC driver class of the XMETA staging repository database; valid values are "", "" and "" Derived from iis_db_driver variable
iis_db_sr_oracle_type When the Database tier uses Oracle database, the kind of staging repository database connection; valid values are "SID" and "serviceName" Derived from iis_db_oracle_type variable

Document Location


[{"Business Unit":{"code":"BU059","label":"IBM Software w\/o TPS"},"Product":{"code":"SSZJPZ","label":"IBM InfoSphere Information Server"},"ARM Category":[{"code":"a8m0z0000000CabAAE","label":"Suite Installer-\u003EMicroservices Tier"}],"ARM Case Number":"","Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"11.7.1","Line of Business":{"code":"LOB10","label":"Data and AI"}}]

Document Information

Modified date:
26 October 2022