How To
Summary
While installing Information Server, you can install the microservices tier alongside other tiers by using the standard Information Server installer. This document presents an alternative approach: you install Information Server without the microservices tier, and then add the microservices tier to the product instance afterward. This procedure is more advanced, but it also offers greater flexibility in how the microservices tier is configured.
This document also addresses the scenario of first upgrading all tiers of Information Server 11.7.1.1 (or later) except the microservices tier, and then upgrading the microservices tier separately to the same fix pack level. Most importantly, you must use this upgrade approach if you installed the microservices tier by using the steps described in this document, in order to avoid overwriting your custom inventory file. In this case, skip upgrading the microservices tier with the Information Server Update installer, and instead follow the steps described in this document to upgrade the microservices tier to the same fix pack level.
Steps
Before you begin
- Ensure that at least the database and services tiers of Information Server are installed and operational. Refer to the Information Server installation documentation at https://www.ibm.com/support/knowledgecenter/SSZJPZ_11.7.0/com.ibm.swg.im.iis.install.nav.doc/containers/cont_iis_information_server_installation.html.
- Plan your microservices tier architecture. Depending on your needs (for example, test versus production), you can install the microservices tier by using a single-node or a three-node setup. Refer to the system requirements page (https://www.ibm.com/support/pages/node/795153) for a detailed set of requirements for the microservices tier hardware and operating system.
- Designate one host as the microservices tier control plane node. This node will also be used to run the installation and to perform any future maintenance actions, including applying upgrades.
- During the installation process, SSH communication from the control plane node to the remaining microservices tier node hosts must be possible. It is advisable to test connectivity before you start the installation, and to ensure that all nodes are added to the known hosts file of the user who will run the installation on the control plane host.
- Fast network communication between cluster nodes must be ensured both during and after installation. For a list of mandatory open ports, see https://www.ibm.com/support/knowledgecenter/SSZJPZ_11.7.0/com.ibm.swg.im.iis.productization.iisinfsv.install.doc/topics/t_prep_ms_node_1171.html.
- If you are installing as a non-root user, you must install and configure a root privilege escalation utility on all microservices tier hosts. The default privilege escalation method is sudo. To use a different method, you must perform additional steps during the installation. Read the instructions in the section Using a custom privilege escalation method before you attempt the installation.
- Ensure that the database tier and services tier fully qualified domain names (FQDNs) resolve properly in DNS on all microservices tier hosts. You can check that the names resolve properly by using the nslookup command.
- To ensure proper non-root process ownership and local storage rights, the installer creates a set of system users and groups on all microservices tier nodes, if they do not already exist. For a list of users, see the appendix at the bottom of this document.
- Part of the microservices tier is installed from RPM packages by using a local YUM repository. The RPM packages are included in the installation bundle; however, they have a set of system-level dependencies that must either be preinstalled or available from an active RHEL subscription or a local YUM repository. For a list of required system-level packages, see the system requirements page (https://www.ibm.com/support/pages/node/795153).
- When performing an upgrade, ensure that Kubernetes and the pods are running (or completed, in the case of job pods) by checking the kubelet service status (with the "systemctl status kubelet.service" command) and by running the "kubectl get pods --all-namespaces" command and reviewing the pod statuses.
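The DNS portion of these checks can be sketched as a small helper function. This is an illustrative sketch, not part of the product; getent is used instead of nslookup so the check does not depend on the bind-utils package:

```shell
# Illustrative pre-flight check: verify that a hostname resolves.
# Replace "localhost" with the services tier and database tier FQDNs,
# and run the check on every microservices tier host.
check_dns() {
  if getent hosts "$1" >/dev/null; then
    echo "DNS OK: $1"
  else
    echo "DNS FAIL: $1"
  fi
}
check_dns localhost
```

A similar loop over `ssh -o BatchMode=yes <node> true` from the control plane node can confirm that password-less SSH to each worker node works before the installer needs it.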
Download and unpack the microservices tier installation bundle
The microservices installer is distributed as a compressed TAR archive. From Passport Advantage, download the Information Server microservices tier installation bundle version that matches your Information Server version.
For Information Server 11.7.1.1, download is-enterprise-search-11.7.1.1.tar.gz of part number CC77WML.
For Information Server 11.7.1.2, download is-ent-search-11.7.1.2-RH8.tar.gz of part number CC7VEML.
Upload the installation media to the microservices tier control plane node and extract it to a directory of your choice. For example:
$ mkdir ~/installer
$ tar zxf is-enterprise-search-11.7.1.1.tar.gz -C ~/installer
Throughout this document, the unpack directory will be referred to as INSTALL_DIR. All commands referenced in this document will assume the current working directory is INSTALL_DIR.
Important access rights information:
- The user who runs the installation must have full access (read, write and execute) to the INSTALL_DIR directory and at least read and execute access to the INSTALL_DIR parent directory hierarchy.
- Important: The INSTALL_DIR directory must not be globally writable. Otherwise, the installation behavior might become unpredictable, because Ansible stops reading the ansible.cfg configuration file in world-writable directories. Refer to https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir for more details.
Install Ansible
Ansible is the main utility that is used by the microservices tier installer to run actions on all the microservices tier hosts.
As of Information Server 11.7.1.2, Ansible must be preinstalled on the microservices tier control plane node before you start the installation. Consult Installing Ansible as a prerequisite to IBM InfoSphere Information Server microservices tier installation for detailed steps on how to install and configure Ansible for the microservices tier installation.
In version 11.7.1.1, Ansible is provided with the microservices tier installer. To install Ansible on version 11.7.1.1, run the ansible_install.sh script:
$ ./ansible_install.sh
[INFO] Using default become method: sudo -n
[INFO] Checking for Ansible...
[INFO] Copying local Ansible YUM repository...
[INFO] Checking for YUM package: ansible-2.9.5...
[INFO] Checking for YUM package: python-netaddr...
[INFO] Checking for YUM package: python-dns...
[INFO] Installing Ansible using local YUM repository...
[INFO] Ansible is now available at /usr/bin/ansible
[INFO] Ansible version is 2.9.5
[PASS] Ansible version is supported
[INFO] Ansible uses python2
[INFO] Checking for required python2 libraries...
OK: Module netaddr was found
OK: Module dns was found
[PASS] Required python2 libraries are installed
When upgrading to the 11.7.1.1 release, use the following command to upgrade the existing Ansible runtime that was installed with the previous product version:
# ./ansible_install.sh -u
Prepare the Ansible inventory file
The Ansible inventory file describes the microservices tier topology and defines all the installation parameters. The inventory file must be named "inventory.yaml" and placed directly in the INSTALL_DIR directory.
Important: The inventory file uses the YAML format, which expresses the document's hierarchical structure through precise indentation. When you edit the inventory file, use exactly two space characters for each indentation level and do not use other whitespace characters, such as tabs.
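Tab characters are a common cause of YAML parsing failures. After you edit the file, a quick check like the following sketch can confirm that none slipped in (demonstrated here against a generated sample file; point the grep at your inventory.yaml instead):

```shell
# Create a small sample file for demonstration; in practice check inventory.yaml.
printf 'all:\n  hosts:\n' > /tmp/sample_inventory.yaml
# YAML indentation must not contain tab characters.
if grep -q "$(printf '\t')" /tmp/sample_inventory.yaml; then
  echo "Tabs found - fix the indentation"
else
  echo "No tabs found"
fi
```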
The easiest way to start creating an inventory file is to copy a default one:
$ cp defaults/default_inventory.yaml inventory.yaml
If the installation topology includes additional nodes, edit the inventory file and add the node connection details to the "kubernetes" group. An example three-node setup that uses password-less SSH as a custom user might look like the following:
all:
  hosts:
    deployment_coordinator:
      ansible_host: localhost
      ansible_connection: local
  children:
    kubernetes:
      children:
        masters:
          hosts:
            master-1:
              ansible_host: localhost
              ansible_connection: local
        workers:
          hosts:
            worker-1:
              ansible_host: 10.10.10.2
              ansible_user: installuser
            worker-2:
              ansible_host: 10.10.10.3
              ansible_user: installuser
  vars:
    ...
    ...
For a list of all available SSH connection customization options for the additional microservices tier nodes, see Ansible documentation: https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html#connecting-to-hosts-behavioral-inventory-parameters.
The "all:vars:" section is the most basic place to define installation parameters. Every variable must be indented with exactly four space characters, as in the following example:
all:
  ...
  ...
  vars:
    image_registry_host: "{{ hostvars[groups.masters[0]].ansible_nodename|lower }}"
    image_registry_port: 5000
    image_registry_username: "{{ lookup('env', 'REGISTRY_USERNAME') | default('admin', true) }}"
    image_registry_password: "secret!"
    iis_server_host: "iis-services.acme.com"
    iis_server_port: 9443
    iis_admin_user: "isadmin"
    iis_admin_password: "secret!"
    iis_db_type: "db2"
    iis_db_host: "iis-db.acme.com"
    iis_db_port: 50000
    iis_db_user: "xmeta"
    iis_db_password: "secret!"
    iis_db_name: "xmeta"
    iis_db_driver: "com.ibm.db2.jcc.DB2Driver"
    iis_db_url: "jdbc:db2://iis-db.acme.com:50000/xmeta"
    iis_db_sr_type: "db2"
    iis_db_sr_host: "iis-db.acme.com"
    iis_db_sr_port: 50000
    iis_db_sr_user: "xmetasr"
    iis_db_sr_password: "secret!"
    iis_db_sr_name: "xmeta"
    iis_db_sr_driver: "com.ibm.db2.jcc.DB2Driver"
    iis_db_sr_url: "jdbc:db2://iis-db.acme.com:50000/xmeta"
    ug_local_storage_dir: "/var/lib/ibm/ugdata"
    kube_pod_subnet: "10.32.0.0/12"
    kube_service_subnet: "10.96.0.0/12"
    finley_token: "secret!"
    zookeeper_sasl_enable: "yes"
    kafka_zookeeper_sasl_enable: "yes"
    kafka_sasl_enable: "yes"
    kafka_ssl_enable: "yes"
    solr_zookeeper_sasl_enable: "yes"
    solr_auth_enable: "yes"
    kafka_sasl_users:
      kafka: "secret!"
    solr_auth_basic_username: "solr"
    solr_auth_basic_password: "secret!"
The values "iis-services.acme.com" and "iis-db.acme.com" must be replaced with the FQDNs of the services tier and the database tier, respectively. The supported database connection types are "db2" for IBM Db2, "oracle" for Oracle Database, and "sqlserver" for Microsoft SQL Server. See the appendix for more details on the variables related to connectivity with other tiers.
The value of the finley_token variable can be arbitrary text. A random secret can be generated in several ways, for example:
$ openssl rand -base64 8
The values provided to the kube_pod_subnet and kube_service_subnet variables must not clash with existing host subnets. Consult the product documentation page for more details: https://www.ibm.com/support/knowledgecenter/SSZJPZ_11.7.0/com.ibm.swg.im.iis.productization.iisinfsv.install.doc/topics/t_prep_ms_node_1171.html.
Because some of the variables contain sensitive data, change the inventory file permissions to 0600 (read-write for the owner only).
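For example (demonstrated with a placeholder file; run the chmod against INSTALL_DIR/inventory.yaml):

```shell
# Restrict the file to read/write for the owner only.
touch /tmp/demo_inventory.yaml      # placeholder for INSTALL_DIR/inventory.yaml
chmod 0600 /tmp/demo_inventory.yaml
stat -c '%a' /tmp/demo_inventory.yaml
```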
Configure the JWT verification certificate
In addition to the inventory file, the microservices tier requires Information Server JSON Web Token (JWT) certificate to validate authentication tokens.
To configure the JWT certificate, copy the IIS_INSTALL_DIR/lib/iis/tknproperties/tokenservicepublic.cer file from the Information Server services tier into the INSTALL_DIR/files directory on the microservices tier. The default location of the IIS_INSTALL_DIR on Linux is /opt/IBM/WebSphere/AppServer/profiles/InfoSphere for WebSphere Application Server Network Deployment and /opt/IBM/InformationServer/wlp/usr/servers/iis for WebSphere Liberty.
Run the microservices tier installation or upgrade
The installation is started with the install.sh script. The script reads the inventory file as input, so no additional parameters are needed. When started, the script performs a basic inventory sanity check and asks for confirmation:
$ ./install.sh
[INFO] Console log output file: /opt/IBM/UGinstall/logs/ug_install_2020_05_25_19_53_07.log
[INFO] Checking for Ansible...
[INFO] Ansible version is 2.9.5
[PASS] Ansible version is supported
[INFO] Ansible uses python2
[INFO] Checking for required python2 libraries...
OK: Module netaddr was found
OK: Module dns was found
[PASS] Required python2 libraries are installed
[INFO] Checking hosts connectivity...
master-1 | SUCCESS => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "ping": "dest=localhost"}
deployment_coordinator | SUCCESS => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "ping": "dest=localhost"}
Do you want to proceed? (yes/no):
When upgrading, use the following command instead:
$ ./upgrade.sh
At this point, verify that the list of hosts is complete. The list should include all microservices tier hosts and an additional "deployment_coordinator" alias. To continue, type "yes" and press Enter.
The first tasks that the installer runs are the input parameter and prerequisite checks. Problems found during the checks are printed to the console, and the script continues until all checks are complete. If any check fails, the script then prints an error message and a recap similar to the following example:
TASK [Summary of all prerequisite checks] *************************************************************************************************************************************************************************
Friday 22 May 2020 15:15:06 +0000 (0:00:00.053) 0:00:42.606 ************
ok: [deployment_coordinator] => {
"changed": false,
"msg": "All prerequisite checks passed"
}
fatal: [master-1]: FAILED! => {
"assertion": "not (validate_kube_platform_prereqs_failed | default(omit) | bool)",
"changed": false,
"evaluated_to": false,
"msg": "One or more prerequisite checks have failed, please check for more details in the log above."
}
NO MORE HOSTS LEFT ************************************************************************************************************************************************************************************************
PLAY RECAP ********************************************************************************************************************************************************************************************************
deployment_coordinator : ok=14 changed=0 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
master-1 : ok=60 changed=0 unreachable=0 failed=1 skipped=8 rescued=4 ignored=0
The complete prerequisite check console log contains the error messages in detail. The same log can also be found in the INSTALL_DIR/logs directory.
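A quick way to locate failures is to scan the newest log for FAILED entries. The following sketch demonstrates the idea with a placeholder log file; in a real run, point it at the INSTALL_DIR/logs directory:

```shell
# Create a placeholder log so the example is self-contained.
mkdir -p /tmp/demo_logs
printf 'ok: [deployment_coordinator]\nfatal: [master-1]: FAILED!\n' > /tmp/demo_logs/ug_install_demo.log
# Pick the newest log and list the failed tasks with line numbers.
latest=$(ls -t /tmp/demo_logs/*.log | head -1)
grep -n 'FAILED' "$latest"
```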
If the prerequisite checks are successful, the installation continues with no further action required. The installation takes 20-40 minutes on average, but might take longer depending on machine performance and the number of nodes. A successful product installation finishes with a success message logged on the console and a PLAY RECAP reporting no failed tasks:
2020-05-26 02:13:54,926 p=32401 u=root n=ansible | TASK [com/ibm/ugi/iis/common : Summary of UG services rollout status check] ****
2020-05-26 02:13:54,926 p=32401 u=root n=ansible | Tuesday 26 May 2020 02:13:54 +0200 (0:00:00.549) 0:32:17.533 ***********
2020-05-26 02:13:54,960 p=32401 u=root n=ansible | ok: [deployment_coordinator] => {
"changed": false,
"msg": "All UG services have deployed successfully"
}
2020-05-26 02:13:54,965 p=32401 u=root n=ansible | PLAY [Create and upload version configmap to Kubernetes] ***********************
2020-05-26 02:13:54,976 p=32401 u=root n=ansible | TASK [com/ibm/ugi/kubeplatform/product_version : Deploy product version configmap] ***
2020-05-26 02:13:54,976 p=32401 u=root n=ansible | Tuesday 26 May 2020 02:13:54 +0200 (0:00:00.050) 0:32:17.583 ***********
2020-05-26 02:13:55,555 p=32401 u=root n=ansible | changed: [deployment_coordinator]
2020-05-26 02:13:55,558 p=32401 u=root n=ansible | PLAY RECAP *********************************************************************
2020-05-26 02:13:55,559 p=32401 u=root n=ansible | deployment_coordinator : ok=354 changed=151 unreachable=0 failed=0 skipped=46 rescued=0 ignored=0
2020-05-26 02:13:55,559 p=32401 u=root n=ansible | localhost : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2020-05-26 02:13:55,559 p=32401 u=root n=ansible | master-1 : ok=248 changed=94 unreachable=0 failed=0 skipped=53 rescued=0 ignored=0
2020-05-26 02:13:55,560 p=32401 u=root n=ansible | Tuesday 26 May 2020 02:13:55 +0200 (0:00:00.583) 0:32:18.167 ***********
2020-05-26 02:13:55,560 p=32401 u=root n=ansible | ===============================================================================
Configure the services tier to connect to the microservices tier
You must configure the services tier to set up the connection to common services that run on the microservices tier, such as Kafka and Solr.
Before you configure the services tier, obtain a Kafka CA certificate. Run the following command in the microservices tier shell:
$ INSTALL_DIR/run_playbook.sh -y INSTALL_DIR/playbooks/shared_services/kafka_get_ca_crt.yaml -e kafka_ssl_ca_crt_file=/tmp/kafka_ca.pem
After the command finishes, copy the /tmp/kafka_ca.pem file to the same location on the services tier host. Then, on the services tier machine, use the following commands to create a JKS truststore that will be used for Kafka client connections, replacing IS_INSTALL_HOME with the actual installation location:
$ IS_INSTALL_HOME/jdk/bin/keytool -import -alias kafka -file /tmp/kafka_ca.pem -keystore /tmp/ug-host-truststore.jks -storepass secret! -noprompt
$ mkdir -p IS_INSTALL_HOME/Kafka
$ chmod 755 IS_INSTALL_HOME/Kafka
$ cp /tmp/ug-host-truststore.jks IS_INSTALL_HOME/Kafka
$ chmod 644 IS_INSTALL_HOME/Kafka/ug-host-truststore.jks
Note: For a clustered WebSphere installation, the ug-host-truststore.jks file must be copied to IS_INSTALL_HOME/Kafka on all WebSphere nodes.
Next, encrypt the Kafka and Solr passwords that are specified in the microservices tier inventory file, as well as the Kafka truststore password that was used in the previous step (invoke the command multiple times if the passwords differ):
$ IS_INSTALL_HOME/ASBServer/bin/encrypt.sh secret!
{iisenc}lu9tzC91cvKRMWdLhX4IlA==
Next, set the appropriate services tier configuration properties, replacing UG_HOST with the actual hostname of the microservices tier control plane host, and replacing KAFKA_USERNAME, KAFKA_PASSWORD_ENCRYPTED, KAFKA_TRUSTSTORE_PASS_ENCRYPTED, SOLR_USERNAME, SOLR_PASSWORD_ENCRYPTED, FINLEY_TOKEN, and KAFKA_CA_PEM_VAL with the appropriate values:
$ IS_INSTALL_HOME/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.sos.mode -value remote
$ IS_INSTALL_HOME/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.sos.acceptAllCertificates -value true
$ IS_INSTALL_HOME/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.sdp.zookeeper.connect -value UG_HOST:2181/kafka
$ IS_INSTALL_HOME/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.sdp.kafka.bootstrap.servers -value UG_HOST:9092
$ IS_INSTALL_HOME/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.ug.microservice.indexing.isEnabled -value true
$ IS_INSTALL_HOME/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.events.kafka.truststoreLocation -value IS_INSTALL_HOME/Kafka/ug-host-truststore.jks
$ IS_INSTALL_HOME/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.events.kafka.securityProtocol -value "SASL_SSL"
$ IS_INSTALL_HOME/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.events.kafka.truststorePassword -value KAFKA_TRUSTSTORE_PASS_ENCRYPTED
$ IS_INSTALL_HOME/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.events.kafka.saslUser -value KAFKA_USERNAME
$ IS_INSTALL_HOME/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.events.kafka.saslPassword -value KAFKA_PASSWORD_ENCRYPTED
$ IS_INSTALL_HOME/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.events.kafka.truststoreType -value "JKS"
$ IS_INSTALL_HOME/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.solr.search.connect -value https://UG_HOST/solr
$ IS_INSTALL_HOME/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.solr.search.user -value SOLR_USERNAME
$ IS_INSTALL_HOME/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.solr.search.password -value SOLR_PASSWORD_ENCRYPTED
$ IS_INSTALL_HOME/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.ug.finley_token -value FINLEY_TOKEN
$ IS_INSTALL_HOME/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.ug.host.name -value UG_HOST
$ IS_INSTALL_HOME/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.events.kafka.ca.pem -value KAFKA_CA_PEM_VAL
The value of the FINLEY_TOKEN must match the one defined in the inventory file.
The value of KAFKA_CA_PEM_VAL is the content of the generated kafka_ca.pem file in the /tmp directory on the microservices tier host. Make sure that the first line (-----BEGIN CERTIFICATE-----), the last line (-----END CERTIFICATE-----), and any newline characters or spaces are removed before setting the KAFKA_CA_PEM_VAL value. A sample value looks like this:
MIIDKzCCAhOgAwIBAgIJAJV+2eUVd+UtMA0GCSqGSIb3DQEBCwUAMCwxKjAoBgNVBAMMIVVHIEthZmthIENBIHZtLWludC0yLmZ5cmUuaWJtLmNvbTAeFw0yMDA2MDQwOTI3NTdaFw00NzEwMjAwOTI3NTdaMCwxKjAoBgNVBAMMIVVHIEthZmthIENBIHZtLWludC0yLmZ5cmUuaWJtLmNvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOJxeF5k3m2AWhPjaGBuIjrz2YxZxT2domdX+zKJzGbzdWvemjtXd+WfmZBsFN5CWacP6IN/chNGHP4G3NAGsCT7zishw21mkIpTPFqbliVlwZwZbE3QBvvAlHcuypEH8Osz07txtKsBd+wB3YlD3WwkOUSX5O454GjIrk3pEngnLZ4pSgbvoSR2yBCs3WqFp73DHF0GYZk/wcHmbiF+sDQPNChroht0+NRDQ0g6uAY8dt0hBFqtYY81MF2uTIFiYMS6hk5ztDZaPB22C9GvwKh0gBZUnVCpvo9SjH+trKvg9WfnMnsV4skinQ0sDHQXKpisQaW7TCbw2RBe5dz0ljECAwEAAaNQME4wHQYDVR0OBBYEFLVUDMHeQyyLqKg8HeANrNzkzX/SMB8GA1UdIwQYMBaAFLVUDMHeQyyLqKg8HeANrNzkzX/SMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBALVUU/MMqXmDkIm6rky0vGiuUTg8r6NmB+rqbvpzEhg+4DjKwSrzMr9lu1eOmodjZr/LI7UNns1T2crK0zePZqeUqm7wo6Yw6JizWozHmE8ZLA6TSonwyia8acwLnl/y8XELZ8TczS7kZ7ZIGcosIstLIqj0bVyzIFyv4tcXblsRJrOskFenqoyFk8wFjrNo+biw56FRfvqDM1OBH2Jmkjg5P9ctw0mKYYtE3VKqGgTn4BJ+DenTIQ3jTL/Lkd2Q6B8jgtu6F6N9AvI7UJf/ljtwPiM+6X/RNikdyt7yOvBAmDoXSrZrJ+Qj70pUpxHr/kdvDcsKyivbNWvEuXu7/PU=
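Rather than editing the value by hand, the stripping can be done with standard tools. The following sketch demonstrates the idea with a shortened sample certificate; in practice, run the grep/tr pipeline against /tmp/kafka_ca.pem:

```shell
# Shortened sample certificate so the example is self-contained.
cat > /tmp/sample_ca.pem <<'EOF'
-----BEGIN CERTIFICATE-----
MIIDKzCCAhOg
AwIBAgIJAJV+
-----END CERTIFICATE-----
EOF
# Drop the BEGIN/END lines and remove all whitespace, leaving only the base64 body.
KAFKA_CA_PEM_VAL=$(grep -v -- '-----' /tmp/sample_ca.pem | tr -d '[:space:]')
echo "$KAFKA_CA_PEM_VAL"
```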
Finally, restart WebSphere Application Server for the changes to become effective. Verify that no errors are reported upon startup in the WebSphere system output log.
Configure the engine tier to connect to the microservices tier
First, copy the IS_INSTALL_HOME/Kafka/ug-host-truststore.jks file, created according to the instructions in the previous section of this document, from the services tier to the same location on the engine tier. Note: This step is not required if version 11.7.1.1 (released June 2020) or later is used for the installation.
Then, update the connection details by using the following command:
$ IS_INSTALL_HOME/ASBNode/odf/updateZKConnectString.sh "isadminuser" "isadminpwd" "isserverhost" isserverport
where isadminuser is the Information Server administrator username, isadminpwd is the Information Server administrator password, and isserverhost and isserverport are the hostname and port of the services tier, respectively.
Verify that the following properties exist in the file /opt/IBM/InformationServer/ASBNode/conf/odf.properties and the values match the iisAdmin properties in the services tier:
com.ibm.iis.events.kafka.saslPassword=KAFKA_PASSWORD_ENCRYPTED
com.ibm.iis.events.kafka.saslUser=KAFKA_USERNAME
com.ibm.iis.events.kafka.securityProtocol=SASL_SSL
com.ibm.iis.events.kafka.truststoreLocation=IS_INSTALL_HOME/Kafka/ug-host-truststore.jks
com.ibm.iis.events.kafka.truststorePassword=KAFKA_TRUSTSTORE_PASS_ENCRYPTED
com.ibm.iis.events.kafka.truststoreType=JKS
com.ibm.iis.sos.mode=remote
odf.zookeeper.connect=UG_HOST\:2181/kafka
Finally, restart the ODFEngine:
$ service ODFEngine stop
$ service ODFEngine start
Using a custom privilege escalation method
When the installation is performed by a non-root user, the microservices tier installer requires root privilege escalation to set up crucial components, such as Docker and Kubernetes. The default privilege escalation method is sudo; however, it is possible to change it to a different technology supported by Ansible. The list of supported methods, called "become plugins", is available at https://docs.ansible.com/ansible/latest/plugins/become.html#plugin-list.
To use a custom privilege escalation method when installing Ansible, as described in the Install Ansible section of this document, export the BECOME_CMD environment variable before you run the ansible_install.sh script, for example:
$ export BECOME_CMD='ksu'
$ INSTALL_DIR/ansible_install.sh
Alternatively, use the privilege escalation utility to launch the root session and run the ansible_install.sh script in it. The script detects that it is running as the root user and does not use any additional privilege escalation.
When Ansible is installed, it takes over privilege escalation for the entire remaining installation process. To configure Ansible for a custom privilege escalation method, edit the INSTALL_DIR/ansible.cfg INI file and add the necessary Ansible become plugin configuration entries. Follow the specific plugin's documentation for details on how to configure it. For example:
[privilege_escalation]
become_method = ksu
Additional Information
APPENDIX: List of inventory configuration variables
I. Configuration variables related to users and groups.
The microservices tier installer creates a set of users and a related group on all of the microservices tier hosts. You can influence the UIDs of the users and the GID of the group by providing additional inventory variables, as shown in the following table.
Users with a given UID and GID can also be created before installing the product. The installer does not attempt to create or modify existing users or groups, provided that the configuration variables match the actual UID/GID values on the operating systems of all microservices tier hosts.
| User name | Default UID | Configuration variable name |
|---|---|---|
| docker_registry | 5000 | docker_registry_run_as_user_uid |
| ug_default | 2000 | ug_default_user_uid |
| ug_elasticsearch | 9200 | elasticsearch_user_uid |
| ug_kibana | 5601 | kibana_user_uid |
| ug_prometheus | 9090 | prometheus_user_uid |
| ug_grafana | 9091 | grafana_user_uid |
| ug_zookeeper | 2181 | zookeeper_user_uid |
| ug_kafka | 9092 | kafka_user_uid |
| ug_solr | 8983 | solr_user_uid |
| ug_cassandra | 9042 | cassandra_user_uid |
| ug_redis | 6379 | redis_user_uid |

| Group name | Default GID | Configuration variable name |
|---|---|---|
| ugdata | 2000 | ug_local_storage_group_gid |
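To override these IDs, place the corresponding variables in the all:vars: section of the inventory file, alongside the other installation parameters. The values below are arbitrary examples; each ID must be unused on every microservices tier host:

```yaml
all:
  vars:
    # Arbitrary example IDs - ensure they are free on all microservices tier hosts
    ug_default_user_uid: 3000
    kafka_user_uid: 3001
    docker_registry_run_as_user_uid: 3002
    ug_local_storage_group_gid: 3000
```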
II. Configuration variables related to connectivity with other tiers
| Configuration variable name | Description | Default value |
|---|---|---|
| iis_server_host | The FQDN (fully qualified domain name) of the Services tier host | None - mandatory variable |
| iis_server_port | The port of the Services tier WebSphere Application Server | 9443 |
| iis_db_host | The FQDN (fully qualified domain name) of the XMETA database host | Derived from iis_server_host variable |
| iis_db_port | The port of the XMETA database service | 50000 |
| iis_db_type | The type of the XMETA database; valid values are: "db2", "oracle" and "sqlserver" | db2 |
| iis_db_name | The name of the XMETA database | xmeta |
| iis_db_user | The name of the XMETA database user | xmeta |
| iis_db_password | The password of the XMETA database user | None - mandatory variable |
| iis_db_url | The JDBC URL of the XMETA database | Derived from other iis_db_* variables |
| iis_db_driver | The JDBC driver class of the XMETA database; valid values are "com.ibm.db2.jcc.DB2Driver", "com.ibm.isf.jdbc.oracle.OracleDriver" and "com.ibm.isf.jdbc.sqlserver.SQLServerDriver" | com.ibm.db2.jcc.DB2Driver |
| iis_db_oracle_type | When the Database tier uses Oracle database, the kind of database connection; valid values are "SID" and "serviceName" | serviceName |
| iis_db_sr_host | The FQDN (fully qualified domain name) of the XMETA staging repository database host | Derived from iis_db_host variable |
| iis_db_sr_port | The port of the XMETA staging repository database service | Derived from iis_db_port variable |
| iis_db_sr_type | The type of the XMETA staging repository database; valid values are: "db2", "oracle" and "sqlserver" | Derived from iis_db_type variable |
| iis_db_sr_name | The name of the XMETA staging repository database | Derived from iis_db_name variable |
| iis_db_sr_user | The name of the XMETA staging repository database user | Derived from iis_db_user variable |
| iis_db_sr_password | The password of the XMETA staging repository database user | Derived from iis_db_password variable |
| iis_db_sr_url | The JDBC URL of the XMETA staging repository database | Derived from iis_db_url variable |
| iis_db_sr_driver | The JDBC driver class of the XMETA staging repository database; valid values are "com.ibm.db2.jcc.DB2Driver", "com.ibm.isf.jdbc.oracle.OracleDriver" and "com.ibm.isf.jdbc.sqlserver.SQLServerDriver" | Derived from iis_db_driver variable |
| iis_db_sr_oracle_type | When the Database tier uses Oracle database, the kind of staging repository database connection; valid values are "SID" and "serviceName" | Derived from iis_db_oracle_type variable |
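For illustration, a hypothetical Oracle configuration could set only the values that differ from the defaults and let the installer derive the rest (the hostnames shown are placeholders; 1521 is the standard Oracle listener port):

```yaml
all:
  vars:
    iis_server_host: "iis-services.acme.com"
    iis_admin_user: "isadmin"
    iis_admin_password: "secret!"
    iis_db_type: "oracle"
    iis_db_host: "iis-db.acme.com"
    iis_db_port: 1521
    iis_db_driver: "com.ibm.isf.jdbc.oracle.OracleDriver"
    iis_db_oracle_type: "serviceName"
    iis_db_user: "xmeta"
    iis_db_password: "secret!"
    # iis_db_url and the iis_db_sr_* values are derived from the variables above
```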
Document Location
Worldwide
Document Information
Modified date:
30 November 2023
UID
ibm16241456