Deploying the Analytics subsystem in a VMware environment
You can add analytics data collection by deploying the IBM® API Connect Analytics OVA file on a VMware virtual server.
Before you begin
- Review the Deployment requirements on VMware.
- Review the Configuration on VMware.
- For information on deploying a cluster, see Configuring API Connect subsystems in a cluster on VMware.
- If you are upgrading from a previous version, see Upgrading in a VMware environment.
About this task
You must deploy the API Connect OVA template to create each Analytics virtual server that you want in your cloud.
By default, the Analytics subsystem is configured to store data so that users can review it in the Analytics user interface. After deploying the subsystem, you can optionally configure it to offload some data to a third-party service for review and storage. Any data that is not offloaded remains accessible from the Analytics user interface.
If you want to offload all Analytics data, you can optionally install the subsystem with the ingestion-only configuration. In this scenario, unused Analytics components (such as analytics-storage and analytics-client) are omitted from the topology. Only the components that are required for offloading data are deployed. The reduced topology requires less CPU, memory, and storage.
If you do not configure ingestion-only during installation, you can enable it later as explained in Enabling Analytics ingestion-only on VMware.
Settings that are required for the ingestion-only configuration are noted in the steps that follow.
Procedure
- Ensure that you obtained the distribution file and have a project directory, as described in First steps for deploying in a VMware environment.
-
Change to the project directory.
cd myProject
-
Create an analytics subsystem.
apicup subsys create analyt analytics
Where:
- analyt is the name of the analytics server that you are creating. You can assign it any name, as long as the identifier consists of lowercase alphanumeric characters or '-', with no spaces, starts with an alphabetic character, and ends with an alphanumeric character.
- analytics indicates that you want to create an Analytics microservice.
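The identifier rule for subsystem names can be captured in a short regular expression. The following is an illustrative sketch, not part of apicup; the function name is hypothetical:

```python
import re

# The naming rule described above: lowercase alphanumerics or '-', no spaces,
# starting with a letter and ending with a letter or digit.
NAME_RE = re.compile(r"^[a-z]([a-z0-9-]*[a-z0-9])?$")

def is_valid_subsystem_name(name: str) -> bool:
    """Return True if name satisfies the documented identifier rule."""
    return bool(NAME_RE.match(name))

print(is_valid_subsystem_name("analyt"))    # True
print(is_valid_subsystem_name("analyt-2"))  # True
print(is_valid_subsystem_name("2analyt"))   # False: must start with a letter
print(is_valid_subsystem_name("analyt-"))   # False: must end with a letter or digit
```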
The apiconnect-up.yml file that is in that directory is updated to add the analytics-related entries.
Tip: At any time, you can view the current analytics subsystem values in the apiconnect-up.yml file by running the apicup get command. If you have not updated a value, a default value is listed, if one is available.
apicup subsys get analyt
Sample output from apicup subsys get, after configuration is completed:

apicup subsys get analyt
Appliance settings
==================
Name                        Value                                        Description
----                        -----                                        ------
additional-cloud-init-file                                               (Optional) Path to additional cloud-init yml file
data-device                 sdb                                          VM disk device (usually `sdb` for SCSI or `vdb` for VirtIO)
default-password            $6$rounds=4096$iMCJ9cfhFJ8X$pbmAl9ClWzcYzHZFoQ6n7OnYCf/owQZIiCpAtWazs/FUn/uE8uLD.9jwHE0AX4upFSqx/jf0ZmDbHPZ9bUlCY1   (Optional) Console login password for `apicadm` user
dns-servers                 [1.2.136.11]                                 List of DNS servers
k8s-pod-network             172.16.0.0/16                                (Optional) CIDR for pods within the appliance
k8s-service-network         172.17.0.0/16                                (Optional) CIDR for services within the appliance
mode                        standard
public-iface                eth0                                         Device for API/UI traffic (Eg: eth0)
search-domain               [subnet1.example.com]                        List for DNS search domains
ssh-keyfiles                [/home/vsphere/.ssh/id_rsa.pub]              List of SSH public keys files
traffic-iface               eth0                                         Device for cluster traffic (Eg: eth0)
license-version             Production

Subsystem settings
==================
Name                        Value                                        Description
----                        -----                                        ------
es-max-memory-gb            16                                           Memory limit for elastic search

Endpoints
=========
Name                        Value                                        Description
----                        -----                                        ------
analytics-client            a7s-client.testsrv0233.subnet1.example.com   FQDN of Analytics client/UI endpoint
analytics-ingestion         a7s-in.testsrv0233.subnet1.example.com       FQDN of Analytics ingestion endpoint
- For production environments, specify mode=standard.
apicup subsys set analyt mode=standard
The mode=standard parameter indicates that you are deploying in high availability (HA) mode for a production environment. If the mode parameter is omitted, the subsystem deploys by default in dev mode, for use in development and testing. For more information, see Requirements for initial deployment on VMware.
- Version 2018.4.1.10 or later: Specify the license version you purchased.
apicup subsys set analyt license-version=<license_type>
The license_type must be either Production or Nonproduction. If not specified, the default value is Nonproduction.
- Optional:
Configure your logging.
Logging can be configured at a later time, but you must enable it before installation to capture the log events from the installation.
- Complete the procedure at Configuring remote logging for a VMware deployment.
-
Enter the following command to create the log file:
apicup subsys set analyt additional-cloud-init-file=config_file.yml
-
Enter the following commands to update the apiconnect-up.yml file with the information for your environment:
-
Use apicup to set your endpoints.
You can use wildcard aliases or host aliases with your endpoints. Optionally, you can specify all endpoints with one apicup command. See Tips and tricks for using APICUP.
Note: You cannot specify the underscore character "_" in domain names that are used in endpoints. See Configuration on VMware.
The endpoints must be unique hostnames that both point to the IP address of the OVA (single-node deployment), or to the IP address of a load balancer configured in front of the OVA nodes. See the examples in the sample output in step 3.
Setting: analytics-ingestion
Endpoint host description: Your unique_hostname identifies the endpoint that enables the Gateway to push your analytics data. The values for analytics-ingestion and analytics-client must be different.
apicup subsys set analyt analytics-ingestion=unique_hostname.domain

Setting: analytics-client
Endpoint host description: Your unique_hostname identifies the endpoint that enables the Cloud Manager, API Manager, and Developer Portal to communicate with the Analytics subsystem.
apicup subsys set analyt analytics-client=unique_hostname.domain
This setting is not needed for the ingestion-only configuration.
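The endpoint rules above (no underscore characters, and distinct ingestion and client hostnames) can be pre-checked before running apicup. This is an illustrative sketch, not part of apicup, and the hostnames are examples only:

```python
# Pre-flight check of the documented endpoint rules: no underscores, and the
# analytics-ingestion and analytics-client hostnames must differ.
def check_endpoints(ingestion: str, client: str) -> list:
    problems = []
    for label, host in (("analytics-ingestion", ingestion),
                        ("analytics-client", client)):
        if "_" in host:
            problems.append(f"{label}: underscore not allowed in {host!r}")
    if ingestion == client:
        problems.append("analytics-ingestion and analytics-client must differ")
    return problems

print(check_endpoints("a7s-in.subnet1.example.com",
                      "a7s-client.subnet1.example.com"))  # [] -> settings are usable
```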
-
To configure the Analytics subsystem for ingestion-only, include the following command:
apicup subsys set analyt ingestion-only=true
-
Set your search domain. Multiple search domains should be separated by
commas.
apicup subsys set analyt search-domain=your_search_domain
Where your_search_domain is the domain of your servers, entered in all lowercase. Setting this value ensures that your searches also append these values, which are based on your company's DNS resolution, at the end of the search value. A sample search domain is mycompany.example.com.
Ensure that the value for your_search_domain is resolved in the system's /etc/resolv.conf file to avoid "502" errors when accessing the Cloud Manager web site. For example:
# Generated by resolvconf search your_search_domain ibm.com other.domain.com
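As a sanity check of the resolv.conf requirement above, the search line can be parsed to confirm that your domain is present. A minimal sketch; the sample content mirrors the example shown:

```python
# Extract the domains listed on the "search" line of /etc/resolv.conf-style content.
def search_domains(resolv_conf: str) -> list:
    for line in resolv_conf.splitlines():
        parts = line.split()
        if parts and parts[0] == "search":
            return parts[1:]
    return []

sample = """# Generated by resolvconf
search your_search_domain ibm.com other.domain.com
"""
domains = search_domains(sample)
print(domains)
print("your_search_domain" in domains)  # True
```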
-
Set your domain name servers (DNS).
Supply the IP addresses of the DNS servers for your network. Use a comma to separate multiple server addresses.
apicup subsys set analyt dns-servers=ip_address_of_dns_server
DNS entries may not be changed on a cluster after the initial installation.
-
Set a Public key.
apicup subsys set analyt ssh-keyfiles=path_to_public_ssh_keyfile
Setting this key enables you to use ssh with this key to log in to the virtual machine to check the status of the installation. You will perform this check in step 29 of these instructions.
-
You can set the password that you enter to log in to your Analytics server for the first time.
- Important: Review the requirements for creating and using a hashed password. See Setting and using a hashed default password.
-
If you do not have a password hashing utility, install one.
Operating system: Ubuntu, Debian, OSX
If the mkpasswd command utility is not available, download and install it. (You can also use a different password hashing utility.) On OSX, use the command: gem install mkpasswd

Operating system: Windows, Red Hat
If necessary, install a password hashing utility such as OpenSSL.
-
Create a hashed password
Operating system: Ubuntu, Debian, OSX
mkpasswd --method=sha-512 --rounds=4096 password

Operating system: Windows, Red Hat
For example, using OpenSSL: openssl passwd -1 password
Note that you might need to add your password hashing utility to your path; for example, in Windows: set PATH=c:\cygwin64\bin;%PATH%
-
Set the hashed password for your subsystem:
apicup subsys set analyt default-password='hashed_password'
Notes:
- The password must be hashed. If it is in plain text, you cannot log in to the VMware console.
- The password can be used only to log in through the VMware console. You cannot use it to SSH into the appliance as an alternative to using the ssh-keyfiles.
- On Linux or OSX, use single quotes around hashed_password. For Windows, use double quotes.
- If you are using a non-English keyboard, understand the limitations with using the remote VMware console. See Requirements for initial deployment on VMware.
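Because a plain-text default-password locks you out of the VMware console, it can be worth checking the value's shape before passing it to apicup. A hedged sketch, not part of apicup; the helper name is illustrative:

```python
# Sanity check: a SHA-512 crypt string has the shape $6$[rounds=N$]salt$hash,
# which is what `mkpasswd --method=sha-512` produces.
def looks_hashed(value: str) -> bool:
    """True if value looks like a SHA-512 crypt string rather than plain text."""
    return value.startswith("$6$") and value.count("$") >= 3

print(looks_hashed("$6$rounds=4096$iMCJ9cfhFJ8X$pbmAl9ClWzcYzH"))  # True
print(looks_hashed("my-plaintext-password"))                       # False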
- Optional:
If the default IP ranges for the API Connect Kubernetes pod and the service
networks conflict with IP addresses that must be used by other processes in your deployment, modify
the API Connect values.
You can change the IP ranges of the Kubernetes pod and the service networks from the default values of 172.16.0.0/16 and 172.17.0.0/16, respectively. In the case that a /16 subnet overlaps with existing IPs on the network, a Classless Inter-Domain Routing (CIDR) as small as /22 is acceptable. You can modify these ranges during initial installation and configuration only. You cannot modify them once an appliance has been deployed. See Configuration on VMware.
-
Update the IP range for the Kubernetes pod
apicup subsys set analyt k8s-pod-network='new_pod_range'
Where new_pod_range is the new value for the range.
-
Update the IP range for Service networks.
apicup subsys set analyt k8s-service-network='new_service_range'
Where new_service_range is the new value for the range.
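Before setting a replacement range, you can verify that it does not overlap an existing network with the standard library's ipaddress module. A sketch under example addresses; the candidate ranges below are hypothetical:

```python
import ipaddress

# Check a candidate CIDR for k8s-pod-network or k8s-service-network against a
# network that is already in use; overlapping ranges must be avoided.
def conflicts(candidate: str, existing: str) -> bool:
    """True if the two CIDR blocks overlap."""
    return ipaddress.ip_network(candidate).overlaps(ipaddress.ip_network(existing))

print(conflicts("172.16.0.0/16", "172.16.4.0/22"))  # True: default pod range collides
print(conflicts("10.100.0.0/22", "172.16.0.0/16"))  # False: safe replacement
```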
-
Add your hosts.
apicup hosts create analyt hostname.domainname hd_password
Where the following are true:- hostname.domainname is the fully qualified name of the server where you are hosting your Analytics service, including the domain information.
- hd_password is the password that the Linux Unified Key Setup (LUKS) uses to encrypt the storage for your Analytics service. This password is hashed when it is stored on the server or in the ISO. Note that the password is base64 encoded when stored in apiconnect-up.yml.
Repeat this command for each host that you want to add.
Note:- Host names and DNS entries may not be changed on a cluster after the initial installation.
- Version 2018.4.1.0: Ensure that reverse DNS lookup is configured for the host names. You can verify it with:
nslookup <ip_address>
For Version 2018.4.1.1 or later, Reverse DNS lookup is not required.
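For reference, the PTR query name that the nslookup check resolves can be derived with the standard library; the address below is taken from the sample output in this topic. A live lookup would use socket.gethostbyaddr and requires the PTR record to exist:

```python
import ipaddress

# Derive the reverse-DNS (PTR) query name for a host IP address.
ip = "1.2.152.233"  # example address from the sample output
print(ipaddress.ip_address(ip).reverse_pointer)  # 233.152.2.1.in-addr.arpa
```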
-
Create your interfaces.
apicup iface create analyt hostname.domainname physical_network_id host_ip_address/subnet_mask gateway_ip_address
Where physical_network_id is the network interface ID of your physical server. The value is most often eth0. The value can also be ethx, where x is a number identifier.
The format is similar to this example: apicup iface create analyt myHostname.domain eth0 192.0.2.1/255.255.1.1 192.0.2.1
- Optional:
Use apicup to view the configured hosts:
apicup hosts list analyt
testsrv0233.subnet1.example.com
    Device  IP/Mask                    Gateway
    eth0    1.2.152.233/255.255.254.0  1.2.152.1
- Optional:
Enable the message queue for analytics.
apicup subsys set analyt enable-message-queue=true
Options are true or false. The default is false. When set to true, the message queue is activated and the analytics pipeline is configured to use it. See Configuring the analytics message queue. You can enable the message queue later if you do not want to enable it during installation.
-
Verify that the configuration settings are valid.
apicup subsys get analyt --validate
The output lists each setting and adds a check mark after the value once the value is validated. If the setting lacks a check mark and indicates an invalid value, reconfigure the setting. See the following sample output.
apicup subsys get analyt --validate
Appliance settings
==================
Name                        Value
----                        -----
additional-cloud-init-file                                                 ✔
data-device                 sdb                                            ✔
default-password            $6$rounds=4096$iMCJ9cfhFJ8X$pbmAl9ClWzcYzHZFoQ6n7OnYCf/owQZIiCpAtWazs/FUn/uE8uLD.9jwHE0AX4upFSqx/jf0ZmDbHPZ9bUlCY1 ✔
dns-servers                 [1.2.3.1]                                      ✔
k8s-pod-network             172.16.0.0/16                                  ✔
k8s-service-network         172.17.0.0/16                                  ✔
mode                        standard                                       ✔
public-iface                eth0                                           ✔
search-domain               [subnet1.example.com]                          ✔
ssh-keyfiles                [/home/vsphere/.ssh/id_rsa.pub]                ✔
traffic-iface               eth0                                           ✔
license-version             Production                                     ✔

Subsystem settings
==================
Name                        Value
----                        -----
es-max-memory-gb            16                                             ✔

Endpoints
=========
Name                        Value
----                        -----
analytics-client            a7s-client.testsrv0233.subnet1.example.com     ✔
analytics-ingestion         a7s-in.testsrv0233.subnet1.example.com         ✔
Note: If you are installing the Analytics subsystem with the ingestion-only configuration, the following settings are not actually validated (because they are not used) but are marked as correct in the validation results:
- analytics-client
- es-max-memory
-
Create your ISO file.
apicup subsys install analyt --out analytplan-out
The --out parameter and value are required. In this example, the ISO file is created in the myProject/analytplan-out/node-config directory.
If the system cannot find the path to your software that creates ISO files, create a path setting to that software by running a command similar to the following command:
Operating system: OSX and Linux
export PATH=$PATH:/Users/your_path/

Operating system: Windows
set PATH="c:\Program Files (x86)\cdrtools";%PATH%
- Log into the VMware vSphere Web Client.
- Using the VSphere Navigator, navigate to the directory where you are deploying the OVA file.
- Right-click the directory and select Deploy OVF Template.
-
Complete the Deploy OVF Template wizard.
- Select the apiconnect-analytics.ova template by navigating to the location where you downloaded the file from Passport Advantage®.
- Enter a name and location for your file.
- Select a resource for your template.
- Review the details for your template.
- Select the size of your configuration.
- Select the storage settings.
- Select the networks.
- Customize the Template, if necessary.
- Review the details to ensure that they are correct.
- Select Finish to deploy the virtual machine.
Note: Do not change the OVA hardware version, even if the VMware UI shows a Compatibility range that includes other versions. See Requirements for initial deployment on VMware.
The template creation appears in your Recent Tasks list.
- Select the Storage tab in the Navigator.
- Navigate to your datastore.
-
Upload your ISO file.
- Select the Navigate to the datastore file browser icon in the icon menu.
- Select the Upload a file to the Datastore icon in the icon menu.
-
Navigate to the ISO file that you created in your project.
It is in the myProject/analytplan-out/node-config directory.
- Upload the ISO file to the datastore.
- Leave the datastore by selecting the VMs and Templates icon in the Navigator.
- Locate and select your virtual machine.
- Select the Configure tab in the main window.
-
Select Edit....
- On the Virtual Hardware tab, select CD/DVD Drive 1.
- For the Client Device, select Datastore ISO File.
- Find and select your datastore in the Datastores category.
- Find and select your ISO file in the Contents category.
- Select OK to commit your selection and exit the Select File window.
-
Ensure that the Connect At Power On check box is selected.
Tip:
- Expand the CD/DVD drive 1 entry to view the details and the complete Connect At Power On label.
- Note that VMware-related issues with ISO mounting at boot might occur if Connect At Power On is not selected.
- Select OK to commit your selection and close the window.
-
Start the virtual machine by selecting the play button on the icon bar.
The installation might take several minutes to complete, depending on the availability of the system and the download speed.
-
Log in to the virtual machine by using an SSH tool to check the status of the
installation:
-
Enter the following command to connect to the virtual machine by using SSH:
ssh ip_address -l apicadm
You are logging in with the default ID of apicadm, which is the API Connect ID that has administrator privileges.
-
Select Yes to continue connecting.
Your host names are automatically added to your list of hosts.
-
Run the apic status command to verify that the installation
completed and the system is running correctly.
The command output for a correctly running Analytics system is similar to the following lines:
#sudo apic status
INFO[0000] Log level: info
Cluster members:
- testsrv0233.subnet1.example.com (1.2.152.233)
  Type: BOOTSTRAP_MASTER
  Install stage: DONE
  Upgrade stage: UPGRADE_DONE
  Docker status:
    Systemd unit: running
  Kubernetes status:
    Systemd unit: running
    Kubelet version: testsrv0233 (4.4.0-138-generic) [Kubelet v1.10.6, Proxy v1.10.6]
  Etcd status: pod etcd-testsrv0233 in namespace kube-system has status Running
  Addons: calico, dns, helm, kube-proxy, metrics-server, nginx-ingress,
  Etcd cluster state:
  - etcd member name: testsrv0233.subnet1.example.com, member id: 12836860275847862867, cluster id: 14018872452420182423, leader id: 12836860275847862867, revision: 365042, version: 3.1.17

Pods Summary:
NODE          NAMESPACE     NAME                                                           READY   STATUS      REASON
              default       apic-analytics-analytics-client-76956644b9-cmgx8               0/0     Pending
testsrv0233   default       apic-analytics-analytics-client-76956644b9-vlqp2               1/1     Running
testsrv0233   default       apic-analytics-analytics-cronjobs-retention-1541381400-hp9fc   0/1     Succeeded
testsrv0233   default       apic-analytics-analytics-cronjobs-rollover-1541445300-c5n6z    0/1     Succeeded
              default       apic-analytics-analytics-ingestion-547f875467-8mhsl            0/0     Pending
testsrv0233   default       apic-analytics-analytics-ingestion-547f875467-s7flj            1/1     Running
              default       apic-analytics-analytics-mtls-gw-85b8676855-jmh8c              0/0     Pending
testsrv0233   default       apic-analytics-analytics-mtls-gw-85b8676855-sw6ps              1/1     Running
testsrv0233   default       apic-analytics-analytics-storage-basic-8cckh                   1/1     Running
testsrv0233   kube-system   calico-node-8crtp                                              2/2     Running
testsrv0233   kube-system   coredns-87cb95869-6flvn                                        1/1     Running
testsrv0233   kube-system   coredns-87cb95869-rccvb                                        1/1     Running
testsrv0233   kube-system   etcd-testsrv0233                                               1/1     Running
testsrv0233   kube-system   ingress-nginx-ingress-controller-f7b9z                         1/1     Running
testsrv0233   kube-system   ingress-nginx-ingress-default-backend-6f58fb5f56-nklmv         1/1     Running
testsrv0233   kube-system   kube-apiserver-testsrv0233                                     1/1     Running
testsrv0233   kube-system   kube-apiserver-proxy-testsrv0233                               1/1     Running
testsrv0233   kube-system   kube-controller-manager-testsrv0233                            1/1     Running
testsrv0233   kube-system   kube-proxy-2vw9b                                               1/1     Running
testsrv0233   kube-system   kube-scheduler-testsrv0233                                     1/1     Running
testsrv0233   kube-system   metrics-server-5558db4678-9drz6                                1/1     Running
testsrv0233   kube-system   tiller-deploy-84f4c8bb78-vx65c                                 1/1     Running
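When the Pods Summary is long, a quick tally of the STATUS column helps spot pods that are neither Running nor Succeeded. This is an illustrative sketch, not an apic command; the sample rows are abridged from the output above:

```python
# Count pod STATUS values in an `apic status` Pods Summary.
# Each data row ends "... READY STATUS", e.g. "1/1 Running"; the NODE column can be blank.
def pod_statuses(status_output: str) -> dict:
    counts = {}
    for line in status_output.splitlines():
        parts = line.split()
        if len(parts) >= 3 and "/" in parts[-2]:  # READY column such as 1/1
            counts[parts[-1]] = counts.get(parts[-1], 0) + 1
    return counts

sample = """\
testsrv0233 default apic-analytics-analytics-client-76956644b9-vlqp2 1/1 Running
default apic-analytics-analytics-client-76956644b9-cmgx8 0/0 Pending
testsrv0233 kube-system coredns-87cb95869-6flvn 1/1 Running
"""
print(pod_statuses(sample))  # {'Running': 2, 'Pending': 1}
```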
- If you have now installed all subsystems, continue to Access the Cloud Manager and begin API Connect Cloud Configuration.
Results
- Full installation in standard mode:
Table 1. Kubernetes pods in a full Analytics installation
Number                          Pod
2                               client
2                               ingestion
2                               mtls
1                               operator
equal to the number of nodes    analytics-storage-basic
Note that for all pod types except analytics-storage-basic, there is only one of each pod when installed in dev mode. For analytics-storage-basic, the number of pods always matches the number of nodes in the subsystem, regardless of installation mode.
- Message queue deployment in standard mode: All of the pods shown in Table 1, plus:
Table 2. Additional pods used for Message Queue deployments
Number                          Pod
equal to the number of nodes    mq-kafka
equal to the number of nodes    mq-zookeeper
Note that the number of pods for mq-kafka and mq-zookeeper always matches the number of nodes in the subsystem, regardless of installation mode.
- Ingestion-only installation in standard mode:
Table 3. Kubernetes pods in an ingestion-only Analytics installation
Number    Pod
2         ingestion
2         mtls
Note that in dev mode there is only one of each pod.
What to do next
Identify the DataPower® appliances to be used as gateway servers in the API Connect cloud and obtain the IP addresses.
Define your API Connect configuration by using the API Connect cloud console. For more information, see Defining the cloud.