Configuring the Analytics subsystem
Specify configuration properties for your Analytics subsystem, and create the ISO files.
About this task
Use the apicup
installation utility to specify configuration settings for your
Analytics subsystem.
Procedure
- Ensure that you obtained the distribution file and have a project directory, as described in First steps for deploying in a VMware environment.
-
Change to the project directory.
cd myProject
-
Create an analytics subsystem:
apicup subsys create analyt analytics
Where:
- analyt is the name of the analytics server that you are creating. You can assign it any name, as long as the identifier consists of lowercase alphanumeric characters or '-', with no spaces, starts with an alphabetic character, and ends with an alphanumeric character.
- analytics indicates that you are creating an Analytics subsystem.
The apiconnect-up-v10.yml file that is in that directory is updated to add the analytics-related entries.
Tip: At any time, you can view the current analytics subsystem values in apiconnect-up-v10.yml by running the apicup get command:
apicup subsys get analyt
If you have not updated a value, a default value is listed, if one is available.
Sample output from apicup subsys get:

Appliance settings
==================

Name                         Value            Description
----                         -----            ------
additional-cloud-init-file                    (Optional) Path to additional cloud-init yml file
data-device                  sdb              VM disk device (usually `sdb` for SCSI or `vdb` for VirtIO)
default-password                              (Optional) Console login password for `apicadm` user, password must be pre-hashed
dns-servers                  []               List of DNS servers
extra-values-file                             (Optional) Path to additional configuration yml file
k8s-pod-network              172.16.0.0/16    (Optional) CIDR for pods within the appliance
k8s-service-network          172.17.0.0/16    (Optional) CIDR for services within the appliance
public-iface                 eth0             Device for API/UI traffic (Eg: eth0)
search-domain                []               List for DNS search domains
ssh-keyfiles                 []               List of SSH public keys files
traffic-iface                eth0             Device for cluster traffic (Eg: eth0)

Subsystem settings
==================

Name                                       Value          Description
----                                       -----          ------
analytics-backup-auth-pass                                (Optional) Server password for analytics backups
analytics-backup-auth-user                                (Optional) Server username for analytics backups
analytics-backup-certs                                    (Optional) Backup certs which are used for TLS communication between APIC and backup server. Currently only supported for s3 backups.
analytics-backup-chunk-size                1GB            analytics-backup-chunk-size
analytics-backup-host                                     (Optional) FQDN for analytics backups server
analytics-backup-path                                     (Optional) Path for analytics backups server
analytics-backup-schedule                  0 0 * * *      (Optional) Cron schedule for analytics backups
analytics-enable-compression               true           (Optional) Determines whether metadata files are stored in compressed format
analytics-enable-server-side-encryption    false          (Optional) Determines whether files are encrypted. When set to true, files are encrypted on the server side using AES256
deployment-profile                         n1xc2.m16      Deployment profile (n1xc2.m16/n3xc4.m16) for analytics, (n1xc2.m16/n1xc4.m16/n3xc2.m16/n3xc4.m16) for management, (n1xc2.m8/n1xc4.m16/n1xc8.m16/n3xc4.m8/n3xc8.m16) for portal
license-use                                nonproduction  License use (production/nonproduction)

Endpoints
=========

Name                   Value    Description
----                   -----    ------
analytics-ingestion             FQDN of Analytics ingestion endpoint
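As a quick sanity check before you run apicup subsys create, the naming rule described above can be expressed as a shell regular expression. This sketch is illustrative only; apicup itself rejects invalid names.

```shell
# Check a candidate subsystem name against the documented rule:
# lowercase alphanumerics or '-', no spaces, starts with a letter,
# ends with an alphanumeric character.
valid_name() {
  printf '%s\n' "$1" | grep -Eq '^[a-z]([a-z0-9-]*[a-z0-9])?$'
}

valid_name analyt   && echo "analyt: ok"
valid_name Analyt   || echo "Analyt: invalid (uppercase not allowed)"
valid_name a7s-east && echo "a7s-east: ok"
```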
-
Specify your deployment-profile:
apicup subsys set analyt deployment-profile=profile_type
Use one of the following values for profile_type, based on the decision you made while Planning the Analytics deployment profile. For example, to deploy with a three replica profile, run the following command:
apicup subsys set analyt deployment-profile=n3xc4.m16
Note: The deployment profiles that are shown in the Description column of the apicup get output are not correct. The available profiles are documented in Planning your deployment topology and profiles.
-
Specify the license use that you purchased.
apicup subsys set analyt license-use=license_type
The license_type must be either production or nonproduction. If not specified, the default value is nonproduction.
- Optional:
Configure backups of the subsystem. Follow the instructions in Backing up and restoring the Analytics database on VMware. Important: It is highly recommended that you configure backups and also take additional steps to ensure that your configuration and data can be restored in the event of a disaster. See Preparing the Analytics subsystem for disaster recovery on VMware.
- Optional:
Configure your logging.
Logging can be configured at a later time, but you must enable it before installation to capture the log events from the installation.
- Complete the procedure at Configuring remote logging for a VMware deployment.
-
Enter the following command to create the log file:
apicup subsys set analyt additional-cloud-init-file=config_file.yml
-
Set your ingestion endpoint.
The ingestion endpoint must be a unique host name that points to the IP address of the OVA (single node deployment), or to the IP of a load balancer configured in front of the OVA nodes. You will use the ingestion endpoint when registering the analytics service in the Cloud Manager UI.
apicup subsys set analyt analytics-ingestion=unique_hostname.domain
- Set your search domain. Multiple search domains should be separated by commas.
apicup subsys set analyt search-domain=your_search_domain
Where your_search_domain is the domain of your servers, entered in all lowercase. Setting this value ensures that unqualified host name lookups are also tried with these domains appended, based on your company's DNS resolution. A sample search domain is mycompany.example.com.
Ensure that the value for your_search_domain is resolved in the system's /etc/resolv.conf file to avoid "502" errors when accessing the Cloud Manager web site. For example:
# Generated by resolvconf search your_search_domain ibm.com other.domain.com
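The resolv.conf check above can be scripted. The following sketch grepes a sample file standing in for the appliance's /etc/resolv.conf, with mycompany.example.com as a placeholder domain:

```shell
# Illustrative check that a search domain appears in resolv.conf.
# On the deployed appliance you would inspect /etc/resolv.conf itself;
# here a sample file and a placeholder domain are used.
sample=$(mktemp)
cat > "$sample" <<'EOF'
# Generated by resolvconf
search mycompany.example.com ibm.com other.domain.com
EOF

if grep -q '^search.*mycompany\.example\.com' "$sample"; then
  result="search domain present"
else
  result="WARNING: search domain missing; expect 502 errors from Cloud Manager"
fi
echo "$result"
rm -f "$sample"
```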
- Set your domain name servers (DNS). Supply the IP addresses of the DNS servers for your network. Use a comma to separate multiple server addresses.
DNS entries may not be changed on a cluster after the initial installation.
apicup subsys set analyt dns-servers=ip_address_of_dns_server
-
Set a public key.
apicup subsys set analyt ssh-keyfiles=path_to_public_ssh_keyfile
Setting this key enables you to use ssh with this key to log in to the virtual machine to check the status of the installation. You will perform this check later in Verifying deployment of the Analytics subsystem.
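If you do not already have a public key file to supply, one way to create a keypair is with ssh-keygen, a standard OpenSSH tool. The key path below is only an example; choose a location that suits your environment.

```shell
# Generate an ed25519 keypair with no passphrase in a temporary directory.
# The .pub file is the value that ssh-keyfiles expects.
keydir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$keydir/apic_key"
echo "public key file: $keydir/apic_key.pub"
# Then, for example:
#   apicup subsys set analyt ssh-keyfiles="$keydir/apic_key.pub"
```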
-
You can set the password that you enter to log into your Analytics server for the first
time.
- Review the requirements for creating and using a hashed password. See Setting and using a hashed default password.
-
If you do not have a password hashing utility, install one.
Operating system        Command
Ubuntu, Debian, OSX     If the mkpasswd command utility is not available, download and install it. (You can also use a different password hashing utility.) On OSX, use the command: gem install mkpasswd
Windows, Red Hat        If necessary, install a password hashing utility for the Windows operating system, such as OpenSSL.
-
Create a hashed password.
Operating system        Command
Ubuntu, Debian, OSX     mkpasswd --method=sha-512 --rounds=4096 password
Windows, Red Hat        For example, using OpenSSL: openssl passwd -1 password
Note: You might need to add your password hashing utility to your path; for example, on Windows:
set PATH=c:\cygwin64\bin;%PATH%
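As an alternative sketch, recent OpenSSL releases (1.1.1 or later) can also produce a SHA-512 crypt hash directly with passwd -6, which matches the format that mkpasswd --method=sha-512 emits. The password value here is a placeholder.

```shell
# Produce a SHA-512 crypt-format hash (starts with $6$) using OpenSSL.
# 'MyPassw0rd!' is an example value; the salt is random, so the exact
# output differs on every run.
hashed=$(openssl passwd -6 'MyPassw0rd!')
echo "$hashed"
```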
-
Set the hashed password for your subsystem:
apicup subsys set analyt default-password='hashed_password'
Notes:
- The password must be hashed. If it is in plain text, you cannot log in to the VMware console.
- The password can be used only to log in through the VMware console. You cannot use it to SSH into the appliance as an alternative to using the ssh-keyfiles.
- On Linux or OSX, use single quotes around hashed_password. For Windows, use double quotes.
- If you are using a non-English keyboard, understand the limitations with using the remote VMware console. See Requirements for initial deployment on VMware.
- Optional:
If the default IP ranges for the API Connect Kubernetes pod and the service
networks conflict with IP addresses that must be used by other processes in your deployment, modify
the API Connect values.
You can change the IP ranges of the Kubernetes pod and the service networks from the default values of 172.16.0.0/16 and 172.17.0.0/16, respectively. In the case that a /16 subnet overlaps with existing IPs on the network, a Classless Inter-Domain Routing (CIDR) as small as /22 is acceptable. You can modify these ranges during initial installation and configuration only. You cannot modify them once an appliance has been deployed. See API Connect configuration on VMware.
-
Update the IP range for the Kubernetes pod
apicup subsys set analyt k8s-pod-network='new_pod_range'
Where new_pod_range is the new value for the range.
-
Update the IP range for the service networks.
apicup subsys set analyt k8s-service-network='new_service_range'
Where new_service_range is the new value for the range.
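Before setting new ranges, you may want to confirm that a candidate CIDR does not overlap an existing one. The helper below is an illustrative sketch, not part of apicup; substitute the ranges in use on your network.

```shell
# Convert a dotted-quad address to a 32-bit integer.
ip_to_int() {
  old_ifs=$IFS; IFS=.; set -- $1; IFS=$old_ifs
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Two CIDRs overlap iff either network contains the other's network address.
cidrs_overlap() {
  a_net=$(ip_to_int "${1%/*}"); a_len=${1#*/}
  b_net=$(ip_to_int "${2%/*}"); b_len=${2#*/}
  a_mask=$(( (0xFFFFFFFF << (32 - a_len)) & 0xFFFFFFFF ))
  b_mask=$(( (0xFFFFFFFF << (32 - b_len)) & 0xFFFFFFFF ))
  [ $(( a_net & b_mask )) -eq $(( b_net & b_mask )) ] ||
  [ $(( b_net & a_mask )) -eq $(( a_net & a_mask )) ]
}

# The two default API Connect ranges do not overlap:
cidrs_overlap 172.16.0.0/16 172.17.0.0/16 && echo "overlap" || echo "no overlap"
# A /22 carved out of the pod range would overlap it:
cidrs_overlap 172.16.0.0/16 172.16.8.0/22 && echo "overlap" || echo "no overlap"
```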
-
Define the hostname for each subsystem node you are deploying:
apicup hosts create analyt hostname.domainname hd_password
Where the following are true:
- hostname.domainname is the fully qualified name of the server where you are hosting your Analytics service, including the domain information.
- hd_password is the password that the Linux Unified Key Setup (LUKS) uses to encrypt the storage for your Analytics service. This password is hashed when it is stored on the server or in the ISO. Note that the password is base64 encoded when stored in apiconnect-up-v10.yml.
Repeat this command for each subsystem VM in your deployment. For example, if you are deploying a single node one replica profile, run this command once; for a three replica profile, you define three VMs.
Note: Host names and DNS entries may not be changed on a cluster after the initial installation. -
Define the network interface for each subsystem node you are deploying:
apicup iface create analyt hostname.domainname physical_network_id host_ip_address/subnet_mask gateway_ip_address
Where physical_network_id is the network interface ID of your physical server. The value is most often eth0. The value can also be ethx, where x is a number identifier. The format is similar to this example:
apicup iface create analyt myHostname.domain eth0 192.0.2.1/255.255.1.1 192.0.2.1
Repeat this command for each subsystem VM in your deployment. For example, if you are deploying a single node one replica profile, run this command once; for a three replica profile, you define three VMs.
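For a three replica profile, the per-VM commands above can be scripted. The following dry-run sketch only prints the commands it would run; the hostnames, interface, IP addresses, and gateway are placeholders, and hd_password stands in for your real LUKS password.

```shell
# Build the list of apicup commands for a hypothetical three replica
# deployment. Nothing is executed; the commands are only printed so you
# can review them before running them for real.
cmds=""
for i in 1 2 3; do
  cmds="$cmds
apicup hosts create analyt a7s$i.mycompany.example.com hd_password
apicup iface create analyt a7s$i.mycompany.example.com eth0 192.0.2.$i/255.255.255.0 192.0.2.254"
done
echo "$cmds"
```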
- Optional:
Use apicup to view the configured hosts:
apicup hosts list analyt
testsrv0233.subnet1.example.com
    Device  IP/Mask                    Gateway
    eth0    1.2.152.233/255.255.254.0  1.2.152.1
- Optional: Enable JWT security instead of mTLS for communication from management and gateway to your analytics subsystem. JWT security provides application layer security and can be used instead of mTLS when there are load-balancers located between subsystems that require TLS termination. For more information about JWT security, see Enable JWT instead of mTLS.
To enable JWT and disable mTLS, first identify the JWKS URL from the management subsystem:
apicup subsys get <management subsystem>
...
jwks-url   https://appliance1.apic.acme.com/api/cloud/oauth2/certs   JWKS URL for Portal and Analytics subsystems to validate JWT -- this is unsettable and is generated based on the platform-api endpoint
...
Then disable mTLS and enable JWT by setting the jwks-url with apicup:
apicup subsys set analyt mtls-validate-client=false
apicup subsys set analyt jwks-url=https://appliance1.apic.acme.com/api/cloud/oauth2/certs
JWT for gateway to analytics communication requires an additional step during gateway registration. Enable the Use JWT switch when you register the gateway in the Cloud Manager UI.
Note: Do not disable mTLS without enabling JWT. -
Verify that the configuration settings are valid.
apicup subsys get analyt --validate
The output lists each setting and adds a check mark after the value once the value is validated. If the setting lacks a check mark and indicates an invalid value, reconfigure the setting. See the following sample output.
apicup subsys get analyt --validate

Appliance settings
==================

Name                          Value
----                          -----
additional-cloud-init-file                                      ✔
data-device                   sdb                               ✔
default-password              $6$rounds=4096$iMCJ9cfhFJ8X$pbmAl9ClWzcYzHZFoQ6n7OnYCf/owQZIiCpAtWazs/FUn/uE8uLD.9jwHE0AX4upFSqx/jf0ZmDbHPZ9bUlCY1 ✔
dns-servers                   [1.2.3.1]                         ✔
extra-values-file                                               ✔
k8s-pod-network               172.16.0.0/16                     ✔
k8s-service-network           172.17.0.0/16                     ✔
public-iface                  eth0                              ✔
search-domain                 [subnet1.example.com]             ✔
ssh-keyfiles                  [/home/vsphere/.ssh/id_rsa.pub]   ✔
traffic-iface                 eth0                              ✔

Subsystem settings
==================

Name                                       Value
----                                       -----
analytics-backup-auth-pass                              ✔
analytics-backup-auth-user                              ✔
analytics-backup-certs                                  ✔
analytics-backup-chunk-size                1GB          ✔
analytics-backup-host                                   ✔
analytics-backup-path                                   ✔
analytics-backup-schedule                  0 0 * * *    ✔
analytics-enable-compression               true         ✔
analytics-enable-server-side-encryption    false        ✔
deployment-profile                         n1xc2.m16    ✔
license-use                                production   ✔

Endpoints
=========

Name                   Value
----                   -----
analytics-ingestion    a7s-in.testsrv0233.subnet1.example.com   ✔
-
If you configured additional deployment options in an extra-values file, run the following
command to make the file available during installation:
apicup subsys set analyt extra-values-file path/analytics-extra-values.yaml
where analytics-extra-values.yaml is the name of the file containing your deployment settings.
-
Create your ISO files:
apicup subsys install analyt --out analytplan-out
The --out parameter and value are required. In this example, the ISO files are created in the myProject/analytplan-out directory. There will be one ISO file for each VM host you defined in step 14.
If the system cannot find the path to your software that creates ISO files, create a path setting to that software by running a command similar to the following command:
Operating system        Command
Ubuntu, Debian, OSX     export PATH=$PATH:/Users/your_path/
Windows, Red Hat        set PATH="c:\Program Files (x86)\cdrtools";%PATH%
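Before extending PATH, you can check whether an ISO-creation tool is already available. The tool names below are an assumption (mkisofs is the usual one; genisoimage and xorriso are common substitutes), not a list mandated by apicup.

```shell
# Look for a known ISO-creation tool on PATH.
iso_tool=""
for t in mkisofs genisoimage xorriso; do
  if command -v "$t" >/dev/null 2>&1; then iso_tool="$t"; break; fi
done
if [ -n "$iso_tool" ]; then
  echo "ISO tool found: $iso_tool"
else
  echo "No ISO tool on PATH; extend PATH as shown in the table above"
fi
```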
- Continue with Deploying the Analytics subsystem OVA file.