Deploying the Management subsystem in a VMware environment

You can create a virtual server by deploying the relevant IBM® API Connect OVA file on a VMware virtual server. Create all of the virtual servers that you want to use in your cloud.

Before you begin

Before you deploy, complete First steps for deploying in a VMware environment to obtain the distribution file and set up a project directory.

About this task

You must deploy the API Connect OVA template to create each Management virtual server that you want in your cloud.

Procedure

  1. Ensure that you obtained the distribution file and have a project directory, as described in First steps for deploying in a VMware environment.
  2. Change to the project directory.
    cd myProject
  3. Create a management subsystem.
    apicup subsys create mgmt management
    Where:
    • mgmt is the name of your management server that you are creating. You can assign it any name, as long as the identifier consists of lower case alphanumeric characters or '-', with no spaces, starts with an alphabetic character, and ends with an alphanumeric character.
    • management indicates that you are creating a management microservice.

    The API Connect Helm charts are deployed into the default namespace. You do not need to specify a namespace.

    Tip: At any time, you can view the current management subsystem values in apiconnect-up.yml by running the apicup subsys get command:
    apicup subsys get mgmt
    If you have not yet configured the subsystem, the command might return errors. Also, any setting that you have not yet updated shows its default value, if one is available.

    After configuration is complete, you can view output similar to the following sample:

    
    apicup subsys get mgmt
    Appliance settings
    ==================
    
    Name                               Value                                           Description
    ----                               -----                                           ------
    additional-cloud-init-file                                                         (Optional) Path to additional cloud-init yml file
    data-device                        sdb                                             VM disk device (usually `sdb` for SCSI or `vdb` for VirtIO)
    default-password                   $6$rounds=4096$iMCJ9cfhFJ8X$pbmAl9
                                       ClWzcYzHZFoQ6n7OnYCf/owQZIiCpAtWazs
                                      /FUn/uE8uLD.9jwHE0AX4upFSqx/jf0ZmDbHPZ9bUlCY1   (Optional) Console login password for `apicadm` user
    dns-servers                        [1.2.136.11]                                    List of DNS servers
    k8s-pod-network                    172.16.0.0/16                                   (Optional) CIDR for pods within the appliance
    k8s-service-network                172.17.0.0/16                                   (Optional) CIDR for services within the appliance
    mode                               standard
    public-iface                       eth0                                            Device for API/UI traffic (Eg: eth0)
    search-domain                      [subnet1.example.com]                           List of DNS search domains
    ssh-keyfiles                       [/home/vsphere/.ssh/id_rsa.pub]                 List of SSH public keys files
    traffic-iface                      eth0                                            Device for cluster traffic (Eg: eth0)
    license-version                    Production
    
    Subsystem settings
    ==================
    
    Name                               Value                                           Description
    ----                               -----                                           ------
    az-name                            default-az                                      Availability Zone name
    cassandra-backup-auth-pass                                                        (Optional) Server password for DB backups
    cassandra-backup-auth-user                                                        (Optional) Server username for DB backups
    cassandra-backup-host                                                             (Optional) FQDN for DB backups server
    cassandra-backup-path              /backups                                       (Optional) path for DB backups server
    cassandra-backup-port              22                                             (Optional) Server port for DB backups
    cassandra-backup-protocol          sftp                                           (Optional) Protocol for DB backups (sftp/ftp/objstore)
    cassandra-backup-schedule          0 0 * * *                                      (Optional) Cron schedule for DB backups
    cassandra-max-memory-gb            9                                              Memory limit for DB
    cross-az-peers                     []                                             (Optional) IP addresses of nodes in other AZs
    
    
    Endpoints
    =========
    
    Name                               Value                                           Description
    ----                               -----                                           ------
    api-manager-ui                     api-manager-ui.testsrv0231.subnet1.example.com  FQDN of API manager UI endpoint
    cloud-admin-ui                     cloud-admin-ui.testsrv0231.subnet1.example.com  FQDN of Cloud admin endpoint
    consumer-api                       consumer-api.testsrv0231.subnet1.example.com    FQDN of consumer API endpoint
    platform-api                       platform-api.testsrv0231.subnet1.example.com    FQDN of platform API endpoint
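    The naming rule for the subsystem (lowercase alphanumeric characters or '-', starting with a letter and ending with an alphanumeric character) can be expressed as a small shell check. This is an illustration only; the valid_name helper and its regular expression are our own rendering of the rule, not part of apicup:

    ```shell
    # Validate a proposed subsystem name against the documented rule:
    # lowercase alphanumerics or '-', starts with a letter, ends alphanumeric.
    valid_name() {
      printf '%s' "$1" | grep -Eq '^[a-z]([a-z0-9-]*[a-z0-9])?$'
    }

    valid_name mgmt  && echo "mgmt: ok"
    valid_name Mgmt  || echo "Mgmt: rejected (uppercase)"
    valid_name mgmt- || echo "mgmt-: rejected (trailing hyphen)"
    ```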
    
  4. For production environments, specify mode=standard.
    apicup subsys set mgmt mode=standard

    The mode=standard parameter indicates that you are deploying in high availability (HA) mode for a production environment. If the mode parameter is omitted, the subsystem deploys by default in dev mode, for use in development and testing. For more information, see Requirements for initial deployment on VMware.

  5. Version 2018.4.1.10 or later: Specify the license version you purchased.

    apicup subsys set mgmt license-version=<license_type>

    The license_type must be either Production or Nonproduction. If not specified, the default value is Nonproduction.

  6. Optional: Configure scheduled backups of the subsystem. This step is optional but is recommended. Note that once you set up scheduled backups, you can also run backups on-demand. Refer to the instructions for scheduled backups in Backing up the management subsystem in VMware environments.
  7. Optional: Configure your logging.
    Logging can be configured at a later time, but you must enable it before installation to capture the log events from the installation.
    1. Complete the procedure at Configuring remote logging for a VMware deployment.
    2. Enter the following command to register your logging configuration file:
      apicup subsys set mgmt additional-cloud-init-file=config_file.yml
  8. Enter the following commands to update the apiconnect-up.yml with the information for your environment:
    1. Set your search domain. Multiple search domains should be separated by commas.
      apicup subsys set mgmt search-domain=your_search_domain

      Where your_search_domain is the domain of your servers, entered in all lowercase. Setting this value ensures that unqualified host names are resolved by appending these search domains, according to your company's DNS configuration. A sample search domain is mycompany.example.com.

      Ensure that the value for your_search_domain is resolved in the system's /etc/resolv.conf file to avoid "502" errors when accessing the Cloud Manager web site. For example:

      # Generated by resolvconf
      search your_search_domain ibm.com other.domain.com
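      A quick way to confirm the search domain is present before continuing is a grep check against the resolver file; a missing entry is a common cause of the "502" errors noted above. In this sketch the sample file and domain are placeholders for your own values:

      ```shell
      # Return success only if <domain> appears on a "search" line of <file>.
      check_search_domain() {  # usage: check_search_domain <domain> <file>
        grep -q "^search.*$1" "$2"
      }

      # Demonstrate against a sample resolver file (placeholder content):
      printf 'search subnet1.example.com ibm.com\n' > /tmp/resolv.sample
      check_search_domain 'subnet1.example.com' /tmp/resolv.sample \
        && echo "search domain present"
      # On the real system, check /etc/resolv.conf instead of the sample file.
      ```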
    2. Set your domain name servers (DNS).
      Supply the IP addresses of the DNS servers for your network. Use a comma to separate multiple server addresses.
      apicup subsys set mgmt dns-servers=ip_address_of_dns_server[,ip_address_of_another_dns_server_if_necessary]
      DNS entries may not be changed on a cluster after the initial installation.
    3. Use apicup to set your endpoints.
      You can use wildcard aliases or host aliases with your endpoints.

      Optionally, you can specify all endpoints with one apicup command. See Tips and tricks for using APICUP.

      Note: You cannot specify the underscore character "_" in domain names that are used in endpoints. See Configuration on VMware.
      Table 1. Management subsystem endpoints
      • platform-api: Platform API endpoint. The host where your platform API calls are routed.
        apicup subsys set mgmt platform-api=platform-api.hostname.domain
      • consumer-api: Consumer API endpoint. The host where your consumer API calls are routed.
        apicup subsys set mgmt consumer-api=consumer-api.hostname.domain
      • cloud-admin-ui: Cloud admin user interface endpoint. The host where your cloud administrator user-interface API calls are routed.
        apicup subsys set mgmt cloud-admin-ui=cloud-admin-ui.hostname.domain
      • api-manager-ui: API Manager user interface endpoint. The host where your API Manager API calls are routed.
        apicup subsys set mgmt api-manager-ui=api-manager-ui.hostname.domain
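      Because all four endpoints can be set with one apicup command (see Tips and tricks for using APICUP), the assignments can be derived from a single host name. This sketch echoes the resulting command instead of running it; HOST is the sample host name used elsewhere in this procedure:

      ```shell
      # Build one "apicup subsys set" invocation covering all four endpoints.
      HOST=testsrv0231.subnet1.example.com   # placeholder host name
      ARGS=""
      for ep in platform-api consumer-api cloud-admin-ui api-manager-ui; do
        ARGS="$ARGS $ep=$ep.$HOST"
      done
      # Echo for review; remove "echo" to run the command for real.
      echo "apicup subsys set mgmt$ARGS"
      ```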
  9. Set a Public key.
    apicup subsys set mgmt ssh-keyfiles=path_to_public_ssh_keyfile

    Setting this key enables you to use SSH with this key to log in to the virtual machine and check the status of the installation. You will perform this check in step 29 of these instructions.
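    If you do not already have a key pair, you can generate one first. The file path in this sketch is illustrative only; point ssh-keyfiles at whichever public key you intend to use:

    ```shell
    # Generate a fresh RSA key pair with no passphrase at a demo path.
    rm -f /tmp/apic_demo_key /tmp/apic_demo_key.pub
    ssh-keygen -q -t rsa -b 4096 -N '' -f /tmp/apic_demo_key

    # The public half is what ssh-keyfiles needs:
    ls /tmp/apic_demo_key.pub
    # apicup subsys set mgmt ssh-keyfiles=/tmp/apic_demo_key.pub
    ```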

  10. Set the password that you enter to log into your Management appliance for the first time.
    1. Important: Review the requirements for creating and using a hashed password. See Setting and using a hashed default password.
    2. If you do not have a password hashing utility, install one.
      • Ubuntu, Debian, OSX: If the mkpasswd command utility is not available, download and install it. (You can also use a different password hashing utility.) On OSX, use the command: gem install mkpasswd
      • Windows, Red Hat: If necessary, download and install a password hashing utility such as OpenSSL.
    3. Create a hashed password.
      • Ubuntu, Debian, OSX:
        mkpasswd --method=sha-512 --rounds=4096 password
      • Windows, Red Hat: For example, using OpenSSL: openssl passwd -1 password. Note that you might need to add your password hashing utility to your path; for example, in Windows:
        set PATH=c:\cygwin64\bin;%PATH%
    4. Set the hashed password for your subsystem:
      apicup subsys set mgmt default-password='hashed_password'
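    The hashing and set steps can be combined. This sketch uses openssl's -6 option (SHA-512 crypt, available in OpenSSL 1.1.1 and later) in place of mkpasswd; 'MyS3cret' is a placeholder password, and the apicup call is shown commented out:

    ```shell
    # Hash a placeholder password with SHA-512 crypt (random salt each run).
    HASH=$(openssl passwd -6 'MyS3cret')
    echo "$HASH"

    # Then hand the hash to apicup:
    # apicup subsys set mgmt default-password="$HASH"
    ```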
  11. Optional: If the default IP ranges for the API Connect Kubernetes pod and the service networks conflict with IP addresses that must be used by other processes in your deployment, modify the API Connect values.
    You can change the IP ranges of the Kubernetes pod and the service networks from the default values of 172.16.0.0/16 and 172.17.0.0/16, respectively. If a /16 subnet overlaps with existing IP addresses on your network, you can use a Classless Inter-Domain Routing (CIDR) range as small as /22. You can modify these ranges during initial installation and configuration only; you cannot modify them after an appliance has been deployed. See Configuration on VMware.
    1. Update the IP range for the Kubernetes pod
      apicup subsys set mgmt k8s-pod-network='new_pod_range'

      Where new_pod_range is the new value for the range.

    2. Update the IP range for Service networks.
      apicup subsys set mgmt k8s-service-network='new_service_range'

      Where new_service_range is the new value for the range.

  12. Add your hosts.
    apicup hosts create mgmt hostname.domainname hd_password
    Where the following are true:
    • hostname.domainname is the fully qualified name of the server where you are hosting your Management service, including the domain information.
    • hd_password is the password that the Linux Unified Key Setup uses to encrypt the storage for your Management service. This password is hashed when it is stored on the server or in the ISO. Note that the password is base64 encoded when stored in apiconnect-up.yml.

    Repeat this command for each host that you want to add.
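    The per-host repetition can be sketched as a loop. The host names and LUKS password below are placeholders; the loop echoes each command so you can review it first (remove "echo" to execute for real):

    ```shell
    # Echo one "apicup hosts create" command per Management host.
    HD_PASSWORD='changeme-luks-password'   # placeholder LUKS password
    for host in mgmt1.example.com mgmt2.example.com mgmt3.example.com; do
      echo apicup hosts create mgmt "$host" "$HD_PASSWORD"
    done
    ```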

    Note:
    • Host names and DNS entries may not be changed on a cluster after the initial installation.
    • Version 2018.4.1.0: Ensure that reverse DNS lookup is configured for the host names.
      nslookup <ip_address>

      For Version 2018.4.1.1 or later, Reverse DNS lookup is not required.

  13. Create your interfaces.
    apicup iface create mgmt hostname.domainname physical_network_id host_ip_address/subnet_mask gateway_ip_address
    Where physical_network_id is the network interface ID of your physical server. The value is most often eth0. The value can also be ethx, where x is a number identifier.
    The format is similar to this example: apicup iface create mgmt myHostname.domain eth0 192.0.2.10/255.255.255.0 192.0.2.1
  14. Optional: Use apicup to view the configured hosts:
    apicup hosts list mgmt
    testsrv0231.subnet1.example.com
        Device  IP/Mask                     Gateway
        eth0    1.2.152.231/255.255.254.0  1.2.152.1
    Note: This command might return the following messages, which you can ignore:
    * host is missing traffic interface 
    * host is missing public interface 
  15. Optional: Verify that the configuration settings are valid.
    apicup subsys get mgmt --validate

    The output lists each setting and adds a check mark after the value once the value is validated. If the setting lacks a check mark and indicates an invalid value, reconfigure the setting. See the following sample output.

     apicup subsys get mgmt --validate
    Appliance settings
    ==================
    
    Name                               Value
    ----                               -----
    additional-cloud-init-file                                                        ✔
    data-device                        sdb                                            ✔
    default-password                   $6$rounds=4096$iMCJ9cfhFJ8X$pbm
                                       Al9ClWzcYzHZFoQ6n7OnYCf/owQZIiCpAtWazs/
                                       FUn/uE8uLD.9jwHE0AX4upFSqx/jf0ZmDbHPZ9bUlCY1   ✔
    dns-servers                        [1.2.136.11]                                   ✔
    k8s-pod-network                    172.16.0.0/16                                  ✔
    k8s-service-network                172.17.0.0/16                                  ✔
    mode                               standard                                       ✔
    public-iface                       eth0                                           ✔
    search-domain                      [subnet1.example.com]                          ✔
    ssh-keyfiles                       [/home/vsphere/.ssh/id_rsa.pub]                ✔
    traffic-iface                      eth0                                           ✔
    license-version                    Production                                     ✔
    
    Subsystem settings
    ==================
    
    Name                               Value
    ----                               -----
    az-name                            default-az                                     ✔
    cassandra-backup-auth-pass                                                        ✔
    cassandra-backup-auth-user                                                        ✔
    cassandra-backup-host                                                             ✔
    cassandra-backup-path              /backups                                       ✔
    cassandra-backup-port              22                                             ✔
    cassandra-backup-protocol          sftp                                           ✔
    cassandra-backup-schedule          0 0 * * *                                      ✔
    cassandra-max-memory-gb            9                                              ✔
    cross-az-peers                     []                                             ✔
    
    
    Endpoints
    =========
    
    Name                               Value
    ----                               -----
    api-manager-ui                     api-manager-ui.testsrv0231.subnet1.example.com  ✔
    cloud-admin-ui                     cloud-admin-ui.testsrv0231.subnet1.example.com  ✔
    consumer-api                       consumer-api.testsrv0231.subnet1.example.com    ✔
    platform-api                       platform-api.testsrv0231.subnet1.example.com    ✔
    
  16. Create your ISO file.
    apicup subsys install mgmt --out mgmtplan-out

    The --out parameter and value are required.

    In this example, the ISO file is created in the myProject/mgmtplan-out directory.

    If the system cannot find your ISO-creation software on the path, add it by running a command similar to the following:

    • OSX and Linux:
      export PATH=$PATH:/Users/your_path/
    • Windows:
      set PATH="c:\Program Files (x86)\cdrtools";%PATH%
  17. Log into the VMware vSphere Web Client.
  18. Using the vSphere Navigator, navigate to the directory where you are deploying the OVA file.
  19. Right-click the directory and select Deploy OVF Template.
  20. Complete the Deploy OVF Template wizard.
    1. Select the apiconnect-management.ova template by navigating to the location where you downloaded the file from Passport Advantage®.
    2. Enter a name and location for your file.
    3. Select a resource for your template.
    4. Review the details for your template.
    5. Select the size of your configuration.
    6. Select the storage settings.
    7. Select the networks.
    8. Customize the Template, if necessary.
    9. Review the details to ensure that they are correct.
    10. Select Finish to deploy the virtual machine.
    Note: Do not change the OVA hardware version, even if the VMware UI shows a Compatibility range that includes other versions. See Requirements for initial deployment on VMware.
    The template creation appears in your Recent Tasks list.
  21. Select the Storage tab in the Navigator.
  22. Navigate to your datastore.
  23. Upload your ISO file.
    1. Select the Navigate to the datastore file browser icon in the icon menu.
    2. Select the Upload a file to the Datastore icon in the icon menu.
    3. Navigate to the ISO file that you created in your project.
      It is in the myProject/mgmtplan-out directory.
    4. Upload the ISO file to the datastore.
  24. Leave the datastore by selecting the VMs and Templates icon in the Navigator.
  25. Locate and select your virtual machine.
  26. Select the Configure tab in the main window.
  27. Select Edit....
    1. On the Virtual Hardware tab, select CD/DVD Drive 1.
    2. For the Client Device, select Datastore ISO File.
    3. Find and select your datastore in the Datastores category.
    4. Find and select your ISO file in the Contents category.
    5. Select OK to commit your selection and exit the Select File window.
    6. Ensure that the Connect At Power On check box is selected.
      Tip:
      • Expand the CD/DVD drive 1 entry to view the details and the complete Connect At Power On label.
      • Note that VMware-related issues with ISO mounting at boot can occur if Connect At Power On is not selected.
    7. Select OK to commit your selection and close the window.
  28. Start the virtual machine by selecting the play button on the icon bar.
    The installation might take several minutes to complete, depending on the availability of the system and the download speed.
  29. Log in to the virtual machine by using an SSH tool to check the status of the installation:
    1. Enter the following command to connect to mgmt using SSH:
      ssh ip_address -l apicadm
      You are logging in with the default ID of apicadm, which is the API Connect ID that has administrator privileges.
    2. Select Yes to continue connecting.
      The host name is automatically added to your list of known hosts.
    3. Run the apic status command to verify that the installation completed and the system is running correctly.

      Note that after installation completes, it can take several minutes for all servers to start. If you see the error message Subsystems not running, wait a few minutes, try the command again, and review the output in the Status column.

      The command output for a correctly running Management system is similar to the following lines:

      apicadm@testsys0181:~$ sudo apic status
      
      INFO[0001] Log level: info                              
      Cluster members:
      - testsys0164.subnet1.example.com (1.1.1.1)
        Type: BOOTSTRAP_MASTER
        Install stage: DONE
        Upgrade stage: NONE
        Docker status: 
          Systemd unit: running
        Kubernetes status: 
          Systemd unit: running
          Kubelet version: testsys0164 (4.4.0-137-generic) [Kubelet v1.10.6, Proxy v1.10.6]
        Etcd status: pod etcd-testsys0164 in namespace kube-system has status Running
        Addons: calico, dns, helm, kube-proxy, metrics-server, nginx-ingress, 
      - testsys0165.subnet1.example.com (1.1.1.2)
        Type: MASTER
        Install stage: DONE
        Upgrade stage: NONE
        Docker status: 
          Systemd unit: running
        Kubernetes status: 
          Systemd unit: running
          Kubelet version: testsys0165 (4.4.0-137-generic) [Kubelet v1.10.6, Proxy v1.10.6]
        Etcd status: pod etcd-testsys0165 in namespace kube-system has status Running
        Addons: calico, kube-proxy, nginx-ingress, 
      - testsys0181.subnet1.example.com (1.1.1.3)
        Type: MASTER
        Install stage: DONE
        Upgrade stage: NONE
        Docker status: 
          Systemd unit: running
        Kubernetes status: 
          Systemd unit: running
          Kubelet version: testsys0181 (4.4.0-137-generic) [Kubelet v1.10.6, Proxy v1.10.6]
        Etcd status: pod etcd-testsys0181 in namespace kube-system has status Running
        Addons: calico, kube-proxy, nginx-ingress, 
      Etcd cluster state:
      - etcd member name: testsys0164.subnet1.example.com, member id: 11019072309842691371, 
             cluster id: 5154498743703662183, leader id: 11019072309842691371, revision: 21848, version: 3.1.17
      - etcd member name: testsys0165.subnet1.example.com, member id: 541472388445093633,
             cluster id: 5154498743703662183, leader id: 11019072309842691371, revision: 21848, version: 3.1.17
      - etcd member name: testsys0181.subnet1.example.com, member id: 3261849123413063575, 
             cluster id: 5154498743703662183, leader id: 11019072309842691371, revision: 21848, version: 3.1.17
         
      Pods Summary:
      
      NODE               NAMESPACE          NAME                                                          READY        STATUS         REASON
      testsys0165        kube-system        calico-node-jp8zv                                             2/2          Running        
      testsys0164        kube-system        calico-node-pjjgh                                             2/2          Running        
      testsys0181        kube-system        calico-node-ssb9w                                             2/2          Running        
      testsys0164        kube-system        coredns-87cb95869-9nvdr                                       1/1          Running        
      testsys0164        kube-system        coredns-87cb95869-r9q8w                                       1/1          Running        
      testsys0164        kube-system        etcd-testsys0164                                              1/1          Running        
      testsys0165        kube-system        etcd-testsys0165                                              1/1          Running        
      testsys0181        kube-system        etcd-testsys0181                                              1/1          Running        
      testsys0165        kube-system        ingress-nginx-ingress-controller-92mkz                        1/1          Running        
      testsys0181        kube-system        ingress-nginx-ingress-controller-kt9sr                        1/1          Running        
      testsys0164        kube-system        ingress-nginx-ingress-controller-p7x55                        1/1          Running        
      testsys0164        kube-system        ingress-nginx-ingress-default-backend-6f58fb5f56-t27gx        1/1          Running        
      testsys0164        kube-system        kube-apiserver-testsys0164                                    1/1          Running        
      testsys0165        kube-system        kube-apiserver-testsys0165                                    1/1          Running        
      testsys0181        kube-system        kube-apiserver-testsys0181                                    1/1          Running        
      testsys0164        kube-system        kube-apiserver-proxy-testsys0164                              1/1          Running        
      testsys0165        kube-system        kube-apiserver-proxy-testsys0165                              1/1          Running        
      testsys0181        kube-system        kube-apiserver-proxy-testsys0181                              1/1          Running        
      testsys0164        kube-system        kube-controller-manager-testsys0164                           1/1          Running        
      testsys0165        kube-system        kube-controller-manager-testsys0165                           1/1          Running        
      testsys0181        kube-system        kube-controller-manager-testsys0181                           1/1          Running        
      testsys0165        kube-system        kube-proxy-7gqpw                                              1/1          Running        
      testsys0181        kube-system        kube-proxy-8hc8t                                              1/1          Running        
      testsys0164        kube-system        kube-proxy-bhgcq                                              1/1          Running        
      testsys0164        kube-system        kube-scheduler-testsys0164                                    1/1          Running        
      testsys0165        kube-system        kube-scheduler-testsys0165                                    1/1          Running        
      testsys0181        kube-system        kube-scheduler-testsys0181                                    1/1          Running        
      testsys0164        kube-system        metrics-server-6fbfb84cdd-lffxc                               1/1          Running        
      testsys0164        kube-system        tiller-deploy-84f4c8bb78-xxfds                                1/1          Running  
      
  30. Verify you can access the API Connect Cloud Manager. Enter the URL in your browser.
    The syntax is https://<hostname.domain>/admin. For example:
    
    https://cloud-admin-ui.testsrv0231.subnet1.example.com/admin

    The first time that you access the Cloud Manager user interface, you enter admin for the user name and 7iron-hide for the password. You will be prompted to change the Cloud Administrator password and email address. See Accessing the Cloud Manager user interface.
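    The URL can be derived from the cloud-admin-ui endpoint that you configured earlier, as this sketch shows using the sample host name from this procedure. The curl probe is shown commented out; -k accepts the appliance's self-signed certificate:

    ```shell
    # Build the Cloud Manager URL from the configured cloud-admin-ui endpoint.
    endpoint=cloud-admin-ui.testsrv0231.subnet1.example.com
    url="https://$endpoint/admin"
    echo "$url"

    # Optional reachability probe from a machine that can resolve the endpoint:
    # curl -k -s -o /dev/null -w '%{http_code}\n' "$url"
    ```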

What to do next

If you want to deploy an API Connect Analytics OVA file, continue with Deploying the Analytics subsystem in a VMware environment.

If you want to deploy an API Connect Developer Portal OVA file, continue with Deploying the Developer Portal in a VMware environment.

Identify the DataPower® appliances to be used as gateway servers in the API Connect cloud and obtain the IP addresses.

Define your API Connect configuration by using the API Connect cloud console. For more information, see Defining the cloud.