Verify installation of the Management subsystem

Verify that deployment of the Management subsystem OVA file succeeded.

Before you begin

You must have completed Deploying the Management subsystem OVA file.

About this task

After you configure an ISO and deploy it with the Management subsystem OVA file, as described in Deploying the Management subsystem OVA file, verify that the subsystem installed correctly and is functional.

Procedure

  1. Log in to the virtual machine by using an SSH tool to check the status of the installation:
    1. Enter the following command to connect to the management subsystem by using SSH:
      ssh ip_address -l apicadm
      You are logging in with the default ID of apicadm, which is the API Connect ID that has administrator privileges.
    2. Enter yes when you are prompted to continue connecting.
      The host name is added automatically to your list of known hosts.
    3. Run the sudo apic status command to verify that the installation completed and that the system is running correctly.

      Note that after installation completes, it can take several minutes for all servers to start. If you see the error message Subsystems not running, wait a few minutes, try the command again, and review the output in the STATUS column. If you prefer to wait from the command line, see the polling sketch after the sample output.

      The command output for a correctly running Management system is similar to the following lines:

      apicadm@testsys0181:~$ sudo apic status
      
      INFO[0001] Log level: info                              
      Cluster members:
      - testsys0164.subnet1.example.com (1.1.1.1)
        Type: BOOTSTRAP_MASTER
        Install stage: DONE
        Upgrade stage: NONE
        Docker status: 
          Systemd unit: running
        Kubernetes status: 
          Systemd unit: running
          Kubelet version: testsys0164 (4.4.0-137-generic) [Kubelet v1.10.6, Proxy v1.10.6]
        Etcd status: pod etcd-testsys0164 in namespace kube-system has status Running
        Addons: calico, dns, helm, kube-proxy, metrics-server, nginx-ingress, 
      - testsys0165.subnet1.example.com (1.1.1.2)
        Type: MASTER
        Install stage: DONE
        Upgrade stage: NONE
        Docker status: 
          Systemd unit: running
        Kubernetes status: 
          Systemd unit: running
          Kubelet version: testsys0165 (4.4.0-137-generic) [Kubelet v1.10.6, Proxy v1.10.6]
        Etcd status: pod etcd-testsys0165 in namespace kube-system has status Running
        Addons: calico, kube-proxy, nginx-ingress, 
      - testsys0181.subnet1.example.com (1.1.1.3)
        Type: MASTER
        Install stage: DONE
        Upgrade stage: NONE
        Docker status: 
          Systemd unit: running
        Kubernetes status: 
          Systemd unit: running
          Kubelet version: testsys0181 (4.4.0-137-generic) [Kubelet v1.10.6, Proxy v1.10.6]
        Etcd status: pod etcd-testsys0181 in namespace kube-system has status Running
        Addons: calico, kube-proxy, nginx-ingress, 
      Etcd cluster state:
      - etcd member name: testsys0164.subnet1.example.com, member id: 11019072309842691371, 
             cluster id: 5154498743703662183, leader id: 11019072309842691371, revision: 21848, version: 3.1.17
      - etcd member name: testsys0165.subnet1.example.com, member id: 541472388445093633,
             cluster id: 5154498743703662183, leader id: 11019072309842691371, revision: 21848, version: 3.1.17
      - etcd member name: testsys0181.subnet1.example.com, member id: 3261849123413063575, 
             cluster id: 5154498743703662183, leader id: 11019072309842691371, revision: 21848, version: 3.1.17
         
      Pods Summary:
      
      NODE               NAMESPACE          NAME                                                          READY        STATUS         REASON
      testsys0165        kube-system        calico-node-jp8zv                                             2/2          Running        
      testsys0164        kube-system        calico-node-pjjgh                                             2/2          Running        
      testsys0181        kube-system        calico-node-ssb9w                                             2/2          Running        
      testsys0164        kube-system        coredns-87cb95869-9nvdr                                       1/1          Running        
      testsys0164        kube-system        coredns-87cb95869-r9q8w                                       1/1          Running        
      testsys0164        kube-system        etcd-testsys0164                                              1/1          Running        
      testsys0165        kube-system        etcd-testsys0165                                              1/1          Running        
      testsys0181        kube-system        etcd-testsys0181                                              1/1          Running        
      testsys0165        kube-system        ingress-nginx-ingress-controller-92mkz                        1/1          Running        
      testsys0181        kube-system        ingress-nginx-ingress-controller-kt9sr                        1/1          Running        
      testsys0164        kube-system        ingress-nginx-ingress-controller-p7x55                        1/1          Running        
      testsys0164        kube-system        ingress-nginx-ingress-default-backend-6f58fb5f56-t27gx        1/1          Running        
      testsys0164        kube-system        kube-apiserver-testsys0164                                    1/1          Running        
      testsys0165        kube-system        kube-apiserver-testsys0165                                    1/1          Running        
      testsys0181        kube-system        kube-apiserver-testsys0181                                    1/1          Running        
      testsys0164        kube-system        kube-apiserver-proxy-testsys0164                              1/1          Running        
      testsys0165        kube-system        kube-apiserver-proxy-testsys0165                              1/1          Running        
      testsys0181        kube-system        kube-apiserver-proxy-testsys0181                              1/1          Running        
      testsys0164        kube-system        kube-controller-manager-testsys0164                           1/1          Running        
      testsys0165        kube-system        kube-controller-manager-testsys0165                           1/1          Running        
      testsys0181        kube-system        kube-controller-manager-testsys0181                           1/1          Running        
      testsys0165        kube-system        kube-proxy-7gqpw                                              1/1          Running        
      testsys0181        kube-system        kube-proxy-8hc8t                                              1/1          Running        
      testsys0164        kube-system        kube-proxy-bhgcq                                              1/1          Running        
      testsys0164        kube-system        kube-scheduler-testsys0164                                    1/1          Running        
      testsys0165        kube-system        kube-scheduler-testsys0165                                    1/1          Running        
      testsys0181        kube-system        kube-scheduler-testsys0181                                    1/1          Running        
      testsys0164        kube-system        metrics-server-6fbfb84cdd-lffxc                               1/1          Running        
      testsys0164        kube-system        tiller-deploy-84f4c8bb78-xxfds                                1/1          Running  
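
      If you prefer to wait for startup from the command line instead of rerunning the command manually, a minimal polling sketch such as the following can help. It assumes that 1.1.1.1 is the illustrative address from the sample output, and that apic status returns a nonzero exit code while servers are still starting; both are assumptions, not documented behavior.

      # Connect to the appliance (1.1.1.1 is an illustrative address only).
      ssh 1.1.1.1 -l apicadm

      # On the appliance: re-run the status check every 30 seconds until it
      # succeeds. Assumption: apic status exits nonzero while servers are
      # still starting, so the loop ends when the system is fully running.
      while ! sudo apic status; do
        echo "Subsystems not running yet; retrying in 30 seconds..."
        sleep 30
      done
      echo "Management subsystem is running."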
      
  2. Verify that you can access the API Connect Cloud Manager UI by entering the URL in your browser.
    The syntax is https://<hostname.domain>/admin. For example:
    
    https://cloud-admin-ui.testsrv0231.subnet1.example.com/admin

    The first time that you access the Cloud Manager user interface, enter admin for the user name and 7iron-hide for the password. You are prompted to change the Cloud Administrator password and email address. See Accessing the Cloud Manager user interface.
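
    Optionally, before you open a browser, you can confirm from a terminal that the endpoint responds. This is a minimal sketch only: the host name is the example value from above, and the -k option skips certificate verification because a newly deployed system typically presents a self-signed certificate.

    # Quick reachability check for the Cloud Manager UI (example host name).
    # -I requests only the response headers; -k skips TLS verification,
    # which is needed while the system presents a self-signed certificate.
    curl -kI https://cloud-admin-ui.testsrv0231.subnet1.example.com/admin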

  3. Take a backup of your project directory (a simple sketch follows this step) and configure scheduled management database backups. See Backing up and restoring the Management subsystem.
    Note: If you installed v10.0.6 and already configured database backups during installation, verify that the configuration succeeded. See Verify configuration for s3 backup V10.0.6.0.
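
    As a simple illustration of the project directory backup, the following sketch archives a hypothetical project directory named myProject into a dated tarball; the directory name is an assumption, and the archive should be stored in a location separate from the appliance. Scheduled database backups are configured separately, as described in Backing up and restoring the Management subsystem.

    # Archive a hypothetical project directory (replace myProject with the
    # name of your own project directory) into a dated tarball.
    tar -czf myProject-$(date +%Y%m%d).tar.gz myProject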