Deploying the Developer Portal in a VMware environment

You create a Developer Portal node by deploying the Developer Portal OVA template. After you deploy the Developer Portal OVA template, you can install the Developer Portal.

Before you begin

Before you deploy:
Note:

Ensure that your kernel or Kubernetes node has the value of its inotify watches set high enough so that the Developer Portal can monitor and maintain the files for each Developer Portal site. If set too low, the Developer Portal containers might fail to start or go into a non-ready state when this limit is reached. If you have many Developer Portal sites, or if your sites contain a lot of content, for example, many custom modules and themes, then a larger number of inotify watches are required. You can start with a value of 65,000, but for large deployments, this value might need to go up as high as 1,000,000. The Developer Portal containers take inotify watches only when they need them. The full number is not reserved or held, so it is acceptable to set this value high.
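On a Linux node that you administer, you can check the current limit and raise it with sysctl. The following is a minimal sketch; the sysctl.d file name is an example, and raising the limit requires root:

```shell
# Check the current inotify watch limit on the node (Linux).
current=$(cat /proc/sys/fs/inotify/max_user_watches)
echo "fs.inotify.max_user_watches is currently $current"

# To raise the limit for the running kernel (requires root), for example:
#   sysctl -w fs.inotify.max_user_watches=65000
# To persist the setting across reboots (file name is an example):
#   echo 'fs.inotify.max_user_watches=65000' > /etc/sysctl.d/60-inotify.conf
```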

About this task

You must deploy the Developer Portal OVA template to create each Developer Portal node that you want in your cloud. Each node has a separate CLI password account that is required to log in through a Secure Shell (SSH) to complete specific administrative actions for only that node.

Important deployment information for Developer Portal:

  • You must deploy the Developer Portal OVA template by using a version of the VMware vSphere Client that supports the SHA-512 Cryptographic Hash Algorithm.
  • The Developer Portal node is initially configured with a default password of 7iron-hide, with the user name of admin. For security reasons, change the default password by completing one of the following actions:
    • During deployment, if the feature is available in your VMware instance, enter a new password. If you specify a password during deployment, the password for the Developer Portal command line interface (CLI) is modified. Use the new password when you log into the CLI. You cannot modify the admin user name for the CLI.
    • After deployment, log in to the CLI for each virtual appliance and run the command to change the default password for that specific node. The CLI command is passwd.
    Note that this console uses a US keyboard configuration, in which the @ symbol can be in a different place from other keyboard configurations. If you are not using a US keyboard, ensure that you type the password correctly.
  • Only static IP addresses that are specified during the apicup project configuration before the installation of the OVAs are supported.
  • To enable effective high availability for your Portal service, you need a latency that is less than 50ms between all OVAs to avoid the risk of performance degradation. Servers with uniform specifications are required, as any write actions occur at the speed of the slowest OVA, as the write actions are synchronous across the cluster of OVAs. It is recommended that there are three servers in each cluster of OVAs for the high availability configuration. The three servers can be situated in the same data center, or across three data centers to ensure the best availability. However, you can configure high availability with two data centers.
  • The backup secret is a Kubernetes secret that contains your username and password for your backup database (sftp/s3). Only password-based authentication is supported for sftp and s3, not authentication based on public certificates and private keys. Password-based authentication for s3 requires that you generate an access key and secret.
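For illustration, the sftp backup settings that appear later in the sample output could be configured with apicup as follows; the host name, user, and password are placeholders, not values from this installation:

```shell
# Placeholder values: replace the host, user, and password with your own.
apicup subsys set port site-backup-protocol=sftp
apicup subsys set port site-backup-host=backups.example.com
apicup subsys set port site-backup-port=22
apicup subsys set port site-backup-auth-user=backup_user
apicup subsys set port site-backup-auth-pass=backup_password
apicup subsys set port site-backup-path=/site-backups
apicup subsys set port site-backup-schedule='0 2 * * *'
```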

Procedure

Note: The following steps apply to VMware only. Depending on the VMware version that you are using, some of the steps might vary. For example, you might not be able to change the user name and password during deployment.

  1. Ensure that you obtained the distribution file and have a project directory, as described in First steps for deploying in a VMware environment.
  2. Change to the project directory.
    cd myProject
  3. Create a portal subsystem.
    apicup subsys create port portal
    Where:
    • port is the name of the Developer Portal server that you are creating. You can assign it any name, as long as the identifier consists of lower case alphanumeric characters or '-', with no spaces, starts with an alphabetic character, and ends with an alphanumeric character.
    • portal indicates that you want to create a Developer Portal subsystem.
    The apiconnect-up.yml file that is in that directory is updated to add the portal-related entries.
    Tip: At any time, you can view the current developer portal subsystem values in the apiconnect-up.yml by running the apicup get command:
    apicup subsys get port
    If you have not yet configured the subsystem, the command might return errors. Also, if you have not updated a value, its default is listed, if one is available.

    After configuration is complete, you can view output similar to the following sample:

     
    Appliance settings
    ==================
    
    Name                          Value                                                Description
    ----                          -----                                                ------
    additional-cloud-init-file                                                         (Optional) Path to additional cloud-init yml file
    data-device                   sdb                                                  VM disk device (usually `sdb` for SCSI or `vdb` for VirtIO)
    default-password              $6$rounds=4096$iMCJ9cfhFJ8X$pbmAl9ClWzcYzHZFoQ6
                n7OnYCf/owQZIiCpAtWazs/FUn/uE8uLD.9jwHE0AX4upFSqx/jf0ZmDbHPZ9bUlCY1   (Optional) Console login password for `apicadm` user
    dns-servers                   [1.2.136.11]                                         List of DNS servers
    k8s-pod-network               172.16.0.0/16                                       (Optional) CIDR for pods within the appliance
    k8s-service-network           172.17.0.0/16                                       (Optional) CIDR for services within the appliance
    mode                          standard
    public-iface                  eth0                                                Device for API/UI traffic (Eg: eth0)
    search-domain                 [subnet1.example.com]                               List for DNS search domains
    ssh-keyfiles                  [/home/vsphere/.ssh/id_rsa.pub]                     List of SSH public keys files
    traffic-iface                 eth0                                                Device for cluster traffic (Eg: eth0)
    license-version               Production
    
    Subsystem settings
    ==================
    
    Name                          Value                                               Description
    ----                          -----                                               ------
    site-backup-auth-pass                                                            (optional) Server password for portal backups
    site-backup-auth-user                                                            (optional) Server username for portal backups
    site-backup-host                                                                 (optional) FQDN for portal backups server
    site-backup-path              /site-backups                                      (optional) Path for portal backups
    site-backup-port              22                                                 (optional) port for portal backups server
    site-backup-protocol          sftp                                               (Optional) Protocol for portal backups (sftp/objstore)
    site-backup-schedule          0 2 * * *                                          (optional) Cron schedule for portal backups
    
    
    Endpoints
    =========
    
    Name                          Value                                              Description
    ----                          -----                                              ------
    portal-admin                  api.portal.apimdev0232.subnet1.example.com         FQDN of Portal admin endpoint
    portal-www                    portal.apimdev0232.subnet1.example.com             FQDN of Portal web endpoint
  4. For production environments, specify mode=standard.
    apicup subsys set port mode=standard

    The mode=standard parameter indicates that you are deploying in high availability (HA) mode for a production environment. If the mode parameter is omitted, the subsystem deploys by default in dev mode, for use in development and testing. For more information, see Requirements for initial deployment on VMware.

  5. Version 2018.4.1.10 or later: Specify the license version you purchased.

    apicup subsys set port license-version=<license_type>

    The license_type must be either Production or Nonproduction. If not specified, the default value is Nonproduction.

  6. Optional: Configure scheduled backups of the subsystem. This step is optional but is recommended.
  7. Optional: Configure your logging.
    Logging can be configured at a later time, but you must enable it before installation to capture the log events from the installation.
    1. Complete the procedure at Configuring remote logging for a VMware deployment.
    2. Enter the following command to create the log file:
      apicup subsys set port additional-cloud-init-file=config_file.yml
  8. Enter the following commands to update the apiconnect-up.yml with the information for your environment:
    1. Use apicup to set your endpoints.
      You can use wildcard aliases or host aliases with your endpoints.

      Optionally, you can specify all endpoints with one apicup command. See Tips and tricks for using APICUP.

      Note: You cannot specify the underscore character "_" in domain names that are used in endpoints. See Configuration on VMware.

      The endpoints must be unique host names that point either to the IP address of the OVA (single-node deployment) or to the IP address of a load balancer that is configured in front of the OVA nodes. See examples in the sample output in step 3.

      Setting: portal-admin
      This is the unique_hostname for communication between your Cloud Manager and API Manager, and your Developer Portal. The values for portal-admin and portal-www must be different.
      apicup subsys set port portal-admin=unique_hostname.domain

      Setting: portal-www
      This is the unique_hostname for the Developer Portal web site that is created for the Developer Portal. Multiple portal-www endpoints may be configured, as described in Defining multiple portal endpoints for a VMware environment.
      apicup subsys set port portal-www=unique_hostname.domain
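For example, both endpoints can be set in a single command; the host names here are placeholders that must resolve to your OVA or load balancer:

```shell
# Placeholder host names: replace with your own unique host names.
apicup subsys set port \
  portal-admin=api.portal.example.com \
  portal-www=portal.example.com
```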
    2. Set your search domain. Multiple search domains should be separated by commas.
      apicup subsys set port search-domain=your_search_domain

      Where your_search_domain is the domain of your servers, entered in all lowercase. Setting this value ensures that host name lookups also search these domains, which are based on your company's DNS resolution. A sample search domain is mycompany.example.com.

      Ensure that the value for your_search_domain is resolved in the system's /etc/resolv.conf file to avoid "502" errors when accessing the Cloud Manager web site. For example:

      # Generated by resolvconf
      search your_search_domain ibm.com other.domain.com
    3. Set your domain name servers (DNS).
      Supply the IP addresses of the DNS servers for your network. Use a comma to separate multiple server addresses.
      apicup subsys set port dns-servers=ip_address_of_dns_server[,ip_address_of_another_dns_server_if_necessary]
      DNS entries may not be changed on a cluster after the initial installation.
  9. Set a Public key.
    apicup subsys set port ssh-keyfiles=path_to_public_ssh_keyfile

    Setting this key enables you to use ssh with this key to log in to the virtual machine to check the status of the installation. You will perform this check in step 29 of these instructions.
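If you do not already have a key pair, you can generate one and register its public half; the file path is an example:

```shell
# Generate an RSA key pair (example path) and register the public key.
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/apic_id_rsa
apicup subsys set port ssh-keyfiles=~/.ssh/apic_id_rsa.pub
```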

  10. You can set a hashed password that you enter to log in to your Developer Portal server for the first time.
    1. Important: Review the requirements for creating and using a hashed password. See Setting and using a hashed default password .
    2. If you do not have a password hashing utility, install one.
      Operating system Command
      Ubuntu, Debian, OSX If the mkpasswd command utility is not available, download and install it. (You can also use a different password hashing utility.) On OSX, use the command: gem install mkpasswd.
      Windows, Red Hat If necessary, download and install a password hashing utility such as OpenSSL.
    3. Create a hashed password.
      Operating system Command
      Ubuntu, Debian, OSX
      mkpasswd --method=sha-512 --rounds=4096 password
      Windows, Red Hat For example, using OpenSSL: openssl passwd -1 password. Note that you might need to add your password hashing utility to your path; for example, in Windows:
      set PATH=c:\cygwin64\bin;%PATH%
    4. Set the hashed password for your subsystem:
      apicup subsys set port default-password="hashed_password"

    Notes:

    • The password must be hashed. If it is in plain text, you cannot log in to the VMware console.
    • The password can be used only to log in through the VMware console. You cannot use it to SSH into the appliance as an alternative to using the ssh-keyfiles.
    • On Linux or OSX, use single quotes around hashed_password. For Windows, use double quotes.
    • If you are using a non-English keyboard, understand the limitations with using the remote VMware console. See Requirements for initial deployment on VMware.
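Putting these substeps together on Ubuntu, for example (the plain-text password is a placeholder; here the shell substitutes the generated hash, so double quotes are used around the variable):

```shell
# Placeholder password: replace 'MyPortalPassw0rd' with your own.
HASH=$(mkpasswd --method=sha-512 --rounds=4096 'MyPortalPassw0rd')
apicup subsys set port default-password="$HASH"
```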
  11. Optional: If the default IP ranges for the API Connect Kubernetes pod and the service networks conflict with IP addresses that must be used by other processes in your deployment, modify the API Connect values.
    You can change the IP ranges of the Kubernetes pod and the service networks from the default values of 172.16.0.0/16 and 172.17.0.0/16, respectively. In the case that a /16 subnet overlaps with existing IPs on the network, a Classless Inter-Domain Routing (CIDR) as small as /22 is acceptable. You can modify these ranges during initial installation and configuration only. You cannot modify them once an appliance has been deployed. See Configuration on VMware.
    1. Update the IP range for the Kubernetes pod
      apicup subsys set port k8s-pod-network='new_pod_range'

      Where new_pod_range is the new value for the range.

    2. Update the IP range for Service networks.
      apicup subsys set port k8s-service-network='new_service_range'

      Where new_service_range is the new value for the range.
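For example, to avoid a clash with an existing 172.16.0.0/16 network, both ranges could be moved to /22 subnets; the values here are placeholders:

```shell
# Placeholder ranges: choose CIDRs that do not overlap your network.
apicup subsys set port k8s-pod-network='10.100.0.0/22'
apicup subsys set port k8s-service-network='10.100.4.0/22'
```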

  12. Add your hosts.
    apicup hosts create port hostname.domainname hd_password
    Where the following are true:
    • hostname.domainname is the fully qualified name of the server where you are hosting your Developer Portal, including the domain information.
    • hd_password is the password that the Linux Unified Key Setup (LUKS) uses to encrypt the storage for your Developer Portal. This password is hashed when it is stored on the server or in the ISO. Note that the password is base64 encoded when stored in apiconnect-up.yml.

      Repeat this command for each host that you want to add.

      Note:
      • Host names and DNS entries may not be changed on a cluster after the initial installation.
      • Version 2018.4.1.0: Ensure that reverse DNS lookup is configured for the host names.
        nslookup <ip_address>

        For Version 2018.4.1.1 or later, Reverse DNS lookup is not required.
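For example, a three-node high-availability cluster would repeat the command once per host; the host names and LUKS password are placeholders:

```shell
# Placeholder host names and password: replace with your own values.
apicup hosts create port portal1.example.com 'LuksP@ssw0rd'
apicup hosts create port portal2.example.com 'LuksP@ssw0rd'
apicup hosts create port portal3.example.com 'LuksP@ssw0rd'
```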

  13. Create your interfaces.
    apicup iface create port hostname.domainname physical_network_id host_ip_address/subnet_mask gateway_ip_address
    Where physical_network_id is the network interface ID of your physical server. The value is most often eth0. The value can also be ethx, where x is a number identifier.
    The format is similar to this example: apicup iface create port myHostname.domain eth0 192.0.2.10/255.255.255.0 192.0.2.1
  14. Optional: Use apicup to view the configured hosts:
    apicup hosts list port
    apimdev0232.hursley.ibm.com
     Device  IP/Mask                     Gateway
     eth0    1.2.152.232/255.255.254.0   1.2.152.1
  15. Optional: Verify that the configuration settings are valid.
    apicup subsys get port --validate

    The output lists each setting and adds a check mark after the value once the value is validated. If the setting lacks a check mark and indicates an invalid value, reconfigure the setting. See the following sample output.

    
    apicup subsys get port --validate
    Appliance settings
    ==================
    
    Name                          Value
    ----                          -----
    additional-cloud-init-file                                                    ✔
    data-device                   sdb                                             ✔
    default-password              $6$rounds=4096$iMCJ9cfhFJ8X$pbmAl9ClWzcYz
      HZFoQ6n7OnYCf/owQZIiCpAtWazs/FUn/uE8uLD.9jwHE0AX4upFSqx/jf0ZmDbHPZ9bUlCY1   ✔
    dns-servers                   [1.2.136.11]                                    ✔
    k8s-pod-network               172.16.0.0/16                                   ✔
    k8s-service-network           172.17.0.0/16                                   ✔
    mode                          standard                                        ✔
    public-iface                  eth0                                            ✔
    search-domain                 [subnet1.example.com]                           ✔
    ssh-keyfiles                  [/home/vsphere/.ssh/id_rsa.pub]                 ✔
    traffic-iface                 eth0                                            ✔
    license-version               Production                                      ✔
    
    Subsystem settings
    ==================
    
    Name                          Value
    ----                          -----
    site-backup-auth-pass                                                         ✔
    site-backup-auth-user                                                         ✔
    site-backup-host                                                              ✔
    site-backup-path              /site-backups                                   ✔
    site-backup-port              22                                              ✔
    site-backup-protocol          sftp                                            ✔
    site-backup-schedule          0 2 * * *                                       ✔
    
    Endpoints
    =========
    
    Name                          Value
    ----                          -----
    portal-admin                  api.portal.testsrv0232.subnet1.example.com      ✔
    portal-www                    portal.testsrv0232.subnet1.example.com          ✔
    
  16. Create your ISO file.
    apicup subsys install port --out portplan-out

    The --out parameter and value are required. In this example, the ISO file is created in the myProject/portplan-out directory.

    If the system cannot find the path to your software that creates ISO files, create a path setting to that software by running a command similar to the following command:

    Operating system Command
    OSX and Linux
    export PATH=$PATH:/Users/your_path/
    Windows
    set PATH="c:\Program Files (x86)\cdrtools";%PATH%
  17. Log into the VMware vSphere Web Client.
  18. Using the vSphere Navigator, navigate to the directory where you are deploying the OVA file.
  19. Right-click the directory and select Deploy OVF Template.
  20. Complete the Deploy OVF Template wizard.
    1. Select the apiconnect-portal.ova template by navigating to the location where you downloaded the file from Passport Advantage®.
    2. Enter a name and location for your file.
    3. Select a resource for your template.
    4. Review the details for your template.
    5. Select the size of your configuration.
    6. Select the storage settings.

      Note that the number of Central Processing Units, RAM, and size of disk that you need for the Developer Portal varies depending on the number of sites that are hosted and the number of concurrent users you expect your site to have:

      Table 1. Developer Portal hardware requirements
      Number of sites Number of concurrent users Number of CPUs Amount of RAM (GB) Data Disk Size (GB)++
      1 1 2** 4 50
      20 5 4 16 70
      100 20 8 32 100
      100 100 16 64 500
      Important:
      • ++The data disk size is extra to the main disk of the OVA. The main disk of the OVA is sized at 100GB, and should not be changed. Therefore, the total disk size of the OVA is 100GB plus the data disk size. The default data disk size is 100GB. If you want to select a different value, then you need to do this by using the OVF Tool, or the VMware GUI, before you power on the VM. Note that certain versions of the VMware GUI may not allow you to resize the data disk, and in this case you should use the OVF Tool.
      • **The requirement of 2 CPUs is suitable only for proof-of-concept work, and non-high availability deployments. For example, this configuration is suitable for the demo mode of the apicup installation, which is set by using the command apicup subsys set port mode=dev. Note that standard mode is set by default.
      • It's not recommended to have more than 100 sites per Developer Portal service. Note that it's not necessary to have a Portal site for every Catalog, for example Catalogs that are only for API Developers don't need a Portal site, as the APIs can be tested by using credentials from the API Manager. If more than 100 sites are required, you should configure additional Developer Portal services; see Registering a Portal service.
    7. Select the networks.
    8. Customize the Template, if necessary.
    9. Review the details to ensure that they are correct.
    10. Select Finish to deploy the virtual machine.
    Note: Do not change the OVA hardware version, even if the VMware UI shows a Compatibility range that includes other versions. See Requirements for initial deployment on VMware.
    The template creation appears in your Recent Tasks list.
  21. Select the Storage tab in the Navigator.
  22. Navigate to your datastore.
  23. Upload your ISO file.
    1. Select the Navigate to the datastore file browser icon in the icon menu.
    2. Select the Upload a file to the Datastore icon in the icon menu.
    3. Navigate to the ISO file that you created in your project.
      It is in the myProject/portplan-out directory.
    4. Upload the ISO file to the datastore.
  24. Leave the datastore by selecting the VMs and Templates icon in the Navigator.
  25. Locate and select your virtual machine.
  26. Select the Configure tab in the main window.
  27. Select Edit....
    1. On the Virtual Hardware tab, select CD/DVD Drive 1.
    2. For the Client Device, select Datastore ISO File.
    3. Find and select your datastore in the Datastores category.
    4. Find and select your ISO file in the Contents category.
    5. Select OK to commit your selection and exit the Select File window.
    6. Ensure that the Connect At Power On check box is selected.
      Tip:
      • Expand the CD/DVD drive 1 entry to view the details and the complete Connect At Power On label.
      • Note that VMware-related issues with ISO mounting at boot might occur if Connect At Power On is not selected.
    7. Select OK to commit your selection and close the window.
  28. Start the virtual machine by selecting the play button on the icon bar.
    The installation might take several minutes to complete, depending on the availability of the system and the download speed.
  29. Log in to the virtual machine by using an SSH tool to check the status of the installation:
    1. Enter the following command to connect to the appliance by using SSH:
      ssh ip_address -l apicadm
      You are logging in with the default ID of apicadm, which is the API Connect ID that has administrator privileges.
    2. Select Yes to continue connecting.
      Your host names are automatically added to your list of hosts.
    3. Run the apic status command to verify that the installation completed and the system is running correctly.
      The command output for a correctly running Developer Portal system is similar to the following lines:
      $ sudo apic status
      INFO[0000] Log level: info
      
      Cluster members:
      - testsrv1251.subnet1.example.com (1.2.3.4)
        Type: BOOTSTRAP_MASTER
        Install stage: DONE
        Upgrade stage: NONE
        Docker status:
          Systemd unit: running
        Kubernetes status:
          Systemd unit: running
          Kubelet version: testsrv1251 (4.4.0-137-generic) [Kubelet v1.10.6, Proxy v1.10.6]
        Etcd status: pod etcd-testsrv1251 in namespace kube-system has status Running
        Addons: calico, dns, helm, kube-proxy, metrics-server, nginx-ingress,
      Etcd cluster state:
      - etcd member name: testsrv1251.subnet1.example.com, member id: 10293853252850049269, cluster id: 17044377177359475136, leader id: 10293853252850049269, revision: 1485879, version: 3.1.17
      
      Pods Summary:
      NODE               NAMESPACE          NAME                                                          READY        STATUS         REASON
      testsrv1251        default            re702738954-apic-portal-db-f89vn                              2/2          Running
                         default            re702738954-apic-portal-nginx-6ffb8676d9-gtfc6                0/0          Pending
                         default            re702738954-apic-portal-nginx-6ffb8676d9-nqdzf                0/0          Pending
      testsrv1251        default            re702738954-apic-portal-nginx-6ffb8676d9-q85mt                1/1          Running
      testsrv1251        default            re702738954-apic-portal-www-p9bvx                             2/2          Running
      testsrv1251        kube-system        calico-node-xkpbk                                             2/2          Running
      testsrv1251        kube-system        coredns-87cb95869-p4qhf                                       1/1          Running
      testsrv1251        kube-system        coredns-87cb95869-z2n5z                                       1/1          Running
      testsrv1251        kube-system        etcd-testsrv1251                                              1/1          Running
      testsrv1251        kube-system        ingress-nginx-ingress-controller-dsnxw                        1/1          Running
      testsrv1251        kube-system        ingress-nginx-ingress-default-backend-6f58fb5f56-ldx7t        1/1          Running
      testsrv1251        kube-system        kube-apiserver-testsrv1251                                    1/1          Running
      testsrv1251        kube-system        kube-apiserver-proxy-testsrv1251                              1/1          Running
      testsrv1251        kube-system        kube-controller-manager-testsrv1251                           1/1          Running
      testsrv1251        kube-system        kube-proxy-4pp8v                                              1/1          Running
      testsrv1251        kube-system        kube-scheduler-testsrv1251                                    1/1          Running
      testsrv1251        kube-system        metrics-server-6fbfb84cdd-hkztz                               1/1          Running
      testsrv1251        kube-system        tiller-deploy-84f4c8bb78-v6k95                                1/1          Running
      
  30. If you have now installed all subsystems, continue to Access the Cloud Manager and begin API Connect Cloud Configuration.

What to do next

If you want to deploy an API Connect Analytics OVA file, continue with Deploying the Analytics subsystem in a VMware environment.

If you did not specify a new password during deployment in VMware, then after deployment, log in to the command-line interface (CLI) for each appliance and run the command passwd to change the password.