Preparing the portal subsystem for deployment

Specify configuration properties for your developer portal subsystem, and create the ISO files.

Before you begin

Review the Deployment requirements on VMware.

Complete Preparing to install API Connect in VMware.

Procedure

  1. Change to the project directory.
    cd <project directory>
  2. Create your initial portal subsystem definition.
    apicup subsys create <subsystem name> portal

    where <subsystem name> is the name you give the portal subsystem that you are creating. The name must consist of lowercase alphanumeric characters or '-', must not contain spaces, and must start and end with a lowercase alphanumeric character.

    Verify that the new subsystem is created, and has empty or default properties set:
    apicup subsys get <subsystem name>
    The following output is returned:
    Appliance settings                                    
    ==================                                    
                                                          
    Name                                   Value           Description 
    ----                                   -----           ------
    additional-cloud-init-file                             (Optional) Path to additional cloud-init yml file 
    ...
    <list of settings continues>
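    For example, a minimal sketch that assumes a portal subsystem named port (a hypothetical name that is reused in the examples throughout this procedure):
      apicup subsys create port portal
      apicup subsys get port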
  3. Specify the deployment profile for the subsystem, for example n1xc4.m16. The available profiles are described here: Planning your deployment topology and profiles.
    apicup subsys set <subsystem name> deployment-profile=<profile>
    Note: The deployment profiles that are shown in the Description column of the apicup subsys get output might not be correct. The available profiles are documented in Planning your deployment topology and profiles.
  4. Specify the API Connect license type.
    apicup subsys set <subsystem name> license-use=<license type>

    The <license type> value must be either production or nonproduction. If not specified, the default value is nonproduction.
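
    For example, assuming the hypothetical subsystem name port, the n1xc4.m16 profile from step 3, and a production license:
      apicup subsys set port deployment-profile=n1xc4.m16
      apicup subsys set port license-use=production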

  5. Optional: Configure scheduled backups of the portal database.

    Refer to the instructions in Backing up and restoring the portal subsystem.

  6. Optional: Configure your logging.
    Logging can be configured after installation, but if you want to capture log events during installation, then you must enable logging beforehand.
    1. Complete the procedure at Configuring remote logging for a VMware deployment.
    2. Enter the following command to apply the logging configuration file that you created:
      apicup subsys set <subsystem name> additional-cloud-init-file=config_file.yml
  7. Set your portal endpoints.

    The endpoints must be unique hostnames that both point to the IP address of the portal VM (single node deployments), or to the IP address of a load balancer that is configured in front of the portal VMs.

    portal-admin
      The unique hostname for communication between your management subsystem and the portal. The values for portal-admin and portal-www must be different.
      apicup subsys set <subsystem name> portal-admin=<unique_hostname.domain>
    portal-www
      The unique hostname for your developer portal sites. Multiple portal-www endpoints can be configured, as described in Defining multiple portal endpoints for a VMware environment.
      apicup subsys set <subsystem name> portal-www=<unique_hostname.domain>
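    For example, assuming the hypothetical subsystem name port and example hostnames in the example.com domain:
      apicup subsys set port portal-admin=padmin.example.com
      apicup subsys set port portal-www=portal.example.com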
  8. Set your search domain. If you have multiple search domains, then use commas to separate them.
    apicup subsys set <subsystem name> search-domain=<your search domain>

    where <your search domain> is the domain for your VMs (use lowercase). For example: myorganization.com. Do not use wildcard DNS.

  9. Set your domain name servers (DNS).

    Supply the IP addresses of the DNS servers for your network. Use a comma to separate multiple server addresses.

    apicup subsys set <subsystem name> dns-servers=ip_address_of_dns_server[,ip_address_of_another_dns_server_if_necessary]
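    For example, assuming the hypothetical subsystem name port, the search domain myorganization.com from the previous step, and two example DNS server addresses:
      apicup subsys set port search-domain=myorganization.com
      apicup subsys set port dns-servers=192.168.1.201,192.168.1.202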
  10. Set the SSH public key to enable SSH public key authentication.

    Specify the path to your SSH public key to enable SSH public key authentication. You can specify multiple paths by using commas as the separator.

    apicup subsys set <subsystem name> ssh-keyfiles=<path to public ssh keyfile>

    The public key is required so that the API Connect administrator can log in to the API Connect VMs with SSH. If you have multiple API Connect administrators, then include the public key for each administrator when you set ssh-keyfiles.

    The default public key is typically located in the user's home directory, for example: <home directory>/.ssh/id_rsa.pub.
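
    For example, a sketch that assumes the hypothetical subsystem name port and two administrators whose public keys are in the default locations:
      apicup subsys set port ssh-keyfiles=/home/admin1/.ssh/id_rsa.pub,/home/admin2/.ssh/id_rsa.pub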

  11. If there are multiple management subsystems in the same project, set the portal subsystem's platform-api and consumer-api certificates to match the ones that are used by the appropriate management subsystem. This ensures that the portal subsystem is associated with the correct management subsystem.

    This step applies only if you defined more than one management subsystem in your project directory.

    A portal subsystem can be associated with only one management subsystem. To associate the new portal subsystem with the appropriate management subsystem, manually set the mgmt-platform-api and mgmt-consumer-api certificates to match the ones that are used by that management subsystem.

    1. Run the following commands to get the certificates from the management subsystem:
      apicup certs get <management subsystem name> platform-api -t cert > platform-api.crt
      apicup certs get <management subsystem name> platform-api -t key > platform-api.key
      apicup certs get <management subsystem name> platform-api -t ca > platform-api-ca.crt
      
      apicup certs get <management subsystem name> consumer-api -t cert > consumer-api.crt
      apicup certs get <management subsystem name> consumer-api -t key > consumer-api.key
      apicup certs get <management subsystem name> consumer-api -t ca > consumer-api-ca.crt

      where <management subsystem name> is the name of the specific management subsystem that you want to associate the new portal subsystem with.

    2. Run the following commands to set the portal's certificates to match those used by the management subsystem:
      apicup certs set <portal subsystem name> mgmt-platform-api <cert file path> <key file path> <CA file path>
      
      apicup certs set <portal subsystem name> mgmt-consumer-api <cert file path> <key file path> <CA file path>
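      For example, using the certificate files that were extracted in the previous substep, and assuming the hypothetical portal subsystem name port:
        apicup certs set port mgmt-platform-api platform-api.crt platform-api.key platform-api-ca.crt
        apicup certs set port mgmt-consumer-api consumer-api.crt consumer-api.key consumer-api-ca.crt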
  12. Set the VM console password.
    1. Important: Review the requirements for creating a hashed password. See Setting and using a hashed default password.
    2. Check that you have a password hashing tool installed.
      Operating system   Command
      Linux, macOS       If the mkpasswd utility is not available, download and install it. (You can also use a different password hashing utility.) On macOS, use the command: gem install mkpasswd
      Windows, Red Hat   Use OpenSSL.
    3. Create a hashed password.
      Operating system   Command
      Ubuntu, Debian, macOS
        mkpasswd --method=sha-512 --rounds=4096 password
      Windows, Red Hat
        For example, with OpenSSL: openssl passwd -1 password
      You might need to add your password hashing utility to your path; for example, on Windows:
        set PATH=c:\cygwin64\bin;%PATH%
    4. Set the hashed password for your subsystem:
      apicup subsys set <subsystem name> default-password='<hashed password>'
      • The password must be hashed. If it is in plain text, you cannot log in to the VMware console.
      • The password can be used only to log in through the VMware console. You cannot use it to SSH into the VM as an alternative to using the ssh-keyfiles.
      • On Linux or macOS, use single quotation marks around <hashed password>. On Windows, use double quotation marks.
      • If you are not using a US English keyboard, make sure that you understand the limitations of using the remote VMware console. See VMware deployment overview and requirements.
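    For example, on Linux, a sketch that hashes a hypothetical password apicadmin with mkpasswd and sets it for the hypothetical subsystem name port in one step (the command substitution passes the literal hash to apicup, so the $ characters in the hash need no extra quoting):
      apicup subsys set port default-password="$(mkpasswd --method=sha-512 --rounds=4096 apicadmin)"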
  13. Optional: If the default IP ranges of 172.16.0.0/16 and 172.17.0.0/16 (which are used by Kubernetes on your API Connect VMs) conflict with IP addresses that are used by other processes in your deployment, then you can set a smaller CIDR.

    A CIDR as small as /22 is supported.

    You can modify the IP ranges during initial installation only. You cannot modify them after the API Connect VMs are deployed. See Key points for API Connect on VMware.
    1. Update the IP range for the Kubernetes pods:
      apicup subsys set <subsystem name> k8s-pod-network='<new pod range>'

      where <new pod range> is the new value for the range.

    2. Update the IP range for Service networks:
      apicup subsys set <subsystem name> k8s-service-network='<new service range>'

      where <new service range> is the new value for the range.
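
    For example, assuming the hypothetical subsystem name port and /22 replacement ranges that do not conflict with your network:
      apicup subsys set port k8s-pod-network='172.16.0.0/22'
      apicup subsys set port k8s-service-network='172.17.0.0/22'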

  14. Define the hostname for each subsystem VM that you are deploying. If you specified a one replica (n1) profile, then you are deploying one VM, so define one hostname. If you specified a three replica (n3) profile, then you are deploying three VMs, so define three hostnames:
    apicup hosts create <subsystem name> <hostname.domainname> <hd password>
    where:
    • <hostname.domainname> is the fully qualified name for the subsystem VM.
    • <hd password> is the password that the Linux Unified Key Setup uses to encrypt the storage for your subsystem. This password is Base64 encoded when stored in your project directory, and is hashed in the ISO file and on the VM.

    Repeat this command for each subsystem VM in your deployment. For example, if you are deploying a one replica profile, run the command once; for a three replica profile, run the command three times (once for each <hostname.domainname>).
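
    For example, assuming the hypothetical subsystem name port, a VM named portal1.example.com, and a hypothetical LUKS password:
      apicup hosts create port portal1.example.com S3curePassw0rd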

  15. Define the network interfaces and IP configuration for your subsystem VMs.
    apicup iface create <subsystem name> <hostname.domainname> <physical network id> <host ip address>/<subnet mask> <network gateway ip address>

    where <physical network id> is the network interface ID of your VM. The value is usually ethx, where x is a number from 0 to 9.

    Example:
    apicup iface create <subsystem name> <myHostname.domain> eth0 192.0.2.10/255.255.255.0 192.0.2.1

    For three replica deployments, repeat this command for each <myHostname.domain> in your subsystem deployment.

    Note: The <network gateway ip address> is the network gateway (not a DataPower Gateway). If you create multiple network interfaces on each VM, each interface must be on a different subnet with a different gateway.
    Note: You can optionally configure a second network interface (NIC) for use with the portal. In this scenario, one NIC is used for internal traffic between the portal and management subsystems, and the second is used as a public interface. For information on configuring two NICs for the portal, see Configuring two NICs on the portal.
  16. Optional: You can specify a range of allowable client IP addresses.
    Create a file called ptl-extra-values.yaml, and paste in the following contents:
      spec:
        portalUIEndpoint:
          annotations:
            ingress.kubernetes.io/whitelist-source-range: 1.2.88.0/24

    In this example, 1.2.88.0/24 is the acceptable range of client IP addresses.

    You can optionally restrict which IP addresses can access any of the ingresses by creating an allowlist annotation, which allows only the specified IP addresses to access the ingress and denies all other source IP addresses. See Specifying a range of allowable client IP addresses for the portal.

    Note: One use of this restriction is the scenario where you split traffic between your private and public networks in a two-NIC configuration.
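    The ptl-extra-values.yaml file that you created at the start of this step takes effect only if your project references it. A sketch of how to reference it, assuming the hypothetical subsystem name port and the standard extra-values-file property (shown in the validation output in step 21):
      apicup subsys set port extra-values-file=ptl-extra-values.yaml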
  17. Optional: Use apicup to view the configured hosts:
    apicup hosts list <subsystem name>
    testsrv0231.subnet1.example.com
        Device  IP/Mask                     Gateway
        eth0    1.2.152.231/255.255.254.0  1.2.152.1
    Note: This command might return the following messages, which you can ignore:
    * host is missing traffic interface 
    * host is missing public interface 
  18. Optional: Enable JWT security instead of mTLS for communication from management to your portal subsystem.
    JWT security provides application layer security and can be used instead of mTLS when load balancers that require TLS termination are located between subsystems. For more information about JWT security, see Enable JWT instead of mTLS. To enable JWT and disable mTLS, first identify the JWKS URL from the management subsystem:
    apicup subsys get <management subsystem name>
    
    ...
    jwks-url     https://appliance1.apic.acme.com/api/cloud/oauth2/certs  JWKS URL for Portal and analytics subsystems to validate JWT -- this is unsettable and is generated based on the platform-api endpoint 
    ...
    Disable mTLS and enable JWT by setting the jwks-url with apicup on your portal subsystem:
    apicup subsys set <subsystem name> mtls-validate-client=false
    apicup subsys set <subsystem name> jwks-url=https://appliance1.apic.acme.com/api/cloud/oauth2/certs
    Note: Do not disable mTLS without enabling JWT.
  19. Define your NTP server.
    Note: If your environment has internet access, then ntp.ubuntu.com is used by default, and you can skip this step.
    1. Create a file called cloud-init.yaml and paste in the details of your NTP server.
      ntp:
        enabled: true
        ntp_client: systemd-timesyncd
        servers:
          - ntp.example.com
    2. Set additional-cloud-init-file to your cloud-init.yaml file.
      apicup subsys set <subsystem name> additional-cloud-init-file=cloud-init.yaml
  20. If you are installing a 2DCDR deployment, then follow the instructions in Installing a two data center deployment to configure your 2DCDR properties.
  21. Verify that the configuration settings are valid.
    apicup subsys get <subsystem name> --validate

    The output shows all configuration settings, with a checkmark against each setting that passes validation. If a setting does not have a checkmark, then it is invalid. See the following sample output.

    Appliance settings                                                                                                                            
    ==================                                                                                                                            
                                                                                                                                                  
    Name                          Value                                                                                                            
    ----                          -----                                                                                                           
    additional-cloud-init-file                                                                                                                     ✔ 
    data-device                   sdb                                                                                                              ✔ 
    default-password              $6$rounds=4096$vtcqpAVK$dzqrOeYP33WTvTug38Q4Rld5l8TmdQgezzTnkX/PFwkzTiZ2S0CqNRr1S4b08tOc4p.OEg4BtzBe/r8RAk.gW/   ✔ 
    dns-servers                   [192.168.1.201]                                                                                                  ✔ 
    extra-values-file                                                                                                                              ✔ 
    k8s-pod-network               172.16.0.0/16                                                                                                    ✔ 
    k8s-service-network           172.17.0.0/16                                                                                                    ✔ 
    public-iface                  eth0                                                                                                             ✔ 
    search-domain                 [subnet1.example.com]                                                                                            ✔ 
    ssh-keyfiles                  [/home/vsphere/.ssh/id_rsa.pub]                                                                                  ✔ 
    traffic-iface                 eth0                                                                                                             ✔ 
                                                                                                                                                  
                                                                                                                                                  
    Subsystem settings                                                                                                                            
    ==================                                                                                                                            
                                                                                                                                                  
    Name                          Value                                                                                                            
    ----                          -----                                                                                                           
    deployment-profile            n1xc2.m8                                                                                                         ✔ 
    license-use                   nonproduction                                                                                                    ✔ 
    multi-site-ha-enabled         false                                                                                                            ✔ 
    multi-site-ha-mode            active                                                                                                           ✔ 
    replication-peer-fqdn                                                                                                                          ✔ 
    site-name                                                                                                                                      ✔ 
                                                                                                                                                  
                                                                                                                                                  
    Endpoints                                                                                                                                     
    =========                                                                                                                                     
                                                                                                                                                  
    Name                          Value                                                                                                            
    ----                          -----                                                                                                           
    portal-admin                  portal-api.example.com                                                                                           ✔ 
    portal-replication                                                                                                                             ✔ 
    portal-www                    portal-www.example.com                                                                                           ✔ 
    
    
  22. Create your ISO files.
    apicup subsys install <subsystem name> --out <subsystem name>plan-out

    The ISO files are created in the <project directory>/<subsystem name>plan-out directory.

    If the system cannot find your ISO file creation tool, then add it to your PATH environment variable:

    Operating system Command
    Linux, macOS
    export PATH=$PATH:/Users/your_path/
    Windows
    set PATH="c:\Program Files (x86)\cdrtools";%PATH%

What to do next

Deploy your portal subsystems: Deploying the portal subsystem OVA.