Operator-based network setup

Follow the instructions in this task to set up the network for the OpenShift operator. This scenario is typically used with Netezza® Performance Server installations. After migrating to 2.0.x, you cannot use the default NPS ports when connecting to an instance unless an additional IP address is configured for it. An extra IP address is mandatory for each production instance. For development instances, you can use either a dedicated IP address or a node port-based setup.

Before you begin

This process assumes a functional application network with switches configured after migrating to version 2.0. For more information on the standard YAML file for network setup, see Node side network configuration.

Procedure

  1. In the /opt/ibm/appliance/platform/apos-comms/customer_network_config/ansible/System_Name.yml file, add the openshift_networking_enabled boolean after the line application_network_enabled: True:
        application_network_enabled: True
        openshift_networking_enabled: True
    Note: When editing your System_Name.yml file, you must use spaces for indentation. Do not use tabs.
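    As an illustration, the whitespace requirement in the note above can be checked with a short script before you run any playbooks. This is a sketch, not part of the product tooling; pass it the path of whichever copy of System_Name.yml you are editing.

```python
# Sketch (not product tooling): report lines of a YAML file whose
# indentation contains tab characters, which YAML does not allow.
def find_tab_indented_lines(path):
    """Return (line_number, text) pairs whose leading whitespace has a tab."""
    offending = []
    with open(path) as f:
        for num, line in enumerate(f, start=1):
            indent = line[:len(line) - len(line.lstrip())]
            if "\t" in indent:
                offending.append((num, line.rstrip("\n")))
    return offending
```

    An empty result means the file's indentation is tab-free; any reported lines should be re-indented with spaces.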
  2. Add the OpenShift-specific IP address values to the corresponding application network section of the YAML file:
        application_network:
          network1:
            default_gateway: true
            vlan: 901
            prefix: 24
            gateway: 9.42.116.1
            floating_ip: 9.42.116.205
            mtu: 1500
            custom_routes: <OPTIONAL>
            additional_openshift_ipaddrs: ["9.42.77.65/24"]
            additional_openshift_routes: ["default via 9.42.77.1"]
    

    This example adds an extra IP address, 9.42.77.65/24, in VLAN 901, with the static route default via 9.42.77.1.

    Note: If you have multiple networks and, for example, network 2 does not use OpenShift networking, retain the additional_openshift_ipaddrs: ["<OPTIONAL>"] and additional_openshift_routes: ["<OPTIONAL>"] template values as follows:
             network2:
               default_gateway: false
               vlan: 4082
               # just number, no slash
               prefix: 24
               gateway:
               floating_ip: 192.168.30.1
               mtu: 1500
               custom_routes: <OPTIONAL>
               additional_openshift_ipaddrs: ["<OPTIONAL>"]
               additional_openshift_routes: ["<OPTIONAL>"]
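    As a sketch, the address and route values from this step can be sanity-checked with Python's standard ipaddress module before you run the playbook. The check below covers only the formats shown in the example above (a CIDR address and a default via <gateway> route); it is illustrative and is not the playbook's own validation.

```python
import ipaddress

# Sketch: validate additional_openshift_ipaddrs ("a.b.c.d/prefix") and
# additional_openshift_routes ("default via a.b.c.d") values. Only the
# formats from the example in this procedure are covered.
def validate_openshift_addrs(ipaddrs, routes):
    for addr in ipaddrs:
        ipaddress.ip_interface(addr)       # raises ValueError on a bad CIDR
    for route in routes:
        parts = route.split()
        if len(parts) != 3 or parts[:2] != ["default", "via"]:
            raise ValueError("unexpected route format: %s" % route)
        ipaddress.ip_address(parts[2])     # gateway must be a bare IP address
    return True
```

    For example, validate_openshift_addrs(["9.42.77.65/24"], ["default via 9.42.77.1"]) passes, while a malformed address raises ValueError.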
  3. Run the set_openshift_addrs.yml playbook:
    ANSIBLE_HASH_BEHAVIOUR=merge ANSIBLE_LOG_PATH=/var/log/appliance/platform/apos-comms/house_setup.log ansible-playbook -i /opt/ibm/appliance/platform/apos-comms/customer_network_config/ansible/System_Name.yml /opt/ibm/appliance/platform/apos-comms/customer_network_config/ansible/playbooks/set_openshift_addrs.yml
    Alternatively, run the same playbook without output to the screen:
    /opt/ibm/appliance/platform/apos-comms/tools/aposHouseConfig.py --limit_play set_openshift_addrs
    Note: The set_openshift_addrs.yml playbook configures only the additional OpenShift addresses. If you made any other changes to the YAML file as part of this procedure (for example, added a new network), those changes are not applied unless you first run the standard network configuration playbook, as described in Testing the YAML file and running playbooks, and only then run set_openshift_addrs.yml.
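    If you script this step, the invocation can be assembled programmatically. The following is a hypothetical wrapper, not shipped tooling; the paths mirror the command above, and System_Name.yml remains a placeholder for your file name.

```python
import os

# Hypothetical wrapper around the ansible-playbook command from step 3.
# Paths mirror this procedure; System_Name.yml is a placeholder file name.
ANSIBLE_DIR = "/opt/ibm/appliance/platform/apos-comms/customer_network_config/ansible"

def build_playbook_cmd(inventory="System_Name.yml"):
    """Return (argv, env) for running the set_openshift_addrs.yml playbook."""
    env = dict(os.environ)
    env["ANSIBLE_HASH_BEHAVIOUR"] = "merge"  # merge dict variables instead of replacing them
    env["ANSIBLE_LOG_PATH"] = "/var/log/appliance/platform/apos-comms/house_setup.log"
    argv = [
        "ansible-playbook",
        "-i", "%s/%s" % (ANSIBLE_DIR, inventory),
        "%s/playbooks/set_openshift_addrs.yml" % ANSIBLE_DIR,
    ]
    return argv, env
```

    The returned argv and env could then be passed to subprocess.run; the two environment variables reproduce the ANSIBLE_HASH_BEHAVIOUR=merge and ANSIBLE_LOG_PATH settings from the command above.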
  4. Check the status of the pods:
    oc get po -n ap-keepalived
    After running the playbook, three extra keepalived pods (with random suffixes) are created along with the ap-keepalived-operator pod. They should all be in Running status:
    [root@node1 ~]# oc get pods -n ap-keepalived
    NAME                                      READY   STATUS    RESTARTS   AGE
    ap-keepalived-brvkr                       1/1     Running   0          19h
    ap-keepalived-j8dct                       1/1     Running   0          19h
    ap-keepalived-operator-696d87bb69-vs9gn   1/1     Running   0          19h
    ap-keepalived-vr75q                       1/1     Running   0          19h
    You should also be able to ping the addresses that you set in the additional_openshift_ipaddrs section.
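As an illustrative check, the oc get pods output can be parsed to confirm that every pod in the namespace is Running. This sketch assumes the default column layout shown in the sample output above; in practice you would pass it the captured output of the command.

```python
# Sketch: given the text output of `oc get pods -n ap-keepalived`,
# return whether every pod reports STATUS == Running, plus the statuses.
# Assumes the default column layout (NAME READY STATUS RESTARTS AGE).
def all_pods_running(oc_output):
    statuses = {}
    for line in oc_output.strip().splitlines()[1:]:  # skip the header row
        cols = line.split()
        statuses[cols[0]] = cols[2]                  # NAME -> STATUS
    return all(s == "Running" for s in statuses.values()), statuses
```

Applied to the sample output above, it reports four pods, all Running; a pod stuck in another state (for example, CrashLoopBackOff) would make the first return value False.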