Integrating F5 BIG-IP device with IBM Cloud Private

Complete these steps to integrate the F5 BIG-IP device with your IBM® Cloud Private cluster.

Prerequisites

Steps

  1. Configure the F5 BIG-IP device as a peer to your IBM Cloud Private cluster by installing the ibm-calico-bgp-peer chart Version 1.1.0. When you configure the BIG-IP Local Traffic Manager (LTM) device as a peer to your IBM Cloud Private cluster that runs Calico, the LTM device communicates directly with the pods.
  2. Install the ibm-f5bigip-controller chart Version 1.1.0. The chart creates a controller that watches for Kubernetes objects as they are created and communicates the relevant ones to the F5 BIG-IP device. As a result, the F5 BIG-IP device creates the appropriate virtual servers and other corresponding LTM objects.

Configure the F5 BIG-IP device as a peer to your IBM Cloud Private cluster

Complete these steps to add the F5 BIG-IP device as a BGP peer to the Calico mesh in your IBM Cloud Private cluster:

  1. Log in to the management console.
  2. Click Catalog and search for the ibm-calico-bgp-peer chart.
  3. Provide the internal IP address of the F5 BIG-IP device and the autonomous system (AS) number. In an IBM Cloud Private Calico cluster, the AS number is set to 64512. If you need to apply the device as a peer only to a particular cluster node, provide the internal IP address of that node. Otherwise, leave the field blank so that every node in the cluster is aware of this peer. The chart stores these values as a Calico BGPPeer resource; an illustrative example is shown after these steps.
  4. Set the context for calicoctl. You must provide the etcd endpoint URL and the etcd credentials.

    • etcd Endpoint: The etcd endpoint must be of the format https://<master-node-internal-IP-address>:4001. Use the following command to get the etcd endpoint information:

       kubectl get cm etcd-config -ojsonpath={.data.etcd_endpoints} -n kube-system
      

      Following is a sample output of the command:

       https://192.168.70.225:4001
      
    • etcd Secret: The Kubernetes secret object with the CA certificate (etcd-ca), client certificate (etcd-cert), and private key (etcd-key) that provide access to etcd. Note: You must create the Kubernetes secret object in the kube-system namespace.

      apiVersion: v1
      kind: Secret
      metadata:
       name: etcd-secret
       namespace: kube-system
      type: Opaque
      data:
       etcd-ca: LS0......
         .......
         .......
         ..tLQ==
       etcd-cert: MS0......
         .......
         .......
         ..tLQ==
       etcd-key: NS0......
         .......
         .......
         ..tsde2
      

    In IBM Cloud Private, you can use the following command to get the existing etcd secret information:

       kubectl get secret etcd-secret -oyaml -n kube-system
    

    Following is a sample output of the command:

       apiVersion: v1
       data:
         etcd-ca: LS0tLS1CRU....LQ==
         etcd-cert: Q2VydGlm....LS0=
         etcd-key: LS0tLS1....LS0=
       kind: Secret
       metadata:
         name: etcd-secret
         namespace: kube-system
       type: Opaque
    
  5. Provide the calico-ctl image details. These details are required because the calicoctl command is used to configure the BGP peer.

    • Repository is the location of the image.
    • Tag is the image tag. In IBM Cloud Private Version 3.2.1, the tag must be v3.5.2.
    • Pullpolicy is the image pull policy. The default value is IfNotPresent.
  6. Click Install to install the chart.
  7. Log in to the F5 BIG-IP device and run these commands to complete the BGP peer configuration:

    ssh root@<F5-BIG-IP-device-IP-address>
    imish
    enable
    configure terminal
    router bgp <AS number>
    neighbor <cluster-name> peer-group
    neighbor <cluster-name> remote-as <AS number>
    neighbor <master node IP address> peer-group <cluster-name>
    neighbor <worker node 1 IP address> peer-group <cluster-name>
    neighbor <worker node 2 IP address> peer-group <cluster-name>
    write
    exit
    exit
    

    Ensure that all the cluster nodes that you want to add as peers have connectivity with the F5 BIG-IP device. Add all these nodes as peers.

    Following is an example configuration based on the example topology:

    ssh root@192.168.80.254
    imish
    enable
    configure terminal
    router bgp 64512
    neighbor cluster1 peer-group
    neighbor cluster1 remote-as 64512
    neighbor 192.168.70.225 peer-group cluster1  # ICP master node
    neighbor 192.168.70.226 peer-group cluster1  # ICP worker node 1
    neighbor 192.168.70.227 peer-group cluster1  # ICP worker node 2
    write
    exit
    exit
    
  8. Verify that the F5 BIG-IP device is added as a BGP peer in the Calico mesh. Check the node status on any one of the cluster nodes by using the calicoctl utility.

    calicoctl node status
    

    Following is a sample output when the command is run on the master node:

    Calico process is running.
    IPv4 BGP status
    +----------------+-------------------+-------+------------+-------------+
    |  PEER ADDRESS  |     PEER TYPE     | STATE |   SINCE    |    INFO     |
    +----------------+-------------------+-------+------------+-------------+
    | 192.168.70.226 | node-to-node mesh | up    | 2018-04-03 | Established |
    | 192.168.70.227 | node-to-node mesh | up    | 2018-04-03 | Established |
    | 192.168.70.254 | global            | up    | 09:46:47   | Established |
    +----------------+-------------------+-------+------------+-------------+
    
  9. Verify that the route table on the F5 BIG-IP device is updated with routes to all the nodes in your IBM Cloud Private cluster. You can now reach the pod IPs directly from the new BGP peer.

    route -n
    

    Following is a sample output:

    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         192.168.80.1    0.0.0.0         UG    9      0        0 mgmt
    192.168.80.0    0.0.0.0         255.255.255.0   U     0      0        0 mgmt
    10.1.85.192     192.168.70.227  255.255.255.192 UG    0      0        0 internal_vlan
    10.1.145.64     192.168.70.225  255.255.255.192 UG    0      0        0 internal_vlan
    10.1.148.64     192.168.70.226  255.255.255.192 UG    0      0        0 internal_vlan
    192.168.60.0    0.0.0.0         255.255.255.0   U     0      0        0 external_vlan
    192.168.70.0    0.0.0.0         255.255.255.0   U     0      0        0 internal_vlan
    

The peer configuration is complete. You can now install the Helm chart to integrate the F5 BIG-IP Controller with IBM Cloud Private.

Install the ibm-f5bigip-controller chart

Install the ibm-f5bigip-controller chart to deploy the F5 BIG-IP controller that integrates your cluster with the F5 BIG-IP device.

  1. Log in to the management console.
  2. Navigate to Catalog and search for the ibm-f5bigip-controller chart.
  3. Provide the required attributes for the chart. An illustrative set of values, based on the example topology, is shown after these steps.
    • URL is the management IP address of the F5 BIG-IP device.
    • Partition Name is the BIG-IP partition where you configure objects. Make sure that a partition is managed from only one F5 BIG-IP Kubernetes controller. Managing the same partition from multiple F5 BIG-IP Kubernetes controllers might lead to unexpected behavior.
    • Username is the BIG-IP iControl REST user name.
    • Password is the BIG-IP iControl REST password.
    • Pool Member Type is the type of BIG-IP pool members that you want to create. Valid values are cluster or nodeport. Use cluster to create a pool member for each endpoint of the service, that is, for each pod. Use nodeport to create pool members for each node in the Kubernetes cluster and to rely on kube-proxy at the node level to load-balance requests to pods.
    • Default Ingress IP is the IP address that is used by the controller to configure a virtual server for all ingresses with the annotation: virtual-server.f5.com/ip: controller-default.
    • Namespace(s) is the list of Kubernetes namespaces to watch. For example, ["ns1","ns2"].
    • Node Label Selector is the node label. The Kubernetes BIG-IP controller watches only the nodes that have the specified label.
    • Extra Arguments are other Kubernetes BIG-IP controller options. Provide a map in the form of {"key":"value", …}.
  4. Provide the NodeSelector and Tolerations parameter values.
    • NodeSelector selects the node on which the controller runs. For example, if you want the controller pod to be placed on the master node, provide {"role": "master"}.
    • Tolerations are used to schedule the controller on a node with matching taints. For example, [{"key":"dedicated","operator":"Exists","effect":"NoSchedule"},{"key":"CriticalAddonsOnly","operator":"Exists"}].
  5. Click Install to install the chart.
  6. Verify that the F5 BIG-IP controller pod is running and watching for resources in the required namespaces.

    kubectl get po -n <release-namespace>
    

    Following is a sample output:

    NAME                                   READY     STATUS    RESTARTS   AGE
    c1-f5bigipctlr-ctlr-7d8759cd7f-z8zm4   1/1       Running   0          52m
    

Integration of the F5 BIG-IP device with IBM Cloud Private is complete. You can now create virtual servers and load-balance the pods directly from your F5 BIG-IP device.

Example configuration of creating virtual servers from IBM Cloud Private

  1. Create an application with the name myapp in the namespace that the Kubernetes BIG-IP controller is watching, and verify that its pods are running. An illustrative application definition is shown after these steps.

    kubectl get po -n <release-namespace> -owide | grep myapp
    

    Following is a sample output:

    NAME          READY   STATUS    RESTARTS   AGE   IP            NODE
    myapp-kggt8   1/1     Running   0          1h    10.1.148.66   192.168.70.226
    myapp-z5grr   1/1     Running   0          1h    10.1.85.226   192.168.70.227
    
  2. Create a service to expose your application, and verify that it is created. An illustrative service definition is shown after these steps.

    kubectl get svc -n <release-namespace>
    

    Following is a sample output:

    NAME      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
    myapp     ClusterIP   10.0.0.20    <none>        8080/TCP   56s
    
  3. Create a configmap that configures a service-specific front-end virtual server and, if required, pools on the BIG-IP system.

    cat f5_configmap.yaml
    kind: ConfigMap
    apiVersion: v1
    metadata:
     name: myapp-vs
     labels:
       f5type: virtual-server
    data:
     schema: "f5schemadb://bigip-virtual-server_v0.1.1.json"
     data: |
       {
         "virtualServer": {
           "frontend": {
             "balance": "round-robin",
             "mode": "http",
             "partition": "icp",
             "virtualAddress": {
               "bindAddr": "192.168.60.10",
               "port": 80
             }
           },
           "backend": {
             "serviceName": "myapp",
             "servicePort": 8080
           }
         }
       }
    

     Create the configmap in the namespace that the controller watches, for example by running kubectl apply -f f5_configmap.yaml -n <release-namespace>. Then, verify that the configmap is created:

    kubectl get cm -n <release-namespace>
    

    Following is an example output:

    NAME                        DATA      AGE
    myapp-vs                    2         2s
    
  4. After you create the configmap, you can see the virtual servers and pools on the F5 BIG-IP device. Open the F5 BIG-IP device UI and check the following locations:

    • For virtual servers, see F5 BIG-IP UI > <Partition Name> > Local Traffic > Virtual Servers.
    • For pools, see F5 BIG-IP UI > <Partition Name> > Local Traffic > Pools.
  5. Verify that requests to the service are load balanced.

    First request to the service:

     curl 192.168.60.10
    

    Following is an example output:

     Hostname: myapp-kggt8
     Pod Information:
             node name:      192.168.70.226
             pod name:       myapp-kggt8
             pod namespace:  f5
             pod IP: 10.1.148.66
    

    Second request to the service:

     curl 192.168.60.10
    

    Following is an example output:

     Hostname: myapp-z5grr
     Pod Information:
             node name:      192.168.70.227
             pod name:       myapp-z5grr
             pod namespace:  f5
             pod IP: 10.1.85.226
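
The application that is used in this example is not defined in this topic. The following minimal deployment is one illustrative way to create it for step 1; the container image is a placeholder for any HTTP application that listens on port 8080, and the app: myapp label is an assumption that the service sketch that follows also uses.

    # myapp-deployment.yaml (illustrative only)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
          - name: myapp
            image: <your-application-image>   # placeholder image
            ports:
            - containerPort: 8080

Create it in the watched namespace, for example with kubectl apply -f myapp-deployment.yaml -n <release-namespace>.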
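
Similarly, the following minimal service illustrates step 2. It matches the service name, type, and port shown in the sample output; the selector assumes the app: myapp label from the deployment sketch.

    # myapp-service.yaml (illustrative only)
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
    spec:
      type: ClusterIP
      selector:
        app: myapp
      ports:
      - port: 8080
        targetPort: 8080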