Integrating VMware NSX-T 2.4 with IBM Cloud Private

VMware NSX-T 2.4 (NSX-T) provides networking capability to an IBM® Cloud Private cluster.

Note: If you configured NSX-T on an IBM Cloud Private cluster, removing worker nodes from your cluster does not remove the ports and flows from the Open vSwitch (OVS) bridge.
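
You can inspect and remove the stale ports and flows manually with standard OVS commands. The following is a minimal sketch that assumes the NSX-T managed bridge is named br-int; check the actual bridge name on your node first (it matches the ovs_bridge value in your config.yaml file):

    # List the bridges and the ports that are still attached to the NSX-T bridge
    ovs-vsctl list-br
    ovs-vsctl list-ports br-int

    # Inspect the remaining flows, then remove a leftover port for a removed worker node
    ovs-ofctl dump-flows br-int
    ovs-vsctl del-port br-int <stale_port_name>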

Important: If you uninstall IBM Cloud Private, you must remove the cluster-related entries, such as firewall rules, switches, and Tier-1 routers from your NSX-T manager.
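
You can remove these objects in the NSX-T manager user interface. As a hedged alternative, the following sketch locates the leftover objects with the NSX-T Manager REST API; the endpoint paths and manager address are assumptions based on the NSX-T 2.4 Manager API, so verify them against your NSX-T API documentation before you use them.

    # List Tier-1 routers, logical switches, and firewall sections, then delete
    # the entries that belong to the uninstalled cluster by ID (all IDs are placeholders)
    curl -k -u admin:<password> https://<nsx_manager>/api/v1/logical-routers
    curl -k -u admin:<password> https://<nsx_manager>/api/v1/logical-switches
    curl -k -u admin:<password> https://<nsx_manager>/api/v1/firewall/sections

    curl -k -u admin:<password> -X DELETE https://<nsx_manager>/api/v1/logical-routers/<router_id>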

Supported operating systems

NSX-T integration with Kubernetes is supported on the following operating systems:

Integrate NSX-T with IBM Cloud Private cluster nodes

  1. Install NSX-T in your VMware vSphere environment. For more information about NSX-T, see the NSX-T Data Center Installation Guide.

  2. Configure NSX-T resources for the IBM Cloud Private cluster. For more information, see the VMware documentation.

    Note: Create resources such as the overlay transport zone, tier-0 logical router, IP blocks, and IP pools. Be sure to keep a record of the name or UUID of each resource; you add these values to the config.yaml file.

    Note: When you configure the IP Blocks for Kubernetes Pods section, use the IP block that you set as the network_cidr in the <installation_directory>/cluster/config.yaml file (see the sketch after these steps).

  3. Install the NSX-T Container Network Interface (CNI) plug-in package on each node in your cluster. For more information, see the VMware documentation.

  4. Install and configure Open vSwitch (OVS) on each node in your cluster. For more information, see the VMware documentation.

    Note: If the assigned ofport is not 1, be sure to add ovs_uplink_port to the config.yaml file when you prepare the IBM Cloud Private configuration file. See the sketch after these steps for a way to check the assigned port.

  5. Configure NSX-T networking on each node in your cluster. For more information about configuring NSX-T on the nodes, see the VMware documentation. When you tag the logical switch port, be sure to use the following parameter values:

    • {'ncp/node_name': '<node_name>'}: If you configured kubelet_nodename: hostname in the <installation_directory>/cluster/config.yaml file, then add the node's host name as the <node_name> parameter value. You can get the node's host name by running the following command:

      hostname -s
      

      If you did not configure kubelet_nodename: hostname in the <installation_directory>/cluster/config.yaml file, then add the node's IP address as the <node_name> parameter value. Get the IP address of your IBM Cloud Private cluster node from the <installation_directory>/cluster/hosts file.

    • {'ncp/cluster': '<cluster_name>'}: Use the cluster_name that you set in the <installation_directory>/cluster/config.yaml file. The default value is mycluster.
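
The notes in steps 2, 4, and 5 all map NSX-T resources to values in the <installation_directory>/cluster/config.yaml file. The following sketch shows how the pieces line up; the interface name ens192, the IP block 10.32.0.0/14, and the node name worker1 are placeholders for illustration only.

    # Step 4: check which ofport OVS assigned to the uplink interface (ens192 is an example name)
    ovs-vsctl get Interface ens192 ofport

The related config.yaml entries:

    network_cidr: 10.32.0.0/14    # must match the IP block that you configured for Kubernetes pods in NSX-T
    cluster_name: mycluster       # matches the {'ncp/cluster': 'mycluster'} logical switch port tag
    kubelet_nodename: hostname    # if set, tag the port with {'ncp/node_name': 'worker1'} (output of hostname -s)
    nsx_t:
      ovs_uplink_port: ens192     # required only when the assigned ofport is not 1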

Prepare the IBM Cloud Private configuration file

Complete the following steps to prepare the configuration file:

  1. If it does not exist, create a directory with the name images under the <installation_directory>/cluster/ folder.
  2. Download and copy the NSX-T Docker container .tar file to the <installation_directory>/cluster/images folder.
  3. Add the following parameter to the <installation_directory>/cluster/config.yaml file:

    network_type: nsx-t
    

    Note: Only one network type can be enabled for an IBM Cloud Private cluster. When you enable network_type: nsx-t, ensure that you remove the default network_type: calico setting from the config.yaml file.
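
    For example, the relevant line in the config.yaml file changes from the default Calico setting to NSX-T (a sketch; all other parameters stay unchanged):

    # Remove the default setting:
    # network_type: calico

    # Add the NSX-T setting:
    network_type: nsx-t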

NSX-T configuration

To configure NSX-T, add the following parameters to the config.yaml file:

nsx_t:
  managers: <IP address>[:<port>],<IP address>[:port]
  manager_user: <user name for NSX-T manager>
  manager_password: <password for NSX-T manager user>
  manager_ca_cert: |
    -----BEGIN CERTIFICATE-----
    MIIDYzCCAkugAwIBAgIEcK9gWjANBgsedkiG9w0BAQsFADBiMQswCQYDVQQGEwJV
    ..........................................
    ..........................................
    ..........................................
    hzYlaog68RTAQpkV0bwedxq8lizEBADCgderTw99OUgt+xVybTFtHume8JOd+1qt
    G3/WlLwiH9upSujL76cEG/ERkPR5SpGZhg37aK/ovLGTtCuAnQndtM5jVMKoNDl1
    /UOKWe1wrT==
    -----END CERTIFICATE-----
  client_cert: |
    -----BEGIN CERTIFICATE-----
    MIIDUDCCAjigAwIBAgIBCDANBgkqhkiG9w0BAQsFADA6TR0wGwYDVQQDDBQxMjcu
    ..........................................
    ..........................................
    ..........................................
    X9Kr61vjKeOpboUlz/oGRo7AFlqsCSderTtQH28DWumzutfj
    -----END CERTIFICATE-----
  client_private_key:  |
    -----BEGIN PRIVATE KEY-----
    MIIEvgIBADANBgkqhkiG9w0BAUYTRASCBKgwggSkAgEAAoIBAQC/Jz4WnaTmbfB7
    ..........................................
    ..........................................
    ..........................................
    n8jakjGLolYe5yv0KyM4RTD5
    -----END PRIVATE KEY-----
  subnet_prefix: 24
  external_subnet_prefix: 24
  ingress_mode: <hostnetwork or nat>
  ncp_package: <name of the NSX-T Docker container file that is placed in the `<installation_directory>/cluster/images` folder>
  ncp_image: registry.local/ob-5667597/nsx-ncp
  ncp_image_tag: latest
  ovs_uplink_port: <name of the interface that is configured as an uplink port>
  ovs_bridge: <OVS bridge name that is used to configure container interface>
  tier0_router: <name or UUID of the tier0 router>
  overlay_TZ: <name or UUID of the NSX overlay transport zone>
  container_ip_blocks: <name or UUID of the container IP blocks>
  external_ip_pools: <name or UUID of the external IP pools>
  no_snat_ip_blocks: <name or UUID of the no-SNAT namespaces IP blocks>
  node_type: <type of container node. Allowed values are `HOSTVM` or `BAREMETAL`>
  enable_snat: true
  enable_nsx_err_crd: false  
  loadbalancer_enabled: false
  lb_default_ingressclass_nsx: true
  lb_l4_auto_scaling: true
  lb_external_ip_pools: <name or UUID of the external IP pools for load balancer>
  lb_pool_algorithm: ROUND_ROBIN  
  lb_service_size: SMALL   
  lb_l4_persistence: source_ip
  lb_l7_persistence: <persistence type for ingress traffic through Layer 7 load balancer. Allowed values are `source_ip` or `cookie`>
  lb_default_cert: |
    -----BEGIN CERTIFICATE-----
    MIIDUDCCAjigAwIBAgIBCDANBgkqhkiG9w0BAQsFADA6TR0wGwYDVQQDDBQxMjcu
    ..........................................
    ..........................................
    ..........................................
    X9Kr61vjKeOpboUlz/oGRo7AFlqsCSderTtQH28DWumzutfj
    -----END CERTIFICATE-----
  lb_default_private_key: |
    -----BEGIN PRIVATE KEY-----
    MIIEvgIBADANBgkqhkiG9w0BAUYTRASCBKgwggSkAgEAAoIBAQC/Jz4WnaTmbfB7
    ..........................................
    ..........................................
    ..........................................
    n8jakjGLolYe5yv0KyM4RTD5
    -----END PRIVATE KEY-----    

  apparmor_enabled: true
  apparmor_profile: <name of the AppArmor profile to be used>
  firewall_top_section_marker: <name of the firewall section under which the firewall rule for your cluster is created>
  firewall_bottom_section_marker: <name of the firewall section above which the firewall rule for your cluster is created>

Note: The managers, subnet_prefix, ncp_package, ncp_image, ncp_image_tag, overlay_TZ, container_ip_blocks, external_ip_pools, and tier0_router parameters are mandatory.

Important: You must specify either manager_user and manager_password, or client_cert and client_private_key.
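
As an illustration only, a minimal nsx_t block that supplies the mandatory parameters and password-based authentication might look like the following sketch; the manager addresses, resource names, and package file name are placeholders, not values from your environment:

    nsx_t:
      managers: 192.0.2.10:443,192.0.2.11:443
      manager_user: admin
      manager_password: <password>
      subnet_prefix: 24
      ncp_package: nsx-ncp-image.tar
      ncp_image: registry.local/ob-5667597/nsx-ncp
      ncp_image_tag: latest
      tier0_router: t0-icp-router
      overlay_TZ: icp-overlay-tz
      container_ip_blocks: icp-container-ip-block
      external_ip_pools: icp-external-ip-pool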

See the following guidelines and values for the parameters:

Next, continue with IBM Cloud Private installation.