Integrating VMware NSX-T 2.4 with IBM Cloud Private
VMware NSX-T 2.4 (NSX-T) provides networking capability to an IBM® Cloud Private cluster.
Note: If you configured NSX-T on an IBM Cloud Private cluster, removing worker nodes from your cluster does not remove the ports and flows from the Open vSwitch (OVS) bridge. Before you add the worker node to the cluster again, you must clear the ports and flows from the bridge. You can clear the ports and flows by deleting the `br-int` bridge and adding the bridge back when you integrate NSX-T with IBM Cloud Private cluster nodes.
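For example, a minimal sketch of clearing the stale ports and flows on a worker node, assuming the default integration bridge name `br-int` (use your own `ovs_bridge` value if it differs):

```
# Delete the OVS integration bridge; this also removes its ports and flows.
sudo ovs-vsctl --if-exists del-br br-int

# Re-create the bridge before you configure NSX-T on the node again;
# the uplink and container interfaces are added back during NSX-T node configuration.
sudo ovs-vsctl add-br br-int
```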
Important: If you uninstall IBM Cloud Private, you must remove the cluster-related entries, such as firewall rules, switches, and Tier-1 routers, from your NSX-T manager.
Supported operating systems
NSX-T integration with Kubernetes is supported on the following operating systems:
- Ubuntu 16.04
- Red Hat Enterprise Linux (RHEL) 7.5 and 7.6 only
Integrate NSX-T with IBM Cloud Private cluster nodes
- Install NSX-T in your VMware vSphere environment. For more information about NSX-T, see the NSX-T Data Center Installation Guide.
- Configure NSX-T resources for the IBM Cloud Private cluster. For more information, see the VMware documentation.
  Note: Create resources such as the overlay transport zone, Tier-0 logical router, IP blocks, and IP pools. Be sure to keep a record of the name or UUID of the resources. Add the name or UUID of the resources to the config.yaml file.
  Note: When you configure the IP Blocks for Kubernetes Pods section, use the IP block that you set as the `network_cidr` in the `<installation_directory>/cluster/config.yaml` file.
- Install the NSX-T Container Network Interface (CNI) plug-in package on each node in your cluster. For more information, see the VMware documentation.
- Install and configure Open vSwitch (OVS) on each node in your cluster. For more information, see the VMware documentation.
  Note: If the assigned `ofport` is not `1`, make sure to add `ovs_uplink_port` to the `config.yaml` file when you prepare the IBM Cloud Private configuration file. You can check the assigned `ofport` with the sketch that follows these steps.
- Configure NSX-T networking on each node in your cluster. For more information about configuring NSX-T on the nodes, see the VMware documentation. When you tag the logical switch port, be sure to use the following parameter values:
  - `{'ncp/node_name': '<node_name>'}`: If you configured `kubelet_nodename: hostname` in the `<installation_directory>/cluster/config.yaml` file, add the node's host name as the `<node_name>` parameter value. You can get the node's host name by running the `hostname -s` command. If you did not configure `kubelet_nodename: hostname` in the `<installation_directory>/cluster/config.yaml` file, add the node's IP address as the `<node_name>` parameter value. Get the IP address of your IBM Cloud Private cluster node from the `<installation_directory>/cluster/hosts` file.
  - `{'ncp/cluster': '<cluster_name>'}`: Use the `cluster_name` that you set in the `<installation_directory>/cluster/config.yaml` file. The default value is `mycluster`.
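The following sketch, run on a cluster node, shows one way to check the assigned `ofport` for the uplink interface and to collect the host name for the `ncp/node_name` tag. The interface name `ens192` is a hypothetical example; substitute the interface that you attached to the OVS bridge.

```
# Show the OpenFlow port number (ofport) that OVS assigned to the uplink interface.
# If the value is not 1, add ovs_uplink_port to config.yaml later.
sudo ovs-vsctl --columns=name,ofport list Interface ens192

# Short host name to use as the ncp/node_name tag value when
# kubelet_nodename: hostname is set in config.yaml.
hostname -s
```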
Prepare the IBM Cloud Private configuration file
Complete the following steps to prepare the configuration file:
- If it does not exist, create a directory with the name `images` under the `<installation_directory>/cluster/` folder.
- Download and copy the NSX-T Docker container .tar file to the `<installation_directory>/cluster/images` folder (a sketch of these steps follows this list).
- Add the following parameter to the `<installation_directory>/cluster/config.yaml` file: `network_type: nsx-t`
  Note: Only one network type can be enabled for an IBM Cloud Private cluster. When you enable `network_type: nsx-t`, ensure that you remove the default `network_type: calico` setting from the config.yaml file.
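A minimal sketch of these preparation steps, assuming a hypothetical installation directory of `/opt/ibm-cloud-private` and a hypothetical NCP file name of `nsx-ncp-2.4.0.tar`; use your own directory and the file name that you downloaded:

```
# Create the images directory if it does not already exist.
mkdir -p /opt/ibm-cloud-private/cluster/images

# Copy the NSX-T NCP Docker container .tar file into the images directory.
cp nsx-ncp-2.4.0.tar /opt/ibm-cloud-private/cluster/images/

# Then edit /opt/ibm-cloud-private/cluster/config.yaml:
# set network_type: nsx-t and remove the default network_type: calico entry.
```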
NSX-T configuration
To configure NSX-T, add the following parameters to the config.yaml file:
nsx_t:
  managers: <IP address>[:<port>],<IP address>[:port]
  manager_user: <user name for NSX-T manager>
  manager_password: <password for NSX-T manager user>
  manager_ca_cert: |
    -----BEGIN CERTIFICATE-----
    MIIDYzCCAkugAwIBAgIEcK9gWjANBgsedkiG9w0BAQsFADBiMQswCQYDVQQGEwJV
    ..........................................
    ..........................................
    ..........................................
    hzYlaog68RTAQpkV0bwedxq8lizEBADCgderTw99OUgt+xVybTFtHume8JOd+1qt
    G3/WlLwiH9upSujL76cEG/ERkPR5SpGZhg37aK/ovLGTtCuAnQndtM5jVMKoNDl1
    /UOKWe1wrT==
    -----END CERTIFICATE-----
  client_cert: |
    -----BEGIN CERTIFICATE-----
    MIIDUDCCAjigAwIBAgIBCDANBgkqhkiG9w0BAQsFADA6TR0wGwYDVQQDDBQxMjcu
    ..........................................
    ..........................................
    ..........................................
    X9Kr61vjKeOpboUlz/oGRo7AFlqsCSderTtQH28DWumzutfj
    -----END CERTIFICATE-----
  client_private_key: |
    -----BEGIN PRIVATE KEY-----
    MIIEvgIBADANBgkqhkiG9w0BAUYTRASCBKgwggSkAgEAAoIBAQC/Jz4WnaTmbfB7
    ..........................................
    ..........................................
    ..........................................
    n8jakjGLolYe5yv0KyM4RTD5
    -----END PRIVATE KEY-----
  subnet_prefix: 24
  external_subnet_prefix: 24
  ingress_mode: <hostnetwork or nat>
  ncp_package: <name of the NSX-T Docker container file that is placed in the `<installation_directory>/cluster/images` folder>
  ncp_image: registry.local/ob-5667597/nsx-ncp
  ncp_image_tag: latest
  ovs_uplink_port: <name of the interface that is configured as an uplink port>
  ovs_bridge: <OVS bridge name that is used to configure the container interface>
  tier0_router: <name or UUID of the tier0 router>
  overlay_TZ: <name or UUID of the NSX overlay transport zone>
  container_ip_blocks: <name or UUID of the container IP blocks>
  external_ip_pools: <name or UUID of the external IP pools>
  no_snat_ip_blocks: <name or UUID of the no-SNAT namespaces IP blocks>
  node_type: <type of container node. Allowed values are `HOSTVM` or `BAREMETAL`>
  enable_snat: true
  enable_nsx_err_crd: false
  loadbalancer_enabled: false
  lb_default_ingressclass_nsx: true
  lb_l4_auto_scaling: true
  lb_external_ip_pools: <name or UUID of the external IP pools for load balancer>
  lb_pool_algorithm: ROUND_ROBIN
  lb_service_size: SMALL
  lb_l4_persistence: source_ip
  lb_l7_persistence: <persistence type for ingress traffic through Layer 7 load balancer. Allowed values are `source_ip` or `cookie`>
  lb_default_cert: |
    -----BEGIN CERTIFICATE-----
    MIIDUDCCAjigAwIBAgIBCDANBgkqhkiG9w0BAQsFADA6TR0wGwYDVQQDDBQxMjcu
    ..........................................
    ..........................................
    ..........................................
    X9Kr61vjKeOpboUlz/oGRo7AFlqsCSderTtQH28DWumzutfj
    -----END CERTIFICATE-----
  lb_default_private_key: |
    -----BEGIN PRIVATE KEY-----
    MIIEvgIBADANBgkqhkiG9w0BAUYTRASCBKgwggSkAgEAAoIBAQC/Jz4WnaTmbfB7
    ..........................................
    ..........................................
    ..........................................
    n8jakjGLolYe5yv0KyM4RTD5
    -----END PRIVATE KEY-----
  apparmor_enabled: true
  apparmor_profile: <name of the AppArmor profile to be used>
  firewall_top_section_marker: <name of the firewall section under which the firewall rule for your cluster is created>
  firewall_bottom_section_marker: <name of the firewall section above which the firewall rule for your cluster is created>
Note: The `managers`, `subnet_prefix`, `ncp_package`, `ncp_image`, `ncp_image_tag`, `overlay_TZ`, `container_ip_blocks`, `external_ip_pools`, and `tier0_router` parameters are mandatory.
Important: Either `manager_user` and `manager_password`, or `client_cert` and `client_private_key`, must be specified.
See the following guidelines and values for the parameters:
- `network_type`: Must be set to `nsx-t`.
- `managers`: IP address or host name of the NSX-T manager.
  Note: You can specify multiple NSX-T managers by separating the IP addresses or host names with commas.
- `manager_user`: User name of the user who has access to the NSX-T manager.
- `manager_password`: Password of the user that is specified in the `manager_user` parameter.
- `manager_ca_cert`: Contents of the NSX-T manager CA certificate file that is used to verify the NSX-T manager server certificate.
- `client_cert`: Contents of the NSX-T manager client certificate file that is used to authenticate with the NSX-T manager.
- `client_private_key`: Contents of the NSX-T manager client private key file that is used to authenticate with the NSX-T manager.
- `subnet_prefix`: Subnet prefix length of the IP address block for pods.
- `external_subnet_prefix`: Subnet prefix length of the external Network Address Translation (NAT) IP address block. If the length is not specified, the value in `subnet_prefix` is the default length.
- `ingress_mode`: Specifies how the ingress controller is exposed. Specifying `nat` uses the NSX-T NAT pool, and specifying `hostnetwork` uses the node IP address.
  Note: The default IBM Cloud Private management ingress controller uses the node IP address for routing, and you can set custom ingress controllers to use the NSX-T NAT pool for routing.
- `ncp_package`: Name of the NSX-T Docker container .tar file that is in the `<installation_directory>/cluster/images` folder.
- `ncp_image`: Name of the NSX-T Docker container image.
- `ncp_image_tag`: Tag for the NSX-T Docker container image, such as `latest`.
- `ovs_uplink_port`: Name of the interface that is configured as an uplink port.
  Note: Add this parameter only if the value of `ofport` is not `1`.
- `ovs_bridge`: Name of the OVS bridge that is used to configure the container interface.
- `tier0_router`: Name or UUID of the Tier-0 logical router. Tier-0 routers use downlink ports to connect to Tier-1 routers and uplink ports to connect to external networks.
- `overlay_TZ`: Name or UUID of the NSX overlay transport zone that is used to create logical switches for container networking. Every hypervisor that hosts the Kubernetes node VMs must join this transport zone.
- `container_ip_blocks`: Name or UUID of the IP blocks that are used to create subnets.
  Note: If a name is chosen, it must be unique.
- `external_ip_pools`: Name or UUID of the external IP pools that allocate IP addresses for translating container IP addresses by using Source Network Address Translation (SNAT) rules.
- `no_snat_ip_blocks`: Name or UUID of the IP blocks that are used to create subnets for no-SNAT projects. You can specify that no-SNAT projects use these IP blocks.
  Note: If the `no_snat_ip_blocks` value is empty, the value from `container_ip_blocks` is the default.
- `node_type`: Type of container node. Allowed values are `HOSTVM` or `BAREMETAL`.
- `enable_snat`: Setting to enable or disable SNAT.
  Note: The default value is `true`.
- `enable_nsx_err_crd`: Setting to enable or disable error reporting through the NSXError custom resource definition (CRD). The default value is `false`.
- `loadbalancer_enabled`: Setting to enable or disable a load balancer.
  Note: The default value is `false`.
- `lb_default_ingressclass_nsx`: Setting for the ingress controller behavior. NSX load balancers handle the ingress if the parameter value is `true`. Third-party ingress controllers, such as NGINX, handle the ingress if the parameter value is `false`.
  Note: The default value is `true`.
- `lb_l4_auto_scaling`: Setting to enable or disable Layer 4 load balancer auto scaling. The default value is `true`.
- `lb_external_ip_pools`: Name or UUID of the external IP pools for the load balancer.
- `lb_pool_algorithm`: Load balancing algorithm for the load balancer pool object.
  Note: Your options are `ROUND_ROBIN`, `LEAST_CONNECTION`, `IP_HASH`, or `WEIGHTED_ROUND_ROBIN`. The default value is `ROUND_ROBIN`.
- `lb_service_size`: Load balancer service size.
  Note: Your options are `SMALL`, `MEDIUM`, or `LARGE`. The default value is `SMALL`.
  Important: A `SMALL` load balancer supports 10 virtual servers, a `MEDIUM` load balancer supports 100 virtual servers, and a `LARGE` load balancer supports 1000 virtual servers.
- `lb_l4_persistence`: Persistence type for ingress traffic through the Layer 4 load balancer. The allowed value is `source_ip`.
- `lb_l7_persistence`: Persistence type for ingress traffic through the Layer 7 load balancer. Allowed values are `source_ip` or `cookie`.
- `lb_default_cert`: Contents of the default certificate file for HTTPS load balancing.
- `lb_default_private_key`: Contents of the private key file for the default certificate for HTTPS load balancing.
- `apparmor_enabled`: Specifies the status of the AppArmor service on the system. The default value is `true`.
  Important: This parameter is applicable only to Ubuntu.
  Note: For RHEL, the parameter must be set to `false`. The sketch after this list shows one way to check the AppArmor status on a node.
- `apparmor_profile`: Name of the AppArmor profile. The default AppArmor profile name is `node-agent-apparmor`. If you are using another profile, specify the custom profile name as the parameter value.
- `firewall_top_section_marker`: Name of the firewall section under which the firewall rule for your cluster is created.
- `firewall_bottom_section_marker`: Name of the firewall section above which the firewall rule for your cluster is created.
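Before you set `apparmor_enabled` and `apparmor_profile`, you can check the AppArmor status on a node. This is a sketch for an Ubuntu node; on RHEL, AppArmor is not available and the parameter must be set to `false`.

```
# Check whether the AppArmor service is active (Ubuntu).
systemctl is-active apparmor

# List the loaded AppArmor profiles; requires the apparmor-utils package.
sudo aa-status
```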
Next, continue with the IBM Cloud Private installation.