Integrating VMware NSX-T 2.3 with IBM Cloud Private
VMware NSX-T 2.3 (NSX-T) provides networking capability to an IBM® Cloud Private cluster.
Note: If you configured NSX-T on an IBM Cloud Private cluster, removing worker nodes from your cluster does not remove the ports and flows from the Open vSwitch (OVS) bridge. Before you add the worker node to the cluster again, you must clear the ports and flows from the bridge. You can clear the ports and flows by deleting the br-int bridge and adding the bridge back when you integrate NSX-T with the IBM Cloud Private cluster nodes.
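A minimal sketch of clearing the stale ports and flows by re-creating the bridge on the removed worker node; this assumes the br-int integration bridge name that is used in this procedure, and the bridge setup afterward must follow the VMware documentation for your environment:

  # Delete the integration bridge, which removes all of its ports and flows.
  ovs-vsctl del-br br-int
  # Re-create the empty bridge; re-attach the uplink port as described in the VMware documentation.
  ovs-vsctl add-br br-int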
Important: If you uninstall IBM Cloud Private, you must remove the cluster-related entries, such as firewall rules, switches, and Tier-1 routers, from your NSX-T manager.
Supported operating systems
NSX-T integration with Kubernetes is supported on the following operating systems:
- Ubuntu 16.04
- Red Hat Enterprise Linux™ (RHEL) 7.4 and 7.5 only
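To confirm the operating system level on a node before you start, you can run one of the following standard commands:

  # Ubuntu: print the distribution description.
  lsb_release -ds
  # RHEL: print the release string.
  cat /etc/redhat-release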
Integrate NSX-T with IBM Cloud Private cluster nodes
- Install NSX-T in your VMware vSphere environment. For more information about NSX-T, see the NSX-T Data Center Installation Guide.
- Configure NSX-T resources for the IBM Cloud Private cluster. For more information, see the VMware documentation.
  Note: Create resources such as the overlay transport zone, Tier-0 logical router, IP blocks, and IP pools. Be sure to keep a record of the name or UUID of the resources. Add the name or UUID of the resources to the config.yaml file.
  Note: When you configure the IP Blocks for Kubernetes Pods section, use the IP block that you set as the network_cidr in the <installation_directory>/cluster/config.yaml file.
- Install the NSX-T Container Network Interface (CNI) plug-in package on each node in your cluster. For more information, see the VMware documentation.
- Install and configure Open vSwitch (OVS) on each node in your cluster. For more information, see the VMware documentation.
  Note: If the assigned ofport is not 1, make sure to add ovs_uplink_port to the config.yaml file when you prepare the IBM Cloud Private configuration file. A command for checking the assigned ofport is shown after this list.
- Configure NSX-T networking on each node in your cluster. For more information about configuring NSX-T on the nodes, see the VMware documentation. When you tag the logical switch port, be sure to use the following parameter values:
  - {'ncp/node_name': '<node_name>'}: If you configured kubelet_nodename: hostname in the <installation_directory>/cluster/config.yaml file, add the node's host name as the <node_name> parameter value. You can get the node's host name by running the following command: hostname -s. If you did not configure kubelet_nodename: hostname in the <installation_directory>/cluster/config.yaml file, add the node's IP address as the <node_name> parameter value. Get the IP address of your IBM Cloud Private cluster node from the <installation_directory>/cluster/hosts file.
  - {'ncp/cluster': '<cluster_name>'}: Use the cluster_name that you set in the <installation_directory>/cluster/config.yaml file. The default value is mycluster.
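For example, for a worker node with the hypothetical host name worker1 in a cluster that keeps the default cluster_name, the two tags on the node's logical switch port would be:

  {'ncp/node_name': 'worker1'}
  {'ncp/cluster': 'mycluster'}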
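To check which ofport OVS assigned to your uplink interface (see the OVS note in the preceding list), you can query the OVS database directly. The interface name eth1 is a hypothetical example; substitute your uplink port name:

  # Print the OpenFlow port number that OVS assigned to the uplink interface.
  ovs-vsctl get Interface eth1 ofport

If the command prints a value other than 1, set ovs_uplink_port in the config.yaml file.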
Prepare the IBM Cloud Private configuration file
Complete the following steps to prepare the configuration file:
- If it does not exist, create a directory with the name images under the <installation_directory>/cluster/ folder.
- Download and copy the NSX-T Docker container .tar file to the <installation_directory>/cluster/images folder.
- Add the following parameter to the <installation_directory>/cluster/config.yaml file: network_type: nsx-t
  Note: Only one network type can be enabled for an IBM Cloud Private cluster. When you enable network_type: nsx-t, ensure that you remove the default network_type: calico setting from the config.yaml file.
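The first two steps reduce to the following commands; the package file name nsx-ncp.tar is a hypothetical placeholder, and <installation_directory> stands for your installation path:

  # Create the images directory if it does not already exist.
  mkdir -p <installation_directory>/cluster/images
  # Copy the downloaded NSX-T Docker container .tar file into it.
  cp nsx-ncp.tar <installation_directory>/cluster/images/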
NSX-T configuration
To configure NSX-T, add the following parameters to the config.yaml file:
nsx_t:
  managers: <IP address>[:<port>],<IP address>[:<port>]
  manager_user: <user name for NSX-T manager>
  manager_password: <password for NSX-T manager user>
  manager_ca_cert: |
    -----BEGIN CERTIFICATE-----
    MIIDYzCCAkugAwIBAgIEcK9gWjANBgsedkiG9w0BAQsFADBiMQswCQYDVQQGEwJV
    ..........................................
    ..........................................
    ..........................................
    hzYlaog68RTAQpkV0bwedxq8lizEBADCgderTw99OUgt+xVybTFtHume8JOd+1qt
    G3/WlLwiH9upSujL76cEG/ERkPR5SpGZhg37aK/ovLGTtCuAnQndtM5jVMKoNDl1
    /UOKWe1wrT==
    -----END CERTIFICATE-----
  client_cert: |
    Certificate:
        Data:
            Version: 3 (0x2)
            Serial Number: 8 (0x8)
        Signature Algorithm: sha256WithRSAEncryption
    ..........................................
    ..........................................
    ..........................................
    43:8e:69:8c:5f:d2:ab:eb:5b:e3:29:e3:a9:6e:85:25:cf:fa:
    06:46:8e:c0:16:5a:ac:09:2d:02:ef:2b:50:1f:6f:03:5a:e9:
    b3:ba:d7:e3
    -----BEGIN CERTIFICATE-----
    MIIDUDCCAjigAwIBAgIBCDANBgkqhkiG9w0BAQsFADA6TR0wGwYDVQQDDBQxMjcu
    ..........................................
    ..........................................
    ..........................................
    X9Kr61vjKeOpboUlz/oGRo7AFlqsCSderTtQH28DWumzutfj
    -----END CERTIFICATE-----
  client_private_key: |
    -----BEGIN PRIVATE KEY-----
    MIIEvgIBADANBgkqhkiG9w0BAUYTRASCBKgwggSkAgEAAoIBAQC/Jz4WnaTmbfB7
    ..........................................
    ..........................................
    ..........................................
    n8jakjGLolYe5yv0KyM4RTD5
    -----END PRIVATE KEY-----
  subnet_prefix: 24
  external_subnet_prefix: 24
  ingress_mode: <hostnetwork or nat>
  ncp_package: <name of the NSX-T Docker container file that is placed in the `<installation_directory>/cluster/images` folder>
  ncp_image: registry.local/ob-5667597/nsx-ncp
  ncp_image_tag: latest
  ovs_uplink_port: <name of the interface that is configured as an uplink port>
  ovs_bridge: <OVS bridge name that is used to configure the container interface>
  tier0_router: <name or UUID of the Tier-0 router>
  overlay_TZ: <name or UUID of the NSX overlay transport zone>
  container_ip_blocks: <name or UUID of the container IP blocks>
  external_ip_pools: <name or UUID of the external IP pools>
  no_snat_ip_blocks: <name or UUID of the no-SNAT namespaces IP blocks>
  node_type: <type of container node. Allowed values are `HOSTVM` or `BAREMETAL`>
  enable_snat: true
  loadbalancer_enabled: false
  lb_default_ingressclass_nsx: true
  lb_pool_algorithm: ROUND_ROBIN
  lb_service_size: SMALL
  lb_l4_persistence: source_ip
  lb_l7_persistence: <persistence type for ingress traffic through the Layer 7 load balancer. Allowed values are `source_ip` or `cookie`>
  lb_default_cert: |
    Certificate:
        Data:
            Version: 3 (0x2)
            Serial Number: 8 (0x8)
        Signature Algorithm: sha256WithRSAEncryption
    ..........................................
    ..........................................
    ..........................................
    43:8e:69:8c:5f:d2:ab:eb:5b:e3:29:e3:a9:6e:85:25:cf:fa:
    06:46:8e:c0:16:5a:ac:09:2d:02:ef:2b:50:1f:6f:03:5a:e9:
    b3:ba:d7:e3
    -----BEGIN CERTIFICATE-----
    MIIDUDCCAjigAwIBAgIBCDANBgkqhkiG9w0BAQsFADA6TR0wGwYDVQQDDBQxMjcu
    ..........................................
    ..........................................
    ..........................................
    X9Kr61vjKeOpboUlz/oGRo7AFlqsCSderTtQH28DWumzutfj
    -----END CERTIFICATE-----
  lb_default_private_key: |
    -----BEGIN PRIVATE KEY-----
    MIIEvgIBADANBgkqhkiG9w0BAUYTRASCBKgwggSkAgEAAoIBAQC/Jz4WnaTmbfB7
    ..........................................
    ..........................................
    ..........................................
    n8jakjGLolYe5yv0KyM4RTD5
    -----END PRIVATE KEY-----
  apparmor_enabled: true
  apparmor_profile: <name of the AppArmor profile to be used>
Note: managers, subnet_prefix, ncp_package, ncp_image, ncp_image_tag, overlay_TZ, container_ip_blocks, external_ip_pools, and tier0_router are mandatory parameters.
Important: Either manager_user and manager_password or client_cert and client_private_key is mandatory.
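For orientation, the following minimal sketch sets only the mandatory parameters and uses user name and password authentication. Every value except ncp_image is a hypothetical placeholder; substitute the names, UUIDs, and addresses from your environment:

  network_type: nsx-t

  nsx_t:
    managers: 192.0.2.10
    manager_user: admin
    manager_password: <password>
    subnet_prefix: 24
    ncp_package: nsx-ncp.tar
    ncp_image: registry.local/ob-5667597/nsx-ncp
    ncp_image_tag: latest
    tier0_router: t0-icp-router
    overlay_TZ: icp-overlay-tz
    container_ip_blocks: icp-container-block
    external_ip_pools: icp-external-pool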
See the following guidelines and values for the parameters:
- network_type: Must be set to nsx-t.
- managers: IP address or host name of the NSX-T manager.
  Note: You can specify multiple NSX-T managers by separating the IP addresses or host names with commas.
- manager_user: User name of the user who has access to the NSX-T manager.
- manager_password: Password of the user that is specified in the manager_user parameter.
- manager_ca_cert: Contents of the NSX-T manager CA certificate file that is used to verify the NSX-T manager server certificate.
- client_cert: Contents of the NSX-T manager client certificate file that is used to authenticate with the NSX-T manager.
- client_private_key: Contents of the NSX-T manager client private key file that is used to authenticate with the NSX-T manager.
- subnet_prefix: Subnet prefix length of the IP address block for pods. For example, with subnet_prefix: 24, each subnet that is carved from the container IP block contains 256 addresses.
- external_subnet_prefix: Subnet prefix length of the external Network Address Translation (NAT) IP address block. If the length is not specified, the value in subnet_prefix is used as the default length.
- ingress_mode: Specifies how the ingress controller is exposed. Specifying nat uses the NSX-T NAT pool, and specifying hostnetwork uses the node IP address.
  Note: The default IBM Cloud Private management ingress controller uses the node IP address for routing. You can set custom ingress controllers to use the NSX-T NAT pool for routing.
- ncp_package: Name of the NSX-T Docker container .tar file that is in the <installation_directory>/cluster/images folder.
- ncp_image: Name of the NSX-T Docker container image.
- ncp_image_tag: Tag for the NSX-T Docker container image, such as latest.
- ovs_uplink_port: Name of the interface that is configured as an uplink port.
  Note: Add this parameter only if the value of ofport is not 1.
- ovs_bridge: Name of the OVS bridge that is used to configure the container interface.
- tier0_router: Name or UUID of the Tier-0 logical router. Tier-0 routers use downlink ports to connect to Tier-1 routers and uplink ports to connect to external networks.
- overlay_TZ: Name or UUID of the NSX overlay transport zone that is used to create logical switches for container networking. Every hypervisor that hosts the Kubernetes node VMs must join this transport zone.
- container_ip_blocks: Name or UUID of the IP blocks that are used to create subnets.
  Note: If a name is chosen, it must be unique.
- external_ip_pools: Name or UUID of the external IP pools that are used to allocate IP addresses for translating container IPs by using Source Network Address Translation (SNAT) rules.
- no_snat_ip_blocks: Name or UUID of the IP blocks that are used to create subnets for no-SNAT projects. You can specify that no-SNAT projects use these IP blocks.
  Note: If the no_snat_ip_blocks value is empty, the value from container_ip_blocks is used as the default.
- node_type: Type of container node. Allowed values are HOSTVM or BAREMETAL.
- enable_snat: Setting to enable or disable SNAT.
  Note: The default value is true.
- loadbalancer_enabled: Setting to enable or disable a load balancer.
  Note: The default value is false.
- lb_default_ingressclass_nsx: The setting for ingress controller behavior. NSX load balancers handle the ingress if the parameter value is true. Third-party ingress controllers, such as NGINX, handle the ingress if the parameter value is false.
  Note: The default value is true.
- lb_pool_algorithm: Load balancing algorithm for the load balancer pool object.
  Note: Your options are ROUND_ROBIN, LEAST_CONNECTION, IP_HASH, or WEIGHTED_ROUND_ROBIN. The default value is ROUND_ROBIN.
- lb_service_size: Load balancer service size.
  Note: Your options are SMALL, MEDIUM, or LARGE. The default value is SMALL.
  Important: A SMALL load balancer supports 10 virtual servers, a MEDIUM load balancer supports 100 virtual servers, and a LARGE load balancer supports 1000 virtual servers.
- lb_l4_persistence: Persistence type for ingress traffic through the Layer 4 load balancer. The allowed value is source_ip.
- lb_l7_persistence: Persistence type for ingress traffic through the Layer 7 load balancer. Allowed values are source_ip or cookie.
- lb_default_cert: Contents of the default certificate file for HTTPS load balancing. If you need a certificate for testing, see the sketch after this list.
- lb_default_private_key: Contents of the private key file for the default certificate for HTTPS load balancing.
- apparmor_enabled: Specifies the status of the AppArmor service on the system. The default value is true.
  Important: This parameter applies only to Ubuntu.
  Note: For RHEL, the parameter must be set to false.
- apparmor_profile: Name of the AppArmor profile. The default AppArmor profile name is node-agent-apparmor. If you are using another profile, specify the custom profile name as the parameter value.
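If you do not already have a certificate and private key for lb_default_cert and lb_default_private_key, one way to generate a self-signed pair for testing is with OpenSSL. The file names and the subject are hypothetical examples:

  # Generate a 2048-bit RSA key and a self-signed certificate that is valid for one year.
  openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj '/CN=*.icp.example.com' \
    -keyout lb_default.key -out lb_default.crt

Paste the contents of lb_default.crt as the lb_default_cert value and the contents of lb_default.key as the lb_default_private_key value.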
Next, continue with IBM Cloud Private installation.