NSX-T

NSX-T is a network virtualization and security platform that automates the implementation of network policies, network objects, network isolation, and microsegmentation.

Figure: NSX-T network virtualization for Kubernetes

L2 & L3 segregation

NSX-T creates a separate L2 Virtual Distributed Switch (VDS) and a separate L3 distributed logical router (DLR) for every namespace. The namespace-level router is called a Tier-1 (T1) router. All T1 routers connect to the Tier-0 (T0) router, which acts as the edge gateway to the IBM® Cloud Private cluster, as well as the edge firewall and load balancer. Because each namespace has its own L2 switch, all broadcast traffic is confined to the namespace; because each namespace has its own L3 router, each namespace can host its own pod IP subnet.

Micro segmentation

NSX-T provides a distributed firewall (DFW) for managing east-west traffic. Kubernetes network policies are converted into NSX-T DFW rules. With L2 segmentation, dedicated L3 subnets per namespace, and Kubernetes network policies, you can achieve microsegmentation within and across namespaces.
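For example, a standard Kubernetes NetworkPolicy such as the following is translated into NSX-T DFW rules. The namespace, labels, and port shown here are illustrative, not part of any required configuration:

```yaml
# Illustrative policy: allow ingress to pods labeled app=web in the
# "shop" namespace only from pods labeled app=api in the same namespace.
# NSX-T converts this policy into equivalent DFW rules for east-west traffic.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-web
  namespace: shop
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api
      ports:
        - protocol: TCP
          port: 8080
```

Traffic to the `app=web` pods from any source other than `app=api` pods in the same namespace is dropped by the generated DFW rules.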

NAT pools

The edge appliance is an important component of the NSX-T management cluster. It offers routing, firewall, load-balancing, and network address translation capabilities, among other features. Because pods are created on the NSX-T pod network (rather than the host network), all traffic can be made to traverse the edge appliance, using its firewall, load-balancing, and network address translation capabilities. The edge appliance assigns SNAT IPs to outbound traffic and DNAT IPs to inbound traffic from the NAT pool (created as part of the NSX-T deployment). Because of the network address translation, the cluster node IPs are not exposed in outbound traffic.
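As an illustration, when the NSX-T load balancer is enabled (`loadbalancer_enabled: true` in config.yaml), exposing an application with a standard Kubernetes Service of type `LoadBalancer` causes a virtual IP to be allocated from the external IP pool; the Service name, namespace, and ports below are hypothetical:

```yaml
# Illustrative Service: with the NSX-T load balancer enabled, a virtual IP
# is allocated for this Service from the external IP pool, and inbound
# traffic is DNATed through the edge appliance to the backing pods.
apiVersion: v1
kind: Service
metadata:
  name: web-lb
  namespace: shop
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```

The pod IPs behind the Service are never exposed externally; clients see only the allocated virtual IP on the edge.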

References for network considerations with NSX-T

For more information about integrating NSX-T with IBM Cloud Private, see Integrating VMware NSX-T 2.3 with IBM Cloud Private.

config.yaml

network_type: nsx-t

nsx_t:
  managers: <IP address>[:<port>],<IP address>[:port]
  manager_user: <user name for NSX-T manager>
  manager_password: <password for NSX-T manager user>
  manager_ca_cert: ...
  client_cert: ...
  client_private_key:  ...
  subnet_prefix: 24
  external_subnet_prefix: 24
  ingress_mode: <hostnetwork or nat>
  ncp_package: <name of the NSX-T Docker container file that is placed in `<installation_directory>/cluster/images` folder>
  ncp_image: registry.local/ob-5667597/nsx-ncp
  ncp_image_tag: latest
  ovs_uplink_port: <name of the interface that is configured as an uplink port >
  ovs_bridge: <OVS bridge name that is used to configure container interface >
  tier0_router: <name or UUID of the tier0 router >
  overlay_TZ: <name or UUID of the NSX overlay transport zone >
  container_ip_blocks: <name or UUID of the container IP blocks >
  external_ip_pools: <name or UUID of the external IP pools >
  no_snat_ip_blocks: <name or UUID of the no-SNAT namespaces IP blocks >
  node_type: <type of container node. Allowed values are `HOSTVM` or `BAREMETAL`> 
  enable_snat: true
  loadbalancer_enabled: false
  lb_default_ingressclass_nsx: true
  lb_pool_algorithm: ROUND_ROBIN  
  lb_service_size: SMALL   
  lb_l4_persistence: source_ip
  lb_l7_persistence: <persistence type for ingress traffic through the Layer 7 load balancer. Allowed values are `source_ip` or `cookie`>
  lb_default_cert: ...
  lb_default_private_key: ...
  apparmor_enabled: true
  apparmor_profile: <name of the AppArmor profile to be used>