Container network interface (CNI) configuration
IBM Storage Scale container native core pods use two network interfaces: an "admin" interface and a "daemon" interface. The "admin" interface is used for monitoring and management, while the "daemon" interface is used for filesystem I/O. In the default configuration, both networks use the Kubernetes node internal network exposed on the host, and the pods are exposed to all host networking.
Configuring CNI provides advantages over the default configuration. The "admin" and "daemon" network interfaces become private to the IBM Storage Scale container native core pods. The "admin" interface moves to the Kubernetes pod network, which includes the security and isolation provided by Kubernetes and NetworkPolicies. The "daemon" interface, if configured by CNI, can be set up to be isolated, private, and high-speed.
CNI is the only supported method for configuring custom network interfaces for IBM Storage Scale container native. It offers enhanced security and isolation benefits beyond what configuring network interfaces directly on the host provides.
How to configure CNI
- To configure CNI on OpenShift, follow the steps documented by Red Hat for understanding multiple networks.
- If the node has only a single physical network attachment, then the network adapter needs to be shared between networks. There are several CNI flavors that allow this: `Bridge`, `IPVLAN`, and `MACVLAN`. The `MACVLAN` plugin supports network policies and should be the default choice.
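For illustration, on OpenShift such an attachment is usually declared as a NetworkAttachmentDefinition served by Multus. The following is a minimal `MACVLAN` sketch, not a definition mandated by IBM Storage Scale; the name `daemon-network`, the namespace, and the parent interface `ens192` are placeholder assumptions:

```
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: daemon-network
  namespace: ibm-spectrum-scale
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "ens192",
      "mode": "bridge",
      "capabilities": { "ips": true },
      "ipam": { "type": "static" }
    }
```

The `"capabilities": { "ips": true }` entry allows Multus to inject the static IP supplied at attachment time, which pairs with the node annotation described in the configuration steps below.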
- If the node has multiple physical network attachments and you want to dedicate one of the physical networks to IBM Storage Scale, select the `host-device` CNI. It maps a physical network adapter into a pod, making it inaccessible to the host and other pods.
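A `host-device` attachment can be sketched the same way; the dedicated adapter name `ens224` is a placeholder assumption:

```
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: daemon-network
  namespace: ibm-spectrum-scale
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "type": "host-device",
      "device": "ens224",
      "capabilities": { "ips": true },
      "ipam": { "type": "static" }
    }
```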
- `SR-IOV` surpasses the capabilities of `host-device`. Advanced features like RDMA, GPUDirect, and bonding of network ports are accessible via an `SR-IOV` hardware network. For more information, see About Single Root I/O Virtualization (SR-IOV) hardware networks. Note: GPUDirect and bonding of network ports are not currently supported by IBM Storage Scale. `SR-IOV` also allows you to partition the hardware adapter and hand those partitions to different pods. Configuration is more complex compared to other CNIs and the choice of supported network adapters is limited. As of today, IBM Storage Scale has not been tested with `SR-IOV`.
- The IP address mapping is required to be static. This can be achieved by setting up static IPs or by configuring DHCP static mapping. For remote mount of a filesystem from an IBM Storage Scale storage cluster, this network must be routed to the storage cluster's daemon network.
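If DHCP static mapping is used, a dnsmasq-based DHCP server (one possible choice among many) could pin the daemon network's MAC address to its IP; the values here are reused from the annotation example below:

```
dhcp-host=22:22:0a:11:37:b2,10.17.99.63
```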
- Configure each of the OpenShift nodes that comprise the IBM Storage Scale container native cluster:
a) Create a runtime configuration node annotation that contains the CNI definition. The specific node annotation for IBM Storage Scale is `scale.spectrum.ibm.com/daemon-network`. The format of the CNI annotation value is the same as that of a single network defined by `k8s.v1.cni.cncf.io/networks`. Example:

```
annotations:
  scale.spectrum.ibm.com/daemon-network: |-
    {
      "name": "daemon-network",
      "mac": "22:22:0a:11:37:b2",
      "ips": [
        "10.17.99.63"
      ]
    }
```
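The annotation can be applied with standard `oc` tooling; the node name `worker0` below is a placeholder:

```
oc annotate node worker0 \
  scale.spectrum.ibm.com/daemon-network='{"name": "daemon-network", "mac": "22:22:0a:11:37:b2", "ips": ["10.17.99.63"]}'
```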
b) The `ips` field must be set, as this is the static IP desired for this CNI network.

c) The `mac` field may be set if you use DHCP IPAM (backed by a statically mapped DHCP). `ips` must still be set in addition to `mac`. This might seem redundant, but IBM Storage Scale container native uses `ips` to set up its own name resolution. This process is asynchronous and independent of the pod actually being created. If `ips` were not set, the DHCP address would not be discovered until after pod creation.
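To make step c concrete, the following sketch (untested, with placeholder interface and names) shows a `MACVLAN` NetworkAttachmentDefinition using the `dhcp` IPAM plugin, meant to pair with a node annotation that carries both `mac` and `ips`. The `"capabilities": { "mac": true }` entry lets the annotation's `mac` be applied to the interface so that the DHCP server's static mapping takes effect; note that the `dhcp` IPAM plugin requires its companion daemon to be running on the node:

```
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: daemon-network
  namespace: ibm-spectrum-scale
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "ens192",
      "mode": "bridge",
      "capabilities": { "mac": true },
      "ipam": { "type": "dhcp" }
    }
```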