In the third part of this three-part series, we present two advanced approaches to configuring routing rules for multiple network interfaces.
Source Interface Based Routing Tables involve creating individual routing tables that are specific to each interface. Network Namespaces are best suited for utilizing multiple network interfaces from containers. The figure below shows our multi-zone controller/worker scenario, as explained in Part 1:
- Multiple Virtual Network Interfaces Connectivity and Usage in IBM Cloud VPC: Part 1
- Multiple Virtual Network Interfaces Connectivity and Usage in IBM Cloud VPC: Part 2
- Multiple Virtual Network Interfaces Connectivity and Usage in IBM Cloud VPC: Part 3
Source Interface Based Routing Tables
This approach works similarly to the Custom Routes discussed in Part 2 of this series, but adds named routing tables, which allow you to manage more rules, and even many more interfaces, in a structured fashion.
This approach may be useful if you are working with multiple processes, each of which requires a specific amount of bandwidth on a VSI. For example, suppose you want a database process bound only to the Data network interface on Worker2. A client database process on Worker1 should connect to that database using only Interface B, which provides higher bandwidth, while any process bound to Interface A works exclusively with the much smaller control or Internet access. In this scenario, it is helpful to create multiple routing tables and apply a specific configuration to each interface (such as a larger MTU and more bandwidth for the Data network).
In this example, we will create our first primary interface routing table and will explicitly define routes for our Control subnet on Worker1:
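A minimal sketch of these commands follows; the table name/ID, the interface name ens3, and all addresses are illustrative placeholders, so substitute the values of your own Control subnet:

```
# Register a named routing table for the primary (control) interface.
echo "100 control" | sudo tee -a /etc/iproute2/rt_tables

# Populate the "control" table: the local Control subnet plus a default
# route via the Control subnet gateway (placeholder addresses).
sudo ip route add 172.16.1.0/24 dev ens3 src 172.16.1.4 table control
sudo ip route add default via 172.16.1.1 dev ens3 table control

# Send traffic sourced from the control address through this table.
sudo ip rule add from 172.16.1.4/32 table control
```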
Next, we will create a second table for our Data network. This will specify similar routes, except this time for the Data subnet again on Worker1:
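Again as a sketch, assuming ens4 is the data interface and using placeholder Data subnet addresses:

```
# Register a second named routing table for the data interface.
echo "200 data" | sudo tee -a /etc/iproute2/rt_tables

# Populate the "data" table for the Data subnet on Worker1.
sudo ip route add 172.16.101.0/24 dev ens4 src 172.16.101.4 table data
sudo ip route add default via 172.16.101.1 dev ens4 table data

# Send traffic sourced from the data address through this table.
sudo ip rule add from 172.16.101.4/32 table data
```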
The complete list of the commands on Worker1 and Worker2 is in Appendix B.
Verify connectivity to the other virtual machines, such as from Worker1 to Worker2. Note that in this case we did not explicitly define routes to the Zone2 virtual machines via any “172.16.102.0” rules. Instead, we simply let the source interface route all of its traffic to the appropriate gateway, which allows the ping to a VM in another zone to succeed. However, we must now explicitly specify which interface to use for the communication, in contrast to the Custom Routes mentioned earlier:
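With ping, for instance, the source interface can be selected with the -I option (the target address is a placeholder for Worker2's IP on the 172.16.102.0/24 Data subnet):

```
# Ping Worker2's data interface from Worker1, explicitly binding to ens4.
ping -I ens4 -c 3 172.16.102.4
```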
Regarding extensibility and usability, Source Based Routing Tables have their own advantages and disadvantages compared with Custom Routes. Suppose we add another worker, Worker3, in another availability zone, and that Worker3 is connected to the Control and Data networks through Control Subnet3 and Data Subnet3. With Custom Routes, new routing rules must be added not only on Worker3, but also on Worker1 and Worker2.
In contrast, with Source Based Routing Tables, because our existing tables do not call out any destination routing rules, we do not need to modify them; we only need to create new routing tables for Worker3. The trade-off is that our software application must specify which interface to bind to and use for communication.
Network Namespaces
Suppose a virtual machine has multiple network interfaces, and multiple containers on this VSI each utilize their own interface. Linux network namespaces are ideal in this configuration for maintaining and managing the various interfaces and their associated routes. Processes and containers run exclusively in the context of a given namespace and are restricted to it. By assigning a single network namespace to a specific container, the container and its processes are isolated and can reach only the specified networks and interfaces; they cannot access any other interface unless explicitly permitted.
Here are the first two commands to execute on Worker1:
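Sketched here with the names used in this example (the data namespace and the ens4 secondary interface):

```
# Create a network namespace named "data".
sudo ip netns add data

# Move the secondary interface (ens4) into the "data" namespace.
sudo ip link set ens4 netns data
```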
Once the secondary interface is moved into the “data” namespace, it becomes unavailable in the default context/namespace. Now, we can add routes and even additional interfaces within the context of the newly created data namespace:
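A sketch of those steps, assuming dhclient as the DHCP client and a placeholder gateway address for the Data subnet:

```
# Bring up the loopback and ens4 interfaces inside the "data" namespace.
sudo ip netns exec data ip link set lo up
sudo ip netns exec data ip link set ens4 up

# Run the DHCP client inside the namespace so ens4 obtains an address.
sudo ip netns exec data dhclient ens4

# Add a default route via the Data subnet gateway (placeholder address).
sudo ip netns exec data ip route add default via 172.16.101.1 dev ens4
```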
In the examples above, on Worker1, we move the ens4 adapter into the data namespace. Next, we bring up the ens4 and lo (loopback) interfaces in the data namespace, and then run the DHCP client so that the ens4 interface obtains a valid IP address. Finally, we add a default route for the interface using its IP address and gateway. Appendix C contains the complete command sequences for Worker1 and Worker2.
With this setup in place, we can ping the appropriate control network interface from Worker1 to Worker2 in Zone2 using the default context:
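For example (Worker2's control address is a placeholder):

```
# From Worker1's default namespace: ping Worker2's control interface.
ping -c 3 172.16.2.4
```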
However, to reach the data network of Worker2 from Worker1, it is necessary to enter the data namespace. This is achieved via the ip netns exec data command, followed by the commands to execute in that network namespace context:
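For example, with a placeholder Worker2 address on the 172.16.102.0/24 Data subnet:

```
# Ping Worker2's data interface from within the "data" namespace.
sudo ip netns exec data ping -c 3 172.16.102.4
```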
The ping to the “172.16.102.0/24” network will only succeed within the data namespace, as the ens4 interface is only available there. Conversely, the control network lives in the default namespace, so it cannot be reached from the data namespace context.
Summary
To summarize, Network Namespaces are ideal when there are multiple containers, each requiring distinct network isolation and reachability. In our example, control network processes cannot communicate with, or even know about, data network processes, and vice versa. This may be a disadvantage: for example, we may need automation that restarts a workload over the control network when we observe an issue with a data network worker. In that case, it may be better to use Source Interface Based Routing Tables, since a control network function can still react to data network state. That approach still separates the interfaces and bandwidth for the data network, while directing all control communication through a specified interface. Finally, we also explored the simplicity of adding Custom Routes to achieve reachability in a simple multi-network-interface system where the communication involves zone traversal.
As we explored in this blog series, each method has its own benefits and is appropriate for a given use-case scenario. We hope this has helped you understand how to achieve reachability with multiple network interfaces attached to virtual machines in IBM Cloud, and how to choose the method that works best in your situation.
- Multiple Virtual Network Interfaces Connectivity and Usage in IBM Cloud VPC: Part 1
- Multiple Virtual Network Interfaces Connectivity and Usage in IBM Cloud VPC: Part 2
- Multiple Virtual Network Interfaces Connectivity and Usage in IBM Cloud VPC: Part 3
Appendix A: Custom Routes
Worker1:
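A hedged sketch of the destination-specific routes described in Part 2 (interface names and all addresses are placeholders):

```
# Reach the Zone2 Control and Data subnets via the local gateways.
sudo ip route add 172.16.2.0/24 via 172.16.1.1 dev ens3
sudo ip route add 172.16.102.0/24 via 172.16.101.1 dev ens4
```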
Worker2:
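The mirror image on Worker2 (again with placeholder values):

```
# Reach the Zone1 Control and Data subnets via the local gateways.
sudo ip route add 172.16.1.0/24 via 172.16.2.1 dev ens3
sudo ip route add 172.16.101.0/24 via 172.16.102.1 dev ens4
```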
Appendix B: Source Interface Based Routing Tables
Worker1:
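A consolidated sketch for Worker1 (table names/IDs, interface names, and addresses are placeholders):

```
# Named routing tables, one per interface.
echo "100 control" | sudo tee -a /etc/iproute2/rt_tables
echo "200 data" | sudo tee -a /etc/iproute2/rt_tables

# Control interface (ens3).
sudo ip route add 172.16.1.0/24 dev ens3 src 172.16.1.4 table control
sudo ip route add default via 172.16.1.1 dev ens3 table control
sudo ip rule add from 172.16.1.4/32 table control

# Data interface (ens4).
sudo ip route add 172.16.101.0/24 dev ens4 src 172.16.101.4 table data
sudo ip route add default via 172.16.101.1 dev ens4 table data
sudo ip rule add from 172.16.101.4/32 table data
```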
Worker2:
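And for Worker2, using its Zone2 subnets (placeholder values, with 172.16.102.0/24 as the Data subnet):

```
# Named routing tables, one per interface.
echo "100 control" | sudo tee -a /etc/iproute2/rt_tables
echo "200 data" | sudo tee -a /etc/iproute2/rt_tables

# Control interface (ens3).
sudo ip route add 172.16.2.0/24 dev ens3 src 172.16.2.4 table control
sudo ip route add default via 172.16.2.1 dev ens3 table control
sudo ip rule add from 172.16.2.4/32 table control

# Data interface (ens4).
sudo ip route add 172.16.102.0/24 dev ens4 src 172.16.102.4 table data
sudo ip route add default via 172.16.102.1 dev ens4 table data
sudo ip rule add from 172.16.102.4/32 table data
```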
Appendix C: Network Namespaces
Worker1:
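The full sequence for Worker1 as sketched in the body (dhclient and the gateway address are assumptions):

```
# Create the "data" namespace and move ens4 into it.
sudo ip netns add data
sudo ip link set ens4 netns data

# Bring up interfaces, obtain an address via DHCP, add a default route.
sudo ip netns exec data ip link set lo up
sudo ip netns exec data ip link set ens4 up
sudo ip netns exec data dhclient ens4
sudo ip netns exec data ip route add default via 172.16.101.1 dev ens4
```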
Worker2:
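The same sequence on Worker2, with its own Data subnet gateway (placeholder):

```
# Create the "data" namespace and move ens4 into it.
sudo ip netns add data
sudo ip link set ens4 netns data

# Bring up interfaces, obtain an address via DHCP, add a default route.
sudo ip netns exec data ip link set lo up
sudo ip netns exec data ip link set ens4 up
sudo ip netns exec data dhclient ens4
sudo ip netns exec data ip route add default via 172.16.102.1 dev ens4
```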