In this second part of our three-part series on using multiple network interfaces in IBM Cloud VPC, we demonstrate how to add custom routing rules to accomplish the traffic separation and then test reachability. This is one of the simplest methods to achieve reachability on an existing virtual machine without having to work with additional routing tables or network namespaces.
This figure shows our multi-zone controller/worker scenario, as explained in Part 1:
From Worker1, pings to Control Subnet1 and Subnet2 work by default over the default route. The data subnets, however, need explicit routing rules so that Worker1 can reach Worker2 on Interface D using its own Interface B.
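Before adding any rules, it can be useful to inspect the current routing table on Worker1 (the interface names below are from our example setup and may differ on your images):

ip route show
# expect a default route via the primary (control) interface, e.g. dev ens3,
# and connected routes for the locally attached subnets; there is not yet
# a route for the remote data subnet 172.16.102.0/24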
A straightforward approach is to add the rules with the ip route add command. In this case, run the following on Worker1:
ip route add 172.16.102.0/24 via 172.16.101.1 dev ens4 metric 0
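After adding the rule, you can confirm it is in place by listing the routing table again; the new entry should read roughly as shown in the comment (exact formatting can vary slightly by distribution):

ip route show
# 172.16.102.0/24 via 172.16.101.1 dev ens4 metric 0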
Similarly, here is the command to execute on Worker2:
ip route add 172.16.101.0/24 via 172.16.102.1 dev ens4 metric 0
Once the rules have been established, you can check them with the ip route command. Then ping the respective IP addresses from Worker1 to Worker2 to validate that reachability is achieved:
ping -c 3 -I 172.16.1.5 172.16.2.5
ping -c 3 -I 172.16.101.5 172.16.102.5
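To double-check that this traffic really enters Worker2 on the data interface (Interface D) rather than the control interface, you can run a capture on Worker2 while the ping is in progress. This assumes tcpdump is installed and the data interface is named ens4:

tcpdump -ni ens4 icmp
# echo requests from 172.16.101.5 should appear here; an equivalent
# capture on ens3 should remain silent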
Additionally, to be sure that communication is NOT occurring from the control interfaces to the data network (and vice versa), we can validate that the following pings will NOT succeed:
ping -c 3 -I 172.16.1.5 172.16.102.5
ping -c 3 -I 172.16.101.5 172.16.2.5
In this case, the correct isolation is achieved by the combination of the routing rules and the platform's anti-spoofing checks. More generally, the isolation should be enforced with security group rules; see the IBM Cloud VPC documentation on security groups for details.
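As a sketch of the security-group approach (the group name data-sg is hypothetical; see the IBM Cloud CLI reference for the exact options), rules admitting data traffic only from the two data subnets could look like the following, with data-sg attached only to the data interfaces B and D:

ibmcloud is security-group-rule-add data-sg inbound all --remote 172.16.101.0/24
ibmcloud is security-group-rule-add data-sg inbound all --remote 172.16.102.0/24
# keep the control interfaces in a separate group with no rules
# for the data subnets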
One additional item to note is the default routing within the virtual machine. Because the routes we added are destination-based, even a plain ping without specifying a source interface works: the kernel picks the outgoing interface (and hence the source address) from the route to the destination. Therefore, the following commands will be successful:
ping -c 3 172.16.2.5
ping -c 3 172.16.102.5
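The route lookup that makes this work can be observed directly with ip route get; on Worker1 the reply should indicate the data interface and its source address (expected output paraphrased for our addressing):

ip route get 172.16.102.5
# expected: 172.16.102.5 via 172.16.101.1 dev ens4 src 172.16.101.5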
Persistent Custom Routes
Now that we have successfully tested the Custom Routes we added, we can make them persistent. On Ubuntu distributions, this is done through netplan: first move the existing /etc/netplan/50-cloud-init.yaml to /etc/netplan/99-netcfg.yaml, and then add a route to the destination network via the local interface's gateway.
New /etc/netplan/99-netcfg.yaml:
network:
  version: 2
  ethernets:
    ens3:
      dhcp4: true
      match:
        macaddress: 02:00:02:06:b3:a3
      set-name: ens3
    ens4:
      dhcp4: true
      routes:
        - to: 172.16.102.0/24
          via: 172.16.101.1
          metric: 0
Once this is done, /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg can be created with the following content, which prevents cloud-init from regenerating the network configuration on reboot:
network: {config: disabled}
Now, issue the netplan apply command and test reachability with ping.
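A minimal sequence on Worker1 (run with root privileges; addresses are from our example):

sudo netplan apply
ip route show
# the persistent route 172.16.102.0/24 via 172.16.101.1 dev ens4 should be listed
ping -c 3 172.16.102.5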
On Red Hat Enterprise Linux, refer to the Red Hat documentation on configuring static routes.
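For instance, if the data interface is managed by NetworkManager under a connection profile named ens4 (a hypothetical name; list profiles with nmcli connection show), a persistent equivalent of our Worker1 route would be:

nmcli connection modify ens4 +ipv4.routes "172.16.102.0/24 172.16.101.1"
nmcli connection up ens4
# re-activating the profile applies the static route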
Summary
In this post, we explained how to configure Custom Routes to achieve reachability in our use case of multiple network interfaces. Custom Routes are the simplest method to enable both communication and separation among virtual server instances with multiple network interfaces. In the next (and final) post, we will present two more advanced and flexible approaches: Source Interface Based Routing Tables and Network Namespaces.