October 29, 2021 | Saju Mathew, Rei Odaira, Weiming Gu | 6 min read

In the third part of this three-part series, we present two advanced approaches to configuring routing rules for multiple network interfaces.

Source Interface Based Routing Tables involve creating individual routing tables specific to each interface, while Network Namespaces are best suited for using multiple network interfaces from containers. The figure below shows our multi-zone controller/worker scenario, as explained in Part 1.

Source Interface Based Routing Tables

This approach works in a similar manner to the Custom Routes discussed in Part 2 of this series, but with the addition of named routing tables. These tables allow the user to manage a larger number of rules, and even many more interfaces, in a structured fashion.

This approach may be useful if you are working with multiple processes on a VSI, each of which requires a specific amount of bandwidth. For example, suppose you want a database process bound only to the Data network interface on Worker2. A database client process on Worker1 would then connect to the database exclusively over Interface B, which provides higher bandwidth, while any process bound to Interface A would be limited to the lower-bandwidth control or Internet access. In this scenario, it is helpful to create multiple routing tables and apply a specific configuration to each interface (such as a larger MTU and higher bandwidth for the Data network).

In this example, we first create a routing table for the primary interface and explicitly define routes for the Control subnet on Worker1:

echo 200 ens3tab >> /etc/iproute2/rt_tables
ip route add 172.16.1.0/24 dev ens3 proto static scope link src 172.16.1.5 table ens3tab
ip route add default via 172.16.1.1 dev ens3 proto static src 172.16.1.5 table ens3tab
ip rule add from 172.16.1.5 table ens3tab
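
As an optional check (not part of the required setup), you can list the contents of the new table to confirm the routes were added as expected:

# Show the routes held in the per-interface table created above
ip route show table ens3tab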

Next, we create a second table for our Data network. It specifies similar routes, this time for the Data subnet, again on Worker1:

echo 201 ens4tab >> /etc/iproute2/rt_tables
ip route add 172.16.101.0/24 dev ens4 proto static scope link src 172.16.101.5 table ens4tab
ip route add default via 172.16.101.1 dev ens4 proto static src 172.16.101.5 table ens4tab
ip rule add from 172.16.101.5 table ens4tab
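
Similarly, a quick sanity check can confirm that both policy rules and the Data table are in place; this is optional and uses only the names and addresses defined above:

# The rules for source addresses 172.16.1.5 and 172.16.101.5 should be listed
ip rule show
# Show the routes held in the Data interface table
ip route show table ens4tab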

The complete list of the commands on Worker1 and Worker2 is in Appendix B.

Verify connectivity to the other virtual machines, such as from Worker1 to Worker2. Note that we did not explicitly define routes to the Zone2 virtual machines via any “172.16.102.0” rules. Instead, each source-based rule simply directs all traffic from its interface to the appropriate gateway, which allows the ping to a VM in the other zone to succeed. However, in contrast with the Custom Routes discussed earlier, we must now explicitly specify which interface to use for the communication:

ping -c 3 -I 172.16.1.5 172.16.2.5
ping -c 3 -I 172.16.101.5 172.16.102.5
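
If you want to confirm which gateway the kernel selects for a given source address, ip route get accepts a from argument. This is an optional check on Worker1, reusing the addresses above:

# Lookup for the Zone2 Control subnet as seen from the Control address (expect gateway 172.16.1.1 on ens3)
ip route get 172.16.2.5 from 172.16.1.5
# Lookup for the Zone2 Data subnet as seen from the Data address (expect gateway 172.16.101.1 on ens4)
ip route get 172.16.102.5 from 172.16.101.5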

Regarding extensibility and usability, Source Based Routing Tables have their own advantages and disadvantages compared with Custom Routes. Suppose that we are going to add another worker, Worker3, in another availability zone, and that Worker3 is connected to the Control and Data networks through Control Subnet3 and Data Subnet3. With Custom Routes, we would need to add new routing rules not only on Worker3, but also on Worker1 and Worker2.

In contrast, with Source Based Routing Tables, because the existing tables do not call out any destination-specific routing rules, we do not need to modify them; we only need to create new routing tables on Worker3. The trade-off is that we must tell our software application which specific interface to bind to and use for communication.
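
As a sketch of that trade-off, a client that should use the Data network has to bind to the Data address explicitly. The example below uses curl purely for illustration; the service address and port on Worker2 are hypothetical:

# Bind the client to the Data interface address so traffic follows the ens4tab rules
curl --interface 172.16.101.5 http://172.16.102.5:8080/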

Network Namespaces

Suppose a virtual machine has multiple network interfaces and, on this VSI, multiple containers each use their own network interface. Linux network namespaces are ideal in this configuration for maintaining and managing the various interfaces and their associated routes. With this approach, processes and containers run exclusively within a designated namespace and are restricted to it. By assigning a single network namespace to a specific container, the container and its processes are isolated and can reach only the specified networks and interfaces; they cannot access any other interface unless explicitly permitted.

Here are the first two commands to execute on Worker1:

ip netns add data
ip link set ens4 netns data

Once the secondary interface is moved into the “data” namespace, it becomes unavailable in the default context/namespace. Now, we can add routes and even additional interfaces within the context of the newly created data namespace:

ip netns exec data ip link set ens4 up
ip netns exec data ip link set lo up
ip netns exec data dhclient ens4
ip netns exec data ip route add default via 172.16.101.1 dev ens4 proto static src 172.16.101.5

In the commands above, on Worker1, we move the ens4 adapter into the data namespace. Next, we bring up the ens4 and lo (loopback) interfaces in that namespace, then run the DHCP client so that the ens4 interface obtains a valid IP address. Finally, we add a default route for the interface using its IP address and gateway. Appendix C lists the complete command sequences for Worker1 and Worker2.
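
A few optional checks can confirm the namespace setup. Note that ens4 should no longer be visible in the default namespace, while its address and routes should appear inside the data namespace:

ip netns list                           # the "data" namespace should be listed
ip link show ens4                       # fails in the default namespace; the device has been moved
ip netns exec data ip addr show ens4    # the address obtained via DHCP (172.16.101.5) should appear
ip netns exec data ip route show        # includes the default route via 172.16.101.1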

With this setup in place, we can ping the Control network interface of Worker2 in Zone2 from Worker1 using the default namespace:

ping -c 3 172.16.2.5

However, to reach the Data network of Worker2 from Worker1, it is necessary to enter the data namespace. This is achieved with the ip netns exec data command, followed by the command to be executed in that network namespace:

ip netns exec data ping -c 3 172.16.102.5

The ping to the “172.16.102.0/24” network succeeds only within the data namespace, because the ens4 interface is available only there. Conversely, the Control network resides in the default namespace, so it cannot be reached from the data namespace.
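
To illustrate how a process can be confined to the Data network, here is a small sketch: start a simple HTTP server inside the data namespace on Worker2 and reach it from the data namespace on Worker1. The choice of Python's built-in server and the port number are assumptions for illustration only:

# On Worker2: serve only on the Data address, inside the data namespace
ip netns exec data python3 -m http.server 8080 --bind 172.16.102.5
# On Worker1: this request succeeds only when issued from within the data namespace
ip netns exec data curl http://172.16.102.5:8080/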

Summary

To summarize, Network Namespaces are ideal when there are multiple containers, each requiring distinct network isolation and reachability. In our example, Control network processes cannot communicate with, or even know about, Data network processes, and vice versa. This can be a disadvantage: for example, if we need automation that restarts a workload over the Control network when it observes an issue with a Data network worker. In such a case, it may be better to use Source Interface Based Routing Tables, which still let us separate the interfaces and bandwidth for the Data network while directing all control communication over a specified interface. Finally, we also explored the simplicity of adding Custom Routes to achieve reachability in a simple multi-network-interface system where communication crosses zones.

As we explored in this blog series, there are benefits of utilizing a specific method to achieve reachability and each method mentioned here is appropriate for a given use-case scenario. We hope this has helped you understand how to achieve reachability with multiple network interfaces attached to virtual machines in IBM Cloud and choose an appropriate method that works well in your situation.

Appendix A: Custom Routes

Worker1:

ip route add 172.16.102.0/24 via 172.16.101.1 dev ens4 metric 0  

Worker2:

ip route add 172.16.101.0/24 via 172.16.102.1 dev ens4 metric 0  

Appendix B: Source Interface Based Routing Tables

Worker1:

echo 200 ens3tab >> /etc/iproute2/rt_tables
ip route add 172.16.1.0/24 dev ens3 proto static scope link src 172.16.1.5 table ens3tab
ip route add default via 172.16.1.1 dev ens3 proto static src 172.16.1.5 table ens3tab
ip rule add from 172.16.1.5 table ens3tab

echo 201 ens4tab >> /etc/iproute2/rt_tables
ip route add 172.16.101.0/24 dev ens4 proto static scope link src 172.16.101.5 table ens4tab
ip route add default via 172.16.101.1 dev ens4 proto static src 172.16.101.5 table ens4tab
ip rule add from 172.16.101.5 table ens4tab

Worker2:

echo 200 ens3tab >> /etc/iproute2/rt_tables
ip route add 172.16.2.0/24 dev ens3 proto static scope link src 172.16.2.5 table ens3tab
ip route add default via 172.16.2.1 dev ens3 proto static src 172.16.2.5 table ens3tab
ip rule add from 172.16.2.5 table ens3tab

echo 201 ens4tab >> /etc/iproute2/rt_tables
ip route add 172.16.102.0/24 dev ens4 proto static scope link src 172.16.102.5 table ens4tab
ip route add default via 172.16.102.1 dev ens4 proto static src 172.16.102.5 table ens4tab
ip rule add from 172.16.102.5 table ens4tab

Appendix C: Network Namespaces

Worker1:

ip netns add data
ip link set ens4 netns data
ip netns exec data ip link set ens4 up
ip netns exec data ip link set lo up
ip netns exec data dhclient ens4
ip netns exec data ip route add default via 172.16.101.1 dev ens4 proto static src 172.16.101.5

Worker2:

ip netns add data
ip link set ens4 netns data
ip netns exec data ip link set ens4 up
ip netns exec data ip link set lo up
ip netns exec data dhclient ens4
ip netns exec data ip route add default via 172.16.102.1 dev ens4 proto static src 172.16.102.5