December 2, 2021 | By Rei Odaira, Saju Mathew and Weiming Gu | 7 min read

A look at the benefit of using multiple network interfaces in IBM Cloud VPC to obtain network bandwidth beyond the rate limit of a single network interface.

In IBM Cloud Virtual Private Cloud (VPC), a user can attach more than one network interface to a virtual server instance (VSI) to achieve a higher aggregated network bandwidth. By simply attaching two network interfaces and using a separate network flow on each interface, you can easily exceed the rate cap of a single network interface. However, to maximize the throughput from each of the multiple network interfaces, it is necessary to tune the VSIs.

In this blog post, we demonstrate the benefit of using multiple network interfaces in terms of the network bandwidth and describe the necessary performance tuning. In our operational examples, we assume our guest operating system to be Ubuntu Linux; however, this discussion applies to any guest operating system.

Use case scenario

Let’s suppose you have two worker VSIs that exchange large amounts of data, and the network bandwidth between them is the limiting factor of the entire system’s performance, as shown in the figure below. Workers 1 and 2 originally had only their primary network interfaces — Interfaces A and P, respectively. You can attach a secondary network interface to each of them — Interfaces B and Q, respectively. Readers can refer to our online documentation on how to accomplish this.

Please keep in mind that you must use a profile with a sufficiently large number of virtual CPUs to obtain a higher aggregated bandwidth than that of a single network interface. The appendix of this blog post explains how to calculate the network bandwidth of each network interface. In the measurement results shown below, we used VSIs with a 48-virtual-CPU profile, which means that Interfaces A, B, P and Q are each capable of a maximum throughput of 16 Gbps.
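After the secondary interfaces are attached, a quick way to confirm that the guest operating system sees them is to list the interfaces from inside each VSI. The interface names ens3 and ens4 below are assumptions that match the examples in the rest of this post; your distribution may assign different names:

ip link show
ip addr show ens4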

Setting up routing rules

Once you attach a secondary network interface to the worker VSIs, the first thing you must do is to establish connectivity between them. As described in another blog post, you must set up routing rules. Here are sample commands to add necessary routing rules in each VSI:

  • Worker 1:
    ip route add 172.16.0.104/32 via 172.16.0.4 dev ens3
    ip route add 172.16.0.105/32 via 172.16.0.5 dev ens4
  • Worker 2:
    ip route add 172.16.0.4/32 via 172.16.0.104 dev ens3
    ip route add 172.16.0.5/32 via 172.16.0.105 dev ens4

While these example commands were provided for simplicity, in larger environments it may be preferable to use multiple subnets to achieve similar results. Doing so also reduces the need to add routing rules for each host within the same availability zone.
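Before running any benchmarks, it is worth verifying that the new routes are in place and that the interfaces can reach each other. The following is a simple sanity check from Worker 1, using the same example addresses as above:

ip route show
ping -c 3 172.16.0.104
ping -c 3 172.16.0.105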

Throughput measurement

You can easily test the maximum network throughput using the iPerf2 benchmark program. To take advantage of the two network interfaces on your VSIs, you need to invoke two processes of the iPerf2 client, targeting different server IP addresses assigned to the two network interfaces. Suppose you run the iPerf2 clients on Worker 1 and an iPerf2 server on Worker 2. The following is a command to start the iPerf2 server:

iperf -s

Listed below are commands to start the iPerf2 clients to measure the TCP throughput between Worker 1 and Worker 2. To run the commands concurrently, you may run them either in the background from a shell or from two separate terminals:

iperf -c 172.16.0.104
iperf -c 172.16.0.105
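For example, one simple way to run the two clients concurrently from a single shell is to place them in the background and wait for both to finish. The log file names below are only illustrative:

iperf -c 172.16.0.104 > iperf_if1.log 2>&1 &
iperf -c 172.16.0.105 > iperf_if2.log 2>&1 &
wait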

In the figure below, the blue bar on the left-hand side represents the aggregated throughput you will obtain from the two iPerf2 processes, which we measured to be 18.4 Gbps. As expected, this value was higher than the 16 Gbps cap of a single network interface:

On the right side of the figure, we show throughputs from the bidirectional benchmark mode. The commands listed above measure unidirectional throughput by transferring data from the clients to the server. In the bidirectional measurement, we may achieve up to twice the unidirectional throughput because data flows in both directions. You can use the iPerf2 option --bidir to run another concurrent benchmark in the opposite direction. In our measurement, we observed a 30.3 Gbps bidirectional throughput, as shown in the figure.
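For example, assuming an iPerf2 build that supports the --bidir option mentioned above, the two concurrent bidirectional clients can be started as follows, with the same server and addresses as before:

iperf -c 172.16.0.104 --bidir
iperf -c 172.16.0.105 --bidir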

Throughput tuning

Although you can easily achieve an aggregated throughput of 18.4 Gbps (which is higher than the throughput cap of 16 Gbps for a single network interface), it is far below the total theoretical maximum throughput of 32 Gbps achievable by two network interfaces. We present three tuning methods to increase the actual throughput and approach the theoretical maximum.

The first tuning is to increase the maximum transmission unit (MTU) from the default 1500 bytes to 9000 bytes. The MTU determines the largest packet size that can be communicated in the network. Sending each packet over a network incurs a constant overhead regardless of the size of the packet. Additionally, there is a variable part that is proportional to the size of the packet. By using a larger packet size, you can amortize the constant overhead and reduce the communication overhead per transferred byte. On the downside, if you make the MTU too large, the underlying network infrastructure may not be able to handle such a large packet, which can result in dropped packets. In IBM Cloud VPC, the maximum supported MTU is 9000. When communication is confined to VSIs within IBM Cloud, it is safe to increase the MTU to 9000.

Run these commands on both Worker 1 and Worker 2 to increase the MTU to 9000 for each network interface (assuming ens3 and ens4 are the interfaces present from the ip link command):

ip link set ens3 mtu 9000
ip link set ens4 mtu 9000
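To confirm that jumbo frames work end to end, one option is to check the interface MTU and then send a ping that is not allowed to be fragmented. The payload size of 8972 bytes accounts for the 28 bytes of IP and ICMP headers on top of the 9000-byte MTU; the target address is Worker 2’s primary interface from the earlier examples:

ip link show ens3 | grep mtu
ping -M do -s 8972 -c 3 172.16.0.104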

As indicated by the orange bars in the figure above, we observe throughputs of 19.9 Gbps and 34.1 Gbps in the unidirectional and bidirectional benchmarks, respectively.

The second tuning is to increase the size of the system socket buffer. The socket buffer size of a network stream determines how many bytes of data the sender can send on the network stream before being acknowledged by the receiver. If a network has high bandwidth and/or long latency, it is advisable to increase the socket buffer size to take full advantage of the network bandwidth. The downside is increased memory overhead for each network stream. Unless your network application uses a large number of network streams (more than a few hundred) at the same time, you do not need to worry about this memory overhead. Considering the bandwidth cap and the typical latency of the IBM Cloud network, 8 MB is sufficiently large for a socket buffer.

Here is a command to set the socket buffer size to 8 MB on the guest operating system:

sysctl -w net.core.rmem_default=8388608 net.core.rmem_max=8388608 net.core.wmem_default=8388608 net.core.wmem_max=8388608

This command should be run on both Workers 1 and 2. Additionally, your application may need to explicitly specify the size of the socket buffer it is going to use. In iPerf2, this is done by adding the -w 8M option to both the server and the clients. The gray bars in the figure above indicate a large increase in throughput after setting the socket buffer size to 8 MB: 29.1 Gbps and 46.1 Gbps for the unidirectional and bidirectional throughputs, respectively.
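Note that the sysctl command above does not survive a reboot. If you want the buffer sizes to persist, one common approach on Ubuntu is to place them in a drop-in file under /etc/sysctl.d and reload the settings; the file name below is only an example:

cat <<'EOF' > /etc/sysctl.d/90-socket-buffers.conf
net.core.rmem_default = 8388608
net.core.rmem_max = 8388608
net.core.wmem_default = 8388608
net.core.wmem_max = 8388608
EOF
sysctl --system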

The third tuning is to use multiple network streams on each network interface. Owing to the single-threaded processing limits of a virtual CPU, a single network stream usually cannot saturate the available network bandwidth. Running more than one network stream per network interface makes better use of the available CPUs and achieves higher throughput. This behavior, of course, is highly dependent on the application and the specified number of network streams. In iPerf2, adding -P 4 as a client option provides four streams.

A typical throughput-oriented application may find that anywhere from two to sixteen streams per network interface yields the best throughput. The yellow bars in the figure above show that the unidirectional throughput reached the theoretical maximum of 32 Gbps when using four streams per network interface. In this case, the bidirectional throughput increased to 57.4 Gbps, or 90% of the theoretical maximum of 64 Gbps.
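To find the best stream count for your own workload, one simple approach is to sweep the -P value and compare the reported throughputs. The -t 30 option below runs each test for 30 seconds; the server address is the same as in the earlier examples:

for p in 2 4 8 16; do
  iperf -c 172.16.0.104 -w 8M -P $p -t 30
done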

Listed below are the final command-line options issued to the iPerf2 server and clients:

iperf -s -w 8M

iperf -c 172.16.0.104 -w 8M -P 4
iperf -c 172.16.0.105 -w 8M -P 4

Summary

In this blog post, we demonstrated the benefit of using multiple network interfaces in IBM Cloud VPC to obtain network bandwidth beyond the rate limit of a single network interface. Furthermore, we explained three tuning methods to take full advantage of the available network bandwidth: adjusting the MTU, optimizing the system socket buffer size and increasing the number of network streams per network interface. Please keep in mind that if you use multiple network interfaces to communicate outside of the VPC, other throughput restrictions might apply, depending on the communication peer. To learn more, check out this online course for advanced networking architectures and best practices.

Learn more about IBM Cloud VPC.

Appendix

For most profiles, the default total bandwidth of a VSI is determined by the expression {Total VSI Bandwidth} = {Virtual CPUs in VSI} x 2 Gbps (at the time of publication). The total VSI bandwidth is also currently capped at 80 Gbps. Please refer to the VPC instance profiles for tables displaying the detailed bandwidth on a per-profile basis. 

A VSI’s total bandwidth is split between storage and network bandwidths, with a default ratio of 1:3 (storage:network), which applies to all instance profiles. Thus, the network bandwidth of a VSI can be determined by the expression {Total VSI Network Bandwidth} = {Total VSI Bandwidth} x 0.75. In addition, the bandwidth of each network interface is capped at 16 Gbps at the time of publication. Please refer to the online documentation for updated information on VPC bandwidth allocation profiles. Note that this default bandwidth allocation is also adjustable after provisioning the VSI.

For example, if your VSI has eight virtual CPUs with one network interface, {Total VSI Bandwidth} = {8 vCPUs} x 2 Gbps, resulting in a total VSI bandwidth of 16 Gbps. Recalling that some of the bandwidth is allocated to storage by default, {Total VSI Network Bandwidth} = {16 Gbps Total VSI Bandwidth} x 0.75. It follows that the total VSI network bandwidth is 12 Gbps, which will remain unchanged even if you attach one more network interface to the VSI. This implies that each of the two network interfaces will be capped at 6 Gbps because the network bandwidth is split evenly between all interfaces.

In contrast, if a VSI has 32 virtual CPUs with a single network interface, the total VSI bandwidth is calculated as 32 x 2 Gbps = 64 Gbps. The network bandwidth is 64 x 0.75 = 48 Gbps, while the network interface is capped at 16 Gbps. If you attach one more network interface to the VSI, each of the two network interfaces will be capable of 16 Gbps, because 16 + 16 = 32 Gbps, which is still below 48 Gbps.
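As a convenience, the per-interface calculation above can be expressed as a small shell sketch. The 2 Gbps-per-vCPU factor, the 80 Gbps total cap, the 0.75 network share and the 16 Gbps per-interface cap reflect the values at the time of publication and may change:

vcpus=8; nics=2
total=$(( vcpus * 2 )); [ "$total" -gt 80 ] && total=80
net=$(( total * 3 / 4 ))
per_nic=$(( net / nics )); [ "$per_nic" -gt 16 ] && per_nic=16
echo "Estimated cap per network interface: ${per_nic} Gbps"

With vcpus=8 and nics=2, this prints 6 Gbps, matching the first example; with vcpus=32, it prints 16 Gbps, matching the second.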
