The network is a key component of any cloud, connecting all of its resources together.
Consequently, network performance is one of the most critical considerations when we move an application or workload onto the cloud. Many cloud users wonder how fast the network on IBM Cloud is, and the answer is: “It depends…”
For one, network performance depends on where the source and the destination are. This blog post is the first in a series in which we will look at several combinations of sources and destinations and see how fast the IBM Cloud network really is, both in terms of network throughput and latency. More specifically, this post focuses on how fast the network is between bare metal servers on IBM Cloud; virtual servers will be covered in the next post in the series.
IBM Cloud has multiple infrastructures for hosting cloud resources, and these infrastructures have different network capabilities. Resources can be deployed in a single availability zone (AZ) or across multiple zones; think of a zone as a single cloud data center.
Learn more about locations for resource deployment here.
IBM Cloud Bare Metal Servers (Classic Architecture)
At present, IBM Cloud offers Bare Metal Servers on the Classic Architecture, which is different from the newer Gen2 cloud architecture that offers Virtual Private Cloud (VPC) and advanced networking features; Classic cloud infrastructure does not support VPC. The IBM Cloud documentation contains a deeper comparison of the Classic and VPC infrastructures.
IBM Cloud offers the widest selection of bare metal servers among major public clouds, ranging from single-processor (single-socket) to quad-processor (four-socket) servers. Some bare metal configurations are also certified for SAP workloads and VMware hosts. In terms of network port speeds, bare metal servers on IBM Cloud support 100 Mbps, 1 Gbps, 10 Gbps and, for select high-end configurations, 25 Gbps. In this post, we will see whether we can hit those network speeds and determine the network latencies using industry-standard network benchmarks.
Test methodology
To measure the network bandwidth and latency, we used iperf3, netperf and qperf – all industry-standard network performance tools. We made sure that the output from those tools agreed with one another during our testing.
Each test ran for at least five minutes to ensure steady state. Netperf, iperf3 and qperf are client-server test tools: the client runs on one machine (the “source”) while the server runs on the other (the “destination”). In single-direction tests, a single client-server pair was used, while in full-duplex (bidirectional) tests, two client-server pairs transferred data in both directions at the same time.
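As a rough illustration, here is a minimal sketch (Python wrapping the iperf3 CLI) of how such a full-duplex run could be scripted. It assumes iperf3 servers are already listening on the destination at two ports (for example, started with iperf3 -s -p 5201 and iperf3 -s -p 5202); the destination address and port numbers are placeholders, not values from our tests.

```python
# Minimal sketch of a full-duplex iperf3 run: two concurrent clients, one per
# direction, stand in for the two client-server pairs described above.
# Assumes iperf3 servers are already listening on DEST at ports 5201 and 5202.
import json
import subprocess

DEST = "10.0.0.2"   # placeholder: private IP of the destination server
DURATION = 300      # run for at least five minutes to reach steady state

def start_client(port: int, reverse: bool = False) -> subprocess.Popen:
    """Start one iperf3 client; -R makes the destination send to the source."""
    cmd = ["iperf3", "-c", DEST, "-p", str(port), "-t", str(DURATION), "-J"]
    if reverse:
        cmd.append("-R")
    return subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)

flows = {
    "source -> destination": start_client(5201),
    "destination -> source": start_client(5202, reverse=True),
}

for name, proc in flows.items():
    out, _ = proc.communicate()
    result = json.loads(out)
    gbps = result["end"]["sum_received"]["bits_per_second"] / 1e9
    print(f"{name}: {gbps:.2f} Gbps")
```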
For the network performance measurements, we provisioned pairs of bare metal servers in three configurations: within the same zone (data center), in different zones within the same region, and in different regions. For each configuration, we measured the network throughput and latency; the results are summarized below.
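For the latency numbers that follow, the sketch below (Python wrapping the qperf and netperf CLIs) shows one way the tools could be cross-checked against each other. It assumes the qperf daemon and netperf’s netserver are already running on the destination; the destination address is a placeholder.

```python
# Minimal sketch of a latency measurement, cross-checking two tools.
# Assumes "qperf" (run with no arguments, as the daemon) and netperf's
# "netserver" are already running on the destination; DEST is a placeholder.
import re
import subprocess

DEST = "10.0.0.2"

# qperf prints a latency figure directly, e.g. "latency = 11.8 us".
qperf_out = subprocess.run(
    ["qperf", "-t", "300", DEST, "tcp_lat"],
    capture_output=True, text=True, check=True,
).stdout
match = re.search(r"latency\s*=\s*([\d.]+\s*\S+)", qperf_out)
print("qperf tcp_lat:", match.group(1) if match else qperf_out.strip())

# netperf's TCP_RR test reports a transaction (request/response) rate;
# the time per round trip is roughly 1 / rate. Print the raw report and
# read the "Trans. Rate per sec" column.
print(subprocess.run(
    ["netperf", "-H", DEST, "-l", "300", "-t", "TCP_RR"],
    capture_output=True, text=True, check=True,
).stdout)
```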
Network performance data
Since the servers are bare metal rather than virtual servers, there is no network virtualization overhead, so we would expect to get close to line speed between servers within the same data center (zone). Based on our measurements, that’s pretty much the case. Between bare metal servers with 25-Gbps network cards in the same data center (zone), the network latency is 11.8 microseconds, much better than the approximately 47 microseconds measured with 10-Gbps network cards. This low network latency also comes with an effective network throughput of 22.8 Gbps, roughly 91% of the 25-Gbps line rate.
Going across different zones within the same region, the effective network throughput drops a little, but the latencies rise significantly, to several hundred microseconds.
Going across different regions drops the effective network throughput down to 1 or 2 Gbps and increases the network latencies to around 15 milliseconds.
Learn more about IBM Cloud Bare Metal Servers
Want to check out the best network performance on our bare metal servers today? Enhance your networking interface by choosing the 25-GbE port speed for top switch input/output (I/O) performance, fabric capability and PCIe lanes.