IBM Support

PowerVM Virtual Ethernet Speed is often confused with VIOS SEA speed

How To


Summary

A faster physical network and a faster virtual network are not one-to-one in speed.

Objective


Steps

Update in 2016:

Please note this blog is from 2011, during the initial POWER7 days. Technology has moved on with faster CPUs and memory, and there have been software improvements. I am amazed how many good computer people read this and assume this blog holds true for all time!

Virtual Ethernet is faster now but there is also a warning here. Here is an analogy.

  • Most vehicles can do 10 MPH (including me on a bike), most cars can do 100 MPH (including my family car), but very few vehicles can do 1000 MPH. That last times-ten multiplier is a really massive jump!
  • The same issue arises going from 100 Mb/s to 1 Gb/s and on to 10 Gb/s: easy, easy, and hard.

Multiple Gb/s is possible over virtual Ethernet (especially on POWER8 or later) but you have to work at it. That means network tuning and maybe application changes. Small packets are a killer, just due to the number of packets you need and the "transactional cost" of dealing with higher packet counts. You also need much larger buffering at source and target, or you run out of room and have to suspend transmission. There are also problems that are not network related: can you create the data this fast at the sender and process it all at the receiving end? This is not just a virtual Ethernet issue; the same is true for physical networks with hardware assist in adapters and switches. Finally, take a look at SR-IOV on POWER8, which uses hardware but can still allow Live Partition Mobility.
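To illustrate the small-packet "transactional cost" point above, here is a minimal sketch. The fixed per-packet handling cost is a purely hypothetical figure for illustration; the real cost varies with CPU, firmware level and tuning:

```python
# Hypothetical fixed CPU cost to handle one packet (illustration only).
PER_PACKET_COST_US = 5.0

def max_throughput_gbps(payload_bytes):
    """Throughput ceiling if every packet costs PER_PACKET_COST_US of CPU time."""
    packets_per_sec = 1e6 / PER_PACKET_COST_US
    return packets_per_sec * payload_bytes * 8 / 1e9

for size in (64, 1500, 9000):  # tiny packets, standard MTU, jumbo frames
    print(f"{size:5d}-byte packets -> ceiling {max_throughput_gbps(size):6.2f} Gb/s")
```

With the same per-packet cost, jumbo frames give a ceiling dozens of times higher than 64-byte packets, which is why large packets and bigger buffers matter so much at 10 Gb/s.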

I have had a couple of Power systems administrators make assumptions about virtual Ethernet speed improvements, when they install a 10 Gb vNIC in a VIOS, that are simply not true. I guess that if three teams have made this mistake then others are about to as well. So I intend here to set the record straight.
The expectation is that (deliberately fully spelt out long-hand to make it very clear):
  • When they upgrade the Integrated Virtual Ethernet (also called a Host Ethernet Adapter) from 1 Gigabit per second to 10 Gigabit per second,
  • the Virtual Ethernet inside the machine, which runs at about 1 Gigabit per second beforehand, will also jump to 10 Gigabit per second,
  • so that the virtual machines (LPARs) using the Virtual I/O Server (VIOS) Shared Ethernet Adapter (SEA) will benefit from the increased network speed.
  • That is, the client virtual machines' speed to the external network jumps from 1 Gb/s to 10 Gb/s.
Sorry guys, but this is not true.

Below are my comments:
  • It is fundamentally true that the performance between two nodes across a network is constrained by the weakest link.
  • The virtual Ethernet speed is not related to the speed of the physical adapter in the VIOS - I am not sure why that is assumed.
  • Perhaps the speed of a 1 Gb/s Ethernet adapter or 1 Gb/s IVE/HEA being similar to the typical virtual Ethernet speeds of 0.5 Gb/s to 1.5 Gb/s led many people to believe or assume the physical adapter or IVE/HEA was the limiting factor, but it is not.
  • If that were true, then virtual machine to virtual machine speed (i.e. without that limit) would be much faster, but it is not.
  • The virtual Ethernet function is performed by the POWER5, POWER6, or POWER7 main processors, which handle the multiple layers of TCP/IP packet processing and then communicate via hypervisor data structures plus interrupts across the machine. The actual blocks of data in memory are not moved (a very expensive CPU operation), as the Hypervisor can perform this function by remapping the virtual memory address translations of the memory blocks (which takes some CPU cycles, but far fewer). All of these CPU control and virtual memory operations still take elapsed time and CPU cycles.
  • Note: the simplest way to reduce virtual Ethernet performance is to not have enough POWER CPU cycles available to your source and target virtual machines (LPARs), for example by capping your virtual machine with too low an Entitlement, setting too low a virtual CPU number, or using low Entitlements that force the virtual machine to compete for extra CPU time from a pool that is too small.
  • The physical PCIe adapter or IVE/HEA has a dedicated processor on board to provide these functions, off-loaded from the main CPUs, and gets the data via DMA (Direct Memory Access) with no POWER CPU cycles. When you upgrade the adapter, I think it is safe to assume you upgrade the Ethernet media speed and probably the adapter processor to handle the increased bandwidth.
Referring to the diagram below:
  1. The network speed between virtual machine A and virtual machine B is purely due to the virtual Ethernet speed - something like 0.5 to 1.5 Gb/s.
  2. The network speed between virtual machine A and the VIOS is the virtual Ethernet speed - as above.
  3. The network speed between virtual machine A and target outside of the machine via the VIOS SEA is the lower of the virtual Ethernet speed and the physical network speed.
    • If the external network is 1 Gb/s then it is the limiting factor.
    • If the external network is 10 Gb/s then the virtual Ethernet is the limiting factor, i.e. about 1 Gb/s.
  4. Of course, the VIOS using a 10 Gb/s physical adapter or vNIC can sustain a much higher external network speed for ALL the virtual machines, as each virtual machine can concurrently be sending and receiving network data at the virtual Ethernet speed.
  5. Virtual machine C is not using the virtual Ethernet. If it has a 10 Gb/s adapter or vNIC then it can reach higher speeds to other nodes on the network, for example 5 to 7 Gb/s. The speed depends on a number of things, including:
    1. POWER CPU cycles available in the virtual machine, as running higher speed networks does take considerably more CPU cycles for device drivers and interrupts. This is often overlooked and is true for virtual and physical networks alike.
    2. Dealing with the data itself - reading the data from disk or creating it or dealing with incoming data or just saving it to disk.
    3. External network contention - although modern Ethernet switches avoid a lot of packet collisions and resends.
    4. The far end also has to deal with the data - we often hit this limitation when tuning our end of a network without realising it. I often get asked what is stopping AIX from going faster, and it is not our end that is the issue.
    5. Tuning of the local operating system: flow control, rfc1323 (TCP window scaling), buffering, etc. - see the "no" command for lots of options.
    6. Large packets always help, as you then get more bytes per network "transaction" - each transaction costs CPU cycles and interrupts.
  6. Converting virtual machine A or B, at around 1 Gb/s, to a type C virtual machine will get you a jump of roughly 5 to 7 times in network speed.
  7. The Shared Ethernet Adapter (SEA) on the VIOS does require POWER CPU cycles (so don't starve your VIOS of compute time) but does not limit network speeds. The bridging is done at a lower level of the TCP/IP stack for maximum efficiency.
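The weakest-link rule in points 1 to 3 above can be sketched in a few lines. The speeds are the illustrative figures from this article, not measured values:

```python
def end_to_end_gbps(*links_gbps):
    """The end-to-end speed of a path is bounded by its slowest link."""
    return min(links_gbps)

VIRTUAL_ETH = 1.0  # typical virtual Ethernet speed (roughly 0.5 to 1.5 Gb/s)

# Cases 1 and 2: LPAR A to LPAR B, or LPAR A to the VIOS - virtual Ethernet only
print(end_to_end_gbps(VIRTUAL_ETH))         # 1.0 Gb/s

# Case 3: LPAR A -> VIOS SEA -> external network
print(end_to_end_gbps(VIRTUAL_ETH, 1.0))    # 1.0 Gb/s - external 1 Gb/s link limits
print(end_to_end_gbps(VIRTUAL_ETH, 10.0))   # 1.0 Gb/s - virtual Ethernet limits
```

Note that upgrading only the physical link (the second argument) never changes the result while the virtual Ethernet remains the slowest link, which is the whole point of the article.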
[Diagram: virtual machines A and B using the virtual Ethernet and the VIOS SEA, and virtual machine C with its own dedicated network adapter]
The bottom line is that the virtual Ethernet speed is not affected by the VIOS physical adapter, either PCIe or vNIC.
Well, I hope this clears up this erroneous assumption and that you now have better expectations of the effects of a VIOS network adapter upgrade.
Originally posted and updated on DeveloperWorks in 2011 and 2016 with 72242 Visits.

Additional Information


Other places to find content from Nigel Griffiths IBM (retired)

Document Location

Worldwide


Document Information

Modified date:
14 June 2023

UID

ibm11120155