
IBM i Virtualization Details

Flashes (Alerts)


Abstract

IBM i Virtualization Details

Content


For more information about IBM i Technology Refreshes and Resaves, see IBM i Technology Refreshes and Resaves.

Each item in this page indicates the supported environment as VIOS (for functions that require a VIOS configuration) or iVirtualization (for functions that are supported in a configuration where IBM i is serving I/O to a client IBM i, AIX, or Linux partition, or where IBM i is a client of an IBM i partition).  See IBM i Virtualization and Open Storage for more information on iVirtualization.

Also, for IBM i Virtualization configurations:

  • The hosting partition can be any IBM i release, IBM i 6.1 or newer, that is supported on a particular Power System processor generation natively.
  • The client partition can be any IBM i release, IBM i 6.1 or newer, that is supported on that same Power System processor generation natively or with VIOS.
  • For example, IBM i 7.1, on a Power System with POWER6 technology, can have client partitions that are IBM i 6.1 or 7.2; however, IBM i 7.3 is not supported on POWER6, so it cannot be a client of IBM i 7.1 on a POWER6 system.

See IBM i Virtualization Summary for a summary view of the items described in this page.


December 2022 - IBM i 7.5 TR 1 and IBM i 7.4 TR 7 and IBM i 7.3 TR 13

Support for VIOS 3.1.4 -- IBM i 7.5 TR 1 and IBM i 7.4 TR 7 and IBM i 7.3 TR 13
Support for the latest release of VIOS is added to IBM i 7.5, 7.4, and 7.3.  Among the enhancements in this level are improvements for Live Partition Mobility and VIOS dump support.
For more information about these and other virtualization enhancements, see the October 11, 2022, announcement letter:  IBM Power Virtualization enhancements.
IBM i leverages VIOS NPIV multiple queues -- IBM i 7.5 TR 1 

Support for IBM i to leverage NPIV multiple queues was added previously to IBM i 7.4 and 7.3 -- see the May 2022 section below, which was published earlier, for more information.  That support is now also added in IBM i 7.5 TR 1.


May 2022 - IBM i 7.4 and IBM i 7.3

IBM i leverages VIOS NPIV multiple queues

Fibre Channel (FC) adapters with high bandwidth, such as 16 Gb or 32 Gb, support multiple request and completion queues for storage I/O communication. Use of multiple work queues in the physical FC stack significantly improves the input/output requests per second (IOPS) due to the ability to drive the I/O operations in parallel through the FC adapter.

The recently announced IBM PowerVM VIOS support for multiple queues for the interfaces between virtual FC client and server adapters (see announcement IBM Virtualization Enhancements) exposed the physical FC adapter capabilities to the client operating system. Now IBM i takes advantage of that support to use multiple request and response queues in the IBM i virtual FC client adapter to provide increased parallelism throughout the entire virtual I/O stack. This has the potential to improve overall virtual FC I/O throughput, reduce latency, and achieve higher IOPS. This function is supported for both Power10 and Power9 servers. See NPIV Multiple-Queue support in IBM Documentation for more information.

The IBM i configuration for this support is automatic, based on the VIOS settings, but those settings can be changed by the system administrator.  All configuration is managed in the VIOS partition.

Requirements:

  • IBM Power server with Power10 technology or IBM Power server with Power9 technology with FW940 or later
  • VIOS version 3.1.2.10 or later
  • 16 Gb or 32 Gb Fibre Channel adapter - Feature codes EN1A, EN1B, EN1C, EN1D, EN2A, and EN2B

November 2020 - IBM i 7.4 and IBM i 7.3 and IBM i 7.2

For more details on these new functions, and more, see announcement IBM Virtualization Enhancements.
NPIV Acceleration

VIOS has enhanced Fibre Channel NPIV to provide multiqueue support. NPIV multiqueue provides improved performance, including more throughput, reduced latency, and higher IOPS, by spreading the I/O workload across multiple work queues.

Requirements for full performance benefits:
  • 16 Gb or 32 Gb Fibre Channel adapter - Feature codes EN1A, EN1B, EN1C, EN1D, EN2A, and EN2B
  • FW940, or later
  • VIOS 3.1.2.10, or later
  • Any supported IBM i release
Multiple Mapping of NPIV ports for Live Partition Mobility (LPM) operations
LPM now supports multiple client virtual Fibre Channel (VFC) adapter ports being mapped to a single physical Fibre Channel port.  Previously, if you wanted to use Live Partition Mobility (LPM) with NPIV, each client virtual Fibre Channel adapter needed to be mapped to a separate physical port.  With VIOS 3.1.2.10 and any supported IBM i release, that restriction has been removed: the same physical port can now be mapped to more than one virtual Fibre Channel adapter in the same IBM i client partition, allowing for better adapter utilization.

May 2020 - IBM i 7.4 Technology Refresh 1 and IBM i 7.3 Technology Refresh 7 and PTFs

IBM i Tape Virtualization - PTFs for IBM i 7.4, IBM i 7.3, IBM i 7.2

Support has been added to allow the sharing of a tape library among multiple IBM i partitions -- without requiring multiple adapters, a SAN switch, or a VIOS partition.  The sharing of the library device is done via an NWSD, as easily as sharing a stand-alone tape device.  Both the server and client IBM i partitions can be running any combination of IBM i 7.4, IBM i 7.3, and IBM i 7.2.  Configuration types supported include SAS direct-attached devices, Fibre Channel direct-attached devices, and SAN-attached Fibre Channel devices.
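As a hedged sketch of the host-side setup (the NWSD name, client partition name, controller resource, and library resource below are hypothetical, and the exact parameters depend on your configuration), the hosting IBM i partition might describe the client connection and the library to be shared along these lines:

    CRTNWSD NWSD(ICLIENT01) RSRCNAME(CTL05) TYPE(*GUEST) +
            PARTITION('CLIENT01') ALWDEVRSC(TAPMLB01)
    VRYCFG  CFGOBJ(ICLIENT01) CFGTYPE(*NWS) STATUS(*ON)

Once the NWSD is varied on, the client partition sees the shared library as a tape media library device that it can vary on and use like a locally attached library.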

Support is provided via these PTFs and their requisites, which will be available on or before 2020-06-30:

  • IBM i 7.4 MF66863
  • IBM i 7.3 MF64802
  • IBM i 7.2 MF64803
Additional details, including the specific tape libraries supported, may be found in the related topic links.

IBM i Hybrid Network Virtualization - IBM i 7.4 TR 1, IBM i 7.3 TR 7
To use increasing network bandwidths more fully and efficiently while still using the Live Partition Mobility (LPM) function, configurations are now supported where a native Virtual Function is used for performance and a virtual interface (vNIC or Virtual Ethernet/SEA) is used for the live migration.

Single Root I/O Virtualization (SR-IOV) technology provides low overhead for hardware adapter sharing, resulting in the best overall performance and Quality of Service (QoS) capability.  Essentially, a native Virtual Function is configured as the primary network interface, and a virtual interface is configured as a backup in the VIPA configuration; see the sketch after the minimum requirements below.  IBM i Hybrid Network Virtualization then automates the removing and adding of the SR-IOV logical port as part of the partition migration.

Minimum requirements:

  • System firmware level: FW940
  • HMC level: 9.1.940.0
  • IBM POWER9 server
  • IBM i 7.4 TR 1 or IBM i 7.3 TR 7
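As a hedged illustration of the VIPA arrangement described above (addresses, line description names, and the SR-IOV/vNIC resource assignments are hypothetical; consult IBM Documentation for the exact parameters in your environment), the interface over the SR-IOV logical port is listed ahead of the virtual interface in the preferred interface list of the virtual IP address:

    /* Interface over the SR-IOV logical port (primary) and over the  */
    /* vNIC or virtual Ethernet adapter (backup)                      */
    ADDTCPIFC INTNETADR('10.1.1.11') LIND(ETHSRIOV) SUBNETMASK('255.255.255.0')
    ADDTCPIFC INTNETADR('10.1.1.12') LIND(ETHVNIC) SUBNETMASK('255.255.255.0')
    /* Virtual IP address used by applications, preferring the SR-IOV */
    /* interface while it is present                                  */
    ADDTCPIFC INTNETADR('10.1.1.10') LIND(*VIRTUALIP) +
              SUBNETMASK('255.255.255.255') +
              PREFIFC('10.1.1.11' '10.1.1.12')

During a migration, Hybrid Network Virtualization removes the SR-IOV logical port, traffic falls back to the virtual interface, and the logical port is added back on the target server.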


November 2019 - IBM i 7.4 Technology Refresh 1 and IBM i 7.3 Technology Refresh 7

Support for SR-IOV logical ports on a restricted I/O LPAR - IBM i 7.4 TR 1, IBM i 7.3 TR 7
Dynamic LPAR (DLPAR) can be used to assign SR-IOV logical ports to LPARs that are configured with restricted I/O.  This configuration has the benefit of high-speed, low-latency traffic, while still being able to make use of Live Partition Mobility.  For Live Partition Mobility in a configuration with SR-IOV, use Dynamic LPAR (DLPAR) to remove the SR-IOV logical ports from the LPAR before performing the LPM action.   For IBM i 7.4 TR 1 this new support also enables the usage of RoCE for IBM i on the Power S922 server; RoCE is required for the IBM Db2 Mirror for i product.

Additional requirements:

  • System firmware level: FW940
  • HMC level: 9.1.940


October 2017 - IBM i 7.3 Technology Refresh 3 and IBM i 7.2 Technology Refresh 7

Increase in max LUNs per port for NPIV configurations
Clients have been growing their configurations at a pace that has caused them to hit the limit on the number of LUNs they can create for each virtual Fibre Channel port. With IBM i 7.3 TR3 and IBM i 7.2 TR7, the limit of 64 LUNs per port has now been increased to 127 for NPIV configurations (regardless of physical Fibre Channel adapter type).

Increase of virtual LUN size after initial allocation (IBM i 7.3 TR 3 only)

External storage systems have the capability to increase a LUN size after its initial allocation. In the past, IBM i has ignored any increase in a LUN’s size on an external storage system. Additional LUNs needed to be added to increase the capacity of a logical partition. With IBM i 7.3 TR3, IBM i will recognize that an increase has occurred and, after an IPL, will allow the LUN to be used at its newly increased size. 

Implementation notes:

  • With IBM i 7.3 TR 3 and IBM i 7.2 TR 7, and later, this is supported for Spectrum Virtualize-based storage products.  (Note that support was added later for IBM DS8900 storage with IBM i 7.3 TR 7, IBM i 7.4 TR 1, and later code levels.  See the entry for November 2019 in IBM i Functional Enhancements Details for details about the DS8900 support.)
  • If the units are mirrored by IBM i, the size change will not be recognized by IBM i.  However, if mirroring is then stopped and the system is IPLed past storage management recovery, the new larger size will be used.
  • LUNs formatted with 4096-byte sectors will not change size.  Dynamic increase of LUN size is supported only for 512, 520, and 4160 byte sector formats.
  • Reduction of LUN size is NOT supported.
  • See IBM TechNote for any future notes about this support.

Additional PTFs:  MF64100, MF64245, MF64631, MF64944

Automation for Cloud Init
Previously, CloudInit needed to be enabled before capturing a logical partition. Now, when a partition is deployed on a POWER8 system, the IBM i CloudInit component is enabled automatically as part of the deployment.  This should be particularly useful to those who use PowerVC for capturing and deploying logical partitions.  Note that both the captured image and the new deployment need to be on a POWER8 server.


November 2016 - IBM i 7.3 Technology Refresh 1 and IBM i 7.2 Technology Refresh 5

For more details on these new functions, and more, see announcement letter IBM PowerVM and IBM PowerVC enhancements.

vNIC fail-over
With IBM i 7.3 TR1 and IBM i 7.2 TR5, and PowerVM V2.2.5, support is now provided for automated fail-over for SR-IOV network configurations.

Shared Ethernet Adapter (SEA) large send

With IBM i 7.3 TR1 and IBM i 7.2 TR5, and PowerVM V2.2.5, SEA performance is improved for IBM i workloads.


April and May 2016 - IBM i 7.3 and IBM i 7.2 Technology Refresh 4

Live Partition Mobility - Support for active tapes
Live Partition Mobility (LPM) has been enhanced to allow movement of an IBM i 7.3 or IBM i 7.2 partition from one server to another while a tape drive is actively in use.  The tape device may be in the midst of a tape operation, such as save or restore, while the partition is moving to the target server, and it will continue to operate seamlessly during that move.

A remaining restriction is that the tape drive or tape library Vary On operation may fail during the move to another server.  If a tape Vary On fails due to an LPM operation, just retry the Vary On.

Configurations are supported for the following tape and tape library devices:

  • Fibre Channel LTO5 and newer drives in the 7226 enclosure
  • TS3100/TS3200 (3573) with LTO5 and newer Fibre Channel drives
  • TS3310 (3576) with LTO5 and newer Fibre Channel drives
  • TS3500/TS4500 with LTO5 and newer, and 3592-E07 and newer Fibre Channel drives
  • ProtecTIER® virtual tape library, code level 3.3.5.1 or newer

Note: The device driver uses persistent reservation for the tape drives.  If a device is attached that emulates a supported IBM tape drive, but does not support persistent reservation as expected, the results may be unpredictable.


November 2015 - IBM i 7.2 Technology Refresh 3, IBM i 7.1 Technology Refresh 11
Support for IBM i Virtualization configurations with Little Endian Linux client partition
Little Endian Linux partitions can now be run on Power Systems. To better fit Linux workloads with IBM i workloads, a Little Endian Linux partition can be configured as a client that uses an IBM i 7.2 or IBM i 7.1 partition as an I/O server partition. Both HMC and VPM configurations are supported.


vNIC with SR-IOV
With IBM i 7.2 TR 3 and IBM i 7.1 TR 11, plus newer levels of firmware and VIOS, a Virtual Network Interface Controller (vNIC) allows the Live Partition Mobility function to be done with SR-IOV configurations.  For more information see announcement IBM PowerVM V2.2.4.


VLAN Tag Support for Network Boot and Install (IBM i 7.2 TR 3 only)
With IBM i 7.2 TR 3, a reconfiguration of the virtual LAN will no longer be required in order to do a network boot and install of an IBM i partition.


Large Send Offload on Virtual Ethernet
For both IBM i 7.2 TR 3 and IBM i 7.1 TR 11, IBM i Virtualization configurations with virtual Ethernet traffic between partitions on the same system should see a performance benefit due to the implementation of Large Send Offload.  An IBM Power server with POWER8 technology and firmware 840, or newer, is also required.


May 2015 - IBM i 7.2 Technology Refresh 2, IBM i 7.1 Technology Refresh 10

SR-IOV Support for Power Systems with POWER8 Technology
IBM i 7.2 and IBM i 7.1 now support SR-IOV NIC in S814 and S824 scale-out server slots and in the PCIe Gen3 I/O Drawer slots, as well as the previously announced E880 and E870 System Node slots.  

Note that SR-IOV support requires an HMC.  Specific requirements include:

  • IBM i 7.2 TR 2 or IBM i 7.1 TR 10
  • HMC 830
  • FW 830
  • VIOS 2.2.3.51 (if using VIOS configuration)
  • NIC adapter:  #EN0H, #EN0J, #EN0K, #EN0L, #EN0M, #EN0N, #EN15, #EN16, #EN17, or #EN18

For more details, see the April 28th 2015 IBM Power Systems:  Server and  I/O Enhancements RFA.


May and June 2014 - IBM i 7.2 and IBM i 7.1 Technology Refresh 8

Native, iVirtualization, VIOS - Native SR-IOV Ethernet

New Single Root I/O Virtualization (SR-IOV) technology for Network Interface Controller (NIC) allows native sharing of Ethernet adapters without VIOS in the configuration, and allows increased performance and quality of service control when VIOS is in the configuration.  This is similar to the Integrated Virtual Ethernet (IVE) technology that was available on previous generations of Power Systems.  

SR-IOV can provide simple virtualization without VIOS with greater server efficiency as more of the virtualization work is done in the hardware and less in the software. SR-IOV can also provide bandwidth quality of service (QoS) controls to help ensure that client-specified partitions have a minimum level of Ethernet port bandwidth, and thus improve the ability to share Ethernet ports more effectively.  SR-IOV can be combined with VIOS to provide higher virtualization function, such as Live Partition Mobility, as well as quality of service controls.

SR-IOV is supported for NIC workloads with specific combinations of SR-IOV capable I/O adapters, on select Power Systems running the latest level of firmware and with specific operating system levels.  For IBM i, native SR-IOV support is provided with IBM i 7.2 and IBM i 7.1 TR8.  HMC is required to configure and manage SR-IOV functions.  The initial roll-out is on POWER7+ 770 and 780 models with the latest 7.8 system firmware, and with the Ethernet functions on adapters #EN0H, #EN0K, #EN10, and #EN11. 

IBM i 7.2, IBM i 7.1, and IBM i 6.1 with 6.1.1 machine code are supported as clients in VIOS configurations and iVirtualization configurations.  If an SR-IOV logical Ethernet port is assigned to IBM i 7.2, IBM i 7.1, or VIOS as the server partition, and it is marked as "promiscuous" at the HMC, then the server partition can use that port to bridge between the physical network and the Power Hypervisor virtual Ethernet.  Through that bridge, any virtual Ethernet port can have its traffic bridged to the physical network, including one from a partition running IBM i 6.1 with 6.1.1 machine code.

For more details, see the April 8 2014 RFA for IBM Power Systems POWER7/POWER7+ Enhancements.

 

iVirtualization, VIOS - Increased number of virtual disks per VSCSI adapter

For both IBM i 7.2 and IBM i 7.1 TR8, the number of virtual disks allowed per virtual SCSI adapter in an IBM i client partition is increased from 16 to 32.  This allows more flexibility in iVirtualization and VIOS configurations.

 

iVirtualization - Layer-2 bridging of VLAN-tagged frames

The Ethernet layer-2 bridge capability was added in IBM i 7.1 Technology Refresh 3.  IBM i 7.2 now supports bridging of frames that contain an 802.1q Virtual LAN (VLAN) header.  With this addition, a single IBM i Ethernet resource can transparently bridge multiple Power Hypervisor virtual Ethernet VLANs to the corresponding VLAN on the physical network, keeping the traffic for each VLAN isolated.  

See IBM Documentation Ethernet Layer-2 bridging for more information on this technology.

 

iVirtualization - Control over resources used to initialize client disk

IBM i 7.2 adds a resource allocation priority parameter (RSCALCPTY) to CRTNWSSTG/CHGNWSSTG commands to allow the user to have more control over resources on an IBM i server partition. When initializing and formatting a virtual disk on an IBM i client partition, an IBM i server partition has used as much system resource as it can to try to complete the request as fast as possible. In some circumstances this has caused other jobs on the server partition to slow down. The new parameter gives the user the ability to specify how aggressively the server consumes system resources while performing initialization of the disk. A lower priority value can reduce server resource utilization but increases the amount of time it takes to complete disk initialization.

See IBM Documentation for the RSCALCPTY keyword on the CRTNWSSTG and CHGNWSSTG commands.
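A minimal sketch, assuming a hypothetical storage space name and size, and an illustrative priority value (see the command help for the valid RSCALCPTY range):

    CRTNWSSTG NWSSTG(CLIENTD01) NWSSIZE(204800) FORMAT(*OPEN) +
              RSCALCPTY(1) TEXT('Client disk, low-priority format')

A lower priority value reduces the impact on other work in the hosting partition at the cost of a longer disk initialization time.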

iVirtualization - SSD preference for virtual disk

With IBM i 7.2, one can now specify a preference for allocating solid-state disk storage.  See IBM Documentation for the UNIT(*SSD) parameter on the CRTNWSSTG command.
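For example (the storage space name and size are hypothetical), the preference is given when the storage space is created:

    CRTNWSSTG NWSSTG(CLIENTD02) NWSSIZE(102400) FORMAT(*OPEN) UNIT(*SSD)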

 

iVirtualization - Easier tape and optical configuration

An additional virtualization usability enhancement for iVirtualization configurations in IBM i 7.2 is the new ALWDEVRSC keyword on the CRTNWSD/CHGNWSD commands.  Use of this keyword in an IBM i server partition describes the specific tape and optical device resources that are to be virtualized to a client partition, instead of having to use the RSTDDEVRSC keyword to specify all of the optical and tape resources that are not to be virtualized.

See IBM Documentation for the ALWDEVRSC keyword on the CRTNWSD and CHGNWSD commands.
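As a hedged sketch (the NWSD and resource names are hypothetical), a server partition that should virtualize only one tape library and one optical device to its client could specify:

    CHGNWSD NWSD(ICLIENT01) ALWDEVRSC(TAPMLB01 OPT01)

This is the inverse of the older RSTDDEVRSC approach, which required listing every tape and optical resource that was not to be virtualized.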


March 2013 - IBM i 7.1 Technology Refresh 6

VIOS - PowerVM N_Port ID Virtualization (NPIV) attachment of SVC and Storwize Storage

IBM i 7.1 partitions on POWER7+, POWER7, and POWER6 servers support NPIV attachment of IBM SAN Volume Controller (SVC) and IBM Storwize V7000, V3700, and V3500 storage systems. Setting up configurations to share adapters is simpler with NPIV. This support includes the load source device and also allows the use of PowerHA for i Logical Unit (LUN) level switching. See PowerHA SystemMirror Technology Updates for more information about PowerHA support.

The storage that shows through to the IBM i partition is 512-byte sectors for both VIOS VSCSI and VIOS NPIV attachment. Migration from VSCSI to NPIV attachment is relatively easy since the format of the data on disk is the same for both.

For compatibility and availability information, see the IBM System Storage Interoperation Center.


October 2012 - IBM i 7.1 Technology Refresh 5

VIOS - Large Receive Offload for Ethernet layer 2 bridging

With IBM i 7.1 Technology Refresh 5, IBM i virtual Ethernet adapters can receive packets of up to 64 kilobytes, regardless of the maximum frame size chosen. This allows an IBM i partition to be added to a network that is serviced by a Virtual I/O Server (VIOS) Shared Ethernet Adapter (SEA) with large receive offload enabled.

Note for Network Install: A new install will not have this support until the IBM i 7.1 TR 5 PTF Group is permanently applied. To create a new partition, assign it to a VLAN with large receive offload enabled, and then install over that connection, the PTF Group must be included on your install media.
 

VIOS - IBM PowerVM V2.2 Refresh with SSP & LPM updates

Shared Storage Pools now allow up to 16 systems to share a common pool of preallocated storage. Shared Storage Pools are able to provide quick provisioning and efficient storage utilization for virtualized IBM Power Systems workloads. Shared Storage Pools technology can simplify server and storage administration, reduce SAN infrastructure, and provide a framework that allows for easier movement of live workloads across Power frames.

Improved control for Live Partition Mobility NPIV configurations allows specification of the destination Fibre Channel port, so the desired Fibre Channel port is used automatically once the partition movement completes.

Use of Runtime Expert for AIX improves VIOS setup, tuning, and validation.

For more details, see PowerVM RFA.


May 2012 - IBM i 7.1 Technology Refresh 4

VIOS - IBM i Live Partition Mobility

IBM i Live Partition Mobility allows for the movement of a running partition from one POWER7 server to another with no application downtime, resulting in better system utilization and improved application availability.

Live Partition Mobility is a major step in IBM's Power Systems virtualization continuum. It can be combined with other virtualization technologies, such as logical partitions with shared I/O and storage, to provide a fully virtualized computing platform that offers the degree of system and infrastructure flexibility required by today's production data centers.

For more details, see IBM i Live Partition Mobility.

VIOS - HMC Remote Restart

A special availability function called Remote Restart is available via RPQ. When properly configured, partitions running on a server that goes down unexpectedly (or even when expected) can be activated or restarted on a different (remote) server.

The expectation is that this "Remote Restart" function can be initiated in most cases more quickly than getting the original server fully powered on and activating the partition on that original server.

For more details, see HMC Remote Restart PRPQ.

iVirtualization - Performance enhancement for zeroing virtual disk

Performance and resource utilization have been improved for an IBM i 7.1 partition that serves virtual I/O to an IBM i client partition when a new virtual disk is added to the client partition's disk configuration. After this enhancement is applied to the server partition, initializing a virtual disk on the client partition completes much more quickly when the Network Server Storage Space has been newly created and so is already initialized. Note that no modifications to client partitions are needed to take advantage of this change. For more details, see IBM TechNote Slow Performance during initialization of Client Partitions.


December 2011 - IBM i 7.1 Technology Refresh 3

VIOS - IBM PowerVM V2.2 Refresh with SSP enhancements

Shared Storage Pools (which requires a PowerVM 2.2 Service Pack) creates pools of storage for virtualized workloads, and can improve storage utilization, simplify administration, and reduce SAN infrastructure costs. The enhanced capabilities enable four systems to participate in a Shared Storage Pool configuration, which can improve efficiency, agility, scalability, flexibility, and availability.

For more details, see PowerVM RFA.



October 2011 - IBM i 7.1 Technology Refresh 3

iVirtualization - Ethernet layer-2 bridging

Most logical partitions in a Power System need access to an IP network, usually through Ethernet. However, it is not always possible or cost-effective to assign a physical Ethernet adapter to every logical partition in a Power System. Using Ethernet layer-2 bridging in an iVirtualization environment, a system administrator can connect a single partition to the physical Ethernet and then bridge all traffic between that network and one of the Power System's virtual Ethernet LANs. Once the bridge is in place, any partition can access the physical network by using virtual Ethernet adapters on the bridged virtual LAN. Thus, Ethernet layer-2 bridging to the virtual LAN allows many logical partitions to share a single physical connection to the site network.

With IBM i 7.1, an IBM i partition can bridge a physical Ethernet port to the virtual LAN. This reduces hardware costs, since there are fewer Ethernet adapters required in the Power System, fewer ports required at the switch, and fewer cables to connect them. It may also reduce administration costs since there are fewer physical resources to manage, and this Ethernet sharing does not require a Virtual I/O Server partition, potentially reducing the number of environments that administrators need to manage.

In order to use Ethernet layer-2 bridging, users must have access to the management console for the system. The administrator must use the management console to create a virtual Ethernet adapter in the IBM i partition, indicating that the adapter will be used for external access. Then, the user creates two Ethernet line descriptions: one for the new virtual Ethernet adapter and one for the Ethernet port connected to the physical network. To establish the bridge, the user gives the two line descriptions the same bridge name, which is a new parameter on the CRTLINETH and CHGLINETH commands for the purposes of this support. When both line descriptions are active, traffic will be bridged between the two networks.
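A minimal sketch of the two line descriptions, assuming hypothetical resource names (CMN03 for the physical port, CMN04 for the virtual Ethernet adapter created at the management console) and a hypothetical bridge name; see the CRTLINETH command documentation for the exact bridge-name keyword:

    /* Line description for the physical Ethernet port                */
    CRTLINETH LIND(ETHPHYS) RSRCNAME(CMN03) BRIDGE(BRIDGE1)
    /* Line description for the virtual Ethernet adapter marked for   */
    /* external access                                                 */
    CRTLINETH LIND(ETHVRT) RSRCNAME(CMN04) BRIDGE(BRIDGE1)
    /* Bridging starts when both line descriptions are varied on      */
    VRYCFG CFGOBJ(ETHPHYS) CFGTYPE(*LIN) STATUS(*ON)
    VRYCFG CFGOBJ(ETHVRT) CFGTYPE(*LIN) STATUS(*ON)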

The Ethernet layer-2 bridging enhancement supports any 1 Gbps-capable or 10 Gbps Ethernet adapter (other than the Host Ethernet Adapter).

For more information, refer to the IBM Documentation Ethernet Layer-2 bridging  topic.


VIOS - Mirroring with NPIV Attached Storage

IBM i mirroring algorithms are enhanced to take into consideration any N_Port ID Virtualization (NPIV) attached DS8000 disks. The locations of the virtual disks are considered when the pairs of mirror disk units are calculated.
 

iVirtualization - Virtual Partition Manager enhancement to create IBM i partitions

The Virtual Partition Manager (VPM) is a partition management tool for an iVirtualization environment that supports the creation of partitions that use only virtual I/O and that does not require the Hardware Management Console (HMC) or Systems Director Management Console (SDMC). In addition to being able to manage Linux guest partitions, the VPM now supports creation and management of IBM i partitions.

The VPM function is available on POWER6 and POWER7 Express Servers that do not have an external management console. With this enhancement to IBM i 7.1, the ability to create up to four IBM i partitions is enabled in VPM. Client IBM i partitions that are created with VPM use virtual I/O to connect back to the IBM i I/O server partition to access the physical disk and network. VPM in the IBM i I/O server partition is used to create the virtual SCSI and virtual Ethernet adapters for the client partitions. The user can then use Network Storage Spaces (NWSSTG) and Network Storage Descriptions (NWSD) in the IBM i I/O server partition to define the storage for the client partitions. Tape, disk, and optical devices can be virtualized to the client partitions. The client IBM i partitions can be IBM i 7.1 or IBM i 6.1 with either 6.1 or 6.1.1 machine code.

VIOS - PowerVM N_Port ID Virtualization attachment of DS5000

IBM i 7.1 partitions on POWER6 or POWER7 blade systems now support N_Port ID Virtualization attachment of DS5100 and DS5300 Storage Systems. Setting up configurations to share adapters is simpler with NPIV. This support also allows the use of a Lab Services toolkit to access copy services for the DS5000 storage.

For compatibility information, see the IBM System Storage Interoperation Center.


VIOS - IBM PowerVM V2.2 Refresh with Network Load Balancing

One of the enhancements to IBM PowerVM is a Network Load Balancing function that will be useful to IBM i client partitions. Network Load Balancing splits network traffic across redundant Shared Ethernet Adapters.

For more details, see the PowerVM RFA.


May 2011 - IBM i 7.1 Technology Refresh 2

VIOS - Partition Suspend and Resume

PowerVM now includes support for an IBM i 7.1 partition to be suspended, and later resumed. Using Suspend / Resume, clients can perform long-term suspension of partitions, thereby freeing server resources that were in use by that partition, and later resume operation of that partition and its applications on the same server. During the Suspend operation, the partition state (memory, NVRAM, and Virtual Service Processor state) is saved on persistent storage. The Resume operation restores that saved partition state to the server resources. Suspend / Resume can be used to save energy or to allow other partitions to make use of the resources from the suspended partition.

Requirements for Suspend / Resume:

  • All I/O resources must be virtualized using VIOS.
  • All partition storage must be external.
  • Either an HMC or SDMC must be used to manage the partitions.
  • The partition must be resumed on the same server on which it was suspended.
  • POWER7 firmware: Ax730_xxx, or later, is required.
  • VIOS 2.2.0.12-FP24 SP02, or later, is required.

iVirtualization - IBM i to IBM i virtual tape support

A simple, cost-effective virtual tape solution is now provided for an iVirtualization environment. An IBM i server partition can be used to share a tape drive among multiple client partitions (IBM i, Linux, or AIX) without the use of VIOS. With an IBM i 7.1 server partition and either an IBM i 7.1 client partition or an IBM i 6.1 client partition with 6.1.1 machine code, IBM i 7.1 Technology Refresh 2 supports virtualizing LTO3, LTO4, LTO5, DAT160, and DAT320 tape drives, including drives in a TS2900, TS3100, and TS3200 tape library when the tape library is in sequential mode. See IBM info APAR II14615 for a complete list of supported devices and required PTFs.

VIOS - PowerVM N_Port ID Virtualization attachment of DS5000

IBM i 7.1 partitions on POWER6 or POWER7 rack and tower systems now support N_Port ID Virtualization attachment of DS5100 and DS5300 Storage Systems. Setting up configurations to share adapters is simpler with NPIV. This support also allows the use of a Lab Services toolkit to access copy services for the DS5000 storage.

For compatibility information, see the IBM System Storage Interoperation Center.


December 2010 - IBM i 7.1 Technology Refresh 1

VIOS - Shared Storage Pools

Shared Storage Pools (which requires VIOS 2.2) creates pools of storage for virtualized workloads, and can improve storage utilization, simplify administration, and reduce SAN infrastructure costs. The Shared Storage Pools can be accessed by VIOS partitions deployed across multiple Power Systems servers so that an assigned allocation of storage capacity can be efficiently managed and shared.

For more details, see the PowerVM RFA.



October 2010 - IBM i 7.1 Technology Refresh 1

iVirtualization - Support for embedded media changers

This embedded media changer support extends the automatic media switching capability of virtual optical device type 632B on virtual I/O serving partitions to the client partition's virtual optical device type 632C. One application of this new function is the use of image catalogs for unattended installs of client partitions. This switching capability also allows users to manually switch media in a client virtual optical device without requiring authority to the serving partition. This is accomplished via the WRKIMGCLGE *DEV command interface on the client partition.
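For example, assuming a hypothetical client virtual optical device named OPTVRT01, the media loaded in that device can be worked with, and switched, directly from the client partition:

    WRKIMGCLGE IMGCLG(*DEV) DEV(OPTVRT01)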

VIOS - Expanded HBA and switch support for NPIV on Power Blades

PowerVM™ VIOS 2.2.0 with IBM i 7.1 client partitions or IBM i 6.1 with 6.1.1 machine code client partitions supports the QLogic 8 Gb Blade HBAs to attach DS8100, DS8300, and DS8700 storage systems via NPIV. This allows easy migration from existing DS8100, DS8300, and DS8700 storage to a blade environment. Full PowerHA™ support is also available with virtual Fibre Channel and the DS8100, DS8300, and DS8700, which includes metro mirroring, global mirroring, FlashCopy, and LUN-level switching.
 

[{"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"SWG60","label":"IBM i"},"Component":"","Platform":[{"code":"PF012","label":"IBM i"}],"Version":"All Versions","Edition":"","Line of Business":{"code":"LOB57","label":"Power"}}]

Document Information

Modified date:
11 October 2022

UID

ibm11137514