Adding flash acceleration to shared storage pools

Virtual I/O Servers (VIOS) with Shared Storage Pool (SSP) flash acceleration can improve performance by using Solid-State Drive (SSD) or flash storage caching on the Virtual I/O Server.

This feature enables each Virtual I/O Server to use a flash caching device for read-only caching. The flash caching devices can be:
  • Devices that are attached to the server, such as a built-in SSD in the server.
  • Devices that are directly attached to the server by using Serial Attached SCSI (SAS) controllers.
  • Resources that are available in the storage area network (SAN).
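For example, you can display the physical disks that are visible to a Virtual I/O Server before you decide which device to place in a cache pool. The device name in this sketch is illustrative only.
    lsdev -type disk                # list the physical volumes that this VIOS can see
    lsdev -dev hdisk11 -attr        # display the attributes of one candidate device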

The VIOS must be able to identify the device as a flash device for the device to be considered eligible to be used as a cache device. The VIOS uses the MEDIUM ROTATION RATE field of the SCSI Block Device Characteristics VPD page (SCSI INQUIRY page B1) to determine whether a device is a flash device. If a device does not support that page or displays a value other than 0001h Non-rotating medium in the MEDIUM ROTATION RATE field, the device cannot be used as a cache device.
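If you want to inspect what a device actually reports in that field, any tool that can decode VPD page B1h can be used. The following sketch assumes a Linux host with the sg3_utils package and an illustrative device name; on the VIOS itself this eligibility check is performed automatically.
    sg_vpd --page=0xb1 /dev/sdX     # decode the Block Device Characteristics VPD page; a flash device reports a rotation rate of 1 (non-rotating medium)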

You can derive the maximum performance benefit by using locally attached flash caching devices.

SSP flash acceleration is based on caching on the Virtual I/O Servers, while Power® flash caching or server-side caching is performed on the client logical partition.

Both types of caching can be used independently. The performance characteristics of the two types of caching are similar for similar types of client logical partition workloads.

SSP flash acceleration performs read-only caching over the entire storage pool, including any storage tiers in the pool. Only the user data (data blocks) in the pool is cached, while the metadata is not cached. Instead, metadata access might be accelerated by using SSD storage on the SAN for the system tier.

Concepts and terms in SSP flash acceleration

You can cache the storage pool dynamically (enable or disable caching), while workloads are running on the client logical partitions. The workloads do not need to be brought down to an inactive state to enable caching. The terms that are used to explain the flash caching concept are described in the following list.
Cache device
A cache device is a Solid-State Drive (SSD) or a flash disk that is used for caching.
Cache pool
A cache pool is a group of cache devices that is used only for storage caching.
Enable caching
Start caching the storage pool.
Disable caching
Stop caching the storage pool.

When caching is enabled for the storage pool, caching starts on all Virtual I/O Servers in the cluster that have a defined cache pool. This process implicitly creates a logical cache device (known as a cache partition) derived from the local cache pool for each Virtual I/O Server. When the storage pool caching is enabled, all the read requests for the user data blocks of the storage pool are routed to the SSP caching software. If a specific user data block is found in the local Virtual I/O Server cache, the I/O request is processed from the cache device. If the requested block is not found in the cache, or if it is a write request, the I/O request is sent directly to the storage pool SAN devices.

When caching is disabled for the storage pool, the caching on all Virtual I/O Servers in the cluster stops. This process implicitly cleans up the logical cache device from the local cache pool on each server.
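For example, assuming that the sspcache command accepts a -disable operation that mirrors the -enable operation shown later in this topic, you can stop caching for the storage pool from any one VIOS node in the cluster:
    sspcache -disable -sp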

Architecture and components of SSP flash acceleration

The components of SSP flash acceleration include the VIOS, cache management and cache engine, and storage pool. These components are described in the following list.
VIOS
The administration and management of caching is performed from the VIOS command-line interface by using the sspcache command.
Storage pool (pool driver)
The storage pool is the caching target, and the pool driver manages the cluster cache coherency.
Cache management and cache engine
Cache management provides the lower-level cache configuration commands, while the cache engine runs the local caching logic that determines which blocks in the storage pool are cached.
SSP flash acceleration maintains distributed cache coherency between the Virtual I/O Servers in the following ways:
  • The storage pool driver coordinates the distributed cache coherency across the cluster.
  • The cache engine manages node-level caching (promoting or demoting cache entries) and interacts with the storage pool driver to maintain cache coherency. This component uses the same local caching method as Power flash caching (server-side caching).
  • The cache engine is used for every storage pool I/O operation. This type of caching is known as look-aside caching.

The following figure explains the flow for various I/O operations when caching is enabled.

Figure: Architecture of SSP flash acceleration

The details of the I/O operations that are shown in the figure are described in the following list.
Cache Read Hit
  • VIOS passes I/O read request from client logical partition to the storage pool driver.
  • Storage pool driver checks the cache engine and finds that the extent is cached in the local cache device.
  • The I/O request is entirely satisfied in the cache and passed back to the client logical partition.
Cache Read Miss
  • VIOS passes I/O read request from client logical partition to the storage pool driver.
  • Storage pool driver checks the cache engine and finds that the extent is not cached in the local cache device.
  • The storage pool driver satisfies the request from the SAN and it is passed back to the client logical partition.
Write operation
  • VIOS passes I/O write request from client logical partition to the storage pool driver.
  • The extent is invalidated on any node in the cluster that has the extent cached, to ensure cache coherency.
  • The storage pool driver performs the write request to the SAN.

Attributes of caching in SSP flash acceleration

The attributes of caching in SSP flash acceleration are:
Transparent to applications
Clustered applications can be used on the client logical partitions.
Independent of client operating systems
Caching is supported on AIX®, IBM® i, and Linux operating systems.
Read-only node-specific cache
Results of write operations are sent to the SAN after cache invalidation occurs.
Concurrent and coherent shared data access
Supports concurrent shared data access with full coherency across the SSP landscape.
Independent of types of storage
No dependency on the type of flash storage that is used for caching or on the type of SAN storage that is used for the SSP.

Advantages of SSP flash acceleration

Some of the benefits of SSP flash acceleration include:
  • Improvement in latency and throughput with certain workloads such as analytical and transactional workloads, and online transaction processing.
  • Transparent acceleration, such that client logical partitions are unaware of caching on Virtual I/O Servers.
  • Better virtual machine (VM) density, without performance impacts.
  • More efficient utilization and scaling of the SAN infrastructure. Offloading read requests from the SAN can increase write throughput on congested SANs.
  • Benefits from sharing blocks across VMs that are based on cloned virtual Logical Units (LUs), when common blocks are already cached.
  • Compatibility with Live Partition Mobility (LPM).

Limitations of caching in SSP flash acceleration

Some limitations of caching in SSP flash acceleration are:
  • The SSP caching software is configured as a read-only cache, which means that only read requests are processed from the flash Solid-State Drive (SSD). All write requests are processed by the storage pool only and go directly to the SAN.
  • Data that is written to the storage pool is not populated in the cache automatically. If a write operation is performed on a block that is in the cache, the existing data in the cache is marked as invalid. The block might appear in the cache again later, based on how frequently and how recently the block is accessed.
  • Cache devices cannot be shared between Virtual I/O Servers.
  • Performance benefits depend on the size of the application working set and the type and size of the SAN disk controller cache. Typically, the collective working set must be larger than the SAN disk controller cache to realize significant performance benefits.

Configuration of caching in SSP flash acceleration

You must complete the following steps from the VIOS command-line interface to enable caching:
  1. Create a cache pool on each VIOS in the cluster, by using the cache_mgt command.
  2. Enable caching of the storage pool on the SSP cluster from a single VIOS node by using the sspcache command.
Creation of the cache pool on each VIOS is a one-time step. The syntax for this command is:
cache_mgt pool create -d <devName>[,<devName>,...] -p <poolName>
For example, to create a 1024 MB cache on each VIOS in the cluster and then to enable caching on the storage pool, complete the following steps:
  1. To create a 1024 MB cache, enter the following command:
    cache_mgt pool create -d /dev/hdisk11 -p cmpool0
    This command must be run on all Virtual I/O Servers in the cluster.
  2. To enable caching of the storage pool on the SSP cluster from a single VIOS node, enter the following command:
    sspcache -enable -sp -size 1024
    This command must be run on a single VIOS in the cluster.
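To verify the configuration, you can list the cache pool on each node. This sketch assumes the cache_mgt list subcommands that are shipped with the caching software; the exact output format depends on the software level.
    cache_mgt device list -l        # show cache devices and the pools that they belong to
    cache_mgt pool list -l          # confirm that cmpool0 exists on this VIOS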

Management of caching in SSP flash acceleration

After caching is configured, the caching requirements might change over time, for example when new workloads that must be cached are added. To meet the changing requirements, you can extend the cache pool by adding extra cache devices, if necessary, and then increase the cache size.

You can use the following examples to manage the caching configuration.
  1. To add a cache device to the cache pool, enter the following command on each VIOS in the cluster:
    > cache_mgt pool extend -p cmpool0 -d hdisk12 -f
  2. To extend the cache size to 2048 MB, enter the following command on one node:
    > sspcache -resize -sp -size 2048
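To judge whether the larger cache is being used effectively, you can inspect the local caching statistics, such as the read hit ratio. The monitor subcommand in this sketch is assumed to be available at your cache_mgt level; check the command help on your VIOS before you rely on it.
    > cache_mgt monitor get -h -s   # display cumulative caching statistics for this node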



Last updated: Thu, October 15, 2020