IBM®
Storwize® V5100 systems can use NVMe-attached
drives in the control enclosures to provide significant performance improvements as compared to
SAS-attached drives. The system also supports SAS-attached expansion enclosure options.
A Storwize V5100 control enclosure contains
up to 24 NVMe-attached IBM FlashCore® Modules or other self-encrypting NVMe-attached SSD drives. The drives are accessible from the front of the control enclosure, as shown in Figure 1.
Figure 1. Front view of the control enclosure
Each Storwize V5100 control enclosure
contains two identical node canisters. As Figure 2 shows, the top node canister is inverted above the bottom one; each node canister is bounded on each side by a power supply unit.
Figure 2. Rear view of the control enclosure
The Storwize V5100 system has three models:
Storwize V5100, Storwize V5100F, and a utility model, Storwize V5100 Utility. Some models differ only in
warranty, as shown in Table 1.
Each model has the following characteristics and features:
IBM Spectrum Virtualize software with enclosure-based, all-inclusive software feature licensing
Six memory channels for the single CPU in each node canister, with up to 12 DIMMs supporting 32 GB (2 x 16 GB), 96 GB (6 x 16 GB), or 288 GB (6 x 16 GB + 6 x 32 GB) of cache per canister, which is 64 GB, 192 GB, or 576 GB per control enclosure (I/O group); a worked capacity calculation follows the adapter list below
NVMe transport protocol for high-performance 2.5-inch (SFF) NVMe-attached flash drives:
Support for self-compressing, self-encrypting 2.5-inch NVMe-attached IBM FlashCore Modules with the following storage capacities: 4.8 TB, 9.6 TB, and 19.2 TB.
Support for industry-standard 2.5-inch NVMe-attached SSD drive options with the following storage capacities: 1.92 TB, 3.84 TB, 7.68 TB, and 15.36 TB.
On-board ports:
Four 10 Gb Ethernet ports (first two ports also support management IP)
Two USB ports
One 1 Gb Ethernet technician port
Two PCIe HBA slots that optionally support:
4-port 16 Gbps Fibre Channel (FC) adapter. Required for clustering additional Storwize V5100 control enclosures into a system, up to a maximum of two control enclosures per system (from 0 to 1 adapter in slot 2 only).
4-port 32 Gbps Fibre Channel (FC) adapter, which supports simultaneous SCSI and NVMe over Fibre Channel connections on the same port (from 0 to 1 adapter in slot 2 only).
2-port 25 Gbps Ethernet (iWARP) adapter that supports iSCSI or iSER host attachment (from 0 to 1 adapter in slot 2 only).
2-port 25 Gbps Ethernet (RoCE) adapter that supports iSCSI or iSER host attachment (from 0 to 1 adapter in slot 2 only).
4-port (2 ports active) 12 Gbps SAS adapter (from 0 to 1 adapter in slot 1 only). Required for attachment to Storwize V5100 expansion enclosures.
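The memory options above reduce to simple DIMM arithmetic. A minimal sketch in Python, assuming only the three DIMM combinations listed in this section are valid:

# Supported DIMM configurations per node canister, from the feature list above.
# Each entry is (number of 16 GB DIMMs, number of 32 GB DIMMs).
CONFIGS = {
    "small":  (2, 0),
    "medium": (6, 0),
    "large":  (6, 6),
}

CANISTERS_PER_ENCLOSURE = 2  # each control enclosure holds two node canisters

for name, (n16, n32) in CONFIGS.items():
    per_canister = n16 * 16 + n32 * 32
    per_enclosure = per_canister * CANISTERS_PER_ENCLOSURE
    print(f"{name}: {per_canister} GB per canister, "
          f"{per_enclosure} GB per control enclosure (I/O group)")

# Output: 32/64 GB, 96/192 GB, and 288/576 GB, matching the figures quoted above.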
The following drives and expansion enclosures are supported:
Support for 2.5-inch 12 Gbps SAS industry-standard flash drives in Storwize V5100 SAS expansion enclosures, with the
following capacities: 1.92 TB, 3.84 TB, 7.68 TB, and 15.36
TB.
Support for an intermix of Storwize V5100 2U and 5U SAS expansion enclosures, with a total chain weight of up to 10 in each of two SAS chains (see the chain-weight sketch after this list).
Support for up to 12 LFF drives in a 2U SAS expansion enclosure (2077/2078-12F); each enclosure has a chain weight of 1.
Support for up to 24 SFF drives in a 2U SAS expansion enclosure (2077/2078-24F); each enclosure has a chain weight of 1.
Support for up to 92 SFF drives in a 5U SAS expansion enclosure (2077/2078-92F/A9F); each enclosure has a chain weight of 2.5.
Support for up to 20 2U expansion enclosures (2077/2078-AFF) with 24 SFF drives each, for up to 480 drives in two SAS chains; each enclosure has a chain weight of 1.
Support for up to 8 5U expansion enclosures (2077/2078-A9F) with 92 SFF drives each, for up to 736 drives in two SAS chains; each enclosure has a chain weight of 2.5.
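The chain-weight rule above can be checked mechanically. A minimal sketch in Python, assuming the per-enclosure weights listed in this section (1 for 2U enclosures, 2.5 for 5U enclosures) and a limit of 10 per SAS chain:

# Chain weight per expansion enclosure type, from the list above.
CHAIN_WEIGHT = {
    "2U-12F": 1.0,   # up to 12 LFF drives
    "2U-24F": 1.0,   # up to 24 SFF drives
    "5U-92F": 2.5,   # up to 92 SFF drives
}
MAX_CHAIN_WEIGHT = 10.0  # per SAS chain

def chain_is_valid(enclosures):
    """Return True if one SAS chain stays within the total weight limit."""
    return sum(CHAIN_WEIGHT[e] for e in enclosures) <= MAX_CHAIN_WEIGHT

print(chain_is_valid(["2U-24F"] * 10))  # True: 10 x 1 = 10 (480 drives over two such chains)
print(chain_is_valid(["5U-92F"] * 4))   # True: 4 x 2.5 = 10 (736 drives over two such chains)
print(chain_is_valid(["5U-92F"] * 5))   # False: 5 x 2.5 = 12.5 exceeds the limit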
Table 1. Overview of Storwize V5100 systems

Product: Storwize V5100
Specific features: Single 8-core Intel Xeon Skylake 64-bit CPU at 1.7 GHz in each node canister; each control enclosure contains two node canisters. NVMe control enclosure with the ability to add hybrid expansions.
Models and warranty: 2077-424, 1 year (optional, priced service offerings available); 2078-424, 3 years.

Product: Storwize V5100F
Specific features: Single 8-core Intel Xeon Skylake 64-bit CPU at 1.7 GHz in each node canister; each control enclosure contains two node canisters. NVMe control enclosure with the ability to add all-flash expansions.
Models and warranty: 2077-AF4, 1 year (optional, priced service offerings available); 2078-AF4, 3 years.
NVMe transport protocol in Storwize V5100 control enclosures
Storwize V5100 systems use the Non-Volatile Memory Express (NVMe) drive transport protocol.
NVMe is designed specifically for flash technologies. It is a faster, less complicated storage
drive transport protocol than SAS.
NVMe-attached drives support multiple queues, so each CPU core can communicate directly with the drive. This avoids the latency and overhead of core-to-core communication and gives the best performance, as the conceptual sketch below illustrates.
NVMe offers better performance and lower latencies exclusively for solid state drives through
multiple I/O queues and other enhancements.
NVMe multi-queuing supports the Remote Direct Memory Access (RDMA) queue pair model for fast
system access to host-attached iWARP or RoCE communications using iSCSI Extensions for RDMA
(iSER).
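To make the multi-queue model concrete, here is a purely conceptual Python sketch (not a driver interface): each CPU core owns a private submission queue, so no cross-core hand-off or locking occurs on the I/O path:

from collections import deque

NUM_CORES = 8  # for example, the 8-core CPU in each node canister

# One dedicated submission queue per CPU core, as NVMe multi-queuing
# allows; no queue is shared between cores.
submission_queues = {core: deque() for core in range(NUM_CORES)}

def submit_io(core, command):
    """Enqueue an I/O command on the submitting core's own queue.

    Because each core touches only its own queue, there is no lock
    contention or core-to-core communication before the drive sees
    the command (contrast with a single shared queue, as in SAS).
    """
    submission_queues[core].append(command)

submit_io(0, ("read", 0x1000))
submit_io(3, ("write", 0x2000))
print({core: list(q) for core, q in submission_queues.items() if q})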
Storwize V5100 uses distributed RAID level
6 for best resiliency.
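As a rough illustration of distributed RAID 6 sizing, the sketch below estimates usable capacity. The formula is a common approximation (actual stripe width, rebuild-area count, and rounding vary by configuration), not a statement of the product's exact allocation:

def draid6_usable_tb(drive_count, drive_tb, stripe_width, rebuild_areas=1):
    """Approximate usable capacity of a distributed RAID 6 array.

    Capacity equivalent to `rebuild_areas` drives is reserved as
    distributed spare space, and each stripe carries two parity
    strips (RAID 6), leaving (stripe_width - 2) / stripe_width of
    the remainder for data.
    """
    data_fraction = (stripe_width - 2) / stripe_width
    return (drive_count - rebuild_areas) * drive_tb * data_fraction

# Hypothetical example: 24 x 19.2 TB drives, stripe width 12, 1 rebuild area.
print(f"{draid6_usable_tb(24, 19.2, 12):.1f} TB usable (approx.)")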
In addition to self-compressing, self-encrypting IBM FlashCore Modules, the system also supports other industry-standard NVMe flash drives.
IBM FlashCore Modules are NVMe-attached
drives
IBM FlashCore Modules have built-in performance
neutral hardware compression and encryption.
Up to 24 IBM FlashCore Modules in the Storwize V5100 control enclosure are available in 4.8 TB, 9.6 TB, and 19.2 TB NVMe-attached flash drives with IBM FlashCore Technology, which offer up to 3:1 self-compression as well as self-encryption. The second-generation IBM FlashCore2 Modules also include a 38.4 TB capacity.
IBM FlashCore Modules are based on the IBM FlashCore Technology that is used in IBM FlashSystem® 900, IBM FlashSystem V9000, IBM FlashSystem A9000, and IBM FlashSystem A9000R systems.
Twenty-four of the 19.2 TB NVMe-attached FlashCore Modules give a maximum of 460.8 TB of raw storage per control enclosure, of which 384 TB are usable, yielding an effective 768 TB because of the 2:1 sustained compression ratio.
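A minimal sketch of that capacity arithmetic in Python, using the figures quoted above (the 2:1 value is the sustained compression ratio assumed in this section; actual compression depends on the data):

DRIVES = 24
DRIVE_TB = 19.2          # largest FlashCore Module capacity
USABLE_TB = 384.0        # usable capacity quoted above, after RAID overhead
COMPRESSION_RATIO = 2.0  # sustained self-compression ratio assumed above

raw_tb = DRIVES * DRIVE_TB
effective_tb = USABLE_TB * COMPRESSION_RATIO
print(f"raw: {raw_tb} TB, usable: {USABLE_TB} TB, effective: {effective_tb} TB")
# raw: 460.8 TB, usable: 384.0 TB, effective: 768.0 TB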
An intermix of IBM FlashCore Module NVMe-attached flash drives of different sizes can be used in a Storwize V5100 control enclosure.
IBM Spectrum Virtualize software
A Storwize V5100 control enclosure consists
of two node canisters that each run IBM Spectrum
Virtualize software, which is part of the IBM
Spectrum Storage family.
IBM Spectrum Virtualize software
provides the following functions for the host systems that attach to the system:
A single pool of storage
Logical unit virtualization
Management of logical volumes
The system also provides the following functions:
Large scalable cache
Copy Services:
IBM FlashCopy® (point-in-time copy) function, including thin-provisioned FlashCopy to make multiple targets affordable
IBM HyperSwap® (active-active copy) function
Metro Mirror (synchronous copy)
Global Mirror (asynchronous copy)
Data migration
Space management:
IBM Easy Tier® function to migrate the most frequently used data to higher-performance storage
Metering of service quality when combined with IBM Spectrum® Connect. For information, refer to the IBM Spectrum Connect documentation.
Thin-provisioned logical volumes
Compressed volumes to consolidate storage using data reduction pools (Real-Time Compression is not supported.)
Data Reduction pools with deduplication
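As an example of the thin-provisioning function above, the following sketch builds a volume-creation command for the IBM Spectrum Virtualize CLI. mkvdisk and these flags are standard Spectrum Virtualize CLI elements, but the pool name, volume name, and sizes are illustrative assumptions; verify the options against the CLI reference for your code level:

# Hypothetical pool and volume names.
pool = "Pool0"
volume = "thin_vol_01"

# -rsize makes the volume thin-provisioned: only 2% of its virtual
# capacity is allocated up front, and -autoexpand grows the real
# capacity automatically as data is written.
cmd = (f"mkvdisk -mdiskgrp {pool} -name {volume} "
       f"-size 100 -unit gb -rsize 2% -autoexpand")
print(cmd)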
System hardware
The storage system consists of a set
of drive enclosures. Control enclosures contain NVMe flash drives and a pair of
node canisters. A collection of control enclosures that are managed as a single system
is called a clustered system.
Expansion enclosures contain SAS drives and are attached to control enclosures.
Expansion canisters include the serial-attached SCSI (SAS) interface hardware that
enables the node canisters to use the SAS flash drives of the expansion enclosures.
Figure 3 shows the system as a storage system. The internal drives are
configured into arrays and volumes are created from those arrays.
Figure 3. System as a storage system
The system can also be used to virtualize other storage systems, as shown in Figure 4.
Figure 4. System shown virtualizing other storage systems
The two node canisters in each control enclosure form a pair that is known as an I/O group. A single pair is responsible for serving I/O on a specific volume. Because a volume is served by two node canisters, it continues to be available if one node canister fails or is taken offline. The Asymmetric Logical Unit Access (ALUA) features of SCSI are used to disable I/O for a node before it is taken offline or when a volume cannot be accessed through that node.
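A minimal conceptual sketch of the ALUA behavior just described (the state names follow the SCSI ALUA standard; the mapping logic is a simplification for illustration, not the product's implementation):

def alua_state(node_online, node_being_serviced):
    """Return the ALUA access state a host sees for paths through one node.

    Paths through a healthy node canister in the volume's I/O group
    are active; before a node is taken offline, or when it cannot
    access the volume, its paths are reported as unavailable so the
    host multipath driver fails over to the partner node canister.
    """
    if node_online and not node_being_serviced:
        return "active/optimized"
    return "unavailable"

# Node canister 2 is about to be serviced; hosts keep I/O running
# through node canister 1 in the same I/O group.
print("node 1 paths:", alua_state(True, False))  # active/optimized
print("node 2 paths:", alua_state(True, True))   # unavailable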
A system that does not contain any internal drives can be used as a storage virtualization
solution.
System topology
The system topology can be set up in several different ways.
Standard topology, where all node canisters in the system are at the same site.
Figure 5. Example of a standard system topology
HyperSwap topology, where the system consists of at least two I/O groups. Each I/O group is at a different site. Both nodes of an I/O group are at the same site. A volume can be active on two I/O groups so that it can immediately be accessed by the other site when a site is not available.
Figure 6. Example of a HyperSwap system topology
System management
The nodes in a clustered system operate as a single system and present a single point of control for system management and
service. System management and error reporting are provided through an Ethernet interface to one of
the nodes in the system, which is called the configuration node. The configuration node
runs a web server and provides a command-line interface (CLI). The configuration node is a role that
any node can take. If the current configuration node fails, a new configuration node is selected
from the remaining nodes. Each node also provides a command-line interface and web interface to
enable some hardware service actions.
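For example, the configuration node's CLI can be scripted over SSH. A minimal sketch in Python using the third-party paramiko library; lsnodecanister is a standard IBM Spectrum Virtualize command, while the management address and credentials here are placeholders:

import paramiko

# Placeholder management IP and credentials for the clustered system.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("192.0.2.10", username="superuser", password="passw0rd")

# lsnodecanister lists the node canisters in the system and shows
# which one currently holds the configuration node role.
stdin, stdout, stderr = client.exec_command("lsnodecanister")
print(stdout.read().decode())
client.close()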
Storwize V5100 can
have up to two I/O groups in a clustered system.
Fabric types
I/O operations between hosts and nodes and between nodes and RAID storage systems are performed by using the
SCSI standard. The nodes communicate with each other by using private SCSI commands.
Each node canister has four onboard 10 Gbps Ethernet ports. A node canister can also support a 2-port 25 Gbps Ethernet host interface adapter.
Table 2 shows the fabric types that
can be used for communicating between hosts, nodes, and RAID storage systems. These fabric types can be
used at the same time.
Table 2. Communications types

Communications type                          Host to node   Node to storage system   Node to node
Fibre Channel SAN                            Yes            Yes                      Yes
iSCSI (10 Gbps or 25 Gbps Ethernet)          Yes            Yes                      No
iSER (25 Gbps Ethernet)                      Yes            No                       No
RDMA-capable Ethernet ports for
node-to-node communication (25 Gbps)         No             No                       Yes
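The matrix in Table 2 can be captured as a simple lookup. A minimal sketch in Python; the keys are just this table's row and column labels:

# Supported communication paths per fabric type, transcribed from Table 2.
FABRIC_SUPPORT = {
    "Fibre Channel SAN":       {"host_to_node": True,  "node_to_storage": True,  "node_to_node": True},
    "iSCSI (10/25 Gbps)":      {"host_to_node": True,  "node_to_storage": True,  "node_to_node": False},
    "iSER (25 Gbps)":          {"host_to_node": True,  "node_to_storage": False, "node_to_node": False},
    "RDMA Ethernet (25 Gbps)": {"host_to_node": False, "node_to_storage": False, "node_to_node": True},
}

def supports(fabric, path):
    """Return True if the given fabric type supports the given path."""
    return FABRIC_SUPPORT[fabric][path]

print(supports("iSER (25 Gbps)", "host_to_node"))  # True
print(supports("iSER (25 Gbps)", "node_to_node"))  # False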