IBM® Storwize® V7000 2076-724 systems use NVMe-attached drives in the control enclosures to provide significantly better performance than SAS-attached flash drives. The system also supports 2U and 5U SAS-attached expansion enclosure options.
A Storwize V7000 2076-724 control enclosure
contains up to 24 NVMe-attached IBM FlashCore®
Modules or other self-encrypting NVMe-attached SSD drives. The drives are
accessible from the front of the control enclosure, as shown in Figure 1.
Each Storwize V7000 2076-724 control enclosure
contains two identical node canisters. As Figure 2
shows, the top node canister is inverted above the bottom one; each node canister is bounded on each
side by a power supply unit.
The Storwize V7000 2076-724 model has the
following characteristics and features:
IBM Spectrum Virtualize software with enclosure-based, all-inclusive software feature licensing
Dual 8-core Intel Skylake 64-bit CPUs at 1.7 GHz in each of the two node canisters
Six memory channels per CPU, with 1 - 12 DIMMs that support 32 GB - 288 GB of cache per CPU (64 GB - 576 GB per node canister; 128 GB - 1152 GB per control enclosure)
NVMe transport protocol for high-performance access to 2.5-inch (SFF) NVMe-attached flash drives:
Support for self-compressing, self-encrypting 2.5-inch NVMe-attached IBM FlashCore Modules with the following storage
capacities: 4.8 TB, 9.6 TB, and 19.2 TB.
Support for industry-standard 2.5-inch NVMe-attached SSD drive options with the following
storage capacities: 1.92 TB, 3.84 TB, 7.68 TB, and 15.36 TB.
On-board ports:
Four 10 Gb Ethernet ports
Two USB ports
One 1 Gb Ethernet technician port
One PCIe HBA slot with a 4-port (2 ports active) 12 Gbps SAS adapter, used for attachment of expansion enclosures:
Support for 2.5-inch 12 Gbps SAS industry-standard flash drives in attached Storwize V7000 SAS expansion enclosures, with the following capacities: 1.92 TB, 3.84 TB, 7.68 TB, and 15.36 TB.
Support for an intermix of IBM Storwize V7000 2U and 5U expansion enclosures with a total chain weight of 10 in each of two SAS chains (see the sketch after this list):
Support for up to 20 2U IBM Storwize V7000
2076-12F or 2076-24F expansion enclosures (24 SFF drives each, or up to 480 drives) in
two SAS chains, each enclosure with a chain weight of 1.
Support for up to 8 5U IBM
Storwize V7000 2076-92F expansion enclosures (92 SFF drives each, or up to 736
drives) in two SAS chains, each enclosure with a chain weight of 2.5.
Two PCIe HBA slots that optionally support any combination of the following adapters:
4-port 16 Gbps Fibre Channel (FC) adapters that support NVMe over Fabrics (NVMe-oF). Required for adding control enclosures, up to a maximum of four per system (0 to 2).
4-port 32 Gbps Fibre Channel (FC) adapters that support simultaneous SCSI and NVMe over FC (FC-NVMe) connections on the same port (0 to 2).
2-port 25 Gbps Ethernet (iWARP) adapters that support iSCSI or iSER host attachment (0 to
2).
2-port 25 Gbps Ethernet (RoCE) adapters that support iSCSI or iSER host attachment (0 to
2).
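The following minimal Python sketch (illustrative only, not IBM tooling) applies the chain-weight rule from the preceding list, where a 2U enclosure has a weight of 1, a 5U enclosure has a weight of 2.5, and each of the two SAS chains can carry a total weight of 10:

CHAIN_WEIGHT = {"2U": 1.0, "5U": 2.5}   # weights from the list above
MAX_WEIGHT_PER_CHAIN = 10.0

def chain_is_valid(enclosures):
    """enclosures: the '2U'/'5U' expansion enclosures on one SAS chain."""
    return sum(CHAIN_WEIGHT[e] for e in enclosures) <= MAX_WEIGHT_PER_CHAIN

print(chain_is_valid(["2U"] * 10))              # True: 10 x 1.0 = 10
print(chain_is_valid(["5U"] * 4))               # True: 4 x 2.5 = 10
print(chain_is_valid(["5U"] * 2 + ["2U"] * 5))  # True: 5.0 + 5.0 = 10
print(chain_is_valid(["5U"] * 4 + ["2U"]))      # False: 11 exceeds 10

This also shows why the stated maximums hold: 20 2U enclosures split across two chains give a weight of 10 per chain, as do 8 5U enclosures split four per chain.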
Table 1. Overview of the Storwize V7000 2076-724 system

Product: Storwize V7000 2076-724
Specific features:
Dual 8-core Intel Skylake 64-bit CPUs at 1.7 GHz in each of the two node canisters
Single SSD boot drive
Hardware compression assist of 40 Gb/s (Real-Time Compression is not supported in Storwize V7000 Gen 3.)
Models: 2076-724
Warranty: 3-year warranty. Customer installed and maintained, with one FRU (system board only) replacement supported by IBM Service Support Representatives (SSRs). Optional, priced service offerings are available.
Using an optional FC adapter, you can also add any of the following control enclosures to an IBM Storwize V7000 2076-724 system, until the system has a total of four control enclosures (eight node canisters):
Storwize V7000 2076-724
Storwize V7000 2076-624
Storwize V7000 2076-524
FlashSystem 9110 9846/8-AF7
Attention: Adding a Storwize V7000 2076-724/U7B system to an existing Storwize V7000 system that consists of three or fewer 2076-724, 2076-624, or 2076-524 control enclosures creates a Storwize V7000 2076-724/U7B system.
NVMe transport protocol in Storwize V7000 2076-724 control enclosures
Storwize V7000 2076-724 systems use the Non-Volatile Memory Express (NVMe) drive transport protocol.
NVMe is designed specifically for flash technologies. It is a faster, less complicated storage
drive transport protocol than SAS.
NVMe-attached drives support multiple queues so that each CPU core can communicate directly with the drive. This avoids the latency and overhead of core-to-core communication, which gives the best performance.
Designed exclusively for solid-state drives, NVMe offers better performance and lower latency through multiple I/O queues and other enhancements.
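As a conceptual illustration of this per-core queue model, consider the minimal Python sketch below (not driver code; the queue count of 32 is a hypothetical value negotiated with the drive):

import os

def build_queue_map(num_cores, num_queues):
    # Give each core its own submission queue when enough queues exist;
    # otherwise cores share queues round-robin, as a driver might.
    return {core: core % num_queues for core in range(num_cores)}

cores = os.cpu_count() or 8
queues = 32  # hypothetical per-drive queue count
qmap = build_queue_map(cores, queues)
print(f"core 0 submits I/O on queue {qmap[0]} with no cross-core handoff")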
NVMe multi-queuing supports the Remote Direct Memory Access (RDMA) queue pair model, providing fast system access for hosts attached through iWARP or RoCE by using iSCSI Extensions for RDMA (iSER).
Storwize V7000 2076-724 uses distributed RAID
level 6 for best resiliency.
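Distributed RAID 6 keeps two independent parity strips per stripe, so any two drives in a stripe can fail without data loss. The simplified Python sketch below (not IBM's implementation) shows only single-parity XOR reconstruction of one lost strip; RAID 6 adds a second, differently computed parity strip (Q, computed over a Galois field) so that a second simultaneous failure is also recoverable:

from functools import reduce

# Simplified illustration: XOR (P) parity rebuilds one missing data strip.
data_strips = [b"\x01\x02", b"\x0a\x0b", b"\x10\x20"]

def xor_strips(strips):
    # XOR the strips column by column (byte by byte).
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*strips))

p_parity = xor_strips(data_strips)

# Drive 1 fails: rebuild its strip from the surviving strips plus P.
rebuilt = xor_strips([data_strips[0], data_strips[2], p_parity])
assert rebuilt == data_strips[1]
print("rebuilt strip:", rebuilt.hex())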
In addition to self-compressing, self-encrypting IBM FlashCore Modules, the system also supports other industry-standard NVMe flash drives.
IBM FlashCore Modules are NVMe-attached
drives
IBM FlashCore Modules have built-in performance
neutral hardware compression and encryption.
Up to 24 IBM FlashCore Modules in the Storwize V7000 2076-724 control enclosure are available as 4.8 TB, 9.6 TB, and 19.2 TB NVMe-attached flash drives with IBM FlashCore Technology, which offer up to 3:1 self-compression as well as self-encryption.
IBM FlashCore Modules are based on the IBM FlashCore Technology in IBM FlashSystem® 900, and are also used in Storwize V7000 2076-724/U7B, IBM FlashSystem V9000, IBM FlashSystem A9000, and IBM FlashSystem A9000R systems.
With 24 of the 19.2 TB NVMe-attached FlashCore Modules, a control enclosure provides a maximum of 460.8 TB of raw storage, of which 384 TB is usable, yielding an effective capacity of 768 TB (because of the 2:1 sustained compression ratio).
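The capacity figures above follow from simple arithmetic, as this small worked example in Python shows (the usable figure is taken as quoted, not derived):

modules = 24
module_tb = 19.2
raw_tb = round(modules * module_tb, 1)  # 460.8 TB raw
usable_tb = 384.0               # usable figure quoted above, after RAID overhead
effective_tb = usable_tb * 2    # 768 TB at the 2:1 sustained compression ratio
print(f"raw={raw_tb} TB, usable={usable_tb} TB, effective={effective_tb} TB")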
An intermix of IBM FlashCore Module NVMe-attached flash drives of different sizes can be used in a Storwize V7000 2076-724 control enclosure.
IBM Spectrum Virtualize software
A Storwize V7000 2076-724 control enclosure
consists of two node canisters that each run IBM
Spectrum Virtualize software, which is part of the IBM
Spectrum Storage family.
IBM Spectrum Virtualize software
provides the following functions for the host systems that attach to the system:
A single pool of storage
Logical unit virtualization
Management of logical volumes
Mirroring of logical volumes
The system also provides the following functions:
Large scalable cache
Copy Services:
IBM FlashCopy® (point-in-time copy) function, including thin-provisioned FlashCopy to make multiple targets affordable
IBM HyperSwap® (active-active copy) function
Metro Mirror (synchronous copy)
Global Mirror (asynchronous copy)
Data migration
Space management:
IBM Easy Tier® function to
migrate the most frequently used data to higher-performance storage
Metering of service quality when combined with IBM Spectrum® Connect. For information,
refer to the IBM Spectrum Connect
documentation.
Thin-provisioned logical volumes
Compressed volumes to consolidate storage using data reduction
pools (Real-Time Compression is not supported in Storwize V7000 Gen 3.)
Data Reduction pools with deduplication
System hardware
The storage system consists of a set
of drive enclosures. Control enclosures contain NVMe flash drives and a pair of
node canisters. A collection of control enclosures that are managed as a single system
is called a clustered system or
simply a system. Expansion enclosures contain SAS drives and are attached
to control enclosures. Expansion canisters include the serial-attached SCSI (SAS)
interface hardware that enables the node canisters to use the SAS flash drives of the expansion
enclosures.
Figure 3 shows the system as a storage system. The internal drives are
configured into arrays and volumes are created from those arrays.
The system can also be used to virtualize other storage systems, as shown in Figure 4.
The two node canisters in each control enclosure form a pair that is known as an I/O group. A single pair is responsible for serving I/O on a specific volume. Because
a volume is served by two node canisters, the volume continues to be available if one node canister
fails or is taken offline. The Asymmetric Logical Unit Access (ALUA) features of SCSI are used to
disable the I/O for a node before it is taken offline or when a volume cannot be accessed through
that node.
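The following minimal Python sketch (conceptual only, not IBM code) models how per-node path states in this ALUA scheme let host I/O continue through the partner node canister when one node fails or is taken offline:

from dataclasses import dataclass

@dataclass
class NodeCanister:
    name: str
    online: bool = True

@dataclass
class Volume:
    name: str
    io_group: tuple  # the pair of node canisters that serves this volume

    def alua_path_states(self):
        # Paths through online nodes are usable; paths through a failed or
        # offline node are reported as unavailable so hosts stop using them.
        return {node.name: ("active" if node.online else "unavailable")
                for node in self.io_group}

node1, node2 = NodeCanister("node1"), NodeCanister("node2")
volume = Volume("vol0", (node1, node2))
print(volume.alua_path_states())  # both paths usable
node1.online = False              # node1 fails or is taken offline
print(volume.alua_path_states())  # host I/O continues through node2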
A system that does not contain any internal drives can be used as a storage virtualization
solution.
System topology
The system topology can be set up in the following way:
Standard topology, where all node canisters in the system are at the same site.
System management
The nodes in a system operate as a single system and present a single point of control for system
management and service. System management and error reporting are provided through an Ethernet
interface to one of the nodes in the system, which is called the configuration node.
The configuration node runs a web server and provides a command-line interface (CLI). The
configuration node is a role that any node can take. If the current configuration node fails, a new
configuration node is selected from the remaining nodes. Each node also provides a command-line
interface and web interface to enable some hardware service actions.
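For example, you can script CLI access to the configuration node over SSH at the system's management IP address. The following Python sketch uses the third-party paramiko library; the address and credentials are placeholders, and lssystem is a standard IBM Spectrum Virtualize CLI command that reports cluster-wide status:

import paramiko  # third-party SSH library: pip install paramiko

# Hedged sketch: connect to the cluster management IP (placeholder
# address and credentials) and run a read-only CLI command on whichever
# node currently holds the configuration node role.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("192.0.2.10", username="superuser", password="example")

_, stdout, _ = client.exec_command("lssystem")
print(stdout.read().decode())
client.close()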
Fabric types
I/O operations between hosts and nodes and between nodes and RAID storage systems are performed
by using the SCSI standard. The nodes communicate with each other by using private SCSI
commands.
Each node canister has four onboard 10 Gbps Ethernet ports. A node canister can also support up
to two 2-port 25 Gbps Ethernet host interface adapters.
Table 2 shows the fabric types that
can be used for communicating between hosts, nodes, and RAID storage systems. These fabric types can
be used at the same time.
Table 2. Communications types

Communications type                            Host to node   Node to storage system   Node to node
Fibre Channel SAN                              Yes            Yes                      Yes
iSCSI (10 Gbps or 25 Gbps Ethernet)            Yes            Yes                      No
iSER (25 Gbps Ethernet)                        Yes            No                       No
RDMA-capable Ethernet ports for node-to-node
communication (25 Gbps Ethernet)               No             No                       Yes