Storwize V7000 Gen3 system overview

IBM® Storwize® V7000 Gen3 (Storwize V7000 2076-724) systems use NVMe-attached drives. The system also supports optional 2U and 5U SAS-attached expansion enclosures that accept a range of SAS drive types and capacities.

A Storwize V7000 2076-724 control enclosure contains up to 24 NVMe-attached IBM FlashCore® Modules or other self-encrypting NVMe-attached SSD drives. The drives are accessible from the front of the control enclosure, as shown in Figure 1.
Figure 1. Front view of the control enclosure
Each Storwize V7000 2076-724 control enclosure contains two node canisters. As Figure 2 shows, the top node canister is inverted above the bottom one. The control enclosure also contains two power supply units (PSUs) that operate independently of each other. The PSUs are also visible from the back of the control enclosure.
Figure 2. Rear view of the control enclosure
Storwize V7000 2076-724 systems have the following characteristics and features:
  • Operates on IBM Spectrum Virtualize software.

    Like other Storwize V7000 systems, Storwize V7000 Gen3 control enclosures use trust-based licenses for virtualization, Remote Copy, and compression.

  • Dual 8-core Intel Skylake 64-bit CPUs at 1.7 GHz in each node canister
  • Single SSD boot drive
  • Hardware compression assist rate of 40 Gb/s (Real-Time Compression is not supported in Storwize V7000 Gen3.)
  • Supports a range of memory options: 64 GB, 128 GB, 192 GB, or 576 GB per node canister.
  • Supports the NVMe transport protocol for high-performance 2.5-inch small form factor (SFF) NVMe-attached flash drives, including:
    • Self-compressing, self-encrypting IBM FlashCore Modules
    • Industry-standard NVMe-attached SSD drives
  • Onboard ports for connectivity and maintenance:
    • Four 10 Gbps Ethernet ports
    • Two USB ports
    • One 1 Gbps Ethernet technician port
  • One PCIe adapter slot that can support a 4-port 12 Gbps SAS adapter. With the two active ports on this adapter, the control enclosure can support two SAS chains that connect to the following expansion enclosures:
    • Up to 20 2076-12F or 2076-24F (2U) expansion enclosures. Each expansion enclosure has a chain weight of 1.

      2076-12F expansion enclosures can support up to 12 3.5-inch 12 Gbps SAS hard disk drives.

      2076-24F expansion enclosures can support up to 24 2.5-inch 12 Gbps SAS flash drives and hard disk drives.

    • Up to eight 2076-92F (5U) expansion enclosures. Each expansion enclosure has a chain weight of 2.5.

      2076-92F expansion enclosures can support 2.5-inch and 3.5-inch 12 Gbps SAS disk drives and flash drives.

    • A mixture of 2U and 5U expansion enclosures, with a maximum total chain weight of 10 in each SAS chain (see the chain-weight sketch after this list).
  • Two PCIe networking adapter slots that can support any combination of the following optional adapters:
    • 4-port 16 Gbps Fibre Channel (FC) adapters that support NVMe over Fabrics (NVMe-oF) (0 - 2). An FC adapter is required to add other control enclosures to the system.
      Using an FC adapter, you can connect the control enclosure to up to three more control enclosures (for a maximum of eight node canisters). A Storwize V7000 Gen3 system can connect to the following control enclosures:
      • Storwize V7000 2076-724 (another Storwize V7000 Gen3)
      • Storwize V7000 2076-624 (Storwize V7000 Gen2+)
      • Storwize V7000 2076-524 (Storwize V7000 Gen2)

      The Storwize V7000 Gen3 system can also use an FC adapter for host connectivity and for virtualization of other storage systems.

    • 2-port 25 Gbps Ethernet (iWARP) adapters that support iSCSI or iSER host attachment (0 - 2).
    • 2-port 25 Gbps Ethernet (RoCE) adapters that support iSCSI or iSER host attachment (0 - 2).
  • 3-year warranty

    The system is designed to be installed and maintained by the customer. However, one FRU (the enclosure midplane) must be replaced by IBM Service Support Representatives (SSRs). Optional, priced service offerings are also available.
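
The SAS chain-weight rule described earlier in this list can be checked with a short calculation: each 2U enclosure contributes a weight of 1, each 5U enclosure a weight of 2.5, and the total weight per SAS chain cannot exceed 10. The following Python sketch is illustrative only; the names and structure are not part of the product software.

    # Illustrative check of the SAS chain-weight rule described above.
    # Each 2U enclosure (2076-12F/24F) counts 1.0 and each 5U enclosure
    # (2076-92F) counts 2.5; the total weight per SAS chain is limited to 10.
    CHAIN_WEIGHT = {"2U": 1.0, "5U": 2.5}
    MAX_CHAIN_WEIGHT = 10.0

    def chain_is_valid(enclosures_2u: int, enclosures_5u: int) -> bool:
        total = enclosures_2u * CHAIN_WEIGHT["2U"] + enclosures_5u * CHAIN_WEIGHT["5U"]
        return total <= MAX_CHAIN_WEIGHT

    print(chain_is_valid(10, 0))  # True: 10 x 1.0 = 10
    print(chain_is_valid(0, 4))   # True: 4 x 2.5 = 10
    print(chain_is_valid(5, 2))   # True: 5 x 1.0 + 2 x 2.5 = 10
    print(chain_is_valid(8, 1))   # False: 8 x 1.0 + 1 x 2.5 = 10.5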

NVMe transport protocol overview

Storwize V7000 Gen3 systems use the Non-Volatile Memory Express (NVMe) drive transport protocol. NVMe is designed specifically for flash technologies. NVMe-attached drives support multiple queues so that each CPU core in the control enclosure can communicate directly with a drive. Through multiple I/O queues and other enhancements, NVMe increases performance and lowers latency for solid-state drives. NVMe multi-queuing also supports the Remote Direct Memory Access (RDMA) queue pair model, which provides fast system access for host-attached iWARP or RoCE communications that use iSCSI Extensions for RDMA (iSER).
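
The multi-queue model can be pictured as one submission/completion queue pair per CPU core, so cores can issue I/O without contending on a single shared queue. The following Python sketch is a conceptual illustration only, not an NVMe driver; the class and method names are hypothetical.

    from collections import deque
    from dataclasses import dataclass, field

    # Conceptual sketch only: one submission/completion queue pair per CPU core,
    # so cores never contend on a single shared queue. The names (QueuePair,
    # NvmeDrive) are illustrative, not a real NVMe API.
    @dataclass
    class QueuePair:
        core_id: int
        submission: deque = field(default_factory=deque)
        completion: deque = field(default_factory=deque)

    class NvmeDrive:
        def __init__(self, num_cores: int):
            # One queue pair per core, rather than one shared queue for all cores.
            self.queue_pairs = [QueuePair(core) for core in range(num_cores)]

        def submit(self, core_id: int, command: str) -> None:
            # Each core writes only to its own submission queue (no cross-core locking).
            self.queue_pairs[core_id].submission.append(command)

        def process(self) -> None:
            # The drive services every submission queue and posts results to the
            # matching completion queue, which the submitting core polls independently.
            for qp in self.queue_pairs:
                while qp.submission:
                    qp.completion.append("done: " + qp.submission.popleft())

    drive = NvmeDrive(num_cores=8)
    drive.submit(core_id=0, command="read block 0")
    drive.submit(core_id=7, command="write block 128")
    drive.process()
    print([list(qp.completion) for qp in drive.queue_pairs if qp.completion])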

IBM FlashCore Module overview

IBM FlashCore Modules are based on the IBM FlashCore Technology that is used in IBM FlashSystem® 900, IBM FlashSystem V9000, IBM FlashSystem A9000, and IBM FlashSystem A9000R systems. IBM FlashCore Modules are NVMe-attached drives that provide built-in, performance-neutral hardware compression and encryption.

Each Storwize V7000 Gen3 control enclosure can support up to 24 IBM FlashCore Modules, which are available in different storage capacities. A control enclosure can support a mixture of drive sizes. For example, twenty-four 19.2 TB NVMe-attached FlashCore Modules provide a maximum of approximately 460 TB of raw storage per control enclosure. These drives provide 384 TB of usable storage, which yields an effective capacity of 768 TB (at the 2:1 sustained compression ratio).
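
The capacity figures above follow from simple arithmetic, as the following sketch shows. The drive size, usable capacity, and 2:1 compression ratio are taken from the text; the function name is illustrative.

    # Capacity arithmetic for a fully populated control enclosure, using the
    # figures stated above (24 drives of 19.2 TB, 384 TB usable, 2:1 compression).
    # The function name is illustrative.
    def enclosure_capacity(drive_count=24, drive_size_tb=19.2,
                           usable_tb=384.0, compression_ratio=2.0):
        raw_tb = drive_count * drive_size_tb          # 24 x 19.2 TB = 460.8 TB raw
        effective_tb = usable_tb * compression_ratio  # 384 TB x 2 = 768 TB effective
        return raw_tb, effective_tb

    raw, effective = enclosure_capacity()
    print(round(raw, 1), effective)  # 460.8 768.0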

IBM Spectrum Virtualize software

A Storwize V7000 Gen3 control enclosure consists of two node canisters that each run IBM Spectrum Virtualize software, which is part of the IBM Spectrum Storage family. IBM Spectrum Virtualize software provides the following functions for the host systems that attach to the system:
  • A single pool of storage
  • Logical unit virtualization
  • Management of logical volumes
  • Mirroring of logical volumes
The system also provides the following functions:
  • Large scalable cache
  • Copy Services:
    • IBM FlashCopy® (point-in-time copy) function, including thin-provisioned FlashCopy to make multiple targets affordable
    • IBM HyperSwap® (active-active copy) function
    • Metro Mirror (synchronous copy)
    • Global Mirror (asynchronous copy)
    • Data migration
  • Space management:
    • IBM Easy Tier® function to migrate the most frequently used data to higher-performance storage
    • Metering of service quality when combined with IBM Spectrum® Connect. For information, refer to the IBM Spectrum Connect documentation.
    • Thin-provisioned logical volumes
    • Compressed volumes to consolidate storage using data reduction pools (Real-Time Compression is not supported in Storwize V7000 Gen3.)
    • Data reduction pools with deduplication

System overview

The storage system consists of a set of drive enclosures. Control enclosures contain NVMe-attached flash drives and a pair of node canisters. A collection of control enclosures that are managed as a single system is called a clustered system or a system. Expansion enclosures contain SAS drives and are attached to control enclosures. Expansion canisters include the serial-attached SCSI (SAS) interface hardware that enables the node canisters to use the SAS drives of the expansion enclosures.

Figure 3 shows the system as a storage system. The internal drives are configured into arrays and volumes are created from those arrays.

Figure 3. System as a storage system

The system can also be used to virtualize other storage systems, as shown in Figure 4.

Figure 4. System shown virtualizing other storage systems

The two node canisters in each control enclosure are arranged into pairs that are known as I/O groups. A single pair is responsible for serving I/O on a specific volume. Because a volume is served by two node canisters, the volume continues to be available if one node canister fails or is taken offline. The Asymmetric Logical Unit Access (ALUA) features of SCSI are used to disable the I/O for a node before it is taken offline or when a volume cannot be accessed through that node.
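
The failover behavior can be sketched as follows: both node canisters of an I/O group present paths to the volume, and the paths through a node are made unavailable before that node goes offline. The Python sketch below is a conceptual model only; the class and state names are illustrative and are not the SCSI ALUA target port group states used on the wire.

    # Conceptual model of the behavior described above: a volume is served by the
    # two node canisters of its I/O group, and I/O through a node is disabled
    # before that node goes offline.
    class IOGroup:
        def __init__(self, node_a: str, node_b: str):
            self.online = {node_a: True, node_b: True}

        def take_offline(self, node: str) -> None:
            # Paths through this node stop accepting I/O before it goes offline.
            self.online[node] = False

        def paths_for_volume(self) -> dict:
            # The volume remains accessible through any node that is still online.
            return {node: ("available" if up else "unavailable")
                    for node, up in self.online.items()}

    io_group = IOGroup("node1", "node2")
    io_group.take_offline("node1")
    print(io_group.paths_for_volume())  # {'node1': 'unavailable', 'node2': 'available'}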

A system that does not contain any internal drives can be used as a storage virtualization solution.

System topology

The system topology can be set up in several different ways.
  • Standard topology, where all node canisters in the system are at the same site.
    Figure 5. Example of a standard system topology
  • HyperSwap topology, where the system consists of at least two I/O groups. Each I/O group is at a different site. Both nodes of an I/O group are at the same site. A volume can be active on two I/O groups so that it can immediately be accessed by the other site when a site is not available.
    Figure 6. Example of a HyperSwap system topology

Volume types

You can create the following types of volumes on the system.
  • Basic volumes, where a single copy of the volume is cached in one I/O group. Basic volumes can be established in any system topology; however, Figure 7 shows a standard system topology.
    Figure 7. Example of a basic volume
  • Mirrored volumes, where copies of the volume can either be in the same storage pool or in different storage pools. The volume is cached in a single I/O group, as Figure 8 shows. Typically, mirrored volumes are established in a standard system topology.
    Figure 8. Example of mirrored volumes
  • HyperSwap volumes, where copies of a single volume are in different storage pools that are on different sites. As Figure 9 shows, the volume is cached in two I/O groups that are on different sites. These volumes can be created only when the system topology is HyperSwap.
    Figure 9. Example of HyperSwap volumes
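
The volume types above differ mainly in how many copies they keep and in how many I/O groups they are cached. The following schematic summary captures that as data; the structure and field names are illustrative only.

    from dataclasses import dataclass

    # Schematic summary (as data, not product code) of the volume types above:
    # how many copies each type keeps and in how many I/O groups it is cached.
    @dataclass
    class VolumeType:
        copies: int
        caching_io_groups: int
        notes: str

    VOLUME_TYPES = {
        "basic": VolumeType(1, 1, "single copy; any system topology"),
        "mirrored": VolumeType(2, 1, "copies in the same or different storage pools"),
        "hyperswap": VolumeType(2, 2, "copies in pools at different sites; HyperSwap topology only"),
    }

    for name, volume_type in VOLUME_TYPES.items():
        print(name, "->", volume_type)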

System management

The nodes in a clustered system operate as a single system and present a single point of control for system management and service. System management and error reporting are provided through an Ethernet interface to one of the nodes in the system, which is called the configuration node. The configuration node runs a web server and provides a command-line interface (CLI). The configuration node is a role that any node can take. If the current configuration node fails, a new configuration node is selected from the remaining nodes. Each node also provides a command-line interface and web interface to enable some hardware service actions.
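
The configuration-node role described above can be sketched as a management role that fails over between cluster nodes. In the Python sketch below, the selection rule (lowest surviving node ID) is an assumed placeholder, not the product's actual selection algorithm.

    # Sketch of the configuration-node role described above: any node can hold
    # the role, and if the current configuration node fails, the role moves to a
    # surviving node. The selection rule used here is an assumption.
    class ClusteredSystem:
        def __init__(self, node_ids):
            self.nodes = sorted(node_ids)
            self.config_node = self.nodes[0]  # one node holds the configuration role

        def node_failed(self, node_id: int) -> None:
            self.nodes.remove(node_id)
            if node_id == self.config_node and self.nodes:
                # The management role fails over to a remaining node.
                self.config_node = self.nodes[0]

    system = ClusteredSystem([1, 2, 3, 4])
    system.node_failed(1)
    print(system.config_node)  # 2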

Fabric types

I/O operations between hosts and nodes and between nodes and RAID storage systems are performed by using the SCSI standard. The nodes communicate with each other by using private SCSI commands.

Each node canister has four onboard 10 Gbps Ethernet ports. A node canister can also support up to two 2-port 25 Gbps Ethernet host interface adapters.

Table 1 shows the fabric types that can be used for communicating between hosts, nodes, and RAID storage systems. These fabric types can be used at the same time.

Table 1. Communications types
Communications type                                Host to node   Node to storage system   Node to node
Fibre Channel SAN                                  Yes            Yes                      Yes
iSCSI (10 Gbps or 25 Gbps Ethernet)                Yes            Yes                      No
RDMA-capable Ethernet ports for node-to-node
communication (25 Gbps Ethernet)                   No             No                       Yes