IBM®
FlashSystem 5200 systems can use
NVMe-attached drives in the control enclosures to provide significant performance improvements as
compared to SAS-attached drives. The system also supports SAS-attached expansion enclosure
options.
A FlashSystem 5200 control
enclosure contains up to 12 NVMe-attached IBM FlashCore® Modules, industry-standard NVMe drives, or Storage Class Memory (SCM) drives. The
drives are accessible from the front of the control enclosure, as shown in Figure 1.
Figure 1. Front view of the control enclosure
The FlashSystem 5200 control enclosure (MTM 4662-6H2, 4662-UH6, 4662-Y12) contains two redundant power supplies. Each power supply is rated at 836 W, with an input of 100-127 V (low line) or 200-240 V (high line) at approximately 9 A, 50/60 Hz.
Each control enclosure contains two identical node canisters.
Figure 2. Rear view of the control enclosure, showing the node canisters
Each node canister has the following characteristics and features:
IBM Storage Virtualize software with enclosure-based, all-inclusive software feature licensing.
Four channels of cache for the single CPU, with 1 - 4 DIMMs, supporting 32 GB (1 x 32 GB), 128 GB (4 x 32 GB), or 256 GB (4 x 64 GB), which is 64 GB, 256 GB, or 512 GB per control enclosure (I/O group), as illustrated in the sketch after this list.
Note: A minimum of 128 GB of memory per canister (256 GB per enclosure) is required to support deduplication.
NVMe transport protocol support for high-performance 2.5-inch (SFF) NVMe-attached flash drives:
Self-compressing, self-encrypting 2.5-inch NVMe-attached IBM FlashCore Modules with the following storage capacities: 4.8 TB, 9.6 TB, 19.2
TB, and 38.4 TB.
Industry-standard 2.5-inch NVMe-attached drive options with the following storage capacities: 800 GB, 1.92 TB, 3.84 TB, 7.68 TB, and 15.36 TB.
SCM 2.5-inch NVMe-attached drive options with the following storage capacities: 375 GB, 750 GB, 800 GB, 1.6 TB, and 3.2 TB.
The following on-board ports:
Two 10 Gb Ethernet iSCSI ports
Ethernet port 1 is for accessing the management interfaces, for accessing the service
assistant GUI for the canister, and for iSCSI host attachment.
Ethernet port 2 can also be used as a failover management interface and for iSCSI host attachment.
One USB port
One 1 Gb Ethernet technician port
Two PCIe adapter slots that support the following optional adapters:
4-port 16 Gbps Fibre Channel (FC) adapter. Required for adding other FlashSystem 5200 control enclosures to the system, up to a maximum of four control enclosures (I/O groups) per system. (Fibre Channel host adapters cannot be mixed with SAS host adapters.)
2-port 32 Gbps Fibre Channel (FC) adapters that support simultaneous SCSI and NVMe over Fibre Channel connections on the same port. (Fibre Channel host adapters cannot be mixed with SAS host adapters.)
4-port 10 Gbps Ethernet (iSCSI) host adapter.
2-port 25 Gbps Ethernet (iWARP) adapters that support iSCSI host attachment.
2-port 25 Gbps Ethernet (RoCE) adapters that support iSCSI host attachment.
2-port 25 Gbps Ethernet (RoCE) adapters that support NVMe over RDMA and NVMe over TCP for host
attachments.
4-port 12 Gbps SAS adapter for host attachment (slot 1 only). (SAS host adapters cannot be mixed
with Fibre Channel host adapters).
2-port 12 Gbps SAS adapter for attachment to SAS expansion enclosures (slot 2 only).
FlashSystem 5200 supports host-attached iSCSI Extensions for RDMA (iSER) with RoCE or
iWARP.
Note: For specific allowable adapter configurations, see Adapter Slot Guidelines in
Related Links.
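The cache options and the deduplication requirement in the list above can be summarized with a short sketch. The following Python snippet is illustrative only; the dictionary and helper function are hypothetical and simply restate the capacities given in the feature list.

```python
# Hypothetical sketch of the supported per-canister cache options (not product code).
# Capacities and the deduplication minimum are taken from the feature list above.
SUPPORTED_CANISTER_MEMORY_GB = {
    32: "1 x 32 GB DIMM",
    128: "4 x 32 GB DIMMs",
    256: "4 x 64 GB DIMMs",
}

DEDUP_MINIMUM_PER_CANISTER_GB = 128  # deduplication needs 128 GB per canister


def describe_memory_option(per_canister_gb: int) -> str:
    """Summarize one of the listed cache options for a two-canister control enclosure."""
    if per_canister_gb not in SUPPORTED_CANISTER_MEMORY_GB:
        raise ValueError(f"{per_canister_gb} GB is not a listed option")
    per_enclosure_gb = per_canister_gb * 2  # two node canisters per control enclosure
    dedup_ok = per_canister_gb >= DEDUP_MINIMUM_PER_CANISTER_GB
    return (
        f"{SUPPORTED_CANISTER_MEMORY_GB[per_canister_gb]} per canister = "
        f"{per_enclosure_gb} GB per control enclosure (I/O group); "
        f"deduplication supported: {dedup_ok}"
    )


for option in sorted(SUPPORTED_CANISTER_MEMORY_GB):
    print(describe_memory_option(option))
```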
The following expansion enclosures are supported.
Support for 2.5-inch 12 Gbps SAS industry-standard flash drives in SAS expansion enclosures.
Support for an intermix of 2U and 5U SAS expansion enclosures with a total chain weight of 10 in each of two SAS chains per control enclosure, as illustrated in the sketch after this list.
Support for up to 12 LFF drives in a 2U SAS expansion enclosure (4662-12G or 4662-F12), with a chain weight of 1.
Support for up to 24 SFF drives in a 2U SAS expansion enclosure (4662-24G or 4662-F24), with a chain weight of 1.
Support for up to 92 SFF drives in a 5U SAS expansion enclosure (4662-92G or 4662-F92), with a chain weight of 2.5.
Support for up to 20 2U expansion enclosures, each with 12 LFF or 24 SFF drives, for a maximum of 480 drives in two SAS chains. Each 2U enclosure has a chain weight of 1.
Support for up to eight 5U expansion enclosures with 92 SFF drives each, for a maximum of 736 drives in two SAS chains. Each 5U enclosure has a chain weight of 2.5.
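The chain weight rules in the list above can be checked with a short sketch. The following Python example is illustrative only; the helper function and labels are hypothetical, and the weights (1 for a 2U expansion enclosure, 2.5 for a 5U expansion enclosure, maximum total of 10 on each SAS chain) come from the list.

```python
# Illustrative check of SAS chain weights (hypothetical helper, not product code).
# From the list above: 2U expansion enclosure = 1.0, 5U expansion enclosure = 2.5,
# with a maximum total chain weight of 10 on each of the two SAS chains per control enclosure.
CHAIN_WEIGHTS = {"2U": 1.0, "5U": 2.5}
MAX_CHAIN_WEIGHT = 10.0


def chain_is_valid(enclosures: list[str]) -> bool:
    """Return True if the enclosures on one SAS chain stay within the weight limit."""
    total = sum(CHAIN_WEIGHTS[form_factor] for form_factor in enclosures)
    return total <= MAX_CHAIN_WEIGHT


print(chain_is_valid(["2U"] * 10))              # True: 10 x 1.0 per chain (20 x 2U across two chains)
print(chain_is_valid(["5U"] * 4))               # True: 4 x 2.5 per chain (8 x 5U across two chains)
print(chain_is_valid(["5U"] * 2 + ["2U"] * 5))  # True: intermix with a total weight of exactly 10
print(chain_is_valid(["5U"] * 5))               # False: total weight 12.5 exceeds the limit of 10
```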
NVMe transport protocol in the control enclosures
These systems use the Non-Volatile Memory Express (NVMe) drive transport protocol.
FlashSystem 5200 supports the following transport protocols for host
attachments: NVMe over Fibre Channel, NVMe over RDMA, and NVMe over TCP.
NVMe is designed specifically for flash technologies. It is a faster, less complicated storage
drive transport protocol than SAS.
NVMe-attached drives support multiple queues so that each CPU core can communicate directly with the drive. This capability avoids the latency and overhead of core-to-core communication and gives the best performance.
NVMe offers better performance and lower latencies exclusively for solid-state drives through
multiple I/O queues and other enhancements.
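To make the per-core queueing point concrete, the following Python sketch contrasts a single shared queue, which forces every core through one lock, with per-core queues that need no cross-core coordination. It is a simplified conceptual model, not the NVMe driver or the system firmware.

```python
# Conceptual model only: why per-core queues avoid cross-core locking (not product code).
import threading
from collections import deque


class SingleSharedQueue:
    """SAS-style model: all CPU cores submit to one queue and serialize on one lock."""

    def __init__(self) -> None:
        self._queue = deque()
        self._lock = threading.Lock()

    def submit(self, core_id: int, command: str) -> None:
        with self._lock:  # every core contends for the same lock
            self._queue.append((core_id, command))


class PerCoreQueues:
    """NVMe-style model: each CPU core submits to its own queue, so no shared lock is needed."""

    def __init__(self, cores: int) -> None:
        self._queues = [deque() for _ in range(cores)]

    def submit(self, core_id: int, command: str) -> None:
        self._queues[core_id].append((core_id, command))  # no cross-core contention


shared = SingleSharedQueue()
per_core = PerCoreQueues(cores=4)
for core in range(4):
    shared.submit(core, "read")
    per_core.submit(core, "read")
```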
In addition to supporting self-compressing, self-encrypting IBM FlashCore Modules, the NVMe transport protocol also supports other industry
standard NVMe flash drives.
IBM Storage Virtualize software
The control enclosure consists of two node canisters, each running IBM Storage Virtualize software.
IBM Storage Virtualize software provides the
following functions for the host systems that attach to the system:
A single pool of storage
Logical unit virtualization
Management of logical volumes
The system also provides the following functions:
Large scalable cache
Copy Services:
IBM FlashCopy® (point-in-time copy) function, including thin-provisioned FlashCopy to make multiple targets affordable
IBM HyperSwap® (active-active copy) function
Metro Mirror (synchronous copy)
Global Mirror (asynchronous copy)
Data migration
Space management:
IBM Easy Tier® function to migrate the
most frequently used data to higher-performance storage
Metering of service quality when combined with IBM Spectrum® Connect. For information, refer to the IBM Spectrum Connect documentation.
Thin-provisioned logical volumes
Compressed volumes to consolidate storage using data reduction pools
Data reduction pools with deduplication
System hardware
The storage system consists of a set
of drive enclosures. Control enclosures contain NVMe flash drives and a pair of
node canisters. A collection of control enclosures that are managed as a single system
is called a clustered system.
Expansion enclosures contain SAS drives and are attached to control enclosures.
Expansion canisters include the serial-attached SCSI (SAS) interface hardware that
enables the node canisters to use the SAS flash drives of the expansion enclosures.
Figure 3 shows the system as a storage system. The internal drives are configured into arrays, and volumes are created from those arrays.
Figure 3. System as a storage system
The system can also be used to virtualize other storage systems, as shown in Figure 4.
Figure 4. System shown virtualizing other storage systems
The two node canisters in each control enclosure form a pair that is known as an I/O group. A single pair is responsible for serving I/O on a specific volume. Because a volume is served by two node canisters, the volume continues to be available if one node canister fails or is taken offline. The Asymmetric Logical Unit Access (ALUA) features of SCSI are used to disable the I/O for a node before it is taken offline or when a volume cannot be accessed through that node.
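The following Python sketch is a simplified, purely illustrative model of ALUA path states for one volume in an I/O group; the function and state names are hypothetical and do not reflect the product's internal implementation.

```python
# Illustrative model of ALUA path states for a volume in an I/O group (not product code).
# A volume is served by the two node canisters of its I/O group; if one canister goes
# offline, its paths are disabled and I/O continues through the partner canister.

def alua_path_states(preferred_node: str, nodes_online: dict[str, bool]) -> dict[str, str]:
    """Return a simplified ALUA-style state for each node's paths to one volume."""
    preferred_available = nodes_online.get(preferred_node, False)
    states = {}
    for node, online in nodes_online.items():
        if not online:
            states[node] = "offline"  # I/O through this node is disabled
        elif node == preferred_node or not preferred_available:
            states[node] = "active/optimized"  # surviving partner keeps the volume available
        else:
            states[node] = "active/non-optimized"
    return states


# Both canisters of the I/O group online:
print(alua_path_states("node1", {"node1": True, "node2": True}))
# node1 fails or is taken offline; the volume stays available through node2:
print(alua_path_states("node1", {"node1": False, "node2": True}))
```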
Systems with internal drives and systems without internal drives can be used as a storage
virtualization solution.
System topology
The system topology can be set up in the following ways.
Standard topology, where all node canisters in the system are at the same site.
Figure 5. Example of a standard system topology
HyperSwap topology, where the system consists of at least two I/O groups. Each I/O
group is at a different site. Both nodes of an I/O group are at the same site. A volume can be
active on two I/O groups so that it can immediately be accessed by the other site when a site is not available.
Figure 6. Example of a HyperSwap system topology
System management
The nodes in a clustered system operate as a single system and present a single point of control for system management and
service. System management and error reporting are provided through an Ethernet interface to one of
the nodes in the system, which is called the configuration node. The configuration node
runs a web server and provides a command-line interface (CLI). The configuration node is a role that
any node can take. If the current configuration node fails, a new configuration node is selected
from the remaining nodes. Each node also provides a command-line interface and web interface to
enable some hardware service actions.
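As an illustration of this single point of control, the following Python sketch opens an SSH session to the cluster management address and runs the lssystem command from the IBM Storage Virtualize CLI. The address, credentials, and use of the paramiko library are assumptions made for the example; treat it as a minimal sketch, not a supported management client.

```python
# Minimal sketch: query the system-wide CLI through the configuration node over SSH.
# The management address and credentials below are placeholders; paramiko is assumed.
import paramiko

MANAGEMENT_ADDRESS = "cluster.example.com"  # cluster management IP, served by the configuration node
USERNAME = "monitor_user"                   # placeholder account with CLI access
PASSWORD = "example-password"               # placeholder credential

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(MANAGEMENT_ADDRESS, username=USERNAME, password=PASSWORD)

# lssystem reports system-wide properties; the same CLI stays reachable at the management
# address even if the configuration node role moves to another node after a failure.
stdin, stdout, stderr = client.exec_command("lssystem")
print(stdout.read().decode())
client.close()
```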
The system can have up to four I/O groups in a clustered system.
Note: Clustering of FlashSystem 5200 systems using Ethernet
over RDMA is not supported.
Fabric types
I/O operations between the FlashSystem 5200, hosts, and externally virtualized storage are performed by using the Fibre Channel (SCSI), iSCSI, or NVMe over Fabrics standards. The FlashSystem 5200 and other IBM Storage Virtualize systems communicate with each other by using private SCSI commands.
Table 1 shows the fabric types that
can be used for communicating between hosts, nodes, and RAID storage systems. These fabric types can be
used at the same time.
Table 1. Communications types

Communications type                                                            | Host to node | Node to storage system | Node to node
Fibre Channel SAN (SCSI)                                                       | Yes          | Yes                    | Yes
iSCSI (10 Gbps Ethernet, 25 Gbps Ethernet)                                     | Yes          | Yes                    | No
iSER (RDMA-capable) 25 Gbps Ethernet                                           | Yes          | No                     | Yes
RDMA-capable Ethernet ports for node-to-node communication (25 Gbps Ethernet)  |              |                        |