
V8.6.2.x Configuration Limits and Restrictions for IBM FlashSystem 9500

Preventive Service Planning


Abstract

This document lists the configuration limits and restrictions specific to IBM FlashSystem 9500 software version 8.6.2.x

Content

The use of WAN optimization devices, such as Riverbed, is not supported in native Ethernet IP partnership configurations containing FlashSystem 9500 enclosures.

VMware Virtual Volumes (vVols)
 
Removing a vVol child pool is currently not supported. Contact IBM Support if you have a requirement to delete a vVol child pool.

Safeguarded Copy

The following restrictions apply for Safeguarded Copy:

  1. Mirrored volumes cannot be safeguarded.
  2. Mirroring of existing safeguarded source volumes is supported for migration purposes only.
  3. HyperSwap volumes are supported. However, recovery requires that they be converted to regular volumes before use.
  4. Pre-defined schedules are designed to avoid running out of FlashCopy maps in a single graph and to keep within the supported volume count. It is possible to create policies (by using the CLI only) that can potentially breach those limits, so caution must be exercised.
  5. The GUI does not support creating user-defined policies, but it can display any policies that were created by using the CLI.
  6. The source volume cannot be in an ownership group.
  7. The source volume cannot be used with Transparent Cloud Tiering (TCT).

Volume Mobility

Refer to the product documentation for the restrictions that apply to Volume Mobility: Migrating data between systems non-disruptively


Data Reduction Pools

The following restrictions apply for Data Reduction Pools (DRP):

  1. VMware vSphere Virtual Volumes (vVols) are not supported in a DRP.
  2. A volume in a DRP cannot be shrunk.
  3. A volume cannot be moved between I/O groups while it is in a DRP (use FlashCopy or Metro Mirror instead).
  4. A volume mirror cannot be split into a copy in a different I/O group.
  5. Real, used, free, and tier capacity is not reported per volume, only per pool.

RAID and Distributed RAID

FlashSystem 9500 systems support DRAID1 and DRAID6 arrays.


DRAID1

DRAID1 arrays with more than six member drives are not supported.

FCM3 38.4 TB drives are not supported in DRAID1 arrays.


DRAID Strip Size

For candidate drives, FlashSystem 9500 systems support a strip size of 256 KiB.

Extent Size

The minimum (and recommended) extent size is 4096 MiB.


Non-Disruptive Volume Move (NDVM)

The following Fibre Channel attached host types are supported for nondisruptively moving a volume between I/O groups (control enclosures):

Host Operating System | Host Multipathing | Host Clustering | Notes
AIX 7.2 | AIXPCM | Nondisruptive volume move can result in the same volume being mapped to different hosts in the same host cluster with different SCSI IDs. If the host cluster cannot tolerate this configuration, nondisruptive volume move cannot be used. | SAN boot is supported. NPIV is supported.
Microsoft Windows 2019 | MSDSM | Hyper-V Failover Cluster | SAN boot is supported.
Microsoft Windows 2016 | MSDSM | Hyper-V Failover Cluster | SAN boot is supported.
Red Hat 8 | Native | - | The original paths might need to be manually removed on the host before removing access to the old I/O group.
SLES 15 | Native | - | The original paths might need to be manually removed on the host before removing access to the old I/O group.
VMware 6.7 | Native | - | VAAI is supported.
VMware 6.5 | Native | - | VAAI is supported.
Solaris 11.3 SPARC | MPXIO | - | SAN boot is supported.

Note: For all other host types, I/O needs to be quiesced before moving a volume.

When moving a volume that is mapped to a host cluster, rescan disk paths on all host cluster nodes to ensure that the new paths are detected before removing access from the original I/O group.


Clustered Systems

A FlashSystem 9500 system requires native Fibre Channel SAN or, alternatively, 16 Gbps or 32 Gbps direct-attach Fibre Channel connectivity for communication between all nodes in the local cluster.

Note that support for 32 Gbps direct attachment requires an RPQ. Clustering can also be accomplished with 25 Gbps Ethernet for standard topologies.

Partnerships between systems for replication can use both Fibre Channel and native Ethernet connectivity. Distances greater than 300 meters are supported by using an FCIP link or Fibre Channel between source and target.

Clustering over Fibre Channel: Yes, up to 2 I/O groups
Clustering over 25 Gb Ethernet: Yes, up to 2 I/O groups
HyperSwap over Fibre Channel: Yes, up to 2 I/O groups
HyperSwap over Ethernet (25 Gb only): Yes, up to 2 I/O groups
Replication over Fibre Channel: Yes
Replication over Ethernet (10 Gb or 25 Gb): Yes


Transparent Cloud Tiering

Transparent cloud tiering on the system is defined by configuration limitations and rules. Refer to the IBM Documentation maximum limits page for details.

The following restrictions apply for Transparent Cloud Tiering:

  1. When a cloud account is created, it must continue to use the same encryption type throughout the life of the data in that cloud account. Even if the cloud account object is removed and re-created on the system, the encryption type for that cloud account cannot be changed while backup data for that system exists in the cloud provider.
  2. When performing rekey operations on a system with an encryption-enabled cloud account, perform the commit operation immediately after the prepare operation. Remember to retain the previous system master key (on USB or on the key server) because this key can still be needed to retrieve your cloud backup data when performing a T4 recovery or an import.
  3. Avoid the use of the 'Restore_uid' option when a backup is imported to a new cluster.
  4. Import of TCT data is only supported from systems whose backup data was created at v7.8.0.1 or later.

The following AWS regions are supported by this code-level:

  • us-east-1
  • us-west-1
  • us-west-2
  • ca-central-1
  • eu-west-1
  • eu-west-2
  • eu-west-3
  • eu-central-1
  • sa-east-1
  • ap-southeast-1
  • ap-southeast-2
  • ap-south-1
  • ap-northeast-1
  • ap-northeast-2

Snapshots and TCT
TCT cloud snapshots are supported with the system's new FlashCopy® management model based on the snapshot function.
However, cloud snapshots cannot coexist with user-owned legacy FlashCopy mappings.
For more information, refer to the following link:
Snapshot function

Snapshots
  • IBM recommends that the sum of volume copies and snapshots in a system should not exceed 50,000 when using standard pools. Data reduction pools are recommended for larger configurations.
  • IBM recommends that the number of volumes in a volume group should not exceed 128, especially if the sum of volume copies and snapshots in the system exceeds 32,100 (see the planning sketch below).
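
To make these recommendations easier to apply during capacity planning, the following minimal Python sketch checks a proposed configuration against the two thresholds above. It is illustrative only, not an IBM tool; the function name and inputs are assumptions.

    # Minimal planning sketch (not an IBM tool): check a proposed configuration
    # against the snapshot recommendations listed above.

    def check_snapshot_recommendations(volume_copies: int,
                                       snapshots: int,
                                       largest_volume_group: int,
                                       standard_pools: bool) -> list:
        """Return a list of warnings for the planned configuration."""
        warnings = []
        total = volume_copies + snapshots

        # Recommendation: keep volume copies + snapshots at or below 50,000
        # when standard pools are in use.
        if standard_pools and total > 50_000:
            warnings.append(
                f"{total} volume copies + snapshots exceeds the 50,000 "
                "recommendation for standard pools; consider data reduction pools."
            )

        # Recommendation: keep volume groups at 128 volumes or fewer, especially
        # when the system-wide total of volume copies + snapshots exceeds 32,100.
        if largest_volume_group > 128 and total > 32_100:
            warnings.append(
                f"Largest volume group has {largest_volume_group} volumes; "
                "keep volume groups at 128 volumes or fewer at this scale."
            )
        return warnings

    # Example: 20,000 volume copies, 35,000 snapshots, a 200-volume volume group.
    for w in check_snapshot_recommendations(20_000, 35_000, 200, True):
        print("WARNING:", w)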

Node Memory

Nodes in an I/O group cannot be replaced by nodes with less memory when compressed volumes are present.

If a customer must migrate from 64GB to 32GB memory node canisters in an I/O group, they must first remove all compressed volume copies in that I/O group.

A customer must not perform the following sequence of actions:

  1. Create an I/O group with node canisters with 64GB of memory.
  2. Create compressed volumes in that I/O group.
  3. Delete both node canisters from the system with CLI or GUI.
  4. Install new node canisters with 32GB of memory and add them to the configuration in the original I/O group with CLI or GUI.

HyperSwap

Configure your host multipath driver to use an ALUA-based path policy.

Due to the requirement for multiple access I/O groups, SAS-attached host types are not supported with HyperSwap volumes.

A volume configured with multiple access I/O groups, on a system in the storage layer, cannot be virtualized by a system in the replication layer. This restriction prevents a HyperSwap volume on one system from being virtualized by another system.


Direct Attachment

IBM System Storage DS8000 series is not supported for direct attachment.

SAN boot on Windows 2019 (QLogic HBA) is not supported with 32 Gbps direct-attached systems.


16 Gbps Fibre Channel Node Connection

Refer to the IBM System Storage Interoperation Center (SSIC) for the 16 Gbps Fibre Channel configurations that are supported with 16 Gbps node hardware.

Note that 16 Gbps node hardware is supported only when connected to Brocade and Cisco 8 Gbps or 16 Gbps fabrics.

Direct connections to 2 Gbps or 4 Gbps SANs, or direct host attachment to 2 Gbps or 4 Gbps ports, are not supported.

Other configured switches that are not directly connected to the 16 Gbps Node hardware can be any supported fabric switch as currently listed in the SSIC.


25 Gbps Ethernet Canister Connection

A maximum of six 2-port 25 Gbps Ethernet adapters is supported in each node canister for iSCSI communication with iSCSI-capable Ethernet ports in hosts connected through Ethernet switches. The 25 Gbps Ethernet adapters do not support FCoE.

There are two types of 25 Gbps Ethernet adapter features supported:

  1. RDMA over Converged Ethernet (RoCE)
  2. Internet Wide-area RDMA Protocol (iWARP)

Either adapter type works for standard iSCSI communication with hosts, that is, without using Remote Direct Memory Access (RDMA). RDMA can be used on these adapters for clustering purposes over iWARP.

When RDMA is used with a 25 Gbps Ethernet adapter, RDMA links work only between RoCE ports or between iWARP ports (that is, from a RoCE node canister port to a RoCE port on a host, or from an iWARP node canister port to an iWARP port on a host).

Changing the MTU value to anything other than 1500 is not allowed on RoCE ports. If you are upgrading from an older version with an MTU greater than 1500 on RoCE ports, change those ports to MTU 1500 before upgrading.

For Ethernet switches and adapters supported in hosts, visit the SSIC.



IP Partnership

IP partnerships are not supported on 1 Gb Ethernet ports; those ports are used only for system management.

Using an Ethernet switch to convert a 25 Gb to a 1 Gb IP partnership, or a 10 Gb to a 1 Gb IP partnership, is not supported. Therefore, the IP infrastructure on both partnership sites must match. Bandwidth limiting on IP partnerships between both sites is supported.


Host Limitations

SAN BOOT function on AIX 7.2 TL5
SAN BOOT is not supported for AIX 7.2 TL5 when connected by using the NVME/FC protocol.

N2225/N2226 SAS HBA
VMware 6.7 (guest OS SLES 12 SP4) connected by N2225/N2226 SAS host adapters is not supported.

Lenovo 430-16e/8e SAS HBA
VMware 6.7 and 6.5 (guest OS SLES 12 SP4) connected by Lenovo 430-16e/8e SAS host adapters are not supported.
Windows 2019 and 2016 connected by Lenovo 430-16e/8e SAS host adapters are not supported.

Windows 2016 HyperV
RHEL v7.1 guests on Windows 2016 HyperV, with Virtual Fibre Channel, are not supported.

iSER
FlashSystem 9500 nodes do not support iSER host attachment.

Windows NTP server 
The Linux NTP client used by SAN Volume Controller might not always function correctly with the Windows W32Time NTP server.

IBM i connected using directly attached Fibre Channel

IBM i is not supported as a host operating system when connected by using directly attached Fibre Channel to FlashSystem or SAN Volume Controller systems running version 8.6.1.0 or later.

IBM i is supported as a host operating system when connected to FlashSystem or SAN Volume Controller systems through a Fibre Channel switch.


Fabric Limitations

Only one Fibre Channel Forwarder (FCF) switch per fabric is supported.

Storage connected directly to a Cisco Fabric Extender (FEX) is not supported.


Priority Flow Control for iSCSI / iSER

Priority Flow Control for iSCSI/iSER is supported on Emulex and Chelsio adapters (SVC supported) with all DCBX-enabled switches.


Policy-based Replication
The following configuration limits and restrictions apply to policy-based replication:
  1. The name of a volume group cannot be changed while a replication policy is assigned.
  2. The name of a volume cannot be changed while the volume is in a volume group with a replication policy assigned.
  3. Ownership groups are not supported by policy-based replication.
  4. Policy-based replication is not supported on HyperSwap topology systems.
  5. Remote Copy and policy-based replication can be configured on the same volume only if the volume is operating as the production volume for both types of replication, for the purpose of migrating from Remote Copy to policy-based replication.
  6. Policy-based replication cannot be used with volumes that are:
  •    Image mode
  •    HyperSwap
  •    Configured to use Transparent Cloud Tiering (TCT)
The following actions cannot be performed on a volume while the volume is in a volume group with a replication policy assigned:
  1. Resize (expand or shrink)
  2. Migrate to image mode, or add an image mode copy
  3. Move to a different I/O group

Policy-based replication for high availability

A limit of one I/O group applies if the system has any storage partitions.

If using policy-based high availability, the following restrictions apply:

  • A limit of one I/O group can be configured in the system.
  • A limit of one partnership per system can be configured.
  • Only Fibre Channel SCSI hosts can be configured in a storage partition configured for high availability.
  • Host clusters cannot be used for hosts configured in a storage partition configured for high availability.
  • A partnership cannot be used for both high availability and asynchronous replication.
  • Only the 'standard' system topology is supported.
  • Reduced host interoperability support; only RHEL and VMware ESXi hosts are supported.
  • Only SCSI Fibre Channel hosts are supported.
  • Persistent Reserve commands are not supported.

Policy-based replication with VMware Virtual Volumes (vVols)
 

If using policy-based replication with VMware Virtual Volumes (vVols), the following restrictions apply:

  • A limit of one I/O group can be configured in the system.
  • A limit of one partnership per system can be configured for vVol replication. Additional partnerships can be configured for non-vVol replication.
  • Only the 'standard' system topology is supported.

Maximum Configurations

Configuration limits for FlashSystem 9500:

Property
Context
Maximum Number
Comments
System (Cluster) Properties
I/O groups / Control Enclosures per system (cluster)
2
Each control enclosure contains two node canisters
Active nodes per system
4
Arranged as two I/O groups
Spare Nodes per system N/A
Nodes per fabric
64
Maximum number of IBM Storage Virtualize nodes that can be present on the same Fibre Channel fabric, with visibility of each other
Inter-cluster partnerships per system
3
A system can be partnered with up to three remote systems. No more than four systems can be in the same connected set
IP Quorum devices per system
5
Data encryption keys per system
1,024
Key servers per system 4
Node Properties 
iSCSI sessions per node 1,024 2048 in IP failover mode (when partner node is unavailable).
This limit includes both iSCSI Host Attach AND iSCSI Initiator sessions
iSER sessions per node 256
iSCSI + iSER sessions per node 1,088
Managed Disk Properties 
Managed disks (MDisks) per system
4,096
The maximum number of logical units that can be managed by a system, including internal arrays.

Internal distributed arrays consume 16 logical units.

This number also includes external MDisks that are not configured into storage pools (managed disk groups)
Managed disks per storage pool (managed disk group)
128
Storage pools per system
1,024
Parent pools per system
128
Child pools per system
1,023
Managed disk extent size
8,192 MB
Capacity for an individual internal managed disk (array)
-
No limit is imposed beyond the maximum number of drives per array limits.
The maximum size depends on the extent size of the Storage Pool.
Comparison Table: Maximum Volume, MDisk, and System capacity for each extent size.
Capacity for an individual external managed disk
1 PB
Note: External managed disks larger than 2 TB are only supported for certain types of Storage Systems. Refer to the supported hardware matrix for further details.
The maximum size depends on the extent size of the Storage Pool.
Comparison Table: Maximum Volume, MDisk, and System capacity for each extent size.
Total storage capacity manageable per system
32 PB
The maximum requires an extent size of 8192 MB to be used

This limit represents the per system maximum of 2^22 extents.

Comparison Table: Maximum Volume, MDisk, and System capacity for each extent size.
Maximum Provisioning policies 40
Data Reduction Pool Properties
Data Reduction Pools per system
4
MDisks per Data Reduction Pool
128
Volume copies per Data Reduction Pool 65,000
Extents per I/O group per Data Reduction Pool
524,288 (512K)
Volume (Virtual Disk) Properties
Basic Volumes per system
32,500
(when no remote copy relationships configured)
or
15,864
(when remote copy relationship(s) exists)
Each Basic Volume uses one VDisk, each with one copy.

If a Remote Copy partnership exists to a system that supports a lower number of volumes, the maximum number of volumes is reduced to the lower limit, or 8192 if that is greater.

For example, if one system has a limit of 32,500 volumes and the other has a limit of 8192 volumes, both systems are limited to 8192 volumes.
Note that if traditional remote copies are configured, the maximum host mappable volumes limit should be set to 15,864
Volume copies in host mappable volumes per system 32,500
Stretched volumes per system N/A
HyperSwap volumes per system 2,000 Each HyperSwap volume uses 4 VDisks, each with one copy, 1 active-active remote copy relationship, and 4 FlashCopy mappings.
Volume copies per volume 2
Total mirrored volume capacity per I/O group 1 PB
Volumes per I/O group
(Volumes per caching I/O group)
- No limit is imposed here beyond the volumes per system limit.
Volume Groups per system 1,024
Volume groups per storage partition 1,024
Volumes per volume group 512
IBM recommends that the number of volumes in a volume group should not exceed 128, especially if the sum of volume copies and snapshots in the system exceeds 32,100
Volumes per storage pool - No limit is imposed beyond the volumes per system limit
Fully allocated volume capacity 256 TB Maximum size for an individual fully allocated volume.

The maximum size depends on the extent size of the Storage Pool.
Comparison Table: Maximum Volume, MDisk, and System capacity for each extent size.
Thin-provisioned (space-efficient) per-volume capacity for volumes in regular and data reduction pools 256 TB Maximum size for an individual thin-provisioned volume.

The maximum size depends on the extent size of the Storage Pool.
Comparison Table: Maximum Volume, MDisk, and System capacity for each extent size.
Host mappings per system 64,000 See also - volume mappings per host object
Host Properties 
Host objects (IDs) per system
2,048
Host objects per storage partition - No limit is imposed here beyond the host object limits
Host objects (IDs) per I/O group
2,048
Refer to the additional Fibre Channel and iSCSI host limits
Volume mappings per host object
2,048
Although IBM FlashSystem 9500 allows the mapping of up to 2048 volumes per host object, not all hosts are capable of accessing or managing this number of volumes. The practical mapping limit is restricted by the host OS, not IBM FlashSystem 9500.
Note: this limit does not apply to hosts of type adminlun (used to support VMware vVols).
Portset Objects per system 72 FC + Ethernet
IP address objects per system 2,048
IP addresses per port 64
When node failover occurs, Ethernet ports that have the same ID are configured with the IP addresses of the partner. Hence there can be a maximum of 128 IP addresses configured per Ethernet port during failover.
For Emulex ports, there can be a maximum of 3 unique VLANs per port and a maximum of 32 IP addresses per port.
IP address objects per node 256
Unique IP addresses per port 64
Routable IP addresses per port 1
IP addresses per node per portset
Host: 4
Remote Copy (IP Replication): 1
Remote Copy (High Speed Ethernet): 2
Storage: Number of Ethernet ports on the node
Fibre Channel ports per portset 4
Fibre Channel host objects per portset - Same as the maximum number of hosts supported on that platform
Host Cluster Properties
Host clusters per system
512
Hosts in a host cluster
128
Fibre Channel Host Properties 
Fibre Channel hosts per system
2,048
Fibre Channel host ports per system
4,096
Fibre Channel hosts per I/O group
2,048
Fibre Channel host ports per I/O group
2,048
NPIV Direct Attach Logins per Fibre Channel WWPN 128
Fibre Channel host ports per host object (ID)
32
iSCSI Host Properties 
iSCSI hosts per system
2,048
iSCSI hosts per I/O group
1,024
iSCSI names per host object
4
iSCSI names per I/O group
1,024

Adapter Hardware Properties

Gen4 4-port 32 Gbps FC adapters per node / canister 6
4-port 64 Gbps FC adapters per node / canister 3
On board 1 Gbps Ethernet I/O ports per node / canister 1 Technician port
On board 1 Gbps Ethernet management ports per node / canister 3 3 management ports (including 1 technician port)
2-port 25 Gbps iWARP adapters per node / canister 6
25Gbps iWARP ports per node canister 12
2-port 25 Gbps RoCE adapters per node canister 6
2-port 100 Gbps NVMe/RDMA RoCEv2 adapters per node canister 6
NVMe over Fabrics Host Properties
NVMe Qualified Names (NQNs) per host object (ID) 2
NVMe over Fibre Channel hosts per system
32
This limit is not policed by the IBM Storage Virtualize software. Any configurations that exceed this limit might experience significant adverse performance impact.
NVMe over Fibre Channel hosts per I/O group
16
This limit is not policed by the IBM Storage Virtualize software. Any configurations that exceed this limit might experience significant adverse performance impact.
NVMe connections per FC port 16 This limit is the number of FC2 logins supported.
NVMe over RDMA hosts per system 512
NVMe over RDMA hosts per I/O group 256
Primary RDMA connections per port 256
NVMe over RDMA - Host queue size 64
NVMe over RDMA - Number of IO queues 8
NVMe over TCP hosts per system 512
NVMe over TCP hosts per I/O group 256
NVMe over TCP - Host queue size 128
NVMe over TCP - Number of IO queues 8
Copy Services Properties
Total Remote Copy capacity per IO group (includes Metro Mirror, Volume Migration and HyperSwap)
2 PiB
This limit is the total capacity for all master and auxiliary volumes in the I/O group.
Remote Copy Metro Mirror relationships per system 10,000
Remote Copy Active-Active relationships (HyperSwap volumes) per system 2,000
Remote Copy migration relationships per system 256
Maximum round-trip latency for Metro Mirror, HyperSwap and Migration relationships 3ms
Remote Copy relationships per consistency group - No limit is imposed beyond the Remote Copy relationships per system limit
Remote Copy consistency groups per system 256
3-site Remote Copy relationships per consistency group 256
3-site Remote Copy consistency groups per system 16
3-site Metro Mirror Remote Copy relationships per system 2,500
3-site HyperSwap Remote Copy relationships per system 2,000
FlashCopy Properties
FlashCopy mappings per system
15,864
FlashCopy consistency groups per system 500
FlashCopy mappings per consistency group 512
FlashCopy targets per source
256
Snapshots per system
64,999
 (when standard pools aren't configured)

50,099

(when standard pools are configured)

Note: 50,099 is not a policed limit

This maximum requires a single basic volume.  For each additional basic volume created, the maximum number of snapshots is reduced by one.

For example, on a system with 1,000 basic volumes, the maximum number of snapshots per system is 64,000.
Snapshots per volume copy
64,999
 (when standard pools aren't configured)

50,099

(when standard pools are configured)

Note: 50,099 is not a policed limit

This maximum requires a single basic volume. For each additional volume copy created, the maximum number of snapshots is reduced by one.

For example, on a system with 1,000 mirrored volumes, the maximum number of snapshots per system is 63,000.
Thin-Clone, Clone Volumes per system 32,499
Thin-Clone Volumes per source volume 32,499
Clone Volumes per source volume 32,499
Total FlashCopy bitmap allowance per I/O group 20 GiB Of which legacy FlashCopy can have up to 4 GiB
Total FlashCopy volume capacity per I/O group 40 PiB Of which legacy FlashCopy can have up to 8 PiB
Safeguarded policies per system 32 Includes 3 predefined and 29 user-defined policies
Snapshot policies per system 32
Replication Properties
Policy-based replication
Total volume capacity per I/O group using policy-based replication (asynchronous replication or HA) 4 PiB
Volumes using policy-based replication (asynchronous replication or HA) 32,500
VMware Virtual Volumes (vVols) using asynchronous policy-based replication See notes Contact IBM to review the supported limits when planning to use vVol replication
Volume groups per system using policy-based replication - No limit is imposed beyond the volume groups per system limit
Volume groups per system using policy-based replication for VMware Virtual Volumes (vVols) See notes Contact IBM to review the supported limits when planning to use vVol replication
Volumes per volume group using policy-based replication - No limit is imposed beyond the volumes per volume group limit
VMware Virtual Volumes (vVols) per volume group using policy-based replication See notes Contact IBM to review the supported limits when planning to use vVol replication
Maximum round-trip latency for asynchronous policy-based replication using Fibre Channel partnerships 250ms
Maximum round-trip latency for asynchronous policy-based replication using IP partnerships 80ms
Maximum round-trip latency for asynchronous policy-based replication using High Speed Ethernet partnerships 3ms
Maximum round-trip latency for policy-based high availability 1ms
Replication policies per system 32
I/O groups per system using policy-based replication -
No limit is imposed beyond the I/O groups per system (cluster) limit.
Note: Policy-based high availability supports a maximum of one I/O group in the system
I/O group connections per system using policy-based replication 12 Example: If I/O group 0 in system 1 is replicating to two I/O groups in system 2, this counts as two connections. If I/O group 1 in system 1 is also replicating to these same I/O groups, that counts as two additional connections, for a total of four connections.
Total number of I/O groups in remote systems that can be configured for policy-based replication for an I/O group 6
Storage partitions per system 4
IP Partnership Properties 
Inter-cluster IP partnerships per system
3
A system can be partnered with up to three remote systems.
Inter-site links per IP partnership
2
A maximum of two inter-site links can be used between two IP partnership sites.
Ports per node per IP partnership
1
A maximum of one port per node can be used for IP partnership.

Replication over High Speed Ethernet Properties

High speed Ethernet partnerships per system 1
High speed portsets per system 6
High speed portsets per high speed partnership 2
IP addresses per node per high speed portset 2
Internal Storage Properties
SAS chains per control enclosure
2
Expansion enclosures per SAS chain
3 x 2Uxx
1 x 5U92
Expansion enclosures per control enclosure
6
Expansion enclosures per system 12
Drives per I/O group
232
Maximum drives per system
464
Includes SAS and NVMe drives
SCM drives per I/O group 12
Maximum NVMe drives per system 96
Distributed RAID Array Properties 
Arrays per system
20
Encrypted arrays per system
20
Arrays per I/O group
10
Member drives per array
128
FCM arrays per storage pool 1 Existing multi-array storage pools created before 8.5.0 are supported as 'grandfathered'
Drives per array (RAID-1)
6
Minimum-Maximum member drives per RAID-6 array
6-128
Minimum-Maximum member drives per RAID-6 array (NVMe drives) 6-48
Minimum-Maximum member drives per RAID-1 array
2-16
Rebuild areas per non-FCM array
1-4
Rebuild areas per FCM array
1
Rebuild areas per non-FCM RAID-1 array 0 for 2-drive arrays; 1 for arrays with more than 2 drives
Rebuild areas per FCM RAID-1 array 0 for 2-drive arrays; 1 for arrays with more than 2 drives
Minimum-Maximum stripe width for RAID-6 array
5-16
Maximum FCM drive capacity for RAID-1 array 19.2 TB
Maximum drives added to an array in a single DRAID expansion 12 for DRAID-1 (FS9200 only); 42 for DRAID-5 and DRAID-6
Minimum-Maximum stripe width for RAID-1 array
2-2
Maximum drive capacity for RAID-1 array
8 TB
This limit applies to HDDs
Drives added to an array in a single RAID-1 expansion
12
Drives added to an array in a single RAID-6 or RAID-5 expansion 42
Concurrent DRAID expansions per system
4
Concurrent DRAID expansions per parent storage pool
1
External Storage System Properties
Storage system WWNNs per system (cluster)
1,024
Storage system WWPNs per system (cluster)
1,024
WWNNs per storage system
16
WWPNs per WWNN
16
WWPNs per MDisk 16 WWPNs per MDisk means the limit of Storage System WWPNs that can have LUN mappings for a specific Storage System Logical Unit (LU)
LUNs (managed disks) per storage system
-
No limit is imposed beyond the managed disks per system limit
System and User Management Properties 
User accounts per system
400
Includes the default user accounts
User groups per system
256
Includes the default user groups
Ownership groups per system 64
Remote authentication (LDAP) services per system 1
Multifactor authentication services per system 1
Single sign-on authentication services per system 1
Authentication services per system
1
DNS servers per system 2
NTP servers per system
1
iSNS servers per system 1
Concurrent OpenSSH sessions per system
32
Two Person Integrity Requests per system 4
Patches per cluster 7
Patches per node (Includes Installed, Obsolete, and Error) 30
Event notification Properties
SNMP servers per system
6
Syslog servers per system
6
Email (SMTP) servers per system
6
Email servers are used in turn until the email is successfully sent
Email users (recipients) per system
12
LDAP servers per system
6
REST API Properties
Maximum active connections per cluster 4 RESTful API
Maximum requests/sec to auth endpoint 3 RESTful API
Maximum requests/sec to command endpoints 10 RESTful API
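
The "Snapshots per system" and "Snapshots per volume copy" ceilings in the Maximum Configurations table above are reduced by one for each additional basic volume or volume copy. The following minimal Python sketch reproduces that arithmetic and the worked example from the table; it is illustrative only, and the function name is an assumption.

    # Illustration of the snapshot-limit arithmetic described in the table above:
    # the ceiling is reduced by one for each basic volume beyond the first.

    CEILING_NO_STANDARD_POOLS = 64_999    # when standard pools aren't configured
    CEILING_WITH_STANDARD_POOLS = 50_099  # when standard pools are configured (not a policed limit)

    def max_snapshots_per_system(basic_volumes: int, standard_pools: bool) -> int:
        ceiling = (CEILING_WITH_STANDARD_POOLS if standard_pools
                   else CEILING_NO_STANDARD_POOLS)
        return ceiling - max(basic_volumes - 1, 0)

    # Example from the table: 1,000 basic volumes with no standard pools -> 64,000.
    assert max_snapshots_per_system(1_000, standard_pools=False) == 64_000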
 
 

Extents

The following table compares the maximum volume, MDisk, and system capacity for each extent size.

Extent size: 4,096 MB
  Maximum non thin-provisioned volume capacity: 256 TB
  Maximum thin-provisioned volume capacity (for regular pools): 256 TB
  Maximum thin-provisioned and compressed volume size in data reduction pools: 256 TB
  Maximum total data reduced volume capacity in a single data reduction pool per I/O group: 2 PB
  Maximum virtualized MDisk capacity: 512 TB
  Maximum DRAID MDisk capacity: 8 PB
  Total storage capacity manageable per system (2): 16 PB

Extent size: 8,192 MB
  Maximum non thin-provisioned volume capacity: 256 TB
  Maximum thin-provisioned volume capacity (for regular pools): 256 TB
  Maximum thin-provisioned and compressed volume size in data reduction pools: 256 TB
  Maximum total data reduced volume capacity in a single data reduction pool per I/O group: 4 PB
  Maximum virtualized MDisk capacity: 1 PB
  Maximum DRAID MDisk capacity: 16 PB
  Total storage capacity manageable per system (2): 32 PB

1 Unless limited by the total storage capacity manageable per the system

2 The total capacity values assume that all of the storage pools in the system use the same extent size. 
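
As a cross-check of the totals above, the per-system maximum of 2^22 extents (noted in the Maximum Configurations table) multiplied by the extent size reproduces the "Total storage capacity manageable per system" figures; the table reports the binary values as PB. A minimal Python sketch:

    # Cross-check: total manageable capacity = 2^22 extents x extent size.
    MAX_EXTENTS_PER_SYSTEM = 2 ** 22  # per-system extent maximum
    MIB = 1024 ** 2                   # bytes per MiB
    PIB = 1024 ** 5                   # bytes per PiB

    for extent_size_mib in (4096, 8192):
        total_bytes = MAX_EXTENTS_PER_SYSTEM * extent_size_mib * MIB
        print(f"{extent_size_mib} MB extents -> {total_bytes / PIB:.0f} PiB")

    # Prints:
    # 4096 MB extents -> 16 PiB
    # 8192 MB extents -> 32 PiB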

[{"Type":"MASTER","Line of Business":{"code":"LOB26","label":"Storage"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"ST2HTZ","label":"IBM FlashSystem Software"},"ARM Category":[{"code":"a8m0z000000bqPqAAI","label":"Documentation"}],"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"All Versions"}]

Document Information

Modified date:
03 May 2024

UID

ibm17039374