Preventive Service Planning
Abstract
This document lists the configuration limits and restrictions specific to IBM FlashSystem 9100 and 9200 software version 8.5.2.x.
Content
Safeguarded Copy
The following restrictions apply for Safeguarded Copy:
- Mirrored volumes cannot be safeguarded. Stretched cluster is not supported
- Mirroring of existing safeguarded source volumes is supported for migration purposes only
- HyperSwap volumes are supported. However, recovery requires that they be converted to regular volumes before use
- Pre-defined schedules are designed to avoid running out of FlashCopy maps in a single graph and to keep within the supported volume count. It is possible to create policies (using the CLI only) that can potentially breach those limits, so caution must be exercised; an illustrative schedule check follows this list
- The GUI does not support creating user-defined policies, but it can display any that were created by using the CLI.
- The source volume cannot be in an ownership group
- The source volume cannot be used with Transparent Cloud Tiering (TCT).
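Because user-defined policies created with the CLI can breach the limits that the pre-defined schedules are designed to respect, it can be worth checking a proposed schedule against the published limits before creating it. The following is a minimal, illustrative sketch only (the function and its inputs are hypothetical); the two limits used are taken from the Maximum Configurations table later in this document.

```python
# Illustrative sketch only: check a proposed Safeguarded Copy schedule against
# two published limits (256 FlashCopy mappings per graph and 15,864 Safeguarded
# volumes per system). Function and parameter names are hypothetical.
MAPPINGS_PER_GRAPH = 256
SAFEGUARDED_VOLUMES_PER_SYSTEM = 15_864

def schedule_within_limits(backup_interval_hours: float,
                           retention_days: int,
                           safeguarded_volumes: int) -> bool:
    """Assumes each retained backup of a source volume keeps one FlashCopy
    mapping in that volume's dependency graph (the concern described above)."""
    backups_retained = int(retention_days * 24 / backup_interval_hours)
    return (backups_retained <= MAPPINGS_PER_GRAPH
            and safeguarded_volumes <= SAFEGUARDED_VOLUMES_PER_SYSTEM)

# Hourly backups retained for 14 days would keep 336 mappings per graph,
# which breaches the 256-mappings-per-graph limit.
print(schedule_within_limits(backup_interval_hours=1, retention_days=14,
                             safeguarded_volumes=500))   # False
```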
Volume Mobility
The following restrictions apply for Volume Mobility (nondisruptive volume move between systems):
- No 3-site support
- Not intended to be a DR or HA solution
- No support for consistency groups, change volumes, or expanding volumes
- Reduced host interoperability support. Only the following host operating systems are supported
- RHEL
- SLES
- ESXi
- Solaris
- HP-UX.
- SCSI only: Fibre Channel and iSCSI are supported; NVMe is not supported
- No SCSI persistent reservations or Offloaded Data Transfer (ODX).
Data Reduction Pools
The following restrictions apply for Data Reduction Pools (DRP):
- VMware vSphere Virtual Volumes (vVols) are not supported in a DRP
- A volume in a DRP cannot be shrunk
- A volume cannot be moved between I/O groups when the volume is in a DRP (use FlashCopy or Metro Mirror / Global Mirror instead)
- A volume mirror cannot be split to a copy in a different I/O group
- Real/used/free/tier capacities are not reported per volume, only per pool.
Distributed RAID
FlashSystem 9100 and 9200 systems cannot create new DRAID5 arrays with more than 8 member drives (existing DRAID5 arrays with more than 8 members are supported). Expansion beyond 8 member drives is also supported for new or existing DRAID5 arrays.
DRAID Strip Size
For candidate drives with a capacity greater than 4 TB, a strip size of 128 cannot be specified for either RAID-5 or RAID-6 DRAID arrays. Use a strip size of 256 instead.
Non-Disruptive Volume Move (NDVM)
The following Fibre Channel attached host types are supported for nondisruptively moving a volume between I/O groups (control enclosures):
| Host Operating System | Host Multipathing | Host Clustering | Notes |
|---|---|---|---|
| AIX 7.2 | AIXPCM | Nondisruptive volume move can result in the same volume being mapped to different hosts in the same host cluster with different SCSI IDs. If the host cluster cannot tolerate this configuration, then nondisruptive volume move cannot be used. | SAN boot is supported. NPIV is supported. |
| Microsoft Windows 2019 | MSDSM | Hyper-V Failover Cluster | SAN boot is supported |
| Microsoft Windows 2016 | MSDSM | Hyper-V Failover Cluster | SAN boot is supported |
| Red Hat 8 | Native | | The original paths might need to be manually removed on the host before removing access to the old I/O group |
| SLES 15 | Native | | The original paths might need to be manually removed on the host before removing access to the old I/O group |
| VMware 6.7 | Native | | VAAI is supported |
| VMware 6.5 | Native | | VAAI is supported |
| Solaris 11.3 SPARC | MPXIO | | SAN boot is supported |
Note: For all other host types, I/O must be quiesced before moving a volume.
When moving a volume that is mapped to a host cluster, rescan disk paths on all host cluster nodes to ensure that the new paths are detected before removing access from the original I/O group.
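The SCSI ID caveat for host clusters (see the AIX 7.2 row in the table above) can be checked up front. Below is a minimal sketch, assuming you have already collected the SCSI ID at which each host in the cluster sees the volume (for example, from lshostvdiskmap output); the data structure and function name are hypothetical.

```python
# Minimal sketch: flag volumes that members of the same host cluster see at
# different SCSI IDs, in which case nondisruptive volume move should not be
# used. 'mappings' is a hypothetical structure: {volume: {host: scsi_id}}.
def volumes_with_mismatched_scsi_ids(mappings: dict) -> list:
    return [volume for volume, host_ids in mappings.items()
            if len(set(host_ids.values())) > 1]

example = {
    "vol0": {"node1": 0, "node2": 0},   # consistent SCSI ID 0 on both hosts
    "vol1": {"node1": 1, "node2": 3},   # mismatched SCSI IDs
}
print(volumes_with_mismatched_scsi_ids(example))   # ['vol1']
```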
Clustered Systems
A FlashSystem 9100 or 9200 system requires native Fibre Channel SAN or alternatively 16 Gbps or 32 Gbps Direct Attach Fibre Channel connectivity for communication between all nodes in the local cluster. Support for 32 Gbps direct attachment requires an RPQ. Clustering can also be accomplished with 25 Gbps Ethernet, for standard topologies.
Partnerships between systems for Metro Mirror or Global Mirror replication can be used with both Fibre Channel and native Ethernet connectivity. Distances greater than 300 meters are supported by using an FCIP link or Fibre Channel between source and target.
| Clustering over Fibre Channel | Clustering over 25 Gb Ethernet | HyperSwap over Fibre Channel | HyperSwap over Ethernet (25 Gb only) | Metro / Global Mirror replication over Fibre Channel | Metro / Global Mirror replication over Ethernet (10 Gb or 25 Gb) |
|---|---|---|---|---|---|
| Yes, up to 4 I/O groups | Yes, up to 4 I/O groups | Yes, up to 4 I/O groups | Yes, up to 4 I/O groups | Yes | Yes |
Transparent Cloud Tiering
Transparent cloud tiering on the system is defined by configuration limitations and rules. See the IBM Documentation maximum limits page for details.
The following restrictions apply for Transparent Cloud Tiering:
- When a cloud account is created, it must continue to use the same encryption type throughout the life of the data in that cloud account. Even if the cloud account object is removed and remade on the system, the encryption type for that cloud account cannot be changed while backup data for that system exists in the cloud provider.
- When performing rekey operations on a system with an encryption-enabled cloud account, perform the commit operation immediately after the prepare operation. Remember to retain the previous system master key (on USB or in the key server), because this key can still be needed to retrieve your cloud backup data when performing a T4 recovery or an import.
- Avoid the use of the 'Restore_uid' option when backup is imported to a new cluster.
- Import of TCT data is only supported from systems whose backup data was created at v7.8.0.1 or later.
The following AWS regions are supported at this code level:
- us-east-1
- us-west-1
- us-west-2
- ca-central-1
- eu-west-1
- eu-west-2
- eu-west-3
- eu-central-1
- sa-east-1
- ap-southeast-1
- ap-southeast-2
- ap-south-1
- ap-northeast-1
- ap-northeast-2
Encryption and TCT
There is a small possibility that, on a system that uses both Encryption and Transparent Cloud Tiering, the system can enter a state where an encryption rekey operation is stuck in 'prepared' or 'prepare_failed' state, and a cloud account is stuck in 'offline' state. The user is unable to cancel or commit the encryption rekey because the cloud account is offline. The user is unable to remove the cloud account because an encryption rekey is in progress.
The system can be recovered from this state by using a T4 Recovery procedure.
It is also possible that SAS-attached storage arrays go offline.
The following two scenarios identify where this might happen:
Scenario A
- Using USB encryption and Cloud.
- A new USB key is prepared by using 'chencryption -usb newkey -key prepare'.
- The new presumptive key is deleted from all USB sticks before the new key is committed.
- All nodes in the system are rebooted.
- The cloud account is offline as it cannot get the presumptive key. The cloud account cannot be removed, and the encryption rekey cannot be completed or cancelled. The system remains stuck in these cloud and encryption states.
- Any SAS-attached arrays are offline and locked.
- The system can be restored by T4 to a previous config backup.
Scenario B
- Using key server encryption and Cloud.
- A new key server key is prepared by using 'chencryption -keyserver newkey -key prepare'.
- The new presumptive key is deleted from the key server before the new key is committed.
- All nodes in the system are rebooted.
- The cloud account is offline as it cannot get the presumptive key. The cloud account cannot be removed, and the encryption rekey cannot be completed or cancelled. The system remains stuck in these cloud and encryption states.
- SAS-attached arrays are not affected.
- The system can be restored by T4 to a previous config backup.
NPIV (N_Port ID Virtualization)
The following recommendations and restrictions should be followed when implementing NPIV:
FCoE is not supported with NPIV.
Operating systems not currently supported for use with NPIV:
- HPUX 11iV2
- Veritas DMP multipathing on Windows with RAID-5 volumes in VxVM
- IBMi in Direct attach
Other Operating Systems
Other operating systems might also experience the same issue when modifying the NPIV state from "Transitional" to "Disabled", in which case the operating system-specific rescan command can be used.
Fabric Attachment
NPIV mode is only supported when used with Brocade or Cisco Fibre Channel SAN switches that are NPIV capable.
Node Memory
Nodes in an I/O group cannot be replaced by nodes with less memory when compressed volumes are present.
If a customer must migrate from 64 GB to 32 GB memory node canisters in an I/O group, all compressed volume copies in that I/O group must be removed first.
A customer must not perform the following sequence:
- Create an I/O group with node canisters with 64 GB of memory.
- Create compressed volumes in that I/O group.
- Delete both node canisters from the system with CLI or GUI.
- Install new node canisters with 32 GB of memory and add them to the configuration in the original I/O group with CLI or GUI.
HyperSwap
Configure your host multipath driver to use an ALUA-based path policy.
Due to the requirement for multiple access I/O groups, SAS-attached host types are not supported with HyperSwap volumes.
A volume configured with multiple access I/O groups, on a system in the storage layer, cannot be virtualized by a system in the replication layer. This restriction prevents a HyperSwap volume on one system being virtualized by another.
AIX Live Partition Mobility (LPM)
AIX LPM is supported with the HyperSwap function on AIX 7.
Direct Attachment
IBM System Storage DS8000 series is not supported by direct attachment.
SAN boot on Windows 2019 (Qlogic HBA) is not supported with 32 Gb direct-attached systems.
16 Gbps Fibre Channel Node Connection
Refer to the IBM System Storage Inter-operation Center (SSIC) for supported 16 Gbps Fibre Channel configurations supported by 16 Gbps node hardware.
Note: 16 Gbps node hardware is supported only when connected to Brocade and Cisco 8 Gbps or 16 Gbps fabrics.
Direct connections to 2 Gbps or 4 Gbps SAN or direct host attachment to 2 Gbps or 4 Gbps ports is not supported.
Other configured switches that are not directly connected to the 16 Gbps Node hardware can be any supported fabric switch as currently listed in the SSIC.
25 Gbps Ethernet Canister Connection
Two optional 2-port 25 Gbps Ethernet adapters are supported in each node canister for iSCSI communication with iSCSI capable Ethernet ports in hosts by using Ethernet switches. These 2-port 25 Gbps Ethernet adapters do not support FCoE.
Two types of 25 Gbps Ethernet adapter feature are supported:
- RDMA over Converged Ethernet (RoCE)
- Internet Wide-area RDMA Protocol (iWARP)
Either works for standard iSCSI communications with hosts, that is, without using Remote Direct Memory Access (RDMA). A future software release will add RDMA links by using new protocols that support RDMA, such as NVMe over Ethernet.
Use of RDMA with a 25 Gbps Ethernet adapter becomes possible when RDMA links work between RoCE ports or between iWARP ports (that is, from a RoCE node canister port to a RoCE port on a host, or from an iWARP node canister port to an iWARP port on a host).
For Ethernet switches and adapters supported in hosts, visit the SSIC.
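As a quick illustration of the pairing rule described above (RDMA forms only between like port types; unlike pairs still work for standard iSCSI), here is a trivial sketch with hypothetical names:

```python
# Illustrative sketch of the pairing rule above: an RDMA link can form only
# between ports of the same 25 Gbps technology (RoCE-to-RoCE or iWARP-to-iWARP).
# Any other combination falls back to standard (non-RDMA) iSCSI.
def rdma_link_possible(node_port_type: str, host_port_type: str) -> bool:
    return node_port_type == host_port_type and node_port_type in {"RoCE", "iWARP"}

print(rdma_link_possible("RoCE", "RoCE"))    # True
print(rdma_link_possible("RoCE", "iWARP"))   # False - standard iSCSI only
```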
IP Partnership
IP partnerships are supported on any of the available Ethernet ports. Using an Ethernet switch to convert a 25Gb to a 1Gb IP partnership, or a 10Gb to a 1Gb IP partnership is not supported. Therefore, the IP infrastructure on both partnership sites must match. Bandwidth limiting on IP partnerships between both sites is supported.
VMware vSphere Virtual Volumes (vVols)
The maximum number of virtual machines on a single VMware ESXi host in a FlashSystem 9100/9200 / vVol storage configuration is limited to 680.
The use of VMware vSphere Virtual Volumes (vVols) on a system that is configured for HyperSwap is not currently supported by FlashSystem 9100 or 9200.
Host Limitations
SAN BOOT function on AIX 7.2 TL5
SAN BOOT is not supported for AIX 7.2 TL5 hosts that connect by using the NVMe/FC protocol.
RDM Volumes attached to guests in VMware 7.0
Using RDM (raw device mapping) volumes attached to guests with the RoCE iSER protocol results in pathing issues or an inability to boot the guest.
N2225/N2226 SAS HBA
VMware 6.7 (Guest O/S SLES12SP4) connected by using SAS N2225/N2226 host adapters is not supported.
Lenovo 430-16e/8e SAS HBA
VMware 6.7 and 6.5 (Guest O/S SLES12SP4) connected by using SAS Lenovo 430-16e/8e host adapters are not supported.
Windows 2019 and 2016 connected by using SAS Lenovo 430-16e/8e host adapters are not supported.
Windows 2016 HyperV
RHEL v7.1 guests on Windows 2016 HyperV, with Virtual Fibre Channel, are not supported.
iSER
Operating systems not currently supported for use with iSER:
- Windows 2012 R2 with Mellanox ConnectX-4 Lx EN adapters
- Windows 2016 with Mellanox ConnectX-4 Lx EN adapters
Windows NTP server
The Linux NTP client used by SAN Volume Controller might not always function correctly with the Windows W32Time NTP server.
Fabric Limitations
Only one FCF (Fibre Channel Forwarder) switch per fabric is supported.
Storage connected directly to a Cisco Fabric Extender (FEX) is not supported.
Priority Flow Control for iSCSI / iSER
Priority Flow Control for iSCSI / iSER is supported on Emulex & Chelsio adapters (SVC supported) with all DCBX-enabled switches.
Policy-based replication
The following restrictions apply for policy-based replication (an illustrative eligibility check follows this list):
- The name of a volume group cannot be changed while a replication policy is assigned.
- The name of a volume cannot be changed while the volume is in a volume group with a replication policy assigned.
- Ownership groups are not supported with policy-based replication.
- Policy-based replication is not supported on HyperSwap topology systems.
- Policy-based replication cannot be used with volumes that are:
  - Image mode
  - HyperSwap
  - Part of a remote-copy relationship
  - Configured to use Transparent Cloud Tiering (TCT)
  - VMware vSphere Virtual Volumes (vVols)
- The following actions cannot be performed on a volume while it is in a volume group with a replication policy assigned:
  - Resize (expand or shrink)
  - Migrate to image mode, or add an image mode copy
  - Move to a different I/O group
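To make the volume restrictions above easier to apply when planning policy-based replication, here is a minimal eligibility sketch; the Volume fields and function name are hypothetical and not a product API.

```python
# Minimal sketch: list the reasons a volume cannot be placed under a
# replication policy, per the restrictions above. Field names are hypothetical.
from dataclasses import dataclass

@dataclass
class Volume:
    name: str
    image_mode: bool = False
    hyperswap: bool = False
    in_remote_copy_relationship: bool = False
    uses_tct: bool = False
    is_vvol: bool = False

def pbr_blockers(vol: Volume) -> list:
    checks = {
        "image mode": vol.image_mode,
        "HyperSwap": vol.hyperswap,
        "remote-copy relationship": vol.in_remote_copy_relationship,
        "Transparent Cloud Tiering": vol.uses_tct,
        "VMware vVol": vol.is_vvol,
    }
    return [reason for reason, blocked in checks.items() if blocked]

print(pbr_blockers(Volume("vol0", hyperswap=True)))   # ['HyperSwap']
```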
Maximum Configurations
Configuration limits for FlashSystem 9100 and 9200:
| Property | Hardware Type | Maximum Number | Comments |
|---|---|---|---|
| **System (Cluster) Properties** | | | |
| I/O groups / Control Enclosures per system (cluster) | | 4 | Each control enclosure contains two node canisters |
| Active nodes per system | | 8 | Arranged as four I/O groups |
| Nodes per fabric | | 64 | Maximum number of FS9100 and FS9200 system nodes that can be present on the same Fibre Channel fabric, with visibility of each other |
| Fabrics per system | | 8 | The number of counterpart Fibre Channel SANs that are supported: up to 4 fabrics that use native Fibre Channel ports |
| Inter-cluster partnerships per system | | 3 | A system can be partnered with up to three remote systems. No more than four systems can be in the same connected set |
| IP Quorum devices per system | | 5 | |
| Data encryption keys per system | | 1,024 | |
| Key servers per system | | 4 | |
| **Node Properties** | | | |
| Logins per node Fibre Channel WWPN | | 512 | Includes logins from server HBAs, disk controller ports, node ports within the same system, and node ports from remote systems |
| Portset objects per system | | 72 | FC + Ethernet |
| IP address objects per system | | 2,048 | Includes duplicated IP addresses |
| IP address objects per node | | 256 | |
| IP addresses per port | | 64 | When a node fails over, Ethernet ports with the same ID are configured with all the IP addresses of the partner, so a maximum of 128 IP addresses can be configured per Ethernet port during failover. For Emulex ports, there can be a maximum of 3 unique VLANs per port and a maximum of 32 IP addresses per port. For Mellanox iSER connectivity, there can be a maximum of 31 VLANs per port and a maximum of 31 IP addresses per port with VLAN |
| Routable IP addresses per port | | 1 | |
| iSCSI sessions per node | | 1,024 | 2,048 in IP failover mode (when the partner node is unavailable). This limit includes both iSCSI Host Attach and iSCSI Initiator sessions |
| iSER sessions per node | | 256 | |
| iSCSI + iSER sessions per node | | 1,088 | |
| **Managed Disk Properties** | | | |
| Managed disks (MDisks) per system | | 4,096 | The maximum number of logical units that can be managed by a system, including internal arrays. Internal distributed arrays consume 16 logical units. This number also includes external MDisks that have not been configured into storage pools (managed disk groups) |
| Managed disks per storage pool (managed disk group) | | 128 | |
| Storage pools per system | | 1,024 | |
| Parent pools per system | | 128 | |
| Child pools per system | | 1,023 | |
| Managed disk extent size | | 8,192 MB | |
| Capacity for an individual internal managed disk (array) | | - | No limit is imposed beyond the maximum number of drives per array limits. Maximum size depends on the extent size of the Storage Pool. See the Extents comparison table below |
| Capacity for an individual external managed disk | | 1 PB | Note: External managed disks larger than 2 TB are only supported for certain types of storage systems. Refer to the supported hardware matrix for further details. Maximum size depends on the extent size of the Storage Pool. See the Extents comparison table below |
| Total storage capacity manageable per system | | 32 PB | Maximum requires an extent size of 8,192 MB to be used. This limit represents the per-system maximum of 2^22 extents. See the Extents comparison table below |
| Maximum Provisioning policies | | 32 | |
| **Data Reduction Pool Properties** | | | |
| Data Reduction Pools per system | | 4 | |
| MDisks per Data Reduction Pool | | 128 | |
| Volume copies per Data Reduction Pool | | 15,864 | |
| Extents per I/O group per Data Reduction Pool | | 524,288 (512K) | |
| **Volume (Virtual Disk) Properties** | | | |
| Basic Volumes (VDisks) per system | | 15,864 | Each Basic Volume uses one VDisk, each with one copy. If a Remote Copy partnership exists to a system that supports a lower number of volumes, the maximum number of volumes is reduced to the lower limit, or to 8,192 if that is greater. For example, if one system has a limit of 15,864 volumes and the other has a limit of 8,192 volumes, both systems are limited to 8,192 volumes |
| HyperSwap volumes per system | | 2,000 | Each HyperSwap volume uses 4 VDisks, each with one copy, 1 active-active remote copy relationship, and 4 FlashCopy mappings |
| Volume groups per system | | 1,024 | |
| Volumes per I/O group (volumes per caching I/O group) | | - | No limit is imposed here beyond the volumes per system limit |
| Compressed volume copies in data reduction pools per system | | - | No limit is imposed here beyond the volume copy limit per data reduction pool |
| Compressed volume copies in data reduction pools per I/O group | | - | No limit is imposed here beyond the volume copy limit per data reduction pool |
| Deduplicated volume copies in data reduction pools per system | | - | No limit is imposed here beyond the volume copy limit per data reduction pool |
| Deduplicated volume copies in data reduction pools per I/O group | | - | No limit is imposed here beyond the volume copy limit per data reduction pool |
| Volumes accessible per I/O group | | 15,864 | |
| Volumes per storage pool | | - | No limit is imposed beyond the volumes per system limit |
| Fully allocated volume capacity | | 256 TB | Maximum size for an individual fully allocated volume. Maximum size depends on the extent size of the Storage Pool. See the Extents comparison table below |
| Thin-provisioned (space-efficient) per-volume capacity for volumes in regular and data reduction pools | | 256 TB | Maximum size for an individual thin-provisioned volume. Maximum size depends on the extent size of the Storage Pool. See the Extents comparison table below |
| HyperSwap volume capacity in a single I/O group by using RAID | | 2 PiB | This limit depends on the bitmap allocation for mirroring and replication in each I/O group. See the IBM Documentation for details |
| Host mappings per system | | 64,000 | |
| **Mirrored Volume (Virtual Disk) Properties** | | | |
| Copies per volume | | 2 | |
| Volume copies per system | | 15,864 | |
| Total mirrored volume capacity per I/O group | | 1 PB | |
| **Host Properties** | | | |
| Host objects (IDs) per system | | 2,048 | A host object can contain both Fibre Channel ports and iSCSI names |
| Host objects (IDs) per I/O group | | 512 or 1,024 | "setlimit iogrphosts" must be used to enable 1,024 hosts per I/O group, otherwise the limit is 512. Refer to the additional Fibre Channel and iSCSI host limits |
| Volume mappings per host object | | 2,048 | Although IBM FlashSystem 9100 and 9200 allow the mapping of up to 2,048 volumes per host object, not all hosts are capable of accessing or managing this number of volumes. The practical mapping limit is restricted by the host OS, not by IBM FlashSystem 9100 or 9200. Note: this limit does not apply to hosts of type adminlun (used to support VMware vVols) |
| Unique IP addresses per port | | 64 | |
| IP addresses per node / portset | Host | 4 | |
| | Remote Copy | 1 | |
| | Storage | Number of Ethernet ports on node | |
| FC ports per portset | | 4 | |
| FC Host objects per portset | | Same as the maximum number of hosts supported on that platform | |
| **Host Cluster Properties** | | | |
| Host clusters per system | | 512 | |
| Hosts in a host cluster | | 128 | |
| **Fibre Channel Host Properties** | | | |
| Fibre Channel hosts per system | | 2,048 | |
| Fibre Channel host ports per system | | 8,192 | |
| Fibre Channel hosts per I/O group | | 512 or 1,024 | For FS91x0 and FS9200, "setlimit iogrphosts" must be used to enable 1,024 hosts per I/O group, otherwise the limit is 512 |
| Fibre Channel host ports per I/O group | | 2,048 | |
| NPIV Direct Attach Logins per Fibre Channel WWPN | | 128 | |
| Fibre Channel host ports per host object (ID) | | 32 | |
| **iSCSI Host Properties** | | | |
| iSCSI hosts per system | | 2,048 | |
| iSCSI hosts per I/O group | | 512 | |
| iSCSI names per host object (ID) | | 4 | |
| iSCSI names per I/O group | | 512 | |
| **iSCSI Hardware Properties** | | | |
| 10 Gbps Ethernet ports per system | | 4 | Onboard ports |
| **iSER Host Properties** | | | |
| iSER hosts per system | | 2,048 | |
| iSER hosts per I/O group | | 512 | |
| iSER names per host object (ID) | | 4 | |
| **Adapter Hardware Properties** | | | |
| 4-port 16 Gbps FC adapters per node / canister | | 3 | |
| 4-port 32 Gbps FC adapters per node / canister | | 3 | |
| Onboard 1 Gbps Ethernet I/O ports per node / canister | | 1 | Technician port |
| Onboard 10 Gbps Ethernet I/O ports per node / canister | | 4 | |
| 2-port 25 Gbps iWARP adapters per node / canister | | 3 | |
| 25 Gbps iWARP ports per canister | | 6 | |
| 2-port 25 Gbps RoCE adapters per node / canister | | 3 | |
| 25 Gbps RoCE ports per canister | | 6 | |
| **NVMe over Fibre Channel Host Properties** | | | |
| FC-NVMe hosts per system | | 64 | This limit is not policed by the Spectrum Virtualize software. Any configurations that exceed this limit might experience significant adverse performance impact |
| FC-NVMe hosts per I/O group | | 16 | This limit is not policed by the Spectrum Virtualize software. Any configurations that exceed this limit might experience significant adverse performance impact |
| Fibre Channel Logins per FC-NVMe WWPN | | 16 | This limit is the number of FC2 logins supported |
| NVMe Qualified Names (NQNs) per host object (ID) | | 2 | |
| NVMe over RDMA hosts per system | | 768 | |
| NVMe over RDMA hosts per I/O group | | 512 | |
| Primary RDMA connections per port | | 256 | |
| **Copy Services Properties** | | | |
| Total Metro Mirror, Global Mirror, and HyperSwap capacity per I/O group | | 2 PiB | This limit is the total capacity for all master and auxiliary volumes in the I/O group |
| Remote Copy (Metro Mirror and Global Mirror) relationships per system | | 10,000 | This can be any mix of Metro Mirror and Global Mirror relationships |
| Remote Copy migration relationships per system | | 256 | |
| Maximum round-trip latency for Metro Mirror, HyperSwap, and Migration relationships | | 3 ms | |
| Global Mirror cycling mode relationships (also known as GMCV) per system, with cycle times less than 300 seconds | | 256 | |
| Global Mirror cycling mode relationships (also known as GMCV) per system, with cycle times of 300 seconds or more | | 2,500 | |
| Active-Active Relationships (HyperSwap) | | 2,000 | |
| Maximum round-trip latency for Global Mirror and Global Mirror cycling mode (also known as GMCV) if using FC replication | | 80 ms | 250 ms in certain zoning configurations |
| Remote Copy relationships per consistency group for Metro Mirror, Global Mirror, and Active-Active (HyperSwap) relationships | | - | No limit is imposed beyond the Remote Copy relationships per system limit. Refer to the Changes to support for Global Mirror with Change Volumes page for information relating to GMCV performance considerations and best practice |
| Remote Copy relationships per consistency group, for Global Mirror cycling mode relationships (also known as GMCV) | | 256 | Note: the I/O pause time at the start of each cycle increases in proportion to the number of relationships in the consistency group. Refer to the Changes to support for Global Mirror with Change Volumes page for information relating to GMCV performance considerations and best practice |
| Remote Copy consistency groups per system | | 256 | |
| 3-site Remote Copy (Metro Mirror) relationships per consistency group | | 256 | |
| 3-site Remote Copy (Metro Mirror) consistency groups per system | | 16 | |
| 3-site Remote Copy (Metro Mirror) relationships per system | | 2,500 | |
| 3-site HyperSwap Remote Copy relationships per system | | 2,000 | |
| FlashCopy mappings per system | | 15,864 | |
| FlashCopy targets per source | | 256 | |
| FlashCopy mappings per consistency group | | 512 | |
| FlashCopy consistency groups per system | | 500 | |
| Total FlashCopy volume capacity per I/O group | | 4 PB | |
| Snapshots per system | | 15,863 | |
| Snapshots per volume copy | | 15,863 | |
| Thin-Clone, Clone Volumes per system | | 15,862 | |
| Thin-Clone Volumes per source volume | | 15,862 | |
| Clone Volumes per source volume | | 15,862 | |
| FlashCopy bitmap space allowance for Snapshots, Volumes (Thin-Clone, Clone) and legacy FlashCopy | | 2 GiB | |
| FlashCopy bitmap space allowance for Snapshots only | | 2 GiB | |
| FlashCopy mappings per graph | | 256 | |
| Safeguarded volumes per system | | 15,864 | |
| Safeguarded volume groups per system | | 256 | |
| Safeguarded volumes per volume group | | 512 | |
| Safeguarded policies per system | | 32 | Includes 3 predefined and 29 user-defined policies |
| Snapshot policies per system | | 32 | |
| **Policy-based replication** | | | |
| Policy-based replication capacity per I/O group | | 2,048 TiB | |
| Policy-based replication replicated volumes per system | | 7,932 | |
| Volume groups per system using policy-based replication | | 1,024 | No limit beyond the system limit for volume groups per system |
| Volumes per volume group using policy-based replication | | 512 | No limit beyond the system limit for volumes per volume group |
| Maximum round-trip latency for asynchronous policy-based replication that uses Fibre Channel partnerships | | 250 ms | |
| Maximum round-trip latency for asynchronous policy-based replication that uses IP partnerships | | 80 ms | |
| Maximum replication policies per system | | 32 | |
| Maximum I/O groups that use policy-based replication | | 2 | |
| **IP Partnership Properties** | | | |
| Inter-cluster IP partnerships per system | | 3 | A system can be partnered with up to three remote systems |
| Inter-site links per IP partnership | | 2 | A maximum of two inter-site links can be used between two IP partnership sites |
| Ports per node | | 1 | A maximum of one port per node can be used for IP partnership |
| **Internal Storage Properties** | | | |
| SAS chains per control enclosure | | 2 | |
| Enclosures per SAS chain | | 10 x 2Uxx or 4 x 5U92 | Each SAS chain can have a maximum total "chain weight" of 5 or 6, depending on code level. Each 92G enclosure has a chain weight of 2.5; each 12G or 24G enclosure has a chain weight of 1. At 8.5.2.0, for example, it would be valid to have 2x 92G enclosures and 1x 24G enclosure (total chain weight of 6). See the product documentation for further details |
| Expansion enclosures per control enclosure | | 20 | |
| Expansion enclosures per system | | 80 | |
| Drives per I/O group | | 760 | |
| Drives per system | | 3,040 | |
| SCM drives per I/O group | | 12 | |
| **Non-Distributed RAID Array Properties** | | | |
| Arrays per system | | 128 | |
| Encrypted arrays per system | | 128 | |
| Member drives per array | | 16 | |
| Minimum-Maximum member drives per RAID-0 array | | 1-8 | |
| Minimum-Maximum member drives per RAID-1 array | | 2-2 | |
| Minimum-Maximum member drives per RAID-10 array | | 2-16 | |
| Minimum-Maximum member drives per RAID-10 array (tier SCM) | | 2-16 | |
| Hot spare drives | | - | No limit is imposed |
| **Distributed RAID Array Properties** | | | |
| Arrays per system | | 32 | The presence of non-DRAID arrays can reduce this limit |
| Encrypted arrays per system | | 32 | The presence of non-DRAID arrays can reduce this limit |
| Arrays per I/O group | | 10 | The presence of non-DRAID arrays can reduce this limit |
| Member drives per array | | 128 | |
| FCM arrays per storage pool | | 1 | Existing multi-array storage pools created before 8.5.0 are supported as 'grandfathered' |
| Minimum-Maximum member drives per RAID-1 array | FS9100 | N/A | |
| | FS9200 | 2-16 | |
| Minimum-Maximum member drives per RAID-5 array | | 4-128 | |
| Minimum-Maximum member drives per RAID-5 array (NVMe drives) | | 4-24 | |
| Minimum-Maximum member drives per RAID-6 array | | 6-128 | |
| Minimum-Maximum member drives per RAID-6 array (NVMe drives) | | 6-24 | |
| Rebuild areas per non-FCM array | | 1-4 | |
| Rebuild areas per FCM array | | 1 | |
| Minimum-Maximum stripe width for RAID-5 array | | 3-16 | |
| Minimum-Maximum stripe width for RAID-6 array | | 5-16 | |
| Maximum drive capacity for RAID-5 array | | 8 TB | |
| Drives added to an array in a single DRAID expansion | | 12 | For DRAID-1 |
| | | 42 | For DRAID-5 and DRAID-6 |
| Concurrent DRAID expansions per system | | 4 | |
| Concurrent DRAID expansions per parent storage pool | | 1 | |
| Compressed DRAID arrays per storage pool | | 1 | |
| **External Storage System Properties** | | | |
| Storage system WWNNs per system (cluster) | | 1,024 | |
| Storage system WWPNs per system (cluster) | | 1,024 | |
| WWNNs per storage system | | 16 | |
| WWPNs per WWNN | | 16 | |
| WWPNs per MDisk | | 16 | WWPNs per MDisk means the limit of Storage System WWPNs that can have LUN mappings for a specific Storage System Logical Unit (LU) |
| LUNs (managed disks) per storage system | | - | No limit is imposed beyond the managed disks per system limit |
| **System and User Management Properties** | | | |
| User accounts per system | | 400 | Includes the default user accounts |
| User groups per system | | 256 | Includes the default user groups |
| Authentication services per system | | 1 | |
| DNS servers per system | | 2 | |
| NTP servers per system | | 1 | |
| Maximum number of iSNS servers per system | | 1 | |
| Concurrent OpenSSH sessions per system | | 32 | |
| **Event Notification Properties** | | | |
| SNMP servers per system | | 6 | |
| Syslog servers per system | | 6 | |
| Email (SMTP) servers per system | | 6 | Email servers are used in turn until the email is successfully sent |
| Email users (recipients) per system | | 12 | |
| LDAP servers per system | | 6 | |
| **REST API Properties** | | | |
| Threads per session | | 64 | |
| HTTP header size | | 16 KB | |
Extents
The following table compares the maximum volume, MDisk, and system capacity for each extent size.
| Extent size (MB) | Maximum non thin-provisioned volume capacity in GB | Maximum thin-provisioned volume capacity in GB (for regular pools) | Maximum thin-provisioned and compressed volume size in data reduction pools in GB | Maximum total thin-provisioned and compressed capacity for all volumes in a single data reduction pool per I/O group in GB | Maximum MDisk capacity in GB | Maximum DRAID MDisk capacity in TB | Total storage capacity manageable per system * |
|---|---|---|---|---|---|---|---|
| 16 | 2,048 (2 TB) | 2,000 | 2,048 (2 TB) | 8,192 (8 TB) | 2,048 (2 TB) | 32 | 64 TB |
| 32 | 4,096 (4 TB) | 4,000 | 4,096 (4 TB) | 16,384 (16 TB) | 4,096 (4 TB) | 64 | 128 TB |
| 64 | 8,192 (8 TB) | 8,000 | 8,192 (8 TB) | 32,768 (32 TB) | 8,192 (8 TB) | 128 | 256 TB |
| 128 | 16,384 (16 TB) | 16,000 | 16,384 (16 TB) | 65,536 (64 TB) | 16,384 (16 TB) | 256 | 512 TB |
| 256 | 32,768 (32 TB) | 32,000 | 32,768 (32 TB) | 131,072 (128 TB) | 32,768 (32 TB) | 512 | 1 PB |
| 512 | 65,536 (64 TB) | 65,000 | 65,536 (64 TB) | 262,144 (256 TB) | 65,536 (64 TB) | 1,024 (1 PB) | 2 PB |
| 1,024 | 131,072 (128 TB) | 130,000 | 131,072 (128 TB) | 524,288 (512 TB) | 131,072 (128 TB) | 2,048 (2 PB) | 4 PB |
| 2,048 | 262,144 (256 TB) | 260,000 | 262,144 (256 TB) | 1,048,576 (1 PB) | 262,144 (256 TB) | 4,096 (4 PB) | 8 PB |
| 4,096 | 262,144 (256 TB) | 260,000 | 262,144 (256 TB) | 2,097,152 (2 PB) | 524,288 (512 TB) | 8,192 (8 PB) | 16 PB |
| 8,192 | 262,144 (256 TB) | 260,000 | 262,144 (256 TB) | 4,194,304 (4 PB) | 1,048,576 (1 PB) | 16,384 (16 PB) | 32 PB |
* The total capacity values assume that all of the storage pools in the system use the same extent size.
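Several of the columns above scale linearly with the extent size by a fixed extent-count factor: 2^22 extents per system and 524,288 (2^19) extents per I/O group per Data Reduction Pool are quoted in the Maximum Configurations table, while the 2^17 and 2^21 factors are inferred from the rows above. The sketch below is illustrative only and reproduces a subset of the columns from the extent size.

```python
# Illustrative sketch: derive selected columns of the extents table from the
# extent size. The 2**22 (extents per system) and 2**19 (DRP extents per I/O
# group) factors are quoted in the Maximum Configurations table; 2**17 and
# 2**21 are inferred from the table rows and are assumptions.
def capacities_gb(extent_mb: int) -> dict:
    extent_gb = extent_mb / 1024
    return {
        "fully_allocated_volume_gb": min(extent_gb * 2**17, 262_144),  # capped at 256 TB
        "external_mdisk_gb": extent_gb * 2**17,
        "draid_mdisk_gb": extent_gb * 2**21,
        "drp_per_io_group_gb": extent_gb * 2**19,
        "system_gb": extent_gb * 2**22,
    }

# Example: an 8,192 MB extent size gives 33,554,432 GB (32 PB) per system,
# matching the last row of the table.
print(capacities_gb(8192))
```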
Document Information
Modified date: 08 May 2024
UID: ibm16611185