Preventive Service Planning
Abstract
This document lists the configuration limits and restrictions specific to IBM Storage SAN Volume Controller software version 8.6.2.x
Content
The use of WAN optimization devices such as Riverbed is not supported in native Ethernet IP partnership configurations containing IBM Storage SAN Volume Controller.
Safeguarded Copy
Requires the purchase of the additional FlashCopy license.
The following restrictions apply for Safeguarded Copy:
- HyperSwap volumes are supported. However, recovery requires that they be converted to regular volumes before use
- Pre-defined schedules are designed to avoid running out of FlashCopy maps in a single graph and to keep within the supported volume count. It is possible to create policies (using the CLI only) that can potentially breach those limits, so caution should be exercised.
- The source volume cannot be in an ownership group
- The source volume cannot be used with Transparent Cloud Tiering (TCT).
Volume Mobility
Refer to the product documentation for the restrictions that apply to Volume Mobility: Migrating data between systems nondisruptively
Data Reduction Pools
The following restrictions apply for Data Reduction Pools (DRP):
- VMware vSphere Virtual Volumes (vVols) are not supported in a DRP
- A volume in a DRP cannot be shrunk
- A volume cannot be moved between I/O groups while it is in a DRP (use FlashCopy or Metro Mirror instead)
- A volume mirror cannot be split to a copy in a different I/O group
- Real/used/free/tier capacities are not reported per volume - only per pool.
Non-Disruptive Volume Move (NDVM)
The following Fibre Channel attached host types are supported for nondisruptively moving a volume between I/O groups:
Host Operating System | Host Multipathing | Host Clustering | Notes |
---|---|---|---|
AIX 7.2 | AIXPCM | Nondisruptive volume move can result in the same volume being mapped to different hosts in the same host cluster using different SCSI IDs. If the host cluster cannot tolerate this configuration, nondisruptive volume move cannot be used. | SAN boot is supported. NPIV is supported. |
Microsoft Windows 2019 | MSDSM | Hyper-V Failover Cluster | SAN boot is supported |
Microsoft Windows 2016 | MSDSM | Hyper-V Failover Cluster | SAN boot is supported |
Red Hat 8 | Native | | The original paths might need to be manually removed on the host before removing access to the old I/O group |
SLES 15 | Native | | The original paths might need to be manually removed on the host before removing access to the old I/O group |
VMware 6.7 | Native | | VAAI is supported |
VMware 6.5 | Native | | VAAI is supported |
Solaris 11.3 SPARC | MPXIO | | SAN boot is supported |
Note: For all other host types, I/O needs to be quiesced before moving a volume.
When moving a volume that is mapped to a host cluster, rescan disk paths on all host cluster nodes to ensure that the new paths are detected before removing access from the original I/O group.
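For reference, a minimal CLI outline of a nondisruptive volume move is shown below. It assumes the standard addvdiskaccess / movevdisk / rmvdiskaccess sequence with placeholder object names; confirm the exact syntax for your code level in the product documentation.

```
# Add the new I/O group (io_grp1) to the volume's access I/O group set so that
# hosts can discover paths through both I/O groups (vdisk0 is a placeholder name)
addvdiskaccess -iogrp io_grp1 vdisk0

# Move the volume's caching I/O group to io_grp1
movevdisk -iogrp io_grp1 vdisk0

# Rescan disk paths on every host (or host cluster node) mapped to the volume,
# then remove access through the original I/O group
rmvdiskaccess -iogrp io_grp0 vdisk0
```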
Clustered Systems
An IBM Storage SAN Volume Controller system requires native Fibre Channel SAN connectivity, or alternatively 8 Gbps, 16 Gbps, or 32 Gbps direct-attach Fibre Channel connectivity, for communication between all nodes in the local cluster. Clustering can also be accomplished with 25 Gbps Ethernet for standard topologies.
Partnerships between systems for replication can be used with both Fibre Channel and native Ethernet connectivity. Distances greater than 300 meters are supported by using an FCIP link or Fibre Channel between source and target.
Clustering over Fibre Channel | Clustering over 25 Gb Ethernet | HyperSwap over Fibre Channel | HyperSwap over Ethernet (25 Gb only) | Replication over Fibre Channel | Replication over Ethernet (10 Gb or 25 Gb) |
---|---|---|---|---|---|
Yes up to 4 I/O groups | Yes up to 4 I/O groups | Yes up to 4 I/O groups | Yes up to 4 I/O groups | Yes | Yes (including 1 Gb on older hardware) |
Hot Spare Node
If an adapter PCI slot location on the spare node does not match that of the active node, the active node cannot be replaced by the spare node by using the 'swapnode' command. In this case the command fails with error CMMVC9261E, which means that the specified node does not have a status of "candidate". For the 'swapnode' replace command to work, it is recommended that the adapters in the spare node occupy the same slots as those in the active node.
When an online spare node is put into the Service state, it immediately reverts to spare and rejoins the cluster as an online spare 5 minutes later. Instead of putting the online spare into service, the user can either wait for the original node to come back or simply remove the online spare and then perform the maintenance.
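As an illustration only, the node replacement referred to above is driven with the 'swapnode' command; the node name here is a placeholder and the exact parameters should be verified for your code level:

```
# Replace active node node1 with a matching candidate (spare) node.
# The command fails with CMMVC9261E if no suitable candidate exists, for
# example because the spare node's adapters are not in the same PCI slots.
swapnode -replace node1
```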
Transparent Cloud Tiering
Transparent cloud tiering on the system is defined by configuration limitations and rules. Refer to the IBM Documentation maximum limits page for details.
The following restrictions apply for Transparent Cloud Tiering:
- When a cloud account is created, it must continue to use the same encryption type throughout the life of the data in that cloud account. Even if the cloud account object is removed and remade on the system, the encryption type for that cloud account cannot be changed while backup data for that system exists in the cloud provider.
- When performing rekey operations on a system with an encryption-enabled cloud account, perform the commit operation immediately after the prepare operation (see the sketch after this list). Remember to retain the previous system master key (on USB or in the key server), as this key can still be needed to retrieve your cloud backup data when performing a T4 recovery or an import.
- Avoid the use of the 'Restore_uid' option when backup is imported to a new cluster.
- Import of TCT data is only supported from systems whose backup data was created at v7.8.0.1 or later.
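A minimal sketch of the USB rekey sequence referred to above (a key server rekey uses the -keyserver option instead); verify the syntax against the documentation for your code level:

```
# Prepare the new system master key, then commit it immediately afterwards
chencryption -usb newkey -key prepare
chencryption -usb newkey -key commit
```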
The following AWS regions are supported by this code-level:
- us-east-1
- us-west-1
- us-west-2
- ca-central-1
- eu-west-1
- eu-west-2
- eu-west-3
- eu-central-1
- sa-east-1
- ap-southeast-1
- ap-southeast-2
- ap-south-1
- ap-northeast-1
- ap-northeast-2
TCT cloud snapshots are supported with the system's new FlashCopy® management model, which is based on the snapshot function.
However, cloud snapshots cannot coexist with user-owned legacy FlashCopy mappings.
Nodes in an I/O group cannot be replaced by nodes with less memory.
HyperSwap
Configure your host multipath driver to use an ALUA-based path policy.
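For example, on a Linux host that uses dm-multipath, an ALUA-based path policy for SAN Volume Controller (product ID 2145) volumes can be expressed in /etc/multipath.conf along these lines; this is a minimal sketch, and the settings recommended for your host operating system and code level should be used:

```
devices {
    device {
        vendor                "IBM"
        product               "2145"
        path_grouping_policy  "group_by_prio"
        prio                  "alua"          # ALUA-based path priority
        path_checker          "tur"
        failback              "immediate"
    }
}
```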
Due to the requirement for multiple access I/O groups, SAS attached host types are not supported by HyperSwap volumes.
A volume configured with multiple access I/O groups, on a system in the storage layer, cannot be virtualized by a system in the replication layer. This restriction prevents a HyperSwap volume on one system being virtualized by another.
Direct Attachment
IBM System Storage DS8000 series is not supported by direct-attached systems.
SAN boot on Windows 2019 (Qlogic HBA) is not supported by 32 GB direct-attached systems.
Cisco Nexus
The minimum level of Cisco Nexus firmware supported for FCoE with the IBM 2145-SA2, 2145-SV2, 2145-SV3 is 5.2(1)N1(2a).
16 Gbps Fibre Channel Node Connection
Refer to the IBM Storage Inter-operation Center (SSIC) for supported 16 Gbps Fibre Channel configurations supported by 16 Gbps node hardware.
Note: 16 Gbps node hardware is supported only when connected to Brocade and Cisco 8 Gbps or 16 Gbps fabrics.
Direct connections to 2 Gbps or 4 Gbps SANs, or direct host attachment to 2 Gbps or 4 Gbps ports, are not supported.
Other configured switches that are not directly connected to the 16 Gbps Node hardware can be any supported fabric switch as currently listed in the SSIC.
25 Gbps Ethernet Canister Connection
Each SAN Volume Controller node supports three optional 2-port 25 Gbps Ethernet adapters (or six in the case of SV3 model nodes) for iSCSI communication with iSCSI-capable Ethernet ports in hosts that connect through Ethernet switches. The 25 Gbps Ethernet adapters do not support FCoE.
There are two types of 25 Gbps Ethernet adapter Features supported:
- RDMA over Converged Ethernet (RoCE)
- Internet Wide-area RDMA Protocol (iWARP)
Either works for standard iSCSI communications, that is, not using Remote Direct Memory Access (RDMA).
When RDMA is used with a 25 Gbps Ethernet adapter, RDMA links work between RoCE ports or between iWARP ports (that is, from a RoCE node canister port to a RoCE port on a host, or from an iWARP node canister port to an iWARP port on a host).
Changing the MTU to a value other than 1500 is not allowed on RoCE ports. If you are upgrading from an older version that uses an MTU greater than 1500 on RoCE ports, consider changing those ports to MTU 1500 before upgrading.
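For example, the MTU can be reset to 1500 with the cfgportip command before upgrading; the exact parameters vary by configuration and code level, so treat the following as an assumption to verify:

```
# Set MTU 1500 on Ethernet port 2 of the nodes in I/O group 0 (assumed syntax)
cfgportip -mtu 1500 -iogrp 0 2
```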
For Ethernet switches and adapters supported in hosts, visit the SSIC.
IP Partnership
On SV3 node hardware, IP partnerships are not supported on 1 Gb Ethernet ports; these ports are used only for system management. For other SVC node types, IP replication can be configured on any Ethernet port.
Using an Ethernet switch to convert a 25 Gb to a 1 Gb IP partnership, or a 10 Gb to a 1 Gb IP partnership is not supported. Therefore, the IP infrastructure on both partnership sites must match. Bandwidth limiting on IP partnerships between both sites is supported.
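As an illustration, an IP partnership is created from each system with the mkippartnership command; this is a minimal sketch in which the remote cluster IP and bandwidth values are placeholders:

```
# Run on the local system, pointing at the remote system's cluster IP.
# -linkbandwidthmbits caps the bandwidth the partnership uses between sites.
mkippartnership -type ipv4 -clusterip 192.0.2.10 -linkbandwidthmbits 1000 -backgroundcopyrate 50
```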
Host Limitations
SAN BOOT function on AIX 7.2 TL5
SAN BOOT is not supported for AIX 7.2 TL5 when connected by using the NVME/FC protocol.
RDM Volumes attached to guests in VMware 7.0
Using RDM (raw device mapping) volumes attached to any guests, with the RoCE iSER protocol, results in pathing issues or inability to boot the guest.
N2225/N2226 SAS HBA
VMware 6.7 (Guest O/S SLES12SP4) connected with SAS N2225/N2226 host adapters is not supported.
Lenovo 430-16e/8e SAS HBA
VMware 6.7 and 6.5 (Guest O/S SLES12SP4) connected with SAS Lenovo 430-16e/8e host adapters are not supported.
Windows 2019 and 2016 connected with SAS Lenovo 430-16e/8e host adapters are not supported.
Windows 2016 HyperV
RHEL v7.1 guests on Windows 2016 HyperV, with Virtual Fibre Channel, are not supported.
iSER
SV3 model nodes do not support iSER host attachment.
Operating systems not currently supported for use with iSER:
- Windows 2012 R2 with Mellanox ConnectX-4 Lx EN adapters
- Windows 2016 with Mellanox ConnectX-4 Lx EN adapters
Windows NTP server
The Linux NTP client used by SAN Volume Controller may not always function correctly with Windows W32Time NTP Server.
IBM i connected using directly attached Fibre Channel
IBM i is not supported as a host operating system when connected using directly attached Fibre Channel to FlashSystem or SAN Volume Controller systems running 8.6.1.0 or later versions.
IBM i is supported as a host operating system when connected to FlashSystem or SAN Volume Controller systems via a Fibre Channel switch.
Fabric Limitations
Only one Fibre Channel Forwarder (FCF) switch per fabric is supported.
Storage connected directly to a Cisco Fabric Extender (FEX) is not supported.
Priority Flow Control for iSCSI / iSER
Priority Flow Control for iSCSI / iSER is supported on SVC-supported Emulex and Chelsio adapters with all DCBX-enabled switches.
Policy-based Replication
The following restrictions apply for policy-based replication:
- The name of a volume group cannot be changed while a replication policy is assigned.
- The name of a volume cannot be changed while the volume is in a volume group with a replication policy assigned.
- Ownership groups are not supported by policy-based replication.
- Policy-based replication is not supported on HyperSwap topology systems.
- Remote Copy and policy-based replication can be configured on the same volume only if the volume is operating as the production volume for both types of replication, for the purpose of migrating from Remote Copy to policy-based replication.
- Policy-based replication cannot be used with volumes that are:
  - Image mode
  - HyperSwap
  - Configured to use Transparent Cloud Tiering (TCT)
- The following actions cannot be performed on volumes that use policy-based replication:
  - Resize (expand or shrink)
  - Migrate to image mode, or add an image mode copy
  - Move to a different I/O group
- The system is limited to one I/O group if it has any storage partitions.
If using policy-based high availability, the following restrictions apply:
- A limit of one I/O group can be configured in the system.
- A limit of one partnership per system can be configured.
- Only Fibre Channel SCSI hosts can be configured in a storage partition configured for high availability.
- Host clusters cannot be used for hosts configured in a storage partition configured for high availability.
- A partnership cannot be used for both high availability and asynchronous replication.
- Only the 'standard' system topology is supported.
- Reduced host interoperability support; only RHEL and VMware ESXi hosts are supported.
- Only SCSI Fibre Channel hosts are supported.
- Persistent Reserve commands are not supported.
If using policy-based replication with VMware Virtual Volumes (vVols), the following restrictions apply:
- A limit of one I/O group can be configured in the system.
- A limit of one partnership per system can be configured for vVol replication. Additional partnerships can be configured for non-vVol replication.
- Only the 'standard' system topology is supported.
Configuration limits for IBM Storage SAN Volume Controller:
| Property | Hardware Type | Maximum Number | Notes |
|---|---|---|---|
| System (Cluster) Properties | | | |
| I/O groups / Control Enclosures per system (cluster) | | 4 | Each containing two nodes |
| Active nodes per system (cluster) | | 8 | Arranged as four I/O groups |
| Spare nodes per system | | 4 | |
| Nodes per fabric | | 64 | Maximum number of IBM Storage nodes that can be present on the same Fibre Channel fabric, with visibility of each other |
| Inter-cluster partnerships per system | | 3 | A system can be partnered with up to three remote systems. No more than four systems can be in the same connected set |
| IP Quorum devices per system | | 5 | |
| Data encryption keys per system | | 1,024 | |
| Key servers per system | | 4 | |
| Node Properties | | | |
| iSCSI sessions per node | | 1,024 | A maximum of 256 can be backend sessions. This limit includes both iSCSI Host Attach AND iSCSI Initiator sessions |
| iSER sessions per node | | 256 | Model SA2/SV2 only |
| iSCSI + iSER sessions per node | | 1,088 | |
| Managed Disk Properties | | | |
| Managed disks (MDisks) per system | | 4,096 | The maximum number of logical units that can be managed by a cluster. Internal distributed arrays consume 16 logical units. This number includes external MDisks that have not been configured into storage pools (managed disk groups) |
| Managed disks per storage pool (managed disk group) | | 128 | |
| Storage pools per system | | 1,024 | |
| Parent pools per system | | 128 | |
| Child pools per system | | 1,023 | |
| Managed disk extent size | | 8,192 MB | |
| Capacity for an individual internal managed disk (array) | | - | No limit is imposed beyond the maximum number of drives per array limits. Maximum size depends on the extent size of the Storage Pool. See Comparison Table: Maximum Volume, MDisk, and System capacity for each extent size |
| Capacity for an individual external managed disk | | 1 PB | External managed disks larger than 2 TB are only supported for certain types of Storage Systems. Refer to the supported hardware matrix for further details. Maximum size depends on the extent size of the Storage Pool. See Comparison Table: Maximum Volume, MDisk, and System capacity for each extent size |
| Total storage capacity manageable per system | | 32 PB | The maximum requires an extent size of 8,192 MB to be used. This limit represents the per-system maximum of 2^22 extents. Maximum size depends on the extent size of the Storage Pool. See Comparison Table: Maximum Volume, MDisk, and System capacity for each extent size |
| Maximum Provisioning policies | | 40 | |
| Data Reduction Pool Properties | | | |
| Data Reduction Pools per system | | 4 | |
| MDisks per Data Reduction Pool | | 128 | |
| Volume copies per Data Reduction Pool | | 15,864 | |
| Extents per I/O group per Data Reduction Pool | | 524,288 (512K) | |
| Volume (Virtual Disk) Properties | | | |
| Basic volumes per system | | 15,864 | Each basic volume uses one VDisk, each with one copy. If a Remote Copy partnership exists to a system that supports a lower number of volumes, the maximum number of volumes is reduced to the lower limit, or 8,192 if that is greater. For example, if one system has a limit of 15,864 volumes and the other has a limit of 8,192 volumes, both systems are limited to 8,192 volumes |
| Volume copies in host mappable volumes per system | | 15,864 | |
| Stretched volumes per system | | 7,932 | Each stretched volume uses one VDisk, each with two copies |
| HyperSwap volumes per system | | 2,000 | Each HyperSwap volume uses 4 VDisks, each with one copy, 1 active-active remote copy relationship, and 4 FlashCopy mappings |
| Volume copies per volume | | 2 | |
| Total mirrored volume capacity per I/O group | 2145-SA2, 2145-SV2 | 1 PiB | |
| | 2145-SV3 | 40 PiB | (Shared with FlashCopy) |
| Volumes per I/O group (Volumes per caching I/O group) | | - | No limit is imposed here beyond the volumes per system limit |
| Volume groups per system | | 1,024 | |
| Volume groups per storage partition | | 1,024 | |
| Volumes per volume group | | 512 | |
| Volumes per storage pool | | - | No limit is imposed beyond the volumes per system limit |
| Fully allocated volume capacity | | 256 TB | Maximum size for an individual fully allocated volume. Maximum size depends on the extent size of the Storage Pool. See Comparison Table: Maximum Volume, MDisk, and System capacity for each extent size |
| Thin-provisioned (space-efficient) per-volume capacity for volumes in regular and data reduction pools | | 256 TB | Maximum size for an individual thin-provisioned volume. Maximum size depends on the extent size of the Storage Pool. See Comparison Table: Maximum Volume, MDisk, and System capacity for each extent size |
| Host mappings per system | | 64,000 | See also: volume mappings per host object |
| Host Properties | | | |
| Host objects (IDs) per system | | 2,048 | |
| Host objects per storage partition | | - | No limit is imposed here beyond the host object limits |
| Host objects (IDs) per I/O group | 2145-SA2, 2145-SV2 | 512 | |
| | 2145-SV3 | 2,048 | |
| Volume mappings per host object | | 2,048 | Although SVC allows the mapping of up to 2,048 volumes per host object, not all hosts are capable of accessing or managing this number of volumes. The practical mapping limit is restricted by the host OS, not SVC. Note: this limit does not apply to hosts of type adminlun (used to support VMware vVols) |
| Portset objects per system | | 72 | FC + Ethernet |
| IP address objects per system | | 2,048 | |
| IP addresses per port | | 64 | When a node fails over, Ethernet ports with the same ID will be configured with all the IP addresses of the partner, so there can be a maximum of 128 IP addresses configured per Ethernet port during failover. For Emulex ports, there can be a maximum of 3 unique VLANs per port and a maximum of 32 IP addresses per port. For Mellanox iSER connectivity, there can be a maximum of 31 VLANs per port and a maximum of 31 IP addresses per port with VLAN |
| IP address objects per node | | 256 | |
| Unique IP addresses per port | | 64 | |
| Routable IP addresses per port | | 1 | |
| IP addresses per node per portset | Host | 4 | |
| | Remote Copy (IP Replication) | 1 | |
| | Remote Copy (High Speed Ethernet) | 2 | |
| | Storage | Number of Ethernet ports on node | |
| Fibre Channel ports per portset | | 4 | |
| Fibre Channel host objects per portset | | - | Same as the maximum number of hosts supported on that platform |
| Host Cluster Properties | | | |
| Host clusters per system | | 512 | |
| Hosts in a host cluster | | 128 | |
| Fibre Channel Host Properties | | | |
| Fibre Channel hosts per system | | 2,048 | |
| Fibre Channel host ports per system | | 8,192 | |
| Fibre Channel hosts per I/O group | 2145-SA2, 2145-SV2 | 512 | |
| | 2145-SV3 | 2,048 | |
| Fibre Channel host ports per I/O group | | 2,048 | |
| NPIV Direct Attach logins per Fibre Channel WWPN | | 128 | |
| Fibre Channel host ports per host object (ID) | | 32 | |
| iSCSI Host Properties | | | |
| iSCSI hosts per system | | 2,048 | |
| iSCSI hosts per I/O group | 2145-SA2, 2145-SV2 | 512 | |
| | 2145-SV3 | 1,024 | |
| iSCSI names per host object | | 4 | |
| iSCSI names per I/O group | | 1,024 | |
| iSER Host Properties | | | |
| iSER hosts per system | 2145-SA2, 2145-SV2 | 2,048 | |
| iSER hosts per I/O group | 2145-SA2, 2145-SV2 | 512 | |
| iSER names per host object | 2145-SA2, 2145-SV2 | 4 | |
| Adapter Hardware Properties | | | |
| 4-port 16 Gbps Fibre Channel adapters per node / canister | 2145-SA2, 2145-SV2 | 3 | |
| | 2145-SV3 | N/A | |
| 4-port 32 Gbps Fibre Channel adapters per node / canister | 2145-SA2, 2145-SV2 | 3 | |
| | 2145-SV3 | N/A | |
| Gen4 4-port 32 Gbps Fibre Channel adapters per node / canister | 2145-SA2, 2145-SV2 | N/A | |
| | 2145-SV3 | 6 | |
| 4-port 64 Gbps Fibre Channel adapters per node / canister | 2145-SA2, 2145-SV2 | N/A | |
| | 2145-SV3 | 3 | |
| On board 1 Gbps Ethernet I/O ports per node / canister | 2145-SA2, 2145-SV2 | 1 | Technician port |
| | 2145-SV3 | 3 | 3 management ports (including 1 technician port) |
| On board 10 Gbps Ethernet I/O ports per node / canister | 2145-SA2, 2145-SV2 | 4 | |
| | 2145-SV3 | N/A | |
| 2-port 25 Gbps iWARP adapters per node / canister | 2145-SA2, 2145-SV2 | 3 | |
| | 2145-SV3 | 6 | |
| 25 Gbps iWARP ports per node | 2145-SA2, 2145-SV2 | 6 | |
| | 2145-SV3 | 12 | |
| 2-port 25 Gbps RoCE adapters per node / canister | 2145-SA2, 2145-SV2 | 3 | |
| | 2145-SV3 | 6 | |
| 25 Gbps RoCE ports per node / canister | 2145-SA2, 2145-SV2 | 6 | |
| | 2145-SV3 | 12 | |
| 2-port 100 Gbps NVMe / RDMA RoCEv2 adapters per node | 2145-SV3 | 6 | |
| NVMe over Fabrics Host Properties | | | |
| NVMe Qualified Names (NQNs) per host object (ID) | | 2 | |
| NVMe over Fibre Channel hosts per system | | 64 | This limit is not policed by the IBM Storage software. Any configurations that exceed this limit might experience significant adverse performance impact |
| NVMe over Fibre Channel hosts per I/O group | | 16 | This limit is not policed by the IBM Storage software. Any configurations that exceed this limit might experience significant adverse performance impact |
| NVMe connections per FC port | | 16 | This limit is the number of FC2 logins supported |
| NVMe over RDMA hosts per system | | 1,024 | |
| NVMe over RDMA hosts per I/O group | | 256 | |
| Primary RDMA connections per port | | 256 | |
| NVMe over RDMA - host queue size | | 64 | |
| NVMe over RDMA - number of I/O queues | | 8 | |
| NVMe over TCP hosts per system | | 1,024 | |
| NVMe over TCP hosts per I/O group | | 256 | |
| NVMe over TCP - host queue size | | 128 | |
| NVMe over TCP - number of I/O queues | | 8 | |
| Copy Services Properties | | | |
| Total Remote Copy capacity per IO group (includes Metro Mirror, Volume Migration and HyperSwap) | 2145-SA2 | 1 PiB | |
| | 2145-SV2, 2145-SV3 | 2 PiB | |
| Remote Copy Metro Mirror relationships per system | | 10,000 | |
| Remote Copy Active-Active relationships (HyperSwap volumes) per system | | 2,000 | |
| Remote Copy migration relationships per system | | 256 | |
| Maximum round-trip latency for Metro Mirror, HyperSwap and Migration relationships | | 3 ms | |
| Remote Copy relationships per consistency group | | - | No limit is imposed beyond the Remote Copy relationships per system limit |
| Remote Copy consistency groups per system | | 256 | |
| 3-site Remote Copy relationships per consistency group | | 256 | |
| 3-site Remote Copy consistency groups per system | | 16 | |
| 3-site Metro Mirror Remote Copy relationships per system | | 2,500 | |
| 3-site HyperSwap Remote Copy relationships per system | | 2,000 | |
| FlashCopy Properties | | | |
| FlashCopy mappings per system | | 15,864 | |
| FlashCopy consistency groups per system | | 500 | |
| FlashCopy mappings per consistency group | | 512 | |
| FlashCopy targets per source | | 256 | |
| Snapshots per system | | 15,863 | This maximum requires a single basic volume. For each additional basic volume created, the maximum number of snapshots is reduced by one. For example, on a system with 1,000 basic volumes, the maximum number of snapshots per system is 14,864 |
| Snapshots per volume copy | | 15,863 | This maximum requires a single basic volume. For each additional volume copy created, the maximum number of snapshots is reduced by one. For example, on a system with 1,000 mirrored volumes, the maximum number of snapshots per system is 13,864 |
| Thin-Clone, Clone Volumes per system | | 15,862 | |
| Thin-Clone Volumes per Source Volume | | 15,862 | |
| Clone Volumes per Source Volume | | 15,862 | |
| Total FlashCopy bitmap allowance per I/O group | 2145-SA2, 2145-SV2 | 2 GiB | |
| | 2145-SV3 | 20 GiB | (Shared with volume mirroring) |
| - of which legacy FlashCopy can have up to | 2145-SA2, 2145-SV2 | 2 GiB | |
| | 2145-SV3 | 4 GiB | |
| Total FlashCopy volume capacity per I/O group | 2145-SA2, 2145-SV2 | 4 PiB | |
| | 2145-SV3 | 40 PiB | (Shared with volume mirroring) |
| - of which legacy FlashCopy can have up to | 2145-SA2, 2145-SV2 | 4 PiB | |
| | 2145-SV3 | 8 PiB | |
| Safeguarded policies per system | | 32 | Includes 3 predefined and 29 user-defined |
| Snapshot policies per system | | 32 | |
| Replication Properties (Policy-based replication) | | | |
| Total volume capacity per I/O group using policy-based replication (asynchronous replication or HA) | 2145-SA2, 2145-SV2 | 2 PiB | |
| | 2145-SV3 | 4 PiB | |
| Volumes using policy-based replication (asynchronous replication or HA) | | 7,932 | |
| VMware Virtual Volumes (vVols) using asynchronous policy-based replication | | See notes | Contact IBM to review the supported limits when planning to use vVol replication |
| Volume groups per system using policy-based replication | | - | No limit is imposed beyond the volume groups per system limit |
| Volume groups per system using policy-based replication for VMware Virtual Volumes (vVols) | | See notes | Contact IBM to review the supported limits when planning to use vVol replication |
| Volumes per volume group using policy-based replication | | - | No limit is imposed beyond the volumes per volume group limit |
| VMware Virtual Volumes (vVols) per volume group using policy-based replication | | See notes | Contact IBM to review the supported limits when planning to use vVol replication |
| Maximum round-trip latency for asynchronous policy-based replication using Fibre Channel partnerships | | 250 ms | |
| Maximum round-trip latency for asynchronous policy-based replication using IP partnerships | | 80 ms | |
| Maximum round-trip latency for asynchronous policy-based replication using High Speed Ethernet partnerships | | 3 ms | |
| Maximum round-trip latency for policy-based high availability | | 1 ms | |
| Replication policies per system | | 32 | |
| I/O groups per system using policy-based replication | | - | No limit is imposed beyond the I/O groups per system (cluster) limit. Note: policy-based high availability supports a maximum of one I/O group in the system |
| I/O group connections per system using policy-based replication | | 12 | Example: if I/O group 0 in system 1 is replicating to two I/O groups in system 2, this counts as two connections. If I/O group 1 in system 1 is also replicating to these same I/O groups, that counts as two additional connections, for a total of four connections |
| Total number of I/O groups in remote systems that can be configured for policy-based replication for an I/O group | | 6 | |
| Storage partitions per system | | 4 | |
| IP Partnership Properties | | | |
| Inter-cluster IP partnerships per system | | 3 | A system can be partnered with up to three remote systems |
| Inter-site links per IP partnership | | 2 | A maximum of two inter-site links can be used between two IP partnership sites |
| Ports per node per IP partnership | | 1 | A maximum of one port per node can be used for IP partnership |
| Replication over High Speed Ethernet Properties | | | |
| High speed Ethernet partnerships per system | | 1 | |
| High speed portsets per system | | 6 | |
| High speed portsets per high speed partnership | | 2 | |
| IP addresses per node per high speed portset | | 2 | |
| External Storage System Properties | | | |
| Storage system WWNNs per system (cluster) | | 1,024 | |
| Storage system WWPNs per system (cluster) | | 1,024 | |
| WWNNs per storage system | | 16 | |
| WWPNs per WWNN | | 16 | |
| WWPNs per MDisk | | 16 | WWPNs per MDisk means the limit of Storage System WWPNs that can have LUN mappings for a specific Storage System Logical Unit (LU) |
| LUNs (managed disks) per storage system | | - | No limit is imposed beyond the managed disks per system (cluster) limit |
| System and User Management Properties | | | |
| User accounts per system | | 400 | Includes the default user accounts |
| User groups per system | | 256 | Includes the default user groups |
| Ownership groups per system | | 64 | |
| Remote authentication (LDAP) services per system | | 1 | |
| Multifactor authentication services per system | | 1 | |
| Single sign-on authentication services per system | | 1 | |
| DNS servers per system | | 2 | |
| NTP servers per system | | 1 | |
| Maximum number of iSNS servers | | 1 | |
| Concurrent OpenSSH sessions per system | | 32 | |
| Two Person Integrity requests per system | | 4 | |
| Patches per node (includes Installed, Obsolete, and Error) | | 30 | |
| Event Notification Properties | | | |
| SNMP servers per system | | 6 | |
| Syslog servers per system | | 6 | |
| Email (SMTP) servers per system | | 6 | Email servers are used in turn until the email is successfully sent |
| Email users (recipients) per system | | 12 | |
| LDAP servers per system | | 6 | |
| REST API Properties | | | |
| Maximum active connections per cluster | | 4 | RESTful API |
| Maximum requests/sec to auth endpoint | | 3 | RESTful API |
| Maximum requests/sec to command endpoints | | 10 | RESTful API |
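Given the request-rate limits above, a typical pattern is to authenticate once, then reuse the returned token for command endpoints rather than re-authenticating on every call. A sketch using curl, in which the cluster IP and credentials are placeholders and the REST interface is assumed to listen on port 7443:

```
# Authenticate once (auth endpoint is limited to 3 requests/sec) and capture the token
curl -k -X POST https://192.0.2.5:7443/rest/auth \
     -H 'X-Auth-Username: superuser' -H 'X-Auth-Password: passw0rd'

# Reuse the returned token for command endpoints (limited to 10 requests/sec)
curl -k -X POST https://192.0.2.5:7443/rest/lssystem \
     -H 'X-Auth-Token: <token from the auth response>'
```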
Extents
The following table compares the maximum volume, MDisk, and system capacity for each extent size.
| Extent size | Maximum non thin-provisioned volume capacity | Maximum thin-provisioned volume capacity (for regular pools) | Maximum thin-provisioned and compressed volume size in data reduction pools | Maximum total data reduced volume capacity in a single data reduction pool per I/O group | Maximum virtualized MDisk capacity | Total storage capacity manageable per system¹ |
|---|---|---|---|---|---|---|
| 1,024 MB | 128 TB | 130 TB | 128 TB | 512 TB | 128 TB | 4 PB |
| 2,048 MB | 256 TB | 256 TB | 256 TB | 1 PB | 256 TB | 8 PB |
| 4,096 MB | 256 TB | 256 TB | 256 TB | 2 PB | 512 TB | 16 PB |
| 8,192 MB | 256 TB | 256 TB | 256 TB | 4 PB | 1 PB | 32 PB |

¹ The total capacity values assume that all of the storage pools in the system use the same extent size.
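As a worked example of the last column: the per-system limit is 2^22 (4,194,304) extents, so with 8,192 MB extents the manageable capacity is 4,194,304 × 8,192 MB = 32 PB, and with 1,024 MB extents it is 4,194,304 × 1,024 MB = 4 PB.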
Document Information
Modified date:
03 May 2024
UID
ibm17039378