Preventive Service Planning
Abstract
This document lists the configuration limits and restrictions specific to IBM Storwize V5000, V5010, V5020 and V5030 software version 7.8.x.
Content
The use of WAN optimization devices such as Riverbed is not supported in native Ethernet IP partnership configurations containing Storwize V5000.
DRAID Strip Size
For candidate drives with a capacity greater than 4 TB, a strip size of 128 cannot be specified for either RAID-5 or RAID-6 DRAID arrays. For these drives, a strip size of 256 should be used.
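For reference, a DRAID-6 array with a strip size of 256 can be created from the CLI along the following lines. This is an illustrative sketch only: the drive class ID, drive count, stripe width, rebuild areas and pool name are placeholders for your own configuration, and the parameter names should be confirmed against the 7.8 command-line reference.

    svctask mkdistributedarray -level raid6 -driveclass 1 -drivecount 12 -stripewidth 10 -rebuildareas 1 -strip 256 Pool0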
NPIV (N_Port ID Virtualization)
SAN Volume Controller and Storwize Version 7.7 introduced support for NPIV (N_Port ID Virtualization) for Fibre Channel fabric attachment. FCoE is not supported with NPIV. The following recommendations and restrictions should be followed when implementing the NPIV feature.
Operating systems not currently supported for use with NPIV:
- IBM z Systems operating systems
- RHEL6 and earlier on IBM Power
- HPUX 11iV2
- Veritas DMP multipathing on Windows with RAID-5 volumes in VxVM
- Oracle Solaris 10 (supported with v7.8.1.2 or later)
- Oracle Solaris 11 (supported with v7.8.1.2 or later)
General Requirements
Required SDD versions for IBM AIX and Microsoft Windows Environments:
- IBM AIX Operating Systems require a minimum SDDPCM version of 2.6.8.0
- Microsoft Windows requires a minimum SDDDSM version of 2.4.7.0. The latest recommended level, which resolves the issues listed below, is 2.4.7.1.
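To confirm the installed multipath driver level, the driver query commands can be used. The output format varies by driver release, so treat the following as an illustrative sketch only.

    # AIX with SDDPCM: report the installed SDDPCM level
    pcmpath query version
    # Windows with SDDDSM: report the installed SDDDSM level
    datapath query version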
User intervention may be required when changing the NPIV state from "Transitional" to "Disabled". All paths to a LUN under SDDDSM or SDDPCM may remain "Non-Optimized" after NPIV is changed from the "Transitional" to the "Disabled" state.
To resolve this issue, use the following instructions:
IBM AIX
For SDDPCM:
Run "pcmpath chgprefercntl device <device number>/<device number range>" on AIX. This will restore both Optimized and Non-Optimized paths for all the LUN's correctly.
Windows 2008 and 2012
For SDDDSM:
Run "datapath rescanhw" on Windows. This will restore both Optimized and Non-Optimized paths for all the LUN's correctly.
This issue is resolved with SDDDSM version 2.4.7.1
Windows 2008 and 2012 Non-Preferred Paths with SDDDSM
When NPIV enters the Transitional state from Disabled with all SDDDSM paths in the Non-Preferred state, the paths to the virtual ports also become Non-Preferred. This path configuration might cause I/O failures as soon as NPIV moves into the "Enabled" state.
As a workaround, configure at least one preferred path to each LUN while NPIV is in the "Disabled" state.
This issue is resolved with SDDDSM version 2.4.7.1
Solaris
Emulex HBA Settings:
When implementing NPIV on Solaris 11, the default disk I/O timeout needs to be changed to 120 seconds by adding "set sd:sd_io_time=120" to the /etc/system file. A system reboot is required for the change to take effect.
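For example, the setting can be appended to /etc/system as sketched below; this is illustrative only, assumes a standard Solaris 11 layout, and the reboot should be scheduled at a convenient time.

    # Add the disk I/O timeout setting (takes effect after the next reboot)
    echo "set sd:sd_io_time=120" >> /etc/system
    # Reboot for the setting to be applied
    init 6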
When ports on the host HBA are connected to a 16Gb SAN, NPIV is not supported.
Other Operating Systems
Other operating systems may also experience the same issue when changing the NPIV state from "Transitional" to "Disabled", in which case the operating-system-specific rescan command should be used.
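For example, on a Linux host the standard SCSI bus rescan can be used. The sketch below is illustrative only and assumes the sg3_utils and device-mapper-multipath packages are installed.

    # Rescan the SCSI bus so that paths are rediscovered
    rescan-scsi-bus.sh
    # Confirm that all paths to each LUN are back online
    multipath -ll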
NPIV mode on SVC or Storwize is only supported when used with Brocade or Cisco Fibre Channel SAN switches that are NPIV capable.
NEBS
NEBS is not currently supported on V5010, V5020 or V5030 hardware with any of the following components installed:
- FC AC58 600GB 15K 2.5 inch (SFF) disk drives
- FC AC17 600GB 15K 3.5 inch (LFF) disk drives
- FC AC0D 10Gb Ethernet adapter pair
HyperSwap
When using the HyperSwap function with software version 7.8.0.0 and higher, please configure your host multipath driver to use an ALUA-based path policy.
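As an illustration for a VMware ESXi host only (other host types configure their own multipath drivers), an ALUA-aware default path policy can be set as sketched below. The claim rule shown is an assumption to verify against your ESXi level, and it affects newly claimed devices only.

    # Use round-robin path selection by default for devices claimed by the ALUA SATP
    esxcli storage nmp satp set --satp VMW_SATP_ALUA --default-psp VMW_PSP_RR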
Due to the requirement for multiple access I/O groups, SAS-attached host types are not supported with HyperSwap volumes.
A volume configured with multiple access I/O groups, on a system in the storage layer, cannot be virtualized by a system in the replication layer. This restriction prevents a HyperSwap volume on one system being virtualized by another.
AIX Live Partition Mobility (LPM)
AIX LPM is supported with the HyperSwap function and AIX 7.x.
Clustered Systems
A Storwize V5000 system at version 7.8.0.0 and higher requires native Fibre Channel SAN or alternatively 8Gbps/16Gbps Direct Attach Fibre Channel connectivity for communication between all nodes in the local cluster. Fibre Channel over Ethernet (FCoE) connectivity for communication between all nodes in the local cluster is also supported.
Partnerships between systems for Metro Mirror or Global Mirror replication can use Fibre Channel, native Ethernet (IP) or FCoE connectivity; however, direct FCoE links are only supported up to a maximum of 300 metres. Distances greater than 300 metres are only supported when using an FCIP link or Fibre Channel between source and target.
Storwize V5030 systems can be added into existing Storwize V5000 clustered systems that include previous generation Storwize V5000 systems. All systems within a cluster must be using the same version of Storwize V5000 software.
When clustering a V5000 with a V5030, NEBS must be disabled on the V5030.
Direct Attachment
For information on supported configurations using direct attachment, please see the following document:
Direct Attachment of Storwize and SAN Volume Controller Systems
16Gbps Fibre Channel Canister Connection
Please see SSIC for the 16Gbps Fibre Channel configurations supported with 16Gbps node hardware. Note that 16Gbps node hardware is supported only when connected to Brocade or Cisco 8Gbps or 16Gbps fabrics. Direct connections to 2Gbps or 4Gbps SANs, or direct host attachment to 2Gbps or 4Gbps ports, are not supported. Other configured switches which are not directly connected to the 16Gbps node hardware can be any supported fabric switch as currently listed in SSIC.
IP Partnership
Using an Ethernet switch to convert a 10Gbps IP partnership link to a 1Gbps link, or vice versa, is not supported. The IP infrastructure at the two partnership sites should therefore both be 1Gbps or both be 10Gbps. However, bandwidth limiting on 10Gbps and 1Gbps IP partnerships between sites is supported.
Fabric Limitation
Only one FCF (Fibre Channel Forwarder) switch per fabric is supported.
VMware vSphere Virtual Volumes (VVols)
The maximum number of Virtual Machines on a single VMware ESXi host in a Storwize / VVol storage configuration is limited to 680.
The use of VMware vSphere Virtual Volumes (VVols) on a system that is configured for HyperSwap is not currently supported with SVC/Storwize.
DS4000 Maintenance
Storwize V5000 supports concurrent ESM firmware upgrades for those DS4000 models listed as such on the Supported Hardware List when they are running either 06.23.05.00 or later controller firmware. However, controllers running firmware levels earlier than 06.23.05.00 will not be supported for concurrent ESM upgrades. Customers in this situation, who wish to gain support for concurrent ESM upgrades, will need to first upgrade the DS4000 controller firmware level to 06.23.05.00. This action is a controller firmware upgrade, not an ESM upgrade and concurrent controller firmware upgrades are already supported in conjunction with Storwize V5000. Once the controller firmware is at 06.23.05.00 or later the ESM firmware can be upgraded concurrently.
Note: The DS4000 ESM firmware upgrade must be done on one disk expansion enclosure at a time. A 10-minute delay is required from when one enclosure is upgraded to the start of the upgrade of another enclosure. Confirm via the Storage Manager application's "Recovery Guru" that the DS4000 status is in an optimal state before upgrading the next enclosure. If it is not, do not continue ESM firmware upgrades until the problem is resolved.
Host Limitations
Microsoft Offload Data Transfer (ODX) and SDDDSM Requirements
Storwize V5000 version 7.5.0 introduced support for Microsoft ODX. In order to use this function, all Windows hosts accessing Storwize V5000 are required to be at a minimum SDDDSM version of 2.4.5.0. Earlier versions of SDDDSM are not supported when the ODX function is activated.
Windows NTP server
The Linux NTP client used by Storwize V5000 may not always function correctly with the Windows W32Time NTP server.
Windows 2012 Hyper-V + 12Gb SAS (Host)
Windows 2012 Hyper-V with 12Gb SAS HBA connectivity to both the V3700 and V5000 is currently unsupported at version 7.8.x. These configurations are supported at version 7.7.x. We are planning to resolve this restriction as soon as possible.
Oracle
Oracle Version and OS | Restrictions that apply
Oracle RAC 10g on Linux Host | 1

Restriction 1: For RHEL4, set the Oracle Clusterware 'misscount' parameter to a larger value to allow SDD to complete path failover first. The default misscount setting of 60 seconds is too short for SDD; we recommend setting it to 90 or 120 seconds. Command to use: crsctl set css misscount 90
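For example (illustrative only; run on one RAC node as root from the Clusterware home):

    # Display the current CSS misscount value, then raise it to 90 seconds
    crsctl get css misscount
    crsctl set css misscount 90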
Priority Flow Control for iSCSI
Priority Flow Control for iSCSI is supported on Brocade VDX 10-gigabit Ethernet switches only.
SCSI LUN ID 0
For SAS hosts running Linux or VMware operating systems, removal of LUNs mapped to SCSI ID 0 is not supported and may result in a loss of access to the remaining LUNs.
Maximum Configurations
Configuration limits for Storwize V5000:
Property | Maximum Number | Comments

System (Cluster) Properties
Control enclosures per system (cluster) | 2 | Each control enclosure contains two node canisters
Nodes per system | 4 | Arranged as two I/O groups
Nodes per fabric | 64 | Maximum number of SVC and V5000 nodes that can be present on the same Fibre Channel fabric, with visibility of each other
Fabrics per system | 6 | The number of counterpart Fibre Channel SANs which are supported: up to 4 fabrics using native Fibre Channel ports; up to 2 fabrics using FCoE ports
Inter-cluster partnerships per system | 3 | A system may be partnered with up to three remote systems. No more than four systems may be in the same connected set. A maximum of 1 IP partnership is supported per system.
USB ports | 2 to 16 |
IP Quorum devices per system | 5 |

Node Properties
Logins per node Fibre Channel WWPN | 512 | Includes logins from server HBAs, disk controller ports, node ports within the same system and node ports from remote systems
Fibre Channel buffer credits per port - 8Gbps FC adapter | 255 | The number of credits granted by the switch to the node
Fibre Channel buffer credits per port - 16Gbps FC adapter | 4095 | The number of credits granted by the switch to the node
iSCSI sessions per node | 1024 | 2048 in IP failover mode (when partner node is unavailable)

Managed Disk Properties
Managed disks (MDisks) per system | 4096 | The maximum number of logical units which can be managed by a system, including internal arrays. Internal distributed arrays consume 16 logical units. This number also includes external MDisks which have not been configured into storage pools (managed disk groups)
Managed disks per storage pool (managed disk group) | 128 |
Storage pools per system | 1024 |
Parent pools per system | 128 |
Child pools per system | 1023 |
Managed disk extent size | 8192 MB |
Capacity for an individual internal managed disk (array) | - | No limit is imposed beyond the maximum number of drives per array. Maximum size depends on the extent size of the storage pool; see the Extents comparison table below.
Capacity for an individual external managed disk | 1 PB | Note: external managed disks larger than 2 TB are only supported for certain types of storage systems. Refer to the supported hardware matrix for further details. Maximum size depends on the extent size of the storage pool; see the Extents comparison table below.
Total storage capacity manageable per system | 32 PB | Maximum requires an extent size of 8192 MB to be used. This limit represents the per-system maximum of 2^22 extents. See the Extents comparison table below.

Volume (Virtual Disk) Properties
Basic Volumes (VDisks) per V5000 Gen1 system | 8192 | Each Basic Volume uses 1 VDisk, each with one copy. Maximum requires a system containing two control enclosures; refer to the volumes per I/O group limit below
Basic Volumes (VDisks) per V5030 system | 8192 | Each Basic Volume uses 1 VDisk, each with one copy. Maximum requires a system containing two control enclosures; refer to the volumes per I/O group limit below
Basic Volumes (VDisks) per V5020 system | 4096 | Each Basic Volume uses 1 VDisk, each with one copy. Maximum requires a system containing two control enclosures; refer to the volumes per I/O group limit below
Basic Volumes (VDisks) per V5010 system | 2048 | Each Basic Volume uses 1 VDisk, each with one copy. Maximum requires a system containing two control enclosures; refer to the volumes per I/O group limit below
HyperSwap volumes per system | 1024 | Each HyperSwap volume uses 4 VDisks, each with one copy, 1 active-active remote copy relationship and 4 FlashCopy mappings.
Volumes per I/O group (volumes per caching I/O group) - V5000 Gen1 | 8192 |
Volumes per I/O group (volumes per caching I/O group) - V5030 | 8192 |
Volumes per I/O group (volumes per caching I/O group) - V5020 | 4096 |
Volumes per I/O group (volumes per caching I/O group) - V5010 | 2048 |
Volumes accessible per I/O group - V5000 Gen1 | 8192 |
Volumes accessible per I/O group - V5030 | 8192 |
Volumes accessible per I/O group - V5020 | 4096 |
Volumes accessible per I/O group - V5010 | 2048 |
Thin-provisioned (space-efficient) volume copies per system | 8192 | No limit is imposed here beyond the volume copies per system limit.
Compressed volume copies per V5030 system | 400 | Maximum requires a system containing two control enclosures; refer to the compressed volume copies per I/O group limit below
Compressed volume copies per V5030 I/O group | 200 |
Volumes per storage pool | - | No limit is imposed beyond the volumes per system limit
Fully-allocated volume capacity | 256 TB | Maximum size for an individual fully-allocated volume. Maximum size depends on the extent size of the storage pool; see the Extents comparison table below.
Thin-provisioned (space-efficient) volume capacity | 256 TB | Maximum size for an individual thin-provisioned volume. Maximum size depends on the extent size of the storage pool; see the Extents comparison table below.
HyperSwap volume capacity in a single I/O group using RAID | 850 TiB | This is due to the limit on bitmap space for mirroring and replication in each I/O group. See the Knowledge Center for details.
Host mappings per system | 20,000 | See also the volume mappings per host object limit below

Mirrored Volume (Virtual Disk) Properties
Copies per volume | 2 |
Volume copies per system | 8192 |
Total mirrored volume capacity per I/O group | 1024 TB |

Generic Host Properties
Host objects (IDs) per system | 512 | A host object may contain both Fibre Channel ports and iSCSI names
Host objects (IDs) per I/O group | 256 | Refer to the additional Fibre Channel and iSCSI host limits below
Volume mappings per host object | 512 |
Total Fibre Channel ports and iSCSI names per system | 4096 |
Total Fibre Channel ports and iSCSI names per I/O group | 2048 |
Total Fibre Channel ports and iSCSI names per host object | 32 |
iSCSI names per host object (ID) | 8 |

Host Cluster Properties
Host clusters per system | 512 |
Hosts in a host cluster | 128 |

Fibre Channel Host Properties (including hosts attached using FCoE)
Fibre Channel hosts per system | 512 |
Fibre Channel host ports per system | 4096 |
Fibre Channel hosts per I/O group | 256 |
Fibre Channel host ports per I/O group | 2048 |
Fibre Channel host ports per host object (ID) | 32 |

iSCSI Host Properties
iSCSI hosts per system | 1024 |
iSCSI hosts per I/O group | 256 |
iSCSI names per host object (ID) | 8 |
iSCSI names per I/O group | 512 |
iSCSI (SCSI 3) registrations per VDisk | 512 |

Copy Services Properties
Remote Copy (Metro Mirror and Global Mirror) relationships per system | 4096 | This can be any mix of Metro Mirror and Global Mirror relationships.
Active-Active Relationships | 1024 | This is the limit for the number of HyperSwap volumes in a system
Remote Copy relationships per consistency group | - | No limit is imposed beyond the Remote Copy relationships per system limit. Refer to the Changes to support for Global Mirror with Change Volumes page for information relating to GMCV performance considerations and best practice.
Remote Copy consistency groups per system | 256 |
Total Metro Mirror and Global Mirror volume capacity per I/O group | 1024 TB | This limit is the total capacity for all master and auxiliary volumes in the I/O group.
Total number of Global Mirror with Change Volumes relationships per system | 256 |
FlashCopy mappings per system | 4096 |
FlashCopy targets per source | 256 |
FlashCopy mappings per consistency group | 512 |
FlashCopy consistency groups per system | 500 |
Total FlashCopy volume capacity per I/O group | 4096 TB |

IP Partnership Properties
Inter-cluster IP partnerships per system | 1 | A system may be partnered with up to three remote systems. A maximum of one of those can be IP and the other two FC.
I/O groups per system | 2 | The nodes from a maximum of two I/O groups per system can be used for IP partnership.
Inter-site links per IP partnership | 2 | A maximum of two inter-site links can be used between two IP partnership sites.
Ports per node | 1 | A maximum of one port per node can be used for IP partnership.
IP partnership Software Compression Limit - Storwize V5030 | 140 MB/s |

Internal Storage Properties
SAS chains per control enclosure | 2 (V5000 Gen1), 1 (V5010/V5020), 2 (V5030) |
Enclosures per SAS chain | 9/10 (V5000 Gen1: up to 10 on SAS port 1 and up to 9 on SAS port 2), 10 (V5010/V5020), 10 (V5030) |
Expansion enclosures per control enclosure | 19 (V5000 Gen1), 10 (V5010/V5020), 20 (V5030) |
Drives per I/O group | 480 (V5000 Gen1), 392 (V5010/V5020), 504 (V5030) |
Drives per system | 960 (V5000 Gen1), 392 (V5010/V5020), 1008 (V5030) | Maximum requires a system containing two control enclosures, each with the maximum number of expansion enclosures
Min-Max drives per enclosure | 0-12 or 0-24 | Limit depends on the enclosure model

Non-Distributed RAID Array Properties
Arrays per system | 128 |
Drives per array | 16 |
Min-Max member drives per RAID-0 array | 1-8 |
Min-Max member drives per RAID-1 array | 2-2 |
Min-Max member drives per RAID-5 array | 3-16 |
Min-Max member drives per RAID-6 array | 5-16 |
Min-Max member drives per RAID-10 array | 2-16 |
Hot spare drives | - | No limit is imposed

Distributed RAID Array Properties
Arrays per system | 20 | The presence of non-DRAID arrays will reduce this limit
Encrypted arrays per system | 20 | The presence of non-DRAID arrays will reduce this limit
Arrays per I/O group | 10 | The presence of non-DRAID arrays will reduce this limit
Drives per array | 128 |
Min-Max member drives per RAID-5 array | 4-128 |
Min-Max member drives per RAID-6 array | 6-128 |
Rebuild areas per array | 1-4 |
Min-Max stripe width for RAID-5 array | 3-16 |
Min-Max stripe width for RAID-6 array | 5-16 |
Max drive capacity for RAID-5 array | 6 TB |

External Storage System Properties
Storage system WWNNs per system (cluster) | 1024 |
Storage system WWPNs per system (cluster) | 1024 |
WWNNs per storage system | 16 |
WWPNs per WWNN | 16 |
LUNs (managed disks) per storage system | - | No limit is imposed beyond the managed disks per system limit

System and User Management Properties
User accounts per system | 400 | Includes the default user accounts
User groups per system | 256 | Includes the default user groups
Authentication servers per system | 1 |
NTP servers per system | 1 |
iSNS servers per system | 1 |
Concurrent open SSH sessions per system | 32 |

Event Notification Properties
SNMP servers per system | 6 |
Syslog servers per system | 6 |
Email (SMTP) servers per system | 6 | Email servers are used in turn until the email is successfully sent
Email users (recipients) per system | 12 |
LDAP servers per system | 6 |
Extents
The following table compares the maximum volume, MDisk and system capacity for each extent size.
Extent size (MB) | Maximum non thin-provisioned volume capacity in GB | Maximum thin-provisioned volume capacity in GB | Maximum compressed volume size** | Maximum MDisk capacity in GB | Maximum DRAID MDisk capacity in TB | Total storage capacity manageable per system*
16 | 2048 (2 TB) | 2000 | | 2048 (2 TB) | 32 | 64 TB
32 | 4096 (4 TB) | 4000 | | 4096 (4 TB) | 64 | 128 TB
64 | 8192 (8 TB) | 8000 | | 8192 (8 TB) | 128 | 256 TB
128 | 16,384 (16 TB) | 16,000 | | 16,384 (16 TB) | 256 | 512 TB
256 | 32,768 (32 TB) | 32,000 | | 32,768 (32 TB) | 512 | 1 PB
512 | 65,536 (64 TB) | 65,000 | | 65,536 (64 TB) | 1024 (1 PB) | 2 PB
1024 | 131,072 (128 TB) | 130,000 | 96 TB** | 131,072 (128 TB) | 2048 (2 PB) | 4 PB
2048 | 262,144 (256 TB) | 260,000 | 96 TB** | 262,144 (256 TB) | 4096 (4 PB) | 8 PB
4096 | 262,144 (256 TB) | 262,144 | 96 TB** | 524,288 (512 TB) | 8192 (8 PB) | 16 PB
8192 | 262,144 (256 TB) | 262,144 | 96 TB** | 1,048,576 (1024 TB) | 16,384 (16 PB) | 32 PB
* The total capacity values assume that all of the storage pools in the system use the same extent size.
** Please see the following Flash.
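As a worked cross-check of the table above, the per-system total is simply the 2^22-extent limit multiplied by the extent size:

$$2^{22} \times 8192\,\text{MB} = 2^{35}\,\text{MB} = 32\,\text{PB}, \qquad 2^{22} \times 16\,\text{MB} = 2^{26}\,\text{MB} = 64\,\text{TB}$$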