
V8.4.0.x Configuration Limits and Restrictions for IBM FlashSystem 5x00

Preventive Service Planning


Abstract

This document lists the configuration limits and restrictions specific to IBM FlashSystem 5000, 5100 and 5200 family software version 8.4.0.x.

Content

The use of WAN optimization devices such as Riverbed is not supported in native Ethernet IP partnership configurations containing FlashSystem 5x00.


Data Reduction Pools
The following restrictions apply for Data Reduction Pools (DRP):

  1. VMware vSphere Virtual Volumes (vVols) are not supported in a DRP.
  2. A volume in a DRP cannot be shrunk.
  3. A volume in a DRP cannot be moved between I/O groups (use FlashCopy or Metro Mirror / Global Mirror instead).
  4. A volume mirror cannot be split to a copy in a different I/O group.
  5. Real/used/free/tier capacities are not reported per volume - only per pool.

Traditional RAID
FlashSystem 5000, 5100 and 5200 systems do not support traditional RAID-5 or RAID-6 arrays.


DRAID Strip Size
For candidate drives with a capacity greater than 4 TB, a strip size of 128 cannot be specified for RAID-5 or RAID-6 DRAID arrays; a strip size of 256 should be used for these drives.
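
As a simple illustration of this rule, the following Python snippet (a hypothetical helper, not part of any IBM tooling) selects a valid strip size for a candidate drive:

    # Hypothetical helper illustrating the strip-size restriction above:
    # candidate drives larger than 4 TB must use a strip size of 256.
    FOUR_TB_BYTES = 4 * 1000 ** 4  # drive capacities are quoted in decimal TB

    def draid_strip_size(drive_capacity_bytes: int) -> int:
        """Return a strip size that is valid for the given candidate drive."""
        return 256 if drive_capacity_bytes > FOUR_TB_BYTES else 128

    print(draid_strip_size(8 * 1000 ** 4))  # 8 TB drive -> 256
    print(draid_strip_size(2 * 1000 ** 4))  # 2 TB drive -> 128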


Nondisruptive Volume Move (NDVM)
The following Fibre Channel attached host types are supported for nondisruptively moving a volume between I/O groups (control enclosures):

Host Operating System | Host Multipathing | Host Clustering | Notes
AIX 7.2 | AIXPCM | Nondisruptive volume move may result in the same volume being mapped to different hosts in the same host cluster using different SCSI IDs. If the host cluster cannot tolerate this configuration then nondisruptive volume move cannot be used. | SAN boot is supported. NPIV is supported.
Microsoft Windows 2019 | MSDSM | Hyper-V Failover Cluster | SAN boot is supported
Microsoft Windows 2016 | MSDSM | Hyper-V Failover Cluster | SAN boot is supported
RedHat 8 | Native | | The original paths may need to be manually removed on the host after removing access to the old I/O group
SLES 15 | Native | | The original paths may need to be manually removed on the host after removing access to the old I/O group
VMware 6.7 | Native | | VAAI is supported
VMware 6.5 | Native | | VAAI is supported
Solaris 11.3 SPARC | MPXIO | | SAN boot is supported

Note: For all other host types, I/O should be quiesced when moving a volume.

When moving a volume that is mapped to a host cluster, you must rescan disk paths on all host cluster nodes to ensure the new paths have been detected before removing access from the original I/O group.
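
For example, on a Linux host cluster the rescan could be scripted along the following lines (a minimal sketch, assuming passwordless SSH to each node and the standard sg3_utils and DM-Multipath tools; the host names are placeholders):

    # Sketch: rescan SCSI paths on every node of a host cluster before
    # access to the original I/O group is removed. Assumes rescan-scsi-bus.sh
    # (sg3_utils) and device-mapper multipath are installed on the hosts.
    import subprocess

    CLUSTER_NODES = ["hostnode1", "hostnode2"]  # placeholder host names

    for node in CLUSTER_NODES:
        # Discover any new paths presented by the new I/O group ...
        subprocess.run(["ssh", node, "rescan-scsi-bus.sh"], check=True)
        # ... then reload the multipath maps and display the path state.
        subprocess.run(["ssh", node, "multipath -r && multipath -ll"], check=True)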


Clustered Systems
A system requires native Fibre Channel SAN or 16Gbps/32Gbps direct-attach Fibre Channel connectivity for communication between all nodes in the local cluster. Clustering can also be accomplished with 25Gbps Ethernet for standard topologies. For HyperSwap topologies a SCORE request is required; contact your IBM representative to raise a SCORE request. Note that this is only supported on FS5100 systems.

Partnerships between systems for Metro Mirror or Global Mirror replication can use either Fibre Channel or native Ethernet connectivity. Distances greater than 300 meters are only supported when using an FCIP link or Fibre Channel between source and target.

All systems within a cluster must be using the same version of FlashSystem 5x00 software.

Model | Clustering over Fibre Channel | Clustering over 25Gb Ethernet | HyperSwap over Fibre Channel | HyperSwap over Ethernet (25Gb only) | Metro / Global Mirror replication over Fibre Channel | Metro / Global Mirror replication over Ethernet (10Gb or 25Gb)
5010/5015 | Not supported | Not supported | Not supported | Not supported | Yes | Yes
5030/5035 | Yes, up to 2 I/O groups | Not supported | Yes, up to 2 I/O groups | Not supported | Yes | Yes
5100/5200 | Yes, up to 4 I/O groups | Yes, up to 4 I/O groups | Yes, up to 4 I/O groups | Yes, up to 4 I/O groups | Yes | Yes


Transparent Cloud Tiering
Transparent cloud tiering is available on FS5100 and FS5200 systems only and is subject to configuration limits and rules. See the Transparent Cloud Tiering maximum limits page in IBM Documentation for details.

The following restrictions apply for Transparent Cloud Tiering:

  1. When a cloud account is created, it must continue to use the same encryption type throughout the life of the data in that cloud account. Even if the cloud account object is removed and remade on the system, the encryption type for that cloud account cannot be changed while backup data for that system exists in the cloud provider.
  2. When performing rekey operations on a system that has an encryption-enabled cloud account, perform the commit operation immediately after the prepare operation. Retain the previous system master key (on USB or in the key server), because this key may still be needed to retrieve your cloud backup data when performing a T4 recovery or an import.
  3. The restore_uid option should not be used when a backup is imported to a new cluster.
  4. Import of TCT data is only supported from systems whose backup data was created at v7.8.0.1.
  5. Transparent cloud tiering uses Signature Version 2 (SigV2) when connecting to Amazon regions and does not currently support regions that require Signature Version 4 (SigV4).


NPIV (N_Port ID Virtualization)

The following recommendations and restrictions should be followed when implementing the NPIV feature.

FCoE is not supported with NPIV.

Operating systems not currently supported for use with NPIV:

  • HPUX 11iV2
  • Veritas DMP multipathing on Windows with RAID-5 volumes in VxVM
  • IBM i in direct attach

Other Operating Systems
Other operating systems may also experience path issues when changing the NPIV state from "Transitional" to "Disabled"; in this case the operating-system-specific rescan command should be used.

Fabric Attachment
NPIV mode on SVC or Storwize is only supported when used with Brocade or Cisco Fibre Channel SAN switches that are NPIV capable.


HyperSwap
When using the HyperSwap function, configure your host multipath driver to use an ALUA-based path policy.

Due to the requirement for multiple access I/O groups, SAS-attached host types are not supported with HyperSwap volumes.

A volume configured with multiple access I/O groups on a system in the storage layer cannot be virtualized by a system in the replication layer. This restriction prevents a HyperSwap volume on one system being virtualized by another.

AIX Live Partition Mobility (LPM)
AIX LPM is supported with the HyperSwap function and AIX 7.x.


Direct Attachment
SAN boot on Windows 2019 (Qlogic HBA) is not supported with 32Gb direct-attached systems.


Fibre Channel Canister Connection

Visit the IBM System Storage Inter-operation Center (SSIC) for Fibre Channel configurations supported with node HBA hardware.

Auto-negotiation with 32, 16 and 8Gbps networks is supported.

Note: 16Gbps node hardware is supported only when connected to Brocade and Cisco 8Gbps or 16Gbps fabrics.

Direct connection to a 2Gbps or 4Gbps SAN, or direct host attachment to 2Gbps or 4Gbps ports, is not supported.

Other configured switches that are not directly connected to node HBA hardware can be any supported fabric switch as currently listed in SSIC.


25Gbps Ethernet Canister Connection

For 50xx Products, one optional 2-port 25Gbps Ethernet adapter is supported in each node canister for iSCSI communication with iSCSI capable Ethernet ports in hosts via Ethernet switches.

For 5200 Products, two optional 2-port 25Gbps Ethernet adapters are supported in each node canister for iSCSI communication with iSCSI capable Ethernet ports in hosts via Ethernet switches.

Two types of 25Gbps Ethernet adapter feature are supported:

1) RDMA over Converged Ethernet (RoCE)

2) Internet Wide-area RDMA Protocol (iWARP)

Either option works for standard iSCSI communication, that is, without using Remote Direct Memory Access (RDMA). A future software release will add RDMA links that use protocols such as NVMe over Ethernet.

When use of RDMA with a 25Gbps Ethernet adapter becomes possible, RDMA links will only work between RoCE ports or between iWARP ports: that is, from a RoCE node canister port to a RoCE port on a host, or from an iWARP node canister port to an iWARP port on a host.

For Ethernet switches and adapters supported in hosts, visit the SSIC.

Example of a RoCE adapter for use in a host:
https://docs.nvidia.com/networking/display/cx4lxen

Example of an iWARP adapter for use in a host:
https://www.chelsio.com/nic/unified-wire-adapters/t6225-cr/

The 25Gbps adapters are supplied with SFP28 transceivers fitted, which can be used to connect to switches using OM3 optical cables.


IP Partnership
IP partnerships are supported on any of the available Ethernet ports. Using an Ethernet switch to convert a 25Gb to a 1Gb IP partnership, or a 10Gb to a 1Gb IP partnership is not supported. Therefore, the IP infrastructure on both partnership sites must match. Bandwidth limiting on IP partnerships between both sites is supported.


VMware vSphere Virtual Volumes (vVols)
The maximum number of virtual machines on a single VMware ESXi host in a Storwize / vVol storage configuration is limited to 680.

The use of VMware vSphere Virtual Volumes (vVols) on a system that is configured for HyperSwap is not currently supported with SVC / Storwize.


REST API
It is not possible to access the REST API by using a cluster's IPv6 address.
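
The general access pattern, shown here as a minimal Python sketch with a placeholder IPv4 address and credentials, is to obtain a token from the authentication endpoint and pass it with subsequent command requests; verify the endpoint details against the Spectrum Virtualize REST API documentation for your code level:

    # Minimal sketch of REST API access over an IPv4 cluster address.
    # The address and credentials are placeholders; verify=False is shown
    # only because many systems use a self-signed certificate by default.
    import requests

    SYSTEM_IP = "192.0.2.10"  # cluster management IPv4 address (placeholder)
    BASE = f"https://{SYSTEM_IP}:7443/rest"

    # Authenticate (note the limit of 3 requests per second to the auth endpoint).
    auth = requests.post(f"{BASE}/auth",
                         headers={"X-Auth-Username": "superuser",
                                  "X-Auth-Password": "password"},
                         verify=False)
    token = auth.json()["token"]

    # Issue a command request (limited to 10 requests per second) using the token.
    info = requests.post(f"{BASE}/lssystem", headers={"X-Auth-Token": token}, verify=False)
    print(info.json())

The connection and rate limits that apply to the REST API are listed in the Maximum Configurations table below.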


Host Limitations

SAN BOOT function on AIX 7.2 TL5
SAN boot is not supported for AIX 7.2 TL5 when connected using the FC-NVMe protocol.

RDM Volumes attached to guests in VMware 7.0
Using RDM (raw device mapping) volumes attached to guests with the RoCE iSER protocol results in pathing issues or an inability to boot the guest.

N2225 / N2226 SAS HBA
VMware 6.7 (Guest O/S SLES12SP4) connected via SAS N2225 / N2226 host adapters is not supported.

Lenovo 430-16e/8e SAS HBA
VMware 6.7 and 6.5 (Guest O/S SLES12SP4) connected via SAS Lenovo 430-16e/8e host adapters are not supported.
Windows 2019 and 2016 connected via SAS Lenovo 430-16e/8e host adapters are not supported.

Windows 2016 HyperV
RHEL v7.1 guests on Windows 2016 HyperV, with Virtual Fibre Channel, are not supported.

iSER
Operating systems not currently supported for use with iSER:

  • Windows 2012 R2 using Mellanox ConnectX-4 Lx EN
  • Windows 2016 using Mellanox ConnectX-4 Lx EN

Windows NTP server 
The Linux NTP client used by SAN Volume Controller may not always function correctly with Windows W32Time NTP Server.


Fabric Limitation
Only one FCF (Fibre Channel Forwarder) switch per fabric is supported.

Storage connected directly to a Cisco Fabric Extender (FEX) is not supported.


Priority Flow Control for iSCSI / iSER
Priority Flow Control for iSCSI/iSER is supported on Emulex and Chelsio adapters (SVC supported) with all DCBX-enabled switches.


Maximum Configurations

Configuration limits for FlashSystem 5x00:

Property | Hardware Type | Maximum Number | Comments

System (Cluster) Properties
Control enclosures per system (cluster) | FS5010, FS5015 | 1 | Each control enclosure contains two node canisters
Control enclosures per system (cluster) | FS5030, FS5035, FS5100 | 2 |
Control enclosures per system (cluster) | FS5200 | 4 |
Nodes per system | FS5010, FS5015 | 2 |
Nodes per system | FS5030, FS5100 | 4 | Arranged as two I/O groups
Nodes per system | FS5200 | 8 | Arranged as four I/O groups
Nodes per fabric | All | 64 | Maximum number of Spectrum Virtualize nodes that can be present on the same Fibre Channel fabric, with visibility of each other
Fabrics per system | All | 6 | The number of counterpart Fibre Channel SANs that are supported: up to 4 fabrics using native Fibre Channel ports
Inter-cluster partnerships per system | All | 3 | A system can be partnered with up to three remote systems. No more than four systems can be in the same connected set. A maximum of 1 IP partnership is supported per system.
IP Quorum devices per system | All | 5 |
Data encryption keys per system | All | 1024 |
Node Properties
Logins per node Fibre Channel WWPN | All | 512 | Includes logins from server HBAs, disk controller ports, node ports within the same system and node ports from remote systems
Fibre Channel buffer credits per port | All | 4095 | The number of credits granted by the switch to the node
iSCSI sessions per node | All | 1024 | 2048 in IP failover mode (when the partner node is unavailable). This limit includes both iSCSI Host Attach and iSCSI Initiator sessions
Managed Disk Properties
Managed disks (MDisks) per system | All | 4096 | The maximum number of logical units that can be managed by a system, including internal arrays. Internal distributed arrays consume 16 logical units. This number also includes external MDisks that have not been configured into storage pools (managed disk groups).
Managed disks per storage pool (managed disk group) | All | 128 |
Storage pools per system | All | 1024 |
Parent pools per system | All | 128 |
Child pools per system | All | 1023 |
Managed disk extent size | All | 8192 MB |
Capacity for an individual internal managed disk (array) | All | - | No limit is imposed beyond the maximum number of drives per array limits. Maximum size depends on the extent size of the Storage Pool; see the Extents comparison table below.
Capacity for an individual external managed disk | All | 1 PB | External managed disks larger than 2 TB are only supported for certain types of storage systems; refer to the supported hardware matrix for further details. Maximum size depends on the extent size of the Storage Pool; see the Extents comparison table below.
Total storage capacity manageable per system | All | 32 PB | The maximum requires an extent size of 8192 MB to be used. This limit represents the per-system maximum of 2^22 extents. See the Extents comparison table below.
Data Reduction Pool Properties
Data Reduction Pools per system | All | 4 |
MDisks per Data Reduction Pool | All | 128 |
Volume copies per Data Reduction Pool | All | 8192 - (number of Data Reduction Pools x 12) | See the worked example following this section
Extents per I/O group per Data Reduction Pool | All | 128000 |
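
As a worked example of the volume-copy formula above (illustrative Python only):

    # Volume copies allowed in each data reduction pool, given the number
    # of DRPs configured on the system (maximum of 4 DRPs per system).
    def drp_volume_copy_limit(num_drps: int) -> int:
        return 8192 - num_drps * 12

    print(drp_volume_copy_limit(1))  # 8180
    print(drp_volume_copy_limit(4))  # 8144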
Volume (Virtual Disk) Properties
Basic Volumes (VDisks) per system | All | 8192 | Each Basic Volume uses 1 VDisk, each with one copy. Maximum requires a system containing two control enclosures; refer to the volumes per I/O group limit.
HyperSwap volumes per system | FS5030, FS5035 | 1250 | Each HyperSwap volume uses 4 VDisks, each with one copy, 1 active-active remote copy relationship and 4 FlashCopy mappings.
HyperSwap volumes per system | FS5100, FS5200 | 2000 |
Volumes per I/O group (volumes per caching I/O group) | All | 8192 |
Volumes accessible per I/O group | All | 8192 |
Volumes per storage pool | All | - | No limit is imposed beyond the volumes per system limit
Fully allocated volume capacity | All | 256 TB | Maximum size for an individual fully allocated volume. Maximum size depends on the extent size of the Storage Pool; see the Extents comparison table below.
Thin-provisioned (space-efficient) per-volume capacity for volume copies in regular and data reduction pools | All | 256 TB | Maximum size for an individual thin-provisioned volume. Maximum size depends on the extent size of the Storage Pool; see the Extents comparison table below.
HyperSwap volume capacity in a single I/O group using RAID | All | 850 TiB | This is due to the limit on bitmap space for mirroring and replication in each I/O group. See IBM Documentation for details.
Host mappings per system | All | 20,000 | See also: volume mappings per host object
Mirrored Volume (Virtual Disk) Properties
Copies per volume | All | 2 |
Volume copies per system | All | 8192 |
Total mirrored volume capacity per I/O group | All | 1024 TB |
Host Properties
Host objects (IDs) per system | FS5010, FS5015 | 256 |
Host objects (IDs) per system | FS5030, FS5035, FS5100 | 512 |
Host objects (IDs) per system | FS5200 | 1024 |
Host objects (IDs) per I/O group | All | 256 |
Volume mappings per host object | All | 512 |

Host Cluster Properties
Host clusters per system | All | 512 |
Hosts in a host cluster | All | 128 |
Fibre Channel Host Properties
Fibre Channel hosts per system | FS5010, FS5015 | 256 |
Fibre Channel hosts per system | FS5030, FS5035, FS5100 | 512 |
Fibre Channel hosts per system | FS5200 | 1024 |
Fibre Channel host ports per system | FS5010, FS5015 | 2048 |
Fibre Channel host ports per system | FS5030, FS5035, FS5100 | 4096 |
Fibre Channel host ports per system | FS5200 | 8192 |
Fibre Channel hosts per I/O group | All | 256 |
Fibre Channel host ports per I/O group | All | 2048 |
Fibre Channel host ports per host object (ID) | All | 32 |
iSCSI Host Properties
iSCSI hosts per system | FS5010, FS5015 | 256 |
iSCSI hosts per system | FS5030, FS5035, FS5100 | 512 |
iSCSI hosts per system | FS5200 | 1024 |
iSCSI hosts per I/O group | All | 256 |
iSCSI names per host object (ID) | All | 4 |
iSCSI names per I/O group | All | 512 |
NVMe over Fibre Channel Host Properties
FC-NVMe hosts per system | FS5100 | 32 | This limit is not policed by the IBM Storage Virtualize software. Any configuration that exceeds this limit may experience significant adverse performance impact.
FC-NVMe hosts per system | FS5200 | 64 |
FC-NVMe hosts per I/O group | FS5100, FS5200 | 16 | This limit is not policed by the IBM Storage Virtualize software. Any configuration that exceeds this limit may experience significant adverse performance impact.
Fibre Channel logins per FC-NVMe WWPN | FS5100, FS5200 | 16 | This is the number of FC2 logins supported.
NVMe Qualified Names (NQNs) per host object (ID) | FS5100, FS5200 | 2 |
Copy Services Properties
Remote Copy (Metro Mirror and Global Mirror) relationships per system | All | 4096 | This can be any mix of Metro Mirror and Global Mirror relationships.
Active-Active Relationships | FS5010, FS5015 | 0 |
Active-Active Relationships | FS5030, FS5035 | 1250 | This is the limit for the number of HyperSwap volumes in a system
Active-Active Relationships | FS5100, FS5200 | 2000 |
Maximum round-trip latency for FC replication | All | 80ms |
Remote Copy relationships per consistency group | All | - | No limit is imposed beyond the Remote Copy relationships per system limit. Refer to the "Changes to support for Global Mirror with Change Volumes" page for information relating to GMCV performance considerations and best practice.
Remote Copy consistency groups per system | All | 256 |
Total Metro Mirror and Global Mirror volume capacity per I/O group | All | 1024 TB | This limit is the total capacity for all master and auxiliary volumes in the I/O group.
Total number of Global Mirror with Change Volumes relationships per system | All | 256 | 60s cycle time (change volumes used for active-active relationships do not count toward this limit)
Total number of Global Mirror with Change Volumes relationships per system | All | 256 | 300s cycle time (change volumes used for active-active relationships do not count toward this limit)
FlashCopy mappings per system | All | 8192 |
FlashCopy targets per source | All | 256 |
FlashCopy mappings per consistency group | All | 512 |
FlashCopy consistency groups per system | All | 500 |
Total FlashCopy volume capacity per I/O group | All | 4096 TB |
3-site Remote Copy (Metro Mirror) relationships per consistency group | All | 256 |
3-site Remote Copy (Metro Mirror) consistency groups per system | All | 16 |
3-site Remote Copy (Metro Mirror) relationships per system | All | 1024 |
IP Partnership Properties
Inter-cluster IP partnerships per system | All | 1 | A system may be partnered with up to three remote systems. A maximum of one of those can be IP and the other two FC.
I/O groups per system | All | 2 | The nodes from a maximum of two I/O groups per system can be used for IP partnership.
Inter-site links per IP partnership | All | 2 | A maximum of two inter-site links can be used between two IP partnership sites.
Ports per node | All | 1 | A maximum of one port per node can be used for IP partnership.
IP partnership software compression limit | FS5015, FS5030, FS5035 | 140 MB/s |
IP partnership software compression limit | FS5100, FS5200 | N/A |
Internal Storage Properties
SAS chains per control enclosure | FS5010, FS5015 | 1 |
SAS chains per control enclosure | FS5030, FS5035, FS5100, FS5200 | 2 |
Enclosures per SAS chain | All | 10 |
Expansion enclosures per control enclosure | FS5010, FS5015 | 10 |
Expansion enclosures per control enclosure | FS5030, FS5035, FS5100, FS5200 | 20 |
Drives per I/O group | FS5000 | 504 |
Drives per I/O group | FS5100 | 760 |
Drives per I/O group | FS5200 | 748 |
Drives per system | FS5000 | 1008 | Maximum requires a system containing two control enclosures, each with the maximum number of expansion enclosures
Drives per system | FS5100 | 1520 |
Drives per system | FS5200 | 2992 |
SCM drives per I/O group | All | 12 |
Non-Distributed RAID Array Properties
Arrays per system | FS5010, FS5030, FS5100 | 128 |
Drives per array | FS5010, FS5030, FS5100 | 16 |
Minimum-Maximum member drives per RAID-0 array | FS5010, FS5030, FS5100 | 1-8 |
Minimum-Maximum member drives per RAID-1 array | FS5010, FS5030, FS5100 | 2-2 |
Minimum-Maximum member drives per RAID-5 array | FS5010, FS5030, FS5100 | 3-16 |
Minimum-Maximum member drives per RAID-6 array | FS5010, FS5030, FS5100 | 5-16 |
Minimum-Maximum member drives per RAID-10 array | FS5010, FS5030, FS5100 | 2-16 |
Hot spare drives | FS5010, FS5030, FS5100 | - | No limit is imposed
Distributed RAID Array Properties
Arrays per system | FS5100 | 20 | The presence of non-DRAID arrays reduces this limit
Arrays per system | FS5200 | 32 | The presence of non-DRAID arrays reduces this limit
Encrypted arrays per system | FS5100 | 20 | The presence of non-DRAID arrays reduces this limit
Encrypted arrays per system | FS5200 | 32 | The presence of non-DRAID arrays reduces this limit
Arrays per I/O group | All | 10 | The presence of non-DRAID arrays reduces this limit
Drives per array | All | 128 |
Minimum-Maximum member drives per RAID-1 array | FS5015, FS5035, FS5200 | 2-16 |
Minimum-Maximum member drives per RAID-5 array | All | 4-128 |
Minimum-Maximum member drives per RAID-6 array | All | 6-128 |
Rebuild areas per non-FCM array | All | 1-4 |
Rebuild areas per FCM array | FS5100, FS5200 | 1 |
Rebuild areas per non-FCM RAID-1 array (2 drives only) | FS5015, FS5035, FS5200 | 0 |
Rebuild areas per non-FCM RAID-1 array (>2 drives) | FS5015, FS5035, FS5200 | 1 |
Rebuild areas per FCM RAID-1 array (2 drives only) | FS5015, FS5035, FS5200 | 0 |
Rebuild areas per FCM RAID-1 array (>2 drives) | FS5015, FS5035, FS5200 | 1 |
Minimum-Maximum stripe width for RAID-5 array | All | 3-16 |
Minimum-Maximum stripe width for RAID-6 array | All | 5-16 |
Maximum drive capacity for RAID-5 array | All | 8 TB | This limit applies to HDDs
Maximum drive capacity for RAID-1 array | All | 8 TB | This limit applies to HDDs
Drives added to an array in a single DRAID expansion | All | 12 |
Concurrent DRAID expansions per system | All | 4 |
Concurrent DRAID expansions per parent storage pool | All | 1 |
External Storage System Properties
Storage system WWNNs per system (cluster) | All | 1024 |
Storage system WWPNs per system (cluster) | All | 1024 |
WWNNs per storage system | All | 16 |
WWPNs per WWNN | All | 16 |
LUNs (managed disks) per storage system | All | - | No limit is imposed beyond the managed disks per system limit
System and User Management Properties
User accounts per system | FS5100, FS5200 | 400 | Includes the default user accounts
User accounts per system | FS5010, FS5015, FS5030, FS5035 | 200 |
User groups per system | All | 256 | Includes the default user groups
Authentication servers per system | All | 1 |
NTP servers per system | All | 1 |
iSNS servers per system | All | 1 |
Concurrent OpenSSH sessions per system | All | 32 |
Event Notification Properties
SNMP servers per system | All | 6 |
Syslog servers per system | All | 6 |
Email (SMTP) servers per system | All | 6 | Email servers are used in turn until the email is successfully sent
Email users (recipients) per system | All | 12 |
LDAP servers per system | All | 6 |
REST API Properties
Maximum active connections per cluster | All | 4 | RESTful API
Maximum requests/sec to the auth endpoint | All | 3 | RESTful API
Maximum requests/sec to command endpoints | All | 10 | RESTful API
Number of simultaneous CLIs in progress | All | 1 | System

Extents 

The following table compares the maximum volume, MDisk, and system capacity for each extent size.

Extent size (MB) | Maximum non thin-provisioned volume capacity (GB) | Maximum thin-provisioned volume capacity for regular pools (GB) | Maximum compressed volume size for regular pools ** | Maximum thin-provisioned and compressed volume size in data reduction pools (GB) | Maximum total thin-provisioned and compressed capacity for all volumes in a single data reduction pool per I/O group (GB) | Maximum MDisk capacity (GB) | Maximum DRAID MDisk capacity (TB) | Total storage capacity manageable per system *
16 | 2048 (2 TB) | 2000 | 2 TB | 2048 (2 TB) | 2048 (2 TB) | 2048 (2 TB) | 32 | 64 TB
32 | 4096 (4 TB) | 4000 | 4 TB | 4096 (4 TB) | 4096 (4 TB) | 4096 (4 TB) | 64 | 128 TB
64 | 8192 (8 TB) | 8000 | 8 TB | 8192 (8 TB) | 8192 (8 TB) | 8192 (8 TB) | 128 | 256 TB
128 | 16,384 (16 TB) | 16,000 | 16 TB | 16,384 (16 TB) | 16,384 (16 TB) | 16,384 (16 TB) | 256 | 512 TB
256 | 32,768 (32 TB) | 32,000 | 32 TB | 32,768 (32 TB) | 32,768 (32 TB) | 32,768 (32 TB) | 512 | 1 PB
512 | 65,536 (64 TB) | 65,000 | 64 TB | 65,536 (64 TB) | 65,536 (64 TB) | 65,536 (64 TB) | 1024 (1 PB) | 2 PB
1024 | 131,072 (128 TB) | 130,000 | 96 TB ** | 131,072 (128 TB) | 131,072 (128 TB) | 131,072 (128 TB) | 2048 (2 PB) | 4 PB
2048 | 262,144 (256 TB) | 260,000 | 96 TB ** | 262,144 (256 TB) | 262,144 (256 TB) | 262,144 (256 TB) | 4096 (4 PB) | 8 PB
4096 | 262,144 (256 TB) | 262,144 | 96 TB ** | 262,144 (256 TB) | 524,288 (512 TB) | 524,288 (512 TB) | 8192 (8 PB) | 16 PB
8192 | 262,144 (256 TB) | 262,144 | 96 TB ** | 262,144 (256 TB) | 1,048,576 (1024 TB) | 1,048,576 (1024 TB) | 16384 (16 PB) | 32 PB

* The total capacity values assume that all of the storage pools in the system use the same extent size.
** See the following Flash
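
The "Total storage capacity manageable per system" column follows directly from the per-system maximum of 2^22 extents noted in the Maximum Configurations table; a quick Python check (illustrative only) reproduces that column:

    # Total manageable capacity = 2^22 extents x extent size.
    MAX_EXTENTS_PER_SYSTEM = 2 ** 22

    def max_system_capacity_tib(extent_size_mb: int) -> float:
        """Manageable capacity in TiB for a given extent size in MB."""
        return MAX_EXTENTS_PER_SYSTEM * extent_size_mb / (1024 * 1024)

    for extent_mb in (16, 128, 1024, 8192):
        print(f"{extent_mb:>5} MB extents -> {max_system_capacity_tib(extent_mb):,.0f} TiB")
    # 16 MB -> 64 TiB, 128 MB -> 512 TiB, 1024 MB -> 4,096 TiB (4 PB), 8192 MB -> 32,768 TiB (32 PB)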

[{"Line of Business":{"code":"LOB26","label":"Storage"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"ST3FR9","label":"IBM FlashSystem 5000"},"ARM Category":[{"code":"a8m0z000000bqPqAAI","label":"Documentation"}],"ARM Case Number":"","Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"8.4.0"},{"Line of Business":{"code":"LOB26","label":"Storage"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"ST2HTZ","label":"IBM FlashSystem Software"},"ARM Category":[],"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"8.4.0"},{"Line of Business":{"code":"LOB26","label":"Storage"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"ST3FR9","label":"IBM FlashSystem 5x00"},"ARM Category":[{"code":"a8m0z000000bqPqAAI","label":"Documentation"}],"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"All Version(s)"}]

Document Information

Modified date:
08 May 2024

UID

ibm16362339