Question & Answer
Question
Which APARs raised against IBM Storage Virtualize have been fixed?
In which PTFs were they made available?
Note that this document was formerly known as IBM Spectrum Virtualize APARs.
Answer
The following table lists all APARs fixed in v7.3.0.1 or later. Where an APAR was fixed in multiple releases, there are multiple rows in the table, one for each release in which the fix was made available.
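As a minimal illustrative sketch of how the table can be read programmatically, the example below checks whether a given installed code level contains an APAR fix. The data structures, function names, and the assumption that later fix packs within the same V.R.M release stream also contain a fix listed for an earlier fix pack are ours, not part of any IBM tool.

```python
# Illustrative sketch only: decide whether an APAR fix is included in an
# installed code level, based on the VRMF rows in the table below.
# Assumption: a listed VRMF is the first PTF in its V.R.M release stream to
# contain the fix, so later fix packs within the same stream also contain it.

def parse_vrmf(vrmf: str) -> tuple[int, ...]:
    """Convert a VRMF string such as '8.4.0.4' into a tuple of integers."""
    return tuple(int(part) for part in vrmf.split("."))

def fix_included(installed: str, fixed_in: list[str]) -> bool:
    """Return True if the installed level is at or above a listed fix level
    within the same V.R.M release stream."""
    inst = parse_vrmf(installed)
    return any(
        inst[:3] == parse_vrmf(v)[:3] and inst >= parse_vrmf(v)
        for v in fixed_in
    )

# Example using APAR DT112601 from the rows below:
dt112601 = ["8.3.1.6", "8.4.0.4", "8.4.2.0", "8.5.0.0"]
print(fix_included("8.4.0.5", dt112601))  # True  - same stream, later fix pack
print(fix_included("8.4.1.0", dt112601))  # False - stream not listed in the table
```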
APAR | VRMF | Description |
---|---|---|
DT112601 | 8.3.1.6 | Deleting image mode mounted source volume while migration is ongoing could trigger Tier 2 recovery |
DT112601 | 8.5.0.0 | Deleting image mode mounted source volume while migration is ongoing could trigger Tier 2 recovery |
DT112601 | 8.4.2.0 | Deleting image mode mounted source volume while migration is ongoing could trigger Tier 2 recovery |
DT112601 | 8.4.0.4 | Deleting image mode mounted source volume while migration is ongoing could trigger Tier 2 recovery |
HU00014 | 7.3.0.1 | Multiple node warmstarts if many volume-host mappings exist to a single host |
HU00017 | 7.3.0.1 | Node warmstart after failed mkrcpartnership command |
HU00026 | 7.3.0.1 | Node warmstart after all compressed volumes in an I/O group are deleted |
HU00130 | 7.4.0.0 | Node warmstart due to IPC queue state |
HU00133 | 7.3.0.1 | Loss of access to data when an enclosure goes offline during software upgrade |
HU00176 | 7.3.0.5 | Node warmstart due to an I/O deadlock when using FlashCopy |
HU00183 | 7.3.0.1 | GUI becomes non-responsive on larger configurations |
HU00195 | 7.3.0.1 | Multiple node warmstarts when creating the first compressed volume in an I/O group |
HU00219 | 7.3.0.1 | Node warmstart when stopping a FlashCopy map in a chain of FlashCopy mappings |
HU00236 | 7.3.0.1 | Performance degradation when changing the state of certain LEDs |
HU00241 | 7.3.0.1 | Unresponsive GUI caused by locked IPC sockets |
HU00247 | 7.8.1.5 | A rare deadlock condition can lead to a RAID5 or RAID6 array rebuild stalling at 99% |
HU00247 | 8.1.1.1 | A rare deadlock condition can lead to a RAID5 or RAID6 array rebuild stalling at 99% |
HU00251 | 7.4.0.0 | Unable to migrate volume mirror copies to alternate storage pool using GUI |
HU00253 | 7.3.0.1 | Global Mirror with Change Volumes does not resume copying after an I/O group goes offline at secondary cluster |
HU00257 | 7.3.0.1 | Multiple node warmstarts when EMC RecoverPoint appliance restarted |
HU00271 | 7.7.1.1 | An extremely rare timing window condition in the way GM handles write sequencing may cause multiple node warmstarts |
HU00271 | 7.5.0.9 | An extremely rare timing window condition in the way GM handles write sequencing may cause multiple node warmstarts |
HU00271 | 7.7.0.3 | An extremely rare timing window condition in the way GM handles write sequencing may cause multiple node warmstarts |
HU00271 | 7.6.1.5 | An extremely rare timing window condition in the way GM handles write sequencing may cause multiple node warmstarts |
HU00272 | 7.3.0.1 | Arrays incorrectly reporting resync progress as 0% or 255% |
HU00274 | 7.3.0.5 | Quiesce and resume of host I/O when Global Mirror consistency group reaches consistent_synchronized state |
HU00277 | 7.3.0.5 | Loss of access to data when adding a Global Mirror Change volume if the system is almost out of FlashCopy bitmap space |
HU00280 | 7.3.0.5 | Multiple node warmstarts triggered by a Global Mirror disconnection |
HU00281 | 7.5.0.0 | Single node warmstart due to internal code exception |
HU00283 | 7.3.0.1 | Multiple node warmstarts caused by invalid compressed volume metadata |
HU00287 | 7.3.0.1 | Multiple node warmstarts when using hosts whose IQNs are identical but differ in capitalisation |
HU00288 | 7.3.0.1 | GUI does not remember the most recently visited page |
HU00290 | 7.3.0.1 | System incorrectly attempts to upgrade firmware during maintenance discharge |
HU00291 | 7.3.0.1 | Node warmstart caused by instability in IP replication connection |
HU00293 | 7.3.0.1 | Node canister fails to boot after hard shutdown |
HU00294 | 7.3.0.1 | Event ID 981007 not always logged correctly |
HU00296 | 7.3.0.3 | Node warmstart when handling specific compression workloads |
HU00298 | 7.3.0.1 | Multiple node warmstarts when using IBM DS4000 with an incorrect host type |
HU00300 | 7.4.0.0 | Volume mirroring synchronisation exceeds the maximum copy rate |
HU00301 | 7.7.0.0 | A 4-node enhanced stretched cluster with non-mirrored volumes may get stuck in stalled_non_redundant during an upgrade |
HU00302 | 7.3.0.2 | Multiple repeating node warmstarts if system has previously run a code release earlier than 6.4.0 and is upgraded to v7.3.0.1 without stepping through a 6.4.x release |
HU00304 | 7.3.0.3 | Both node canister fault LEDs are set to ON following upgrade to v7.3 release |
HU00305 | 7.3.0.9 | System unable to detect and use newly added ports on EMC VMAX |
HU00305 | 7.4.0.0 | System unable to detect and use newly added ports on EMC VMAX |
HU00324 | 7.3.0.3 | Compressed volumes offline after upgrading to v7.3.0.1 or v7.3.0.2 |
HU00336 | 7.3.0.3 | Single node warmstart when volumes go offline on systems running v7.3.0.0, v7.3.0.1 or v7.3.0.2 |
HU00346 | 7.4.0.8 | Running GMCV relationships are not consistently displayed in GUI |
HU00354 | 7.3.0.4 | Loss of access to data if upgrading directly from 6.4.x to v7.3.0.1, v7.3.0.2 or v7.3.0.3 with multiple access I/O groups configured on any volume |
HU00389 | 7.4.0.3 | When a Storwize system is configured as a backend storage subsystem for an SVC or another Storwize system, the port statistics count the traffic between these two systems as remote cluster traffic instead of host to storage traffic (e.g. in TPC Port to Remote Node Send Data Rate instead of Port to Controller Send Data Rate) |
HU00389 | 7.3.0.9 | When a Storwize system is configured as a backend storage subsystem for an SVC or another Storwize system, the port statistics count the traffic between these two systems as remote cluster traffic instead of host to storage traffic (e.g. in TPC Port to Remote Node Send Data Rate instead of Port to Controller Send Data Rate) |
HU00422 | 7.3.0.5 | Node warmstart when using Global Mirror Change Volumes |
HU00432 | 7.3.0.4 | Performance reduction and node warmstarts when running out of cache resources in v7.3.0.1, v7.3.0.2, or v7.3.0.3 |
HU00443 | 7.3.0.5 | Global Mirror Change Volumes stops replicating after an upgrade from v6.4.1 to v7.2.0 or later |
HU00444 | 7.3.0.5 | Node warmstarts due to overloaded compression engine |
HU00446 | 7.3.0.5 | v7.3.0 cache does not make effective use of CPU resources. Note: a restart is required to activate this fix if you upgrade from an earlier version of v7.3.0 |
HU00447 | 7.7.0.0 | A Link Reset on an 8Gbps Fibre Channel port causes fabric logout/login |
HU00448 | 7.3.0.8 | Increased latency on SVC and V7000 systems running v7.3 when using compressed volumes due to compression engine memory management |
HU00450 | 7.3.0.5 | Manual upgrade with stopped Global Mirror relationships cannot complete non-disruptively due to dependent volumes |
HU00462 | 7.3.0.5 | Node warmstart when using EasyTier with a single tier in a pool on v7.3 |
HU00463 | 7.3.0.8 | Increased host I/O response time to compressed volumes on v7.3.0, for specific I/O workloads |
HU00464 | 7.3.0.5 | Loss of access to data due to resource leak in v7.3.0 |
HU00465 | 7.3.0.5 | Node error 581 on 2145-CF8 nodes due to problem communicating with the IMM |
HU00467 | 7.4.0.0 | Global Mirror Change Volumes Freeze time not reported correctly if cycle takes longer than the cycle period to complete |
HU00468 | 7.3.0.8 | Drive firmware task not removed from GUI running tasks display following completion of drive firmware update action |
HU00468 | 7.5.0.0 | Drive firmware task not removed from GUI running tasks display following completion of drive firmware update action |
HU00470 | 7.6.0.0 | Single node warmstart on login attempt with incorrect password issued |
HU00472 | 7.3.0.8 | Node restarts leading to offline volumes when using FlashCopy or Remote Copy |
HU00473 | 7.3.0.8 | SVC DH8 node reports node error 522 following system board replacement |
HU00481 | 7.3.0.8 | Node warmstart when new multi-tier storage pool is added and an MDisk overload condition is detected within the first day |
HU00484 | 7.4.0.0 | Loss of access to data when the lsdependentvdisks command is run with no parameters |
HU00485 | 7.3.0.8 | TPC cache statistics inaccurate or unavailable in v7.3.0 and later releases |
HU00486 | 7.3.0.5 | Systems upgraded to 2145-DH8 nodes do not make use of the compression acceleration cards for compressed volumes |
HU00487 | 7.4.0.0 | Rebuild process stalls or unable to create MDisk due to unexpected RAID scrub state |
HU00490 | 7.4.0.0 | Node warmstart when using Metro Mirror or Global Mirror |
HU00493 | 7.3.0.8 | SVC DH8 node offline due to battery backplane problem |
HU00494 | 7.3.0.11 | Node warmstart caused by timing window when handling XCOPY commands |
HU00494 | 7.4.0.0 | Node warmstart caused by timing window when handling XCOPY commands |
HU00495 | 7.4.0.0 | Node warmstart caused by a single active write holding up the GM disconnect |
HU00496 | 7.3.0.5 | SVC volumes offline and data unrecoverable. For more details refer to this Flash |
HU00497 | 7.3.0.6 | Volumes offline due to incorrectly compressed data. Second fix for issue. For more details refer to this Flash |
HU00499 | 7.5.0.3 | Loss of access to data when a volume that is part of a Global Mirror Change Volumes relationship is removed with the force flag. |
HU00502 | 7.3.0.8 | EasyTier migration running at a reduced rate |
HU00505 | 7.4.0.0 | Multiple node warmstarts caused by timing window when using inter-system replication |
HU00506 | 7.3.0.8 | Increased destaging latency in upper cache when using v7.3.0 release |
HU00516 | 7.3.0.11 | Node warmstart due to software thread deadlock |
HU00516 | 7.4.0.3 | Node warmstart due to software thread deadlock |
HU00518 | 7.4.0.0 | Multiple Node warmstarts due to invalid SCSI commands generated by network probes |
HU00519 | 7.3.0.8 | Node warmstart due to FlashCopy deadlock condition |
HU00519 | 7.5.0.0 | Node warmstart due to FlashCopy deadlock condition |
HU00520 | 7.4.0.0 | Node warmstart caused by iSCSI command being aborted immediately after the command is issued |
HU00521 | 7.7.0.0 | Remote Copy relationships may be stopped and lose synch when a single node warmstart occurs at the secondary site |
HU00525 | 7.5.0.0 | Unable to manually mark monitoring events in the event log as fixed |
HU00526 | 7.3.0.8 | Node warmstarts caused by very large number of 512 byte write operations to compressed volumes |
HU00528 | 7.3.0.9 | Single PSU DC output turned off when there is no PSU hardware fault present |
HU00528 | 7.4.0.0 | Single PSU DC output turned off when there is no PSU hardware fault present |
HU00529 | 7.3.0.8 | Increased latency on SVC and V7000 systems running v7.3 (excluding DH8 & V7000 Gen2 models) when using compressed volumes due to defragmentation issue |
HU00536 | 7.6.0.0 | When stopping a GMCV relationship, the clean-up process at the secondary site hangs to the point of a primary node warmstart |
HU00538 | 7.4.0.0 | Node warmstart when removing host port (via GUI or CLI) when there is outstanding I/O to host |
HU00539 | 7.3.0.8 | Node warmstarts after stopping and restarting FlashCopy maps with compressed volumes as target of the map |
HU00540 | 7.4.0.0 | Configuration Backup fails due to invalid volume names |
HU00541 | 7.4.0.0 | Fix Procedure fails to complete successfully when servicing PSU |
HU00543 | 7.4.0.0 | Elongated I/O pause when starting or stopping remote copy relationships when there are a large number of remote copy relationships |
HU00544 | 7.4.0.0 | Storage pool offline when upgrading firmware on storage subsystems listed in APAR Environment |
HU00545 | 7.4.0.0 | Loss of Access to data after control chassis or midplane enclosure replacement |
HU00546 | 7.4.0.0 | Multiple Node warmstarts when attempting to access data beyond the end of the volume |
HU00547 | 7.4.0.0 | I/O delay during site failure using enhanced stretched cluster |
HU00548 | 7.4.0.0 | Unable to create IP partnership that had previously been deleted |
HU00629 | 7.4.0.3 | Performance degradation triggered by specific I/O pattern when using compressed volumes due to optimisation issue |
HU00629 | 7.3.0.9 | Performance degradation triggered by specific I/O pattern when using compressed volumes due to optimisation issue |
HU00630 | 7.4.0.3 | Temporary loss of paths for FCoE hosts after 497 days of uptime due to FCoE driver timer problem |
HU00636 | 7.3.0.9 | Livedump prepare fails on V3500 & V3700 systems with 4GB memory when cache partition fullness is less than 35% |
HU00636 | 7.4.0.3 | Livedump prepare fails on V3500 & V3700 systems with 4GB memory when cache partition fullness is less than 35% |
HU00637 | 7.4.0.3 | HP MSA P2000 G3 controllers running a firmware version later than TS240P003 may not be recognised by SVC/Storwize |
HU00637 | 7.3.0.9 | HP MSA P2000 G3 controllers running a firmware version later than TS240P003 may not be recognised by SVC/Storwize |
HU00638 | 7.5.0.0 | Multiple node warmstarts when there is high backend latency |
HU00644 | 7.5.0.0 | Multiple node warmstarts when node port receives duplicate frames during a specific I/O timing window |
HU00645 | 7.4.0.2 | Loss of access to data when using compressed volumes on 7.4.0.1 can occur when there is a large number of consecutive and highly compressible writes to a compressed volume |
HU00646 | 7.3.0.9 | EasyTier throughput reduced due to EasyTier only moving 6 extents per 5 minutes regardless of extent size |
HU00646 | 7.4.0.3 | EasyTier throughput reduced due to EasyTier only moving 6 extents per 5 minutes regardless of extent size |
HU00648 | 7.3.0.9 | Node warmstart due to handling of parallel reads on compressed volumes |
HU00649 | 7.6.0.0 | In rare cases an unexpected IP address may be configured on management port eth0. This IP address is neither the service IP nor the cluster IP, but is most likely set by DHCP during boot |
HU00649 | 7.5.0.9 | In rare cases an unexpected IP address may be configured on management port eth0. This IP address is neither the service IP nor the cluster IP, but is most likely set by DHCP during boot |
HU00653 | 7.4.0.3 | 1691 RAID inconsistencies falsely reported due to RAID incomplete locking issue |
HU00653 | 7.3.0.9 | 1691 RAID inconsistencies falsely reported due to RAID incomplete locking issue |
HU00654 | 7.4.0.3 | Loss of access to data when FlashCopy stuck during a Global Mirror Change Volumes cycle |
HU00654 | 7.3.0.9 | Loss of access to data when FlashCopy stuck during a Global Mirror Change Volumes cycle |
HU00655 | 7.3.0.9 | Loss of access to data if PSUs in two different enclosures suffer an output failure simultaneously (whilst AC input is good) |
HU00655 | 7.4.0.3 | Loss of access to data if PSUs in two different enclosures suffer an output failure simultaneously (whilst AC input is good) |
HU00656 | 7.3.0.9 | Increase in reported CPU utilisation following upgrade to v7.2.0 or higher. For more details refer to this Flash |
HU00658 | 7.4.0.0 | Global Mirror source data may be incompletely replicated to target volumes. For more details refer to this Flash |
HU00658 | 7.3.0.9 | Global Mirror source data may be incompletely replicated to target volumes. For more details refer to this Flash |
HU00659 | 7.4.0.5 | Global Mirror with Change Volumes freeze time reported incorrectly |
HU00659 | 7.5.0.0 | Global Mirror with Change Volumes freeze time reported incorrectly |
HU00660 | 7.4.0.1 | Reduced performance on compressed volumes when running parallel workloads with large block size |
HU00660 | 7.3.0.9 | Reduced performance on compressed volumes when running parallel workloads with large block size |
HU00665 | 7.4.0.3 | Node warmstart due to software thread deadlock condition during execution of internal MDisk/discovery process |
HU00666 | 7.3.0.11 | Upgrade from v7.2.0 stalls due to dependent volumes |
HU00666 | 7.4.0.3 | Upgrade from v7.2.0 stalls due to dependent volumes |
HU00669 | 7.4.0.3 | Node warmstart and VMware host I/O timeouts if a node is removed from the cluster during upgrade from a pre-v7.4.0 version to v7.4.0 whilst there are active VAAI CAW commands |
HU00671 | 7.4.0.5 | 1691 error on arrays when using multiple FlashCopies of the same source. For more details refer to this Flash |
HU00671 | 7.5.0.0 | 1691 error on arrays when using multiple FlashCopies of the same source. For more details refer to this Flash |
HU00671 | 7.3.0.12 | 1691 error on arrays when using multiple FlashCopies of the same source. For more details refer to this Flash |
HU00672 | 7.3.0.11 | Node warmstart due to compression stream condition |
HU00672 | 7.4.0.3 | Node warmstart due to compression stream condition |
HU00673 | 7.4.0.5 | Drive slot is not recognised following drive auto manage procedure |
HU00673 | 7.5.0.0 | Drive slot is not recognised following drive auto manage procedure |
HU00675 | 7.5.0.0 | Node warmstart following node start up/restart due to invalid CAW domain state |
HU00676 | 7.4.0.4 | Node warmstart due to compression engine restart |
HU00676 | 7.3.0.11 | Node warmstart due to compression engine restart |
HU00677 | 7.4.0.3 | Node warmstart or loss of access to GUI/CLI due to defunct SSH processes |
HU00678 | 7.4.0.3 | iSCSI hosts incorrectly show an offline status following update to v6.4 from a pre-v6.4 release |
HU00680 | 7.3.0.10 | Compressed volumes go offline due to false detection event. Applies to V7000 Generation 2 systems only running v7.3.0.9 or v7.4.0.3 |
HU00680 | 7.4.0.4 | Compressed volumes go offline due to false detection event. Applies to V7000 Generation 2 systems only running v7.3.0.9 or v7.4.0.3 |
HU00711 | 7.3.0.11 | GUI response slow when filtering a large number of volumes |
HU00711 | 7.4.0.5 | GUI response slow when filtering a large number of volumes |
HU00719 | 7.6.1.6 | After a power failure both nodes may repeatedly warmstart and then attempt an auto-node rescue. This will remove hardened data and require a T3 recovery |
HU00719 | 7.5.0.10 | After a power failure both nodes may repeatedly warmstart and then attempt an auto-node rescue. This will remove hardened data and require a T3 recovery |
HU00719 | 7.7.0.0 | After a power failure both nodes may repeatedly warmstart and then attempt an auto-node rescue. This will remove hardened data and require a T3 recovery |
HU00725 | 7.4.0.5 | Loss of access to data when adding a Global Mirror Change Volume relationship to a consistency group on the primary site, when the secondary site does not have a secondary volume defined |
HU00725 | 7.5.0.2 | Loss of access to data when adding a Global Mirror Change Volume relationship to a consistency group on the primary site, when the secondary site does not have a secondary volume defined |
HU00726 | 7.5.0.0 | Single node warmstart due to stuck I/O following offline MDisk group condition |
HU00726 | 7.4.0.10 | Single node warmstart due to stuck I/O following offline MDisk group condition |
HU00731 | 7.4.0.5 | Single node warmstart due to invalid volume memory allocation pointer |
HU00732 | 7.6.0.0 | Single node warmstart due to stalled Remote Copy recovery as a result of pinned write IOs on incorrect queue |
HU00733 | 7.6.0.0 | Stop with access results in node warmstarts after a recovervdiskbysystem command |
HU00733 | 7.5.0.11 | Stop with access results in node warmstarts after a recovervdiskbysystem command |
HU00734 | 7.7.1.1 | Multiple node warmstarts due to deadlock condition during RAID group rebuild |
HU00735 | 7.5.0.0 | Host I/O statistics incorrectly including logically failed writes |
HU00737 | 7.5.0.0 | GUI does not warn of lack of space condition when collecting a Snap, this results in some files missing from the Snap |
HU00740 | 7.5.0.5 | Read/write performance latencies due to high CPU utilisation from EasyTier 3 processes on the configuration node |
HU00740 | 7.6.0.0 | Read/write performance latencies due to high CPU utilisation from EasyTier 3 processes on the configuration node |
HU00740 | 7.4.0.7 | Read/write performance latencies due to high CPU utilisation from EasyTier 3 processes on the configuration node |
HU00744 | 8.2.1.4 | Single node warmstart due to an accounting issue within the cache component |
HU00744 | 7.8.1.10 | Single node warmstart due to an accounting issue within the cache component |
HU00744 | 8.1.3.6 | Single node warmstart due to an accounting issue within the cache component |
HU00745 | 7.5.0.2 | IP Replication does not return to using full throughput following packet loss on IP link used for replication |
HU00745 | 7.4.0.5 | IP Replication does not return to using full throughput following packet loss on IP link used for replication |
HU00746 | 7.6.0.0 | Single node warmstart during a synchronisation process of the RAID array |
HU00747 | 7.8.1.0 | Node warmstarts can occur when drives become degraded |
HU00749 | 7.6.0.0 | Multiple node warmstarts in I/O group after starting Remote Copy |
HU00752 | 7.3.0.11 | Email notifications and call home stops working after updating to v7.3.0.10 |
HU00756 | 7.5.0.7 | Performance statistics BBCZ counter values reported incorrectly |
HU00756 | 7.6.0.0 | Performance statistics BBCZ counter values reported incorrectly |
HU00756 | 7.4.0.6 | Performance statistics BBCZ counter values reported incorrectly |
HU00757 | 7.6.0.0 | Multiple node warmstarts when removing a Global Mirror relationship with secondary volume that has been offline |
HU00759 | 7.4.0.5 | catxmlspec CLI command (used by external monitoring applications such as Spectrum Control Base) not working |
HU00761 | 7.3.0.11 | Array rebuild fails to start after a drive is manually taken offline |
HU00762 | 7.5.0.13 | Due to an issue in the cache component, nodes within an I/O group are not able to form a caching-pair and are serving I/O through a single node |
HU00762 | 7.8.0.2 | Due to an issue in the cache component, nodes within an I/O group are not able to form a caching-pair and are serving I/O through a single node |
HU00762 | 7.7.1.7 | Due to an issue in the cache component, nodes within an I/O group are not able to form a caching-pair and are serving I/O through a single node |
HU00762 | 7.6.1.7 | Due to an issue in the cache component, nodes within an I/O group are not able to form a caching-pair and are serving I/O through a single node |
HU00763 | 7.7.1.7 | A node warmstart may occur when a quorum disk is accessed at the same time as the login to that disk is closed |
HU00763 | 7.8.1.1 | A node warmstart may occur when a quorum disk is accessed at the same time as the login to that disk is closed |
HU01237 | 7.7.1.7 | A node warmstart may occur when a quorum disk is accessed at the same time as the login to that disk is closed |
HU01237 | 7.8.1.1 | A node warmstart may occur when a quorum disk is accessed at the same time as the login to that disk is closed |
HU00764 | 7.5.0.0 | Loss of access to data due to persistent reserve host registration keys exceeding the current supported value of 256 |
HU00794 | 7.6.0.0 | Hang of a GM I/O stream can affect MM I/O in another Remote Copy stream |
HU00804 | 7.5.0.0 | Loss of access to data due to SAS recovery mechanism operating on both nodes in I/O group simultaneously |
HU00805 | 7.5.0.0 | Some SAS ports are displayed in hexadecimal values instead of decimal values in the performance statistics xml files |
HU00806 | 7.5.0.0 | mkarray command fails when creating an encrypted array due to pending bitmap state |
HU00807 | 7.5.0.0 | Increase in node cpu usage due to FlashCopy mappings with high cleaning rate |
HU00808 | 7.5.0.0 | NTP trace logs not collected on configuration node |
HU00809 | 7.5.0.3 | Both nodes shutdown when power is lost to one node for more than 15 seconds |
HU00811 | 7.4.0.5 | Loss of access to data when SAN connectivity problems leads to backend controller being detected as incorrect type |
HU00811 | 7.5.0.0 | Loss of access to data when SAN connectivity problems leads to backend controller being detected as incorrect type |
HU00815 | 7.5.0.1 | FlashCopy source and target volumes offline when FlashCopy maps are started. |
HU00816 | 7.5.0.2 | Loss of access to data following upgrade to v7.5.0.0 or v7.5.0.1 when i) the cluster has previously run release 6.1.0 or earlier at some point in its lifespan, or ii) the cluster has 2,600 or more MDisks |
HU00819 | 7.6.0.0 | Large increase in response time of Global Mirror primary volumes due to intermittent connectivity issues |
HU00819 | 7.4.0.8 | Large increase in response time of Global Mirror primary volumes due to intermittent connectivity issues |
HU00819 | 7.5.0.7 | Large increase in response time of Global Mirror primary volumes due to intermittent connectivity issues |
HU00820 | 7.4.0.5 | Data integrity issue when using encrypted arrays. For more details refer to this Flash |
HU00820 | 7.5.0.2 | Data integrity issue when using encrypted arrays. For more details refer to this Flash |
HU00821 | 7.5.0.3 | Single node warmstart due to HBA firmware behaviour |
HU00823 | 7.6.0.0 | Node warmstart due to inconsistent EasyTier status when EasyTier is disabled on all managed disk groups |
HU00825 | 7.5.0.2 | Java exception error when using the Service Assistant GUI to complete an enclosure replacement procedure |
HU00825 | 7.4.0.5 | Java exception error when using the Service Assistant GUI to complete an enclosure replacement procedure |
HU00827 | 7.6.0.0 | Both nodes in a single I/O group of a multi I/O group system can warmstart due to misallocation of volume stats entries |
HU00828 | 7.5.0.3 | FlashCopies take a long time or do not complete when the background copy rate set is non-zero |
HU00829 | 7.4.0.5 | 1125 (or 1066 on V7000 Generation 2) events incorrectly logged for all PSUs/Fan trays when there is a single PSU/fan tray fault |
HU00830 | 7.4.0.8 | When a node running iSCSI encounters a PDU with AHS it will warmstart |
HU00831 | 7.8.0.0 | Single node warmstart due to hung I/O caused by cache deadlock |
HU00831 | 7.7.1.5 | Single node warmstart due to hung I/O caused by cache deadlock |
HU00831 | 7.6.1.7 | Single node warmstart due to hung I/O caused by cache deadlock |
HU00832 | 7.4.0.6 | Automatic licensed feature activation fails for 6099 machine type |
HU00832 | 7.5.0.3 | Automatic licensed feature activation fails for 6099 machine type |
HU00833 | 7.5.0.3 | Single node warmstart when the mkhost cli command is run without the -iogrp flag |
HU00836 | 7.4.0.7 | The wrong volume copy may be taken offline in a timing window when metadata corruption is detected on a Thin Provisioned volume and a node warmstart happens at the same time |
HU00838 | 7.6.0.0 | FlashCopy volume offline due to a cache flush issue |
HU00840 | 7.5.0.4 | Node warmstarts when Spectrum Virtualize iSCSI target receives garbled packets |
HU00840 | 7.6.0.0 | Node warmstarts when Spectrum Virtualize iSCSI target receives garbled packets |
HU00841 | 7.5.0.3 | Multiple node warmstarts leading to loss of access to data when changing a volume throttle rate to a value of more than 10,000 IOPS or 40 MBps |
HU00842 | 7.6.0.0 | Unable to clear bad blocks during an array resync process |
HU00843 | 7.5.0.3 | Single node warmstart when there is a high volume of ethernet traffic on link used for IP replication/iSCSI |
HU00844 | 7.5.0.3 | Multiple node warmstarts following installation of an additional SAS HIC |
HU00845 | 7.5.0.3 | Trial licenses for licensed feature activation are not available |
HU00845 | 7.4.0.6 | Trial licenses for licensed feature activation are not available |
HU00886 | 7.7.0.0 | Single node warmstart due to CLI startfcconsistgrp command timeout |
HU00890 | 7.6.0.0 | Technician port inittool redirects to SAT GUI |
HU00890 | 7.5.0.5 | Technician port inittool redirects to SAT GUI |
HU00890 | 7.4.0.8 | Technician port inittool redirects to SAT GUI |
HU00891 | 7.4.0.8 | The extent database defragmentation process can create duplicates whilst copying extent allocations resulting in a node warmstart to recover the database |
HU00891 | 7.3.0.13 | The extent database defragmentation process can create duplicates whilst copying extent allocations resulting in a node warmstart to recover the database |
HU00891 | 7.5.0.7 | The extent database defragmentation process can create duplicates whilst copying extent allocations resulting in a node warmstart to recover the database |
HU00897 | 7.7.0.0 | Spectrum Virtualize iSCSI target ignores maxrecvdatasegmentlength leading to host I/O error |
HU00898 | 7.5.0.3 | Potential data loss scenario when using compressed volumes on SVC and Storwize V7000 running software versions v7.3, v7.4 or v7.5. For more details refer to this Flash |
HU00898 | 7.4.0.6 | Potential data loss scenario when using compressed volumes on SVC and Storwize V7000 running software versions v7.3, v7.4 or v7.5. For more details refer to this Flash |
HU00898 | 7.3.0.12 | Potential data loss scenario when using compressed volumes on SVC and Storwize V7000 running software versions v7.3, v7.4 or v7.5. For more details refer to this Flash |
HU00899 | 7.6.0.2 | Node warmstart observed when 16G FC or 10G FCoE adapter detects heavy network congestion |
HU00900 | 7.6.0.0 | SVC FC driver warmstarts when it receives an unsupported but valid FC command |
HU00901 | 7.3.0.12 | Incorrect read cache hit percentage values reported in TPC |
HU00902 | 7.5.0.3 | Starting a Global Mirror Relationship or Consistency Group fails after changing a relationship to not use Change Volumes |
HU00903 | 7.6.0.0 | Emulex firmware paused causes single node warmstart |
HU00904 | 7.5.0.3 | Multiple node warmstarts leading to loss of access to data when the link used for IP Replication experiences packet loss and the data transfer rate occasionally drops to zero |
HU00904 | 7.4.0.6 | Multiple node warmstarts leading to loss of access to data when the link used for IP Replication experiences packet loss and the data transfer rate occasionally drops to zero |
HU00905 | 7.5.0.3 | The serial number value displayed in the GUI node properties dialog is incorrect |
HU00905 | 7.4.0.6 | The serial number value displayed in the GUI node properties dialog is incorrect |
HU00906 | 7.8.0.0 | When a compressed volume mirror copy is taken offline, write response times to the primary copy may reach prohibitively high levels leading to a loss of access to that volume |
HU00908 | 7.6.0.0 | Battery can charge too quickly on reconditioning and take node offline |
HU00909 | 7.6.0.0 | Single node warmstart may occur when removing an MDisk group that was using EasyTier |
HU00909 | 7.5.0.9 | Single node warmstart may occur when removing an MDisk group that was using EasyTier |
HU00910 | 7.7.0.0 | Handling of I/O to compressed volumes can result in a timeout condition that is resolved by a node warmstart |
HU00913 | 7.5.0.5 | Multiple node warmstarts when using a Metro Mirror or Global Mirror volume that is greater than 128TB |
HU00913 | 7.4.0.6 | Multiple node warmstarts when using a Metro Mirror or Global Mirror volume that is greater than 128TB |
HU00913 | 7.6.0.0 | Multiple node warmstarts when using a Metro Mirror or Global Mirror volume that is greater than 128TB |
HU00915 | 7.6.0.0 | Loss of access to data when removing volumes associated with a GMCV relationship |
HU00915 | 7.5.0.9 | Loss of access to data when removing volumes associated with a GMCV relationship |
HU00921 | 7.8.1.10 | A node warmstart may occur when an MDisk state change gives rise to duplicate discovery processes |
HU00921 | 8.2.1.0 | A node warmstart may occur when an MDisk state change gives rise to duplicate discovery processes |
HU00921 | 8.2.0.0 | A node warmstart may occur when an MDisk state change gives rise to duplicate discovery processes |
HU00922 | 7.5.0.5 | Loss of access to data when moving volumes to another I/O group using the GUI |
HU00922 | 7.6.0.0 | Loss of access to data when moving volumes to another I/O group using the GUI |
HU00922 | 7.4.0.6 | Loss of access to data when moving volumes to another I/O group using the GUI |
HU00923 | 7.6.0.0 | Single node warmstart when receiving frame errors on 16GB Fibre Channel adapters |
HU00923 | 7.5.0.7 | Single node warmstart when receiving frame errors on 16GB Fibre Channel adapters |
HU00923 | 7.4.0.6 | Single node warmstart when receiving frame errors on 16GB Fibre Channel adapters |
HU00924 | 7.4.0.6 | The Volumes by Pool display in the GUI shows incorrect EasyTier status |
HU00927 | 7.5.0.8 | Single node warmstart may occur while fast formatting a volume |
HU00928 | 7.7.0.0 | For certain I/O patterns a SAS firmware issue may lead to transport errors that become so prevalent that they cause a drive to become failed |
HU00928 | 7.6.1.5 | For certain I/O patterns a SAS firmware issue may lead to transport errors that become so prevalent that they cause a drive to become failed |
HU00928 | 7.5.0.9 | For certain I/O patterns a SAS firmware issue may lead to transport errors that become so prevalent that they cause a drive to become failed |
HU00935 | 7.5.0.8 | A single node warmstart may occur when memory is asynchronously allocated for an I/O and the underlying FlashCopy map has changed at exactly the same time |
HU00935 | 7.6.0.1 | A single node warmstart may occur when memory is asynchronously allocated for an I/O and the underlying FlashCopy map has changed at exactly the same time |
HU00936 | 7.5.0.5 | During the volume repair process the compression engine restores a larger amount of data than required leading to the volume being offline |
HU00936 | 7.3.0.13 | During the volume repair process the compression engine restores a larger amount of data than required leading to the volume being offline |
HU00936 | 7.4.0.9 | During the volume repair process the compression engine restores a larger amount of data than required leading to the volume being offline |
HU00936 | 7.6.0.2 | During the volume repair process the compression engine restores a larger amount of data than required leading to the volume being offline |
HU00967 | 7.5.0.4 | Multiple warmstarts due to FlashCopy background copy limitation putting both nodes in service state |
HU00967 | 7.6.0.0 | Multiple warmstarts due to FlashCopy background copy limitation putting both nodes in service state |
HU00970 | 7.6.0.1 | Node warmstart when upgrading to v7.6.0.0 with volumes using more than 65536 extents |
HU00973 | 7.6.0.0 | Single node warmstart when concurrently creating new volume host mappings |
HU00975 | 7.6.0.0 | Single node warmstart due to a race condition reordering of the background process when allocating I/O blocks |
HU00975 | 7.5.0.7 | Single node warmstart due to a race condition reordering of the background process when allocating I/O blocks |
HU00980 | 7.3.0.13 | Enhanced recovery procedure for compressed volumes affected by APAR HU00898. For more details refer to this Flash |
HU00980 | 7.4.0.7 | Enhanced recovery procedure for compressed volumes affected by APAR HU00898. For more details refer to this Flash |
HU00980 | 7.5.0.5 | Enhanced recovery procedure for compressed volumes affected by APAR HU00898. For more details refer to this Flash |
HU00980 | 7.6.0.2 | Enhanced recovery procedure for compressed volumes affected by APAR HU00898. For more details refer to this Flash |
HU00982 | 7.6.0.0 | Single node warmstart when software update is attempted on some DH8 nodes |
HU00982 | 7.4.0.11 | Single node warmstart when software update is attempted on some DH8 nodes |
HU00989 | 7.6.0.2 | Where an array is not experiencing any I/O, a drive initialisation may cause node warmstarts |
HU00926 | 7.6.0.2 | Where an array is not experiencing any I/O, a drive initialisation may cause node warmstarts |
HU00990 | 7.5.0.8 | A node warmstart on a cluster with Global Mirror secondary volumes can also result in a delayed response to hosts performing I/O to the Global Mirror primary volumes |
HU00990 | 7.4.0.8 | A node warmstart on a cluster with Global Mirror secondary volumes can also result in a delayed response to hosts performing I/O to the Global Mirror primary volumes |
HU00990 | 7.7.0.0 | A node warmstart on a cluster with Global Mirror secondary volumes can also result in a delayed response to hosts performing I/O to the Global Mirror primary volumes |
HU00990 | 7.6.1.4 | A node warmstart on a cluster with Global Mirror secondary volumes can also result in a delayed response to hosts performing I/O to the Global Mirror primary volumes |
HU00991 | 7.5.0.5 | Performance impact on read pre-fetch workloads |
HU00991 | 7.6.0.0 | Performance impact on read pre-fetch workloads |
HU00992 | 7.6.0.0 | Multiple node warmstarts and offline MDisk group during an array resync process |
HU00993 | 7.6.0.0 | Event ID 1052 and ID 1032 entries in the eventlog are not being cleared |
HU00994 | 7.6.0.0 | Continual VPD updates |
HU00995 | 7.6.0.0 | Problems with delayed I/O causes multiple node warmstarts |
HU00996 | 7.6.0.0 | T2 system recovery when running svctask chenclosure. |
HU00997 | 7.6.0.0 | Single node warmstart on PCI events |
HU00998 | 7.6.0.0 | Support for Fujitsu Eternus DX100 S3 controller |
HU00999 | 7.6.0.0 | FlashCopy volumes may go offline during an upgrade |
HU01000 | 7.5.0.10 | SNMP and Call Home stop working when a node reboots and the Ethernet link is down |
HU01000 | 7.6.0.0 | SNMP and Call Home stop working when a node reboots and the Ethernet link is down |
HU01001 | 7.6.0.0 | CCU checker causes both nodes to warmstart |
HU01002 | 7.6.0.0 | 16Gb HBA causes multiple node warmstarts when unexpected FC frame content received |
HU01003 | 7.6.0.0 | An extremely rapid increase in read I/Os on a single volume can make it difficult for the cache component to free sufficient memory quickly enough to keep up, resulting in node warmstarts |
HU01004 | 7.6.0.0 | Multiple node warmstarts when space efficient volumes are running out of capacity |
HU01005 | 7.6.0.0 | Unable to remove ghost MDisks |
HU01006 | 7.6.0.0 | Volume hosted on Hitachi controllers show high latency due to high I/O concurrency |
HU01007 | 7.7.0.0 | When, due to an issue within FlashCopy, a node warmstart occurs on one node in an I/O group that is the primary site for GMCV relationships, the other node in that I/O group may also warmstart |
HU01007 | 7.6.0.0 | When, due to an issue within FlashCopy, a node warmstart occurs on one node in an I/O group that is the primary site for GMCV relationships, the other node in that I/O group may also warmstart |
HU01008 | 7.6.0.0 | Single node warmstart during code upgrade |
HU01009 | 7.6.0.0 | Continual increase in fans speeds after replacement |
HU01016 | 7.6.1.3 | Node warmstarts can occur when a port scan is received on port 1260 |
HU01088 | 7.6.1.3 | Node warmstarts can occur when a port scan is received on port 1260 |
HU01017 | 7.7.1.3 | The result of CLI commands are sometimes not promptly presented in the GUI |
HU01017 | 7.7.0.5 | The result of CLI commands are sometimes not promptly presented in the GUI |
HU01017 | 7.6.1.5 | The result of CLI commands are sometimes not promptly presented in the GUI |
HU01019 | 7.5.0.8 | Customized grids view in GUI is not being returned after page refreshes |
HU01021 | 7.8.0.0 | A fault in a backend controller can cause excessive path state changes leading to node warmstarts and offline volumes |
HU01157 | 7.8.0.0 | A fault in a backend controller can cause excessive path state changes leading to node warmstarts and offline volumes |
HU01022 | 7.7.1.5 | Fibre Channel adapter encountered a bit parity error, resulting in a node warmstart |
HU01022 | 7.6.1.7 | Fibre Channel adapter encountered a bit parity error, resulting in a node warmstart |
HU01023 | 7.6.1.0 | Remote Copy services do not transfer data after upgrade to v7.6 |
HU01024 | 7.5.0.9 | A single node warmstart may occur when the SAS firmware's ECC checking detects a single bit error. The warmstart clears the error condition in the SAS chip |
HU01024 | 7.4.0.10 | A single node warmstart may occur when the SAS firmware's ECC checking detects a single bit error. The warmstart clears the error condition in the SAS chip |
HU01024 | 7.7.0.3 | A single node warmstart may occur when the SAS firmware's ECC checking detects a single bit error. The warmstart clears the error condition in the SAS chip |
HU01024 | 7.7.1.1 | A single node warmstart may occur when the SAS firmware's ECC checking detects a single bit error. The warmstart clears the error condition in the SAS chip |
HU01024 | 7.6.1.5 | A single node warmstart may occur when the SAS firmware's ECC checking detects a single bit error. The warmstart clears the error condition in the SAS chip |
HU01027 | 7.6.0.4 | Single node warmstart, or unresponsive GUI, when creating compressed volumes |
HU01028 | 7.5.0.8 | Processing of lsnodebootdrive output may adversely impact management GUI performance |
HU01028 | 7.6.1.3 | Processing of lsnodebootdrive output may adversely impact management GUI performance |
HU01029 | 7.5.0.7 | Where a boot drive has been replaced with a new unformatted one on a DH8 node, the node may warmstart when the user logs in as superuser to the CLI via its service IP or logs in to the node via the service GUI. Additionally, where the node is the config node, this may happen when the user logs in as superuser to the cluster via the CLI or management GUI |
HU01029 | 7.6.0.4 | Where a boot drive has been replaced with a new unformatted one on a DH8 node, the node may warmstart when the user logs in as superuser to the CLI via its service IP or logs in to the node via the service GUI. Additionally, where the node is the config node, this may happen when the user logs in as superuser to the cluster via the CLI or management GUI |
HU01029 | 7.4.0.9 | Where a boot drive has been replaced with a new unformatted one on a DH8 node, the node may warmstart when the user logs in as superuser to the CLI via its service IP or logs in to the node via the service GUI. Additionally, where the node is the config node, this may happen when the user logs in as superuser to the cluster via the CLI or management GUI |
HU01029 | 7.3.0.13 | Where a boot drive has been replaced with a new unformatted one on a DH8 node, the node may warmstart when the user logs in as superuser to the CLI via its service IP or logs in to the node via the service GUI. Additionally, where the node is the config node, this may happen when the user logs in as superuser to the cluster via the CLI or management GUI |
HU01030 | 7.6.1.3 | Incremental FlashCopy always requires a full copy |
HU01030 | 7.5.0.8 | Incremental FlashCopy always requires a full copy |
HU01032 | 7.6.0.2 | Batteries going on and offline can take node offline |
HU01033 | 7.5.0.6 | After upgrade to v7.5.0.5 both nodes warmstart |
HU01034 | 7.6.0.3 | Single node warmstart stalls upgrade |
HU01039 | 7.7.0.0 | When volumes that are still in a relationship are forcefully removed, a node may experience warmstarts |
HU01042 | 7.5.0.9 | Single node warmstart due to 16Gb HBA firmware behaviour |
HU01042 | 7.6.1.3 | Single node warmstart due to 16Gb HBA firmware behaviour |
HU01043 | 7.3.0.13 | Long pause when upgrading |
HU01043 | 7.6.1.0 | Long pause when upgrading |
HU01046 | 7.6.1.4 | Free capacity is tracked using a count of free extents. If a child pool is shrunk, the counter can wrap, causing incorrect free capacity to be reported |
HU01046 | 7.5.0.8 | Free capacity is tracked using a count of free extents. If a child pool is shrunk, the counter can wrap, causing incorrect free capacity to be reported |
HU01050 | 7.6.1.6 | DRAID rebuild incorrectly reports event code 988300 |
HU01050 | 7.7.1.1 | DRAID rebuild incorrectly reports event code 988300 |
HU01050 | 7.7.0.5 | DRAID rebuild incorrectly reports event code 988300 |
HU01051 | 7.5.0.7 | Large increase in response time of Global Mirror primary volumes when replicating large amounts of data concurrently to secondary cluster |
HU01051 | 7.4.0.8 | Large increase in response time of Global Mirror primary volumes when replicating large amounts of data concurrently to secondary cluster |
HU01051 | 7.6.1.1 | Large increase in response time of Global Mirror primary volumes when replicating large amounts of data concurrently to secondary cluster |
HU01052 | 7.6.1.3 | GUI operation with large numbers of volumes may adversely impact performance |
HU01052 | 7.5.0.8 | GUI operation with large numbers of volumes may adversely impact performance |
HU01053 | 7.5.0.8 | An issue in the drive automanage process during a replacement may result in a Tier 2 recovery |
HU01053 | 7.6.1.3 | An issue in the drive automanage process during a replacement may result in a Tier 2 recovery |
HU01056 | 7.6.0.4 | Both nodes in the same I/O group warmstart when using vVols |
HU01056 | 7.5.0.7 | Both nodes in the same I/O group warmstart when using vVols |
HU01057 | 7.8.1.0 | Slow GUI performance for some pages as the lsnodebootdrive command generates unexpected output |
HU01058 | 7.5.0.7 | Multiple node warmstarts may occur when volumes that are part of FlashCopy maps go offline (e.g due to insufficient space) |
HU01059 | 7.6.1.3 | When a tier in a storage pool runs out of free extents EasyTier can adversely affect performance |
HU01059 | 7.5.0.8 | When a tier in a storage pool runs out of free extents EasyTier can adversely affect performance |
HU01060 | 7.6.1.4 | Prior warmstarts, perhaps due to a hardware error, can induce a dormant state within the FlashCopy code that may result in further warmstarts |
HU01060 | 7.7.0.0 | Prior warmstarts, perhaps due to a hardware error, can induce a dormant state within the FlashCopy code that may result in further warmstarts |
HU01060 | 7.5.0.8 | Prior warmstarts, perhaps due to a hardware error, can induce a dormant state within the FlashCopy code that may result in further warmstarts |
HU01062 | 7.5.0.9 | Tier 2 recovery may occur when max replication delay is used and remote copy I/O is delayed |
HU01062 | 7.6.1.1 | Tier 2 recovery may occur when max replication delay is used and remote copy I/O is delayed |
HU01063 | 7.7.1.1 | 3PAR controllers do not support OTUR commands resulting in device port exclusions |
HU01063 | 7.6.1.6 | 3PAR controllers do not support OTUR commands resulting in device port exclusions |
HU01064 | 7.5.0.9 | Management GUI incorrectly displays FC mappings that are part of GMCV relationships |
HU01064 | 7.6.1.3 | Management GUI incorrectly displays FC mappings that are part of GMCV relationships |
HU01067 | 7.5.0.8 | In a HyperSwap topology, where host I/O to a volume is being directed to both volume copies, for specific workload characteristics, I/O received within a small timing window could cause warmstarts on two nodes within separate I/O groups |
HU01067 | 7.6.1.1 | In a HyperSwap topology, where host I/O to a volume is being directed to both volume copies, for specific workload characteristics, I/O received within a small timing window could cause warmstarts on two nodes within separate I/O groups |
HU01069 | 7.6.0.4 | After upgrade from v7.5 or earlier to v7.6.0 or later all nodes may warmstart at the same time resulting in a Tier 2 recovery |
HU01069 | 7.7.0.0 | After upgrade from v7.5 or earlier to v7.6.0 or later all nodes may warmstart at the same time resulting in a Tier 2 recovery |
HU01070 | 7.5.0.8 | Increased preparation delay when FlashCopy Manager initiates a backup. This does not impact the performance of the associated data transfer. |
HU01072 | 7.6.1.4 | In certain configurations throttling too much may result in dropped IOs, which can lead to a single node warmstart |
HU01072 | 7.5.0.8 | In certain configurations throttling too much may result in dropped IOs, which can lead to a single node warmstart |
HU01073 | 7.6.1.1 | SVC CG8 nodes have internal SSDs but these are not displayed in the internal storage page |
HU01073 | 7.5.0.7 | SVC CG8 nodes have internal SSDs but these are not displayed in the internal storage page |
HU01074 | 7.6.1.5 | An unresponsive testemail command (possibly due to a congested network) may result in a single node warmstart |
HU01074 | 7.7.0.0 | An unresponsive testemail command (possibly due to a congested network) may result in a single node warmstart |
HU01074 | 7.5.0.9 | An unresponsive testemail command (possibly due to a congested network) may result in a single node warmstart |
HU01075 | 7.7.0.0 | Multiple node warmstarts can occur due to an unstable Remote Copy domain after an upgrade to v7.6.0 |
HU01076 | 7.6.1.3 | Where hosts share volumes using a particular reservation method, if the maximum number of reservations is exceeded, this may result in a single node warmstart |
HU01078 | 7.6.1.5 | When the rmnode command is run, it removes persistent reservation data to prevent a stuck reservation. MS Windows and Hyper-V clusters, by design, constantly monitor the reservation table and take the associated volume offline whilst recovering cluster membership. This can result in a brief outage at the host level. |
HU01078 | 7.7.0.0 | When the rmnode command is run, it removes persistent reservation data to prevent a stuck reservation. MS Windows and Hyper-V clusters, by design, constantly monitor the reservation table and take the associated volume offline whilst recovering cluster membership. This can result in a brief outage at the host level. |
HU01080 | 7.5.0.8 | Single node warmstart due to an I/O timeout in cache |
HU01080 | 7.6.1.3 | Single node warmstart due to an I/O timeout in cache |
HU01081 | 7.6.1.3 | When removing multiple nodes from a cluster a remaining node may warmstart |
HU01081 | 7.5.0.8 | When removing multiple nodes from a cluster a remaining node may warmstart |
HU01082 | 7.5.0.10 | A limitation in the RAID anti-deadlock page reservation process may lead to an MDisk group going offline |
HU01082 | 7.6.1.5 | A limitation in the RAID anti-deadlock page reservation process may lead to an MDisk group going offline |
HU01082 | 7.7.0.0 | A limitation in the RAID anti-deadlock page reservation process may lead to an MDisk group going offline |
HU01086 | 7.6.1.1 | SVC reports incorrect SCSI TPGS data in an 8 node cluster causing host multi-pathing software to receive errors which may result in host outages |
HU01087 | 7.5.0.8 | With a partnership stopped at the remote site, the stop button in the GUI at the local site will be disabled |
HU01087 | 7.6.1.3 | With a partnership stopped at the remote site, the stop button in the GUI at the local site will be disabled |
HU01089 | 7.7.0.0 | svcconfig backup fails when an I/O group name contains a hyphen |
HU01089 | 7.6.1.5 | svcconfig backup fails when an I/O group name contains a hyphen |
HU01090 | 7.6.1.3 | Dual node warmstart due to issue with the call home process |
HU01091 | 7.6.1.3 | An issue with the CAW lock processing, under high SCSI-2 reservation workloads, may cause node warmstarts |
HU01092 | 7.6.1.3 | Systems which have undergone particular upgrade paths may be blocked from upgrading to v7.6 |
HU01094 | 7.6.1.3 | Single node warmstart due to rare resource locking contention |
HU01094 | 7.5.0.8 | Single node warmstart due to rare resource locking contention |
HU01096 | 7.5.0.8 | Batteries may be seen to continuously recondition |
HU01096 | 7.7.0.0 | Batteries may be seen to continuously recondition |
HU01096 | 7.6.1.4 | Batteries may be seen to continuously recondition |
HU01097 | 7.4.0.10 | For a small number of node warmstarts the SAS registers retain incorrect values, rendering the debug information invalid |
HU01097 | 7.6.1.5 | For a small number of node warmstarts the SAS registers retain incorrect values, rendering the debug information invalid |
HU01097 | 7.7.0.3 | For a small number of node warmstarts the SAS registers retain incorrect values, rendering the debug information invalid |
HU01097 | 7.5.0.9 | For a small number of node warmstarts the SAS registers retain incorrect values, rendering the debug information invalid |
HU01098 | 7.7.1.7 | Some older backend controller code levels do not support C2 commands resulting in 1370 entries in the Event Log for every detectmdisk |
HU01098 | 7.6.1.8 | Some older backend controller code levels do not support C2 commands resulting in 1370 entries in the Event Log for every detectmdisk |
HU01098 | 7.8.0.0 | Some older backend controller code levels do not support C2 commands resulting in 1370 entries in the Event Log for every detectmdisk |
HU01100 | 7.6.1.3 | License information not showing on GUI after upgrade to 7.6.0.3 |
HU01103 | 7.6.1.1 | A specific drive type may insufficiently report media events, causing a delay in failure handling |
HU01104 | 7.7.0.0 | When using GMCV relationships, if a node in an I/O group loses communication with its partner, it may warmstart |
HU01104 | 7.6.1.4 | When using GMCV relationships, if a node in an I/O group loses communication with its partner, it may warmstart |
HU01109 | 7.7.0.5 | Multiple nodes can experience a lease expiry when an FC port is having communications issues |
HU01109 | 7.6.1.6 | Multiple nodes can experience a lease expiry when an FC port is having communications issues |
HU01109 | 7.7.1.1 | Multiple nodes can experience a lease expiry when an FC port is having communications issues |
HU01110 | 7.6.1.5 | Spectrum Virtualize supports SSH connections using RC4 based ciphers |
HU01110 | 7.7.0.0 | Spectrum Virtualize supports SSH connections using RC4 based ciphers |
HU01110 | 7.5.0.9 | Spectrum Virtualize supports SSH connections using RC4 based ciphers |
HU01112 | 7.6.1.3 | When upgrading, the quorum lease times are not updated correctly which may cause lease expiries on both nodes |
HU01118 | 7.6.1.3 | Due to a firmware issue both nodes in a V7000 Gen 2 may be powered off |
HU01118 | 7.7.1.1 | Due to a firmware issue both nodes in a V7000 Gen 2 may be powered off |
HU01140 | 7.5.0.9 | EasyTier may unbalance the workloads on MDisks using specific Nearline SAS drives due to incorrect thresholds for their performance |
HU01140 | 7.7.1.1 | EasyTier may unbalance the workloads on MDisks using specific Nearline SAS drives due to incorrect thresholds for their performance |
HU01140 | 7.6.1.5 | EasyTier may unbalance the workloads on MDisks using specific Nearline SAS drives due to incorrect thresholds for their performance |
HU01140 | 7.7.0.3 | EasyTier may unbalance the workloads on MDisks using specific Nearline SAS drives due to incorrect thresholds for their performance |
HU01141 | 7.7.1.1 | Node warmstart (possibly due to a network problem) when a mkippartnership CLI command is issued. This may lead to loss of the config node, requiring a Tier 2 recovery |
HU01141 | 7.7.0.3 | Node warmstart (possibly due to a network problem) when a mkippartnership CLI command is issued. This may lead to loss of the config node, requiring a Tier 2 recovery |
HU01141 | 7.6.1.5 | Node warmstart (possibly due to a network problem) when a mkippartnership CLI command is issued. This may lead to loss of the config node, requiring a Tier 2 recovery |
HU01141 | 7.5.0.9 | Node warmstart (possibly due to a network problem) when a mkippartnership CLI command is issued. This may lead to loss of the config node, requiring a Tier 2 recovery |
HU01142 | 7.6.1.4 | Single node warmstart due to 16Gb HBA firmware receiving invalid FC frames |
HU01143 | 7.6.1.4 | Where nodes are missing config files some services will be prevented from starting |
HU01143 | 7.7.0.0 | Where nodes are missing config files some services will be prevented from starting |
HU01144 | 7.6.1.4 | Single node warmstart on the config node due to GUI contention |
HU01144 | 7.5.0.9 | Single node warmstart on the config node due to GUI contention |
HU01144 | 7.7.0.0 | Single node warmstart on the config node due to GUI contention |
HU01155 | 7.7.1.1 | When an lsvdisklba or lsmdisklba command is invoked for an MDisk with a backend issue, a node warmstart may occur |
HU01156 | 7.7.0.0 | Single node warmstart due to an invalid FCoE frame from a HP-UX host |
HU01165 | 7.6.1.4 | When an SE volume goes offline, both nodes may experience multiple warmstarts and go to service state |
HU01165 | 7.7.0.0 | When an SE volume goes offline, both nodes may experience multiple warmstarts and go to service state |
HU01177 | 7.8.0.0 | A small timing window issue exists where a node warmstart or power failure can lead to repeated warmstarts of that node until a node rescue is performed |
HU01178 | 7.6.1.5 | Battery incorrectly reports zero percent charged |
HU01180 | 7.6.1.4 | When creating a snapshot on an ESX host, using vVols, a Tier 2 recovery may occur |
HU01180 | 7.7.0.0 | When creating a snapshot on an ESX host, using vVols, a Tier 2 recovery may occur |
HU01181 | 7.6.1.4 | Compressed volumes larger than 96 TiB may experience a loss of access to the volume. For more details refer to this Flash |
HU01181 | 7.7.0.0 | Compressed volumes larger than 96 TiB may experience a loss of access to the volume. For more details refer to this Flash |
HU01182 | 7.6.1.5 | Node warmstarts due to 16Gb HBA firmware receiving an invalid SCSI TUR command |
HU01182 | 7.7.0.3 | Node warmstarts due to 16Gb HBA firmware receiving an invalid SCSI TUR command |
HU01182 | 7.7.1.1 | Node warmstarts due to 16Gb HBA firmware receiving an invalid SCSI TUR command |
HU01183 | 7.6.1.5 | Node warmstart due to 16Gb HBA firmware entering a rare deadlock condition in its ELS frame handling |
HU01183 | 7.7.0.3 | Node warmstart due to 16Gb HBA firmware entering a rare deadlock condition in its ELS frame handling |
HU01184 | 7.6.1.5 | When removing multiple MDisks node warmstarts may occur |
HU01184 | 7.7.0.5 | When removing multiple MDisks node warmstarts may occur |
HU01184 | 7.7.1.1 | When removing multiple MDisks node warmstarts may occur |
HU01185 | 7.7.0.5 | iSCSI target closes connection when there is a mismatch in sequence number |
HU01185 | 7.7.1.1 | iSCSI target closes connection when there is a mismatch in sequence number |
HU01185 | 7.6.1.5 | iSCSI target closes connection when there is a mismatch in sequence number |
HU01185 | 7.5.0.10 | iSCSI target closes connection when there is a mismatch in sequence number |
HU01186 | 7.7.0.0 | Volumes going offline briefly may disrupt the operation of Remote Copy leading to a loss of access by hosts |
HU01187 | 7.6.1.6 | Circumstances can arise where more than one array rebuild operation can share the same CPU core resulting in extended completion times |
HU01187 | 7.7.0.5 | Circumstances can arise where more than one array rebuild operation can share the same CPU core resulting in extended completion times |
HU01187 | 7.7.1.1 | Circumstances can arise where more than one array rebuild operation can share the same CPU core resulting in extended completion times |
HU01188 | 7.7.0.0 | Quorum lease times are not set correctly impacting system availability |
HU01189 | 7.7.0.0 | Improvement to DRAID dependency calculation when handling multiple drive failures |
HU01190 | 8.1.1.0 | Where a controller that has been assigned to a specific site has some logins intentionally removed, the system can continue to display the controller as degraded even after the DMP has been followed and errors fixed |
HU01192 | 7.7.0.1 | Some V7000 gen1 systems have an unexpected WWNN value which can cause a single node warmstart when upgrading to v7.7 |
HU01193 | 7.8.0.0 | A drive failure whilst an array rebuild is in progress can lead to both nodes in an I/O group warmstarting |
HU01193 | 7.7.1.5 | A drive failure whilst an array rebuild is in progress can lead to both nodes in an I/O group warmstarting |
HU01193 | 7.7.0.5 | A drive failure whilst an array rebuild is in progress can lead to both nodes in an I/O group warmstarting |
HU01193 | 7.6.1.7 | A drive failure whilst an array rebuild is in progress can lead to both nodes in an I/O group warmstarting |
HU01194 | 7.7.0.3 | A single node warmstart may occur if CLI commands are received from the VASA provider in very rapid succession. This is caused by a deadlock condition which prevents the subsequent CLI command from completing |
HU01194 | 7.6.1.5 | A single node warmstart may occur if CLI commands are received from the VASA provider in very rapid succession. This is caused by a deadlock condition which prevents the subsequent CLI command from completing |
HU01194 | 7.7.1.1 | A single node warmstart may occur if CLI commands are received from the VASA provider in very rapid succession. This is caused by a deadlock condition which prevents the subsequent CLI command from completing |
HU01198 | 7.6.1.5 | Running the Comprestimator svctask analyzevdiskbysystem command may cause the config node to warmstart |
HU01198 | 7.7.0.5 | Running the Comprestimator svctask analyzevdiskbysystem command may cause the config node to warmstart |
HU01198 | 7.7.1.1 | Running the Comprestimator svctask analyzevdiskbysystem command may cause the config node to warmstart |
HU01208 | 7.7.1.1 | After upgrading to v7.7 or later from v7.5 or earlier and then creating a DRAID array, with a node reset, the system may encounter repeated node warmstarts which will require a Tier 3 recovery |
HU01208 | 7.7.0.2 | After upgrading to v7.7 or later from v7.5 or earlier and then creating a DRAID array, with a node reset, the system may encounter repeated node warmstarts which will require a Tier 3 recovery |
HU01209 | 8.3.1.7 | It is possible for the Fibre Channel driver to be offered an unsupported length of data resulting in a node warmstart |
HU01209 | 8.5.0.0 | It is possible for the Fibre Channel driver to be offered an unsupported length of data resulting in a node warmstart |
HU01209 | 8.4.0.7 | It is possible for the Fibre Channel driver to be offered an unsupported length of data resulting in a node warmstart |
HU01210 | 7.6.1.5 | A small number of systems have broken, or disabled, TPMs. For these systems the generation of a new master key may fail preventing the system joining a cluster |
HU01210 | 7.7.0.3 | A small number of systems have broken, or disabled, TPMs. For these systems the generation of a new master key may fail preventing the system joining a cluster |
HU01210 | 7.7.1.1 | A small number of systems have broken, or disabled, TPMs. For these systems the generation of a new master key may fail preventing the system joining a cluster |
HU01212 | 7.5.0.10 | GUI displays an incorrect timezone description for Moscow |
HU01212 | 7.7.0.3 | GUI displays an incorrect timezone description for Moscow |
HU01212 | 7.6.1.5 | GUI displays an incorrect timezone description for Moscow |
HU01213 | 7.8.0.0 | The LDAP password is visible in the auditlog |
HU01213 | 7.7.0.5 | The LDAP password is visible in the auditlog |
HU01214 | 7.7.0.5 | GUI and snap missing EasyTier heatmap information |
HU01214 | 7.6.1.5 | GUI and snap missing EasyTier heatmap information |
HU01214 | 7.7.1.1 | GUI and snap missing EasyTier heatmap information |
HU01219 | 7.7.1.1 | Single node warmstart due to an issue in the handling of ECC errors within 16G HBA firmware |
HU01219 | 7.7.0.5 | Single node warmstart due to an issue in the handling of ECC errors within 16G HBA firmware |
HU01219 | 7.6.1.6 | Single node warmstart due to an issue in the handling of ECC errors within 16G HBA firmware |
HU01220 | 7.8.1.0 | Changing the type of an RC consistency group when a volume in a subordinate relationship is offline will cause a Tier 2 recovery |
HU01221 | 7.7.1.1 | Node warmstarts due to an issue with the state machine transition in 16Gb HBA firmware |
HU01221 | 7.7.0.5 | Node warmstarts due to an issue with the state machine transition in 16Gb HBA firmware |
HU01221 | 7.6.1.6 | Node warmstarts due to an issue with the state machine transition in 16Gb HBA firmware |
HU01222 | 8.6.3.0 | FlashCopy entries in the eventlog always have an object ID of 0, rather than the correct object ID |
HU01222 | 8.7.0.0 | FlashCopy entries in the eventlog always have an object ID of 0, rather than the correct object ID |
HU01223 | 7.7.0.5 | The handling of a rebooted node's return to the cluster can occasionally become delayed, resulting in a stoppage of inter-cluster relationships |
HU01223 | 7.8.0.0 | The handling of a rebooted node's return to the cluster can occasionally become delayed, resulting in a stoppage of inter-cluster relationships |
HU01223 | 7.7.1.5 | The handling of a rebooted node's return to the cluster can occasionally become delayed, resulting in a stoppage of inter-cluster relationships |
HU01223 | 7.6.1.5 | The handling of a rebooted node's return to the cluster can occasionally become delayed, resulting in a stoppage of inter-cluster relationships |
HU01225 | 7.8.0.2 | Node warmstarts due to inconsistencies arising from the way cache interacts with compression |
HU01225 | 7.7.1.6 | Node warmstarts due to inconsistencies arising from the way cache interacts with compression |
HU01225 | 7.6.1.7 | Node warmstarts due to inconsistencies arising from the way cache interacts with compression |
HU01330 | 7.8.0.2 | Node warmstarts due to inconsistencies arising from the way cache interacts with compression |
HU01330 | 7.7.1.6 | Node warmstarts due to inconsistencies arising from the way cache interacts with compression |
HU01330 | 7.6.1.7 | Node warmstarts due to inconsistencies arising from the way cache interacts with compression |
HU01412 | 7.8.0.2 | Node warmstarts due to inconsistencies arising from the way cache interacts with compression |
HU01412 | 7.7.1.6 | Node warmstarts due to inconsistencies arising from the way cache interacts with compression |
HU01412 | 7.6.1.7 | Node warmstarts due to inconsistencies arising from the way cache interacts with compression |
HU01226 | 7.6.1.6 | Changing max replication delay from the default to a small non-zero number can cause hung IOs leading to multiple node warmstarts and a loss of access |
HU01226 | 7.7.1.3 | Changing max replication delay from the default to a small non-zero number can cause hung IOs leading to multiple node warmstarts and a loss of access |
HU01226 | 7.7.0.5 | Changing max replication delay from the default to a small non-zero number can cause hung IOs leading to multiple node warmstarts and a loss of access |
HU01227 | 7.7.0.5 | High volumes of events may cause the email notifications to become stalled |
HU01227 | 7.8.1.0 | High volumes of events may cause the email notifications to become stalled |
HU01227 | 7.5.0.10 | High volumes of events may cause the email notifications to become stalled |
HU01227 | 7.6.1.5 | High volumes of events may cause the email notifications to become stalled |
HU01227 | 7.7.1.3 | High volumes of events may cause the email notifications to become stalled |
HU01228 | 7.6.1.8 | Automatic T3 recovery may fail due to the handling of quorum registration generating duplicate entries |
HU01228 | 7.7.1.7 | Automatic T3 recovery may fail due to the handling of quorum registration generating duplicate entries |
HU01228 | 7.8.0.0 | Automatic T3 recovery may fail due to the handling of quorum registration generating duplicate entries |
HU01229 | 7.7.1.7 | The DMP for a 3105 event does not identify the correct problem canister |
HU01229 | 7.8.0.0 | The DMP for a 3105 event does not identify the correct problem canister |
HU01230 | 7.8.0.0 | A host aborting an outstanding logout command can lead to a single node warmstart |
HU01234 | 7.7.0.5 | After upgrading to v7.6 or later, iSCSI hosts may incorrectly be shown as offline in the CLI |
HU01234 | 7.7.1.3 | After upgrading to v7.6 or later, iSCSI hosts may incorrectly be shown as offline in the CLI |
HU01234 | 7.6.1.6 | After upgrading to v7.6 or later, iSCSI hosts may incorrectly be shown as offline in the CLI |
HU01238 | 8.4.0.0 | The mishandling of performance stats may occasionally result in some entries being overwritten |
HU01240 | 7.7.0.0 | For some volumes the first write I/O, after a significant period (>120 sec) of inactivity, may experience a slightly elevated response time |
HU01240 | 7.6.1.5 | For some volumes the first write I/O, after a significant period (>120 sec) of inactivity, may experience a slightly elevated response time |
HU01240 | 7.5.0.9 | For some volumes the first write I/O, after a significant period (>120 sec) of inactivity, may experience a slightly elevated response time |
HU01244 | 7.7.1.1 | When a node is transitioning from offline to online it is possible for excessive CPU time to be used on another node in the cluster which may lead to a single node warmstart |
HU01245 | 7.5.0.11 | Making any config change that may interact with the primary change volume of a GMCV relationship, whilst data is being actively copied, can result in a node warmstart |
HU01245 | 7.6.1.6 | Making any config change that may interact with the primary change volume of a GMCV relationship, whilst data is being actively copied, can result in a node warmstart |
HU01245 | 7.7.0.0 | Making any config change that may interact with the primary change volume of a GMCV relationship, whilst data is being actively copied, can result in a node warmstart |
HU01247 | 7.7.0.5 | When a FlashCopy consistency group is stopped more than once in rapid succession a node warmstart may result |
HU01247 | 7.6.1.7 | When a FlashCopy consistency group is stopped more than once in rapid succession a node warmstart may result |
HU01247 | 7.7.1.4 | When a FlashCopy consistency group is stopped more than once in rapid succession a node warmstart may result |
HU01247 | 7.8.0.0 | When a FlashCopy consistency group is stopped more than once in rapid succession a node warmstart may result |
HU01250 | 7.7.1.1 | When using lsvdisklba to find a bad block on a compressed volume, the volume can go offline |
HU01251 | 7.6.1.6 | When following the DMP for a 1685 event, if the option indicating that a drive reseat has already been attempted is selected, the process to replace the drive is not started |
HU01251 | 7.7.1.3 | When following the DMP for a 1685 event, if the option indicating that a drive reseat has already been attempted is selected, the process to replace the drive is not started |
HU01252 | 7.8.1.0 | Where an SVC is presenting storage from an 8-node V7000, an upgrade to that V7000 can pause I/O long enough for the SVC to take related MDisks offline |
HU01254 | 7.7.1.5 | A fluctuation of input AC power can cause a 584 error on a node |
HU01254 | 7.6.1.7 | A fluctuation of input AC power can cause a 584 error on a node |
HU01254 | 7.8.0.0 | A fluctuation of input AC power can cause a 584 error on a node |
HU01255 | 7.7.1.7 | The presence of a faulty SAN component can delay lease messages between nodes leading to a cluster-wide lease expiry and consequential loss of access |
HU01255 | 7.8.1.2 | The presence of a faulty SAN component can delay lease messages between nodes leading to a cluster-wide lease expiry and consequential loss of access |
HU01255 | 8.1.0.0 | The presence of a faulty SAN component can delay lease messages between nodes leading to a cluster-wide lease expiry and consequential loss of access |
HU01239 | 7.7.1.7 | The presence of a faulty SAN component can delay lease messages between nodes leading to a cluster-wide lease expiry and consequential loss of access |
HU01239 | 7.8.1.2 | The presence of a faulty SAN component can delay lease messages between nodes leading to a cluster-wide lease expiry and consequential loss of access |
HU01239 | 8.1.0.0 | The presence of a faulty SAN component can delay lease messages between nodes leading to a cluster-wide lease expiry and consequential loss of access |
HU01586 | 7.7.1.7 | The presence of a faulty SAN component can delay lease messages between nodes leading to a cluster-wide lease expiry and consequential loss of access |
HU01586 | 7.8.1.2 | The presence of a faulty SAN component can delay lease messages between nodes leading to a cluster-wide lease expiry and consequential loss of access |
HU01586 | 8.1.0.0 | The presence of a faulty SAN component can delay lease messages between nodes leading to a cluster-wide lease expiry and consequential loss of access |
HU01257 | 7.7.1.3 | Large (>1MB) write IOs to volumes can lead to a hung I/O condition resulting in node warmstarts |
HU01258 | 7.5.0.10 | A compressed volume copy will result in an unexpected 1862 message when a site/node fails over in a stretched cluster configuration |
HU01258 | 7.7.0.4 | A compressed volume copy will result in an unexpected 1862 message when a site/node fails over in a stretched cluster configuration |
HU01258 | 7.6.1.6 | A compressed volume copy will result in an unexpected 1862 message when a site/node fails over in a stretched cluster configuration |
HU01258 | 7.4.0.11 | A compressed volume copy will result in an unexpected 1862 message when a site/node fails over in a stretched cluster configuration |
HU01258 | 7.7.1.1 | A compressed volume copy will result in an unexpected 1862 message when a site/node fails over in a stretched cluster configuration |
HU01262 | 7.6.1.7 | Cached data for a HyperSwap volume may only be destaged from a single node in an I/O group |
HU01264 | 7.8.0.0 | Node warmstart due to an issue in the compression optimisation process |
HU01267 | 7.8.0.0 | An unusual interaction between Remote Copy and FlashCopy can lead to both nodes in an I/O group warmstarting |
HU01267 | 7.7.1.7 | An unusual interaction between Remote Copy and FlashCopy can lead to both nodes in an I/O group warmstarting |
HU01268 | 7.8.0.0 | Upgrade to 7.7.x fails on Storwize systems in the replication layer where a T3 recovery was performed in the past |
HU01269 | 7.7.1.5 | A rare timing conflict between two processes may lead to a node warmstart |
HU01269 | 7.7.0.5 | A rare timing conflict between two processes may lead to a node warmstart |
HU01269 | 7.8.0.0 | A rare timing conflict between two processes may lead to a node warmstart |
HU01272 | 7.7.1.2 | Replacing a drive in a system with a DRAID array can result in T2 recovery warmstarts. For more details refer to this Flash |
HU01274 | 7.7.0.0 | DRAID lsarraysyncprogress command may appear to show array synchronisation stuck at 99% |
HU01276 | 8.2.1.0 | An issue in the handling of debug data from the FC adapter can cause a node warmstart |
HU01276 | 7.8.1.8 | An issue in the handling of debug data from the FC adapter can cause a node warmstart |
HU01276 | 8.2.0.0 | An issue in the handling of debug data from the FC adapter can cause a node warmstart |
HU01292 | 7.7.1.3 | Under some circumstances the re-calculation of grains to clean can take too long after a FlashCopy done event has been sent resulting in a node warmstart |
HU01292 | 7.7.0.5 | Under some circumstances the re-calculation of grains to clean can take too long after a FlashCopy done event has been sent resulting in a node warmstart |
HU01304 | 7.8.0.0 | SSH authentication fails if multiple SSH keys are configured on the client |
HU01309 | 7.8.1.0 | For FC logins, on a node that is online for more than 200 days, if a fabric event makes a login inactive then the node may be unable to re-establish the login |
HU01320 | 7.8.0.0 | A rare timing condition can cause hung I/O leading to warmstarts on both nodes in an I/O group. Probability can be increased in the presence of failing drives. |
HU01321 | 8.1.0.0 | Multi-node warmstarts may occur when changing the direction of a remote copy relationship whilst write I/O to the (former) primary volume is still occurring |
HU01321 | 7.8.1.3 | Multi-node warmstarts may occur when changing the direction of a remote copy relationship whilst write I/O to the (former) primary volume is still occurring |
HU01323 | 7.7.1.4 | Systems using Volume Mirroring that upgrade to v7.7.1.x and have a storage pool go offline may experience a node warmstart |
HU01323 | 7.8.0.0 | Systems using Volume Mirroring that upgrade to v7.7.1.x and have a storage pool go offline may experience a node warmstart |
HU01332 | 7.7.1.7 | Performance monitor and Spectrum Control show zero CPU utilisation for compression |
HU01332 | 7.8.1.1 | Performance monitor and Spectrum Control show zero CPU utilisation for compression |
HU01332 | 7.6.1.8 | Performance monitor and Spectrum Control show zero CPU utilisation for compression |
HU01340 | 7.7.0.5 | A port translation issue between v7.5 or earlier and v7.7.0 or later requires a Tier 2 recovery to complete an upgrade |
HU01340 | 7.7.1.5 | A port translation issue between v7.5 or earlier and v7.7.0 or later requires a Tier 2 recovery to complete an upgrade |
HU01340 | 7.8.0.0 | A port translation issue between v7.5 or earlier and v7.7.0 or later requires a Tier 2 recovery to complete an upgrade |
HU01346 | 8.1.0.0 | An unexpected error 1036 may display on the event log even though a canister was never physically removed |
HU01347 | 7.7.1.4 | During an upgrade to v7.7.1 a deadlock in node communications can occur leading to a timeout and node warmstarts |
HU01347 | 7.8.0.0 | During an upgrade to v7.7.1 a deadlock in node communications can occur leading to a timeout and node warmstarts |
HU01353 | 7.5.0.11 | CLI allows the input of carriage return characters into certain fields, after cluster creation, resulting in invalid cluster VPD and failed node adds |
HU01353 | 7.6.1.6 | CLI allows the input of carriage return characters into certain fields, after cluster creation, resulting in invalid cluster VPD and failed node adds |
HU01353 | 7.8.1.1 | CLI allows the input of carriage return characters into certain fields, after cluster creation, resulting in invalid cluster VPD and failed node adds |
HU01370 | 7.8.0.0 | lsfabric command may not list all logins when it is used with parameters |
HU01371 | 7.7.1.6 | A remote copy command related to HyperSwap may hang resulting in a warmstart of the config node |
HU01371 | 7.8.1.0 | A remote copy command related to HyperSwap may hang resulting in a warmstart of the config node |
HU01374 | 7.8.0.0 | Where an issue with Global Mirror causes excessive I/O delay, a timeout may not function resulting in a node warmstart |
HU01374 | 7.7.0.5 | Where an issue with Global Mirror causes excessive I/O delay, a timeout may not function resulting in a node warmstart |
HU01374 | 7.7.1.4 | Where an issue with Global Mirror causes excessive I/O delay, a timeout may not function resulting in a node warmstart |
HU01379 | 7.8.0.0 | Resource leak in the handling of Read Intensive drives leads to offline volumes |
HU01379 | 7.7.1.4 | Resource leak in the handling of Read Intensive drives leads to offline volumes |
HU01381 | 7.7.1.4 | A rare timing issue in FlashCopy may lead to a node warmstarting repeatedly and then entering a service state |
HU01381 | 7.8.0.0 | A rare timing issue in FlashCopy may lead to a node warmstarting repeatedly and then entering a service state |
HU01382 | 7.8.0.1 | Mishandling of extent migration following a rmarray command can lead to multiple simultaneous node warmstarts with a loss of access |
HU01382 | 7.7.1.5 | Mishandling of extent migration following a rmarray command can lead to multiple simultaneous node warmstarts with a loss of access |
HU01385 | 7.8.1.3 | A warmstart may occur if a rmvolumecopy or rmrcrelationship command is issued on a volume while I/O is being forwarded to the associated copy |
HU01385 | 8.1.0.0 | A warmstart may occur if a rmvolumecopy or rmrcrelationship command is issued on a volume while I/O is being forwarded to the associated copy |
HU01385 | 7.7.1.7 | A warmstart may occur if a rmvolumecopy or rmrcrelationship command is issued on a volume while I/O is being forwarded to the associated copy |
HU01386 | 7.7.1.3 | Where latency between sites is greater than 1ms, host write latency can be adversely impacted. This can be more likely in the presence of large I/O transfer sizes or high IOPS |
HU01388 | 7.8.1.0 | Where a HyperSwap volume is the source of a FlashCopy mapping and the HyperSwap relationship is out of sync, a switch of direction will occur when the HyperSwap volume comes back online and the FlashCopy operation may delay I/O, leading to node warmstarts |
HU01391 | 7.7.1.7 | Storwize systems may experience a warmstart due to an uncorrectable error in the SAS firmware |
HU01391 | 7.6.1.8 | Storwize systems may experience a warmstart due to an uncorrectable error in the SAS firmware |
HU01391 | 7.8.1.1 | Storwize systems may experience a warmstart due to an uncorrectable error in the SAS firmware |
HU01391 | 7.5.0.12 | Storwize systems may experience a warmstart due to an uncorrectable error in the SAS firmware |
HU01581 | 7.7.1.7 | Storwize systems may experience a warmstart due to an uncorrectable error in the SAS firmware |
HU01581 | 7.6.1.8 | Storwize systems may experience a warmstart due to an uncorrectable error in the SAS firmware |
HU01581 | 7.8.1.1 | Storwize systems may experience a warmstart due to an uncorrectable error in the SAS firmware |
HU01581 | 7.5.0.12 | Storwize systems may experience a warmstart due to an uncorrectable error in the SAS firmware |
HU01392 | 7.7.1.5 | Under certain rare conditions FC mappings not in a consistency group can be added to a special internal consistency group resulting in a Tier 2 recovery |
HU01392 | 7.8.0.0 | Under certain rare conditions FC mappings not in a consistency group can be added to a special internal consistency group resulting in a Tier 2 recovery |
HU01394 | 7.8.1.0 | Node warmstarts may occur on systems which are performing Global Mirror replication, due to a low-probability timing window |
HU01395 | 7.8.1.0 | Malformed URLs sent by security scanners, whilst correctly discarded, can cause considerable exception logging on config nodes, leading to performance degradation that can adversely affect remote copy |
HU01396 | 8.1.0.0 | HBA firmware resources can become exhausted resulting in node warmstarts |
HU01399 | 7.8.0.0 | For certain config nodes the CLI Help commands may not work |
HU01399 | 7.6.1.7 | For certain config nodes the CLI Help commands may not work |
HU01399 | 7.7.0.5 | For certain config nodes the CLI Help commands may not work |
HU01399 | 7.7.1.5 | For certain config nodes the CLI Help commands may not work |
HU01402 | 7.7.1.5 | Nodes can power down unexpectedly as they are unable to determine from their partner whether power is available |
HU01402 | 7.6.1.7 | Nodes can power down unexpectedly as they are unable to determine from their partner whether power is available |
HU01402 | 7.8.0.0 | Nodes can power down unexpectedly as they are unable to determine from their partner whether power is available |
HU01402 | 7.7.0.5 | Nodes can power down unexpectedly as they are unable to determine from their partner whether power is available |
HU01404 | 7.8.1.0 | A node warmstart may occur when a new volume is created using fast format and foreground I/O is submitted to the volume |
HU01409 | 7.8.0.2 | Cisco Nexus 3000 switches at v5.0(3) have a defect which prevents a config node IP address changing in the event of a fail over |
HU01409 | 7.7.1.5 | Cisco Nexus 3000 switches at v5.0(3) have a defect which prevents a config node IP address changing in the event of a fail over |
HU01409 | 7.6.1.7 | Cisco Nexus 3000 switches at v5.0(3) have a defect which prevents a config node IP address changing in the event of a fail over |
HU01410 | 7.7.1.5 | An issue in the handling of FlashCopy map preparation can cause both nodes in an I/O group to be put into service state |
HU01410 | 7.6.1.7 | An issue in the handling of FlashCopy map preparation can cause both nodes in an I/O group to be put into service state |
HU01410 | 7.8.0.2 | An issue in the handling of FlashCopy map preparation can cause both nodes in an I/O group to be put into service state |
HU01413 | 7.8.1.0 | Node warmstarts when establishing an FC partnership between a system on v7.7.1 or later and another system which in turn has a partnership with another system running v6.4.1 or earlier |
HU01415 | 7.8.0.1 | When a V3700 with 1GE adapters is upgraded to v7.8.0.0 iSCSI hosts will lose access to volumes |
HU01416 | 7.8.1.0 | ISL configuration activity may cause a cluster-wide lease expiry |
HU01416 | 7.7.1.7 | ISL configuration activity may cause a cluster-wide lease expiry |
HU01420 | 8.1.1.0 | An issue in DRAID can cause repeated node warmstarts in the circumstances of a degraded copyback operation to a drive |
HU01420 | 7.8.1.6 | An issue in DRAID can cause repeated node warmstarts in the circumstances of a degraded copyback operation to a drive |
HU01426 | 7.8.0.2 | On systems running v7.6.1 or earlier with compressed volumes, an upgrade to v7.8.0 or later will fail when the first node warmstarts and enters a service state |
HU01428 | 7.7.1.7 | Scheduling issue adversely affects performance resulting in node warmstarts |
HU01428 | 7.8.1.0 | Scheduling issue adversely affects performance resulting in node warmstarts |
HU01430 | 7.7.1.7 | Memory resource shortages in systems with 8GB of RAM can lead to node warmstarts |
HU01430 | 7.6.1.8 | Memory resource shortages in systems with 8GB of RAM can lead to node warmstarts |
HU01430 | 7.8.1.1 | Memory resource shortages in systems with 8GB of RAM can lead to node warmstarts |
HU01432 | 7.8.0.2 | Node warmstart due to an accounting issue within the cache component |
HU01432 | 7.6.1.7 | Node warmstart due to an accounting issue within the cache component |
HU01432 | 7.7.1.5 | Node warmstart due to an accounting issue within the cache component |
HU01434 | 7.6.0.0 | A node port can become excluded, when its login status changes, leading to a load imbalance across available local ports |
HU01442 | 7.8.0.2 | Upgrading to v7.7.1.5 or v7.8.0.1 with encryption enabled will result in multiple Tier 2 recoveries and a loss of access |
HU01445 | 7.7.1.9 | Systems with heavily used RAID-1 or RAID-10 arrays may experience a node warmstart |
HU01445 | 7.8.1.0 | Systems with heavily used RAID-1 or RAID-10 arrays may experience a node warmstart |
HU01446 | 8.1.0.0 | Where host workload overloads the back-end controller and VMware hosts are issuing ATS commands, a race condition may be triggered, leading to a node warmstart |
HU01446 | 7.8.1.6 | Where host workload overloads the back-end controller and VMware hosts are issuing ATS commands, a race condition may be triggered, leading to a node warmstart |
HU01447 | 7.6.1.7 | The management of FlashCopy grains during a restore process can miss some IOs |
HU01447 | 7.7.0.5 | The management of FlashCopy grains during a restore process can miss some IOs |
HU01447 | 7.5.0.9 | The management of FlashCopy grains during a restore process can miss some IOs |
HU01454 | 8.1.0.0 | During an array rebuild a quiesce operation can become stalled leading to a node warmstart |
HU01455 | 7.8.0.0 | VMware hosts with ATS enabled can see LUN disconnects to volumes when GMCV is used |
HU01457 | 7.7.1.7 | In a hybrid V7000 cluster where one I/O group supports 10k volumes and another does not, some operations on volumes may incorrectly be denied in the GUI |
HU01457 | 8.1.0.0 | In a hybrid V7000 cluster where one I/O group supports 10k volumes and another does not, some operations on volumes may incorrectly be denied in the GUI |
HU01457 | 7.8.1.3 | In a hybrid V7000 cluster where one I/O group supports 10k volumes and another does not, some operations on volumes may incorrectly be denied in the GUI |
HU01458 | 8.1.0.0 | A node warmstart may occur when hosts submit writes to Remote Copy secondary volumes (which are in a read-only mode) |
HU01459 | 7.8.0.2 | The event log indicates incorrect enclosure type |
HU01460 | 8.1.3.0 | If another drive fails during an array rebuild, the high processing demand in RAID for handling many medium errors during the rebuild can lead to a node warmstart |
HU01462 | 8.1.1.0 | Environmental factors can trigger a protection mechanism that causes the SAS chip to freeze, resulting in a single node warmstart |
HU01463 | 7.8.1.0 | SSH Forwarding is enabled on the SSH server |
HU01466 | 7.8.1.0 | Stretched cluster and HyperSwap I/O routing does not work properly due to incorrect ALUA data |
HU01466 | 7.7.1.7 | Stretched cluster and HyperSwap I/O routing does not work properly due to incorrect ALUA data |
HU01467 | 7.8.1.8 | Failures in the handling of performance statistics files may lead to missing samples in Spectrum Control and other tools |
HU01467 | 8.1.0.0 | Failures in the handling of performance statistics files may lead to missing samples in Spectrum Control and other tools |
HU01467 | 7.7.1.7 | Failures in the handling of performance statistics files may lead to missing samples in Spectrum Control and other tools |
HU01469 | 7.7.1.7 | Resource exhaustion in the iSCSI component can result in a node warmstart |
HU01469 | 7.8.1.1 | Resource exhaustion in the iSCSI component can result in a node warmstart |
HU01470 | 7.8.1.0 | T3 might fail during svcconfig recover -execute while running chemail if the email_machine_address contains a comma |
HU01471 | 7.8.1.1 | Powering the system down using the GUI on a V5000 causes the fans to run high while the system is offline but power is still applied to the enclosure |
HU01472 | 8.1.0.0 | A locking issue in Global Mirror can cause a warmstart on the secondary cluster |
HU01472 | 7.8.1.6 | A locking issue in Global Mirror can cause a warmstart on the secondary cluster |
HU01473 | 7.8.1.0 | EasyTier migrates an excessive number of cold extents to an overloaded nearline array |
HU01473 | 7.7.1.6 | EasyTier migrates an excessive number of cold extents to an overloaded nearline array |
HU01474 | 7.8.1.0 | Host writes to a read-only secondary volume trigger I/O timeout warmstarts |
HU01474 | 7.7.1.6 | Host writes to a read-only secondary volume trigger I/O timeout warmstarts |
HU01476 | 7.8.1.6 | A remote copy relationship may suffer a loss of synchronisation when the relationship is renamed |
HU01476 | 8.1.0.0 | A remote copy relationship may suffer a loss of synchronisation when the relationship is renamed |
HU01477 | 7.7.1.7 | Due to the way enclosure data is read it is possible for a firmware mismatch between nodes to occur during an upgrade |
HU01477 | 7.8.1.1 | Due to the way enclosure data is read it is possible for a firmware mismatch between nodes to occur during an upgrade |
HU01479 | 7.6.1.8 | The handling of drive reseats can sometimes allow I/O to occur before the drive has been correctly failed resulting in offline MDisks |
HU01479 | 7.8.1.0 | The handling of drive reseats can sometimes allow I/O to occur before the drive has been correctly failed resulting in offline MDisks |
HU01479 | 7.7.1.6 | The handling of drive reseats can sometimes allow I/O to occur before the drive has been correctly failed resulting in offline MDisks |
HU01480 | 7.8.1.0 | Under some circumstances the config node does not fail over properly when using IPv6 adversely affecting management access via GUI and CLI |
HU01480 | 7.6.1.8 | Under some circumstances the config node does not fail over properly when using IPv6 adversely affecting management access via GUI and CLI |
HU01480 | 7.7.1.6 | Under some circumstances the config node does not fail over properly when using IPv6 adversely affecting management access via GUI and CLI |
HU01481 | 7.8.1.3 | A failed I/O can trigger HyperSwap to unexpectedly change the direction of the relationship leading to node warmstarts |
HU01481 | 8.1.0.0 | A failed I/O can trigger HyperSwap to unexpectedly change the direction of the relationship leading to node warmstarts |
HU01483 | 7.8.1.0 | mkdistributedarray command may get stuck in the prepare state. Any interaction with the volumes in that array will result in multiple warmstarts |
HU01483 | 7.7.1.6 | mkdistributedarray command may get stuck in the prepare state. Any interaction with the volumes in that array will result in multiple warmstarts |
HU01484 | 7.7.1.7 | During a RAID array rebuild there may be node warmstarts |
HU01484 | 7.8.1.1 | During a RAID array rebuild there may be node warmstarts |
HU01485 | 7.8.1.9 | When an SV1 node is started with only one PSU powered, powering up the other PSU will not extinguish the Power Fault LED. Note: To apply this fix (in new BMC firmware) each node will need to be power cycled (i.e. remove AC power and battery), one at a time, after the upgrade has completed |
HU01485 | 8.1.3.6 | When an SV1 node is started with only one PSU powered, powering up the other PSU will not extinguish the Power Fault LED. Note: To apply this fix (in new BMC firmware) each node will need to be power cycled (i.e. remove AC power and battery), one at a time, after the upgrade has completed |
HU01485 | 8.2.1.4 | When an SV1 node is started with only one PSU powered, powering up the other PSU will not extinguish the Power Fault LED. Note: To apply this fix (in new BMC firmware) each node will need to be power cycled (i.e. remove AC power and battery), one at a time, after the upgrade has completed |
HU01487 | 7.7.1.6 | Small increase in read response time for source volumes with additional FlashCopy maps |
HU01487 | 7.8.1.0 | Small increase in read response time for source volumes with additional FlashCopy maps |
HU01488 | 7.8.0.0 | SAS transport errors on an enclosure slot have the potential to affect an adjacent slot leading to double drive failures |
HU01488 | 7.7.1.7 | SAS transport errors on an enclosure slot have the potential to affect an adjacent slot leading to double drive failures |
HU01488 | 7.6.1.8 | SAS transport errors on an enclosure slot have the potential to affect an adjacent slot leading to double drive failures |
HU01490 | 7.8.1.3 | When attempting to add/remove multiple IQNs to/from a host, the tables that record host-wwpn mappings can become inconsistent, resulting in repeated node warmstarts across I/O groups |
HU01490 | 8.1.0.0 | When attempting to add/remove multiple IQNs to/from a host, the tables that record host-wwpn mappings can become inconsistent, resulting in repeated node warmstarts across I/O groups |
HU01490 | 7.6.1.8 | When attempting to add/remove multiple IQNs to/from a host, the tables that record host-wwpn mappings can become inconsistent, resulting in repeated node warmstarts across I/O groups |
HU01490 | 7.7.1.7 | When attempting to add/remove multiple IQNs to/from a host, the tables that record host-wwpn mappings can become inconsistent, resulting in repeated node warmstarts across I/O groups |
HU01492 | 7.8.1.8 | All ports of a 16Gb HBA can be affected when a single port is congested. This can lead to lease expiries if all ports used for inter-node communication are on the same FC adapter |
HU01492 | 8.2.1.0 | All ports of a 16Gb HBA can be affected when a single port is congested. This can lead to lease expiries if all ports used for inter-node communication are on the same FC adapter |
HU01492 | 8.1.3.4 | All ports of a 16Gb HBA can be affected when a single port is congested. This can lead to lease expiries if all ports used for inter-node communication are on the same FC adapter |
HU02024 | 7.8.1.8 | All ports of a 16Gb HBA can be affected when a single port is congested. This can lead to lease expiries if all ports used for inter-node communication are on the same FC adapter |
HU02024 | 8.2.1.0 | All ports of a 16Gb HBA can be affected when a single port is congested. This can lead to lease expiries if all ports used for inter-node communication are on the same FC adapter |
HU02024 | 8.1.3.4 | All ports of a 16Gb HBA can be affected when a single port is congested. This can lead to lease expiries if all ports used for inter-node communication are on the same FC adapter |
HU01494 | 8.1.2.0 | A change to the FC port mask may fail even though connectivity would be sufficient |
HU01496 | 7.8.1.1 | SVC node type SV1 reports wrong FRU part number for compression accelerator |
HU01497 | 7.8.1.0 | A drive can still be offline even though the error is showing as corrected in the Event Log |
HU01498 | 7.8.1.0 | GUI may be exposed to CVE-2017-5638 (see Section 3.1) |
HU01498 | 7.5.0.13 | GUI may be exposed to CVE-2017-5638 (see Section 3.1) |
HU01498 | 7.7.1.6 | GUI may be exposed to CVE-2017-5638 (see Section 3.1) |
HU01499 | 7.6.1.7 | When an offline volume copy comes back online, under rare conditions, the flushing process can cause the cache to enter an invalid state, delaying I/O, and resulting in node warmstarts |
HU01500 | 7.7.1.6 | Node warmstarts can occur when the iSCSI Ethernet MTU is changed |
HU01503 | 7.8.1.1 | When the 3PAR host type is set to legacy, the round robin algorithm used to select the MDisk port for I/O submission to 3PAR controllers does not work correctly and I/O may be submitted to fewer controller ports, adversely affecting performance |
HU01503 | 7.6.1.8 | When the 3PAR host type is set to legacy, the round robin algorithm used to select the MDisk port for I/O submission to 3PAR controllers does not work correctly and I/O may be submitted to fewer controller ports, adversely affecting performance |
HU01505 | 7.6.1.8 | A non-redundant drive experiencing many errors can be taken offline obstructing rebuild activity |
HU01505 | 7.7.1.7 | A non-redundant drive experiencing many errors can be taken offline obstructing rebuild activity |
HU01505 | 7.8.1.1 | A non-redundant drive experiencing many errors can be taken offline obstructing rebuild activity |
HU01506 | 7.6.1.8 | Creating a volume copy with the -autodelete option can cause a timer scheduling issue leading to node warmstarts |
HU01506 | 8.1.0.0 | Creating a volume copy with the -autodelete option can cause a timer scheduling issue leading to node warmstarts |
HU01506 | 7.7.1.7 | Creating a volume copy with the -autodelete option can cause a timer scheduling issue leading to node warmstarts |
HU01507 | 8.1.3.6 | Until the initial synchronisation process completes, high system latency may be experienced when a volume is created with two compressed copies or when space-efficient copy is added to a volume with an existing compressed copy |
HU01507 | 7.8.1.8 | Until the initial synchronisation process completes, high system latency may be experienced when a volume is created with two compressed copies or when space-efficient copy is added to a volume with an existing compressed copy |
HU01507 | 8.2.1.0 | Until the initial synchronisation process completes, high system latency may be experienced when a volume is created with two compressed copies or when space-efficient copy is added to a volume with an existing compressed copy |
HU01509 | 8.1.0.0 | Where a drive is generating medium errors, an issue in the handling of array rebuilds can result in an MDisk group being repeatedly taken offline |
HU01512 | 8.1.1.0 | During a DRAID MDisk copy-back operation a miscalculation of the remaining work may cause a node warmstart |
HU01512 | 7.8.1.8 | During a DRAID MDisk copy-back operation a miscalculation of the remaining work may cause a node warmstart |
HU01516 | 7.7.1.1 | When node configuration data exceeds 8K in size, some user-defined settings may not be stored permanently, resulting in node warmstarts |
HU01519 | 7.7.1.7 | One PSU may silently fail leading to the possibility of a dual node reboot |
HU01519 | 7.8.0.0 | One PSU may silently fail leading to the possibility of a dual node reboot |
HU01520 | 7.8.1.1 | Where the system is being used as a secondary site for Remote Copy, the node may warmstart during an upgrade to v7.8.1 |
HU01521 | 8.1.0.0 | Remote Copy does not correctly handle STOP commands for relationships which may lead to node warmstarts |
HU01522 | 8.1.0.0 | A node warmstart may occur when a Fibre Channel frame is received with an unexpected value for host login type |
HU01523 | 8.2.1.0 | An issue with FC adapter initialisation can lead to a node warmstart |
HU01523 | 7.8.1.8 | An issue with FC adapter initialisation can lead to a node warmstart |
HU01523 | 8.2.0.0 | An issue with FC adapter initialisation can lead to a node warmstart |
HU01524 | 8.1.0.0 | When a system loses input power, nodes will shut down until power is restored. If a node was in the process of creating a bad block for an MDisk, at the moment it shuts down, then there is a chance that the system will hit repeated Tier 2 recoveries when it powers back up |
HU01524 | 7.8.1.6 | When a system loses input power, nodes will shut down until power is restored. If a node was in the process of creating a bad block for an MDisk, at the moment it shuts down, then there is a chance that the system will hit repeated Tier 2 recoveries when it powers back up |
HU01525 | 7.8.1.3 | During an upgrade a resource locking issue in the compression component can cause a node to warmstart multiple times and become unavailable |
HU01525 | 8.1.1.0 | During an upgrade a resource locking issue in the compression component can cause a node to warmstart multiple times and become unavailable |
HU01528 | 7.7.1.7 | Both nodes may warmstart due to Sendmail throttling |
HU01531 | 7.8.1.1 | Spectrum Control is unable to receive notifications from SVC/Storwize. Spectrum Control may experience an out-of-memory condition |
HU01535 | 7.8.1.3 | An issue with Fibre Channel driver handling of command processing can result in a node warmstart |
HU01545 | 8.1.0.0 | A locking issue in the stats collection process may result in a node warmstart |
HU01549 | 7.7.1.7 | During a system upgrade Hyper-V clustered hosts may experience a loss of access to any iSCSI connected volumes |
HU01549 | 7.6.1.8 | During a system upgrade Hyper-V clustered hosts may experience a loss of access to any iSCSI connected volumes |
HU01549 | 8.1.0.0 | During a system upgrade Hyper-V clustered hosts may experience a loss of access to any iSCSI connected volumes |
HU01549 | 7.8.1.3 | During a system upgrade Hyper-V clustered hosts may experience a loss of access to any iSCSI connected volumes |
HU01550 | 8.1.0.0 | Removing a volume with -force while it is still receiving I/O from a host may lead to a node warmstart |
HU01554 | 8.1.0.0 | Node warmstart may occur during a livedump collection |
HU01555 | 7.5.0.12 | The system may generate duplicate WWPNs |
HU01556 | 7.8.1.8 | The handling of memory pool usage by Remote Copy may lead to a node warmstart |
HU01556 | 8.1.0.0 | The handling of memory pool usage by Remote Copy may lead to a node warmstart |
HU01563 | 8.1.0.0 | Where an IBM SONAS host ID is used, it can under rare circumstances cause a warmstart |
HU01563 | 7.8.1.3 | Where an IBM SONAS host ID is used, it can under rare circumstances cause a warmstart |
HU01564 | 7.8.1.8 | The FlashCopy map cleaning process does not monitor grains correctly, which may cause FlashCopy maps to not stop |
HU01564 | 8.2.0.2 | The FlashCopy map cleaning process does not monitor grains correctly, which may cause FlashCopy maps to not stop |
HU01564 | 8.2.1.0 | The FlashCopy map cleaning process does not monitor grains correctly, which may cause FlashCopy maps to not stop |
HU01566 | 7.8.1.1 | After upgrading, numerous 1370 errors are seen in the Event Log |
HU01566 | 7.7.1.7 | After upgrading, numerous 1370 errors are seen in the Event Log |
HU01569 | 7.7.1.7 | When compression utilisation is high the config node may exhibit longer I/O response times than non-config nodes |
HU01569 | 7.8.1.3 | When compression utilisation is high the config node may exhibit longer I/O response times than non-config nodes |
HU01569 | 7.6.1.8 | When compression utilisation is high the config node may exhibit longer I/O response times than non-config nodes |
HU01569 | 8.1.0.0 | When compression utilisation is high the config node may exhibit longer I/O response times than non-config nodes |
HU01570 | 7.8.1.1 | Reseating a drive in an array may cause the MDisk to go offline |
HU01571 | 8.2.1.0 | An upgrade can become stalled due to a node warmstart |
HU01571 | 8.2.0.0 | An upgrade can become stalled due to a node warmstart |
HU01572 | 7.7.1.7 | SCSI 3 commands from unconfigured WWPNs may result in multiple warmstarts leading to a loss of access |
HU01572 | 7.8.1.8 | SCSI 3 commands from unconfigured WWPNs may result in multiple warmstarts leading to a loss of access |
HU01572 | 7.6.1.8 | SCSI 3 commands from unconfigured WWPNs may result in multiple warmstarts leading to a loss of access |
HU01572 | 8.1.0.0 | SCSI 3 commands from unconfigured WWPNs may result in multiple warmstarts leading to a loss of access |
HU01573 | 8.1.0.0 | Node warmstart due to a stats collection scheduling issue |
HU01579 | 7.8.1.8 | In systems where all drives are of type HUSMM80xx0ASS20 it will not be possible to assign a quorum drive |
HU01579 | 7.7.1.7 | In systems where all drives are of type HUSMM80xx0ASS20 it will not be possible to assign a quorum drive |
HU01579 | 8.1.0.0 | In systems where all drives are of type HUSMM80xx0ASS20 it will not be possible to assign a quorum drive |
HU01582 | 8.1.0.0 | A compression issue in IP replication can result in a node warmstart |
HU01582 | 7.7.1.7 | A compression issue in IP replication can result in a node warmstart |
HU01582 | 7.8.1.3 | A compression issue in IP replication can result in a node warmstart |
HU01583 | 8.1.0.0 | Running mkhostcluster with duplicate host names or IDs in the seedfromhost argument will cause a Tier 2 recovery |
HU01584 | 7.8.1.3 | An issue in array indexing can cause a RAID array to go offline repeatedly |
HU01584 | 8.1.0.0 | An issue in array indexing can cause a RAID array to go offline repeatedly |
HU01602 | 8.1.1.0 | When security scanners send garbage data to SVC/Storwize iSCSI target addresses a node warmstart may occur |
HU01609 | 7.7.1.7 | When the system is busy, the compression component may be paged out of memory resulting in latency that can lead to warmstarts |
HU01609 | 7.8.1.1 | When the system is busy, the compression component may be paged out of memory resulting in latency that can lead to warmstarts |
HU01609 | 7.6.1.8 | When the system is busy, the compression component may be paged out of memory resulting in latency that can lead to warmstarts |
IT15343 | 7.7.1.7 | When the system is busy, the compression component may be paged out of memory resulting in latency that can lead to warmstarts |
IT15343 | 7.8.1.1 | When the system is busy, the compression component may be paged out of memory resulting in latency that can lead to warmstarts |
IT15343 | 7.6.1.8 | When the system is busy, the compression component may be paged out of memory resulting in latency that can lead to warmstarts |
HU01610 | 8.1.0.0 | The handling of the background copy backlog by FlashCopy can cause latency for other unrelated FlashCopy maps |
HU01614 | 7.7.1.7 | After a node is upgraded hosts defined as TPGS may have paths set to inactive |
HU01614 | 7.8.1.3 | After a node is upgraded hosts defined as TPGS may have paths set to inactive |
HU01614 | 8.1.0.0 | After a node is upgraded hosts defined as TPGS may have paths set to inactive |
HU01615 | 8.1.0.0 | A timing issue relating to process communication can result in a node warmstart |
HU01617 | 8.1.3.6 | Due to a timing window issue, stopping a FlashCopy mapping, with the -autodelete option, may result in a Tier 2 recovery |
HU01617 | 7.8.1.9 | Due to a timing window issue, stopping a FlashCopy mapping, with the -autodelete option, may result in a Tier 2 recovery |
HU01617 | 8.2.1.0 | Due to a timing window issue, stopping a FlashCopy mapping, with the -autodelete option, may result in a Tier 2 recovery |
HU01618 | 8.1.1.0 | When using the charraymember CLI command, if a member ID is entered that is greater than the maximum number of members in a TRAID array then a T2 recovery will be initiated |
HU01619 | 7.8.1.6 | A misreading of the PSU register can lead to failure events being logged incorrectly |
HU01619 | 8.1.2.0 | A misreading of the PSU register can lead to failure events being logged incorrectly |
HU01619 | 8.1.1.2 | A misreading of the PSU register can lead to failure events being logged incorrectly |
HU01620 | 8.1.1.0 | Configuration changes can slow critical processes and, if this coincides with cloud account statistical data being adjusted, a Tier 2 recovery may occur |
HU01620 | 7.8.1.5 | Configuration changes can slow critical processes and, if this coincides with cloud account statistical data being adjusted, a Tier 2 recovery may occur |
HU01622 | 8.1.0.0 | If a Dense Drawer enclosure is put into maintenance mode during an upgrade of the enclosure management firmware then further upgrades to adjacent enclosures will be prevented |
HU01623 | 8.1.0.0 | An issue in the handling of inter-node communications can lead to latency for Remote Copy relationships |
HU01623 | 7.8.1.6 | An issue in the handling of inter-node communications can lead to latency for Remote Copy relationships |
HU01624 | 7.7.1.9 | GUI response can become very slow in systems with a large number of compressed and uncompressed volumes |
HU01624 | 7.8.1.3 | GUI response can become very slow in systems with a large number of compressed and uncompressed volumes |
HU01625 | 7.8.1.3 | In systems with a consistency group of HyperSwap or Metro Mirror relationships, if an upgrade attempts to commit whilst a relationship is out of sync then there may be multiple warmstarts and a Tier 2 recovery |
HU01626 | 8.1.0.0 | Node downgrade from v7.8.x to v7.7.1 or earlier (e.g. during an aborted upgrade) may prevent the node from rejoining the cluster. Systems that have already completed upgrade to v7.8.x are not affected by this issue |
HU01626 | 7.8.1.2 | Node downgrade from v7.8.x to v7.7.1 or earlier (e.g. during an aborted upgrade) may prevent the node from rejoining the cluster. Systems that have already completed upgrade to v7.8.x are not affected by this issue |
HU01628 | 7.8.1.6 | In the GUI on the Volumes page, whilst using the filter function, some volume entries may not be displayed until the page has completed loading |
HU01628 | 7.7.1.9 | In the GUI on the Volumes page, whilst using the filter function, some volume entries may not be displayed until the page has completed loading |
HU01630 | 8.1.0.0 | When a system with FlashCopy mappings is upgraded there may be multiple node warmstarts |
HU01630 | 7.8.1.6 | When a system with FlashCopy mappings is upgraded there may be multiple node warmstarts |
HU01631 | 7.8.1.3 | A memory leak in EasyTier when pools are in Balanced mode can lead to node warmstarts |
HU01631 | 8.1.0.0 | A memory leak in EasyTier when pools are in Balanced mode can lead to node warmstarts |
HU01632 | 8.1.1.0 | A congested fabric causes the Fibre Channel adapter firmware to abort I/O resulting in node warmstarts |
HU01632 | 7.8.1.3 | A congested fabric causes the Fibre Channel adapter firmware to abort I/O resulting in node warmstarts |
HU01633 | 8.1.1.0 | Even though synchronisation has completed, a RAID array may still show progress at 99% |
HU01635 | 7.7.1.7 | A slow memory leak in the host layer can lead to an out-of-memory condition resulting in offline volumes or performance degradation |
HU01635 | 7.8.0.0 | A slow memory leak in the host layer can lead to an out-of-memory condition resulting in offline volumes or performance degradation |
HU01636 | 7.8.1.3 | A connectivity issue with certain host SAS HBAs can prevent hosts from establishing stable communication with the storage controller |
HU01636 | 8.1.0.0 | A connectivity issue with certain host SAS HBAs can prevent hosts from establishing stable communication with the storage controller |
HU01636 | 7.7.1.7 | A connectivity issue with certain host SAS HBAs can prevent hosts from establishing stable communication with the storage controller |
HU01638 | 7.8.1.3 | When upgrading to v7.6 or later, if there is another cluster in the same zone which is at v5.1 or earlier then nodes will warmstart and the upgrade will fail |
HU01638 | 7.7.1.7 | When upgrading to v7.6 or later, if there is another cluster in the same zone which is at v5.1 or earlier then nodes will warmstart and the upgrade will fail |
HU01638 | 8.1.0.0 | When upgrading to v7.6 or later, if there is another cluster in the same zone which is at v5.1 or earlier then nodes will warmstart and the upgrade will fail |
HU01645 | 7.8.1.3 | After upgrading to v7.8 a reboot of a node will initiate a continual boot cycle |
HU01646 | 7.8.1.3 | A new failure mechanism in the 16Gb HBA driver can under certain circumstances lead to a lease expiry of the entire cluster |
HU01646 | 7.7.1.7 | A new failure mechanism in the 16Gb HBA driver can under certain circumstances lead to a lease expiry of the entire cluster |
HU01646 | 8.1.0.0 | A new failure mechanism in the 16Gb HBA driver can under certain circumstances lead to a lease expiry of the entire cluster |
HU01653 | 8.1.0.0 | An automatic Tier 3 recovery process may fail due to a RAID indexing issue |
HU01654 | 8.1.1.0 | There may be a node warmstart when a switch of direction, in a HyperSwap relationship, fails to complete properly |
HU01654 | 7.8.1.3 | There may be a node warmstart when a switch of direction, in a HyperSwap relationship, fails to complete properly |
HU01655 | 7.8.1.5 | The algorithm used to calculate an SSD's replacement date can sometimes produce incorrect results, leading to a premature End-of-Life error being reported |
HU01655 | 8.1.1.1 | The algorithm used to calculate an SSD's replacement date can sometimes produce incorrect results, leading to a premature End-of-Life error being reported |
HU01657 | 8.2.0.0 | The 16Gb FC HBA firmware may experience an issue, with the detection of unresponsive links, leading to a single node warmstart |
HU01657 | 7.8.1.8 | The 16Gb FC HBA firmware may experience an issue, with the detection of unresponsive links, leading to a single node warmstart |
HU01657 | 8.2.1.0 | The 16Gb FC HBA firmware may experience an issue, with the detection of unresponsive links, leading to a single node warmstart |
HU01657 | 8.1.3.4 | The 16Gb FC HBA firmware may experience an issue, with the detection of unresponsive links, leading to a single node warmstart |
HU01659 | 8.2.1.4 | Node Fault LED can be seen to flash in the absence of an error condition. Note: To apply this fix (in new BMC firmware) each node will need to be power cycled (i.e. remove AC power and battery), one at a time, after the upgrade has completed |
HU01659 | 7.8.1.9 | Node Fault LED can be seen to flash in the absence of an error condition. Note: To apply this fix (in new BMC firmware) each node will need to be power cycled (i.e. remove AC power and battery), one at a time, after the upgrade has completed |
HU01659 | 8.1.3.6 | Node Fault LED can be seen to flash in the absence of an error condition. Note: To apply this fix (in new BMC firmware) each node will need to be power cycled (i.e. remove AC power and battery), one at a time, after the upgrade has completed |
HU01661 | 8.1.3.4 | A cache-protection mechanism flag setting can become stuck leading to repeated stops of consistency group synchronisation |
HU01661 | 7.8.1.8 | A cache-protection mechanism flag setting can become stuck leading to repeated stops of consistency group synchronisation |
HU01661 | 8.2.1.0 | A cache-protection mechanism flag setting can become stuck leading to repeated stops of consistency group synchronisation |
HU01661 | 8.2.0.0 | A cache-protection mechanism flag setting can become stuck leading to repeated stops of consistency group synchronisation |
HU01664 | 7.8.1.6 | A timing window issue during an upgrade can cause the node restarting to warmstart stalling the upgrade |
HU01664 | 8.1.2.0 | A timing window issue during an upgrade can cause the node restarting to warmstart stalling the upgrade |
HU01664 | 8.1.1.2 | A timing window issue during an upgrade can cause the node restarting to warmstart stalling the upgrade |
HU01664 | 7.7.1.9 | A timing window issue during an upgrade can cause the node restarting to warmstart stalling the upgrade |
HU01665 | 8.1.0.1 | In environments where backend controllers are busy, the creation of a new filesystem with default settings on a Linux host, under conditions of parallel workloads, can overwhelm the capabilities of the backend storage MDisk group and lead to warmstarts due to hung I/O on multiple nodes |
HU01667 | 8.2.1.0 | A timing-window issue, in the remote copy component, may cause a node warmstart |
HU01667 | 8.2.0.0 | A timing-window issue, in the remote copy component, may cause a node warmstart |
HU01670 | 8.1.0.1 | Enabling RSA without a valid service IP address may cause multiple node warmstarts |
HU01671 | 8.1.1.0 | Metadata between two nodes in an I/O group can become out of step leaving one node unaware of work scheduled on its partner. This can lead to stuck array synchronisation and false 1691 events |
HU01673 | 8.1.0.1 | GUI rejects passwords that include special characters |
HU01675 | 7.8.1.0 | Memory allocation issues may cause GUI and I/O performance issues |
HU01678 | 8.1.1.0 | Entering an invalid parameter in the addvdiskaccess command may initiate a Tier 2 recovery |
HU01678 | 7.8.1.8 | Entering an invalid parameter in the addvdiskaccess command may initiate a Tier 2 recovery |
HU01679 | 8.1.0.0 | An issue in the RAID component can very occasionally cause a single node warmstart |
HU01679 | 7.8.1.5 | An issue in the RAID component can very occasionally cause a single node warmstart |
HU01687 | 7.7.1.9 | On the volumes by host, ports by host and volumes by pool pages in the GUI, when the number of items is greater than 50 the item name will not be displayed |
HU01687 | 7.8.1.5 | On the volumes by host, ports by host and volumes by pool pages in the GUI, when the number of items is greater than 50 the item name will not be displayed |
HU01688 | 8.1.1.0 | Unexpected used_virtualization figure in lslicense output after upgrade |
HU01697 | 7.8.1.6 | A timeout issue in RAID member management can lead to multiple node warmstarts |
HU01697 | 8.1.0.0 | A timeout issue in RAID member management can lead to multiple node warmstarts |
HU01698 | 8.1.1.0 | A node warmstart may occur when deleting a compressed volume if a host has written to the volume minutes before the volume is deleted |
HU01698 | 7.7.1.9 | A node warmstart may occur when deleting a compressed volume if a host has written to the volume minutes before the volume is deleted |
HU01698 | 7.8.1.6 | A node warmstart may occur when deleting a compressed volume if a host has written to the volume minutes before the volume is deleted |
HU01700 | 8.1.0.1 | If a thin-provisioned or compressed volume is deleted, and another volume is immediately created with the same real capacity, warmstarts may occur |
HU01701 | 8.1.1.0 | Following loss of all logins to an external controller, that is providing quorum, when the controller next logs in it will not be automatically used for quorum |
HU01704 | 7.8.1.5 | In systems using HyperSwap a rare timing window issue can result in a node warmstart |
HU01704 | 8.1.0.0 | In systems using HyperSwap a rare timing window issue can result in a node warmstart |
HU01706 | 7.7.1.8 | Areas of volumes written with all-zero data may contain non-zero data. For more details refer to this Flash |
HU01706 | 8.1.0.2 | Areas of volumes written with all-zero data may contain non-zero data. For more details refer to this Flash |
HU01706 | 7.8.1.4 | Areas of volumes written with all-zero data may contain non-zero data. For more details refer to this Flash |
HU01708 | 7.8.1.9 | A node removal operation during an array rebuild can cause a loss of parity data leading to bad blocks |
HU01708 | 8.1.3.0 | A node removal operation during an array rebuild can cause a loss of parity data leading to bad blocks |
HU01715 | 8.1.2.0 | Issuing a rmvolumecopy command followed by an expandvdisksize command may result in hung I/O leading to a node warmstart |
HU01715 | 7.8.1.8 | Issuing a rmvolumecopy command followed by an expandvdisksize command may result in hung I/O leading to a node warmstart |
HU01718 | 8.1.2.0 | Hung I/O due to issues on the inter-site links can lead to multiple node warmstarts |
HU01719 | 8.1.3.4 | Node warmstart due to a parity error in the HBA driver firmware |
HU01719 | 8.2.0.0 | Node warmstart due to a parity error in the HBA driver firmware |
HU01719 | 8.2.1.0 | Node warmstart due to a parity error in the HBA driver firmware |
HU01719 | 7.8.1.8 | Node warmstart due to a parity error in the HBA driver firmware |
HU01720 | 8.1.1.2 | An issue in the handling of compressed volume shrink operations, in the presence of EasyTier migrations, can cause DRAID MDisk timeouts leading to an offline MDisk group |
HU01720 | 8.1.2.0 | An issue in the handling of compressed volume shrink operations, in the presence of EasyTier migrations, can cause DRAID MDisk timeouts leading to an offline MDisk group |
HU01723 | 7.8.1.9 | A timing window issue, around nodes leaving and re-joining clusters, can lead to hung I/O and node warmstarts |
HU01723 | 8.1.2.0 | A timing window issue, around nodes leaving and re-joining clusters, can lead to hung I/O and node warmstarts |
HU01724 | 8.1.3.0 | An I/O lock handling issue between nodes can lead to a single node warmstart |
HU01724 | 7.8.1.5 | An I/O lock handling issue between nodes can lead to a single node warmstart |
HU01725 | 8.1.2.0 | Snap collection audit log selection filter can, incorrectly, skip some of the latest logs |
HU01726 | 7.8.1.8 | A slow RAID member drive in an MDisk may cause node warmstarts and the MDisk to go offline for a short time |
HU01726 | 8.1.1.0 | A slow RAID member drive in an MDisk may cause node warmstarts and the MDisk to go offline for a short time |
HU01727 | 8.1.2.0 | Due to a memory accounting issue an out of range access attempt will cause a node warmstart |
HU01729 | 7.8.1.5 | Remote copy uses multiple streams to send data between clusters. During a stream disconnect a node, unable to progress, may warmstart |
HU01729 | 8.1.0.0 | Remote copy uses multiple streams to send data between clusters. During a stream disconnect a node, unable to progress, may warmstart |
HU01730 | 7.8.1.5 | When running the DMP for a 1046 error the picture may not indicate the correct position of the failed adapter |
HU01730 | 7.7.1.9 | When running the DMP for a 1046 error the picture may not indicate the correct position of the failed adapter |
HU01730 | 8.1.1.1 | When running the DMP for a 1046 error the picture may not indicate the correct position of the failed adapter |
HU01731 | 7.8.1.5 | When a node is placed into service mode it is possible for all compression cards within the node to be marked as failed |
HU01733 | 8.2.1.0 | Canister information, for the High Density Expansion Enclosure, may be incorrectly reported |
HU01733 | 8.1.3.4 | Canister information, for the High Density Expansion Enclosure, may be incorrectly reported |
HU01733 | 7.8.1.8 | Canister information, for the High Density Expansion Enclosure, may be incorrectly reported |
HU01733 | 8.2.0.0 | Canister information, for the High Density Expansion Enclosure, may be incorrectly reported |
HU01735 | 7.8.1.8 | Multiple power failures can cause a RAID array to get into a stuck state leading to offline volumes |
HU01735 | 8.1.2.0 | Multiple power failures can cause a RAID array to get into a stuck state leading to offline volumes |
HU01736 | 8.1.0.0 | A single node warmstart may occur when the topology setting of the cluster is changed |
HU01737 | 7.8.1.10 | On the Update System screen, for Test Only, if a valid code image is selected, in the Run Update Test Utility dialog, then clicking the Test button will initiate a system update |
HU01737 | 8.1.3.6 | On the Update System screen, for Test Only, if a valid code image is selected, in the Run Update Test Utility dialog, then clicking the Test button will initiate a system update |
HU01737 | 8.2.0.0 | On the Update System screen, for Test Only, if a valid code image is selected, in the Run Update Test Utility dialog, then clicking the Test button will initiate a system update |
HU01737 | 8.2.1.0 | On the Update System screen, for Test Only, if a valid code image is selected, in the Run Update Test Utility dialog, then clicking the Test button will initiate a system update |
HU01740 | 8.1.1.2 | The timeout setting for key server commands may be too brief, when the server is busy, causing those commands to fail |
HU01740 | 8.1.2.0 | The timeout setting for key server commands may be too brief, when the server is busy, causing those commands to fail |
HU01740 | 7.8.1.6 | The timeout setting for key server commands may be too brief, when the server is busy, causing those commands to fail |
HU01743 | 8.2.1.0 | Where hosts are directly attached a mishandling of the login process, by the fabric controller, may result in dual node warmstarts |
HU01745 | 7.5.0.14 | testssl.sh identifies Logjam (CVE-2015-4000), fixed in v7.5.0.0, as a vulnerability |
HU01746 | 8.3.1.0 | Adding a volume copy may deactivate any associated MDisk throttling |
HU01747 | 7.8.1.6 | The incorrect detection of a cache issue can lead to a node warmstart |
HU01747 | 8.1.1.0 | The incorrect detection of a cache issue can lead to a node warmstart |
HU01750 | 8.1.2.0 | An issue in heartbeat handling between nodes can cause a node warmstart |
HU01751 | 8.1.3.0 | When RAID attempts to flag a strip as bad, and that strip has already been flagged, a node may warmstart |
HU01751 | 7.8.1.8 | When RAID attempts to flag a strip as bad, and that strip has already been flagged, a node may warmstart |
HU01752 | 8.1.3.0 | A problem with the way IBM FlashSystem FS900 handles SCSI WRITE SAME commands (without the Unmap bit set) can lead to port exclusions |
HU01756 | 8.1.2.0 | A scheduling issue may cause a config node warmstart |
HU01756 | 8.1.1.2 | A scheduling issue may cause a config node warmstart |
HU01758 | 8.2.0.0 | After an unexpected power loss, all nodes, in a cluster, may warmstart repeatedly, necessitating a Tier 3 recovery |
HU01758 | 8.2.1.0 | After an unexpected power loss, all nodes, in a cluster, may warmstart repeatedly, necessitating a Tier 3 recovery |
HU01760 | 8.1.3.4 | FlashCopy map progress appears to be stuck at zero percent |
HU01760 | 8.2.0.2 | FlashCopy map progress appears to be stuck at zero percent |
HU01760 | 7.8.1.8 | FlashCopy map progress appears to be stuck at zero percent |
HU01760 | 8.2.1.0 | FlashCopy map progress appears to be stuck at zero percent |
HU01761 | 8.1.3.6 | Entering multiple addmdisk commands, in rapid succession, to more than one storage pool, may cause node warmstarts |
HU01761 | 8.2.1.0 | Entering multiple addmdisk commands, in rapid succession, to more than one storage pool, may cause node warmstarts |
HU01761 | 8.2.0.0 | Entering multiple addmdisk commands, in rapid succession, to more than one storage pool, may cause node warmstarts |
HU01763 | 7.7.1.9 | A single node warmstart may occur on a DH8 config node when inventory email is created. The issue only occurs if this coincides with a very high rate of CLI commands and high I/O workload on the config node |
HU01763 | 8.1.1.1 | A single node warmstart may occur on a DH8 config node when inventory email is created. The issue only occurs if this coincides with a very high rate of CLI commands and high I/O workload on the config node |
HU01763 | 7.8.1.5 | A single node warmstart may occur on a DH8 config node when inventory email is created. The issue only occurs if this coincides with a very high rate of CLI commands and high I/O workload on the config node |
HU01765 | 8.2.1.0 | Node warmstart may occur when there is a delay to I/O at the secondary site |
HU01765 | 8.2.0.0 | Node warmstart may occur when there is a delay to I/O at the secondary site |
HU01767 | 8.1.2.0 | Reads of 4K/8K from an array can under exceptional circumstances return invalid data. For more details refer to this Flash |
HU01767 | 8.1.1.2 | Reads of 4K/8K from an array can under exceptional circumstances return invalid data. For more details refer to this Flash |
HU01767 | 7.7.1.9 | Reads of 4K/8K from an array can under exceptional circumstances return invalid data. For more details refer to this Flash |
HU01767 | 7.8.1.6 | Reads of 4K/8K from an array can under exceptional circumstances return invalid data. For more details refer to this Flash |
HU01767 | 7.5.0.14 | Reads of 4K/8K from an array can under exceptional circumstances return invalid data. For more details refer to this Flash |
HU01769 | 8.1.1.2 | Systems with DRAID arrays, with more than 131,072 extents, may experience multiple warmstarts due to a backend SCSI UNMAP issue |
HU01769 | 8.1.2.1 | Systems with DRAID arrays, with more than 131,072 extents, may experience multiple warmstarts due to a backend SCSI UNMAP issue |
HU01771 | 8.1.1.2 | An issue with the CMOS battery in a node can cause an unexpectedly large log file to be generated by the BMC. At log collection the node may be taken offline |
HU01771 | 7.8.1.6 | An issue with the CMOS battery in a node can cause an unexpectedly large log file to be generated by the BMC. At log collection the node may be taken offline |
HU01771 | 8.1.2.0 | An issue with the CMOS battery in a node can cause an unexpectedly large log file to be generated by the BMC. At log collection the node may be taken offline |
HU01771 | 7.7.1.9 | An issue with the CMOS battery in a node can cause an unexpectedly large log file to be generated by the BMC. At log collection the node may be taken offline |
HU01772 | 8.2.1.0 | The mail queue may become blocked preventing the transmission of event log messages |
HU01774 | 7.8.1.8 | After a failed mkhost command for an iSCSI host any I/O from that host will cause multiple warmstarts |
HU01774 | 8.1.3.0 | After a failed mkhost command for an iSCSI host any I/O from that host will cause multiple warmstarts |
HU01777 | 8.3.0.0 | Where not all I/O groups have NPIV enabled, hosts may be shown as Degraded with an incorrect count of node logins |
HU01778 | 8.1.3.4 | An issue, in the HBA adapter, is exposed where a switch port keeps the link active but does not respond to link resets resulting in a node warmstart |
HU01780 | 8.1.3.0 | Migrating a volume to an image-mode volume on controllers that support SCSI unmap will trigger repeated cluster recoveries |
HU01781 | 8.1.3.0 | An issue with workload balancing in the kernel scheduler can deprive some processes of the necessary resource to complete successfully, resulting in node warmstarts that may impact performance, with the possibility of a loss of access to volumes |
HU01781 | 7.8.1.10 | An issue with workload balancing in the kernel scheduler can deprive some processes of the necessary resource to complete successfully, resulting in node warmstarts that may impact performance, with the possibility of a loss of access to volumes |
HU01782 | 8.6.0.0 | A node warmstart may occur due to a potentially bad SAS hardware component on the system such as a SAS cable, SAS expander or SAS HIC |
HU01782 | 8.4.0.10 | A node warmstart may occur due to a potentially bad SAS hardware component on the system such as a SAS cable, SAS expander or SAS HIC |
HU01782 | 8.5.4.0 | A node warmstart may occur due to a potentially bad SAS hardware component on the system such as a SAS cable, SAS expander or SAS HIC |
HU01782 | 8.5.0.7 | A node warmstart may occur due to a potentially bad SAS hardware component on the system such as a SAS cable, SAS expander or SAS HIC |
HU01783 | 7.6.1.7 | Replacing a failed drive in a DRAID array, with a smaller drive, may result in multiple Tier 2 recoveries putting all nodes in service state with error 564 and/or 550 |
HU01783 | 7.8.0.0 | Replacing a failed drive in a DRAID array, with a smaller drive, may result in multiple Tier 2 recoveries putting all nodes in service state with error 564 and/or 550 |
HU01783 | 7.7.1.4 | Replacing a failed drive in a DRAID array, with a smaller drive, may result in multiple Tier 2 recoveries putting all nodes in service state with error 564 and/or 550 |
HU01784 | 8.2.1.0 | If a cluster using IP quorum experiences a site outage, the IP quorum device may become invalid. Restarting the quorum application will resolve the issue |
HU01785 | 7.8.1.7 | An issue with memory mapping may lead to multiple node warmstarts |
HU01786 | 7.8.1.8 | An issue in the monitoring of SSD write endurance can result in false 1215/2560 errors in the Event Log |
HU01786 | 8.2.1.0 | An issue in the monitoring of SSD write endurance can result in false 1215/2560 errors in the Event Log |
HU01786 | 8.2.0.0 | An issue in the monitoring of SSD write endurance can result in false 1215/2560 errors in the Event Log |
HU01786 | 8.1.3.4 | An issue in the monitoring of SSD write endurance can result in false 1215/2560 errors in the Event Log |
HU01790 | 7.8.1.8 | On the Create Volumes page the Accessible I/O Groups selection may not update when the Caching I/O group selection is changed |
HU01790 | 8.1.3.3 | On the Create Volumes page the Accessible I/O Groups selection may not update when the Caching I/O group selection is changed |
HU01791 | 8.2.0.0 | Using the chhost command will remove stored CHAP secrets |
HU01791 | 8.2.1.0 | Using the chhost command will remove stored CHAP secrets |
HU01791 | 8.1.3.4 | Using the chhost command will remove stored CHAP secrets |
HU01792 | 8.1.1.2 | When a DRAID array has multiple drive failures and the number of failed drives is greater than the number of rebuild areas in the array it is possible that the storage pool will be taken offline during the copyback phase of a rebuild. For more details refer to this Flash |
HU01792 | 8.1.2.1 | When a DRAID array has multiple drive failures and the number of failed drives is greater than the number of rebuild areas in the array it is possible that the storage pool will be taken offline during the copyback phase of a rebuild. For more details refer to this Flash |
HU01792 | 7.8.1.6 | When a DRAID array has multiple drive failures and the number of failed drives is greater than the number of rebuild areas in the array it is possible that the storage pool will be taken offline during the copyback phase of a rebuild. For more details refer to this Flash |
HU01793 | 7.8.1.8 | The Maximum final size value in the Expand Volume dialog can display an incorrect value preventing expansion |
HU01795 | 8.1.3.0 | A thread locking issue in the Remote Copy component may cause a node warmstart |
HU01796 | 8.3.1.0 | Battery Status LED may not illuminate |
HU01797 | 8.2.0.0 | Hitachi G1500 backend controllers may exhibit higher than expected latency |
HU01797 | 8.1.3.4 | Hitachi G1500 backend controllers may exhibit higher than expected latency |
HU01797 | 7.8.1.8 | Hitachi G1500 backend controllers may exhibit higher than expected latency |
HU01797 | 8.2.1.0 | Hitachi G1500 backend controllers may exhibit higher than expected latency |
HU01798 | 8.1.3.0 | Manual (user-paced) upgrade to v8.1.2 may invalidate hardened data putting all nodes in service state if they are shut down and then restarted. Automatic upgrade is not affected by this issue. For more details refer to this Flash |
HU01799 | 7.8.1.8 | Timing window issue can affect operation of the HyperSwap addvolumecopy command causing all nodes to warmstart |
HU01799 | 8.2.1.0 | Timing window issue can affect operation of the HyperSwap addvolumecopy command causing all nodes to warmstart |
HU01800 | 8.1.3.0 | Under some rare circumstance a node warmstart may occur whilst creating volumes in a Data Reduction Pool |
HU01801 | 8.1.3.0 | An issue in the handling of unmaps for MDisks can lead to a node warmstart |
HU01802 | 8.1.3.0 | USB encryption key can become inaccessible after upgrade. If the system is later rebooted then any encrypted volumes will be unavailable |
HU01802 | 7.8.1.7 | USB encryption key can become inaccessible after upgrade. If the system is later rebooted then any encrypted volumes will be unavailable |
HU01803 | 8.1.3.0 | The garbage collection process in Data Reduction Pool may become stalled resulting in no reclamation of free space from removed volumes |
HU01804 | 8.1.3.0 | During a system upgrade the processing required to upgrade the internal mapping between volumes and volume copies can lead to high latency impacting host I/O |
HU01807 | 8.2.1.0 | The lsfabric command may show incorrect local node id and local node name for some Fibre Channel logins |
HU01807 | 8.2.0.0 | The lsfabric command may show incorrect local node id and local node name for some Fibre Channel logins |
HU01809 | 8.1.3.0 | An issue in the handling of extent allocation in Data Reduction Pools can result in volumes being taken offline |
HU01810 | 8.2.1.0 | Deleting volumes, or using FlashCopy/Global Mirror with Change Volumes, in a Data Reduction Pool, may impact the performance of other volumes in the pool |
HU01811 | 8.2.0.0 | DRAID rebuilds, for large (>10TB) drives, may require lengthy metadata processing leading to a node warmstart |
HU01811 | 8.2.1.0 | DRAID rebuilds, for large (>10TB) drives, may require lengthy metadata processing leading to a node warmstart |
HU01813 | 7.8.1.8 | An issue with Global Mirror stream recovery handling at secondary sites can adversely impact replication performance |
HU01815 | 8.1.3.3 | In Data Reduction Pools, volume size is limited to 96TB |
HU01815 | 8.2.0.2 | In Data Reduction Pools, volume size is limited to 96TB |
HU01815 | 8.2.1.0 | In Data Reduction Pools, volume size is limited to 96TB |
HU01817 | 8.2.1.0 | Volumes used for vVols metadata or cloud backup, that are associated with a FlashCopy mapping, cannot be included in any further FlashCopy mappings |
HU01817 | 8.2.0.0 | Volumes used for vVols metadata or cloud backup, that are associated with a FlashCopy mapping, cannot be included in any further FlashCopy mappings |
HU01818 | 8.1.3.0 | Excessive debug logging in the Data Reduction Pools component can adversely impact system performance |
HU01820 | 8.1.3.0 | When an unusual I/O request pattern is received it is possible for the handling of Data Reduction Pool metadata to become stuck, leading to a node warmstart |
HU01821 | 8.1.3.4 | An attempt to upgrade a two-node enhanced stretched cluster fails due to incorrect volume dependencies |
HU01821 | 8.2.0.3 | An attempt to upgrade a two-node enhanced stretched cluster fails due to incorrect volume dependencies |
HU01821 | 8.2.1.0 | An attempt to upgrade a two-node enhanced stretched cluster fails due to incorrect volume dependencies |
HU01824 | 7.8.1.8 | Switching replication direction for HyperSwap relationships can lead to long I/O timeouts |
HU01824 | 8.1.3.4 | Switching replication direction for HyperSwap relationships can lead to long I/O timeouts |
HU01825 | 8.1.3.4 | Invoking a chrcrelationship command when one of the relationships in a consistency group is running in the opposite direction to the others may cause a node warmstart followed by a Tier 2 recovery |
HU01825 | 8.2.1.0 | Invoking a chrcrelationship command when one of the relationships in a consistency group is running in the opposite direction to the others may cause a node warmstart followed by a Tier 2 recovery |
HU01825 | 7.8.1.8 | Invoking a chrcrelationship command when one of the relationships in a consistency group is running in the opposite direction to the others may cause a node warmstart followed by a Tier 2 recovery |
HU01828 | 8.1.3.3 | Node warmstarts may occur during deletion of deduplicated volumes due to a timing-related issue |
HU01828 | 8.2.1.0 | Node warmstarts may occur during deletion of deduplicated volumes due to a timing-related issue |
HU01828 | 8.2.0.2 | Node warmstarts may occur during deletion of deduplicated volumes due to a timing-related issue |
HU01829 | 8.1.3.1 | An issue in statistical data collection can prevent EasyTier from working with Data Reduction Pools |
HU01830 | 7.8.1.11 | Missing security-enhancing HTTP response headers |
HU01830 | 8.1.3.0 | Missing security-enhancing HTTP response headers |
HU01831 | 7.8.0.0 | Cluster-wide warmstarts may occur when the SAN delivers a FDISC frame with an invalid WWPN |
HU01832 | 8.2.1.0 | Creation and distribution of the config file may cause an out-of-memory condition, leading to a node warmstart |
HU01832 | 7.8.1.12 | Creation and distribution of the config file may cause an out-of-memory condition, leading to a node warmstart |
HU01833 | 8.1.3.4 | If both nodes in an I/O group start up together, a timing window issue may occur that prevents them from running garbage collection, leading to a related Data Reduction Pool running out of space |
HU01833 | 8.2.1.0 | If both nodes in an I/O group start up together, a timing window issue may occur that prevents them from running garbage collection, leading to a related Data Reduction Pool running out of space |
HU01835 | 8.1.3.1 | Multiple warmstarts may be experienced due to an issue with Data Reduction Pool garbage collection where data for a volume is detected after the volume itself has been removed |
HU01836 | 7.8.1.11 | When an auxiliary volume is moved an issue with pausing the master volume can lead to node warmstarts |
HU01836 | 8.2.1.8 | When an auxiliary volume is moved an issue with pausing the master volume can lead to node warmstarts |
HU01836 | 8.3.0.0 | When an auxiliary volume is moved an issue with pausing the master volume can lead to node warmstarts |
HU01837 | 8.1.3.2 | In systems where a vVols metadata volume has been created an upgrade to v8.1.3 or later will cause a node warmstart stalling the upgrade |
HU01837 | 8.2.1.0 | In systems where a vVols metadata volume has been created an upgrade to v8.1.3 or later will cause a node warmstart stalling the upgrade |
HU01839 | 7.8.1.8 | Where a VMware host is being served volumes from two different controllers, and an issue on one controller causes the related volumes to be taken offline, I/O performance for the volumes from the other controller will be adversely affected |
HU01839 | 8.2.1.0 | Where a VMware host is being served volumes from two different controllers, and an issue on one controller causes the related volumes to be taken offline, I/O performance for the volumes from the other controller will be adversely affected |
HU01839 | 8.1.3.4 | Where a VMware host is being served volumes from two different controllers, and an issue on one controller causes the related volumes to be taken offline, I/O performance for the volumes from the other controller will be adversely affected |
HU01840 | 8.1.3.1 | When removing large numbers of volumes each with multiple copies it is possible to hit a timeout condition leading to warmstarts |
HU01842 | 8.2.1.0 | Bursts of I/O to Read-Intensive Drives can be interpreted as dropped frames against the resident slots, leading to redundant drives being incorrectly failed |
HU01842 | 8.1.3.4 | Bursts of I/O to Read-Intensive Drives can be interpreted as dropped frames against the resident slots, leading to redundant drives being incorrectly failed |
HU01842 | 7.8.1.8 | Bursts of I/O to Read-Intensive Drives can be interpreted as dropped frames against the resident slots, leading to redundant drives being incorrectly failed |
HU01843 | 8.3.0.0 | A node hardware issue can cause a CLI command to timeout resulting in a node warmstart |
HU01843 | 8.2.1.6 | A node hardware issue can cause a CLI command to timeout resulting in a node warmstart |
HU01845 | 8.2.1.0 | If the execution of a rmvdisk -force command, for the FlashCopy target volume in a GMCV relationship, coincides with the start of a GMCV cycle all nodes may warmstart |
HU01846 | 8.2.1.0 | Silent battery discharge condition will unexpectedly take an SVC node offline putting it into a 572 service state |
HU01846 | 7.8.1.8 | Silent battery discharge condition will unexpectedly take an SVC node offline putting it into a 572 service state |
HU01846 | 8.1.3.4 | Silent battery discharge condition will unexpectedly take an SVC node offline putting it into a 572 service state |
HU01847 | 8.2.0.2 | FlashCopy handling of medium errors across a number of drives on backend controllers may lead to multiple node warmstarts |
HU01847 | 8.1.3.3 | FlashCopy handling of medium errors across a number of drives on backend controllers may lead to multiple node warmstarts |
HU01847 | 7.8.1.8 | FlashCopy handling of medium errors across a number of drives on backend controllers may lead to multiple node warmstarts |
HU01847 | 8.2.1.0 | FlashCopy handling of medium errors across a number of drives on backend controllers may lead to multiple node warmstarts |
HU01848 | 8.2.1.0 | During an upgrade, systems with a large AIX VIOS setup may have multiple node warmstarts with the possibility of a loss of access to data |
HU01848 | 8.2.0.0 | During an upgrade, systems with a large AIX VIOS setup may have multiple node warmstarts with the possibility of a loss of access to data |
HU01849 | 8.2.0.3 | An excessive number of SSH sessions may lead to a node warmstart |
HU01849 | 7.8.1.9 | An excessive number of SSH sessions may lead to a node warmstart |
HU01849 | 8.1.3.4 | An excessive number of SSH sessions may lead to a node warmstart |
HU01849 | 8.2.1.0 | An excessive number of SSH sessions may lead to a node warmstart |
HU01850 | 8.1.3.3 | When the last deduplication-enabled volume copy in a Data Reduction Pool is deleted the pool may go offline temporarily |
HU01850 | 8.2.0.2 | When the last deduplication-enabled volume copy in a Data Reduction Pool is deleted the pool may go offline temporarily |
HU01850 | 8.2.1.0 | When the last deduplication-enabled volume copy in a Data Reduction Pool is deleted the pool may go offline temporarily |
HU01851 | 8.2.1.0 | When a deduplicated volume is deleted there may be multiple node warmstarts and offline pools |
HU01851 | 8.2.0.1 | When a deduplicated volume is deleted there may be multiple node warmstarts and offline pools |
HU01851 | 8.1.3.2 | When a deduplicated volume is deleted there may be multiple node warmstarts and offline pools |
HU01852 | 8.1.3.3 | The garbage collection rate can lead to Data Reduction Pools running out of space even though reclaimable capacity is available |
HU01852 | 8.2.1.0 | The garbage collection rate can lead to Data Reduction Pools running out of space even though reclaimable capacity is available |
HU01852 | 8.2.0.2 | The garbage collection rate can lead to Data Reduction Pools running out of space even though reclaimable capacity is available |
HU01853 | 8.1.3.0 | In a Data Reduction Pool, it is possible for metadata to be assigned incorrect values leading to offline managed disk groups |
HU01855 | 8.2.1.0 | Clusters using Data Reduction Pools can experience multiple warmstarts on all nodes putting them in a service state |
HU01855 | 8.1.3.4 | Clusters using Data Reduction Pools can experience multiple warmstarts on all nodes putting them in a service state |
HU01856 | 8.2.0.0 | A garbage collection process can time out waiting for an event in the partner node resulting in a node warmstart |
HU01856 | 8.1.3.3 | A garbage collection process can time out waiting for an event in the partner node resulting in a node warmstart |
HU01856 | 8.2.1.0 | A garbage collection process can time out waiting for an event in the partner node resulting in a node warmstart |
HU01857 | 8.1.3.6 | Improved validation of user input in GUI |
HU01857 | 8.2.1.4 | Improved validation of user input in GUI |
HU01858 | 8.2.0.2 | Total used capacity of a Data Reduction Pool within a single I/O group is limited to 256TB. Garbage collection does not correctly recognise this limit. This may lead to a pool running out of free capacity and going offline |
HU01858 | 8.2.1.0 | Total used capacity of a Data Reduction Pool within a single I/O group is limited to 256TB. Garbage collection does not correctly recognise this limit. This may lead to a pool running out of free capacity and going offline |
HU01858 | 8.1.3.3 | Total used capacity of a Data Reduction Pool within a single I/O group is limited to 256TB. Garbage collection does not correctly recognise this limit. This may lead to a pool running out of free capacity and going offline |
HU01860 | 8.2.1.4 | During garbage collection the flushing of extents may become stuck leading to a timeout and a single node warmstart |
HU01860 | 8.1.3.6 | During garbage collection the flushing of extents may become stuck leading to a timeout and a single node warmstart |
HU01862 | 8.1.3.4 | When a Data Reduction Pool is removed, and the -force option is specified, there may be a temporary loss of access |
HU01862 | 8.2.0.3 | When a Data Reduction Pool is removed, and the -force option is specified, there may be a temporary loss of access |
HU01862 | 8.2.1.0 | When a Data Reduction Pool is removed, and the -force option is specified, there may be a temporary loss of access |
HU01863 | 8.2.1.0 | In rare circumstances, a drive replacement may result in a ghost drive (i.e. a drive with the same ID as the replaced drive stuck in a permanently offline state) |
HU01863 | 7.8.1.11 | In rare circumstances, a drive replacement may result in a ghost drive (i.e. a drive with the same ID as the replaced drive stuck in a permanently offline state) |
HU01865 | 7.8.1.9 | When creating a HyperSwap relationship, using addvolumecopy (or similar methods), the system should perform a synchronisation operation to copy the data from the original copy to the new copy. In some rare cases this synchronisation is skipped, leaving the new copy with bad data (all zeros) |
HU01865 | 8.2.1.4 | When creating a HyperSwap relationship, using addvolumecopy (or similar methods), the system should perform a synchronisation operation to copy the data from the original copy to the new copy. In some rare cases this synchronisation is skipped, leaving the new copy with bad data (all zeros) |
HU01865 | 8.1.3.6 | When creating a HyperSwap relationship, using addvolumecopy (or similar methods), the system should perform a synchronisation operation to copy the data from the original copy to the new copy. In some rare cases this synchronisation is skipped, leaving the new copy with bad data (all zeros) |
HU01866 | 7.7.1.9 | A faulty PSU sensor, in a node, can fill the sel log causing the service processor (BMC) to disable logging. If a snap is subsequently taken, from the node, a timeout will occur and it will be taken offline. It is possible for this to affect both nodes in an I/O group |
HU01866 | 8.1.2.0 | A faulty PSU sensor, in a node, can fill the sel log causing the service processor (BMC) to disable logging. If a snap is subsequently taken, from the node, a timeout will occur and it will be taken offline. It is possible for this to affect both nodes in an I/O group |
HU01866 | 7.8.1.6 | A faulty PSU sensor, in a node, can fill the sel log causing the service processor (BMC) to disable logging. If a snap is subsequently taken, from the node, a timeout will occur and it will be taken offline. It is possible for this to affect both nodes in an I/O group |
HU01867 | 8.1.3.0 | Expansion of a volume may fail due to an issue with accounting of physical capacity. All nodes will warmstart in order to clear the problem. The expansion may be triggered by writing data to a thin-provisioned or compressed volume. |
HU01868 | 8.3.0.0 | After deleting an encrypted external MDisk, it is possible for the encrypted status of volumes to change to no, even though all remaining MDisks are encrypted |
HU01868 | 8.2.1.11 | After deleting an encrypted external MDisk, it is possible for the encrypted status of volumes to change to no, even though all remaining MDisks are encrypted |
HU01868 | 7.8.1.12 | After deleting an encrypted external MDisk, it is possible for the encrypted status of volumes to change to no, even though all remaining MDisks are encrypted |
HU01869 | 8.2.1.4 | Volume copy deletion, in a Data Reduction Pool, triggered by rmvdiskcopy, rmvolumecopy or addvdiskcopy -autodelete (or similar) may become stalled with the copy being left in deleting status |
HU01869 | 8.1.3.6 | Volume copy deletion, in a Data Reduction Pool, triggered by rmvdiskcopy, rmvolumecopy or addvdiskcopy -autodelete (or similar) may become stalled with the copy being left in deleting status |
HU01870 | 8.1.3.3 | LDAP server communication fails with SSL or TLS security configured |
HU01871 | 8.2.1.0 | An issue with bitmap synchronisation can lead to a node warmstart |
HU01872 | 8.3.0.0 | An issue with cache partition fairness can favour small IOs over large ones leading to a node warmstart |
HU01873 | 8.1.3.4 | Deleting a volume, in a Data Reduction Pool, while volume protection is enabled and when the volume was not explicitly unmapped, before deletion, may result in simultaneous node warmstarts. For more details refer to this Flash |
HU01873 | 8.2.1.0 | Deleting a volume, in a Data Reduction Pool, while volume protection is enabled and when the volume was not explicitly unmapped, before deletion, may result in simultaneous node warmstarts. For more details refer to this Flash |
HU01876 | 8.1.3.6 | Where systems are connected to controllers, that have FC ports that are capable of acting as initiators and targets, when NPIV is enabled then node warmstarts can occur |
HU01876 | 8.2.0.3 | Where systems are connected to controllers, that have FC ports that are capable of acting as initiators and targets, when NPIV is enabled then node warmstarts can occur |
HU01876 | 8.2.1.0 | Where systems are connected to controllers, that have FC ports that are capable of acting as initiators and targets, when NPIV is enabled then node warmstarts can occur |
HU01876 | 7.8.1.9 | Where systems are connected to controllers, that have FC ports that are capable of acting as initiators and targets, when NPIV is enabled then node warmstarts can occur |
HU01877 | 8.1.3.0 | Where a volume is being expanded, and the additional capacity is to be formatted, the creation of a related volume copy may result in multiple warmstarts and a potential loss of access to data |
HU01878 | 8.1.3.4 | During an upgrade from v7.8.1 or earlier to v8.1.3 or later if an MDisk goes offline then at completion all volumes may go offline |
HU01878 | 8.2.1.0 | During an upgrade from v7.8.1 or earlier to v8.1.3 or later if an MDisk goes offline then at completion all volumes may go offline |
HU01879 | 8.2.1.0 | Latency induced by DWDM inter-site links may result in a node warmstart |
HU01880 | 8.3.0.0 | When a write, to a secondary volume, becomes stalled, a node at the primary site may warmstart |
HU01880 | 8.2.1.8 | When a write, to a secondary volume, becomes stalled, a node at the primary site may warmstart |
HU01881 | 8.2.0.2 | An issue within the compression card in FS9100 systems can result in the card being incorrectly flagged as failed leading to warmstarts |
HU01881 | 8.2.1.0 | An issue within the compression card in FS9100 systems can result in the card being incorrectly flagged as failed leading to warmstarts |
HU01883 | 8.2.1.0 | Config node processes may consume all available memory, leading to node warmstarts. This can be caused, for example, by large numbers of concurrent SSH connections being opened |
HU01885 | 8.1.3.4 | As writes are made to a Data Reduction Pool it is necessary to allocate new physical capacity. Under unusual circumstances it is possible for the handling of an expansion request to stall further I/O leading to node warmstarts |
HU01885 | 8.2.0.3 | As writes are made to a Data Reduction Pool it is necessary to allocate new physical capacity. Under unusual circumstances it is possible for the handling of an expansion request to stall further I/O leading to node warmstarts |
HU01885 | 8.2.1.0 | As writes are made to a Data Reduction Pool it is necessary to allocate new physical capacity. Under unusual circumstances it is possible for the handling of an expansion request to stall further I/O leading to node warmstarts |
HU01886 | 8.2.1.4 | The Unmap function can leave volume extents, that have not been freed, preventing managed disk and pool removal |
HU01886 | 8.1.3.6 | The Unmap function can leave volume extents, that have not been freed, preventing managed disk and pool removal |
HU01887 | 7.8.1.11 | In circumstances where host configuration data becomes inconsistent, across nodes, an issue in the CLI policing code may cause multiple warmstarts |
HU01887 | 8.1.3.6 | In circumstances where host configuration data becomes inconsistent, across nodes, an issue in the CLI policing code may cause multiple warmstarts |
HU01887 | 8.2.1.4 | In circumstances where host configuration data becomes inconsistent, across nodes, an issue in the CLI policing code may cause multiple warmstarts |
HU01888 | 8.1.3.6 | An issue with restore mappings, in the FlashCopy component, can cause an I/O group to warmstart |
HU01888 | 8.3.0.0 | An issue with restore mappings, in the FlashCopy component, can cause an I/O group to warmstart |
HU01888 | 7.8.1.10 | An issue with restore mappings, in the FlashCopy component, can cause an I/O group to warmstart |
HU01888 | 8.2.1.6 | An issue with restore mappings, in the FlashCopy component, can cause an I/O group to warmstart |
HU01997 | 8.1.3.6 | An issue with restore mappings, in the FlashCopy component, can cause an I/O group to warmstart |
HU01997 | 8.3.0.0 | An issue with restore mappings, in the FlashCopy component, can cause an I/O group to warmstart |
HU01997 | 7.8.1.10 | An issue with restore mappings, in the FlashCopy component, can cause an I/O group to warmstart |
HU01997 | 8.2.1.6 | An issue with restore mappings, in the FlashCopy component, can cause an I/O group to warmstart |
HU01890 | 8.3.1.0 | FlashCopy mappings, from master volume to primary change volume, may become stalled when a T2 recovery occurs whilst the mappings are in a copying state |
HU01890 | 8.2.1.6 | FlashCopy mappings, from master volume to primary change volume, may become stalled when a T2 recovery occurs whilst the mappings are in a copying state |
HU01891 | 8.3.1.0 | An issue in DRAID grain process scheduling can lead to a duplicate entry condition that is cleared by a node warmstart |
HU01892 | 7.8.1.11 | LUNs of greater than 2TB, presented by HP XP7 storage controllers, are not supported |
HU01892 | 8.3.0.0 | LUNs of greater than 2TB, presented by HP XP7 storage controllers, are not supported |
HU01892 | 8.2.1.6 | LUNs of greater than 2TB, presented by HP XP7 storage controllers, are not supported |
HU01893 | 8.2.1.0 | Excessive reporting frequency of NVMe drive diagnostics generates large numbers of callhome events |
HU01894 | 8.2.1.11 | After node reboot, or warmstart, some volumes accessed by AIX, VIO or VMware hosts may experience stuck SCSI2 reservations on the NPIV failover ports of the partner node. This can cause a loss of access to data |
HU01894 | 8.3.1.0 | After node reboot, or warmstart, some volumes accessed by AIX, VIO or VMware hosts may experience stuck SCSI2 reservations on the NPIV failover ports of the partner node. This can cause a loss of access to data |
HU01895 | 8.2.1.0 | Where a banner has been created, without a new line at the end, any subsequent T4 recovery will fail |
HU01899 | 7.8.1.8 | In a HyperSwap cluster, when the primary I/O group has a dead domain, nodes will repeatedly warmstart |
HU01900 | 8.2.1.4 | Executing a command, that can result in a shared mapping being created or destroyed, for an individual host, in a host cluster, without that command applying to all hosts in the host cluster, may lead to multiple node warmstarts with the possibility of a T2 recovery |
HU01901 | 8.2.1.0 | Enclosure management firmware, in an expansion enclosure, will reset a canister after a certain number of discovery requests have been received, from the controller, for that canister. It is possible simultaneous resets may occur in adjacent canisters causing a temporary loss of access to data |
HU01902 | 7.8.1.8 | During an upgrade, an issue with VPD migration, can cause a timeout leading to a stalled upgrade |
HU01902 | 8.1.3.4 | During an upgrade, an issue with VPD migration, can cause a timeout leading to a stalled upgrade |
HU01902 | 8.2.1.4 | During an upgrade, an issue with VPD migration, can cause a timeout leading to a stalled upgrade |
HU01904 | 8.3.0.0 | A timing issue can cause a remote copy relationship to become stuck, in a pausing state, resulting in a node warmstart |
HU01906 | 8.2.1.0 | Low-level hardware errors may not be recovered correctly, causing a canister to reboot. If multiple canisters reboot, this may result in loss of access to data |
HU01906 | 8.2.0.3 | Low-level hardware errors may not be recovered correctly, causing a canister to reboot. If multiple canisters reboot, this may result in loss of access to data |
HU01907 | 7.8.1.9 | An issue in the handling of the power cable sense registers can cause a node to be put into service state with a 560 error |
HU01907 | 8.1.3.4 | An issue in the handling of the power cable sense registers can cause a node to be put into service state with a 560 error |
HU01907 | 8.2.1.0 | An issue in the handling of the power cable sense registers can cause a node to be put into service state with a 560 error |
HU01909 | 8.3.0.0 | Upgrading a system with Read-Intensive drives to 8.2, or later, may result in node warmstarts |
HU01910 | 8.1.3.6 | When FlashCopy mappings are created, with a grain size of 64KB, it is possible for an overflow condition in the bitmap to occur. This can result in multiple node warmstarts with a possible loss of access to data |
HU01910 | 8.2.1.4 | When FlashCopy mappings are created, with a grain size of 64KB, it is possible for an overflow condition in the bitmap to occur. This can result in multiple node warmstarts with a possible loss of access to data |
HU01911 | 8.2.1.4 | The System Overview screen, in the GUI, may display nodes in the wrong site |
HU01912 | 8.2.1.4 | Systems with iSCSI-attached controllers may see node warmstarts due to I/O request timeouts |
HU01913 | 8.2.0.0 | A timing window issue in the DRAID6 rebuild process can cause node warmstarts with the possibility of a loss of access |
HU01913 | 8.2.1.0 | A timing window issue in the DRAID6 rebuild process can cause node warmstarts with the possibility of a loss of access |
HU01913 | 8.1.3.6 | A timing window issue in the DRAID6 rebuild process can cause node warmstarts with the possibility of a loss of access |
HU01913 | 7.8.1.9 | A timing window issue in the DRAID6 rebuild process can cause node warmstarts with the possibility of a loss of access |
HU01915 | 8.2.1.4 | Systems, with encryption enabled, that are using key servers to manage encryption keys, may fail to connect to the key servers if the servers' SSL certificates are part of a chain of trust |
HU01915 | 8.1.3.6 | Systems, with encryption enabled, that are using key servers to manage encryption keys, may fail to connect to the key servers if the servers' SSL certificates are part of a chain of trust |
HU01915 | 7.8.1.10 | Systems, with encryption enabled, that are using key servers to manage encryption keys, may fail to connect to the key servers if the servers' SSL certificates are part of a chain of trust |
IT28654 | 8.2.1.4 | Systems, with encryption enabled, that are using key servers to manage encryption keys, may fail to connect to the key servers if the servers' SSL certificates are part of a chain of trust |
IT28654 | 8.1.3.6 | Systems, with encryption enabled, that are using key servers to manage encryption keys, may fail to connect to the key servers if the servers' SSL certificates are part of a chain of trust |
IT28654 | 7.8.1.10 | Systems, with encryption enabled, that are using key servers to manage encryption keys, may fail to connect to the key servers if the servers' SSL certificates are part of a chain of trust |
HU01916 | 8.2.1.4 | The GUI Dashboard and the CLI lssystem command report physical capacity incorrectly |
HU01916 | 8.1.3.6 | The GUI Dashboard and the CLI lssystem command report physical capacity incorrectly |
HU01917 | 7.8.1.12 | Chrome browser support requires a self-signed certificate to include subject alternate name |
HU01917 | 8.3.0.0 | Chrome browser support requires a self-signed certificate to include subject alternate name |
HU01917 | 8.2.1.11 | Chrome browser support requires a self-signed certificate to include subject alternate name |
HU01918 | 8.2.1.4 | Where Data Reduction Pools have been created on earlier code levels, upgrading the system, to an affected release, can cause an increase in the level of concurrent flushing to disk. This may result in a loss of access to data. For more details refer to this Flash |
HU01918 | 8.2.0.4 | Where Data Reduction Pools have been created on earlier code levels, upgrading the system, to an affected release, can cause an increase in the level of concurrent flushing to disk. This may result in a loss of access to data. For more details refer to this Flash |
HU01918 | 8.1.3.5 | Where Data Reduction Pools have been created on earlier code levels, upgrading the system, to an affected release, can cause an increase in the level of concurrent flushing to disk. This may result in a loss of access to data. For more details refer to this Flash |
HU01919 | 8.3.0.0 | During an upgrade some components may take too long to initialise causing node warmstarts |
HU01920 | 8.2.1.1 | An issue in the garbage collection process can cause node warmstarts and offline pools |
HU01920 | 8.1.3.5 | An issue in the garbage collection process can cause node warmstarts and offline pools |
HU01920 | 8.2.0.4 | An issue in the garbage collection process can cause node warmstarts and offline pools |
HU01921 | 8.2.1.11 | Where FlashCopy mapping targets are also in remote copy relationships there may be node warmstarts with a temporary loss of access to data |
HU01921 | 8.3.0.0 | Where FlashCopy mapping targets are also in remote copy relationships there may be node warmstarts with a temporary loss of access to data |
HU01923 | 8.3.1.0 | An issue in the way Global Mirror handles write sequence numbers >512 may cause multiple node warmstarts |
HU01923 | 8.2.1.11 | An issue in the way Global Mirror handles write sequence numbers >512 may cause multiple node warmstarts |
HU01923 | 7.8.1.11 | An issue in the way Global Mirror handles write sequence numbers >512 may cause multiple node warmstarts |
HU01924 | 8.2.1.11 | Migrating extents to an MDisk, that is not a member of an MDisk group, may result in a Tier 2 recovery |
HU01924 | 8.3.0.1 | Migrating extents to an MDisk, that is not a member of an MDisk group, may result in a Tier 2 recovery |
HU01925 | 8.2.1.4 | Systems will incorrectly report offline and unresponsive NVMe drives after an I/O group outage. These errors will fail to auto-fix and must be manually marked as fixed |
HU01926 | 8.2.1.4 | When a node, with 32GB of RAM, is upgraded to v8.2.1 it may experience a warmstart resulting in a failed upgrade |
HU01928 | 8.2.1.4 | When two I/Os attempt to access the same address, the state of the data may be incorrectly set to invalid causing offline volumes and, possibly, offline pools |
HU01928 | 8.1.3.6 | When two I/Os attempt to access the same address, the state of the data may be incorrectly set to invalid causing offline volumes and, possibly, offline pools |
HU01929 | 8.2.1.4 | Drive fault type 3 (error code 1686) may be seen in the Event Log for empty slots |
HU01930 | 8.2.1.4 | Certain types of FlashCore Module (FCM) failure may not result in a call home, delaying the shipment of a replacement |
HU01931 | 8.3.1.2 | Where a high rate of CLI commands is received, it is possible for inter-node processing code to be delayed, which results in a small increase in receive queue time on the config node |
HU01931 | 8.2.1.11 | Where a high rate of CLI commands is received, it is possible for inter-node processing code to be delayed, which results in a small increase in receive queue time on the config node |
HU01932 | 8.2.1.2 | When a rmvdisk command initiates a Data Reduction Pool rehoming process any I/O to the removed volume may cause multiple warmstarts leading to a loss of access |
HU01933 | 8.2.1.6 | Under rare circumstances the Data Reduction Pool deduplication rehoming process can become truncated. Subsequent detection of inconsistent metadata can lead to offline Data Reduction Pools |
HU01933 | 8.3.0.0 | Under rare circumstances the Data Reduction Pool deduplication rehoming process can become truncated. Subsequent detection of inconsistent metadata can lead to offline Data Reduction Pools |
HU01934 | 8.2.0.3 | An issue in the handling of faulty canister components can lead to multiple node warmstarts for that canister |
HU01934 | 8.2.1.0 | An issue in the handling of faulty canister components can lead to multiple node warmstarts for that canister |
HU01936 | 8.2.1.8 | When shrinking a volume, that has host mappings, there may be recurring node warmstarts |
HU01936 | 8.3.0.0 | When shrinking a volume, that has host mappings, there may be recurring node warmstarts |
HU01937 | 8.2.1.4 | DRAID copy-back operation can overload NVMe drives resulting in high I/O latency |
HU01939 | 8.2.1.4 | After replacing a canister, and attempting to bring the new canister into the cluster, it may remain offline |
HU01940 | 7.8.1.8 | Changing the use of a drive can cause a Tier 2 recovery (warmstarts on all nodes in the cluster). This occurs only if the drive change occurs within a small timing window, so the probability of the issue occurring is low |
HU01940 | 8.1.0.0 | Changing the use of a drive can cause a Tier 2 recovery (warmstarts on all nodes in the cluster). This occurs only if the drive change occurs within a small timing window, so the probability of the issue occurring is low |
HU01941 | 8.2.1.4 | After upgrading the system to v8.2, or later, when expanding a mirrored volume, the formatting of additional space may become stalled |
HU01942 | 8.2.1.8 | NVMe drive ports can go offline, for a very short time, when an upgrade of that drive's firmware commences |
HU01942 | 8.3.0.0 | NVMe drive ports can go offline, for a very short time, when an upgrade of that drive's firmware commences |
HU01943 | 8.3.1.0 | Stopping a GMCV relationship with the -access flag may result in more processing than is required |
HU01944 | 8.2.1.4 | Proactive host failover not waiting for 25 seconds before allowing nodes to go offline during upgrades or maintenance |
HU01944 | 7.8.1.11 | Proactive host failover not waiting for 25 seconds before allowing nodes to go offline during upgrades or maintenance |
HU01945 | 8.2.1.4 | Systems with Flash Core Modules are unable to upgrade the firmware for those drives |
HU01952 | 7.8.1.11 | When the compression accelerator hardware driver detects an uncorrectable error the node will reboot |
HU01953 | 8.3.1.0 | Following a Data Reduction Pool recovery, in some circumstances, it may not be possible to create new volumes, via the GUI, due to an incorrect value being returned by the lsmdiskgrp command |
HU01955 | 8.3.0.0 | The presence of unsupported configurations, in a Spectrum Virtualize environment, can cause a mishandling of unsupported commands leading to a node warmstart |
HU01956 | 8.3.0.0 | The output from a lsdrive command shows the write endurance usage, for new read-intensive SSDs, as blank rather than 0% |
HU01957 | 8.2.1.0 | Due to an issue in Data Reduction Pools, when the system attempts an upgrade, there may be node warmstarts |
HU01957 | 8.1.3.6 | Due to an issue in Data Reduction Pools, when the system attempts an upgrade, there may be node warmstarts |
HU01959 | 8.2.1.4 | A timing window issue in the Thin Provisioning component can cause a node warmstart |
HU01961 | 8.2.1.4 | A hardware issue can provoke the system to repeatedly try to collect a statesave, from the enclosure management firmware, causing 1048 errors in the Event Log |
HU01962 | 8.2.1.4 | When Call Home servers return an invalid message it can be incorrectly reported as an error 3201 in the Event Log |
HU01963 | 8.3.0.0 | A deadlock condition in the deduplication component can lead to a node warmstart |
HU01964 | 8.3.1.0 | An issue in the cache component may limit I/O throughput |
HU01965 | 8.2.1.0 | A timing window issue in the deduplication component can lead to I/O timeouts, and a node warmstart, with the possibility of an offline MDisk group |
HU01967 | 8.3.1.0 | When I/O, in remote copy relationships, experiences delays (1720 and/or 1920 errors are logged) an I/O group may warmstart |
HU01967 | 8.2.1.8 | When I/O, in remote copy relationships, experiences delays (1720 and/or 1920 errors are logged) an I/O group may warmstart |
HU01968 | 8.3.1.2 | An upgrade may fail due to corrupt hardened data in a node. This can affect an I/O group |
HU01968 | 8.2.1.12 | An upgrade may fail due to corrupt hardened data in a node. This can affect an I/O group |
HU02215 | 8.3.1.2 | An upgrade may fail due to corrupt hardened data in a node. This can affect an I/O group |
HU02215 | 8.2.1.12 | An upgrade may fail due to corrupt hardened data in a node. This can affect an I/O group |
HU01969 | 8.3.0.0 | It is possible, after an rmrcrelationship command is run, that the connection to the remote cluster may be lost |
HU01970 | 7.8.1.12 | When a GMCV relationship is stopped, with the -access option, and the secondary volume is immediately deleted with -force, then all nodes may repeatedly warmstart |
HU01970 | 8.2.1.11 | When a GMCV relationship is stopped, with the -access option, and the secondary volume is immediately deleted with -force, then all nodes may repeatedly warmstart |
HU01970 | 8.3.1.0 | When a GMCV relationship is stopped, with the -access option, and the secondary volume is immediately deleted with -force, then all nodes may repeatedly warmstart |
HU01971 | 8.2.1.4 | Spurious DIMM over-temperature errors may cause a node to go offline with node error 528 |
HU01972 | 8.1.3.6 | When an array is in a quiescing state, for example where a member has been deleted, I/O may become pended, leading to multiple warmstarts |
HU01972 | 8.2.1.4 | When an array is in a quiescing state, for example where a member has been deleted, I/O may become pended, leading to multiple warmstarts |
HU01972 | 7.8.1.10 | When an array is in a quiescing state, for example where a member has been deleted, I/O may become pended, leading to multiple warmstarts |
HU01974 | 8.2.1.6 | With all Remote Support Assistant connections closed, the GUI may show that a connection is still in progress |
HU01974 | 8.3.0.0 | With all Remote Support Assistant connections closed, the GUI may show that a connection is still in progress |
HU01976 | 8.2.1.4 | A new MDisk array may not be encrypted even though encryption is enabled on the system |
HU01977 | 8.4.0.0 | CLI commands can produce a return code of 1 even though execution was successful |
HU01978 | 8.2.1.6 | Unable to create HyperSwap volumes. The mkvolume command fails with CMMVC7050E error |
HU01978 | 8.3.0.0 | Unable to create HyperSwap volumes. The mkvolume command fails with CMMVC7050E error |
HU01979 | 8.2.1.6 | The figure for used_virtualization, in the output of a lslicense command, may be unexpectedly large |
HU01979 | 8.3.0.0 | The figure for used_virtualization, in the output of a lslicense command, may be unexpectedly large |
HU01981 | 7.8.1.11 | Although an issue, in the HBA firmware, is handled correctly it can still cause a node warmstart |
HU01981 | 8.2.1.0 | Although an issue, in the HBA firmware, is handled correctly it can still cause a node warmstart |
HU01982 | 8.2.1.6 | In an environment, with multiple IP Quorum servers, if the quorum component encounters a duplicate UID then a node may warmstart |
HU01982 | 8.3.0.0 | In an environment, with multiple IP Quorum servers, if the quorum component encounters a duplicate UID then a node may warmstart |
HU01983 | 8.3.0.0 | Improve debug data capture to assist in determining why a Data Reduction Pool was taken offline |
HU01983 | 8.2.1.6 | Improve debug data capture to assist in determining why a Data Reduction Pool was taken offline |
HU01985 | 8.2.1.6 | As a consequence of a Data Reduction Pool recovery, bad metadata may be created. When the region of disk associated with the bad metadata is accessed, there may be I/O group warmstarts |
HU01985 | 8.3.0.0 | As a consequence of a Data Reduction Pool recovery, bad metadata may be created. When the region of disk associated with the bad metadata is accessed, there may be I/O group warmstarts |
HU01986 | 8.2.1.6 | An accounting issue in the FlashCopy component may cause node warmstarts |
HU01986 | 8.3.0.0 | An accounting issue in the FlashCopy component may cause node warmstarts |
HU01987 | 8.2.1.4 | During SAN fabric power maintenance a cluster may breach resource limits, on the remaining node-to-node links, resulting in system-wide lease expiry |
HU01988 | 7.8.1.11 | In the Monitoring -> 3D view page, the "export to csv" button does not function |
HU01989 | 8.2.1.6 | For large drives, bitmap scanning, during an array rebuild, can timeout resulting in multiple node warmstarts, possibly leading to offline I/O groups |
HU01989 | 8.3.0.0 | For large drives, bitmap scanning, during an array rebuild, can timeout resulting in multiple node warmstarts, possibly leading to offline I/O groups |
HU01990 | 8.3.0.0 | Bad return codes from the partnership compression component can cause multiple node warmstarts taking nodes offline |
HU01991 | 8.2.1.6 | An issue in the handling of extent allocation, in the Data Reduction Pool component, can cause a node warmstart |
HU01991 | 8.3.0.0 | An issue in the handling of extent allocation, in the Data Reduction Pool component, can cause a node warmstart |
HU01998 | 8.2.1.6 | All SCSI command types can set volumes as busy resulting in I/O timeouts and multiple node warmstarts, with the possibility of an offline I/O group. For more details refer to this Flash |
HU01998 | 8.3.0.1 | All SCSI command types can set volumes as busy resulting in I/O timeouts and multiple node warmstarts, with the possibility of an offline I/O group. For more details refer to this Flash |
HU02000 | 8.2.1.4 | Data Reduction Pools may go offline due to a timing issue in metadata handling |
HU02001 | 8.2.1.4 | During a system upgrade an issue in callhome may cause a node warmstart stalling the upgrade |
HU02002 | 8.2.1.4 | On busy systems, diagnostic data collection may not complete correctly producing livedumps with missing pages |
HU02005 | 8.3.0.0 | An issue in the background copy process prevents grains, above a 128TB limit, from being cleaned properly. As a consequence there may be multiple node warmstarts with the potential for a loss of access to data |
HU02005 | 8.2.1.11 | An issue in the background copy process prevents grains, above a 128TB limit, from being cleaned properly. As a consequence there may be multiple node warmstarts with the potential for a loss of access to data |
HU02006 | 8.3.0.1 | Garbage collection behaviour can become overzealous, adversely affecting performance |
HU02007 | 8.3.0.0 | During volume migration an issue, in the handling of old to new extents transfer, can lead to cluster-wide warmstarts |
HU02007 | 8.2.1.5 | During volume migration an issue, in the handling of old to new extents transfer, can lead to cluster-wide warmstarts |
HU02008 | 8.2.1.4 | When a DRAID rebuild occurs, occasionally a RAID deadlock condition can be triggered by a particular type of I/O workload. This can lead to repeated node warmstarts and a loss of access to data |
HU02009 | 8.2.1.5 | Systems which are using Data Reduction Pools, with the maximum possible extent size of 8GB, and which experience a very specific I/O workload, may experience an issue due to garbage collection. This can cause repeated node warmstarts and loss of access to data |
HU02009 | 8.3.0.0 | Systems which are using Data Reduction Pools, with the maximum possible extent size of 8GB, and which experience a very specific I/O workload, may experience an issue due to garbage collection. This can cause repeated node warmstarts and loss of access to data |
HU02010 | 8.3.1.9 | A single node warmstart may occur when a drive in a non-distributed RAID array is taken temporarily out-of-sync due to slow performance |
HU02010 | 8.4.0.10 | A single node warmstart may occur when a drive in a non-distributed RAID array is taken temporarily out-of-sync due to slow performance |
HU02011 | 8.2.1.5 | When a node warmstart occurs on a system using Data Reduction Pools, there is a small possibility that the node will not automatically return online. If the partner node is also offline, this can cause temporary loss of access to data |
HU02011 | 8.3.0.0 | When a node warmstart occurs on a system using Data Reduction Pools, there is a small possibility that the node will not automatically return online. If the partner node is also offline, this can cause temporary loss of access to data |
HU02012 | 8.2.1.5 | Under certain I/O workloads the garbage collection process can adversely impact volume write response times |
HU02012 | 8.3.0.0 | Under certain I/O workloads the garbage collection process can adversely impact volume write response times |
HU02013 | 8.2.1.4 | A race condition between the extent invalidation and destruction in the garbage collection process may cause a node warmstart with the possibility of offline volumes |
HU02013 | 8.1.3.6 | A race condition between the extent invalidation and destruction in the garbage collection process may cause a node warmstart with the possibility of offline volumes |
HU02014 | 7.8.1.11 | After a loss of power, where a node has a dead CMOS battery, it will fail to restart correctly. It is possible for both nodes in an I/O group to experience this issue |
HU02014 | 8.2.1.6 | After a loss of power, where a node has a dead CMOS battery, it will fail to restart correctly. It is possible for both nodes in an I/O group to experience this issue |
HU02014 | 8.3.0.1 | After a loss of power, where a node has a dead CMOS battery, it will fail to restart correctly. It is possible for both nodes in an I/O group to experience this issue |
HU02015 | 8.2.1.11 | Some read-intensive SSDs are incorrectly reporting wear rate thresholds, generating unnecessary errors in the Event Log |
HU02015 | 8.3.1.2 | Some read-intensive SSDs are incorrectly reporting wear rate thresholds, generating unnecessary errors in the Event Log |
HU02016 | 8.3.0.1 | A memory leak in the component that handles thin-provisioned MDisks can lead to an adverse performance impact with the possibility of offline MDisks. For more details refer to this Flash |
HU02016 | 8.2.1.6 | A memory leak in the component that handles thin-provisioned MDisks can lead to an adverse performance impact with the possibility of offline MDisks. For more details refer to this Flash |
HU02017 | 8.3.1.0 | Unstable inter-site links may cause a system-wide lease expiry leaving all nodes in a service state - one with error 564 and others with error 551 |
HU02019 | 8.2.1.4 | When the master and auxiliary volumes, in a relationship, have the same name it is not possible, in the GUI, to determine which is master or auxiliary |
HU02020 | 8.2.1.6 | An internal hardware bus, running at the incorrect speed, may give rise to spurious DIMM over-temperature errors |
HU02020 | 8.3.0.0 | An internal hardware bus, running at the incorrect speed, may give rise to spurious DIMM over-temperature errors |
HU02021 | 8.2.1.8 | Disabling garbage collection may cause a node warmstart |
HU02021 | 8.3.1.0 | Disabling garbage collection may cause a node warmstart |
HU02023 | 8.3.1.0 | An issue with the processing of FlashCopy map commands may result in a single node warmstart |
HU02025 | 8.2.1.4 | An issue with metadata handling, where a pool has been taken offline, may lead to an out-of-space condition in that pool, preventing its return to operation |
HU02025 | 8.1.3.6 | An issue with metadata handling, where a pool has been taken offline, may lead to an out-of-space condition in that pool, preventing its return to operation |
HU02026 | 8.3.1.0 | A timing window issue in the processing of FlashCopy status listing commands can cause a node warmstart |
HU02027 | 8.2.1.6 | Fabric congestion can cause internal resource constraints, in 16Gb HBAs, leading to lease expiries |
HU02027 | 8.3.0.0 | Fabric congestion can cause internal resource constraints, in 16Gb HBAs, leading to lease expiries |
HU02028 | 7.8.1.8 | An issue, with timer cancellation, in the Remote Copy component may cause a node warmstart |
HU02028 | 8.2.1.0 | An issue, with timer cancellation, in the Remote Copy component may cause a node warmstart |
HU02028 | 8.1.3.4 | An issue, with timer cancellation, in the Remote Copy component may cause a node warmstart |
HU02028 | 8.2.0.0 | An issue, with timer cancellation, in the Remote Copy component may cause a node warmstart |
HU02029 | 8.2.1.6 | An issue with the SSMTP process may result in failed callhome, inventory reporting and user notifications. A testemail command will fail with a CMMVC9051E error |
HU02029 | 8.3.0.0 | An issue with the SSMTP process may result in failed callhome, inventory reporting and user notifications. A testemail command will fail with a CMMVC9051E error |
HU02036 | 8.2.1.8 | It is possible for commands that alter pool-level extent reservations (i.e. migratevdisk or rmmdisk) to conflict with an ongoing EasyTier migration, resulting in a Tier 2 recovery |
HU02036 | 8.3.0.1 | It is possible for commands that alter pool-level extent reservations (i.e. migratevdisk or rmmdisk) to conflict with an ongoing EasyTier migration, resulting in a Tier 2 recovery |
HU02037 | 8.3.1.0 | A FlashCopy consistency group, with a mix of mappings in different states, cannot be stopped |
HU02037 | 8.2.1.6 | A FlashCopy consistency group, with a mix of mappings in different states, cannot be stopped |
HU02039 | 8.2.1.6 | An issue in the management steps of Data Reduction Pool recovery may lead to a node warmstart |
HU02039 | 8.3.0.0 | An issue in the management steps of Data Reduction Pool recovery may lead to a node warmstart |
HU02040 | 8.3.1.0 | VPD contains the incorrect FRU part number for the SAS adapter |
HU02042 | 8.1.3.4 | An issue in the handling of metadata, after a Data Reduction Pool recovery operation, can lead to repeated node warmstarts, putting an I/O group into a service state |
HU02042 | 8.2.1.0 | An issue in the handling of metadata, after a Data Reduction Pool recovery operation, can lead to repeated node warmstarts, putting an I/O group into a service state |
HU02042 | 8.2.0.2 | An issue in the handling of metadata, after a Data Reduction Pool recovery operation, can lead to repeated node warmstarts, putting an I/O group into a service state |
HU02043 | 8.2.1.6 | Collecting a snap can cause nodes to run out of boot drive space and go offline with node error 565 |
HU02043 | 8.3.0.1 | Collecting a snap can cause nodes to run out of boot drive space and go offline with node error 565 |
HU02043 | 7.8.1.11 | Collecting a snap can cause nodes to run out of boot drive space and go offline with node error 565 |
HU02044 | 8.3.0.1 | Multiple DRAID arrays can, where one is performing a rebuild, be exposed to a RAID deadlock condition resulting in multiple node warmstarts and a loss of access to data |
HU02044 | 8.2.1.8 | Multiple DRAID arrays can, where one is performing a rebuild, be exposed to a RAID deadlock condition resulting in multiple node warmstarts and a loss of access to data |
HU02045 | 8.2.1.6 | When a node is removed from the cluster, using CLI, it may still be shown as online in the GUI. If an attempt is made to shut down this node, from the GUI, whilst it appears to be online, then the whole cluster will shut down |
HU02045 | 8.3.0.1 | When a node is removed from the cluster, using CLI, it may still be shown as online in the GUI. If an attempt is made to shut down this node, from the GUI, whilst it appears to be online, then the whole cluster will shut down |
HU02048 | 8.2.1.12 | An issue in the handling of ATS commands from VMware hosts can cause a single node warmstart |
HU02048 | 8.3.1.0 | An issue in the handling of ATS commands from VMware hosts can cause a single node warmstart |
HU02049 | 8.2.1.8 | GUI session handling has an issue that can generate many exceptions, adversely impacting GUI performance |
HU02049 | 7.8.1.11 | GUI session handling has an issue that can generate many exceptions, adversely impacting GUI performance |
HU02050 | 8.3.0.1 | Compression hardware can have an issue processing certain types of data resulting in node reboots and marking the compression hardware as faulty even though it is serviceable |
HU02050 | 8.2.1.8 | Compression hardware can have an issue processing certain types of data resulting in node reboots and marking the compression hardware as faulty even though it is serviceable |
HU02051 | 8.3.0.0 | If unexpected actions are taken during node replacement, node warmstarts and temporary loss of access to data may occur. This issue can only occur if a node is replaced, and then the old node is re-added to the cluster |
HU02052 | 8.3.1.0 | During an upgrade an issue, with buffer handling, in Data Reduction Pool can lead to a node warmstart |
HU02053 | 8.3.0.1 | An issue with canister BIOS update can stall system upgrades |
HU02053 | 8.2.1.6 | An issue with canister BIOS update can stall system upgrades |
HU02054 | 8.3.1.0 | The event log handler maintains a second list of events. On rare occasions, for log full events, these lists can get out of step, resulting in a Tier 2 recovery |
HU02054 | 8.2.1.11 | The event log handler maintains a second list of events. On rare occasions, for log full events, these lists can get out of step, resulting in a Tier 2 recovery |
HU02055 | 8.2.1.6 | Creating a FlashCopy snapshot, in the GUI, does not set the same preferred node for both source and target volumes. This may adversely impact performance |
HU02055 | 8.3.0.1 | Creating a FlashCopy snapshot, in the GUI, does not set the same preferred node for both source and target volumes. This may adversely impact performance |
HU02058 | 8.3.1.3 | Changing a remote copy relationship from GMCV to MM or GM can result in a Tier 2 recovery |
HU02058 | 8.4.0.0 | Changing a remote copy relationship from GMCV to MM or GM can result in a Tier 2 recovery |
HU02058 | 8.2.1.12 | Changing a remote copy relationship from GMCV to MM or GM can result in a Tier 2 recovery |
HU02059 | 8.3.0.0 | Event Log may display quorum errors even though quorum devices are available |
HU02062 | 8.3.1.0 | An issue, with node index numbers for I/O groups, when using 32Gb HBAs may result in host ports incorrectly being reported offline |
HU02062 | 8.3.0.2 | An issue, with node index numbers for I/O groups, when using 32Gb HBAs may result in host ports incorrectly being reported offline |
HU02063 | 8.2.1.8 | HyperSwap clusters with only two surviving nodes may experience warmstarts on both of those nodes where rcbuffersize is set to 512MB |
HU02063 | 7.8.1.11 | HyperSwap clusters with only two surviving nodes may experience warmstarts on both of those nodes where rcbuffersize is set to 512MB |
HU02063 | 8.3.1.0 | HyperSwap clusters with only two surviving nodes may experience warmstarts on both of those nodes where rcbuffersize is set to 512MB |
HU02064 | 8.2.1.8 | An issue in the firmware for compression accelerator cards can cause offline compressed volumes. For more details refer to this Flash |
HU02064 | 8.3.0.1 | An issue in the firmware for compression accelerator cards can cause offline compressed volumes. For more details refer to this Flash |
HU02065 | 8.2.1.11 | Mishandling of Data Reduction Pool allocation request rejections can lead to node warmstarts that can take an MDisk group offline |
HU02065 | 8.3.1.0 | Mishandling of Data Reduction Pool allocation request rejections can lead to node warmstarts that can take an MDisk group offline |
HU02066 | 8.3.1.0 | If, during large (>8KB) reads from a host, a medium error is encountered, on backend storage, then there may be node warmstarts, with the possibility of a loss of access to data |
HU02067 | 8.2.1.6 | If multiple recipients are specified, for callhome emails, then no callhome emails will be sent |
HU02067 | 8.3.0.1 | If multiple recipients are specified, for callhome emails, then no callhome emails will be sent |
HU02069 | 8.2.1.11 | When a SCSI command, containing an invalid byte, is received there may be a node warmstart. This can affect both nodes, in an I/O group, at the same time |
HU02072 | 8.2.1.6 | An issue in the handling of email transmission can write a large file to the node boot drive. If this causes the boot drive to become full, the node will go offline with error 565 |
HU02072 | 8.3.0.1 | An issue in the handling of email transmission can write a large file to the node boot drive. If this causes the boot drive to become full, the node will go offline with error 565 |
HU02073 | 8.3.0.1 | Detection of an invalid list entry in the parity handling process can lead to a node warmstart |
HU02075 | 8.3.1.0 | A FlashCopy snapshot, sourced from the target of an Incremental FlashCopy map, can sometimes, temporarily, present incorrect data to the host |
HU02077 | 8.3.0.1 | A node upgrading to v8.2.1 or later will lose access to controllers directly-attached to its FC ports and the upgrade will stall |
HU02077 | 8.2.1.8 | A node upgrading to v8.2.1 or later will lose access to controllers directly-attached to its FC ports and the upgrade will stall |
HU02078 | 8.2.1.8 | Heavily unbalanced workloads, in stretched-cluster configurations, can bias inter-node traffic through one port, adversely affecting performance |
HU02078 | 8.3.1.0 | Heavily unbalanced workloads, in stretched-cluster configurations, can bias inter-node traffic through one port, adversely affecting performance |
HU02079 | 8.3.0.1 | Starting a FlashCopy mapping, within a Data Reduction Pool, a large number of times may cause a node warmstart |
HU02080 | 8.3.0.1 | When a Data Reduction Pool is running low on free space, the credit allocation algorithm, for garbage collection, can be exposed to a race condition, adversely affecting performance |
HU02080 | 8.2.1.11 | When a Data Reduction Pool is running low on free space, the credit allocation algorithm, for garbage collection, can be exposed to a race condition, adversely affecting performance |
HU02083 | 8.2.1.8 | During DRAID rebuilds, an issue in the handling of memory buffers can lead to multiple node warmstarts and a loss of access to data. For more details refer to this Flash |
HU02083 | 8.3.0.1 | During DRAID rebuilds, an issue in the handling of memory buffers can lead to multiple node warmstarts and a loss of access to data. For more details refer to this Flash |
HU02084 | 8.3.0.1 | If a node goes offline, after the firmware of multiple NVMe drives has been upgraded, then incorrect 3090/90021 errors may be seen in the Event Log |
HU02085 | 8.3.1.0 | Freeze time of Global Mirror remote copy consistency groups may not be updated correctly in certain scenarios |
HU02085 | 8.2.1.8 | Freeze time of Global Mirror remote copy consistency groups may not be updated correctly in certain scenarios |
HU02085 | 7.8.1.11 | Freeze time of Global Mirror remote copy consistency groups may not be updated correctly in certain scenarios |
HU02086 | 8.3.0.1 | An issue, in IP Quorum, may cause a Tier 2 recovery, during initial connection to a candidate device |
HU02086 | 8.2.1.8 | An issue, in IP Quorum, may cause a Tier 2 recovery, during initial connection to a candidate device |
HU02087 | 8.3.0.1 | LDAP users with SSH keys cannot create volumes after upgrading to 8.3.0.0 |
HU02088 | 8.5.0.0 | There can be multiple node warmstarts when no mailservers are configured |
HU02088 | 8.4.2.0 | There can be multiple node warmstarts when no mailservers are configured |
HU02088 | 8.4.0.10 | There can be multiple node warmstarts when no mailservers are configured |
HU02089 | 8.3.0.1 | Due to changes to quorum management, during an upgrade to v8.2.x, or later, there may be multiple warmstarts, with the possibility of a loss of access to data |
HU02089 | 8.2.1.8 | Due to changes to quorum management, during an upgrade to v8.2.x, or later, there may be multiple warmstarts, with the possibility of a loss of access to data |
HU02090 | 8.2.1.8 | When a failing drive experiences an error, RAID may mishandle it, resulting in a node warmstart |
HU02090 | 8.3.0.0 | When a failing drive experiences an error, RAID may mishandle it, resulting in a node warmstart |
HU02091 | 8.2.1.11 | Upgrading to v8.2.1.8, or later, may result in a licensing error in the Event Log |
HU02091 | 8.3.1.2 | Upgrading to v8.2.1.8, or later, may result in a licensing error in the Event Log |
HU02092 | 8.4.0.0 | The effectiveness of slow drain mitigation can become reduced causing fabric congestion to adversely impact all ports on an adapter |
HU02093 | 8.2.1.8 | A locking issue in the inter-node communications, of V5030 systems, can lead to a deadlock condition, resulting in a node warmstart |
HU02095 | 8.2.1.12 | The effective_used_capacity field of lsarray/lsmdisk commands should be empty for RAID arrays which do not contain overprovisioned drives. However, sometimes this field can be zero even though it should be empty. This can cause incorrect provisioned capacity reporting in the GUI |
HU02095 | 8.5.0.0 | The effective_used_capacity field of lsarray/lsmdisk commands should be empty for RAID arrays which do not contain overprovisioned drives. However, sometimes this field can be zero even though it should be empty. This can cause incorrect provisioned capacity reporting in the GUI |
HU02095 | 8.4.0.2 | The effective_used_capacity field of lsarray/lsmdisk commands should be empty for RAID arrays which do not contain overprovisioned drives. However, sometimes this field can be zero even though it should be empty. This can cause incorrect provisioned capacity reporting in the GUI |
HU02095 | 8.3.1.4 | The effective_used_capacity field of lsarray/lsmdisk commands should be empty for RAID arrays which do not contain overprovisioned drives. However, sometimes this field can be zero even though it should be empty. This can cause incorrect provisioned capacity reporting in the GUI |
HU02097 | 8.3.0.1 | Workloads, with data that is highly suited to deduplication, can provoke high CPU utilisation, as multiple destinations try to dedupe to one source. This adversely impacts performance with the possibility of offline MDisk groups |
HU02097 | 8.2.1.11 | Workloads, with data that is highly suited to deduplication, can provoke high CPU utilisation, as multiple destinations try to dedupe to one source. This adversely impacts performance with the possibility of offline MDisk groups |
HU02099 | 8.3.1.0 | Cloud callhome error 3201 messages may appear in the Event Log |
HU02099 | 8.2.1.8 | Cloud callhome error 3201 messages may appear in the Event Log |
HU02102 | 7.8.1.11 | Excessive processing time required for FlashCopy bitmap operations, associated with large (> 20TB) Global Mirror change volumes, may lead to a node warmstart |
HU02102 | 8.3.1.0 | Excessive processing time required for FlashCopy bitmap operations, associated with large (> 20TB) Global Mirror change volumes, may lead to a node warmstart |
HU02102 | 8.2.1.9 | Excessive processing time required for FlashCopy bitmap operations, associated with large (> 20TB) Global Mirror change volumes, may lead to a node warmstart |
HU02102 | 8.3.0.2 | Excessive processing time required for FlashCopy bitmap operations, associated with large (> 20TB) Global Mirror change volumes, may lead to a node warmstart |
HU02103 | 8.3.1.0 | The system management firmware may, incorrectly, attempt to obtain an IP address, using DHCP, making it accessible via Ethernet |
HU02103 | 8.2.1.11 | The system management firmware may, incorrectly, attempt to obtain an IP address, using DHCP, making it accessible via Ethernet |
HU02104 | 8.3.0.2 | An issue in the RAID component, in the presence of very high I/O workload and the exhaustion of cache resources, can see a deadlock condition occurring which prevents further I/O processing. The system detects this issue and takes the storage pool offline for a six minute period, to clear the problem. The pool is then brought online automatically, and normal operation resumes. For more details refer to this Flash |
HU02104 | 8.2.1.9 | An issue in the RAID component, in the presence of very high I/O workload and the exhaustion of cache resources, can see a deadlock condition occurring which prevents further I/O processing. The system detects this issue and takes the storage pool offline for a six minute period, to clear the problem. The pool is then brought online automatically, and normal operation resumes. For more details refer to this Flash |
HU02106 | 8.2.1.11 | Multiple node warmstarts, in quick succession, can cause the partner node to lease expire |
HU02106 | 8.3.1.2 | Multiple node warmstarts, in quick succession, can cause the partner node to lease expire |
HU02108 | 8.2.1.11 | Deleting a managed disk group, with -force, may cause multiple warmstarts with the possibility of a loss of access to data |
HU02108 | 8.3.1.0 | Deleting a managed disk group, with -force, may cause multiple warmstarts with the possibility of a loss of access to data |
HU02109 | 8.2.1.11 | Free extents may not be unmapped after volume deletion, or migration, resulting in out-of-space conditions on backend controllers |
HU02109 | 8.3.0.2 | Free extents may not be unmapped after volume deletion, or migration, resulting in out-of-space conditions on backend controllers |
HU02109 | 8.3.1.0 | Free extents may not be unmapped after volume deletion, or migration, resulting in out-of-space conditions on backend controllers |
HU02111 | 8.3.1.0 | An issue with how Data Reduction Pool handles data, at the sub-extent level, may result in a node warmstart |
HU02111 | 8.2.1.11 | An issue with how Data Reduction Pool handles data, at the sub-extent level, may result in a node warmstart |
HU02114 | 8.3.1.0 | Upgrading FCM firmware on multiple I/O group systems can cause a drive to become stuck at 0% sync with the corresponding array in a 'syncing' state |
HU02114 | 8.2.1.11 | Upgrading FCM firmware on multiple I/O group systems can cause a drive to become stuck at 0% sync with the corresponding array in a 'syncing' state |
HU02114 | 8.3.0.2 | Upgrading FCM firmware on multiple I/O group systems can cause a drive to become stuck at 0% sync with the corresponding array in a 'syncing' state |
HU02115 | 8.3.1.0 | Attempting to upgrade all drive firmware, with an inadequate drive package, may lead to multiple node warmstarts, with the possibility of a loss of access to data |
HU02115 | 8.3.0.2 | Attempting to upgrade all drive firmware, with an inadequate drive package, may lead to multiple node warmstarts, with the possibility of a loss of access to data |
HU02119 | 8.3.1.0 | NVMe drive replacement on 8.3.0.0 or 8.3.0.1 may result in the GUI, and lsdrive CLI command, showing a ghost drive |
HU02121 | 8.2.1.8 | When the system changes from copyback to rebuild, a failure to clear related metadata can cause multiple node warmstarts, with the possibility of a loss of access |
HU02121 | 8.3.0.0 | When the system changes from copyback to rebuild, a failure to clear related metadata can cause multiple node warmstarts, with the possibility of a loss of access |
HU02123 | 8.3.0.0 | For direct-attached hosts, a race condition between the FLOGI and Link UP processes can result in FC ports not coming online |
HU02123 | 8.2.1.11 | For direct-attached hosts, a race condition between the FLOGI and Link UP processes can result in FC ports not coming online |
HU02124 | 8.2.1.11 | Due to an issue with FCM thin provisioning calculations, the GUI may incorrectly display volume capacity and capacity savings as zero |
HU02126 | 8.3.0.1 | There is a low probability that excessive SSH connections may trigger a single node warmstart on the configuration node |
HU02126 | 8.2.1.9 | There is a low probability that excessive SSH connections may trigger a single node warmstart on the configuration node |
HU02126 | 7.8.1.11 | There is a low probability that excessive SSH connections may trigger a single node warmstart on the configuration node |
HU02127 | 8.5.0.0 | 32Gbps FC ports will auto-negotiate to 8Gbps, if they are connected to a 16Gbps Cisco switch port |
HU02127 | 8.4.2.0 | 32Gbps FC ports will auto-negotiate to 8Gbps, if they are connected to a 16Gbps Cisco switch port |
HU02128 | 8.3.1.2 | Deduplication volume lookup can over utilise resources causing an adverse performance impact |
HU02129 | 8.2.1.6 | GUI drive filtering fails with "An error occurred loading table data" |
HU02129 | 8.3.0.1 | GUI drive filtering fails with "An error occurred loading table data" |
HU02130 | 8.3.0.1 | An issue with the RAID scrub process can overload Nearline SAS drives causing premature failures |
HU02131 | 8.2.1.9 | When changing DRAID configuration, for an array with an active workload, a deadlock condition can occur resulting in a single node warmstart |
HU02131 | 8.3.0.1 | When changing DRAID configuration, for an array with an active workload, a deadlock condition can occur resulting in a single node warmstart |
HU02132 | 8.2.1.12 | Removing a thin-provisioned volume and then immediately creating one of the same size may cause node warmstarts |
HU02132 | 8.3.1.0 | Removing a thin-provisioned volume and then immediately creating one of the same size may cause node warmstarts |
HU02133 | 8.3.0.0 | NVMe drives may become degraded after a drive reseat or node reboot |
HU02133 | 8.2.1.9 | NVMe drives may become degraded after a drive reseat or node reboot |
HU02134 | 8.3.0.0 | A timing issue, in handling chquorum CLI commands, can result in fewer than three quorum devices being available |
HU02135 | 8.3.1.2 | Removing multiple IQNs for an iSCSI host can result in a Tier 2 recovery |
HU02135 | 8.2.1.11 | Removing multiple IQNs for an iSCSI host can result in a Tier 2 recovery |
HU02137 | 8.2.1.11 | An issue with support for target resets in Nimble Storage controllers may cause a node warmstart |
HU02137 | 8.3.1.2 | An issue with support for target resets in Nimble Storage controllers may cause a node warmstart |
HU02138 | 8.2.1.11 | An issue in Data Reduction Pool garbage collection can cause I/O timeouts leading to an offline pool |
HU02138 | 8.3.1.0 | An issue in Data Reduction Pool garbage collection can cause I/O timeouts leading to an offline pool |
HU02139 | 8.4.0.0 | When 32Gbps FC adapters are fitted, the maximum supported ambient temperature is decreased, leading to more threshold-exceeded errors in the Event Log |
HU02141 | 8.3.1.0 | An issue in the max replication delay function may trigger a Tier 2 recovery, after posting multiple 1920 errors in the Event Log. For more details refer to this Flash |
HU02141 | 8.2.1.11 | An issue in the max replication delay function may trigger a Tier 2 recovery, after posting multiple 1920 errors in the Event Log. For more details refer to this Flash |
HU02142 | 8.2.1.12 | It is possible for a backend unmap process to become stalled, preventing system configuration changes from completing |
HU02142 | 8.4.0.0 | It is possible for a backend unmap process to become stalled, preventing system configuration changes from completing |
HU02142 | 8.3.1.3 | It is possible for a backend unmap process to become stalled, preventing system configuration changes from completing |
HU02143 | 8.3.1.0 | The performance profile, for some enterprise tier drives, may not correctly match the drives' capabilities, leading to that tier being overdriven |
HU02143 | 8.2.1.10 | The performance profile, for some enterprise tier drives, may not correctly match the drives' capabilities, leading to that tier being overdriven |
HU02143 | 8.3.0.3 | The performance profile, for some enterprise tier drives, may not correctly match the drives' capabilities, leading to that tier being overdriven |
HU02146 | 8.3.1.0 | An issue in inter-node message handling may cause a node warmstart |
HU02149 | 7.8.1.11 | When an Enhanced Stretch Cluster is using NPIV, in transitional mode, the path priority is not being reported correctly to some hosts |
HU02149 | 8.2.1.11 | When an Enhanced Stretch Cluster is using NPIV, in transitional mode, the path priority is not being reported correctly to some hosts |
HU02149 | 8.3.0.0 | When an Enhanced Stretch Cluster is using NPIV, in transitional mode, the path priority is not being reported correctly to some hosts |
HU02152 | 8.3.1.0 | Due to an issue in RAID there may be I/O timeouts, leading to node warmstarts, with the possibility of a loss of access to data |
HU02153 | 8.3.1.4 | Fabric or host issues can cause aborted IOs to block the port throttle queue, leading to adverse performance that is cleared by a node warmstart |
HU02153 | 8.4.0.0 | Fabric or host issues can cause aborted IOs to block the port throttle queue, leading to adverse performance that is cleared by a node warmstart |
HU02154 | 8.3.1.2 | If a node is rebooted, when remote support is enabled, then all other nodes will warmstart |
HU02154 | 8.2.1.11 | If a node is rebooted, when remote support is enabled, then all other nodes will warmstart |
HU02155 | 8.2.1.11 | Upgrading to v8.2.1 may result in offline managed disk groups and OOS events (1685/1687) appearing in the Event Log |
HU02156 | 8.3.1.3 | Global Mirror environments may experience more frequent 1920 events due to writedone message queuing |
HU02156 | 8.4.0.0 | Global Mirror environments may experience more frequent 1920 events due to writedone message queuing |
HU02156 | 8.2.1.12 | Global Mirror environments may experience more frequent 1920 events due to writedone message queuing |
HU02157 | 8.2.1.12 | Issuing a mkdistributedarray command may result in a node warmstart |
HU02157 | 8.3.1.0 | Issuing a mkdistributedarray command may result in a node warmstart |
HU02159 | 8.7.0.0 | A rare issue caused by unexpected I/O in the upper cache can cause a node to warmstart |
HU02162 | 8.3.1.3 | When a node warmstart occurs during an upgrade from v8.3.0.0, or earlier, to 8.3.0.1, or later, with dedup enabled, it can lead to repeated node warmstarts across the cluster, necessitating a Tier 3 recovery |
HU02164 | 8.4.0.0 | An issue in Remote Copy may cause a loss of hardened data when a node is warmstarted |
HU02164 | 8.2.1.12 | An issue in Remote Copy may cause a loss of hardened data when a node is warmstarted |
HU02164 | 8.3.1.3 | An issue in Remote Copy may cause a loss of hardened data when a node is warmstarted |
HU02166 | 8.2.1.4 | A timing window issue, in RAID code that handles recovery after a drive has been taken out of sync due to a slow I/O, can cause a single node warmstart |
HU02168 | 8.3.1.2 | In the event of unexpected power loss a node may not save system data |
HU02168 | 8.2.1.11 | In the event of unexpected power loss a node may not save system data |
HU02169 | 8.3.1.0 | After a Tier 3 recovery, different nodes may report different UIDs for a subset of volumes |
HU02170 | 8.4.0.0 | During NVMe SSD firmware upgrades, peak read latency may reach 10 seconds |
HU02171 | 8.5.0.0 | The timezone for Iceland is set incorrectly |
HU02171 | 8.4.0.7 | The timezone for Iceland is set incorrectly |
HU02171 | 8.4.2.0 | The timezone for Iceland is set incorrectly |
HU02172 | 8.4.0.0 | The CLI command lsdependentvdisks -enclosure X causes node warmstarts if no nodes are online in that enclosure |
HU02173 | 8.2.1.11 | During a pending fabric login, when an abort is received, it is possible for a related entry in the WWPN table to not be removed. The node will warmstart to clear this condition |
HU02173 | 8.3.1.0 | During a pending fabric login, when an abort is received, it is possible for a related entry in the WWPN table to not be removed. The node will warmstart to clear this condition |
HU02174 | 8.4.0.7 | A timing window issue related to remote copy memory allocation can result in a node warmstart |
HU02174 | 8.5.0.0 | A timing window issue related to remote copy memory allocation can result in a node warmstart |
HU02174 | 8.4.2.0 | A timing window issue related to remote copy memory allocation can result in a node warmstart |
HU02175 | 8.3.1.2 | A GUI issue can cause drive counts to be inconsistent and crash browsers |
HU02176 | 8.2.1.12 | During an upgrade, a node may limit the number of target ports it reports, causing a failover contradiction on hosts |
HU02178 | 8.3.1.2 | IP Quorum hosts may not be shown in lsquorum command output |
HU02180 | 8.3.1.3 | When a svctask restorefcmap command is run on a VVol that is the target of another FlashCopy mapping, both nodes in an I/O group may warmstart |
HU02182 | 8.3.1.2 | Cisco MDS switches with old firmware may refuse port logins leading to a loss of access. For more details refer to this Flash |
HU02183 | 8.2.1.11 | An issue in the way inter-node communication is handled can lead to a node warmstart |
HU02183 | 8.3.1.0 | An issue in the way inter-node communication is handled can lead to a node warmstart |
HU02184 | 8.2.1.12 | When a 3PAR controller experiences a fault that prevents normal I/O processing it may issue a SCSI TARGET RESET command. This command is not supported and may cause multiple node asserts, possibly cluster-wide |
HU02184 | 8.4.0.0 | When a 3PAR controller experiences a fault that prevents normal I/O processing it may issue a SCSI TARGET RESET command. This command is not supported and may cause multiple node asserts, possibly cluster-wide |
HU02184 | 8.3.1.3 | When a 3PAR controller experiences a fault that prevents normal I/O processing it may issue a SCSI TARGET RESET command. This command is not supported and may cause multiple node asserts, possibly cluster-wide |
HU02186 | 8.2.1.13 | NVMe drive pulls or firmware upgrades may lead to offline pools with the possibility of a small loss of data integrity. For more details refer to this Flash |
HU02186 | 8.3.1.5 | NVMe drive pulls or firmware upgrades may lead to offline pools with the possibility of a small loss of data integrity. For more details refer to this Flash |
HU02186 | 8.4.0.0 | NVMe drive pulls or firmware upgrades may lead to offline pools with the possibility of a small loss of data integrity. For more details refer to this Flash |
HU02190 | 8.3.1.0 | Error 1046 not triggering a Call Home even though it is a hardware fault |
HU02190 | 8.2.1.11 | Error 1046 not triggering a Call Home even though it is a hardware fault |
HU02194 | 8.3.1.3 | Password reset via USB drive does not work as expected and the user is unable to log in to the Management or Service Assistant GUI with the new password |
HU02194 | 8.4.0.0 | Password reset via USB drive does not work as expected and the user is unable to log in to the Management or Service Assistant GUI with the new password |
HU02196 | 8.3.1.3 | A particular sequence of inter-node messaging delays can lead to a cluster-wide lease expiry |
HU02196 | 8.4.0.0 | A particular sequence of inter-node messaging delays can lead to a cluster-wide lease expiry |
HU02253 | 8.3.1.3 | A particular sequence of inter-node messaging delays can lead to a cluster-wide lease expiry |
HU02253 | 8.4.0.0 | A particular sequence of inter-node messaging delays can lead to a cluster-wide lease expiry |
HU02197 | 8.3.1.0 | Bulk volume removals can adversely impact related FlashCopy mappings leading to a Tier 2 recovery |
HU02197 | 8.2.1.11 | Bulk volume removals can adversely impact related FlashCopy mappings leading to a Tier 2 recovery |
HU02197 | 7.8.1.12 | Bulk volume removals can adversely impact related FlashCopy mappings leading to a Tier 2 recovery |
HU02200 | 8.2.1.12 | When upgrading from v8.1 or earlier to v8.2.1 or later a remote copy issue may cause a node warmstart, stalling the upgrade |
HU02201 | 8.2.1.12 | Shortly after upgrading drive firmware, specific drive models can fail due to "Too many long IOs to drive for too long" errors |
HU02201 | 8.4.0.2 | Shortly after upgrading drive firmware, specific drive models can fail due to "Too many long IOs to drive for too long" errors |
HU02201 | 7.8.1.13 | Shortly after upgrading drive firmware, specific drive models can fail due to "Too many long IOs to drive for too long" errors |
HU02201 | 8.5.0.0 | Shortly after upgrading drive firmware, specific drive models can fail due to "Too many long IOs to drive for too long" errors |
HU02201 | 8.3.1.3 | Shortly after upgrading drive firmware, specific drive models can fail due to "Too many long IOs to drive for too long" errors |
HU02221 | 8.2.1.12 | Shortly after upgrading drive firmware, specific drive models can fail due to "Too many long IOs to drive for too long" errors |
HU02221 | 8.4.0.2 | Shortly after upgrading drive firmware, specific drive models can fail due to "Too many long IOs to drive for too long" errors |
HU02221 | 7.8.1.13 | Shortly after upgrading drive firmware, specific drive models can fail due to "Too many long IOs to drive for too long" errors |
HU02221 | 8.5.0.0 | Shortly after upgrading drive firmware, specific drive models can fail due to "Too many long IOs to drive for too long" errors |
HU02221 | 8.3.1.3 | Shortly after upgrading drive firmware, specific drive models can fail due to "Too many long IOs to drive for too long" errors |
HU02202 | 8.3.1.2 | During a migratevdisk operation, if MDisk tiers in the target pool do not match those in the source pool, then a Tier 2 recovery may occur |
HU02203 | 8.3.1.2 | When a node reboots, it is possible for the node to be unable to communicate with some of the NVMe drives in the enclosure |
HU02203 | 8.2.1.11 | When a node reboots, it is possible for the node to be unable to communicate with some of the NVMe drives in the enclosure |
HU02204 | 8.3.1.2 | After a Tier 2 recovery a node may fail to rejoin the cluster |
HU02205 | 8.2.1.11 | Incremental FlashCopy targets can be corrupted when the FlashCopy source is a target of a remote copy relationship |
HU02205 | 8.3.1.0 | Incremental FlashCopy targets can be corrupted when the FlashCopy source is a target of a remote copy relationship |
HU02206 | 8.3.1.0 | Garbage collection can operate at inappropriate times, generating inefficient backend workload, adversely affecting flash drive write endurance and overloading nearline drives |
HU02207 | 8.3.1.2 | If hosts send more concurrent iSCSI commands than a node can handle then it may enter a service state (error 578) |
HU02208 | 8.3.1.3 | An issue with the handling of files by quorum can lead to a node warmstart |
HU02208 | 8.4.0.0 | An issue with the handling of files by quorum can lead to a node warmstart |
HU02210 | 8.3.1.3 | There is a very small timing window where a volume may be reported as offline, to a host, during its conversion from a regular volume to a HyperSwap volume |
HU02210 | 8.4.0.0 | There is a very small timing window where a volume may be reported as offline, to a host, during its conversion from a regular volume to a HyperSwap volume |
HU02212 | 8.2.1.11 | Remote Copy secondary may have inconsistent data following a stop with -access due to a missing bitmap merge from FlashCopy to Remote Copy. For more details refer to this Flash |
HU02212 | 8.3.1.2 | Remote Copy secondary may have inconsistent data following a stop with -access due to a missing bitmap merge from FlashCopy to Remote Copy. For more details refer to this Flash |
HU02213 | 8.3.1.3 | A Hot Spare Node (HSN) timing window issue can, during an HSN activation or deactivation, cause the cluster to broadcast an invalid VPD update to other clusters on the SAN. This may trigger a Tier 2 recovery on the other cluster. For more details refer to this Flash |
HU02213 | 8.2.1.12 | A Hot Spare Node (HSN) timing window issue can, during an HSN activation or deactivation, cause the cluster to broadcast an invalid VPD update to other clusters on the SAN. This may trigger a Tier 2 recovery on the other cluster. For more details refer to this Flash |
HU02213 | 8.4.0.0 | A Hot Spare Node (HSN) timing window issue can, during an HSN activation or deactivation, cause the cluster to broadcast an invalid VPD update to other clusters on the SAN. This may trigger a Tier 2 recovery on the other cluster. For more details refer to this Flash |
HU02214 | 8.2.1.11 | Under a certain I/O pattern it is possible for metadata management in Data Reduction Pools to become inconsistent leading to a node warmstart |
HU02214 | 8.3.1.0 | Under a certain I/O pattern it is possible for metadata management in Data Reduction Pools to become inconsistent leading to a node warmstart |
HU02216 | 8.3.1.2 | When migrating or deleting a Change Volume of an RC relationship, the system might be exposed to a Tier 2 (Automatic Cluster Restart) recovery. When deleting the Change Volumes, the T2 will recur, placing the nodes into a 564 state. Migrating the Change Volume will trigger a T2 and recover. For more details refer to this Flash |
HU02217 | 8.4.2.0 | Incomplete re-synchronisation following a Tier 3 recovery can lead to RAID inconsistencies |
HU02219 | 8.5.0.12 | Certain tier 1 flash drives report 'SCSI check condition: Aborted command' events |
HU02222 | 8.2.1.11 | Where the source volume of an incremental FlashCopy map is also a Metro or Global Mirror target volume that is using a change volume or is a HyperSwap volume, there is a possibility that not all data will be copied to the FlashCopy target. For more details refer to this Flash |
HU02222 | 8.3.1.2 | Where the source volume of an incremental FlashCopy map is also a Metro or Global Mirror target volume that is using a change volume or is a HyperSwap volume, there is a possibility that not all data will be copied to the FlashCopy target. For more details refer to this Flash |
HU02222 | 7.8.1.13 | Where the source volume of an incremental FlashCopy map is also a Metro or Global Mirror target volume that is using a change volume or is a HyperSwap volume, there is a possibility that not all data will be copied to the FlashCopy target. For more details refer to this Flash |
HU02224 | 8.3.1.2 | When the RAID component fails to free up memory quickly enough for I/O processing there can be a single node warmstart |
HU02225 | 8.4.0.0 | An issue in the Thin Provisioning feature can lead to multiple warmstarts with the possibility of a loss of access to data |
HU02226 | 8.5.0.0 | Due to an issue in DRP a node can repeatedly warmstart whilst rejoining a cluster |
HU02226 | 8.4.0.6 | Due to an issue in DRP a node can repeatedly warmstart whilst rejoining a cluster |
HU02226 | 8.3.1.6 | Due to an issue in DRP a node can repeatedly warmstart whilst rejoining a cluster |
HU02227 | 8.3.1.4 | Certain I/O patterns can cause compression hardware to post errors. When those errors exceed a threshold, the node can be taken offline |
HU02227 | 8.5.0.0 | Certain I/O patterns can cause compression hardware to post errors. When those errors exceed a threshold, the node can be taken offline |
HU02227 | 8.2.1.12 | Certain I/O patterns can cause compression hardware to post errors. When those errors exceed a threshold, the node can be taken offline |
HU02227 | 8.4.0.2 | Certain I/O patterns can cause compression hardware to post errors. When those errors exceed a threshold, the node can be taken offline |
HU02229 | 8.3.1.2 | An issue in the BIOS firmware of some systems can cause a severe performance impact for iSCSI hosts |
HU02230 | 8.4.0.0 | For IBM Flash Core Modules a change of state, from unused to candidate, can lead to a Tier 2 recovery |
HU02230 | 8.3.1.3 | For IBM Flash Core Modules a change of state, from unused to candidate, can lead to a Tier 2 recovery |
HU02232 | 8.4.0.0 | Forced removal of large volumes in FlashCopy mappings can cause multiple node warmstarts with the possibility of a loss of access |
HU02234 | 8.3.1.2 | An issue in HyperSwap Read Passthrough can cause multiple node warmstarts with the possibility of a loss of access to data |
HU02235 | 8.3.1.2 | The SSH CLI prompt can contain the characters "FB" after the cluster name |
HU02237 | 8.3.1.2 | Under a rare and complicated set of conditions, a RAID 1 or RAID 10 array may drop a write, causing undetected data corruption. For more details refer to this Flash |
HU02237 | 8.2.1.11 | Under a rare and complicated set of conditions, a RAID 1 or RAID 10 array may drop a write, causing undetected data corruption. For more details refer to this Flash |
HU02237 | 8.3.0.2 | Under a rare and complicated set of conditions, a RAID 1 or RAID 10 array may drop a write, causing undetected data corruption. For more details refer to this Flash |
HU02238 | 8.3.0.2 | Force-stopping a FlashCopy map, where the source volume is a Metro or Global Mirror target volume, may cause other FlashCopy maps to return invalid data if they are not 100% copied, in specific configurations. For more details refer to this Flash |
HU02238 | 7.8.1.12 | Force-stopping a FlashCopy map, where the source volume is a Metro or Global Mirror target volume, may cause other FlashCopy maps to return invalid data if they are not 100% copied, in specific configurations. For more details refer to this Flash |
HU02238 | 8.2.1.11 | Force-stopping a FlashCopy map, where the source volume is a Metro or Global Mirror target volume, may cause other FlashCopy maps to return invalid data if they are not 100% copied, in specific configurations. For more details refer to this Flash |
HU02238 | 8.3.1.2 | Force-stopping a FlashCopy map, where the source volume is a Metro or Global Mirror target volume, may cause other FlashCopy maps to return invalid data if they are not 100% copied, in specific configurations. For more details refer to this Flash |
HU02239 | 8.4.0.0 | A rare race condition in the Xcopy function can cause a single node warmstart |
HU02241 | 8.2.1.12 | IP Replication can fail to create IP partnerships via the secondary cluster management IP |
HU02241 | 8.4.0.0 | IP Replication can fail to create IP partnerships via the secondary cluster management IP |
HU02241 | 8.3.1.3 | IP Replication can fail to create IP partnerships via the secondary cluster management IP |
HU02242 | 8.3.1.2 | An iSCSI IP address, with a gateway argument of 0.0.0.0, is not properly assigned to each Ethernet port and any previously set iSCSI IP address may be retained |
HU02243 | 8.5.0.0 | The DMP for a 1670 event (replace CMOS) will shut down a node without confirmation from the user |
HU02243 | 8.4.2.0 | The DMP for a 1670 event (replace CMOS) will shut down a node without confirmation from the user |
HU02244 | 8.3.1.3 | False positive node error 766 (depleted CMOS battery) messages may appear in the Event Log |
HU02244 | 8.2.1.12 | False positive node error 766 (depleted CMOS battery) messages may appear in the Event Log |
HU02245 | 8.4.0.0 | First support data collection fails to upload successfully |
HU02247 | 8.2.1.11 | Unnecessary Ethernet MAC flapping messages reported in switch logs |
HU02247 | 8.3.1.0 | Unnecessary Ethernet MAC flapping messages reported in switch logs |
HU02248 | 8.3.1.3 | After upgrade the system may be unable to perform LDAP authentication |
HU02250 | 8.4.0.0 | Duplicate volume names may cause multiple asserts |
HU02251 | 8.3.1.3 | A warmstart may occur when a node receives iSCSI host login/logout requests out of sequence |
HU02251 | 8.4.0.0 | A warmstart may occur when a node receives iSCSI host login/logout requests out of sequence |
HU02255 | 8.3.1.3 | A timing issue in the processing of login requests can cause a single node warmstart |
HU02255 | 8.4.0.0 | A timing issue in the processing of login requests can cause a single node warmstart |
HU02261 | 8.3.1.4 | A Data Reduction Pool may be taken offline when metadata is detected to hold an invalid compression flag. For more details refer to this Flash |
HU02261 | 8.5.0.0 | A Data Reduction Pool may be taken offline when metadata is detected to hold an invalid compression flag. For more details refer to this Flash |
HU02261 | 8.4.0.2 | A Data Reduction Pool may be taken offline when metadata is detected to hold an invalid compression flag. For more details refer to this Flash |
HU02262 | 8.4.0.0 | Entering the CLI applydrivesoftware -cancel command may result in cluster-wide warmstarts |
HU02262 | 8.3.1.3 | Entering the CLI applydrivesoftware -cancel command may result in cluster-wide warmstarts |
HU02263 | 8.4.2.0 | The pool properties dialog in the GUI displays thin-provisioning savings, compression savings and total savings. In Data Reduction Pools, the thin-provisioning savings displayed are actually the total savings instead of the thin-provisioning savings only |
HU02263 | 8.4.0.6 | The pool properties dialog in the GUI displays thin-provisioning savings, compression savings and total savings. In Data Reduction Pools, the thin-provisioning savings displayed are actually the total savings instead of the thin-provisioning savings only |
HU02263 | 8.5.0.0 | The pool properties dialog in the GUI displays thin-provisioning savings, compression savings and total savings. In Data Reduction Pools, the thin-provisioning savings displayed are actually the total savings instead of the thin-provisioning savings only |
HU02265 | 8.4.0.0 | Enhanced inventory can sometimes be missing from callhome data due to the lsfabric command timing out |
HU02266 | 8.2.1.12 | An issue in auto-expand can cause expansion to fail and the volume to be taken offline |
HU02266 | 8.3.1.3 | An issue in auto-expand can cause expansion to fail and the volume to be taken offline |
HU02266 | 8.4.0.0 | An issue in auto-expand can cause expansion to fail and the volume to be taken offline |
HU02267 | 8.4.0.0 | After upgrade it is possible for a node IP address to become duplicated with the cluster IP address and access to the config node to be lost as a consequence |
HU02273 | 8.4.2.0 | When write I/O workload to a HyperSwap volume site reaches a certain threshold, the system should switch the primary and secondary copies. There are circumstances where this will not happen
HU02273 | 8.5.0.0 | When write I/O workload to a HyperSwap volume site reaches a certain threshold, the system should switch the primary and secondary copies. There are circumstances where this will not happen
HU02274 | 8.4.2.0 | Due to a timing issue in how events are handled an active quorum loss and re-acquisition cycle can be triggered with a 3124 error |
HU02274 | 8.5.0.0 | Due to a timing issue in how events are handled an active quorum loss and re-acquisition cycle can be triggered with a 3124 error |
HU02275 | 8.3.0.0 | Performing any sort of hardware maintenance during an upgrade may cause a cluster to destroy itself, with nodes entering candidate or service state 550 |
HU02277 | 7.8.1.13 | RAID parity scrubbing can become stalled causing an accumulation of media errors leading to multiple drive failures with the possibility of data integrity loss. For more details refer to this Flash |
HU02277 | 8.4.0.2 | RAID parity scrubbing can become stalled causing an accumulation of media errors leading to multiple drive failures with the possibility of data integrity loss. For more details refer to this Flash |
HU02277 | 8.2.1.12 | RAID parity scrubbing can become stalled causing an accumulation of media errors leading to multiple drive failures with the possibility of data integrity loss. For more details refer to this Flash |
HU02277 | 8.5.0.0 | RAID parity scrubbing can become stalled causing an accumulation of media errors leading to multiple drive failures with the possibility of data integrity loss. For more details refer to this Flash |
HU02277 | 8.3.1.3 | RAID parity scrubbing can become stalled causing an accumulation of media errors leading to multiple drive failures with the possibility of data integrity loss. For more details refer to this Flash |
HU02280 | 8.3.1.4 | Spectrum Control or Storage Insights may be unable to collect stats after a Tier 2 recovery or system powerdown |
HU02280 | 8.5.0.0 | Spectrum Control or Storage Insights may be unable to collect stats after a Tier 2 recovery or system powerdown |
HU02280 | 8.4.0.2 | Spectrum Control or Storage Insights may be unable to collect stats after a Tier 2 recovery or system powerdown |
HU02281 | 8.3.1.3 | When upgrading from v8.2.1, or earlier, to v8.3.0, or later, the CLI and GUI may incorrectly show all hosts offline. Checks from the host perspective will show them to be online |
HU02282 | 8.4.0.2 | After a code upgrade the config node may exhibit high write response times. In exceptionally rare circumstances an Mdisk group may be taken offline |
HU02282 | 8.3.1.4 | After a code upgrade the config node may exhibit high write response times. In exceptionally rare circumstances an Mdisk group may be taken offline |
HU02282 | 8.5.0.0 | After a code upgrade the config node may exhibit high write response times. In exceptionally rare circumstances an Mdisk group may be taken offline |
HU02285 | 8.3.1.0 | Single node warmstart due to cache resource allocation issue |
HU02288 | 8.3.0.0 | A node might fail to come online after a reboot or warmstart such as during an upgrade |
HU02288 | 8.2.1.12 | A node might fail to come online after a reboot or warmstart such as during an upgrade |
HU02289 | 8.4.0.0 | An issue with internal resource allocation in high-end systems, with 1000s of mirror copies, may cause multiple warmstarts with the possibility of a loss of access |
HU02289 | 8.3.1.3 | An issue with internal resource allocation in high-end systems, with 1000s of mirror copies, may cause multiple warmstarts with the possibility of a loss of access |
HU02290 | 8.5.0.0 | An issue in the virtualization component can divide up IO resources incorrectly, adversely impacting queuing times for mdisks and CPU cores and leading to a performance impact
HU02291 | 8.5.0.0 | Internal counters for upper cache stage/destage I/O rates and latencies are not collected and zeroes are usually displayed |
HU02291 | 8.4.0.2 | Internal counters for upper cache stage/destage I/O rates and latencies are not collected and zeroes are usually displayed |
HU02292 | 8.3.1.4 | The use of maximum replication delay within Global Mirror may occasionally cause a node warmstart |
HU02292 | 8.2.1.12 | The use of maximum replication delay within Global Mirror may occasionally cause a node warmstart |
HU02292 | 8.5.0.0 | The use of maximum replication delay within Global Mirror may occasionally cause a node warmstart |
HU02292 | 8.4.0.2 | The use of maximum replication delay within Global Mirror may occasionally cause a node warmstart |
HU02308 | 8.3.1.4 | The use of maximum replication delay within Global Mirror may occasionally cause a node warmstart |
HU02308 | 8.2.1.12 | The use of maximum replication delay within Global Mirror may occasionally cause a node warmstart |
HU02308 | 8.5.0.0 | The use of maximum replication delay within Global Mirror may occasionally cause a node warmstart |
HU02308 | 8.4.0.2 | The use of maximum replication delay within Global Mirror may occasionally cause a node warmstart |
HU02295 | 8.4.2.0 | When upgrading from v8.2.1 or v8.3, in the presence of hot spare nodes, an issue with the handling of node metadata may cause a Tier 2 recovery |
HU02295 | 8.3.1.3 | When upgrading from v8.2.1 or v8.3, in the presence of hot spare nodes, an issue with the handling of node metadata may cause a Tier 2 recovery |
HU02295 | 8.2.1.12 | When upgrading from v8.2.1 or v8.3, in the presence of hot spare nodes, an issue with the handling of node metadata may cause a Tier 2 recovery |
HU02295 | 8.5.0.0 | When upgrading from v8.2.1 or v8.3, in the presence of hot spare nodes, an issue with the handling of node metadata may cause a Tier 2 recovery |
HU02296 | 8.3.1.6 | The zero page functionality can become corrupt causing a volume to be initialised with non-zero data |
HU02296 | 8.5.0.0 | The zero page functionality can become corrupt causing a volume to be initialised with non-zero data |
HU02296 | 8.4.2.0 | The zero page functionality can become corrupt causing a volume to be initialised with non-zero data |
HU02296 | 8.4.0.6 | The zero page functionality can become corrupt causing a volume to be initialised with non-zero data |
HU02297 | 8.5.0.0 | Error handling for a failing backend controller can lead to multiple warmstarts |
HU02297 | 8.4.0.7 | Error handling for a failing backend controller can lead to multiple warmstarts |
HU02297 | 8.4.2.0 | Error handling for a failing backend controller can lead to multiple warmstarts |
HU02298 | 8.4.0.0 | A high frequency of 1920 events and restarting of consistency groups may provoke a Tier 2 recovery |
HU02299 | 8.4.0.0 | NVMe drives can become locked due to a missing encryption key condition |
HU02300 | 8.5.0.0 | Use of Enhanced Callhome in censored mode may lead to adverse performance around 02:00 (2AM) |
HU02300 | 8.4.0.2 | Use of Enhanced Callhome in censored mode may lead to adverse performance around 02:00 (2AM) |
HU02301 | 8.5.0.0 | iSCSI hosts connected to iWARP 25G adapters may experience adverse performance impacts |
HU02301 | 8.4.0.2 | iSCSI hosts connected to iWARP 25G adapters may experience adverse performance impacts |
HU02303 | 8.5.0.0 | Configuration node warmstart will occur if mkhostcluster is run with -ignoreseedvolume and the ignored volumes have an id greater than 256 |
HU02303 | 8.3.1.3 | Configuration node warmstart will occur if mkhostcluster is run with -ignoreseedvolume and the ignored volumes have an id greater than 256 |
HU02303 | 8.4.0.2 | Configuration node warmstart will occur if mkhostcluster is run with -ignoreseedvolume and the ignored volumes have an id greater than 256 |
HU02305 | 8.5.0.0 | Configuration node warmstart will occur if mkhostcluster is run with -ignoreseedvolume and the ignored volumes have an id greater than 256 |
HU02305 | 8.3.1.3 | Configuration node warmstart will occur if mkhostcluster is run with -ignoreseedvolume and the ignored volumes have an id greater than 256 |
HU02305 | 8.4.0.2 | Configuration node warmstart will occur if mkhostcluster is run with -ignoreseedvolume and the ignored volumes have an id greater than 256 |
HU02304 | 8.5.0.0 | Some RAID operations for certain NVMe drives may cause adverse I/O performance |
HU02304 | 8.4.0.2 | Some RAID operations for certain NVMe drives may cause adverse I/O performance |
HU02306 | 8.3.1.9 | An offline host port can still be shown as active in lsfabric and the associated host can be shown as online despite being offline |
HU02306 | 8.4.2.0 | An offline host port can still be shown as active in lsfabric and the associated host can be shown as online despite being offline |
HU02306 | 8.5.0.0 | An offline host port can still be shown as active in lsfabric and the associated host can be shown as online despite being offline |
HU02306 | 8.4.0.4 | An offline host port can still be shown as active in lsfabric and the associated host can be shown as online despite being offline |
HU02309 | 8.4.2.0 | Due to a change in how FlashCopy and remote copy interact, multiple warmstarts may occur with the possibility of lease expiries |
HU02309 | 8.5.0.0 | Due to a change in how FlashCopy and remote copy interact, multiple warmstarts may occur with the possibility of lease expiries |
HU02310 | 8.4.0.2 | Where a FlashCopy mapping exists between two volumes in the same Data Reduction Pool and the same I/O group, and the target volume has deduplication enabled, then the target may contain invalid data |
HU02310 | 8.5.0.0 | Where a FlashCopy mapping exists between two volumes in the same Data Reduction Pool and the same I/O group, and the target volume has deduplication enabled, then the target may contain invalid data |
HU02311 | 8.3.1.4 | An issue in volume copy flushing may lead to higher than expected write cache delays |
HU02311 | 8.5.0.0 | An issue in volume copy flushing may lead to higher than expected write cache delays |
HU02311 | 8.4.0.2 | An issue in volume copy flushing may lead to higher than expected write cache delays |
HU02312 | 8.4.0.3 | Changing the preferred node for a volume when it is in a remote copy relationship can result in multiple node warmstarts. For more details refer to this Flash |
HU02312 | 8.5.0.0 | Changing the preferred node for a volume when it is in a remote copy relationship can result in multiple node warmstarts. For more details refer to this Flash |
HU02313 | 8.2.1.12 | When a FlashCore Module (FCM) fails there is a chance that this can trigger other FCMs in the same control enclosure to also fail. If enough additional drives fail, at the same time, this can take the array offline and cause a loss of access to data. For more details refer to this Flash |
HU02313 | 8.4.0.2 | When a FlashCore Module (FCM) fails there is a chance that this can trigger other FCMs in the same control enclosure to also fail. If enough additional drives fail, at the same time, this can take the array offline and cause a loss of access to data. For more details refer to this Flash |
HU02313 | 8.5.0.0 | When a FlashCore Module (FCM) fails there is a chance that this can trigger other FCMs in the same control enclosure to also fail. If enough additional drives fail, at the same time, this can take the array offline and cause a loss of access to data. For more details refer to this Flash |
HU02313 | 8.3.1.4 | When a FlashCore Module (FCM) fails there is a chance that this can trigger other FCMs in the same control enclosure to also fail. If enough additional drives fail, at the same time, this can take the array offline and cause a loss of access to data. For more details refer to this Flash |
HU02314 | 8.3.1.4 | Due to a RAID issue, when a bad block is detected on an NVMe drive there may be multiple node warmstarts with a possibility of a loss of access to data
HU02314 | 8.4.0.0 | Due to a RAID issue, when a bad block is detected on an NVMe drive there may be multiple node warmstarts with a possibility of a loss of access to data
HU02315 | 8.5.0.0 | Failover for VMware iSER hosts may pause I/O for more than 120 seconds |
HU02315 | 8.3.1.4 | Failover for VMware iSER hosts may pause I/O for more than 120 seconds |
HU02315 | 8.4.0.2 | Failover for VMware iSER hosts may pause I/O for more than 120 seconds |
HU02317 | 8.4.0.2 | A DRAID expansion can stall shortly after it is initiated |
HU02317 | 8.5.0.0 | A DRAID expansion can stall shortly after it is initiated |
HU02317 | 8.3.1.4 | A DRAID expansion can stall shortly after it is initiated |
HU02318 | 8.3.0.0 | An issue in the handling of iSCSI host I/O may cause a node to kernel panic and go into service with error 578 |
HU02319 | 8.4.0.3 | The GUI can become unresponsive |
HU02319 | 8.4.1.0 | The GUI can become unresponsive |
HU02319 | 8.5.0.0 | The GUI can become unresponsive |
HU02320 | 8.5.0.6 | A battery fails to perform a recondition. This is identified when 'lsenclosurebattery' shows the 'last_recondition_timestamp' as an empty field on the impacted node
HU02321 | 8.3.1.4 | Where nodes rely on RDMA clustering alone, if a node is removed, warmstarts, or goes down for upgrade, there may be a delay in internode communication leading to lease expiries
HU02321 | 8.4.0.2 | Where nodes rely on RDMA clustering alone, if a node is removed, warmstarts, or goes down for upgrade, there may be a delay in internode communication leading to lease expiries
HU02321 | 8.5.0.0 | Where nodes rely on RDMA clustering alone, if a node is removed, warmstarts, or goes down for upgrade, there may be a delay in internode communication leading to lease expiries
HU02322 | 8.4.0.0 | A deadlock condition in the Data Reduction Pool function may cause multiple node warmstarts and a temporary loss of access to data |
HU02322 | 8.3.1.4 | A deadlock condition in the Data Reduction Pool function may cause multiple node warmstarts and a temporary loss of access to data |
HU02323 | 8.4.0.0 | Stalled I/O during DRAID expansion can cause node warmstarts and a temporary loss of access to data |
HU02323 | 8.3.1.4 | Stalled I/O during DRAID expansion can cause node warmstarts and a temporary loss of access to data |
HU02325 | 8.4.0.3 | Tier 2 and Tier 3 recoveries can fail due to node warmstarts |
HU02325 | 8.5.0.0 | Tier 2 and Tier 3 recoveries can fail due to node warmstarts |
HU02325 | 8.4.1.0 | Tier 2 and Tier 3 recoveries can fail due to node warmstarts |
HU02326 | 8.4.0.3 | Delays in passing messages between nodes in an I/O group can adversely impact write performance |
HU02326 | 8.5.0.0 | Delays in passing messages between nodes in an I/O group can adversely impact write performance |
HU02326 | 8.3.1.6 | Delays in passing messages between nodes in an I/O group can adversely impact write performance |
HU02326 | 8.4.1.0 | Delays in passing messages between nodes in an I/O group can adversely impact write performance |
HU02327 | 8.2.1.15 | Using addvdiskcopy in conjunction with expandvdisk with format may result in the original being overwritten by the new copy, producing blank copies. For more details refer to this Flash |
HU02327 | 8.3.1.6 | Using addvdiskcopy in conjunction with expandvdisk with format may result in the original being overwritten by the new copy, producing blank copies. For more details refer to this Flash |
HU02327 | 8.4.0.0 | Using addvdiskcopy in conjunction with expandvdisk with format may result in the original being overwritten by the new copy, producing blank copies. For more details refer to this Flash |
HU02328 | 8.4.2.0 | Due to an issue with the handling of NVMe registration keys, changing the node WWNN in an active system will cause a lease expiry |
HU02328 | 8.5.0.0 | Due to an issue with the handling of NVMe registration keys, changing the node WWNN in an active system will cause a lease expiry |
HU02331 | 8.5.0.0 | Due to a threshold issue an error code 3400 may appear too often in the event log |
HU02331 | 8.3.1.6 | Due to a threshold issue an error code 3400 may appear too often in the event log |
HU02331 | 8.4.0.3 | Due to a threshold issue an error code 3400 may appear too often in the event log |
HU02331 | 8.4.1.0 | Due to a threshold issue an error code 3400 may appear too often in the event log |
HU02332 | 7.8.1.15 | When an I/O is received from a host with invalid or inconsistent SCSI data but a good checksum, it may cause a node warmstart
HU02332 | 8.5.0.0 | When an I/O is received from a host with invalid or inconsistent SCSI data but a good checksum, it may cause a node warmstart
HU02332 | 8.4.0.3 | When an I/O is received from a host with invalid or inconsistent SCSI data but a good checksum, it may cause a node warmstart
HU02332 | 8.2.1.12 | When an I/O is received from a host with invalid or inconsistent SCSI data but a good checksum, it may cause a node warmstart
HU02332 | 8.3.1.6 | When an I/O is received from a host with invalid or inconsistent SCSI data but a good checksum, it may cause a node warmstart
HU02332 | 8.4.1.0 | When an I/O is received from a host with invalid or inconsistent SCSI data but a good checksum, it may cause a node warmstart
HU02336 | 7.8.1.15 | When an I/O is received from a host with invalid or inconsistent SCSI data but a good checksum, it may cause a node warmstart
HU02336 | 8.5.0.0 | When an I/O is received from a host with invalid or inconsistent SCSI data but a good checksum, it may cause a node warmstart
HU02336 | 8.4.0.3 | When an I/O is received from a host with invalid or inconsistent SCSI data but a good checksum, it may cause a node warmstart
HU02336 | 8.2.1.12 | When an I/O is received from a host with invalid or inconsistent SCSI data but a good checksum, it may cause a node warmstart
HU02336 | 8.3.1.6 | When an I/O is received from a host with invalid or inconsistent SCSI data but a good checksum, it may cause a node warmstart
HU02336 | 8.4.1.0 | When an I/O is received from a host with invalid or inconsistent SCSI data but a good checksum, it may cause a node warmstart
HU02334 | 8.4.0.0 | Node to node connectivity issues may trigger repeated logins/logouts resulting in a single node warmstart |
HU02335 | 8.4.0.7 | Cannot properly set the site for a host in a multi-site configuration (HyperSwap or stretched) via the GUI
HU02335 | 8.4.1.0 | Cannot properly set the site for a host in a multi-site configuration (HyperSwap or stretched) via the GUI
HU02338 | 7.8.1.13 | An issue in the setting up of reverse FlashCopy mappings can cause the background copy to finish prematurely providing an incomplete target image |
HU02338 | 8.5.0.0 | An issue in the setting up of reverse FlashCopy mappings can cause the background copy to finish prematurely providing an incomplete target image |
HU02338 | 8.3.1.4 | An issue in the setting up of reverse FlashCopy mappings can cause the background copy to finish prematurely providing an incomplete target image |
HU02338 | 8.4.0.2 | An issue in the setting up of reverse FlashCopy mappings can cause the background copy to finish prematurely providing an incomplete target image |
HU02339 | 8.5.0.5 | Multiple node warmstarts can occur if a system has direct Fibre Channel connections to an IBM i host, causing loss of access to data |
HU02339 | 8.6.0.0 | Multiple node warmstarts can occur if a system has direct Fibre Channel connections to an IBM i host, causing loss of access to data |
HU02339 | 8.5.2.0 | Multiple node warmstarts can occur if a system has direct Fibre Channel connections to an IBM i host, causing loss of access to data |
HU02339 | 8.4.0.7 | Multiple node warmstarts can occur if a system has direct Fibre Channel connections to an IBM i host, causing loss of access to data |
HU02340 | 8.3.1.4 | High replication workloads can cause multiple warmstarts with a loss of access at the partner cluster |
HU02340 | 8.4.1.0 | High replication workloads can cause multiple warmstarts with a loss of access at the partner cluster |
HU02340 | 8.4.0.3 | High replication workloads can cause multiple warmstarts with a loss of access at the partner cluster |
HU02340 | 8.5.0.0 | High replication workloads can cause multiple warmstarts with a loss of access at the partner cluster |
HU02341 | 8.3.1.2 | Cloud Callhome can become disabled due to an internal issue. A related error may not be recorded in the event log
HU02342 | 8.4.0.4 | Occasionally when an offline drive returns to online state later than its peers in the same RAID array there can be multiple node warmstarts that send nodes into a service state |
HU02342 | 8.3.1.6 | Occasionally when an offline drive returns to online state later than its peers in the same RAID array there can be multiple node warmstarts that send nodes into a service state |
HU02342 | 7.8.1.15 | Occasionally when an offline drive returns to online state later than its peers in the same RAID array there can be multiple node warmstarts that send nodes into a service state |
HU02342 | 8.5.0.0 | Occasionally when an offline drive returns to online state later than its peers in the same RAID array there can be multiple node warmstarts that send nodes into a service state |
HU02342 | 8.2.1.15 | Occasionally when an offline drive returns to online state later than its peers in the same RAID array there can be multiple node warmstarts that send nodes into a service state |
HU02343 | 8.5.0.0 | For Huawei Dorado V3 Series backend controllers it is possible that not all available target ports will be utilized. This would reduce the potential IO throughput and can cause high read/write backend queue time on the cluster impacting front end latency for hosts |
HU02343 | 8.3.1.7 | For Huawei Dorado V3 Series backend controllers it is possible that not all available target ports will be utilized. This would reduce the potential IO throughput and can cause high read/write backend queue time on the cluster impacting front end latency for hosts |
HU02343 | 8.4.0.6 | For Huawei Dorado V3 Series backend controllers it is possible that not all available target ports will be utilized. This would reduce the potential IO throughput and can cause high read/write backend queue time on the cluster impacting front end latency for hosts |
HU02345 | 8.3.1.6 | When connectivity to nodes in a local or remote cluster is lost, inflight IO can become stuck in an aborting state, consuming system resources and potentially adversely impacting performance |
HU02345 | 8.5.0.0 | When connectivity to nodes in a local or remote cluster is lost, inflight IO can become stuck in an aborting state, consuming system resources and potentially adversely impacting performance |
HU02345 | 8.4.2.0 | When connectivity to nodes in a local or remote cluster is lost, inflight IO can become stuck in an aborting state, consuming system resources and potentially adversely impacting performance |
HU02345 | 8.4.0.4 | When connectivity to nodes in a local or remote cluster is lost, inflight IO can become stuck in an aborting state, consuming system resources and potentially adversely impacting performance |
HU02346 | 8.5.0.0 | A mismatch between LBA stored by snapshot and disk allocator processes in the thin-provisioning component may cause a single node warmstart |
HU02346 | 8.4.2.0 | A mismatch between LBA stored by snapshot and disk allocator processes in the thin-provisioning component may cause a single node warmstart |
HU02347 | 8.5.0.0 | An issue in the handling of boot drive failure can lead to the partner drive also being failed |
HU02349 | 8.4.2.0 | Using an incorrect FlashCopy consistency group id to stop a consistency group will result in a Tier 2 recovery if the incorrect id is >501
HU02349 | 8.5.0.0 | Using an incorrect FlashCopy consistency group id to stop a consistency group will result in a Tier 2 recovery if the incorrect id is >501
HU02353 | 8.4.0.0 | The GUI will refuse to start a GMCV relationship if one of the change volumes has an ID of 0 |
HU02354 | 8.2.1.12 | An issue in the handling of read transfers may cause hung host IOs leading to a node warmstart |
HU02358 | 8.3.1.3 | An issue in Remote Copy, that stalls a switch of direction, can cause I/O timeouts leading to a node warmstart |
HU02358 | 8.4.0.0 | An issue in Remote Copy, that stalls a switch of direction, can cause I/O timeouts leading to a node warmstart |
HU02358 | 8.2.1.12 | An issue in Remote Copy, that stalls a switch of direction, can cause I/O timeouts leading to a node warmstart |
HU02360 | 8.4.1.0 | Cloud Callhome may stop working and provide no indication of this in the event log. For more details refer to this Flash |
HU02360 | 8.3.1.5 | Cloud Callhome may stop working and provide no indication of this in the event log. For more details refer to this Flash |
HU02360 | 8.5.0.0 | Cloud Callhome may stop working and provide no indication of this in the event log. For more details refer to this Flash |
HU02360 | 8.4.0.3 | Cloud Callhome may stop working and provide no indication of this in the event log. For more details refer to this Flash |
HU02362 | 8.5.0.0 | When the RAID scrub process encounters bad grains, the peak response time for reads and writes can be adversely impacted |
HU02362 | 8.4.1.0 | When the RAID scrub process encounters bad grains, the peak response time for reads and writes can be adversely impacted |
HU02362 | 8.3.1.6 | When the RAID scrub process encounters bad grains, the peak response time for reads and writes can be adversely impacted |
HU02362 | 8.4.0.3 | When the RAID scrub process encounters bad grains, the peak response time for reads and writes can be adversely impacted |
HU02364 | 8.3.1.9 | False 989001 Managed Disk Group space warnings can be generated |
HU02364 | 8.4.0.0 | False 989001 Managed Disk Group space warnings can be generated |
HU02366 | 8.3.1.6 | Slow internal resource reclamation by the RAID component can cause a node warmstart |
HU02366 | 8.4.2.0 | Slow internal resource reclamation by the RAID component can cause a node warmstart |
HU02366 | 8.4.0.3 | Slow internal resource reclamation by the RAID component can cause a node warmstart |
HU02366 | 8.5.0.0 | Slow internal resource reclamation by the RAID component can cause a node warmstart |
HU02366 | 8.2.1.15 | Slow internal resource reclamation by the RAID component can cause a node warmstart |
HU02367 | 8.5.0.0 | An issue with how RAID handles drive failures may lead to a node warmstart |
HU02367 | 8.4.2.0 | An issue with how RAID handles drive failures may lead to a node warmstart |
HU02367 | 8.3.1.9 | An issue with how RAID handles drive failures may lead to a node warmstart |
HU02367 | 8.4.0.10 | An issue with how RAID handles drive failures may lead to a node warmstart |
HU02368 | 8.5.0.0 | When consistency groups from code levels prior to v8.3 are carried through to v8.3 or later, there can be multiple node warmstarts with the possibility of a loss of access
HU02368 | 8.4.2.0 | When consistency groups from code levels prior to v8.3 are carried through to v8.3 or later, there can be multiple node warmstarts with the possibility of a loss of access
HU02372 | 8.3.1.9 | Host SAS port 4 is missing from the GUI view on some systems. |
HU02372 | 8.5.0.6 | Host SAS port 4 is missing from the GUI view on some systems. |
HU02372 | 8.4.0.10 | Host SAS port 4 is missing from the GUI view on some systems. |
HU02373 | 8.4.2.0 | An incorrect compression flag in metadata can take a DRP offline |
HU02373 | 8.3.1.6 | An incorrect compression flag in metadata can take a DRP offline |
HU02373 | 8.4.0.3 | An incorrect compression flag in metadata can take a DRP offline |
HU02373 | 8.5.0.0 | An incorrect compression flag in metadata can take a DRP offline |
HU02374 | 8.5.0.0 | Hosts with Emulex 16Gbps HBAs may become unable to communicate with a system with 8Gbps Fibre Channel ports, after the host HBA is upgraded to firmware version 12.8.364.11. This does not apply to systems with 16Gb or 32Gb Fibre Channel ports |
HU02374 | 8.2.1.15 | Hosts with Emulex 16Gbps HBAs may become unable to communicate with a system with 8Gbps Fibre Channel ports, after the host HBA is upgraded to firmware version 12.8.364.11. This does not apply to systems with 16Gb or 32Gb Fibre Channel ports |
HU02374 | 8.4.0.6 | Hosts with Emulex 16Gbps HBAs may become unable to communicate with a system with 8Gbps Fibre Channel ports, after the host HBA is upgraded to firmware version 12.8.364.11. This does not apply to systems with 16Gb or 32Gb Fibre Channel ports |
HU02374 | 8.4.1.0 | Hosts with Emulex 16Gbps HBAs may become unable to communicate with a system with 8Gbps Fibre Channel ports, after the host HBA is upgraded to firmware version 12.8.364.11. This does not apply to systems with 16Gb or 32Gb Fibre Channel ports |
HU02375 | 8.4.0.3 | An issue in how the GUI handles volume data can adversely impact its responsiveness |
HU02375 | 8.5.0.0 | An issue in how the GUI handles volume data can adversely impact its responsiveness |
HU02375 | 8.3.1.6 | An issue in how the GUI handles volume data can adversely impact its responsiveness |
HU02376 | 8.3.1.6 | FlashCopy maps may get stuck at 99% due to inconsistent metadata accounting between nodes |
HU02376 | 8.4.0.3 | FlashCopy maps may get stuck at 99% due to inconsistent metadata accounting between nodes |
HU02376 | 8.4.1.0 | FlashCopy maps may get stuck at 99% due to inconsistent metadata accounting between nodes |
HU02376 | 8.5.0.0 | FlashCopy maps may get stuck at 99% due to inconsistent metadata accounting between nodes |
HU02377 | 8.3.1.6 | A race condition in DRP may stop IO being processed leading to timeouts |
HU02378 | 8.4.2.0 | Multiple maximum replication delay events and Remote Copy relationship restarts can cause multiple node warmstarts with the possibility of a loss of access |
HU02378 | 8.5.0.0 | Multiple maximum replication delay events and Remote Copy relationship restarts can cause multiple node warmstarts with the possibility of a loss of access |
HU02381 | 8.5.0.0 | When the proxy server password is changed to one with more than 40 characters the config node will warmstart |
HU02381 | 8.4.0.3 | When the proxy server password is changed to one with more than 40 characters the config node will warmstart |
HU02381 | 8.4.2.0 | When the proxy server password is changed to one with more than 40 characters the config node will warmstart |
HU02382 | 8.4.2.0 | A complex interaction of tasks, including drive firmware cleanup and syslog reconfiguration, can cause a 10 second delay when each node unpends (e.g. during an upgrade)
HU02382 | 8.4.0.6 | A complex interaction of tasks, including drive firmware cleanup and syslog reconfiguration, can cause a 10 second delay when each node unpends (e.g. during an upgrade)
HU02382 | 8.5.0.0 | A complex interaction of tasks, including drive firmware cleanup and syslog reconfiguration, can cause a 10 second delay when each node unpends (e.g. during an upgrade)
HU02383 | 8.4.0.6 | An additional 20 second IO delay can occur when a system update commits |
HU02383 | 8.5.0.0 | An additional 20 second IO delay can occur when a system update commits |
HU02383 | 8.4.2.0 | An additional 20 second IO delay can occur when a system update commits |
HU02384 | 8.4.0.4 | An inter-node message queue can become stalled, leading to an I/O timeout warmstart, and temporary loss of access |
HU02384 | 8.4.2.0 | An inter-node message queue can become stalled, leading to an I/O timeout warmstart, and temporary loss of access |
HU02384 | 8.5.0.0 | An inter-node message queue can become stalled, leading to an I/O timeout warmstart, and temporary loss of access |
HU02384 | 8.3.1.6 | An inter-node message queue can become stalled, leading to an I/O timeout warmstart, and temporary loss of access |
HU02385 | 8.5.0.0 | Unexpected emails from the Inventory Script can be found on the mail server
HU02385 | 8.4.2.0 | Unexpected emails from the Inventory Script can be found on the mail server
HU02386 | 8.4.0.7 | Enclosure fault LED can remain on due to race condition when location LED state is changed |
HU02386 | 8.4.2.0 | Enclosure fault LED can remain on due to race condition when location LED state is changed |
HU02386 | 8.5.0.0 | Enclosure fault LED can remain on due to race condition when location LED state is changed |
HU02387 | 8.4.0.3 | When using the GUI the maximum Data Reduction Pools limitation incorrectly includes child pools |
HU02387 | 8.5.0.0 | When using the GUI the maximum Data Reduction Pools limitation incorrectly includes child pools |
HU02388 | 8.4.2.0 | GUI can hang randomly due to an out of memory issue after running any task |
HU02388 | 8.4.0.4 | GUI can hang randomly due to an out of memory issue after running any task |
HU02388 | 8.5.0.0 | GUI can hang randomly due to an out of memory issue after running any task |
HU02390 | 8.4.0.0 | A memory handling issue in the REST API may cause an out-of-memory condition when listing a large number of volumes |
HU02390 | 8.3.1.3 | A memory handling issue in the REST API may cause an out-of-memory condition when listing a large number of volumes |
HU02391 | 8.3.1.9 | An issue with how websockets connections are handled can cause the GUI to become unresponsive requiring a restart of the Tomcat server |
HU02391 | 8.5.0.0 | An issue with how websockets connections are handled can cause the GUI to become unresponsive requiring a restart of the Tomcat server |
HU02391 | 8.4.0.10 | An issue with how websockets connections are handled can cause the GUI to become unresponsive requiring a restart of the Tomcat server |
HU02392 | 8.5.0.0 | Validation in the Upload Support Package feature will reject new case number formats in the PMR field |
HU02392 | 8.4.0.3 | Validation in the Upload Support Package feature will reject new case number formats in the PMR field |
HU02392 | 8.3.1.6 | Validation in the Upload Support Package feature will reject new case number formats in the PMR field |
HU02393 | 8.4.2.0 | Automatic resize of compressed/thin volumes may fail causing warmstarts on both nodes in an I/O group |
HU02393 | 8.5.0.0 | Automatic resize of compressed/thin volumes may fail causing warmstarts on both nodes in an I/O group |
HU02393 | 8.3.1.6 | Automatic resize of compressed/thin volumes may fail causing warmstarts on both nodes in an I/O group |
HU02393 | 8.4.0.4 | Automatic resize of compressed/thin volumes may fail causing warmstarts on both nodes in an I/O group |
HU02393 | 8.2.1.15 | Automatic resize of compressed/thin volumes may fail causing warmstarts on both nodes in an I/O group |
HU02397 | 8.4.2.0 | A Data Reduction Pool, with deduplication enabled, can retain some stale state after deletion and recreation. This has no immediate effect. However if later on a node goes offline this condition can cause the pool to be taken offline |
HU02397 | 8.3.1.6 | A Data Reduction Pool, with deduplication enabled, can retain some stale state after deletion and recreation. This has no immediate effect. However if later on a node goes offline this condition can cause the pool to be taken offline |
HU02397 | 8.4.0.4 | A Data Reduction Pool, with deduplication enabled, can retain some stale state after deletion and recreation. This has no immediate effect. However if later on a node goes offline this condition can cause the pool to be taken offline |
HU02397 | 8.5.0.0 | A Data Reduction Pool, with deduplication enabled, can retain some stale state after deletion and recreation. This has no immediate effect. However if later on a node goes offline this condition can cause the pool to be taken offline |
HU02399 | 8.3.1.6 | Boot drives may be reported as having invalid state by the GUI, even though they are online |
HU02400 | 8.2.1.15 | A problem in the virtualization component of the system can cause a migration IO to be submitted in an incorrect context resulting in a node warmstart. In some cases it is possible that this IO has been submitted to an incorrect location on the backend, which can cause data corruption of an isolated small area |
HU02400 | 8.5.0.0 | A problem in the virtualization component of the system can cause a migration IO to be submitted in an incorrect context resulting in a node warmstart. In some cases it is possible that this IO has been submitted to an incorrect location on the backend, which can cause data corruption of an isolated small area |
HU02400 | 8.4.0.4 | A problem in the virtualization component of the system can cause a migration IO to be submitted in an incorrect context resulting in a node warmstart. In some cases it is possible that this IO has been submitted to an incorrect location on the backend, which can cause data corruption of an isolated small area |
HU02400 | 8.3.1.6 | A problem in the virtualization component of the system can cause a migration IO to be submitted in an incorrect context resulting in a node warmstart. In some cases it is possible that this IO has been submitted to an incorrect location on the backend, which can cause data corruption of an isolated small area |
HU02401 | 8.3.1.6 | EasyTier can move extents between identical mdisks until one runs out of space |
HU02401 | 8.2.1.15 | EasyTier can move extents between identical mdisks until one runs out of space |
HU02401 | 8.4.0.4 | EasyTier can move extents between identical mdisks until one runs out of space |
HU02401 | 8.5.0.0 | EasyTier can move extents between identical mdisks until one runs out of space |
HU02402 | 8.5.0.0 | The remote support feature may use more memory than expected causing a temporary loss of access |
HU02402 | 8.4.0.7 | The remote support feature may use more memory than expected causing a temporary loss of access |
HU02405 | 8.4.0.4 | An issue in the zero detection of the new Message Passing (MP) functionality can cause thin volumes to allocate space when writing zeros |
HU02405 | 8.5.0.0 | An issue in the zero detection of the new Message Passing (MP) functionality can cause thin volumes to allocate space when writing zeros |
HU02405 | 8.4.2.0 | An issue in the zero detection of the new Message Passing (MP) functionality can cause thin volumes to allocate space when writing zeros |
HU02406 | 8.2.1.15 | An interoperability issue between Cisco NX-OS firmware and the Spectrum Virtualize Fibre Channel driver can cause a node warmstart on NPIV failback (for example during an upgrade) with the potential for a loss of access. For more details refer to this Flash |
HU02406 | 7.8.1.15 | An interoperability issue between Cisco NX-OS firmware and the Spectrum Virtualize Fibre Channel driver can cause a node warmstart on NPIV failback (for example during an upgrade) with the potential for a loss of access. For more details refer to this Flash |
HU02406 | 8.4.3.1 | An interoperability issue between Cisco NX-OS firmware and the Spectrum Virtualize Fibre Channel driver can cause a node warmstart on NPIV failback (for example during an upgrade) with the potential for a loss of access. For more details refer to this Flash |
HU02406 | 8.5.0.0 | An interoperability issue between Cisco NX-OS firmware and the Spectrum Virtualize Fibre Channel driver can cause a node warmstart on NPIV failback (for example during an upgrade) with the potential for a loss of access. For more details refer to this Flash |
HU02406 | 8.3.1.6 | An interoperability issue between Cisco NX-OS firmware and the Spectrum Virtualize Fibre Channel driver can cause a node warmstart on NPIV failback (for example during an upgrade) with the potential for a loss of access. For more details refer to this Flash |
HU02406 | 8.4.2.1 | An interoperability issue between Cisco NX-OS firmware and the Spectrum Virtualize Fibre Channel driver can cause a node warmstart on NPIV failback (for example during an upgrade) with the potential for a loss of access. For more details refer to this Flash |
HU02406 | 8.4.0.4 | An interoperability issue between Cisco NX-OS firmware and the Spectrum Virtualize Fibre Channel driver can cause a node warmstart on NPIV failback (for example during an upgrade) with the potential for a loss of access. For more details refer to this Flash |
HU02409 | 8.5.0.0 | If the rmhost command is executed with -force for an MS Windows server, an issue in the iSCSI driver can cause the relevant target initiator to become unresponsive
HU02409 | 8.4.0.6 | If the rmhost command is executed with -force for an MS Windows server, an issue in the iSCSI driver can cause the relevant target initiator to become unresponsive
HU02409 | 8.3.1.7 | If the rmhost command is executed with -force for an MS Windows server, an issue in the iSCSI driver can cause the relevant target initiator to become unresponsive
HU02410 | 8.5.0.0 | A timing window issue in the transition to a spare node can cause a cluster-wide Tier 2 recovery |
HU02410 | 8.3.1.7 | A timing window issue in the transition to a spare node can cause a cluster-wide Tier 2 recovery |
HU02410 | 8.4.2.0 | A timing window issue in the transition to a spare node can cause a cluster-wide Tier 2 recovery |
HU02410 | 8.4.0.6 | A timing window issue in the transition to a spare node can cause a cluster-wide Tier 2 recovery |
HU02411 | 8.5.0.0 | An issue in the NVMe drive presence checking can result in a node warmstart |
HU02411 | 8.4.2.0 | An issue in the NVMe drive presence checking can result in a node warmstart |
HU02414 | 8.3.1.6 | Under a specific sequence and timing of circumstances the garbage collection process can time out and take a pool offline temporarily
HU02414 | 8.5.0.0 | Under a specific sequence and timing of circumstances the garbage collection process can time out and take a pool offline temporarily
HU02414 | 8.4.2.0 | Under a specific sequence and timing of circumstances the garbage collection process can time out and take a pool offline temporarily
HU02414 | 8.4.0.4 | Under a specific sequence and timing of circumstances the garbage collection process can time out and take a pool offline temporarily
HU02415 | 8.5.0.0 | An issue in garbage collection IO flow logic can take a pool offline temporarily |
HU02416 | 8.5.0.0 | A timing window issue in DRP can cause a valid condition to be deemed invalid triggering a single node warmstart |
HU02417 | 8.5.0.0 | Restoring a reverse FlashCopy mapping to a volume that is also the source of an incremental FlashCopy mapping can take longer than expected |
HU02418 | 8.5.0.0 | During a DRAID array rebuild data can be written to an incorrect location. For more details refer to this Flash |
HU02418 | 8.3.1.6 | During a DRAID array rebuild data can be written to an incorrect location. For more details refer to this Flash |
HU02418 | 8.4.0.5 | During a DRAID array rebuild data can be written to an incorrect location. For more details refer to this Flash |
HU02418 | 8.4.2.1 | During a DRAID array rebuild data can be written to an incorrect location. For more details refer to this Flash |
HU02419 | 8.5.0.0 | During creation of a drive FRU id the resulting unique number can contain a space character, which can lead to CLI commands that return this value presenting it as a truncated string
HU02419 | 8.4.0.2 | During creation of a drive FRU id the resulting unique number can contain a space character, which can lead to CLI commands that return this value presenting it as a truncated string
HU02419 | 8.4.2.0 | During creation of a drive FRU id the resulting unique number can contain a space character, which can lead to CLI commands that return this value presenting it as a truncated string
HU02419 | 8.3.1.6 | During creation of a drive FRU id the resulting unique number can contain a space character, which can lead to CLI commands that return this value presenting it as a truncated string
HU02420 | 8.6.0.0 | During an array copyback it is possible for a memory leak to result in the progress stalling and a warmstart of all nodes, resulting in a temporary loss of access
HU02420 | 8.5.2.0 | During an array copyback it is possible for a memory leak to result in the progress stalling and a warmstart of all nodes, resulting in a temporary loss of access
HU02420 | 8.4.0.10 | During an array copyback it is possible for a memory leak to result in the progress stalling and a warmstart of all nodes, resulting in a temporary loss of access
HU02420 | 8.5.0.6 | During an array copyback it is possible for a memory leak to result in the progress stalling and a warmstart of all nodes, resulting in a temporary loss of access
HU02421 | 8.4.2.1 | A logic fault in the socket communication sub-system can cause multiple node warmstarts when more than 8 external clients attempt to connect. It is possible for this to lead to a loss of access |
HU02421 | 8.5.0.0 | A logic fault in the socket communication sub-system can cause multiple node warmstarts when more than 8 external clients attempt to connect. It is possible for this to lead to a loss of access |
HU02422 | 8.3.1.6 | GUI performance can be degraded when displaying large numbers of volumes or other objects |
HU02422 | 8.4.2.0 | GUI performance can be degraded when displaying large numbers of volumes or other objects |
HU02422 | 8.5.0.0 | GUI performance can be degraded when displaying large numbers of volumes or other objects |
HU02422 | 8.4.0.4 | GUI performance can be degraded when displaying large numbers of volumes or other objects |
HU02423 | 8.5.0.0 | Volume copies may be taken offline even though there is sufficient free capacity |
HU02423 | 8.4.0.6 | Volume copies may be taken offline even though there is sufficient free capacity |
HU02423 | 8.4.2.0 | Volume copies may be taken offline even though there is sufficient free capacity |
HU02424 | 8.4.0.0 | Frequent GUI refreshing adversely impacts usability on some screens |
HU02424 | 8.3.1.6 | Frequent GUI refreshing adversely impacts usability on some screens |
HU02425 | 8.5.0.0 | An issue in the handling of internal messages, when the system has a high IO workload to two or more different FlashCopy maps in the same dependency chain, can result in incorrect counters. The node will warmstart to clear this condition. |
HU02425 | 8.4.2.0 | An issue in the handling of internal messages, when the system has a high IO workload to two or more different FlashCopy maps in the same dependency chain, can result in incorrect counters. The node will warmstart to clear this condition. |
HU02425 | 8.3.1.6 | An issue in the handling of internal messages, when the system has a high IO workload to two or more different FlashCopy maps in the same dependency chain, can result in incorrect counters. The node will warmstart to clear this condition. |
HU02425 | 8.4.0.3 | An issue in the handling of internal messages, when the system has a high IO workload to two or more different FlashCopy maps in the same dependency chain, can result in incorrect counters. The node will warmstart to clear this condition. |
HU02426 | 8.4.0.4 | Where an email server accepts the STARTTLS command during the initial handshake, if TLS v1.2 is disabled or not supported then the system will be unable to send email alerts
HU02426 | 8.5.0.0 | Where an email server accepts the STARTTLS command during the initial handshake, if TLS v1.2 is disabled or not supported then the system will be unable to send email alerts
HU02426 | 8.4.2.0 | Where an email server accepts the STARTTLS command during the initial handshake, if TLS v1.2 is disabled or not supported then the system will be unable to send email alerts
HU02428 | 8.5.0.0 | Issuing a movevdisk CLI command immediately after removing an associated GMCV relationship can trigger a Tier 2 recovery |
HU02428 | 8.4.0.6 | Issuing a movevdisk CLI command immediately after removing an associated GMCV relationship can trigger a Tier 2 recovery |
HU02429 | 7.8.1.14 | System can go offline shortly after changing the SMTP settings using the chemailserver command via the GUI |
HU02429 | 8.4.0.2 | System can go offline shortly after changing the SMTP settings using the chemailserver command via the GUI |
HU02429 | 8.2.1.12 | System can go offline shortly after changing the SMTP settings using the chemailserver command via the GUI |
HU02429 | 8.5.0.0 | System can go offline shortly after changing the SMTP settings using the chemailserver command via the GUI |
HU02429 | 8.3.1.6 | System can go offline shortly after changing the SMTP settings using the chemailserver command via the GUI |
HU02430 | 8.5.0.0 | Expanding or shrinking the real size of FlashCopy target volumes can cause recurring node warmstarts and may cause nodes to revert to candidate state |
HU02430 | 8.4.2.1 | Expanding or shrinking the real size of FlashCopy target volumes can cause recurring node warmstarts and may cause nodes to revert to candidate state |
HU02433 | 8.2.1.15 | When a BIOS upgrade occurs excessive tracefile entries can be generated |
HU02433 | 8.3.1.7 | When a BIOS upgrade occurs excessive tracefile entries can be generated |
HU02434 | 8.4.0.6 | An issue in the internal accounting of FlashCopy resources can lead to multiple node warmstarts taking a cluster offline |
HU02434 | 8.5.0.0 | An issue in the internal accounting of FlashCopy resources can lead to multiple node warmstarts taking a cluster offline |
HU02435 | 8.5.0.0 | The removal of deduplicated volumes can cause repeated node warmstarts and the possibility of offline Data Reduction Pools |
HU02435 | 8.4.2.1 | The removal of deduplicated volumes can cause repeated node warmstarts and the possibility of offline Data Reduction Pools |
HU02437 | 8.5.0.0 | Error 2700 is not reported in the Event Log when an incorrect NTP server IP is entered |
HU02438 | 8.5.0.0 | Certain conditions can provoke a cache behaviour that unbalances workload distribution across CPU cores leading to performance impact |
HU02438 | 8.4.0.6 | Certain conditions can provoke a cache behaviour that unbalances workload distribution across CPU cores leading to performance impact |
HU02439 | 8.5.0.0 | An IP partnership between a pre-v8.4.2 system and a v8.4.2 or later system may be disconnected because of a keepalive timeout
HU02439 | 8.4.0.10 | An IP partnership between a pre-v8.4.2 system and a v8.4.2 or later system may be disconnected because of a keepalive timeout
HU02440 | 8.5.0.0 | Using the migrateexts command when both source and target mdisks are unmanaged can trigger a Tier 2 recovery |
HU02440 | 8.4.0.6 | Using the migrateexts command when both source and target mdisks are unmanaged can trigger a Tier 2 recovery |
HU02441 | 8.5.1.0 | Safeguarded Copy with DRP can cause node warmstarts and mdisk timeouts |
HU02441 | 8.5.3.0 | Safeguarded Copy with DRP can cause node warmstarts and mdisk timeouts |
HU02441 | 8.6.0.0 | Safeguarded Copy with DRP can cause node warmstarts and mdisk timeouts |
HU02441 | 8.5.0.3 | Safeguarded Copy with DRP can cause node warmstarts and mdisk timeouts |
HU02441 | 8.4.3.0 | Safeguarded Copy with DRP can cause node warmstarts and mdisk timeouts |
HU02441 | 8.4.2.1 | Safeguarded Copy with DRP can cause node warmstarts and mdisk timeouts |
HU02486 | 8.5.1.0 | Safeguarded Copy with DRP can cause node warmstarts and mdisk timeouts |
HU02486 | 8.5.3.0 | Safeguarded Copy with DRP can cause node warmstarts and mdisk timeouts |
HU02486 | 8.6.0.0 | Safeguarded Copy with DRP can cause node warmstarts and mdisk timeouts |
HU02486 | 8.5.0.3 | Safeguarded Copy with DRP can cause node warmstarts and mdisk timeouts |
HU02486 | 8.4.3.0 | Safeguarded Copy with DRP can cause node warmstarts and mdisk timeouts |
HU02486 | 8.4.2.1 | Safeguarded Copy with DRP can cause node warmstarts and mdisk timeouts |
HU02442 | 8.4.0.6 | Issuing a lspotentialarraysize CLI command with an invalid drive class can trigger a Tier 2 recovery |
HU02442 | 8.5.0.0 | Issuing a lspotentialarraysize CLI command with an invalid drive class can trigger a Tier 2 recovery |
HU02443 | 8.3.1.9 | An inefficiency in the RAID code that processes requests to free memory can cause the request to timeout leading to a node warmstart |
HU02443 | 8.5.0.0 | An inefficiency in the RAID code that processes requests to free memory can cause the request to timeout leading to a node warmstart |
HU02443 | 8.4.0.10 | An inefficiency in the RAID code that processes requests to free memory can cause the request to timeout leading to a node warmstart |
HU02444 | 8.4.0.6 | Some security scanners can report unauthenticated targets against all the iSCSI IP addresses of a node |
HU02444 | 8.5.0.0 | Some security scanners can report unauthenticated targets against all the iSCSI IP addresses of a node |
HU02445 | 8.5.0.0 | When attempting to expand a volume, if the volume size is greater than 1TB the GUI may not display the expansion pop-up window |
HU02446 | 8.6.0.0 | An invalid alert relating to GMCV freeze time can be displayed |
HU02446 | 8.5.1.0 | An invalid alert relating to GMCV freeze time can be displayed |
HU02448 | 8.5.0.0 | IP Replication statistics displayed in the GUI and XML can be incorrect |
HU02449 | 8.5.0.6 | Due to a timing issue, it is possible (but very unlikely) that maintenance on a SAS 92F/92G expansion enclosure could cause multiple node warmstarts, leading to a loss of access |
HU02450 | 8.4.0.7 | A defect in the frame switching functionality of 32Gbps HBA firmware can cause a node warmstart |
HU02450 | 8.5.0.0 | A defect in the frame switching functionality of 32Gbps HBA firmware can cause a node warmstart |
HU02451 | 8.3.1.7 | An incorrect IP Quorum lease extension setting can lead to a node warmstart |
HU02452 | 8.5.0.0 | An issue in NVMe I/O write functionality can cause a single node warmstart |
HU02452 | 8.4.0.7 | An issue in NVMe I/O write functionality can cause a single node warmstart |
HU02453 | 8.3.1.9 | It may not be possible to connect to GUI or CLI without a restart of the Tomcat server |
HU02453 | 8.6.0.0 | It may not be possible to connect to GUI or CLI without a restart of the Tomcat server |
HU02453 | 8.5.0.2 | It may not be possible to connect to GUI or CLI without a restart of the Tomcat server |
HU02453 | 8.4.0.10 | It may not be possible to connect to GUI or CLI without a restart of the Tomcat server |
HU02454 | 8.5.0.0 | Large numbers of 2251 errors are recorded in the Event Log even though LDAP appears to be working |
HU02455 | 8.5.0.0 | After converting a system from 3-site to 2-site, a timing window issue can trigger a cluster Tier 2 recovery
HU02455 | 8.4.0.7 | After converting a system from 3-site to 2-site, a timing window issue can trigger a cluster Tier 2 recovery
HU02455 | 8.3.1.7 | After converting a system from 3-site to 2-site, a timing window issue can trigger a cluster Tier 2 recovery
HU02456 | 8.5.0.10 | Unseating an NVMe drive after automanage failure can cause a node to warmstart
HU02456 | 8.5.2.0 | Unseating an NVMe drive after automanage failure can cause a node to warmstart
HU02460 | 8.5.0.0 | Multiple node warmstarts triggered by ports on the 32G fibre channel adapter failing |
HU02460 | 8.3.1.7 | Multiple node warmstarts triggered by ports on the 32G fibre channel adapter failing |
HU02461 | 8.5.0.0 | Livedump collection can fail multiple times |
HU02462 | 8.5.0.12 | A node can warmstart when a FlashCopy volume is flushing, quiesces, and has pinned data |
HU02463 | 8.6.0.0 | LDAP user accounts can become locked out because of multiple failed login attempts |
HU02463 | 8.5.0.6 | LDAP user accounts can become locked out because of multiple failed login attempts |
HU02463 | 8.5.1.0 | LDAP user accounts can become locked out because of multiple failed login attempts |
HU02463 | 8.4.0.10 | LDAP user accounts can become locked out because of multiple failed login attempts |
HU02464 | 8.5.1.0 | An issue in the processing of NVMe host logouts can cause multiple node warmstarts |
HU02464 | 8.6.0.0 | An issue in the processing of NVMe host logouts can cause multiple node warmstarts |
HU02464 | 8.5.0.5 | An issue in the processing of NVMe host logouts can cause multiple node warmstarts |
HU02466 | 8.4.0.7 | An issue in the handling of drive failures can result in multiple node warmstarts |
HU02466 | 8.5.0.6 | An issue in the handling of drive failures can result in multiple node warmstarts |
HU02466 | 8.3.1.7 | An issue in the handling of drive failures can result in multiple node warmstarts |
HU02467 | 8.3.1.9 | When one node disappears from the cluster, the surviving node can be unable to achieve quorum allegiance in a timely manner, causing it to lease expire |
HU02467 | 8.4.0.0 | When one node disappears from the cluster, the surviving node can be unable to achieve quorum allegiance in a timely manner, causing it to lease expire |
HU02468 | 8.6.0.0 | lsvdisk preferred_node_id filter not working correctly |
HU02468 | 8.5.0.6 | lsvdisk preferred_node_id filter not working correctly |
HU02468 | 8.5.1.0 | lsvdisk preferred_node_id filter not working correctly |
HU02471 | 8.6.0.0 | After starting a FlashCopy map with -restore in a graph with a GMCV secondary disk that was stopped with -access, there can be a data integrity issue |
HU02471 | 8.5.1.0 | After starting a FlashCopy map with -restore in a graph with a GMCV secondary disk that was stopped with -access, there can be a data integrity issue |
HU02471 | 8.3.1.9 | After starting a FlashCopy map with -restore in a graph with a GMCV secondary disk that was stopped with -access, there can be a data integrity issue |
HU02471 | 7.8.1.15 | After starting a FlashCopy map with -restore in a graph with a GMCV secondary disk that was stopped with -access, there can be a data integrity issue |
HU02471 | 8.4.0.10 | After starting a FlashCopy map with -restore in a graph with a GMCV secondary disk that was stopped with -access, there can be a data integrity issue |
HU02474 | 8.5.0.6 | An SFP failure can cause a node warmstart |
HU02474 | 8.3.1.9 | An SFP failure can cause a node warmstart |
HU02474 | 8.4.0.7 | An SFP failure can cause a node warmstart |
HU02475 | 8.6.0.0 | A power outage can cause reboots on nodes with 25Gb Ethernet adapters, necessitating a T3 recovery |
HU02475 | 8.5.0.6 | A power outage can cause reboots on nodes with 25Gb Ethernet adapters, necessitating a T3 recovery |
HU02475 | 8.5.2.0 | A power outage can cause reboots on nodes with 25Gb Ethernet adapters, necessitating a T3 recovery |
HU02475 | 8.4.0.9 | A power outage can cause reboots on nodes with 25Gb Ethernet adapters, necessitating a T3 recovery |
HU02479 | 8.4.0.7 | If an NVMe host cancels a large number of I/O requests, multiple node warmstarts might occur |
HU02479 | 8.5.0.5 | If an NVMe host cancels a large number of I/O requests, multiple node warmstarts might occur |
HU02482 | 8.4.0.7 | An issue with 25Gb Ethernet adapter card firmware can cause the node to warmstart should a specific signal be received from the iSER switch. It is possible for this signal to be propagated to all nodes, resulting in a loss of access to data |
HU02483 | 8.6.0.0 | T2 Recovery occurred after mkrcrelationship command was run |
HU02483 | 8.5.2.0 | T2 Recovery occurred after mkrcrelationship command was run |
HU02484 | 8.6.0.0 | The GUI does not allow expansion of DRP thin or compressed volumes |
HU02484 | 8.5.2.0 | The GUI does not allow expansion of DRP thin or compressed volumes |
HU02484 | 8.5.0.5 | The GUI does not allow expansion of DRP thin or compressed volumes |
HU02485 | 8.3.1.9 | Recurring node warmstarts on systems with DRP that have been upgraded to 8.3.1.7 or 8.3.1.8 |
HU02487 | 8.5.2.0 | Problems expanding the size of a volume using the GUI |
HU02487 | 8.6.0.0 | Problems expanding the size of a volume using the GUI |
HU02487 | 8.5.0.6 | Problems expanding the size of a volume using the GUI |
HU02488 | 8.5.0.3 | Remote Copy partnerships disconnect every 15 minutes with error 987301 (Connection to a configured remote cluster has been lost) |
HU02488 | 8.5.1.0 | Remote Copy partnerships disconnect every 15 minutes with error 987301 (Connection to a configured remote cluster has been lost) |
HU02488 | 8.6.0.0 | Remote Copy partnerships disconnect every 15 minutes with error 987301 (Connection to a configured remote cluster has been lost) |
HU02490 | 8.6.0.0 | Upon first or subsequent boots of an FS9500, a 1034 error may appear in the event log stating that the CPU PCIe link is degraded |
HU02490 | 8.5.2.0 | Upon first or subsequent boots of an FS9500, a 1034 error may appear in the event log stating that the CPU PCIe link is degraded |
HU02490 | 8.5.0.6 | Upon first or subsequent boots of an FS9500, a 1034 error may appear in the event log stating that the CPU PCIe link is degraded |
HU02491 | 8.5.2.0 | On upgrade from v8.3.x, v8.4.0 or v8.4.1 to v8.5, if the system has Global Mirror with Change Volumes relationships, a single node warmstart can occur |
HU02491 | 8.6.0.0 | On upgrade from v8.3.x, v8.4.0 or v8.4.1 to v8.5, if the system has Global Mirror with Change Volumes relationships, a single node warmstart can occur |
HU02491 | 8.5.0.5 | On upgrade from v8.3.x, v8.4.0 or v8.4.1 to v8.5, if the system has Global Mirror with Change Volumes relationships, a single node warmstart can occur |
HU02492 | 8.5.0.5 | Configuration backup can fail after upgrade to v8.5. This only occurs on a very small number of systems that have a particular internal cluster state. If a system is running v8.5 and does not have an informational eventlog entry with error ID 988100 (CRON job failed), then it is not affected. |
HU02492 | 8.6.0.0 | Configuration backup can fail after upgrade to v8.5. This only occurs on a very small number of systems that have a particular internal cluster state. If a system is running v8.5 and does not have an informational eventlog entry with error ID 988100 (CRON job failed), then it is not affected. |
HU02492 | 8.5.2.0 | Configuration backup can fail after upgrade to v8.5. This only occurs on a very small number of systems that have a particular internal cluster state. If a system is running v8.5 and does not have an informational eventlog entry with error ID 988100 (CRON job failed), then it is not affected. |
HU02494 | 8.5.0.5 | A system with a DNS server configured, which cannot ping the server, will log information events in the eventlog. In some environments the firewall blocks ping packets but allows DNS lookup, so this APAR disables these events. |
HU02494 | 8.5.2.0 | A system with a DNS server configured, which cannot ping the server, will log information events in the eventlog. In some environments the firewall blocks ping packets but allows DNS lookup, so this APAR disables these events. |
HU02494 | 8.6.0.0 | A system with a DNS server configured, which cannot ping the server, will log information events in the eventlog. In some environments the firewall blocks ping packets but allows DNS lookup, so this APAR disables these events. |
HU02497 | 8.5.2.0 | A system with direct Fibre Channel connections to a host, or to another Spectrum Virtualize system, might experience multiple node warmstarts |
HU02497 | 8.4.0.7 | A system with direct Fibre Channel connections to a host, or to another Spectrum Virtualize system, might experience multiple node warmstarts |
HU02497 | 8.6.0.0 | A system with direct Fibre Channel connections to a host, or to another Spectrum Virtualize system, might experience multiple node warmstarts |
HU02497 | 8.5.0.5 | A system with direct Fibre Channel connections to a host, or to another Spectrum Virtualize system, might experience multiple node warmstarts |
HU02498 | 8.5.0.5 | If a host object with no ports exists on upgrade to v8.5, the GUI volume mapping panel may fail to load. |
HU02498 | 8.6.0.0 | If a host object with no ports exists on upgrade to v8.5, the GUI volume mapping panel may fail to load. |
HU02498 | 8.5.2.0 | If a host object with no ports exists on upgrade to v8.5, the GUI volume mapping panel may fail to load. |
HU02499 | 8.3.1.9 | A pop-up with the message 'The server was unable to process the request' may appear due to an invalid time stamp in the file used to provide the pop-up reminder |
HU02500 | 8.5.0.5 | If a volume in a FlashCopy mapping is deleted, and the deletion fails (for example because the user does not have the correct permissions to delete that volume), node warmstarts can occur, leading to loss of access |
HU02501 | 8.5.2.0 | If an internal I/O timeout occurs in a RAID array, a node warmstart can occur |
HU02501 | 8.6.0.0 | If an internal I/O timeout occurs in a RAID array, a node warmstart can occur |
HU02501 | 8.5.0.5 | If an internal I/O timeout occurs in a RAID array, a node warmstart can occur |
HU02502 | 8.5.0.5 | On upgrade to v8.4.2 or later with FlashCopy active, a node warmstart can occur, leading to a loss of access |
HU02502 | 8.5.2.0 | On upgrade to v8.4.2 or later with FlashCopy active, a node warmstart can occur, leading to a loss of access |
HU02502 | 8.6.0.0 | On upgrade to v8.4.2 or later with FlashCopy active, a node warmstart can occur, leading to a loss of access |
HU02503 | 8.5.0.5 | The Date / Time panel can fail to load in the GUI when a timezone set via the CLI is not supported by the GUI |
HU02503 | 8.5.1.0 | The Date / Time panel can fail to load in the GUI when a timezone set via the CLI is not supported by the GUI |
HU02503 | 8.6.0.0 | The Date / Time panel can fail to load in the GUI when a timezone set via the CLI is not supported by the GUI |
HU02504 | 8.5.0.5 | The Date / Time panel can display an incorrect timezone and default to manual time setting rather than NTP |
HU02504 | 8.5.1.0 | The Date / Time panel can display an incorrect timezone and default to manual time setting rather than NTP |
HU02504 | 8.6.0.0 | The Date / Time panel can display an incorrect timezone and default to manual time setting rather than NTP |
HU02505 | 8.5.0.5 | A single node warmstart can occur on v8.5 systems running DRP, due to a low-probability timing window during normal running |
HU02505 | 8.5.2.0 | A single node warmstart can occur on v8.5 systems running DRP, due to a low-probability timing window during normal running |
HU02505 | 8.6.0.0 | A single node warmstart can occur on v8.5 systems running DRP, due to a low-probability timing window during normal running |
HU02506 | 8.6.0.0 | On a system where NPIV is disabled or in transitional mode, certain hosts may fail to log in after a node warmstart or reboot (for example during an upgrade), leading to loss of access. |
HU02506 | 8.5.2.0 | On a system where NPIV is disabled or in transitional mode, certain hosts may fail to log in after a node warmstart or reboot (for example during an upgrade), leading to loss of access. |
HU02506 | 8.5.0.4 | On a system where NPIV is disabled or in transitional mode, certain hosts may fail to log in after a node warmstart or reboot (for example during an upgrade), leading to loss of access. |
HU02507 | 8.5.2.0 | A timing window exists in the code that handles host aborts for an ATS (Atomic Test and Set) command, if the host is NVMe-attached. This can cause repeated node warmstarts. |
HU02507 | 8.6.0.0 | A timing window exists in the code that handles host aborts for an ATS (Atomic Test and Set) command, if the host is NVMe-attached. This can cause repeated node warmstarts. |
HU02507 | 8.5.0.6 | A timing window exists in the code that handles host aborts for an ATS (Atomic Test and Set) command, if the host is NVMe-attached. This can cause repeated node warmstarts. |
HU02508 | 8.6.0.0 | The mkippartnership CLI command does not allow a portset with a space in the name as a parameter. |
HU02508 | 8.5.2.0 | The mkippartnership CLI command does not allow a portset with a space in the name as a parameter. |
HU02508 | 8.5.0.6 | The mkippartnership CLI command does not allow a portset with a space in the name as a parameter. |
HU02509 | 8.5.0.5 | Upgrade to v8.5 can cause a single node warmstart, if nodes previously underwent a memory upgrade while DRP was in use |
HU02509 | 8.5.2.0 | Upgrade to v8.5 can cause a single node warmstart, if nodes previously underwent a memory upgrade while DRP was in use |
HU02509 | 8.6.0.0 | Upgrade to v8.5 can cause a single node warmstart, if nodes previously underwent a memory upgrade while DRP was in use |
HU02511 | 8.5.0.6 | Code version 8.5.0 includes a change in the driver setting for the 25Gb Ethernet adapter. This change can cause port errors, which in turn can cause iSCSI path loss symptoms |
HU02511 | 8.4.0.9 | Code version 8.5.0 includes a change in the driver setting for the 25Gb Ethernet adapter. This change can cause port errors, which in turn can cause iSCSI path loss symptoms |
HU02511 | 8.5.2.0 | Code version 8.5.0 includes a change in the driver setting for the 25Gb Ethernet adapter. This change can cause port errors, which in turn can cause iSCSI path loss symptoms |
HU02511 | 8.6.0.0 | Code version 8.5.0 includes a change in the driver setting for the 25Gb Ethernet adapter. This change can cause port errors, which in turn can cause iSCSI path loss symptoms |
HU02512 | 8.5.0.5 | An FS5000 system with a Fibre Channel direct-attached host can experience multiple node warmstarts |
HU02512 | 8.5.2.0 | An FS5000 system with a Fibre Channel direct-attached host can experience multiple node warmstarts |
HU02512 | 8.6.0.0 | An FS5000 system with a Fibre Channel direct-attached host can experience multiple node warmstarts |
HU02513 | 8.5.0.6 | When one side of a cluster has been upgraded from 8.4.2 to either 8.5.0 or 8.5.2 while the other side is still running 8.4.2, running either the 'mkippartnership' or 'rmippartnership' command from the cluster that is running 8.5.0 or 8.5.2 can cause an iplink node warmstart |
HU02514 | 8.5.0.5 | Firmware upgrade may fail for certain drive types, with the error message CMMVC6567E The Apply Drive Software task cannot be initiated because no download images were found in the package file |
HU02514 | 8.5.2.0 | Firmware upgrade may fail for certain drive types, with the error message CMMVC6567E The Apply Drive Software task cannot be initiated because no download images were found in the package file |
HU02514 | 8.6.0.0 | Firmware upgrade may fail for certain drive types, with the error message CMMVC6567E The Apply Drive Software task cannot be initiated because no download images were found in the package file |
HU02515 | 8.5.0.5 | Fan speed on FlashSystem 9500 can be higher than expected, if a high drive temperature is detected |
HU02515 | 8.5.2.0 | Fan speed on FlashSystem 9500 can be higher than expected, if a high drive temperature is detected |
HU02515 | 8.6.0.0 | Fan speed on FlashSystem 9500 can be higher than expected, if a high drive temperature is detected |
HU02518 | 8.4.0.8 | Certain hardware platforms running 8.4.0.7 have an issue with the Trusted Platform Module (TPM). This causes issues communicating with encryption keyservers and results in invalid SSL certificates |
HU02519 | 8.5.0.6 | Safeguarded copy source vdisks go offline when their mappings and target vdisks are deleted and then recreated in rapid succession |
HU02519 | 8.6.0.0 | Safeguarded copy source vdisks go offline when their mappings and target vdisks are deleted and then recreated in rapid succession |
HU02519 | 8.5.2.0 | Safeguarded copy source vdisks go offline when their mappings and target vdisks are deleted and then recreated in rapid succession |
HU02520 | 8.5.0.6 | Safeguarded copy source vdisks go offline when their mappings and target vdisks are deleted and then recreated in rapid succession |
HU02520 | 8.6.0.0 | Safeguarded copy source vdisks go offline when their mappings and target vdisks are deleted and then recreated in rapid succession |
HU02520 | 8.5.2.0 | Safeguarded copy source vdisks go offline when their mappings and target vdisks are deleted and then recreated in rapid succession |
HU02522 | 8.5.0.6 | When upgrading from 8.4.1 or lower to a level that uses IP portsets (8.4.2 or higher), there is an issue when the port ID on each node has a different remote copy use |
HU02523 | 8.5.2.0 | Host WWPN state falsely shows as degraded for a direct-attached host after upgrading to 8.5.0.2 |
HU02523 | 8.6.0.0 | Host WWPN state falsely shows as degraded for a direct-attached host after upgrading to 8.5.0.2 |
HU02525 | 8.5.3.0 | Code versions 8.4.2.x, 8.5.0.0 - 8.5.0.5 and 8.5.1.0 permitted the use of an iSCSI prefix of 0. However, during an upgrade to 8.5.x, this can prevent all iSCSI hosts from re-establishing iSCSI sessions, thereby causing access loss |
HU02525 | 8.5.0.6 | Code versions 8.4.2.x, 8.5.0.0 - 8.5.0.5 and 8.5.1.0 permitted the use of an iSCSI prefix of 0. However, during an upgrade to 8.5.x, this can prevent all iSCSI hosts from re-establishing iSCSI sessions, thereby causing access loss |
HU02525 | 8.6.0.0 | Code versions 8.4.2.x, 8.5.0.0 - 8.5.0.5 and 8.5.1.0 permitted the use of an iSCSI prefix of 0. However, during an upgrade to 8.5.x, this can prevent all iSCSI hosts from re-establishing iSCSI sessions, thereby causing access loss |
HU02528 | 8.5.3.0 | When upgrading to 8.5.0 or higher, a situation may occur whereby a variable is not locked at the correct point, resulting in a mismatch. The system code detects this and initiates a warmstart to reset any erroneous values |
HU02528 | 8.5.0.6 | When upgrading to 8.5.0 or higher, a situation may occur whereby a variable is not locked at the correct point, resulting in a mismatch. The system code detects this and initiates a warmstart to reset any erroneous values |
HU02528 | 8.6.0.0 | When upgrading to 8.5.0 or higher, a situation may occur whereby a variable is not locked at the correct point, resulting in a mismatch. The system code detects this and initiates a warmstart to reset any erroneous values |
HU02529 | 8.6.0.0 | A single node warmstart may occur due to a rare timing window, when a disconnection occurs between two systems in an IP replication partnership |
HU02530 | 8.5.2.0 | Upgrades from 8.4.2 or 8.5 fail to start on some platforms |
HU02530 | 8.6.0.0 | Upgrades from 8.4.2 or 8.5 fail to start on some platforms |
HU02530 | 8.5.0.6 | Upgrades from 8.4.2 or 8.5 fail to start on some platforms |
HU02532 | 8.4.0.9 | Nodes that are running 8.4.0.7 or 8.4.0.8, or upgrading to either of these levels, may suffer asserts if NVMe hosts are configured |
HU02534 | 8.5.3.0 | When upgrading from 7.8.1.5 to 8.5.0.4, PowerHA stops working due to SSH configuration changes |
HU02534 | 8.5.0.6 | When upgrading from 7.8.1.5 to 8.5.0.4, PowerHA stops working due to SSH configuration changes |
HU02534 | 8.6.0.0 | When upgrading from 7.8.1.5 to 8.5.0.4, PowerHA stops working due to SSH configuration changes |
HU02538 | 8.6.0.0 | Some systems may suffer a thread locking issue caused by the background copy / cleaning progress for FlashCopy maps |
HU02538 | 8.5.2.0 | Some systems may suffer a thread locking issue caused by the background copy / cleaning progress for FlashCopy maps |
HU02539 | 8.6.0.0 | If an IP address is moved to a different port on a node, the old routing table entries do not get refreshed. Therefore, the IP address may be inaccessible through the new port |
HU02539 | 8.5.0.10 | If an IP address is moved to a different port on a node, the old routing table entries do not get refreshed. Therefore, the IP address may be inaccessible through the new port |
HU02539 | 8.5.4.0 | If an IP address is moved to a different port on a node, the old routing table entries do not get refreshed. Therefore, the IP address may be inaccessible through the new port |
HU02540 | 8.6.0.0 | Deleting a HyperSwap volume copy with dependent FlashCopy mappings can trigger repeated node warmstarts |
HU02540 | 8.5.0.6 | Deleting a HyperSwap volume copy with dependent FlashCopy mappings can trigger repeated node warmstarts |
HU02540 | 8.5.2.1 | Deleting a HyperSwap volume copy with dependent FlashCopy mappings can trigger repeated node warmstarts |
HU02541 | 8.5.3.0 | In some circumstances, the deduplication replay process on a data reduction pool can become stuck. During this process, IO to the pool is quiesced and must wait for the replay to complete. Because it does not complete, IO to the entire storage pool hangs, which can eventually lead to a loss of access to data. |
HU02541 | 8.6.0.0 | In some circumstances, the deduplication replay process on a data reduction pool can become stuck. During this process, IO to the pool is quiesced and must wait for the replay to complete. Because it does not complete, IO to the entire storage pool hangs, which can eventually lead to a loss of access to data. |
HU02541 | 8.5.0.6 | In some circumstances, the deduplication replay process on a data reduction pool can become stuck. During this process, IO to the pool is quiesced and must wait for the replay to complete. Because it does not complete, IO to the entire storage pool hangs, which can eventually lead to a loss of access to data. |
HU02542 | 8.5.0.6 | On systems that are running 8.4.2 or 8.5.0, when deleting a HyperSwap volume, or HyperSwap volume copy, that has Safeguarded copy snapshots configured, a T2 recovery can occur, causing loss of access to data. |
HU02543 | 8.5.0.6 | After upgrade to 8.5.0, the 'lshost -delim' command shows hosts in offline state, while 'lshost' shows them online |
HU02544 | 8.5.2.2 | On systems running 8.5.2.1, if you are not logged in as superuser and you try to create a partnership for policy-based replication, or enable policy-based replication on an existing partnership, then this can trigger a single node warmstart. |
HU02544 | 8.6.0.0 | On systems running 8.5.2.1, if you are not logged in as superuser and you try to create a partnership for policy-based replication, or enable policy-based replication on an existing partnership, then this can trigger a single node warmstart. |
HU02545 | 8.6.0.0 | When following the 'removing and replacing a faulty node canister' procedure, the satask chbootdrive -replacecanister command fails to clear the reported 545 error; instead, the replacement canister reboots into a 525 / 522 service state |
HU02546 | 8.5.2.2 | On systems running 8.5.2.1, and with Policy-based replication configured, if you created more than 1PB of replicated volumes then this can lead to a loss of hardened data |
HU02546 | 8.6.0.0 | On systems running 8.5.2.1, and with Policy-based replication configured, if you created more than 1PB of replicated volumes then this can lead to a loss of hardened data |
HU02549 | 8.5.3.0 | When upgrading from a lower level to 8.5 or higher for the first time, an unexpected node warmstart may occur that can lead to a stalled upgrade |
HU02549 | 8.6.0.0 | When upgrading from a lower level to 8.5 or higher for the first time, an unexpected node warmstart may occur that can lead to a stalled upgrade |
HU02549 | 8.5.0.6 | When upgrading from a lower level to 8.5 or higher for the first time, an unexpected node warmstart may occur that can lead to a stalled upgrade |
HU02551 | 8.5.3.0 | When creating multiple volumes with a high mirroring sync rate, a node warmstart may be triggered due to internal resource constraints |
HU02551 | 8.6.0.0 | When creating multiple volumes with a high mirroring sync rate, a node warmstart may be triggered due to internal resource constraints |
HU02551 | 8.5.0.6 | When creating multiple volumes with a high mirroring sync rate, a node warmstart may be triggered due to internal resource constraints |
HU02553 | 8.5.0.7 | Remote copy relationships may not correctly display the name of the vdisk on the remote cluster |
HU02553 | 8.6.0.0 | Remote copy relationships may not correctly display the name of the vdisk on the remote cluster |
HU02553 | 8.5.3.0 | Remote copy relationships may not correctly display the name of the vdisk on the remote cluster |
HU02555 | 8.6.0.0 | A node may warmstart if the system is configured for remote authorization, but no remote authorization service, such as LDAP, has been configured |
HU02555 | 8.5.3.0 | A node may warmstart if the system is configured for remote authorization, but no remote authorization service, such as LDAP, has been configured |
HU02555 | 8.4.0.10 | A node may warmstart if the system is configured for remote authorization, but no remote authorization service, such as LDAP, has been configured |
HU02555 | 8.5.0.7 | A node may warmstart if the system is configured for remote authorization, but no remote authorization service, such as LDAP, has been configured |
HU02556 | 8.6.0.0 | In rare circumstances, a FlashSystem 9500 (or SV3) node might be unable to boot, requiring a replacement of the boot drive and TPM |
HU02557 | 8.5.0.7 | Systems may be unable to upgrade from pre-8.5.0 to 8.5.0 due to a previous node upgrade and certain DRP conditions existing |
HU02558 | 8.5.4.0 | A timing window exists if a node encounters repeated timeouts on I/O compression requests. This can cause two threads to conflict with each other, thereby causing a deadlock condition to occur. |
HU02558 | 8.5.0.6 | A timing window exists if a node encounters repeated timeouts on I/O compression requests. This can cause two threads to conflict with each other, thereby causing a deadlock condition to occur. |
HU02558 | 8.6.0.0 | A timing window exists if a node encounters repeated timeouts on I/O compression requests. This can cause two threads to conflict with each other, thereby causing a deadlock condition to occur. |
HU02559 | 8.5.3.0 | A GUI resource issue may cause an out-of-memory condition, leading to the CIMOM and GUI becoming unresponsive, or showing incomplete information |
HU02559 | 8.5.0.6 | A GUI resource issue may cause an out-of-memory condition, leading to the CIMOM and GUI becoming unresponsive, or showing incomplete information |
HU02559 | 8.6.0.0 | A GUI resource issue may cause an out-of-memory condition, leading to the CIMOM and GUI becoming unresponsive, or showing incomplete information |
HU02559 | 8.4.0.10 | A GUI resource issue may cause an out-of-memory condition, leading to the CIMOM and GUI becoming unresponsive, or showing incomplete information |
HU02560 | 8.5.0.6 | When creating a SAS host using the GUI, a portset is incorrectly added. The command fails with CMMVC9777E, as the portset parameter is not supported with the given type of host. |
HU02561 | 8.5.3.0 | If there is a high number of FC mappings sharing the same target, the internal array that is used to track the FC mappings is mishandled, thereby causing it to overrun. This will cause a cluster-wide warmstart to occur |
HU02561 | 8.6.0.0 | If there is a high number of FC mappings sharing the same target, the internal array that is used to track the FC mappings is mishandled, thereby causing it to overrun. This will cause a cluster-wide warmstart to occur |
HU02561 | 8.4.0.10 | If there is a high number of FC mappings sharing the same target, the internal array that is used to track the FC mappings is mishandled, thereby causing it to overrun. This will cause a cluster-wide warmstart to occur |
HU02561 | 8.5.0.6 | If there is a high number of FC mappings sharing the same target, the internal array that is used to track the FC mappings is mishandled, thereby causing it to overrun. This will cause a cluster-wide warmstart to occur |
HU02561 | 8.3.1.9 | If there is a high number of FC mappings sharing the same target, the internal array that is used to track the FC mappings is mishandled, thereby causing it to overrun. This will cause a cluster-wide warmstart to occur |
HU02562 | 8.4.0.10 | A node can warmstart when a 32 Gb Fibre Channel adapter receives an unexpected asynchronous event via internal mailbox commands. This is a transient failure caused during DMA operations |
HU02562 | 8.6.0.0 | A node can warmstart when a 32 Gb Fibre Channel adapter receives an unexpected asynchronous event via internal mailbox commands. This is a transient failure caused during DMA operations |
HU02562 | 8.5.3.0 | A node can warmstart when a 32 Gb Fibre Channel adapter receives an unexpected asynchronous event via internal mailbox commands. This is a transient failure caused during DMA operations |
HU02562 | 8.5.0.6 | A node can warmstart when a 32 Gb Fibre Channel adapter receives an unexpected asynchronous event via internal mailbox commands. This is a transient failure caused during DMA operations |
HU02563 | 8.4.0.10 | Improve DIMM slot identification for memory errors |
HU02563 | 8.5.3.0 | Improve DIMM slot identification for memory errors |
HU02563 | 8.6.0.0 | Improve DIMM slot identification for memory errors |
HU02563 | 8.5.0.6 | Improve DIMM slot identification for memory errors |
HU02564 | 8.5.0.6 | The 'charraymember' command fails with a degraded DRAID array, even though the syntax of the command is correct |
HU02564 | 8.3.1.9 | The 'charraymember' command fails with a degraded DRAID array, even though the syntax of the command is correct |
HU02564 | 8.4.0.10 | The 'charraymember' command fails with a degraded DRAID array, even though the syntax of the command is correct |
HU02565 | 8.5.0.8 | Node warmstart when generating data compression savings data for 'lsvdiskanalysis' |
HU02565 | 8.6.0.0 | Node warmstart when generating data compression savings data for 'lsvdiskanalysis' |
HU02567 | 8.5.3.0 | Due to a low probability timing window, FlashCopy reads can occur indefinitely to an offline vdisk. This can cause host write delays to FlashCopy target volumes that can exceed 6 minutes |
HU02567 | 8.6.0.0 | Due to a low probability timing window, FlashCopy reads can occur indefinitely to an offline vdisk. This can cause host write delays to FlashCopy target volumes that can exceed 6 minutes |
HU02568 | 8.6.0.0 | Unable to create a remote copy relationship with 'mkrcrelationship' when the Aux volume ID is greater than 10,000 and one of the systems in the set of partnered systems is limited to 10,000 volumes, either due to the limits of the platform (hardware) or the installed software version |
HU02569 | 8.6.0.0 | Due to a low-probability timing window, when processing I/O from both SCSI and NVMe hosts, a node may warmstart to clear the condition |
HU02569 | 8.5.3.0 | Due to a low-probability timing window, when processing I/O from both SCSI and NVMe hosts, a node may warmstart to clear the condition |
HU02571 | 8.4.0.10 | In a HyperSwap cluster, a Tier 2 recovery may occur after manually shutting down both nodes that are in one I/O group |
HU02572 | 8.6.0.0 | When controllers running specified code levels with SAS storage are power cycled or rebooted, there is a chance that 56 bytes of data will be incorrectly restored into the cache, leading to undetected data corruption. The system will attempt to flush the cache before an upgrade, so this defect is less likely during an upgrade. |
HU02572 | 8.5.0.7 | When controllers running specified code levels with SAS storage are power cycled or rebooted, there is a chance that 56 bytes of data will be incorrectly restored into the cache, leading to undetected data corruption. The system will attempt to flush the cache before an upgrade, so this defect is less likely during an upgrade. |
HU02572 | 8.5.4.0 | When controllers running specified code levels with SAS storage are power cycled or rebooted, there is a chance that 56 bytes of data will be incorrectly restored into the cache, leading to undetected data corruption. The system will attempt to flush the cache before an upgrade, so this defect is less likely during an upgrade. |
HU02573 | 8.5.0.10 | HBA firmware can cause a port to appear to be flapping. The port will not work again until the HBA is restarted by rebooting the node. |
HU02573 | 8.6.0.0 | HBA firmware can cause a port to appear to be flapping. The port will not work again until the HBA is restarted by rebooting the node. |
HU02579 | 8.5.0.7 | The GUI 'Add External iSCSI Storage' wizard does not work with portsets. The ports are shown but are not selectable |
HU02579 | 8.6.0.0 | The GUI 'Add External iSCSI Storage' wizard does not work with portsets. The ports are shown but are not selectable |
HU02579 | 8.5.3.0 | The GUI 'Add External iSCSI Storage' wizard does not work with portsets. The ports are shown but are not selectable |
HU02580 | 8.6.0.0 | If FlashCopy mappings are force stopped, and the targets are in a remote copy relationship, then a node may warmstart |
HU02580 | 8.5.3.0 | If FlashCopy mappings are force stopped, and the targets are in a remote copy relationship, then a node may warmstart |
HU02581 | 8.6.0.0 | Due to a low probability timing window, a node warmstart might occur when I/O is sent to a partner node before the partner node recognizes that the disk is online |
HU02581 | 8.5.3.0 | Due to a low probability timing window, a node warmstart might occur when I/O is sent to a partner node before the partner node recognizes that the disk is online |
HU02583 | 8.5.3.0 | FCM drive ports may be excluded after a failed drive firmware download. Depending on the number of drives impacted, this may take the RAID array offline |
HU02583 | 8.6.0.0 | FCM drive ports may be excluded after a failed drive firmware download. Depending on the number of drives impacted, this may take the RAID array offline |
HU02584 | 8.6.0.0 | If a HyperSwap volume is created with cache disabled in a Data Reduction Pool (DRP), multiple node warmstarts may occur. |
HU02585 | 8.6.1.0 | An unstable connection between the Storage Virtualize system and an external virtualized storage system can sometimes result in a cluster recovery occurring |
HU02585 | 8.5.0.12 | An unstable connection between the Storage Virtualize system and an external virtualized storage system can sometimes result in a cluster recovery occurring |
HU02585 | 8.6.0.1 | An unstable connection between the Storage Virtualize system and an external virtualized storage system can sometimes result in a cluster recovery occurring |
HU02585 | 8.7.0.0 | An unstable connection between the Storage Virtualize system and an external virtualized storage system can sometimes result in a cluster recovery occurring |
HU02586 | 8.5.0.8 | When deleting a safeguarded copy volume which is related to a restore operation and another related volume is offline, the system may warmstart repeatedly |
HU02586 | 8.6.0.0 | When deleting a safeguarded copy volume which is related to a restore operation and another related volume is offline, the system may warmstart repeatedly |
HU02586 | 8.5.4.0 | When deleting a safeguarded copy volume which is related to a restore operation and another related volume is offline, the system may warmstart repeatedly |
HU02589 | 8.5.4.0 | Reducing the expiration date of snapshots can cause volume creation and deletion to stall |
HU02589 | 8.6.0.0 | Reducing the expiration date of snapshots can cause volume creation and deletion to stall |
HU02591 | 8.5.0.12 | Multiple node asserts can occur when running commands with the 'preferred node' filter during an upgrade to 8.5.0.0 and above. |
HU02592 | 8.6.0.0 | In some scenarios DRP can request RAID to attempt a read by reconstructing data from other strips. In certain cases this can result in a node warmstart |
HU02593 | 8.3.1.9 | NVMe drive is incorrectly reporting end of life due to flash degradation |
HU02593 | 8.4.0.10 | NVMe drive is incorrectly reporting end of life due to flash degradation |
HU02593 | 8.5.0.0 | NVMe drive is incorrectly reporting end of life due to flash degradation |
HU02594 | 8.5.4.0 | Initiating drive firmware update via management user interface for one drive class can prompt all drives to be updated |
HU02594 | 8.5.0.8 | Initiating drive firmware update via management user interface for one drive class can prompt all drives to be updated |
HU02594 | 8.6.0.0 | Initiating drive firmware update via management user interface for one drive class can prompt all drives to be updated |
HU02597 | 8.4.0.10 | A single node may warmstart to recover from the situation where different fibres update the completed count for the allocation extent in question |
HU02600 | 8.6.0.0 | Single node warmstart caused by a rare race condition triggered by multiple aborts and I/O issues |
IC57642 | 7.8.1.5 | A complex combination of failure conditions in the fabric connecting nodes can result in lease expiries, possibly cluster-wide |
IC57642 | 8.1.0.0 | A complex combination of failure conditions in the fabric connecting nodes can result in lease expiries, possibly cluster-wide |
IC80230 | 7.4.0.0 | Both nodes warmstart due to Ethernet storm |
IC85931 | 7.6.0.0 | When the user is copying iostats files between nodes, the automatic clean-up process may occasionally result in a failure message (ID 980440) in the event log |
IC85931 | 7.5.0.8 | When the user is copying iostats files between nodes, the automatic clean-up process may occasionally result in a failure message (ID 980440) in the event log |
IC89562 | 7.3.0.1 | Node warmstart when handling a large number of XCOPY commands |
IC89608 | 7.3.0.1 | Node warmstart when handling a large number of XCOPY commands |
IC90374 | 7.3.0.5 | Node warmstart due to an I/O deadlock when using FlashCopy functions |
IC90799 | 7.3.0.8 | Node warmstart when drive medium error detected at the same time as drive is changing state to offline |
IC92356 | 7.5.0.0 | Improve DMP for handling 2500 event for V7000 using Unified storage |
IC92665 | 7.3.0.1 | Multiple node warmstarts caused by iSCSI initiator using the same IQN as SVC or Storwize |
IC92993 | 7.3.0.1 | Fix Procedure for 1686 not replacing drive correctly |
IC94781 | 7.3.0.1 | GUI Health status pod still showing red after offline node condition has been recovered |
II14767 | 8.3.1.3 | An issue with how cache handles ownership of volumes across multiple sites can lead to cross-site destage, adversely impacting write latency. For more details refer to this Flash |
II14767 | 8.4.0.0 | An issue with how cache handles ownership of volumes across multiple sites can lead to cross-site destage, adversely impacting write latency. For more details refer to this Flash |
II14771 | 7.3.0.9 | Node warmstart due to compression/index re-writing timing condition |
II14771 | 7.4.0.3 | Node warmstart due to compression/index re-writing timing condition |
II14778 | 7.4.0.6 | Reduced performance for volumes which have the configuration node as their preferred node, due to the GUI processing the update of volume attributes when a large number of changes is required |
II14778 | 7.5.0.5 | Reduced performance for volumes which have the configuration node as their preferred node, due to the GUI processing the update of volume attributes when a large number of changes is required |
II14778 | 7.6.0.0 | Reduced performance for volumes which have the configuration node as their preferred node, due to the GUI processing the update of volume attributes when a large number of changes is required |
IT01250 | 7.3.0.1 | Loss of access to data when node or node canister goes offline during drive update |
IT03354 | 7.3.0.8 | Poor read performance with iSCSI single threaded read workload |
IT04105 | 7.3.0.8 | EasyTier does not promote extents between different tiers when using release v7.3.0 |
IT04911 | 7.3.0.9 | Node warmstarts due to RAID synchronisation inconsistency |
IT05219 | 7.3.0.8 | Compressed volumes offline on systems running v7.3 release due to decompression issue |
IT06407 | 7.3.0.9 | Node warmstart due to compressed volume metadata |
IT10251 | 7.6.0.0 | Freeze time update delayed after reduction of cycle period |
IT10470 | 7.5.0.13 | Noisy/high speed fan |
IT10470 | 7.6.0.0 | Noisy/high speed fan |
IT12088 | 7.7.0.0 | If a node encounters a SAS-related warmstart, the node can remain in service with a 504/505 error, indicating that it was unable to pick up the necessary VPD to become active again |
IT14917 | 7.5.0.8 | Node warmstarts due to a timing window in the cache component. For more details refer to this Flash |
IT14917 | 7.4.0.10 | Node warmstarts due to a timing window in the cache component. For more details refer to this Flash |
IT14917 | 7.7.1.5 | Node warmstarts due to a timing window in the cache component. For more details refer to this Flash |
IT14917 | 7.6.1.7 | Node warmstarts due to a timing window in the cache component. For more details refer to this Flash |
IT14917 | 7.8.0.0 | Node warmstarts due to a timing window in the cache component. For more details refer to this Flash |
IT14922 | 7.6.1.3 | A memory issue, related to the email feature, may cause nodes to warmstart or go offline |
IT15366 | 7.6.1.4 | CLI command lsportsas may show unexpected port numbering |
IT15366 | 7.7.0.0 | CLI command lsportsas may show unexpected port numbering |
IT16012 | 7.8.0.0 | Internal node boot drive RAID scrub process at 1am every Sunday can impact system performance |
IT16012 | 7.6.1.6 | Internal node boot drive RAID scrub process at 1am every Sunday can impact system performance |
IT16012 | 7.7.0.5 | Internal node boot drive RAID scrub process at 1am every Sunday can impact system performance |
IT16148 | 7.7.1.1 | When accelerate mode is enabled, due to the way promote/swap plans are prioritized over demote plans, EasyTier only demotes 1 extent every 5 minutes |
IT16337 | 7.7.0.4 | Hardware offloading in 16G FC adapters has introduced a deadlock condition that causes many driver commands to time out leading to a node warmstart. For more details refer to this Flash |
IT16337 | 7.6.1.5 | Hardware offloading in 16G FC adapters has introduced a deadlock condition that causes many driver commands to time out leading to a node warmstart. For more details refer to this Flash |
IT16337 | 7.7.1.1 | Hardware offloading in 16G FC adapters has introduced a deadlock condition that causes many driver commands to time out leading to a node warmstart. For more details refer to this Flash |
IT16337 | 7.5.0.10 | Hardware offloading in 16G FC adapters has introduced a deadlock condition that causes many driver commands to time out leading to a node warmstart. For more details refer to this Flash |
IT17102 | 7.7.0.4 | Where the maximum number of I/O requests for an FC port has been exceeded, if a SCSI command with an unsupported opcode is received from a host, then the node may warmstart |
IT17102 | 7.7.1.3 | Where the maximum number of I/O requests for an FC port has been exceeded, if a SCSI command with an unsupported opcode is received from a host, then the node may warmstart |
IT17302 | 7.8.0.0 | Unexpected 45034 1042 entries in the Event Log |
IT17302 | 7.7.1.5 | Unexpected 45034 1042 entries in the Event Log |
IT17302 | 7.7.0.5 | Unexpected 45034 1042 entries in the Event Log |
IT17564 | 7.7.1.7 | All nodes in an I/O group may warmstart when a DRAID array experiences drive failures |
IT17564 | 7.8.0.0 | All nodes in an I/O group may warmstart when a DRAID array experiences drive failures |
IT17919 | 7.8.1.6 | A rare timing window issue in the handling of Remote Copy state can result in multi-node warmstarts |
IT17919 | 8.1.0.0 | A rare timing window issue in the handling of Remote Copy state can result in multi-node warmstarts |
IT18086 | 7.8.0.0 | When a volume is moved between I/O groups a node may warmstart |
IT18086 | 7.7.1.5 | When a volume is moved between I/O groups a node may warmstart |
IT18752 | 7.7.1.6 | When the config node processes an lsdependentvdisks command, issued via the GUI, that has a large number of objects in its parameters, it may warmstart |
IT18752 | 7.8.0.2 | When the config node processes an lsdependentvdisks command, issued via the GUI, that has a large number of objects in its parameters, it may warmstart |
IT19019 | 7.8.1.0 | V5000 control enclosure midplane FRU replacement may fail leading to both nodes reporting a 506 error |
IT19192 | 8.1.1.1 | An issue in the handling of GUI certificates may cause warmstarts leading to a Tier 2 recovery |
IT19192 | 7.8.1.5 | An issue in the handling of GUI certificates may cause warmstarts leading to a Tier 2 recovery |
IT19232 | 7.8.1.0 | Storwize systems can report unexpected drive location errors as a result of a RAID issue |
IT19387 | 8.1.0.0 | When two Storwize I/O groups are connected to each other (via direct connect) 1550 errors will be logged and reappear when marked as fixed |
IT19561 | 7.8.1.8 | An issue with register clearance in the FC driver code may cause a node warmstart |
IT19561 | 8.2.1.0 | An issue with register clearance in the FC driver code may cause a node warmstart |
IT19561 | 8.2.0.0 | An issue with register clearance in the FC driver code may cause a node warmstart |
IT19726 | 7.7.1.7 | Warmstarts may occur when the attached SAN fabric is congested and HBA transmit paths become stalled, preventing the HBA firmware from generating the completion for an FC command |
IT19726 | 7.8.1.1 | Warmstarts may occur when the attached SAN fabric is congested and HBA transmit paths become stalled, preventing the HBA firmware from generating the completion for an FC command |
IT19726 | 7.6.1.8 | Warmstarts may occur when the attached SAN fabric is congested and HBA transmit paths become stalled, preventing the HBA firmware from generating the completion for an FC command |
IT19973 | 7.8.1.1 | Call home emails may not be sent due to a failure to retry |
IT20586 | 8.1.1.0 | Due to an issue in Lancer G5 firmware, after a node reboot the LED of the 10GbE port may remain amber even though the port is working normally |
IT20627 | 7.7.1.7 | When Read-Intensive drives are used as quorum disks a drive outage can occur. Under some circumstances this can lead to a loss of access |
IT20627 | 7.8.1.1 | When Read-Intensive drives are used as quorum disks a drive outage can occur. Under some circumstances this can lead to a loss of access |
IT21383 | 7.7.1.7 | Heavy I/O may provoke inconsistencies in resource allocation leading to node warmstarts |
IT21896 | 8.3.1.0 | Where encryption keys have been lost it will not be possible to remove an empty MDisk group |
IT22376 | 7.7.1.7 | Upgrade of V5000 Gen 2 systems with 16GB node canisters can become stalled, with multiple warmstarts on the first node to be upgraded |
IT22591 | 8.1.3.4 | An issue in the HBA adapter firmware may result in node warmstarts |
IT22591 | 7.8.1.8 | An issue in the HBA adapter firmware may result in node warmstarts |
IT22802 | 8.1.0.1 | A memory management issue in cache may cause multiple node warmstarts possibly leading to a loss of access and necessitating a Tier 3 recovery |
IT23034 | 7.8.1.3 | With HyperSwap volumes and mirrored copies at a single site, using rmvolumecopy to remove a copy from an auxiliary volume may result in a cluster-wide warmstart, necessitating a Tier 2 recovery |
IT23034 | 8.1.0.1 | With HyperSwap volumes and mirrored copies at a single site, using rmvolumecopy to remove a copy from an auxiliary volume may result in a cluster-wide warmstart, necessitating a Tier 2 recovery |
IT23140 | 7.8.1.5 | When viewing the licensed functions GUI page, the individual calculations for SCUs for each tier may be wrong. However, the total is correct |
IT23747 | 8.1.1.1 | For large drive sizes the DRAID rebuild process can consume significant CPU resource adversely impacting system performance |
IT23747 | 7.8.1.5 | For large drive sizes the DRAID rebuild process can consume significant CPU resource adversely impacting system performance |
IT24900 | 7.8.1.8 | Whilst replacing a control enclosure midplane, an issue at boot can prevent VPD from being assigned, delaying a return to service |
IT24900 | 8.1.3.0 | Whilst replacing a control enclosure midplane, an issue at boot can prevent VPD from being assigned, delaying a return to service |
IT25367 | 7.8.1.12 | A T2 recovery may occur when an attempt is made to upgrade, or downgrade, the firmware for an unsupported drive type |
IT25367 | 8.2.1.11 | A T2 recovery may occur when an attempt is made to upgrade, or downgrade, the firmware for an unsupported drive type |
IT25367 | 8.3.0.0 | A T2 recovery may occur when an attempt is made to upgrade, or downgrade, the firmware for an unsupported drive type |
IT25457 | 8.2.0.3 | Attempting to remove a copy of a volume, which has at least one image mode copy and at least one thin/compressed copy, in a Data Reduction Pool will always fail with a CMMVC8971E error |
IT25457 | 8.1.3.4 | Attempting to remove a copy of a volume, which has at least one image mode copy and at least one thin/compressed copy, in a Data Reduction Pool will always fail with a CMMVC8971E error |
IT25457 | 8.2.1.0 | Attempting to remove a copy of a volume, which has at least one image mode copy and at least one thin/compressed copy, in a Data Reduction Pool will always fail with a CMMVC8971E error |
IT25850 | 7.8.1.8 | I/O performance may be adversely affected towards the end of DRAID rebuilds. For some systems there may be multiple warmstarts leading to a loss of access |
IT25850 | 8.2.0.0 | I/O performance may be adversely affected towards the end of DRAID rebuilds. For some systems there may be multiple warmstarts leading to a loss of access |
IT25850 | 8.2.1.0 | I/O performance may be adversely affected towards the end of DRAID rebuilds. For some systems there may be multiple warmstarts leading to a loss of access |
IT25850 | 8.1.3.6 | I/O performance may be adversely affected towards the end of DRAID rebuilds. For some systems there may be multiple warmstarts leading to a loss of access |
IT25970 | 8.2.1.0 | After a FlashCopy consistency group is started a node may warmstart |
IT26049 | 8.1.3.4 | An issue with CPU scheduling may cause the GUI to respond slowly |
IT26049 | 8.2.0.3 | An issue with CPU scheduling may cause the GUI to respond slowly |
IT26049 | 7.8.1.9 | An issue with CPU scheduling may cause the GUI to respond slowly |
IT26049 | 8.2.1.0 | An issue with CPU scheduling may cause the GUI to respond slowly |
IT26257 | 8.2.1.8 | Starting a relationship, when the remote volume is offline, may result in a T2 recovery |
IT26257 | 8.3.0.0 | Starting a relationship, when the remote volume is offline, may result in a T2 recovery |
IT26257 | 7.8.1.11 | Starting a relationship, when the remote volume is offline, may result in a T2 recovery |
IT26836 | 7.8.1.8 | Loading drive firmware may cause a node warmstart |
IT27460 | 8.2.1.0 | Lease expiry can occur between local nodes when remote connection is lost, due to the mishandling of messaging credits |
IT27460 | 7.8.1.9 | Lease expiry can occur between local nodes when remote connection is lost, due to the mishandling of messaging credits |
IT27460 | 8.1.3.6 | Lease expiry can occur between local nodes when remote connection is lost, due to the mishandling of messaging credits |
IT28433 | 8.2.1.4 | Timing window issue in the Data Reduction Pool rehoming component can cause a single node warmstart |
IT28433 | 8.1.3.6 | Timing window issue in the Data Reduction Pool rehoming component can cause a single node warmstart |
IT28728 | 8.2.1.4 | Email alerts will not work where the mail server does not allow unqualified client host names |
IT29040 | 8.1.3.6 | Occasionally a DRAID rebuild, with drives of 8TB or more, can encounter an issue which causes node warmstarts and potential loss of access |
IT29040 | 8.2.1.0 | Occasionally a DRAID rebuild, with drives of 8TB or more, can encounter an issue which causes node warmstarts and potential loss of access |
IT29040 | 7.8.1.9 | Occasionally a DRAID rebuild, with drives of 8TB or more, can encounter an issue which causes node warmstarts and potential loss of access |
IT29853 | 8.2.1.0 | After upgrading to v8.1.1, or later, V5000 Gen 2 systems, with Gen 1 expansion enclosures, may experience multiple node warmstarts leading to a loss of access |
IT29853 | 8.1.3.4 | After upgrading to v8.1.1, or later, V5000 Gen 2 systems, with Gen 1 expansion enclosures, may experience multiple node warmstarts leading to a loss of access |
IT29867 | 8.3.1.0 | If a change volume for a remote copy relationship in a consistency group runs out of space whilst properties of the consistency group are being changed, then a Tier 2 recovery may occur |
IT29975 | 8.3.0.1 | During Ethernet port configuration, netmask validation will only accept a fourth octet of zero. Non-zero values will cause the interface to remain inactive |
IT30306 | 8.3.1.0 | A timing issue in callhome function initialisation may cause a node warmstart |
IT30448 | 8.2.1.8 | If an IP Quorum app is killed during the commit phase of a code upgrade, then that offline IP Quorum device cannot be removed post-upgrade |
IT30448 | 8.3.0.1 | If an IP Quorum app is killed during the commit phase of a code upgrade, then that offline IP Quorum device cannot be removed post-upgrade |
IT30449 | 8.2.1.8 | Attempting to activate USB encryption on a new V5030E will fail with a CMMVCU6054E error |
IT30595 | 8.2.1.8 | A resource shortage in the RAID component can cause MDisks to be taken offline |
IT30595 | 8.3.0.1 | A resource shortage in the RAID component can cause MDisks to be taken offline |
IT31113 | 8.2.1.11 | After a manual power off and on of a system, both nodes in an I/O group may repeatedly assert into a service state |
IT31113 | 8.3.1.0 | After a manual power off and on of a system, both nodes in an I/O group may repeatedly assert into a service state |
IT31300 | 8.3.1.0 | When a snap collection reads the status of PCI devices, a CPU can be stalled, leading to a cluster-wide lease expiry |
IT32338 | 8.3.1.3 | Testing LDAP Authentication fails if username & password are supplied |
IT32338 | 8.4.0.0 | Testing LDAP Authentication fails if username & password are supplied |
IT32440 | 8.3.1.2 | Under heavy I/O workload the processing of deduplicated I/O may cause a single node warmstart |
IT32519 | 8.3.1.2 | Changing an LDAP users password, in the directory, whilst this user is logged in to the GUI of a Spectrum Virtualize system may result in an account lockout in the directory, depending on the account lockout policy configured for the directory. Existing CLI logins via SSH are not affected |
IT32631 | 8.3.1.2 | Whilst upgrading the firmware for multiple drives an issue in the firmware checking can initiate a Tier 2 recovery |
IT33734 | 8.4.0.0 | Lower cache partitions may fill up even though higher destage rates are available |
IT33868 | 8.4.0.0 | Non-FCM NVMe drives may exhibit high write response times with the Spectrum Protect Blueprint script |
IT33912 | 8.4.0.7 | A multi-drive code download may fail resulting in a Tier 2 recovery |
IT33996 | 8.3.1.7 | An issue in RAID where unreserved resources fail to be freed up can result in a node warmstart |
IT33996 | 8.4.0.7 | An issue in RAID where unreserved resources fail to be freed up can result in a node warmstart |
IT33996 | 8.5.0.0 | An issue in RAID where unreserved resources fail to be freed up can result in a node warmstart |
IT33996 | 8.4.2.0 | An issue in RAID where unreserved resources fail to be freed up can result in a node warmstart |
IT34949 | 8.4.0.2 | lsnodevpd may show DIMM information in the wrong positions |
IT34949 | 8.5.0.0 | lsnodevpd may show DIMM information in the wrong positions |
IT34958 | 8.5.0.0 | During a system update, a node returning to the cluster after upgrade may warmstart |
IT34958 | 8.4.2.0 | During a system update, a node returning to the cluster after upgrade may warmstart |
IT35555 | 8.3.1.4 | Storwize V5030 systems running v8.3.1.3 may experience an offline pool under heavy I/O workloads |
IT36619 | 8.4.0.0 | After a node warmstart, system CPU utilisation may show an increase |
IT36792 | 8.3.1.6 | EasyTier can select a default performance profile for a drive which could cause too much hot data to be moved to lower tiers |
IT37654 | 8.4.2.0 | When creating a new encrypted array the CMMVC8534E error (Node has insufficient entropy to generate key material) can appear preventing array creation |
IT37654 | 8.4.0.4 | When creating a new encrypted array the CMMVC8534E error (Node has insufficient entropy to generate key material) can appear preventing array creation |
IT37654 | 8.5.0.0 | When creating a new encrypted array the CMMVC8534E error (Node has insufficient entropy to generate key material) can appear preventing array creation |
IT38015 | 8.4.0.6 | During RAID rebuild or copyback on systems with 16GB or less of memory, cache handling can lead to a deadlock which results in timeouts |
IT38015 | 8.5.0.0 | During RAID rebuild or copyback on systems with 16GB or less of memory, cache handling can lead to a deadlock which results in timeouts |
IT38015 | 8.2.1.15 | During RAID rebuild or copyback on systems with 16GB or less of memory, cache handling can lead to a deadlock which results in timeouts |
IT38015 | 8.3.1.6 | During RAID rebuild or copyback on systems with 16GB or less of memory, cache handling can lead to a deadlock which results in timeouts |
IT38858 | 8.4.2.0 | Unable to resume Enable USB Encryption wizard via the GUI. The GUI will display error CMMVC9231E |
IT38858 | 8.5.0.0 | Unable to resume Enable USB Encryption wizard via the GUI. The GUI will display error CMMVC9231E |
IT40059 | 8.5.0.2 | Port to node metrics can appear inflated due to an issue in performance statistics aggregation |
IT40059 | 8.4.0.7 | Port to node metrics can appear inflated due to an issue in performance statistics aggregation |
IT40370 | 8.4.2.0 | An issue in the PCI fault recovery mechanism may cause a node to constantly reboot |
IT41088 | 8.6.0.0 | On systems with low memory, a large number of RAID arrays that are resyncing can cause the system to run out of RAID rebuild control blocks |
IT41088 | 8.5.0.6 | On systems with low memory, a large number of RAID arrays that are resyncing can cause the system to run out of RAID rebuild control blocks |
IT41088 | 8.5.2.0 | On systems with low memory, a large number of RAID arrays that are resyncing can cause the system to run out of RAID rebuild control blocks |
IT41088 | 8.4.0.10 | On systems with low memory, a large number of RAID arrays that are resyncing can cause the system to run out of RAID rebuild control blocks |
IT41088 | 8.3.1.9 | On systems with low memory, a large number of RAID arrays that are resyncing can cause the system to run out of RAID rebuild control blocks |
IT41173 | 8.6.0.0 | If the temperature sensor in an FS5200 system fails in a particular way, it is possible for drives to be powered off, causing a loss of access to data. This type of temperature sensor failure is very rare. |
IT41173 | 8.4.0.7 | If the temperature sensor in an FS5200 system fails in a particular way, it is possible for drives to be powered off, causing a loss of access to data. This type of temperature sensor failure is very rare. |
IT41173 | 8.5.2.0 | If the temperature sensor in an FS5200 system fails in a particular way, it is possible for drives to be powered off, causing a loss of access to data. This type of temperature sensor failure is very rare. |
IT41173 | 8.5.0.5 | If the temperature sensor in an FS5200 system fails in a particular way, it is possible for drives to be powered off, causing a loss of access to data. This type of temperature sensor failure is very rare. |
IT41191 | 8.5.0.5 | If a REST API client authenticates as an LDAP user, a node warmstart can occur |
IT41191 | 8.6.0.0 | If a REST API client authenticates as an LDAP user, a node warmstart can occur |
IT41191 | 8.5.2.0 | If a REST API client authenticates as an LDAP user, a node warmstart can occur |
IT41447 | 8.5.0.6 | When removing the DNS server configuration, a node may discover unexpected metadata and warmstart |
IT41447 | 8.6.0.3 | When removing the DNS server configuration, a node may discover unexpected metadata and warmstart |
IT41447 | 8.4.0.10 | When removing the DNS server configuration, a node may discover unexpected metadata and warmstart |
IT41835 | 8.3.1.9 | A T2 recovery may occur when a failed drive in the system is replaced with an unsupported drive type |
IT41835 | 8.5.0.6 | A T2 recovery may occur when a failed drive in the system is replaced with an unsupported drive type |
IT41835 | 8.4.0.10 | A T2 recovery may occur when a failed drive in the system is replaced with an unsupported drive type |
IT41835 | 8.5.2.0 | A T2 recovery may occur when a failed drive in the system is replaced with an unsupported drive type |
IT41835 | 8.6.0.0 | A T2 recovery may occur when a failed drive in the system is replaced with an unsupported drive type |
IT42403 | 8.5.0.6 | A limit is in place to prevent the use of 8TB drives or larger in RAID5 arrays, due to the risk of data loss during an extended rebuild. This limit was intended to be 8TiB, but it was implemented as 8TB. A 7.3TiB drive has a capacity of 8.02TB, and as a result was incorrectly prevented from use in RAID5 |
IT42403 | 8.4.0.10 | A limit is in place to prevent the use of 8TB drives or larger in RAID5 arrays, due to the risk of data loss during an extended rebuild. This limit was intended to be 8TiB, but it was implemented as 8TB. A 7.3TiB drive has a capacity of 8.02TB, and as a result was incorrectly prevented from use in RAID5 |
SVAPAR-100127 | 8.5.0.10 | The Service Assistant GUI Node rescue option incorrectly performs the node rescue on the local node instead of the node selected in the GUI. |
SVAPAR-100127 | 8.7.0.0 | The Service Assistant GUI Node rescue option incorrectly performs the node rescue on the local node instead of the node selected in the GUI. |
SVAPAR-100127 | 8.6.0.1 | The Service Assistant GUI Node rescue option incorrectly performs the node rescue on the local node instead of the node selected in the GUI. |
SVAPAR-100127 | 8.6.1.0 | The Service Assistant GUI Node rescue option incorrectly performs the node rescue on the local node instead of the node selected in the GUI. |
SVAPAR-100162 | 8.6.1.0 | Some host operating systems, such as Windows, have recently started to use 'mode select page 7'. IBM Storage does not support this mode. If the storage receives this mode level, a node warmstart occurs |
SVAPAR-100162 | 8.6.0.1 | Some host operating systems, such as Windows, have recently started to use 'mode select page 7'. IBM Storage does not support this mode. If the storage receives this mode level, a node warmstart occurs |
SVAPAR-100162 | 8.7.0.0 | Some host operating systems, such as Windows, have recently started to use 'mode select page 7'. IBM Storage does not support this mode. If the storage receives this mode level, a node warmstart occurs |
SVAPAR-100162 | 8.5.0.10 | Some host operating systems, such as Windows, have recently started to use 'mode select page 7'. IBM Storage does not support this mode. If the storage receives this mode level, a node warmstart occurs |
SVAPAR-100172 | 8.6.0.1 | During the enclosure component upgrade, which occurs after the cluster upgrade has committed, a system can experience spurious 'The PSU has indicated DC failure' events (error code 1126). The event will automatically fix itself after several seconds and no user action is required |
SVAPAR-100564 | 8.7.0.0 | On code level 8.6.0.0, multiple node warmstarts will occur if a user attempts to remove the site ID from a host that has Hyperswap volumes mapped to it. |
SVAPAR-100564 | 8.6.0.1 | On code level 8.6.0.0, multiple node warmstarts will occur if a user attempts to remove the site ID from a host that has Hyperswap volumes mapped to it. |
SVAPAR-100564 | 8.6.1.0 | On code level 8.6.0.0, multiple node warmstarts will occur if a user attempts to remove the site ID from a host that has Hyperswap volumes mapped to it. |
SVAPAR-100871 | 8.7.0.0 | Removing an NVMe host followed by running the 'lsnvmefabric' command causes a recurring single node warmstart |
SVAPAR-100924 | 8.7.0.0 | After the battery firmware is updated, either using the utility or by upgrading to a version with newer firmware, the battery LED may be falsely illuminated. |
SVAPAR-100924 | 8.6.2.0 | After the battery firmware is updated, either using the utility or by upgrading to a version with newer firmware, the battery LED may be falsely illuminated. |
SVAPAR-100958 | 8.5.0.10 | A single FCM may incorrectly report multiple medium errors for the same LBA |
SVAPAR-100958 | 8.7.0.0 | A single FCM may incorrectly report multiple medium errors for the same LBA |
SVAPAR-100958 | 8.6.0.1 | A single FCM may incorrectly report multiple medium errors for the same LBA |
SVAPAR-100958 | 8.6.1.0 | A single FCM may incorrectly report multiple medium errors for the same LBA |
SVAPAR-100977 | 8.6.1.0 | When a zone containing NVMe devices is enabled, a node warmstart might occur. |
SVAPAR-100977 | 8.6.0.1 | When a zone containing NVMe devices is enabled, a node warmstart might occur. |
SVAPAR-100977 | 8.7.0.0 | When a zone containing NVMe devices is enabled, a node warmstart might occur. |
SVAPAR-100977 | 8.5.0.10 | When a zone containing NVMe devices is enabled, a node warmstart might occur. |
SVAPAR-102271 | 8.6.0.2 | Enable IBM Storage Defender integration for Data Reduction Pools |
SVAPAR-102271 | 8.7.0.0 | Enable IBM Storage Defender integration for Data Reduction Pools |
SVAPAR-102271 | 8.6.1.0 | Enable IBM Storage Defender integration for Data Reduction Pools |
SVAPAR-102382 | 8.7.0.0 | Fibre Channel Read Diagnostic Parameters (RDP) indicates that a short wave SFP is installed when in fact a long wave SFP is installed. |
SVAPAR-102382 | 8.6.0.3 | Fibre Channel Read Diagnostic Parameters (RDP) indicates that a short wave SFP is installed when in fact a long wave SFP is installed. |
SVAPAR-102382 | 8.6.2.0 | Fibre Channel Read Diagnostic Parameters (RDP) indicates that a short wave SFP is installed when in fact a long wave SFP is installed. |
SVAPAR-102573 | 8.6.1.0 | On systems using Policy-Based Replication and Volume Group Snapshots, some CPU cores may have high utilization due to an issue with the snapshot cleaning algorithm. This can impact performance for replication and host I/O |
SVAPAR-102573 | 8.6.0.1 | On systems using Policy-Based Replication and Volume Group Snapshots, some CPU cores may have high utilization due to an issue with the snapshot cleaning algorithm. This can impact performance for replication and host I/O |
SVAPAR-102573 | 8.7.0.0 | On systems using Policy-Based Replication and Volume Group Snapshots, some CPU cores may have high utilization due to an issue with the snapshot cleaning algorithm. This can impact performance for replication and host I/O |
SVAPAR-103696 | 8.6.0.1 | When taking a snapshot of a volume that is being replicated to another system using Policy Based Replication, the snapshot may contain data from an earlier point in time than intended |
SVAPAR-104159 | 8.6.0.2 | Nodes configured with 32GB or less of RAM, and specific 25Gb ethernet adapters, under some circumstances may run out of memory. This can cause a single node warmstart. |
SVAPAR-104159 | 8.7.0.0 | Nodes configured with 32GB or less of RAM, and specific 25Gb ethernet adapters, under some circumstances may run out of memory. This can cause a single node warmstart. |
SVAPAR-104159 | 8.6.2.0 | Nodes configured with 32GB or less of RAM, and specific 25Gb ethernet adapters, under some circumstances may run out of memory. This can cause a single node warmstart. |
SVAPAR-104250 | 8.7.0.0 | There is an issue whereby NVMe CaW (Compare and Write) commands can incorrectly go into an invalid state, thereby causing the node to assert to clear the bad condition |
SVAPAR-104250 | 8.6.0.2 | There is an issue whereby NVMe CaW (Compare and Write) commands can incorrectly go into an invalid state, thereby causing the node to assert to clear the bad condition |
SVAPAR-104250 | 8.6.2.0 | There is an issue whereby NVMe CaW (Compare and Write) commands can incorrectly go into an invalid state, thereby causing the node to assert to clear the bad condition |
SVAPAR-104250 | 8.5.0.12 | There is an issue whereby NVMe CaW (Compare and Write) commands can incorrectly go into an invalid state, thereby causing the node to assert to clear the bad condition |
SVAPAR-104533 | 8.6.2.0 | Systems that encounter multiple node asserts, followed by a system T3 recovery, may experience errors repairing Data Reduction Pools |
SVAPAR-104533 | 8.7.0.0 | Systems that encounter multiple node asserts, followed by a system T3 recovery, may experience errors repairing Data Reduction Pools |
SVAPAR-104533 | 8.5.0.10 | Systems that encounter multiple node asserts, followed by a system T3 recovery, may experience errors repairing Data Reduction Pools |
SVAPAR-104533 | 8.6.0.2 | Systems that encounter multiple node asserts, followed by a system T3 recovery, may experience errors repairing Data Reduction Pools |
SVAPAR-105430 | 8.6.0.2 | When hardware compression is suspended mid-IO to a DRP compressed volume, the IO may hang until an internal timeout is hit and a node warmstarts. |
SVAPAR-105430 | 8.7.0.0 | When hardware compression is suspended mid-IO to a DRP compressed volume, the IO may hang until an internal timeout is hit and a node warmstarts. |
SVAPAR-105430 | 8.6.2.0 | When hardware compression is suspended mid-IO to a DRP compressed volume, the IO may hang until an internal timeout is hit and a node warmstarts. |
SVAPAR-105727 | 8.6.2.0 | An upgrade within the 8.5.0 release stream from 8.5.0.5 or below, to 8.5.0.6 or above, can cause an assert of down-level nodes during the upgrade, if volume mirroring is heavily utilised |
SVAPAR-105727 | 8.5.0.10 | An upgrade within the 8.5.0 release stream from 8.5.0.5 or below, to 8.5.0.6 or above, can cause an assert of down-level nodes during the upgrade, if volume mirroring is heavily utilised |
SVAPAR-105727 | 8.6.0.2 | An upgrade within the 8.5.0 release stream from 8.5.0.5 or below, to 8.5.0.6 or above, can cause an assert of down-level nodes during the upgrade, if volume mirroring is heavily utilised |
SVAPAR-105727 | 8.7.0.0 | An upgrade within the 8.5.0 release stream from 8.5.0.5 or below, to 8.5.0.6 or above, can cause an assert of down-level nodes during the upgrade, if volume mirroring is heavily utilised |
SVAPAR-105861 | 8.6.2.0 | A cluster recovery may occur when an attempt is made to create a mirrored snapshot with insufficient volume mirroring bitmap space in the IO group |
SVAPAR-105861 | 8.6.0.2 | A cluster recovery may occur when an attempt is made to create a mirrored snapshot with insufficient volume mirroring bitmap space in the IO group |
SVAPAR-105861 | 8.7.0.0 | A cluster recovery may occur when an attempt is made to create a mirrored snapshot with insufficient volume mirroring bitmap space in the IO group |
SVAPAR-105955 | 8.6.0.3 | Single node warmstart during link recovery when using a secured IP partnership. |
SVAPAR-106693 | 8.6.0.2 | Remote Support Assistance (RSA) cannot be enabled on FS9500 systems with MTM 4983-AH8 |
SVAPAR-106693 | 8.7.0.0 | Remote Support Assistance (RSA) cannot be enabled on FS9500 systems with MTM 4983-AH8 |
SVAPAR-106693 | 8.6.2.0 | Remote Support Assistance (RSA) cannot be enabled on FS9500 systems with MTM 4983-AH8 |
SVAPAR-106874 | 8.7.0.0 | A timing window may cause a single node warmstart, while recording debug information about a replicated host write. This can only happen on a system using Policy Based Replication. |
SVAPAR-106874 | 8.6.0.2 | A timing window may cause a single node warmstart, while recording debug information about a replicated host write. This can only happen on a system using Policy Based Replication. |
SVAPAR-106874 | 8.6.2.0 | A timing window may cause a single node warmstart, while recording debug information about a replicated host write. This can only happen on a system using Policy Based Replication. |
SVAPAR-107270 | 8.6.2.0 | If an upgrade from a level below 8.6.x, to 8.6.0 or 8.6.1 commits, whilst FlashCopy is preparing to start a map, a bad state is introduced that prevents the FlashCopy maps from starting. |
SVAPAR-107270 | 8.7.0.0 | If an upgrade from a level below 8.6.x, to 8.6.0 or 8.6.1 commits, whilst FlashCopy is preparing to start a map, a bad state is introduced that prevents the FlashCopy maps from starting. |
SVAPAR-107270 | 8.6.0.2 | If an upgrade from a level below 8.6.x, to 8.6.0 or 8.6.1 commits, whilst FlashCopy is preparing to start a map, a bad state is introduced that prevents the FlashCopy maps from starting. |
SVAPAR-107547 | 8.7.0.0 | If there are more than 64 logins to a single Fibre Channel port, and a switch zoning change is made, a single node warmstart may occur. |
SVAPAR-107547 | 8.6.0.3 | If there are more than 64 logins to a single Fibre Channel port, and a switch zoning change is made, a single node warmstart may occur. |
SVAPAR-107547 | 8.6.2.0 | If there are more than 64 logins to a single Fibre Channel port, and a switch zoning change is made, a single node warmstart may occur. |
SVAPAR-107547 | 8.5.0.11 | If there are more than 64 logins to a single Fibre Channel port, and a switch zoning change is made, a single node warmstart may occur. |
SVAPAR-107558 | 8.7.0.0 | A Volume Group Snapshot (VGS) trigger may collide with a GMCV or Policy based Replication cycle causing the VGS trigger to fail. |
SVAPAR-107558 | 8.6.0.2 | A Volume Group Snapshot (VGS) trigger may collide with a GMCV or Policy based Replication cycle causing the VGS trigger to fail. |
SVAPAR-107558 | 8.6.2.0 | A Volume Group Snapshot (VGS) trigger may collide with a GMCV or Policy based Replication cycle causing the VGS trigger to fail. |
SVAPAR-107595 | 8.6.0.2 | Improve maximum throughput for Global Mirror, Metro Mirror and Hyperswap by providing more inter-node messaging resources |
SVAPAR-107595 | 8.5.0.10 | Improve maximum throughput for Global Mirror, Metro Mirror and Hyperswap by providing more inter-node messaging resources |
SVAPAR-107733 | 8.6.0.2 | The 'mksnmpserver' command fails with 'CMMVC5711E [####] is not valid data' if the auth passphrase contains special characters, such as '!' |
SVAPAR-107733 | 8.7.0.0 | The 'mksnmpserver' command fails with 'CMMVC5711E [####] is not valid data' if the auth passphrase contains special characters, such as '!' |
SVAPAR-107733 | 8.6.2.0 | The 'mksnmpserver' command fails with 'CMMVC5711E [####] is not valid data' if the auth passphrase contains special characters, such as '!' |
SVAPAR-107734 | 8.7.0.0 | Issuing IO to an incremental fcmap volume that is in a stopped state, but has recently been expanded and also has a partner fcmap, may cause the nodes to go into a restart. |
SVAPAR-107734 | 8.6.0.2 | Issuing IO to an incremental fcmap volume that is in a stopped state, but has recently been expanded and also has a partner fcmap, may cause the nodes to go into a restart. |
SVAPAR-107734 | 8.6.2.0 | Issuing IO to an incremental fcmap volume that is in a stopped state, but has recently been expanded and also has a partner fcmap, may cause the nodes to go into a restart. |
SVAPAR-107734 | 8.5.0.11 | Issuing IO to an incremental fcmap volume that is in a stopped state, but has recently been expanded and also has a partner fcmap, may cause the nodes to go into a restart. |
SVAPAR-107815 | 8.7.0.0 | There is an issue in 3-Site configurations, whilst adding snapshots on the auxfar site, that causes the node to warmstart |
SVAPAR-107815 | 8.6.2.0 | There is an issue in 3-Site configurations, whilst adding snapshots on the auxfar site, that causes the node to warmstart |
SVAPAR-107852 | 8.7.0.0 | A Policy-Based High Availability node may warmstart during IP quorum disconnect and reconnect operations. |
SVAPAR-107852 | 8.6.2.0 | A Policy-Based High Availability node may warmstart during IP quorum disconnect and reconnect operations. |
SVAPAR-108469 | 8.7.0.0 | A single node warmstart may occur on nodes configured to use a secured IP partnership |
SVAPAR-108469 | 8.6.2.0 | A single node warmstart may occur on nodes configured to use a secured IP partnership |
SVAPAR-108469 | 8.6.0.4 | A single node warmstart may occur on nodes configured to use a secured IP partnership |
SVAPAR-108476 | 8.6.2.0 | Remote users with public SSH keys configured cannot fall back to password authentication. |
SVAPAR-108476 | 8.7.0.0 | Remote users with public SSH keys configured cannot fall back to password authentication. |
SVAPAR-108551 | 8.7.0.0 | An expired token in the GUI file upload process can cause the upgrade to not start automatically after the file is successfully uploaded. |
SVAPAR-108551 | 8.5.0.11 | An expired token in the GUI file upload process can cause the upgrade to not start automatically after the file is successfully uploaded. |
SVAPAR-108551 | 8.6.0.3 | An expired token in the GUI file upload process can cause the upgrade to not start automatically after the file is successfully uploaded. |
SVAPAR-108551 | 8.6.2.0 | An expired token in the GUI file upload process can cause the upgrade to not start automatically after the file is successfully uploaded. |
SVAPAR-108715 | 8.6.0.4 | The Service Assistant GUI on 8.5.0.0 and above incorrectly performs actions on the local node instead of the node selected in the GUI. |
SVAPAR-108715 | 8.5.0.12 | The Service Assistant GUI on 8.5.0.0 and above incorrectly performs actions on the local node instead of the node selected in the GUI. |
SVAPAR-108715 | 8.6.2.0 | The Service Assistant GUI on 8.5.0.0 and above incorrectly performs actions on the local node instead of the node selected in the GUI. |
SVAPAR-108715 | 8.7.0.0 | The Service Assistant GUI on 8.5.0.0 and above incorrectly performs actions on the local node instead of the node selected in the GUI. |
SVAPAR-108831 | 8.7.0.0 | FS9500 and SV3 nodes may not boot with the minimum configuration consisting of at least 2 DIMMs. |
SVAPAR-108831 | 8.6.2.0 | FS9500 and SV3 nodes may not boot with the minimum configuration consisting of at least 2 DIMMs. |
SVAPAR-109289 | 8.7.0.0 | Buffer overflow may occur when handling the maximum length of 55 characters for either Multi-Factor Authentication (MFA) or Single Sign On (SSO) client secrets |
SVAPAR-109289 | 8.6.0.2 | Buffer overflow may occur when handling the maximum length of 55 characters for either Multi-Factor Authentication (MFA) or Single Sign On (SSO) client secrets |
SVAPAR-109289 | 8.5.0.10 | Buffer overflow may occur when handling the maximum length of 55 characters for either Multi-Factor Authentication (MFA) or Single Sign On (SSO) client secrets |
SVAPAR-109289 | 8.6.2.0 | Buffer overflow may occur when handling the maximum length of 55 characters for either Multi-Factor Authentication (MFA) or Single Sign On (SSO) client secrets |
SVAPAR-109385 | 8.7.0.0 | When one node has a hardware fault involving a faulty PCI switch, the partner node can repeatedly assert until it enters a 564 status resulting in an outage |
SVAPAR-109385 | 8.6.3.0 | When one node has a hardware fault involving a faulty PCI switch, the partner node can repeatedly assert until it enters a 564 status resulting in an outage |
SVAPAR-110059 | 8.6.2.0 | When using Storage Insights without a data collector, an attempt to collect a snap using Storage Insights may fail. |
SVAPAR-110059 | 8.6.0.1 | When using Storage Insights without a data collector, an attempt to collect a snap using Storage Insights may fail. |
SVAPAR-110059 | 8.7.0.0 | When using Storage Insights without a data collector, an attempt to collect a snap using Storage Insights may fail. |
SVAPAR-110234 | 8.5.0.11 | A single node warmstart can occur due to fibre channel adapter resource contention during 'chpartnership -stop' or 'mkfcpartnership' actions |
SVAPAR-110309 | 8.7.0.0 | When a volume group is assigned to an ownership group, and has a snapshot policy associated, running the 'lsvolumegroupsnapshotpolicy' or 'lsvolumegrouppopulation' command whilst logged in as an ownership group user, will cause a Config node to warmstart. |
SVAPAR-110426 | 8.6.0.3 | When a security admin other than superuser runs security patch related commands 'lspatch' and 'lssystempatches' this can cause a node to warmstart |
SVAPAR-110426 | 8.6.2.0 | When a security admin other than superuser runs security patch related commands 'lspatch' and 'lssystempatches' this can cause a node to warmstart |
SVAPAR-110426 | 8.7.0.0 | When a security admin other than superuser runs security patch related commands 'lspatch' and 'lssystempatches' this can cause a node to warmstart |
SVAPAR-110735 | 8.6.2.0 | Additional policing has been introduced to ensure that FlashCopy target volumes are not used with policy-based replication. Commands 'chvolumegroup -replicationpolicy' will fail if any volume in the group is the target of a FlashCopy map. 'chvdisk -volumegroup' will fail if the volume is the target of a FlashCopy map, and the volume group has a replication policy. |
SVAPAR-110735 | 8.7.0.0 | Additional policing has been introduced to ensure that FlashCopy target volumes are not used with policy-based replication. Commands 'chvolumegroup -replicationpolicy' will fail if any volume in the group is the target of a FlashCopy map. 'chvdisk -volumegroup' will fail if the volume is the target of a FlashCopy map, and the volume group has a replication policy. |
SVAPAR-110742 | 8.6.2.0 | A system is unable to send email to the email server because the password contains a hash '#' character. |
SVAPAR-110742 | 8.7.0.0 | A system is unable to send email to the email server because the password contains a hash '#' character. |
SVAPAR-110743 | 8.7.0.0 | Email becoming stuck in the mail queue caused a delay in the 'upgrade commit was finished' message being sent, causing 3 out of 4 nodes to warmstart and then rejoin the cluster automatically within less than three minutes. |
SVAPAR-110743 | 8.6.2.0 | Email becoming stuck in the mail queue caused a delay in the 'upgrade commit was finished' message being sent, causing 3 out of 4 nodes to warmstart and then rejoin the cluster automatically within less than three minutes. |
SVAPAR-110743 | 8.6.0.4 | Email becoming stuck in the mail queue caused a delay in the 'upgrade commit was finished' message being sent, causing 3 out of 4 nodes to warmstart and then rejoin the cluster automatically within less than three minutes. |
SVAPAR-110745 | 8.7.0.0 | Policy-based Replication (PBR) snapshots and Change Volumes are factored into the preferred node assignment. This can lead to a perceived imbalance of the distribution of preferred node assignments. |
SVAPAR-110745 | 8.6.2.0 | Policy-based Replication (PBR) snapshots and Change Volumes are factored into the preferred node assignment. This can lead to a perceived imbalance of the distribution of preferred node assignments. |
SVAPAR-110749 | 8.7.0.0 | When configuring volumes using the wizard, the underlying command that is called is 'mkvolume' rather than the previous 'mkvdisk' command. With 'mkvdisk' it was possible to format the volumes, whereas with 'mkvolume' it is not possible |
SVAPAR-110749 | 8.6.2.0 | When configuring volumes using the wizard, the underlying command that is called is 'mkvolume' rather than the previous 'mkvdisk' command. With 'mkvdisk' it was possible to format the volumes, whereas with 'mkvolume' it is not possible |
SVAPAR-110765 | 8.6.2.0 | In a 3-Site configuration, the Config node can be lost if the 'stopfcmap' or 'stopfcconsistgrp' commands are run with the '-force' parameter |
SVAPAR-110765 | 8.5.0.12 | In a 3-Site configuration, the Config node can be lost if the 'stopfcmap' or 'stopfcconsistgrp' commands are run with the '-force' parameter |
SVAPAR-110765 | 8.7.0.0 | In a 3-Site configuration, the Config node can be lost if the 'stopfcmap' or 'stopfcconsistgrp' commands are run with the '-force' parameter |
SVAPAR-110765 | 8.6.0.4 | In a 3-Site configuration, the Config node can be lost if the 'stopfcmap' or 'stopfcconsistgrp' commands are run with the '-force' parameter |
SVAPAR-111021 | 8.7.0.0 | Unable to load the resource page in the GUI if IO group ID 0 does not have any nodes. |
SVAPAR-111021 | 8.6.2.0 | Unable to load the resource page in the GUI if IO group ID 0 does not have any nodes. |
SVAPAR-111021 | 8.5.0.12 | Unable to load the resource page in the GUI if IO group ID 0 does not have any nodes. |
SVAPAR-111021 | 8.6.0.4 | Unable to load the resource page in the GUI if IO group ID 0 does not have any nodes. |
SVAPAR-111173 | 8.7.0.0 | Loss of access when two drives experience slowness at the same time |
SVAPAR-111187 | 8.7.0.0 | There is an issue if the browser language is set to French, that can cause the SNMP server creation wizard not to be displayed. |
SVAPAR-111187 | 8.6.2.0 | There is an issue if the browser language is set to French, that can cause the SNMP server creation wizard not to be displayed. |
SVAPAR-111239 | 8.6.2.0 | In rare situations it is possible for a node running Global Mirror with Change Volumes (GMCV) to assert |
SVAPAR-111239 | 8.7.0.0 | In rare situations it is possible for a node running Global Mirror with Change Volumes (GMCV) to assert |
SVAPAR-111257 | 8.7.0.0 | If many drive firmware upgrades are performed in quick succession, multiple nodes may go offline with node error 565 due to a full boot drive. |
SVAPAR-111257 | 8.6.2.0 | If many drive firmware upgrades are performed in quick succession, multiple nodes may go offline with node error 565 due to a full boot drive. |
SVAPAR-111444 | 8.6.0.4 | Direct attached fibre channel hosts may not log into the NPIV host port due to a timing issue with the Registered State Change Notification (RSCN). |
SVAPAR-111705 | 8.6.2.0 | If a Volume Group Snapshot fails and the system has 'snapshotpreserveparent' set to 'yes', this may trigger multiple node warmstarts. |
SVAPAR-111705 | 8.6.0.3 | If a Volume Group Snapshot fails and the system has 'snapshotpreserveparent' set to 'yes', this may trigger multiple node warmstarts. |
SVAPAR-111705 | 8.7.0.0 | If a Volume Group Snapshot fails and the system has 'snapshotpreserveparent' set to 'yes', this may trigger multiple node warmstarts. |
SVAPAR-111812 | 8.6.0.3 | Systems with 8.6.0 or later software may fail to complete lsvdisk commands, if a single SSH session runs multiple lsvdisk commands piped to each other. This can lead to failed login attempts for the GUI and CLI, and is more likely to occur if the system has more than 400 volumes. |
SVAPAR-111812 | 8.7.0.0 | Systems with 8.6.0 or later software may fail to complete lsvdisk commands, if a single SSH session runs multiple lsvdisk commands piped to each other. This can lead to failed login attempts for the GUI and CLI, and is more likely to occur if the system has more than 400 volumes. |
SVAPAR-111812 | 8.6.3.0 | Systems with 8.6.0 or later software may fail to complete lsvdisk commands, if a single SSH session runs multiple lsvdisk commands piped to each other. This can lead to failed login attempts for the GUI and CLI, and is more likely to occur if the system has more than 400 volumes. |
SVAPAR-111989 | 8.7.0.0 | Downloading software with a Fix ID longer than 64 characters fails with an error |
SVAPAR-111989 | 8.6.2.0 | Downloading software with a Fix ID longer than 64 characters fails with an error |
SVAPAR-111991 | 8.7.0.0 | Attempting to create a truststore fails with a CMMVC5711E error if the certificate file does not end with a newline character |
SVAPAR-111991 | 8.6.2.0 | Attempting to create a truststore fails with a CMMVC5711E error if the certificate file does not end with a newline character |
SVAPAR-111992 | 8.7.0.0 | Unable to configure policy-based Replication using the GUI, if the truststore contains blank lines or CRLF line endings |
SVAPAR-111992 | 8.6.2.0 | Unable to configure policy-based Replication using the GUI, if the truststore contains blank lines or CRLF line endings |
SVAPAR-111992 | 8.6.0.4 | Unable to configure policy-based Replication using the GUI, if the truststore contains blank lines or CRLF line endings |
SVAPAR-111994 | 8.6.2.0 | Certain writes to deduplicated and compressed DRP vdisks may return a mismatch, leading to a DRP pool going offline. |
SVAPAR-111994 | 8.7.0.0 | Certain writes to deduplicated and compressed DRP vdisks may return a mismatch, leading to a DRP pool going offline. |
SVAPAR-111996 | 8.6.2.0 | After upgrading to a level which contains new battery firmware, the battery may be offline after the upgrade. |
SVAPAR-111996 | 8.5.0.12 | After upgrading to a level which contains new battery firmware, the battery may be offline after the upgrade. |
SVAPAR-111996 | 8.7.0.0 | After upgrading to a level which contains new battery firmware, the battery may be offline after the upgrade. |
SVAPAR-112007 | 8.6.2.0 | Running the 'chsystemlimits' command with no parameters can cause multiple node warmstarts. |
SVAPAR-112007 | 8.7.0.0 | Running the 'chsystemlimits' command with no parameters can cause multiple node warmstarts. |
SVAPAR-112107 | 8.6.0.3 | There is an issue that affects PSU firmware upgrades in FS9500 and SV3 systems that can cause an outage. This happens when one PSU fails to download the firmware and another PSU starts to download the firmware. It is a very rare timing window that can be triggered if two PSUs are reseated close in time during the firmware upgrade process. |
SVAPAR-112107 | 8.6.2.0 | There is an issue that affects PSU firmware upgrades in FS9500 and SV3 systems that can cause an outage. This happens when one PSU fails to download the firmware and another PSU starts to download the firmware. It is a very rare timing window that can be triggered if two PSUs are reseated close in time during the firmware upgrade process. |
SVAPAR-112107 | 8.5.0.11 | There is an issue that affects PSU firmware upgrades in FS9500 and SV3 systems that can cause an outage. This happens when one PSU fails to download the firmware and another PSU starts to download the firmware. It is a very rare timing window that can be triggered if two PSUs are reseated close in time during the firmware upgrade process. |
SVAPAR-112107 | 8.7.0.0 | There is an issue that affects PSU firmware upgrades in FS9500 and SV3 systems that can cause an outage. This happens when one PSU fails to download the firmware and another PSU starts to download the firmware. It is a very rare timing window that can be triggered if two PSUs are reseated close in time during the firmware upgrade process. |
SVAPAR-112119 | 8.6.2.0 | Volumes can go offline due to out of space issues. This can cause the node to warmstart. |
SVAPAR-112119 | 8.7.0.0 | Volumes can go offline due to out of space issues. This can cause the node to warmstart. |
SVAPAR-112203 | 8.7.0.0 | A node warmstart may occur when removing a volume from a volume group which uses policy-based Replication. |
SVAPAR-112203 | 8.6.2.0 | A node warmstart may occur when removing a volume from a volume group which uses policy-based Replication. |
SVAPAR-112243 | 8.7.0.0 | Prior to 8.4.0 NTP was used. After 8.4.0 this was changed to 'chronyd'. When upgrading from a lower level to 8.4 or higher, systems may experience compatibility issues. |
SVAPAR-112243 | 8.6.2.0 | Prior to 8.4.0 NTP was used. After 8.4.0 this was changed to 'chronyd'. When upgrading from a lower level to 8.4 or higher, systems may experience compatibility issues. |
SVAPAR-112525 | 8.7.0.0 | A node assert can occur due to a resource allocation issue in a small timing window when using Remote Copy |
SVAPAR-112525 | 8.5.0.11 | A node assert can occur due to a resource allocation issue in a small timing window when using Remote Copy |
SVAPAR-112525 | 8.6.0.3 | A node assert can occur due to a resource allocation issue in a small timing window when using Remote Copy |
SVAPAR-112525 | 8.6.2.0 | A node assert can occur due to a resource allocation issue in a small timing window when using Remote Copy |
SVAPAR-112707 | 8.6.0.3 | Marking error 3015 as fixed on an SVC cluster containing SV3 nodes may cause a loss of access to data. For more details refer to this Flash |
SVAPAR-112707 | 8.5.0.11 | Marking error 3015 as fixed on an SVC cluster containing SV3 nodes may cause a loss of access to data. For more details refer to this Flash |
SVAPAR-112707 | 8.7.0.0 | Marking error 3015 as fixed on an SVC cluster containing SV3 nodes may cause a loss of access to data. For more details refer to this Flash |
SVAPAR-112707 | 8.6.3.0 | Marking error 3015 as fixed on an SVC cluster containing SV3 nodes may cause a loss of access to data. For more details refer to this Flash |
SVAPAR-112711 | 8.5.0.11 | IBM Storage Virtualize user interface code will not respond to a malformed HTTP POST with the expected HTTP 401 message. |
SVAPAR-112711 | 8.6.0.3 | IBM Storage Virtualize user interface code will not respond to a malformed HTTP POST with the expected HTTP 401 message. |
SVAPAR-112711 | 8.6.2.0 | IBM Storage Virtualize user interface code will not respond to a malformed HTTP POST with the expected HTTP 401 message. |
SVAPAR-112711 | 8.7.0.0 | IBM Storage Virtualize user interface code will not respond to a malformed HTTP POST with the expected HTTP 401 message. |
SVAPAR-112712 | 8.6.3.0 | The Cloud Call Home function will not restart on SVC clusters that were initially created with CG8 hardware and upgraded to 8.6.0.0 and above. |
SVAPAR-112712 | 8.6.0.3 | The Cloud Call Home function will not restart on SVC clusters that were initially created with CG8 hardware and upgraded to 8.6.0.0 and above. |
SVAPAR-112712 | 8.7.0.0 | The Cloud Call Home function will not restart on SVC clusters that were initially created with CG8 hardware and upgraded to 8.6.0.0 and above. |
SVAPAR-112856 | 8.7.0.0 | Conversion of Hyperswap volumes to 3 site consistency groups will increase write response time of the Hyperswap volumes. |
SVAPAR-112856 | 8.6.3.0 | Conversion of Hyperswap volumes to 3 site consistency groups will increase write response time of the Hyperswap volumes. |
SVAPAR-112856 | 8.6.0.4 | Conversion of Hyperswap volumes to 3 site consistency groups will increase write response time of the Hyperswap volumes. |
SVAPAR-112939 | 8.7.0.0 | A loss of disk access on one pool may cause IO to hang on a different pool due to a cache messaging hang. |
SVAPAR-112939 | 8.6.0.4 | A loss of disk access on one pool may cause IO to hang on a different pool due to a cache messaging hang. |
SVAPAR-112939 | 8.6.3.0 | A loss of disk access on one pool may cause IO to hang on a different pool due to a cache messaging hang. |
SVAPAR-113122 | 8.6.2.0 | A single-node warmstart may occur when a Fibre Channel port is disconnected from one fabric, and added to another. This is caused by a timing window in the FDMI discovery process. |
SVAPAR-113122 | 8.6.0.3 | A single-node warmstart may occur when a Fibre Channel port is disconnected from one fabric, and added to another. This is caused by a timing window in the FDMI discovery process. |
SVAPAR-113122 | 8.7.0.0 | A single-node warmstart may occur when a Fibre Channel port is disconnected from one fabric, and added to another. This is caused by a timing window in the FDMI discovery process. |
SVAPAR-110819 | 8.6.2.0 | A single-node warmstart may occur when a Fibre Channel port is disconnected from one fabric, and added to another. This is caused by a timing window in the FDMI discovery process. |
SVAPAR-110819 | 8.6.0.3 | A single-node warmstart may occur when a Fibre Channel port is disconnected from one fabric, and added to another. This is caused by a timing window in the FDMI discovery process. |
SVAPAR-110819 | 8.7.0.0 | A single-node warmstart may occur when a Fibre Channel port is disconnected from one fabric, and added to another. This is caused by a timing window in the FDMI discovery process. |
SVAPAR-113792 | 8.7.0.0 | A node assert may occur when an outbound IPC message, such as an nslookup to a DNS server, times out |
SVAPAR-113792 | 8.6.0.4 | A node assert may occur when an outbound IPC message, such as an nslookup to a DNS server, times out |
SVAPAR-113792 | 8.6.3.0 | A node assert may occur when an outbound IPC message, such as an nslookup to a DNS server, times out |
SVAPAR-114081 | 8.6.2.0 | The lsfabric command may show FC port logins which no longer exist. In large environments with many devices attached to the SAN, this may result in an incorrect 1800 error being reported, indicating that a node has too many logins. |
SVAPAR-114081 | 8.7.0.0 | The lsfabric command may show FC port logins which no longer exist. In large environments with many devices attached to the SAN, this may result in an incorrect 1800 error being reported, indicating that a node has too many logins. |
SVAPAR-114081 | 8.6.0.4 | The lsfabric command may show FC port logins which no longer exist. In large environments with many devices attached to the SAN, this may result in an incorrect 1800 error being reported, indicating that a node has too many logins. |
SVAPAR-114086 | 8.7.0.0 | Incorrect IO group memory policing for volume mirroring in the GUI for SVC SV3 hardware. |
SVAPAR-114086 | 8.6.3.0 | Incorrect IO group memory policing for volume mirroring in the GUI for SVC SV3 hardware. |
SVAPAR-114145 | 8.7.0.0 | A timing issue triggered by disabling an IP partnership's compression state while replication is running may cause a node warmstart. |
SVAPAR-114899 | 8.6.0.2 | Out of order snapshot stopping can cause stuck cleaning processes to occur, following Policy-based Replication cycling. This manifests as extremely high CPU utilization on multiple CPU cores, causing excessively high volume response times. |
SVAPAR-115021 | 8.7.0.0 | Software validation checks can trigger a T2 recovery when attempting to move a Hyperswap vdisk into and out of the nocachingiogrp state. |
SVAPAR-115021 | 8.6.3.0 | Software validation checks can trigger a T2 recovery when attempting to move a Hyperswap vdisk into and out of the nocachingiogrp state. |
SVAPAR-115021 | 8.6.0.4 | Software validation checks can trigger a T2 recovery when attempting to move a Hyperswap vdisk into and out of the nocachingiogrp state. |
SVAPAR-115136 | 8.6.0.3 | Failure of an NVMe drive has a small probability of triggering a PCIe credit timeout in a node canister, causing the node to reboot. |
SVAPAR-115136 | 8.5.0.12 | Failure of an NVMe drive has a small probability of triggering a PCIe credit timeout in a node canister, causing the node to reboot. |
SVAPAR-115478 | 8.6.0.4 | An issue in the thin-provisioning component may cause a node warmstart during upgrade from pre-8.5.4 to 8.5.4 or later. |
SVAPAR-115505 | 8.6.3.0 | Expanding a volume in a FlashCopy map and then creating a dependent incremental forward and reverse FlashCopy map may cause a dual node warmstart when the incremental map is started. |
SVAPAR-115505 | 8.6.0.4 | Expanding a volume in a FlashCopy map and then creating a dependent incremental forward and reverse FlashCopy map may cause a dual node warmstart when the incremental map is started. |
SVAPAR-115505 | 8.7.0.0 | Expanding a volume in a FlashCopy map and then creating a dependent incremental forward and reverse FlashCopy map may cause a dual node warmstart when the incremental map is started. |
SVAPAR-115520 | 8.7.0.0 | An unexpected sequence of NVMe host IO commands may trigger a node warmstart. |
SVAPAR-116265 | 8.7.0.0 | When upgrading memory on a node, the node may repeatedly reboot if it was not removed from the cluster before being shut down to add the additional memory. |
SVAPAR-116265 | 8.6.3.0 | When upgrading memory on a node, the node may repeatedly reboot if it was not removed from the cluster before being shut down to add the additional memory. |
SVAPAR-116592 | 8.7.0.0 | If a V5000E or a FlashSystem 5000 is configured with multiple compressed IP partnerships, and one or more of the partnerships is with a system that is not a V5000E or FS5000, it may repeatedly warmstart due to a lack of compression resources. |
SVAPAR-116592 | 8.5.0.12 | If a V5000E or a FlashSystem 5000 is configured with multiple compressed IP partnerships, and one or more of the partnerships is with a system that is not a V5000E or FS5000, it may repeatedly warmstart due to a lack of compression resources. |
SVAPAR-116592 | 8.6.0.4 | If a V5000E or a FlashSystem 5000 is configured with multiple compressed IP partnerships, and one or more of the partnerships is with a system that is not a V5000E or FS5000, it may repeatedly warmstart due to a lack of compression resources. |
SVAPAR-117179 | 8.6.0.3 | Snap data collection does not collect an error log if the superuser password requires a change |
SVAPAR-117179 | 8.5.0.11 | Snap data collection does not collect an error log if the superuser password requires a change |
SVAPAR-117318 | 8.5.0.11 | A faulty SFP in a 32Gb Fibre Channel adapter may cause a single node warmstart, instead of reporting the port as failed. |
SVAPAR-117457 | 8.6.3.0 | A hung condition in Remote Receive IOs (RRI) for volume groups can lead to warmstarts on multiple nodes. |
SVAPAR-117457 | 8.7.0.0 | A hung condition in Remote Receive IOs (RRI) for volume groups can lead to warmstarts on multiple nodes. |
SVAPAR-117663 | 8.6.3.0 | The last backup time for a safeguarded volume group within the Volume Groups view does not display the correct time. |
SVAPAR-117663 | 8.7.0.0 | The last backup time for a safeguarded volume group within the Volume Groups view does not display the correct time. |
SVAPAR-117738 | 8.6.3.0 | The configuration node may go offline with node error 565, due to a full /tmp partition on the boot drive. |
SVAPAR-117738 | 8.7.0.0 | The configuration node may go offline with node error 565, due to a full /tmp partition on the boot drive. |
SVAPAR-117738 | 8.6.2.1 | The configuration node may go offline with node error 565, due to a full /tmp partition on the boot drive. |
SVAPAR-117768 | 8.6.3.0 | Cloud Callhome may stop working without logging an error |
SVAPAR-117768 | 8.6.0.3 | Cloud Callhome may stop working without logging an error |
SVAPAR-117768 | 8.7.0.0 | Cloud Callhome may stop working without logging an error |
SVAPAR-117781 | 8.6.0.3 | A single node warmstart may occur during Fabric Device Management Interface (FDMI) discovery if a virtual WWPN is discovered on a different physical port than it was previously. |
SVAPAR-119799 | 8.7.0.0 | Inter-node resource queuing on SV3 I/O groups, causes high write response time. |
SVAPAR-120156 | 8.7.0.0 | An internal process introduced in 8.6.0 to collect iSCSI port statistics can cause host performance to be affected |
SVAPAR-120156 | 8.6.0.4 | An internal process introduced in 8.6.0 to collect iSCSI port statistics can cause host performance to be affected |
SVAPAR-120359 | 8.6.3.0 | Single node warmstart when using FlashCopy maps on volumes configured for Policy-based Replication |
SVAPAR-120359 | 8.7.0.0 | Single node warmstart when using FlashCopy maps on volumes configured for Policy-based Replication |
SVAPAR-120391 | 8.7.0.0 | Removing an incremental Flashcopy mapping from a consistency group, after there was a previous error when starting the Flashcopy consistency group that caused a node warmstart, may trigger additional node asserts. |
SVAPAR-120391 | 8.6.3.0 | Removing an incremental Flashcopy mapping from a consistency group, after there was a previous error when starting the Flashcopy consistency group that caused a node warmstart, may trigger additional node asserts. |
SVAPAR-120397 | 8.6.3.0 | A node may not shutdown cleanly on loss of power if it contains 25Gb Ethernet adapters, necessitating a system recovery. |
SVAPAR-120397 | 8.7.0.0 | A node may not shutdown cleanly on loss of power if it contains 25Gb Ethernet adapters, necessitating a system recovery. |
SVAPAR-120399 | 8.6.0.4 | A host WWPN incorrectly shows as being still logged into the storage when it is not. |
SVAPAR-120399 | 8.6.3.0 | A host WWPN incorrectly shows as being still logged into the storage when it is not. |
SVAPAR-120399 | 8.7.0.0 | A host WWPN incorrectly shows as being still logged into the storage when it is not. |
SVAPAR-120399 | 8.5.0.12 | A host WWPN incorrectly shows as being still logged into the storage when it is not. |
SVAPAR-120495 | 8.6.3.0 | A node can experience performance degradation, if using the embedded VASA provider, thereby leading to a potential single node warmstart. |
SVAPAR-120495 | 8.6.0.4 | A node can experience performance degradation, if using the embedded VASA provider, thereby leading to a potential single node warmstart. |
SVAPAR-120495 | 8.7.0.0 | A node can experience performance degradation, if using the embedded VASA provider, thereby leading to a potential single node warmstart. |
SVAPAR-120599 | 8.7.0.0 | On systems handling a large number of concurrent host I/O requests, a timing window in memory allocation may cause a single node warmstart. |
SVAPAR-120599 | 8.6.3.0 | On systems handling a large number of concurrent host I/O requests, a timing window in memory allocation may cause a single node warmstart. |
SVAPAR-120610 | 8.7.0.0 | Excessive 'chfcmap' commands can result in multiple node warmstarts occurring |
SVAPAR-120610 | 8.5.0.12 | Excessive 'chfcmap' commands can result in multiple node warmstarts occurring |
SVAPAR-120610 | 8.6.0.4 | Excessive 'chfcmap' commands can result in multiple node warmstarts occurring |
SVAPAR-120610 | 8.6.3.0 | Excessive 'chfcmap' commands can result in multiple node warmstarts occurring |
SVAPAR-120616 | 8.6.3.0 | After mapping a volume to an NVMe host, a customer is unable to map the same vdisk to a second NVMe host using the GUI; however, it is possible using the CLI. |
SVAPAR-120616 | 8.7.0.0 | After mapping a volume to an NVMe host, a customer is unable to map the same vdisk to a second NVMe host using the GUI; however, it is possible using the CLI. |
SVAPAR-120630 | 8.7.0.0 | An MDisk may go offline due to IO timeouts caused by an imbalanced workload distribution towards the resources in DRP, whilst FlashCopy is running at a high copy rate within DRP, and the target volume is deduplicated. |
SVAPAR-120630 | 8.6.3.0 | An MDisk may go offline due to IO timeouts caused by an imbalanced workload distribution towards the resources in DRP, whilst FlashCopy is running at a high copy rate within DRP, and the target volume is deduplicated. |
SVAPAR-120631 | 8.7.0.0 | When a user deletes a vdisk, and if 'chfcmap' is run afterwards against the same vdisk ID, a system recovery may occur. |
SVAPAR-120631 | 8.6.3.0 | When a user deletes a vdisk, and if 'chfcmap' is run afterwards against the same vdisk ID, a system recovery may occur. |
SVAPAR-120639 | 8.5.0.12 | The vulnerability scanner claims cookies were set without the HttpOnly flag. |
SVAPAR-120639 | 8.7.0.0 | The vulnerability scanner claims cookies were set without the HttpOnly flag. |
SVAPAR-120639 | 8.6.0.4 | The vulnerability scanner claims cookies were set without the HttpOnly flag. |
SVAPAR-120639 | 8.6.3.0 | The vulnerability scanner claims cookies were set without the HttpOnly flag. |
SVAPAR-120732 | 8.6.3.0 | Unable to expand a vdisk from the GUI, as the constant values for the compressed and regular pool volume disk maximum capacity were incorrect in the constant file. |
SVAPAR-120732 | 8.7.0.0 | Unable to expand a vdisk from the GUI, as the constant values for the compressed and regular pool volume disk maximum capacity were incorrect in the constant file. |
SVAPAR-120925 | 8.7.0.0 | A single node assert may occur due to a timing issue related to thin provisioned volumes in a traditional pool. |
SVAPAR-120925 | 8.6.3.0 | A single node assert may occur due to a timing issue related to thin provisioned volumes in a traditional pool. |
SVAPAR-121334 | 8.7.0.0 | Packets with unexpected size are received on the ethernet interface. This causes the internal buffers to become full, thereby causing a node to warmstart to clear the condition |
SVAPAR-121334 | 8.6.0.4 | Packets with unexpected size are received on the ethernet interface. This causes the internal buffers to become full, thereby causing a node to warmstart to clear the condition |
SVAPAR-122411 | 8.6.0.4 | A node may assert when a vdisk has been expanded and rehome has not been made aware of the possible change in the number of regions it may have to rehome. |
SVAPAR-122411 | 8.5.0.12 | A node may assert when a vdisk has been expanded and rehome has not been made aware of the possible change in the number of regions it may have to rehome. |
SVAPAR-122411 | 8.7.0.0 | A node may assert when a vdisk has been expanded and rehome has not been made aware of the possible change in the number of regions it may have to rehome. |
SVAPAR-123644 | 8.6.0.4 | A system with NVMe drives may falsely log error 2560, indicating a Flash drive with high write endurance usage. The error cannot be cleared. |
SVAPAR-123644 | 8.7.0.0 | A system with NVMe drives may falsely log error 2560, indicating a Flash drive with high write endurance usage. The error cannot be cleared. |
SVAPAR-123644 | 8.5.0.12 | A system with NVMe drives may falsely log error 2560, indicating a Flash drive with high write endurance usage. The error cannot be cleared. |
SVAPAR-123874 | 8.7.0.0 | There is a timing window when using async-PBR or RC GMCV, with Volume Group snapshots, which results in the new snapshot VDisk mistakenly being taken offline, forcing the production volume offline for a brief period. |
SVAPAR-123874 | 8.6.0.4 | There is a timing window when using async-PBR or RC GMCV, with Volume Group snapshots, which results in the new snapshot VDisk mistakenly being taken offline, forcing the production volume offline for a brief period. |
SVAPAR-123945 | 8.6.0.4 | If a system SSL certificate is installed with the extension CA True it may trigger multiple node warmstarts. |
SVAPAR-123945 | 8.7.0.0 | If a system SSL certificate is installed with the extension CA True it may trigger multiple node warmstarts. |
SVAPAR-125416 | 8.7.0.0 | If the vdisk with ID 0 is deleted and then recreated, and is added to a volume group with an HA replication policy, its internal state may become invalid. If a node warmstart or upgrade occurs in this state, this may trigger multiple node warmstarts and loss of access. |
SVAPAR-126737 | 8.7.0.0 | If a user that does not have SecurityAdmin role runs the command 'rmmdiskgrp -force' on a pool with mirrored VDisks, a T2 recovery may occur. |
SVAPAR-126742 | 8.7.0.0 | A 3400 error (too many compression errors) may be logged incorrectly, due to an incorrect threshold. The error can be ignored on code levels which do not contain this fix. |
SVAPAR-126742 | 8.5.0.12 | A 3400 error (too many compression errors) may be logged incorrectly, due to an incorrect threshold. The error can be ignored on code levels which do not contain this fix. |
SVAPAR-126742 | 8.6.0.4 | A 3400 error (too many compression errors) may be logged incorrectly, due to an incorrect threshold. The error can be ignored on code levels which do not contain this fix. |
SVAPAR-126767 | 8.7.0.0 | Upgrading to 8.6.0 when iSER clustering is configured, may cause multiple node warmstarts to occur, if node canisters have been swapped between slots since the system was manufactured. |
SVAPAR-126767 | 8.6.0.4 | Upgrading to 8.6.0 when iSER clustering is configured, may cause multiple node warmstarts to occur, if node canisters have been swapped between slots since the system was manufactured. |
SVAPAR-127063 | 8.6.0.4 | Degraded Remote Copy performance on systems with multiple IO groups running 8.5.0.11 or 8.6.0.3 after a node restarts |
SVAPAR-127063 | 8.5.0.12 | Degraded Remote Copy performance on systems with multiple IO groups running 8.5.0.11 or 8.6.0.3 after a node restarts |
SVAPAR-127063 | 8.7.0.0 | Degraded Remote Copy performance on systems with multiple IO groups running 8.5.0.11 or 8.6.0.3 after a node restarts |
SVAPAR-127825 | 8.7.0.0 | Due to an issue with the Fibre Channel adapter firmware the node may warmstart |
SVAPAR-127833 | 8.7.0.0 | Temperature warning is reported against the incorrect Secondary Expander Module (SEM) |
SVAPAR-127835 | 8.7.0.0 | A node may warmstart due to invalid RDMA receive size of zero. |
SVAPAR-127836 | 8.7.0.0 | Running some Safeguarded Copy commands can cause a cluster recovery on some platforms. |
SVAPAR-127836 | 8.6.0.4 | Running some Safeguarded Copy commands can cause a cluster recovery on some platforms. |
SVAPAR-127841 | 8.7.0.0 | A slow I/O resource leak may occur when using FlashCopy, and the system is under high workload. This may cause a node warmstart to occur |
SVAPAR-127841 | 8.6.0.4 | A slow I/O resource leak may occur when using FlashCopy, and the system is under high workload. This may cause a node warmstart to occur |
SVAPAR-127841 | 8.5.0.12 | A slow I/O resource leak may occur when using FlashCopy, and the system is under high workload. This may cause a node warmstart to occur |
SVAPAR-127844 | 8.6.0.4 | The user is informed that a snapshot policy cannot be assigned. The error message CMMVC9893E is displayed. |
SVAPAR-127844 | 8.7.0.0 | The user is informed that a snapshot policy cannot be assigned. The error message CMMVC9893E is displayed. |
SVAPAR-127845 | 8.7.0.0 | Attempting to create a second I/O group in the two `Caching I/O Group` dropdowns on the `Define Volume Properties` modal of `Create Volumes` results in the error `CMMVC8709E the iogroups of cache memory storage are not in the same site as the storage groups`. |
SVAPAR-127869 | 8.7.0.0 | Multiple node warmstarts may occur, due to a rarely seen timing window, when quorum disk IO is submitted but there is no backend mdisk Logical Unit association that has been discovered by the agent for that quorum disk. |
SVAPAR-127871 | 8.7.0.0 | When performing a manual upgrade of the AUX cluster from 8.1.1.2 to 8.2.1.12, 'lsupdate' incorrectly reports that the code level is 7.7.1.5 |
SVAPAR-127908 | 8.5.0.12 | A volume mapped to a NVMe host cannot be mapped to another NVMe host via the GUI, however it is possible via the CLI. In addition, when a host is removed from a host cluster, it is not possible to add it back using the GUI |
SVAPAR-127908 | 8.6.0.4 | A volume mapped to a NVMe host cannot be mapped to another NVMe host via the GUI, however it is possible via the CLI. In addition, when a host is removed from a host cluster, it is not possible to add it back using the GUI |
SVAPAR-128010 | 8.7.0.0 | A node warmstart can sometimes occur due to a timeout on certain fibre channel adapters |
SVAPAR-128052 | 8.6.0.4 | A node assert may occur if a host sends a login request to a node when the host is being removed from the cluster with the '-force' parameter. |
SVAPAR-128052 | 8.5.0.12 | A node assert may occur if a host sends a login request to a node when the host is being removed from the cluster with the '-force' parameter. |
SVAPAR-128052 | 8.7.0.0 | A node assert may occur if a host sends a login request to a node when the host is being removed from the cluster with the '-force' parameter. |
SVAPAR-128228 | 8.6.0.4 | The NTP daemon may not synchronise after upgrading from 8.3.x to 8.5.x |
SVAPAR-128228 | 8.5.0.12 | The NTP daemon may not synchronise after upgrading from 8.3.x to 8.5.x |
SVAPAR-128379 | 8.5.0.12 | When collecting the debug data from a 16Gb or 32Gb Fibre Channel adapter, node warmstarts may occur, due to the firmware dump file exceeding the maximum size. |
SVAPAR-128401 | 8.7.0.0 | Upgrade to 8.6.3 may cause loss of access to iSCSI hosts, on FlashSystem 5015 and FlashSystem 5035 systems with a 4-port 10Gb ethernet adapter. |
SVAPAR-128414 | 8.7.0.0 | Thin-clone volumes in a Data Reduction Pool will incorrectly have compression disabled, if the source volume was uncompressed. |
SVAPAR-128626 | 8.7.0.0 | A node may warmstart or fail to start FlashCopy maps, in volume groups that contain Remote Copy primary and secondary volumes, or both copies of a Hyperswap volume. |
SVAPAR-128626 | 8.6.0.4 | A node may warmstart or fail to start FlashCopy maps, in volume groups that contain Remote Copy primary and secondary volumes, or both copies of a Hyperswap volume. |
SVAPAR-128912 | 8.7.0.0 | A T2 recovery may occur when attempting to take a snapshot from a volume group that contains volumes from multiple IO groups, and one of the IO groups is offline. |
SVAPAR-128913 | 8.7.0.0 | Multiple node asserts after a VDisk copy in a data reduction pool was removed while an IO group is offline and a T2 recovery occurred |
SVAPAR-128914 | 8.7.0.0 | A CMMVC9859E error will occur when trying to use 'addvolumecopy' to create Hyperswap volume from a VDisk with existing snapshots |
SVAPAR-129111 | 8.6.0.4 | When using the GUI, the IPv6 field is not wide enough, so the user must scroll right to see the full IPv6 address.
SVAPAR-129298 | 8.7.0.0 | A managed disk group went offline during queueing of fibre rings on the overflow list, causing the node to assert.
SVAPAR-129298 | 8.6.0.4 | A managed disk group went offline during queueing of fibre rings on the overflow list, causing the node to assert.
SVAPAR-129318 | 8.7.0.0 | A Storage Virtualize cluster configured without IO group 0 is unable to send performance metrics |
SVAPAR-130438 | 8.7.0.0 | Upgrading a system to 8.6.2 or higher with a single portset assigned to an IP replication partnership may cause all nodes to warmstart when making a change to the partnership. |
SVAPAR-130553 | 8.7.0.0 | Converting a 3-site AuxFar volume to Hyperswap results in multiple node asserts |
SVAPAR-130646 | 8.7.0.0 | False positive Recovery point Objective (RPO) exceeded events (52004) reported for volume groups configured with Policy-Based Replication |
SVAPAR-130729 | 8.6.0.4 | When upgrading to 8.5.0, remote users configured with public keys do not fall back to a password prompt if a key is not available.
SVAPAR-130731 | 8.6.0.4 | During installation, a single node assert may occur at the end of the software upgrade process
SVAPAR-131212 | 8.7.0.0 | The GUI partnership properties dialog crashes if the issuer certificate does not have an organization field |
SVAPAR-131233 | 8.7.0.0 | In an SVC stretched-cluster configuration with multiple I/O groups and policy-based replication, an attempt to create a new volume may fail due to an incorrect automatic I/O group assignment. |
SVAPAR-131250 | 8.7.0.0 | The system may not correctly balance fibre channel workload over paths to a back end controller. |
SVAPAR-131259 | 8.7.0.0 | Removal of the replication policy after the volume group was set to be independent exposed an issue that resulted in the FlashCopy internal state becoming incorrect, which meant subsequent FlashCopy actions failed incorrectly.
SVAPAR-131567 | 8.6.0.4 | Node goes offline and enters service state when collecting diagnostic data for 100Gb/s adapters. |
SVAPAR-131651 | 8.7.0.0 | Policy-based Replication got stuck after both nodes in the I/O group on a target system restarted at the same time |
SVAPAR-131865 | 8.7.0.0 | A system may encounter communication issues when being configured with IPv6. |
SVAPAR-131993 | 8.7.0.0 | The IPv6 GUI field has been extended to accommodate the full length of the IPv6 address.
SVAPAR-131994 | 8.7.0.0 | When implementing Safeguarded Copy, the associated child pool may run out of space, which can cause multiple Safeguarded Copies to go offline. This can cause the node to warmstart. |
SVAPAR-132001 | 8.7.0.0 | Unexpected lease expiries may occur when half of the nodes in the system start up, one after another in a short time. |
SVAPAR-132003 | 8.7.0.0 | A node may warmstart when an internal process to collect information from Ethernet ports takes longer than expected.
SVAPAR-132011 | 8.7.0.0 | In rare situations, a host's WWPN may show incorrectly as still logged into the storage even though it is not. This can cause the host to incorrectly appear as degraded.
SVAPAR-132013 | 8.7.0.0 | On a Hyperswap system, the preferred site node can lease expire if the remote site nodes suffer a warmstart.
SVAPAR-132027 | 8.7.0.0 | An incorrect 'acknowledge' status for an initiator SCSI command is sent from the SCSI target side when no sense data was actually transferred. This may cause a node to warmstart. |
SVAPAR-132062 | 8.7.0.0 | vVols are reported as inaccessible due to a 30 minute timeout if the VASA provider is unavailable |
SVAPAR-132072 | 8.7.0.0 | A node may assert due to a Fibre Channel port constantly flapping between the FlashSystem and the host.
SVAPAR-132123 | 8.5.0.12 | VDisks can go offline after a T3 recovery with an expanding DRAID1 array, which can cause some IO errors and data corruption
SVAPAR-132123 | 8.6.0.4 | VDisks can go offline after a T3 recovery with an expanding DRAID1 array, which can cause some IO errors and data corruption
SVAPAR-133392 | 8.7.0.0 | In rare situations involving multiple concurrent snapshot restore operations, an undetected data corruption may occur. |
SVAPAR-133442 | 8.7.0.0 | When using asynchronous policy based replication in DR test mode, if the DR volume group is put into production use (the volume group is made independent), an undetected data corruption may occur. |
SVAPAR-82950 | 8.5.3.1 | If a FlashSystem 9500 or SV3 node had a USB Flash Drive present at boot, upgrading to either 8.5.0.7 or 8.5.3.0 may cause the node to become unresponsive. Systems already running 8.5.0.7 or 8.5.3.0 are not affected by this issue |
SVAPAR-82950 | 8.6.0.0 | If a FlashSystem 9500 or SV3 node had a USB Flash Drive present at boot, upgrading to either 8.5.0.7 or 8.5.3.0 may cause the node to become unresponsive. Systems already running 8.5.0.7 or 8.5.3.0 are not affected by this issue |
SVAPAR-82950 | 8.5.0.8 | If a FlashSystem 9500 or SV3 node had a USB Flash Drive present at boot, upgrading to either 8.5.0.7 or 8.5.3.0 may cause the node to become unresponsive. Systems already running 8.5.0.7 or 8.5.3.0 are not affected by this issue |
SVAPAR-83290 | 8.5.0.7 | An issue with the Trusted Platform Module (TPM) in FlashSystem 50xx nodes may cause the TPM to become unresponsive. This can happen after a number of weeks of continuous runtime. |
SVAPAR-83290 | 8.4.0.10 | An issue with the Trusted Platform Module (TPM) in FlashSystem 50xx nodes may cause the TPM to become unresponsive. This can happen after a number of weeks of continuous runtime. |
SVAPAR-83290 | 8.5.4.0 | An issue with the Trusted Platform Module (TPM) in FlashSystem 50xx nodes may cause the TPM to become unresponsive. This can happen after a number of weeks of continuous runtime. |
SVAPAR-83290 | 8.6.0.0 | An issue with the Trusted Platform Module (TPM) in FlashSystem 50xx nodes may cause the TPM to become unresponsive. This can happen after a number of weeks of continuous runtime. |
SVAPAR-83456 | 8.6.0.0 | An NVMe codepath exists whereby strict state checking incorrectly decides that a software flag state is invalid, thereby triggering a node warmstart
SVAPAR-83456 | 8.5.0.7 | An NVMe codepath exists whereby strict state checking incorrectly decides that a software flag state is invalid, thereby triggering a node warmstart
SVAPAR-84116 | 8.6.0.0 | The background delete processing for deduplicated volumes might not operate correctly if the preferred node for a deduplicated volume is changed while a delete is in progress. This can result in data loss which will be detected by the cluster when the data is next accessed |
SVAPAR-84116 | 8.4.0.11 | The background delete processing for deduplicated volumes might not operate correctly if the preferred node for a deduplicated volume is changed while a delete is in progress. This can result in data loss which will be detected by the cluster when the data is next accessed |
SVAPAR-84116 | 8.5.0.8 | The background delete processing for deduplicated volumes might not operate correctly if the preferred node for a deduplicated volume is changed while a delete is in progress. This can result in data loss which will be detected by the cluster when the data is next accessed |
SVAPAR-84305 | 8.6.0.0 | A node may warmstart when attempting to run 'chsnmpserver -community' command without any additional parameter |
SVAPAR-84305 | 8.4.0.10 | A node may warmstart when attempting to run 'chsnmpserver -community' command without any additional parameter |
SVAPAR-84305 | 8.5.4.0 | A node may warmstart when attempting to run 'chsnmpserver -community' command without any additional parameter |
SVAPAR-84305 | 8.5.0.7 | A node may warmstart when attempting to run 'chsnmpserver -community' command without any additional parameter |
SVAPAR-84331 | 8.4.0.10 | A node may warmstart when the 'lsnvmefabric -remotenqn' command is run |
SVAPAR-84331 | 8.5.0.7 | A node may warmstart when the 'lsnvmefabric -remotenqn' command is run |
SVAPAR-84331 | 8.6.0.0 | A node may warmstart when the 'lsnvmefabric -remotenqn' command is run |
SVAPAR-85093 | 8.6.0.0 | Systems that are using Policy-Based Replication may experience node warmstarts, if host I/O consists of large write I/Os with a high queue depth |
SVAPAR-85093 | 8.5.4.0 | Systems that are using Policy-Based Replication may experience node warmstarts, if host I/O consists of large write I/Os with a high queue depth |
SVAPAR-85396 | 8.5.0.7 | Replacement Samsung NVME drives may show as unsupported, or they may fail during a firmware upgrade as unsupported, due to a VPD read problem |
SVAPAR-85396 | 8.4.0.10 | Replacement Samsung NVME drives may show as unsupported, or they may fail during a firmware upgrade as unsupported, due to a VPD read problem |
SVAPAR-85396 | 8.6.0.0 | Replacement Samsung NVME drives may show as unsupported, or they may fail during a firmware upgrade as unsupported, due to a VPD read problem |
SVAPAR-85396 | 8.5.4.0 | Replacement Samsung NVME drives may show as unsupported, or they may fail during a firmware upgrade as unsupported, due to a VPD read problem |
SVAPAR-85640 | 8.5.0.12 | If new nodes/iogroups are added to an SVC cluster that is virtualizing a clustered SpecV system, an attempt to add the SVC node host objects to a host cluster on the backend SpecV system will fail with CLI error code CMMVC8278E due to incorrect policing |
SVAPAR-85640 | 8.6.0.0 | If new nodes/iogroups are added to an SVC cluster that is virtualizing a clustered SpecV system, an attempt to add the SVC node host objects to a host cluster on the backend SpecV system will fail with CLI error code CMMVC8278E due to incorrect policing |
SVAPAR-85658 | 8.5.0.12 | When replacing a boot drive, the new drive needs to be synchronized with the existing drive. The command to do this appears to run and does not return an error, but the new drive does not actually get synchronized. |
SVAPAR-85980 | 8.4.0.10 | iSCSI response times may increase on some systems with 25Gb ethernet adapters, after upgrade to 8.4.0.9 or 8.5.x |
SVAPAR-85980 | 8.5.0.8 | iSCSI response times may increase on some systems with 25Gb ethernet adapters, after upgrade to 8.4.0.9 or 8.5.x |
SVAPAR-86035 | 8.4.0.10 | Whilst completing a request, a DRP pool attempts to allocate additional metadata space, but there is no free space available. This causes the node to warmstart |
SVAPAR-86035 | 8.5.0.7 | Whilst completing a request, a DRP pool attempts to allocate additional metadata space, but there is no free space available. This causes the node to warmstart |
SVAPAR-86035 | 8.6.0.0 | Whilst completing a request, a DRP pool attempts to allocate additional metadata space, but there is no free space available. This causes the node to warmstart |
SVAPAR-86139 | 8.4.0.10 | Failover for VMware iSER hosts may pause I/O for more than 120 seconds |
SVAPAR-86182 | 8.6.0.0 | A node may warmstart if there is an encryption key error that prevents a distributed raid array from being created |
SVAPAR-86477 | 8.6.0.0 | In some situations, ordered processes need to be replayed to ensure the continued management of user workloads. Circumstances exist where this processing can fail to get scheduled, so the work remains locked. Software timers that check for this continued activity will detect a stall and force a recovery warmstart
SVAPAR-87729 | 8.5.4.0 | After a system has logged '3201 : Unable to send to the cloud callhome servers', the system may end up with an inconsistency in the Event Log. This inconsistency can cause a number of symptoms, including node warmstarts |
SVAPAR-87729 | 8.6.0.0 | After a system has logged '3201 : Unable to send to the cloud callhome servers', the system may end up with an inconsistency in the Event Log. This inconsistency can cause a number of symptoms, including node warmstarts |
SVAPAR-87729 | 8.5.0.8 | After a system has logged '3201 : Unable to send to the cloud callhome servers', the system may end up with an inconsistency in the Event Log. This inconsistency can cause a number of symptoms, including node warmstarts |
SVAPAR-87846 | 8.6.0.0 | Node warmstarts with unusual workload pattern on volumes with Policy-based replication |
SVAPAR-87846 | 8.5.4.0 | Node warmstarts with unusual workload pattern on volumes with Policy-based replication |
SVAPAR-88275 | 8.6.0.0 | A single-node warmstart may occur due to a very low-probability timing window in the thin-provisioning component. This can occur when the partner node has just gone offline, causing a loss of access to data |
HU02271 | 8.6.0.0 | A single-node warmstart may occur due to a very low-probability timing window in the thin-provisioning component. This can occur when the partner node has just gone offline, causing a loss of access to data |
SVAPAR-88279 | 8.6.0.0 | A low probability timing window exists in the Fibre Channel login management code. If there are many logins, and two nodes go offline in a very short time, this may cause other nodes in the cluster to warmstart |
SVAPAR-88887 | 8.5.0.12 | Loss of access to data after replacing all boot drives in system |
SVAPAR-88887 | 8.6.0.0 | Loss of access to data after replacing all boot drives in system |
SVAPAR-89172 | 8.6.0.0 | Snapshot volumes created by running the 'addsnapshot' command from the CLI can be slow to come online, causing the production volumes to incorrectly go offline
SVAPAR-89271 | 8.7.0.0 | Policy-based Replication is not achieving the link_bandwidth_mbits configured on the partnership if only a single volume group is replicating in an I/O group, or workload is not balanced equally between volume groups owned by both nodes. |
SVAPAR-89296 | 8.5.4.0 | Immediately after upgrade from pre-8.4.0 to 8.4.0 or later, EasyTier may stop promoting hot data to the tier0_flash tier if it contains non-FCM storage. This issue will automatically resolve on the next upgrade |
SVAPAR-89296 | 8.5.0.8 | Immediately after upgrade from pre-8.4.0 to 8.4.0 or later, EasyTier may stop promoting hot data to the tier0_flash tier if it contains non-FCM storage. This issue will automatically resolve on the next upgrade |
SVAPAR-89296 | 8.6.0.0 | Immediately after upgrade from pre-8.4.0 to 8.4.0 or later, EasyTier may stop promoting hot data to the tier0_flash tier if it contains non-FCM storage. This issue will automatically resolve on the next upgrade |
SVAPAR-89331 | 8.7.0.0 | Systems running 8.5.2 or higher using IP replication with compression may have low replication bandwidth and high latency due to an issue with the way the data is compressed |
SVAPAR-89331 | 8.6.0.4 | Systems running 8.5.2 or higher using IP replication with compression may have low replication bandwidth and high latency due to an issue with the way the data is compressed |
SVAPAR-89331 | 8.6.3.0 | Systems running 8.5.2 or higher using IP replication with compression may have low replication bandwidth and high latency due to an issue with the way the data is compressed |
SVAPAR-89692 | 8.6.0.0 | Battery back-up units may reach end of life prematurely on FS9500 / SV3 systems, despite the batteries being in good physical health, which will result in node errors and potentially nodes going offline if both batteries are affected |
SVAPAR-89692 | 8.5.0.8 | Battery back-up units may reach end of life prematurely on FS9500 / SV3 systems, despite the batteries being in good physical health, which will result in node errors and potentially nodes going offline if both batteries are affected |
SVAPAR-89694 | 8.5.0.8 | Kernel panics might occur on a subset of Spectrum Virtualize Hardware Platforms with a 10G Ethernet adapter running 8.4.0.10, 8.5.0.7 and 8.5.3.1 when taking a snap. For more details refer to this Flash |
SVAPAR-89694 | 8.4.0.11 | Kernel panics might occur on a subset of Spectrum Virtualize Hardware Platforms with a 10G Ethernet adapter running 8.4.0.10, 8.5.0.7 and 8.5.3.1 when taking a snap. For more details refer to this Flash |
SVAPAR-89764 | 8.6.0.0 | An issue with the asynchronous background deletion of Safeguarded Copy VDisks can cause an unexpected internal state in the FlashCopy component, resulting in a single node assert
SVAPAR-89780 | 8.6.0.0 | A node may warmstart after running the flashcopy command 'stopfcconsistgrp' due to the flashcopy maps in the consistency group being in an invalid state |
SVAPAR-89780 | 8.5.4.0 | A node may warmstart after running the flashcopy command 'stopfcconsistgrp' due to the flashcopy maps in the consistency group being in an invalid state |
SVAPAR-89781 | 8.6.0.0 | The 'lsportstats' command does not work via the REST API until code level 8.5.4.0 |
SVAPAR-89781 | 8.5.4.0 | The 'lsportstats' command does not work via the REST API until code level 8.5.4.0 |
SVAPAR-89951 | 8.5.4.0 | A single node warmstart might occur when a volume group with a replication policy switches the replication to cycling mode. |
SVAPAR-89951 | 8.6.0.0 | A single node warmstart might occur when a volume group with a replication policy switches the replication to cycling mode. |
SVAPAR-90395 | 8.5.0.8 | FS9500 and SV3 might suffer from poor Remote Copy performance due to a lack of internal messaging resources |
SVAPAR-90395 | 8.6.0.0 | FS9500 and SV3 might suffer from poor Remote Copy performance due to a lack of internal messaging resources |
SVAPAR-90395 | 8.5.4.0 | FS9500 and SV3 might suffer from poor Remote Copy performance due to a lack of internal messaging resources |
SVAPAR-90438 | 8.6.0.0 | A conflict of host IO on one node, with array resynchronisation task on the partner node, can result in some regions of parity inconsistency. This is due to the asynchronous parity update behaviour leaving invalid parity in the RAID internal cache |
SVAPAR-90438 | 8.5.0.8 | A conflict of host IO on one node, with array resynchronisation task on the partner node, can result in some regions of parity inconsistency. This is due to the asynchronous parity update behaviour leaving invalid parity in the RAID internal cache |
SVAPAR-90459 | 8.6.0.0 | Possible undetected data corruption or multiple node warmstarts if a Traditional FlashCopy Clone of a volume is created before adding Volume Group Snapshots to the volume |
SVAPAR-90459 | 8.5.4.0 | Possible undetected data corruption or multiple node warmstarts if a Traditional FlashCopy Clone of a volume is created before adding Volume Group Snapshots to the volume |
SVAPAR-91111 | 8.5.4.0 | USB devices connected to an FS5035 node may be formatted on upgrade to 8.5.3 software |
SVAPAR-91111 | 8.6.0.0 | USB devices connected to an FS5035 node may be formatted on upgrade to 8.5.3 software |
SVAPAR-91860 | 8.6.0.0 | If an upgrade is started with the pause flag and then aborted, the pause flag may not be cleared. This can trigger the system to encounter an unexpected code path on the next upgrade, thereby causing a loss of access to data |
SVAPAR-91860 | 8.5.0.10 | If an upgrade is started with the pause flag and then aborted, the pause flag may not be cleared. This can trigger the system to encounter an unexpected code path on the next upgrade, thereby causing a loss of access to data |
SVAPAR-92066 | 8.6.0.0 | Node warmstarts can occur after running the 'lsvdiskfcmapcopies' command if Safeguarded Copy is used |
SVAPAR-92579 | 8.6.0.0 | If Volume Group Snapshots are in use on a Policy-Based Replication DR system, a timing window may result in a node warmstart for one or both nodes in the I/O group |
SVAPAR-92983 | 8.6.0.0 | There is an issue that prevents remote users with an SSH key from connecting to the storage system if BatchMode is enabled
SVAPAR-93054 | 8.6.0.0 | Backend systems on 8.2.1 and beyond have an issue that causes capacity information updates to stop after a T2 or T3 is performed. This affects all backend systems with FCM arrays |
SVAPAR-93054 | 8.5.0.12 | Backend systems on 8.2.1 and beyond have an issue that causes capacity information updates to stop after a T2 or T3 is performed. This affects all backend systems with FCM arrays |
SVAPAR-93309 | 8.6.0.0 | A node may briefly go offline after a battery firmware update |
SVAPAR-93309 | 8.5.0.12 | A node may briefly go offline after a battery firmware update |
SVAPAR-93442 | 8.6.0.0 | User ID does not have the authority to submit a command in some LDAP environments |
SVAPAR-93987 | 8.5.2.0 | A timeout may cause a single node warmstart, if a FlashCopy configuration change occurs while there are many I/O requests outstanding for a source volume which has multiple FlashCopy targets |
SVAPAR-93987 | 8.5.0.6 | A timeout may cause a single node warmstart, if a FlashCopy configuration change occurs while there are many I/O requests outstanding for a source volume which has multiple FlashCopy targets |
SVAPAR-93987 | 8.6.0.0 | A timeout may cause a single node warmstart, if a FlashCopy configuration change occurs while there are many I/O requests outstanding for a source volume which has multiple FlashCopy targets |
SVAPAR-94179 | 8.5.0.9 | Faulty hardware within or connected to the CPU can result in a reboot on the affected node. However it is possible for this to sometimes result in a reboot on the partner node |
SVAPAR-94179 | 8.7.0.0 | Faulty hardware within or connected to the CPU can result in a reboot on the affected node. However it is possible for this to sometimes result in a reboot on the partner node |
SVAPAR-94179 | 8.6.0.1 | Faulty hardware within or connected to the CPU can result in a reboot on the affected node. However it is possible for this to sometimes result in a reboot on the partner node |
SVAPAR-94179 | 8.4.0.12 | Faulty hardware within or connected to the CPU can result in a reboot on the affected node. However it is possible for this to sometimes result in a reboot on the partner node |
SVAPAR-94179 | 8.6.1.0 | Faulty hardware within or connected to the CPU can result in a reboot on the affected node. However it is possible for this to sometimes result in a reboot on the partner node |
SVAPAR-94682 | 8.6.0.0 | SMTP fails if the length of the email server's domain name is longer than 40 characters |
SVAPAR-94686 | 8.6.0.0 | The GUI can become slow and unresponsive due to a steady stream of configuration updates such as 'svcinfo' queries for the latest configuration data |
SVAPAR-94686 | 8.5.0.10 | The GUI can become slow and unresponsive due to a steady stream of configuration updates such as 'svcinfo' queries for the latest configuration data |
SVAPAR-94703 | 8.6.0.0 | The estimated compression savings value shown in the GUI for a single volume is incorrect. The total savings for all volumes in the system is shown instead
SVAPAR-94902 | 8.6.0.0 | When attempting to enable local port masking for a specific subset of control enclosure based clusters, this may fail with the following message: 'The specified port mask cannot be applied because insufficient paths would exist for node communication'
SVAPAR-94956 | 8.6.0.0 | When ISER clustering is configured with a default gateway of 0.0.0.0, the node IPs will not be activated during boot after a reboot or warmstart and the node will remain offline in 550/551 state |
SVAPAR-95349 | 8.6.0.0 | Adding a hyperswap volume copy to a clone of a Volume Group Snapshot may cause all nodes to warmstart, causing a loss of access |
SVAPAR-95384 | 8.7.0.0 | In very rare circumstances, a timing window may cause a single node warmstart when creating a volume using policy-based replication |
SVAPAR-95384 | 8.6.0.1 | In very rare circumstances, a timing window may cause a single node warmstart when creating a volume using policy-based replication |
SVAPAR-95384 | 8.6.1.0 | In very rare circumstances, a timing window may cause a single node warmstart when creating a volume using policy-based replication |
SVAPAR-96656 | 8.6.0.0 | VMware hosts may experience errors creating snapshots, due to an issue in the VASA Provider |
SVAPAR-96777 | 8.7.0.0 | Policy-based Replication uses journal resources to handle replication. If these resources become exhausted, the volume groups with the highest RPO and the most resources should be purged to free up resources for other volume groups. The decision about which volume groups to purge is made incorrectly, potentially causing too many volume groups to exceed their target RPO
SVAPAR-96777 | 8.6.1.0 | Policy-based Replication uses journal resources to handle replication. If these resources become exhausted, the volume groups with the highest RPO and the most resources should be purged to free up resources for other volume groups. The decision about which volume groups to purge is made incorrectly, potentially causing too many volume groups to exceed their target RPO
SVAPAR-96952 | 8.6.2.0 | A single node warmstart may occur when updating the login counts associated with a backend controller. |
SVAPAR-96952 | 8.7.0.0 | A single node warmstart may occur when updating the login counts associated with a backend controller. |
SVAPAR-97502 | 8.6.1.0 | Configurations that use Policy-based Replication with standard pool change volumes will raise space usage warnings |
SVAPAR-97502 | 8.6.0.1 | Configurations that use Policy-based Replication with standard pool change volumes will raise space usage warnings |
SVAPAR-97502 | 8.7.0.0 | Configurations that use Policy-based Replication with standard pool change volumes will raise space usage warnings |
SVAPAR-98128 | 8.6.1.0 | A single node warmstart may occur on upgrade to 8.6.0.0, on SA2 nodes with 25Gb ethernet adapters |
SVAPAR-98128 | 8.6.0.1 | A single node warmstart may occur on upgrade to 8.6.0.0, on SA2 nodes with 25Gb ethernet adapters |
SVAPAR-98128 | 8.7.0.0 | A single node warmstart may occur on upgrade to 8.6.0.0, on SA2 nodes with 25Gb ethernet adapters |
SVAPAR-98184 | 8.6.1.0 | When a Volume Group Snapshot clone is added to a replication policy before the clone is complete, the system may repeatedly warmstart when the Policy-based Replication volume group is changed to independent access |
SVAPAR-98184 | 8.6.0.1 | When a Volume Group Snapshot clone is added to a replication policy before the clone is complete, the system may repeatedly warmstart when the Policy-based Replication volume group is changed to independent access |
SVAPAR-98184 | 8.7.0.0 | When a Volume Group Snapshot clone is added to a replication policy before the clone is complete, the system may repeatedly warmstart when the Policy-based Replication volume group is changed to independent access |
SVAPAR-98497 | 8.7.0.0 | Excessive SSH logging may cause the Configuration node boot drive to become full. The node will go offline with error 565, indicating a boot drive failure |
SVAPAR-98497 | 8.6.0.1 | Excessive SSH logging may cause the Configuration node boot drive to become full. The node will go offline with error 565, indicating a boot drive failure |
SVAPAR-98497 | 8.6.1.0 | Excessive SSH logging may cause the Configuration node boot drive to become full. The node will go offline with error 565, indicating a boot drive failure |
SVAPAR-98567 | 8.5.0.9 | In FS50xx nodes, the TPM may become unresponsive after a number of weeks' runtime. This can lead to encryption or mdisk group CLI commands failing, or in some cases node warmstarts. This issue was partially addressed by SVAPAR-83290, but is fully resolved by this second fix. |
SVAPAR-98567 | 8.6.0.0 | In FS50xx nodes, the TPM may become unresponsive after a number of weeks' runtime. This can lead to encryption or mdisk group CLI commands failing, or in some cases node warmstarts. This issue was partially addressed by SVAPAR-83290, but is fully resolved by this second fix. |
SVAPAR-98576 | 8.6.1.0 | Customers cannot edit certain properties of a flashcopy mapping via the GUI flashcopy mappings panel as the edit modal does not appear. |
SVAPAR-98576 | 8.6.0.2 | Customers cannot edit certain properties of a flashcopy mapping via the GUI flashcopy mappings panel as the edit modal does not appear. |
SVAPAR-98576 | 8.5.0.10 | Customers cannot edit certain properties of a flashcopy mapping via the GUI flashcopy mappings panel as the edit modal does not appear. |
SVAPAR-98576 | 8.7.0.0 | Customers cannot edit certain properties of a flashcopy mapping via the GUI flashcopy mappings panel as the edit modal does not appear. |
SVAPAR-98611 | 8.6.1.0 | The system returns an incorrect retry delay timer for a SCSI BUSY status response to AIX hosts when an attempt is made to access a VDisk that is not mapped to the host |
SVAPAR-98611 | 8.5.0.12 | The system returns an incorrect retry delay timer for a SCSI BUSY status response to AIX hosts when an attempt is made to access a VDisk that is not mapped to the host |
SVAPAR-98611 | 8.6.0.1 | The system returns an incorrect retry delay timer for a SCSI BUSY status response to AIX hosts when an attempt is made to access a VDisk that is not mapped to the host |
SVAPAR-98611 | 8.7.0.0 | The system returns an incorrect retry delay timer for a SCSI BUSY status response to AIX hosts when an attempt is made to access a VDisk that is not mapped to the host |
SVAPAR-98612 | 8.6.1.0 | Creating a volume group snapshot with an invalid I/O group value may trigger multiple node warmstarts |
SVAPAR-98612 | 8.6.0.1 | Creating a volume group snapshot with an invalid I/O group value may trigger multiple node warmstarts |
SVAPAR-98612 | 8.7.0.0 | Creating a volume group snapshot with an invalid I/O group value may trigger multiple node warmstarts |
SVAPAR-98672 | 8.6.0.1 | VMware host crashes on servers connected using NVMe over Fibre Channel with the host_unmap setting disabled
SVAPAR-98672 | 8.5.0.9 | VMware host crashes on servers connected using NVMe over Fibre Channel with the host_unmap setting disabled
SVAPAR-98893 | 8.6.1.0 | If an external storage controller has over-provisioned storage (for example a FlashSystem with an FCM array), the system may incorrectly display usable capacity data for mdisks from that controller. If connectivity to the storage controller is lost, node warmstarts may occur |
SVAPAR-98893 | 8.6.0.1 | If an external storage controller has over-provisioned storage (for example a FlashSystem with an FCM array), the system may incorrectly display usable capacity data for mdisks from that controller. If connectivity to the storage controller is lost, node warmstarts may occur |
SVAPAR-98893 | 8.7.0.0 | If an external storage controller has over-provisioned storage (for example a FlashSystem with an FCM array), the system may incorrectly display usable capacity data for mdisks from that controller. If connectivity to the storage controller is lost, node warmstarts may occur |
SVAPAR-98971 | 8.5.0.9 | The GUI may show repeated invalid pop-ups stating configuration node failover has occurred |
SVAPAR-99175 | 8.5.0.10 | A node may warmstart due to an invalid queuing mechanism in cache. This can cause IO in cache to be in the same processing queue more than once. |
SVAPAR-99175 | 8.7.0.0 | A node may warmstart due to an invalid queuing mechanism in cache. This can cause IO in cache to be in the same processing queue more than once. |
SVAPAR-99175 | 8.6.0.1 | A node may warmstart due to an invalid queuing mechanism in cache. This can cause IO in cache to be in the same processing queue more than once. |
SVAPAR-99175 | 8.6.2.0 | A node may warmstart due to an invalid queuing mechanism in cache. This can cause IO in cache to be in the same processing queue more than once. |
SVAPAR-99273 | 8.5.2.0 | If a SAN switch's Fabric Controller issues an abort (ABTS) command, and then issues an RSCN command before the abort has completed, this unexpected switch behaviour can trigger a single-node warmstart. |
SVAPAR-99273 | 8.6.0.0 | If a SAN switch's Fabric Controller issues an abort (ABTS) command, and then issues an RSCN command before the abort has completed, this unexpected switch behaviour can trigger a single-node warmstart. |
SVAPAR-99273 | 8.5.0.10 | If a SAN switch's Fabric Controller issues an abort (ABTS) command, and then issues an RSCN command before the abort has completed, this unexpected switch behaviour can trigger a single-node warmstart. |
SVAPAR-99354 | 8.7.0.0 | Missing policing in the 'startfcconsistgrp' command for volumes using volume group snapshots, resulting in node warmstarts when creating a new volume group snapshot |
SVAPAR-99354 | 8.6.0.1 | Missing policing in the 'startfcconsistgrp' command for volumes using volume group snapshots, resulting in node warmstarts when creating a new volume group snapshot |
SVAPAR-99354 | 8.6.2.0 | Missing policing in the 'startfcconsistgrp' command for volumes using volume group snapshots, resulting in node warmstarts when creating a new volume group snapshot |
SVAPAR-99537 | 8.7.0.0 | If a hyperswap volume copy is created in a DRP child pool, and the parent pool has FCM storage, the change volumes will be created as thin-provisioned instead of compressed |
SVAPAR-99537 | 8.6.0.1 | If a hyperswap volume copy is created in a DRP child pool, and the parent pool has FCM storage, the change volumes will be created as thin-provisioned instead of compressed |
SVAPAR-99537 | 8.6.1.0 | If a hyperswap volume copy is created in a DRP child pool, and the parent pool has FCM storage, the change volumes will be created as thin-provisioned instead of compressed |
SVAPAR-99537 | 8.5.0.12 | If a hyperswap volume copy is created in a DRP child pool, and the parent pool has FCM storage, the change volumes will be created as thin-provisioned instead of compressed |
SVAPAR-99855 | 8.6.0.1 | After battery firmware is upgraded on SV3 or FS9500 as part of a software upgrade, there is a small probability that the battery may remain permanently offline |
SVAPAR-99997 | 8.6.1.0 | Creating a volume group from a snapshot whose index is greater than 255 may cause incorrect output from 'lsvolumegroup' |
SVAPAR-99997 | 8.7.0.0 | Creating a volume group from a snapshot whose index is greater than 255 may cause incorrect output from 'lsvolumegroup' |
SVAPAR-99997 | 8.6.0.2 | Creating a volume group from a snapshot whose index is greater than 255 may cause incorrect output from 'lsvolumegroup' |
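For readers checking whether a given system already includes one of the fixes above, the short Python sketch below illustrates the comparison implied by the VRMF column: a fix is present when the running code level is at or above the listed VRMF on the same release stream. This is only an illustrative helper, not an IBM-provided tool; the function names and the per-stream comparison rule are assumptions for the example.

```python
# Illustrative helper (assumption, not part of IBM Storage Virtualize):
# decide whether a running code level already contains a fix delivered
# in a given VRMF, by comparing dotted version strings such as "8.6.0.4".

def parse_vrmf(vrmf: str) -> tuple[int, ...]:
    """Convert a VRMF string like '8.6.0.4' into a comparable tuple of ints."""
    return tuple(int(part) for part in vrmf.split("."))

def contains_fix(running_level: str, fixed_in: str) -> bool:
    """Return True if the running level is at or above the fix level
    within the same V.R.M release stream (assumed comparison rule)."""
    running = parse_vrmf(running_level)
    fixed = parse_vrmf(fixed_in)
    # Fixes are listed per release stream in the table above, so only
    # compare levels whose first three digits (V.R.M) match.
    if running[:3] != fixed[:3]:
        return False
    return running >= fixed

# Example: a system on 8.6.0.5 already contains a fix released in 8.6.0.4.
print(contains_fix("8.6.0.5", "8.6.0.4"))    # True
print(contains_fix("8.5.0.11", "8.5.0.12"))  # False
```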
[{"Line of Business":{"code":"LOB26","label":"Storage"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"STPVGU","label":"SAN Volume Controller"},"ARM Category":[],"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"All Version(s)"},{"Type":"MASTER","Line of Business":{"code":"LOB26","label":"Storage"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"ST3FR7","label":"IBM Storwize V7000"},"ARM Category":[{"code":"a8m3p000000GoMdAAK","label":"APARs"}],"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"All Versions"},{"Type":"MASTER","Line of Business":{"code":"LOB26","label":"Storage"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"STSLR9","label":"IBM FlashSystem 9x00"},"ARM Category":[{"code":"a8m3p000000GoMdAAK","label":"APARs"}],"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"All Versions"},{"Type":"MASTER","Line of Business":{"code":"LOB26","label":"Storage"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"STHGUJ","label":"IBM Storwize V5000"},"ARM Category":[{"code":"a8m3p000000GoMdAAK","label":"APARs"}],"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"All Versions"},{"Type":"MASTER","Line of Business":{"code":"LOB26","label":"Storage"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"ST3FR9","label":"IBM FlashSystem 5x00"},"ARM Category":[{"code":"a8m3p000000GoMdAAK","label":"APARs"}],"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"All Versions"},{"Type":"MASTER","Line of Business":{"code":"LOB26","label":"Storage"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"SSA76Z4","label":"IBM FlashSystem 7x00"},"ARM Category":[{"code":"a8m3p000000GoMdAAK","label":"APARs"}],"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"All Versions"}]
Was this topic helpful?
Document Information
Modified date: 01 July 2024
UID: ibm16340241