
IBM Storage Virtualize APARs

Question & Answer


Question

Which APARs raised against IBM Storage Virtualize have been fixed?
In which PTFs were they made available?
 
Note that this document was formerly known as IBM Spectrum Virtualize APARs

Answer

APAR  VRMF  Description
DT112601  8.3.1.6  Deleting image mode mounted source volume while migration is ongoing could trigger Tier 2 recovery
DT112601  8.4.0.4  Deleting image mode mounted source volume while migration is ongoing could trigger Tier 2 recovery
DT112601  8.4.2.0  Deleting image mode mounted source volume while migration is ongoing could trigger Tier 2 recovery
DT112601  8.5.0.0  Deleting image mode mounted source volume while migration is ongoing could trigger Tier 2 recovery
HU00014  7.3.0.1  Multiple node warmstarts if many volume-host mappings exist to a single host
HU00017  7.3.0.1  Node warmstart after failed mkrcpartnership command
HU00026  7.3.0.1  Node warmstart after all compressed volumes in an I/O group are deleted
HU00130  7.4.0.0  Node warmstart due to IPC queue state
HU00133  7.3.0.1  Loss of access to data when an enclosure goes offline during software upgrade
HU00176  7.3.0.5  Node warmstart due to an I/O deadlock when using FlashCopy
HU00183  7.3.0.1  GUI becomes non-responsive on larger configurations
HU00195  7.3.0.1  Multiple node warmstarts when creating the first compressed volume in an I/O group
HU00219  7.3.0.1  Node warmstart when stopping a FlashCopy map in a chain of FlashCopy mappings
HU00236  7.3.0.1  Performance degradation when changing the state of certain LEDs
HU00241  7.3.0.1  Unresponsive GUI caused by locked IPC sockets
HU00247  7.8.1.5  A rare deadlock condition can lead to a RAID5 or RAID6 array rebuild stalling at 99%
HU00247  8.1.1.1  A rare deadlock condition can lead to a RAID5 or RAID6 array rebuild stalling at 99%
HU00251  7.4.0.0  Unable to migrate volume mirror copies to alternate storage pool using GUI
HU00253  7.3.0.1  Global Mirror with Change Volumes does not resume copying after an I/O group goes offline at secondary cluster
HU00257  7.3.0.1  Multiple node warmstarts when EMC RecoverPoint appliance restarted
HU00271  7.5.0.9  An extremely rare timing window condition in the way GM handles write sequencing may cause multiple node warmstarts
HU00271  7.6.1.5  An extremely rare timing window condition in the way GM handles write sequencing may cause multiple node warmstarts
HU00271  7.7.0.3  An extremely rare timing window condition in the way GM handles write sequencing may cause multiple node warmstarts
HU00271  7.7.1.1  An extremely rare timing window condition in the way GM handles write sequencing may cause multiple node warmstarts
HU00272  7.3.0.1  Arrays incorrectly reporting resync progress as 0% or 255%
HU00274  7.3.0.5  Quiesce and resume of host I/O when Global Mirror consistency group reaches consistent_synchronized state
HU00277  7.3.0.5  Loss of access to data when adding a Global Mirror Change volume if the system is almost out of FlashCopy bitmap space
HU00280  7.3.0.5  Multiple node warmstarts triggered by a Global Mirror disconnection
HU00281  7.5.0.0  Single node warmstart due to internal code exception
HU00283  7.3.0.1  Multiple node warmstarts caused by invalid compressed volume metadata
HU00287  7.3.0.1  Multiple node warmstarts when using hosts whose IQNs are the same but use different capitalisation
HU00288  7.3.0.1  GUI does not remember the most recently visited page
HU00290  7.3.0.1  System incorrectly attempts to upgrade firmware during maintenance discharge
HU00291  7.3.0.1  Node warmstart caused by instability in IP replication connection
HU00293  7.3.0.1  Node canister fails to boot after hard shutdown
HU00294  7.3.0.1  Event ID 981007 not always logged correctly
HU00296  7.3.0.3  Node warmstart when handling specific compression workloads
HU00298  7.3.0.1  Multiple node warmstarts when using IBM DS4000 with an incorrect host type
HU00300  7.4.0.0  Volume mirroring synchronisation exceeds the maximum copy rate
HU00301  7.7.0.0  A 4-node enhanced stretched cluster with non-mirrored volumes may get stuck in stalled_non_redundant during an upgrade
HU00302  7.3.0.2  Multiple repeating node warmstarts if system has previously run a code release earlier than 6.4.0 and is upgraded to v7.3.0.1 without stepping through a 6.4.x release
HU00304  7.3.0.3  Both node canister fault LEDs are set to ON following upgrade to the v7.3 release
HU00305  7.3.0.9  System unable to detect and use newly added ports on EMC VMAX
HU00305  7.4.0.0  System unable to detect and use newly added ports on EMC VMAX
HU00324  7.3.0.3  Compressed volumes offline after upgrading to v7.3.0.1 or v7.3.0.2
HU00336  7.3.0.3  Single node warmstart when volumes go offline on systems running v7.3.0.0, v7.3.0.1 or v7.3.0.2
HU00346  7.4.0.8  Running GMCV relationships are not consistently displayed in GUI
HU00354  7.3.0.4  Loss of access to data if upgrading directly from 6.4.x to v7.3.0.1, v7.3.0.2 or v7.3.0.3 with multiple access I/O groups configured on any volume
HU00389  7.3.0.9  When a Storwize system is configured as a backend storage subsystem for an SVC or another Storwize system, the port statistics count the traffic between these two systems as remote cluster traffic instead of host to storage traffic (e.g. in TPC Port to Remote Node Send Data Rate instead of Port to Controller Send Data Rate)
HU00389  7.4.0.3  When a Storwize system is configured as a backend storage subsystem for an SVC or another Storwize system, the port statistics count the traffic between these two systems as remote cluster traffic instead of host to storage traffic (e.g. in TPC Port to Remote Node Send Data Rate instead of Port to Controller Send Data Rate)
HU00422  7.3.0.5  Node warmstart when using Global Mirror Change Volumes
HU00432  7.3.0.4  Performance reduction and node warmstarts when running out of cache resources in v7.3.0.1, v7.3.0.2, or v7.3.0.3
HU00443  7.3.0.5  Global Mirror Change Volumes stops replicating after an upgrade from v6.4.1 to v7.2.0 or later
HU00444  7.3.0.5  Node warmstarts due to overloaded compression engine
HU00446  7.3.0.5  v7.3.0 cache does not make effective use of CPU resources. Note: A restart is required to activate this fix if you upgrade from an earlier version of v7.3.0
HU00447  7.7.0.0  A Link Reset on an 8Gbps Fibre Channel port causes fabric logout/login
HU00448  7.3.0.8  Increased latency on SVC and V7000 systems running v7.3 when using compressed volumes due to compression engine memory management
HU00450  7.3.0.5  Manual upgrade with stopped Global Mirror relationships cannot complete non-disruptively due to dependent volumes
HU00462  7.3.0.5  Node warmstart when using EasyTier with a single tier in a pool on v7.3
HU00463  7.3.0.8  Increased host I/O response time to compressed volumes on v7.3.0, for specific I/O workloads
HU00464  7.3.0.5  Loss of access to data due to resource leak in v7.3.0
HU00465  7.3.0.5  Node error 581 on 2145-CF8 nodes due to problem communicating with the IMM
HU00467  7.4.0.0  Global Mirror Change Volumes freeze time not reported correctly if a cycle takes longer than the cycle period to complete
HU00468  7.3.0.8  Drive firmware task not removed from GUI running tasks display following completion of drive firmware update action
HU00468  7.5.0.0  Drive firmware task not removed from GUI running tasks display following completion of drive firmware update action
HU00470  7.6.0.0  Single node warmstart when a login attempt is issued with an incorrect password
HU00472  7.3.0.8  Node restarts leading to offline volumes when using FlashCopy or Remote Copy
HU00473  7.3.0.8  SVC DH8 node reports node error 522 following system board replacement
HU00481  7.3.0.8  Node warmstart when new multi-tier storage pool is added and an MDisk overload condition is detected within the first day
HU00484  7.4.0.0  Loss of access to data when the lsdependentvdisks command is run with no parameters
HU00485  7.3.0.8  TPC cache statistics inaccurate or unavailable in v7.3.0 and later releases
HU00486  7.3.0.5  Systems upgraded to 2145-DH8 nodes do not make use of the compression acceleration cards for compressed volumes
HU00487  7.4.0.0  Rebuild process stalls or unable to create MDisk due to unexpected RAID scrub state
HU00490  7.4.0.0  Node warmstart when using Metro Mirror or Global Mirror
HU00493  7.3.0.8  SVC DH8 node offline due to battery backplane problem
HU00494  7.3.0.11  Node warmstart caused by timing window when handling XCOPY commands
HU00494  7.4.0.0  Node warmstart caused by timing window when handling XCOPY commands
HU00495  7.4.0.0  Node warmstart caused by a single active write holding up the GM disconnect
HU00496  7.3.0.5  SVC volumes offline and data unrecoverable. For more details refer to this Flash
HU00497  7.3.0.6  Volumes offline due to incorrectly compressed data. Second fix for issue. For more details refer to this Flash
HU00499  7.5.0.3  Loss of access to data when a volume that is part of a Global Mirror Change Volumes relationship is removed with the force flag
HU00502  7.3.0.8  EasyTier migration running at a reduced rate
HU00505  7.4.0.0  Multiple node warmstarts caused by timing window when using inter-system replication
HU00506  7.3.0.8  Increased destaging latency in upper cache when using v7.3.0 release
HU00516  7.3.0.11  Node warmstart due to software thread deadlock
HU00516  7.4.0.3  Node warmstart due to software thread deadlock
HU00518  7.4.0.0  Multiple Node warmstarts due to invalid SCSI commands generated by network probes
HU00519  7.3.0.8  Node warmstart due to FlashCopy deadlock condition
HU00519  7.5.0.0  Node warmstart due to FlashCopy deadlock condition
HU00520  7.4.0.0  Node warmstart caused by iSCSI command being aborted immediately after the command is issued
HU00521  7.7.0.0  Remote Copy relationships may be stopped and lose synch when a single node warmstart occurs at the secondary site
HU00525  7.5.0.0  Unable to manually mark monitoring events in the event log as fixed
HU00526  7.3.0.8  Node warmstarts caused by very large number of 512 byte write operations to compressed volumes
HU00528  7.3.0.9  Single PSU DC output turned off when there is no PSU hardware fault present
HU00528  7.4.0.0  Single PSU DC output turned off when there is no PSU hardware fault present
HU00529  7.3.0.8  Increased latency on SVC and V7000 systems running v7.3 (excluding DH8 & V7000 Gen2 models) when using compressed volumes due to defragmentation issue
HU00536  7.6.0.0  When stopping a GMCV relationship, the clean-up process at the secondary site hangs to the point of a primary node warmstart
HU00538  7.4.0.0  Node warmstart when removing host port (via GUI or CLI) when there is outstanding I/O to host
HU00539  7.3.0.8  Node warmstarts after stopping and restarting FlashCopy maps with compressed volumes as target of the map
HU00540  7.4.0.0  Configuration Backup fails due to invalid volume names
HU00541  7.4.0.0  Fix Procedure fails to complete successfully when servicing PSU
HU00543  7.4.0.0  Elongated I/O pause when starting or stopping remote copy relationships when there are a large number of remote copy relationships
HU00544  7.4.0.0  Storage pool offline when upgrading firmware on storage subsystems listed in APAR Environment
HU00545  7.4.0.0  Loss of access to data after control chassis or midplane enclosure replacement
HU00546  7.4.0.0  Multiple Node warmstarts when attempting to access data beyond the end of the volume
HU00547  7.4.0.0  I/O delay during site failure using enhanced stretched cluster
HU00548  7.4.0.0  Unable to create IP partnership that had previously been deleted
HU00556  8.5.0.17  A partner node may go offline when a node in the I/O group is removed, due to a space-efficient timing window
HU00556  8.6.0.10  A partner node may go offline when a node in the I/O group is removed, due to a space-efficient timing window
HU00556  8.7.0.7  A partner node may go offline when a node in the I/O group is removed, due to a space-efficient timing window
HU00556  9.1.0.0  A partner node may go offline when a node in the I/O group is removed, due to a space-efficient timing window
HU00629  7.3.0.9  Performance degradation triggered by specific I/O pattern when using compressed volumes due to optimisation issue
HU00629  7.4.0.3  Performance degradation triggered by specific I/O pattern when using compressed volumes due to optimisation issue
HU00630  7.4.0.3  Temporary loss of paths for FCoE hosts after 497 days uptime due to FCoE driver timer problem
HU00636  7.3.0.9  Livedump prepare fails on V3500 & V3700 systems with 4GB memory when cache partition fullness is less than 35%
HU00636  7.4.0.3  Livedump prepare fails on V3500 & V3700 systems with 4GB memory when cache partition fullness is less than 35%
HU00637  7.3.0.9  HP MSA P2000 G3 controllers running a firmware version later than TS240P003 may not be recognised by SVC/Storwize
HU00637  7.4.0.3  HP MSA P2000 G3 controllers running a firmware version later than TS240P003 may not be recognised by SVC/Storwize
HU00638  7.5.0.0  Multiple node warmstarts when there is high backend latency
HU00644  7.5.0.0  Multiple node warmstarts when node port receives duplicate frames during a specific I/O timing window
HU00645  7.4.0.2  Loss of access to data when using compressed volumes on v7.4.0.1 can occur when there is a large number of consecutive and highly compressible writes to a compressed volume
HU00646  7.3.0.9  EasyTier throughput reduced due to EasyTier only moving 6 extents per 5 minutes regardless of extent size
HU00646  7.4.0.3  EasyTier throughput reduced due to EasyTier only moving 6 extents per 5 minutes regardless of extent size
HU00648  7.3.0.9  Node warmstart due to handling of parallel reads on compressed volumes
HU00649  7.5.0.9  In rare cases an unexpected IP address may be configured on management port eth0. This IP address is neither the service IP nor the cluster IP, but most likely set by DHCP during boot
HU00649  7.6.0.0  In rare cases an unexpected IP address may be configured on management port eth0. This IP address is neither the service IP nor the cluster IP, but most likely set by DHCP during boot
HU00653  7.3.0.9  1691 RAID inconsistencies falsely reported due to RAID incomplete locking issue
HU00653  7.4.0.3  1691 RAID inconsistencies falsely reported due to RAID incomplete locking issue
HU00654  7.3.0.9  Loss of access to data when FlashCopy stuck during a Global Mirror Change Volumes cycle
HU00654  7.4.0.3  Loss of access to data when FlashCopy stuck during a Global Mirror Change Volumes cycle
HU00655  7.3.0.9  Loss of access to data if PSUs in two different enclosures suffer an output failure simultaneously (whilst AC input is good)
HU00655  7.4.0.3  Loss of access to data if PSUs in two different enclosures suffer an output failure simultaneously (whilst AC input is good)
HU00656  7.3.0.9  Increase in reported CPU utilisation following upgrade to v7.2.0 or higher. For more details refer to this Flash
HU00658  7.3.0.9  Global Mirror source data may be incompletely replicated to target volumes. For more details refer to this Flash
HU00658  7.4.0.0  Global Mirror source data may be incompletely replicated to target volumes. For more details refer to this Flash
HU00659  7.4.0.5  Global Mirror with Change Volumes freeze time reported incorrectly
HU00659  7.5.0.0  Global Mirror with Change Volumes freeze time reported incorrectly
HU00660  7.3.0.9  Reduced performance on compressed volumes when running parallel workloads with large block size
HU00660  7.4.0.1  Reduced performance on compressed volumes when running parallel workloads with large block size
HU00665  7.4.0.3  Node warmstart due to software thread deadlock condition during execution of internal MDisk/discovery process
HU00666  7.3.0.11  Upgrade from v7.2.0 stalls due to dependent volumes
HU00666  7.4.0.3  Upgrade from v7.2.0 stalls due to dependent volumes
HU00669  7.4.0.3  Node warmstart and VMware host I/O timeouts if a node is removed from the cluster during upgrade from a pre-v7.4.0 version to v7.4.0 whilst there are active VAAI CAW commands
HU00671  7.3.0.12  1691 error on arrays when using multiple FlashCopies of the same source. For more details refer to this Flash
HU00671  7.4.0.5  1691 error on arrays when using multiple FlashCopies of the same source. For more details refer to this Flash
HU00671  7.5.0.0  1691 error on arrays when using multiple FlashCopies of the same source. For more details refer to this Flash
HU00672  7.3.0.11  Node warmstart due to compression stream condition
HU00672  7.4.0.3  Node warmstart due to compression stream condition
HU00673  7.4.0.5  Drive slot is not recognised following drive auto manage procedure
HU00673  7.5.0.0  Drive slot is not recognised following drive auto manage procedure
HU00675  7.5.0.0  Node warmstart following node start up/restart due to invalid CAW domain state
HU00676  7.3.0.11  Node warmstart due to compression engine restart
HU00676  7.4.0.4  Node warmstart due to compression engine restart
HU00677  7.4.0.3  Node warmstart or loss of access to GUI/CLI due to defunct SSH processes
HU00678  7.4.0.3  iSCSI hosts incorrectly show an offline status following update to v6.4 from a pre-v6.4 release
HU00680  7.3.0.10  Compressed volumes go offline due to a false detection event. Applies only to V7000 Generation 2 systems running v7.3.0.9 or v7.4.0.3
HU00680  7.4.0.4  Compressed volumes go offline due to a false detection event. Applies only to V7000 Generation 2 systems running v7.3.0.9 or v7.4.0.3
HU00711  7.3.0.11  GUI response slow when filtering a large number of volumes
HU00711  7.4.0.5  GUI response slow when filtering a large number of volumes
HU00719  7.5.0.10  After a power failure both nodes may repeatedly warmstart and then attempt an auto-node rescue. This will remove hardened data and require a T3 recovery
HU00719  7.6.1.6  After a power failure both nodes may repeatedly warmstart and then attempt an auto-node rescue. This will remove hardened data and require a T3 recovery
HU00719  7.7.0.0  After a power failure both nodes may repeatedly warmstart and then attempt an auto-node rescue. This will remove hardened data and require a T3 recovery
HU00725  7.4.0.5  Loss of access to data when adding a Global Mirror Change Volume relationship to a consistency group on the primary site, when the secondary site does not have a secondary volume defined
HU00725  7.5.0.2  Loss of access to data when adding a Global Mirror Change Volume relationship to a consistency group on the primary site, when the secondary site does not have a secondary volume defined
HU00726  7.4.0.10  Single node warmstart due to stuck I/O following offline MDisk group condition
HU00726  7.5.0.0  Single node warmstart due to stuck I/O following offline MDisk group condition
HU00731  7.4.0.5  Single node warmstart due to invalid volume memory allocation pointer
HU00732  7.6.0.0  Single node warmstart due to stalled Remote Copy recovery as a result of pinned write IOs on incorrect queue
HU00733  7.5.0.11  Stop with access results in node warmstarts after a recovervdiskbysystem command
HU00733  7.6.0.0  Stop with access results in node warmstarts after a recovervdiskbysystem command
HU00734  7.7.1.1  Multiple node warmstarts due to deadlock condition during RAID group rebuild
HU00735  7.5.0.0  Host I/O statistics incorrectly including logically failed writes
HU00737  7.5.0.0  GUI does not warn of a lack-of-space condition when collecting a Snap, resulting in some files missing from the Snap
HU00740  7.4.0.7  Read/write performance latencies due to high CPU utilisation from EasyTier 3 processes on the configuration node
HU00740  7.5.0.5  Read/write performance latencies due to high CPU utilisation from EasyTier 3 processes on the configuration node
HU00740  7.6.0.0  Read/write performance latencies due to high CPU utilisation from EasyTier 3 processes on the configuration node
HU00744  7.8.1.10  Single node warmstart due to an accounting issue within the cache component
HU00744  8.1.3.6  Single node warmstart due to an accounting issue within the cache component
HU00744  8.2.1.4  Single node warmstart due to an accounting issue within the cache component
HU00745  7.4.0.5  IP Replication does not return to using full throughput following packet loss on IP link used for replication
HU00745  7.5.0.2  IP Replication does not return to using full throughput following packet loss on IP link used for replication
HU00746  7.6.0.0  Single node warmstart during a synchronisation process of the RAID array
HU00747  7.8.1.0  Node warmstarts can occur when drives become degraded
HU00749  7.6.0.0  Multiple node warmstarts in I/O group after starting Remote Copy
HU00752  7.3.0.11  Email notifications and call home stop working after updating to v7.3.0.10
HU00756  7.4.0.6  Performance statistics BBCZ counter values reported incorrectly
HU00756  7.5.0.7  Performance statistics BBCZ counter values reported incorrectly
HU00756  7.6.0.0  Performance statistics BBCZ counter values reported incorrectly
HU00757  7.6.0.0  Multiple node warmstarts when removing a Global Mirror relationship with a secondary volume that has been offline
HU00759  7.4.0.5  catxmlspec CLI command (used by external monitoring applications such as Spectrum Control Base) not working
HU00761  7.3.0.11  Array rebuild fails to start after a drive is manually taken offline
HU00762  7.5.0.13  Due to an issue in the cache component, nodes within an I/O group are not able to form a caching-pair and are serving I/O through a single node
HU00762  7.6.1.7  Due to an issue in the cache component, nodes within an I/O group are not able to form a caching-pair and are serving I/O through a single node
HU00762  7.7.1.7  Due to an issue in the cache component, nodes within an I/O group are not able to form a caching-pair and are serving I/O through a single node
HU00762  7.8.0.2  Due to an issue in the cache component, nodes within an I/O group are not able to form a caching-pair and are serving I/O through a single node
HU00763  7.7.1.7  A node warmstart may occur when a quorum disk is accessed at the same time as the login to that disk is closed
HU00763  7.8.1.1  A node warmstart may occur when a quorum disk is accessed at the same time as the login to that disk is closed
HU01237  7.7.1.7  A node warmstart may occur when a quorum disk is accessed at the same time as the login to that disk is closed
HU01237  7.8.1.1  A node warmstart may occur when a quorum disk is accessed at the same time as the login to that disk is closed
HU00764  7.5.0.0  Loss of access to data due to persistent reserve host registration keys exceeding the current supported value of 256
HU00794  7.6.0.0  Hang up of GM I/O stream can affect MM I/O in another remote copy stream
HU00804  7.5.0.0  Loss of access to data due to SAS recovery mechanism operating on both nodes in I/O group simultaneously
HU00805  7.5.0.0  Some SAS ports are displayed in hexadecimal values instead of decimal values in the performance statistics XML files
HU00806  7.5.0.0  mkarray command fails when creating an encrypted array due to pending bitmap state
HU00807  7.5.0.0  Increase in node CPU usage due to FlashCopy mappings with high cleaning rate
HU00808  7.5.0.0  NTP trace logs not collected on configuration node
HU00809  7.5.0.3  Both nodes shut down when power is lost to one node for more than 15 seconds
HU00811  7.4.0.5  Loss of access to data when SAN connectivity problems lead to a backend controller being detected as an incorrect type
HU00811  7.5.0.0  Loss of access to data when SAN connectivity problems lead to a backend controller being detected as an incorrect type
HU00815  7.5.0.1  FlashCopy source and target volumes offline when FlashCopy maps are started
HU00816  7.5.0.2  Loss of access to data following upgrade to v7.5.0.0 or v7.5.0.1 when i) the cluster has previously run release 6.1.0 or earlier at some point in its life span, or ii) the cluster has 2,600 or more MDisks
HU00819  7.4.0.8  Large increase in response time of Global Mirror primary volumes due to intermittent connectivity issues
HU00819  7.5.0.7  Large increase in response time of Global Mirror primary volumes due to intermittent connectivity issues
HU00819  7.6.0.0  Large increase in response time of Global Mirror primary volumes due to intermittent connectivity issues
HU00820  7.4.0.5  Data integrity issue when using encrypted arrays. For more details refer to this Flash
HU00820  7.5.0.2  Data integrity issue when using encrypted arrays. For more details refer to this Flash
HU00821  7.5.0.3  Single node warmstart due to HBA firmware behaviour
HU00823  7.6.0.0  Node warmstart due to inconsistent EasyTier status when EasyTier is disabled on all managed disk groups
HU00825  7.4.0.5  Java exception error when using the Service Assistant GUI to complete an enclosure replacement procedure
HU00825  7.5.0.2  Java exception error when using the Service Assistant GUI to complete an enclosure replacement procedure
HU00827  7.6.0.0  Both nodes in a single I/O group of a multi I/O group system can warmstart due to misallocation of volume stats entries
HU00828  7.5.0.3  FlashCopies take a long time or do not complete when the background copy rate is set to a non-zero value
HU00829  7.4.0.5  1125 (or 1066 on V7000 Generation 2) events incorrectly logged for all PSUs/fan trays when there is a single PSU/fan tray fault
HU00830  7.4.0.8  When a node running iSCSI encounters a PDU with AHS it will warmstart
HU00831  7.6.1.7  Single node warmstart due to hung I/O caused by cache deadlock
HU00831  7.7.1.5  Single node warmstart due to hung I/O caused by cache deadlock
HU00831  7.8.0.0  Single node warmstart due to hung I/O caused by cache deadlock
HU00832  7.4.0.6  Automatic licensed feature activation fails for 6099 machine type
HU00832  7.5.0.3  Automatic licensed feature activation fails for 6099 machine type
HU00833  7.5.0.3  Single node warmstart when the mkhost CLI command is run without the -iogrp flag
HU00836  7.4.0.7  Wrong volume copy may be taken offline in a timing window when metadata corruption is detected on a Thin Provisioned Volume and a node warmstart happens at the same time
HU00838  7.6.0.0  FlashCopy volume offline due to a cache flush issue
HU00840  7.5.0.4  Node warmstarts when Spectrum Virtualize iSCSI target receives garbled packets
HU00840  7.6.0.0  Node warmstarts when Spectrum Virtualize iSCSI target receives garbled packets
HU00841  7.5.0.3  Multiple node warmstarts leading to loss of access to data when changing a volume throttle rate to a value of more than 10000 IOPS or 40MBps
HU00842  7.6.0.0  Unable to clear bad blocks during an array resync process
HU00843  7.5.0.3  Single node warmstart when there is a high volume of Ethernet traffic on the link used for IP replication/iSCSI
HU00844  7.5.0.3  Multiple node warmstarts following installation of an additional SAS HIC
HU00845  7.4.0.6  Trial licenses for licensed feature activation are not available
HU00845  7.5.0.3  Trial licenses for licensed feature activation are not available
HU00886  7.7.0.0  Single node warmstart due to CLI startfcconsistgrp command timeout
HU00890  7.4.0.8  Technician port inittool redirects to SAT GUI
HU00890  7.5.0.5  Technician port inittool redirects to SAT GUI
HU00890  7.6.0.0  Technician port inittool redirects to SAT GUI
HU00891  7.3.0.13  The extent database defragmentation process can create duplicates whilst copying extent allocations resulting in a node warmstart to recover the database
HU00891  7.4.0.8  The extent database defragmentation process can create duplicates whilst copying extent allocations resulting in a node warmstart to recover the database
HU00891  7.5.0.7  The extent database defragmentation process can create duplicates whilst copying extent allocations resulting in a node warmstart to recover the database
HU00897  7.7.0.0  Spectrum Virtualize iSCSI target ignores maxrecvdatasegmentlength leading to host I/O error
HU00898  7.3.0.12  Potential data loss scenario when using compressed volumes on SVC and Storwize V7000 running software versions v7.3, v7.4 or v7.5. For more details refer to this Flash
HU00898  7.4.0.6  Potential data loss scenario when using compressed volumes on SVC and Storwize V7000 running software versions v7.3, v7.4 or v7.5. For more details refer to this Flash
HU00898  7.5.0.3  Potential data loss scenario when using compressed volumes on SVC and Storwize V7000 running software versions v7.3, v7.4 or v7.5. For more details refer to this Flash
HU00899  7.6.0.2  Node warmstart observed when 16G FC or 10G FCoE adapter detects heavy network congestion
HU00900  7.6.0.0  SVC FC driver warmstarts when it receives an unsupported but valid FC command
HU00901  7.3.0.12  Incorrect read cache hit percentage values reported in TPC
HU00902  7.5.0.3  Starting a Global Mirror Relationship or Consistency Group fails after changing a relationship to not use Change Volumes
HU00903  7.6.0.0  Emulex firmware paused causes single node warmstart
HU00904  7.4.0.6  Multiple node warmstarts leading to loss of access to data when the link used for IP Replication experiences packet loss and the data transfer rate occasionally drops to zero
HU00904  7.5.0.3  Multiple node warmstarts leading to loss of access to data when the link used for IP Replication experiences packet loss and the data transfer rate occasionally drops to zero
HU00905  7.4.0.6  The serial number value displayed in the GUI node properties dialog is incorrect
HU00905  7.5.0.3  The serial number value displayed in the GUI node properties dialog is incorrect
HU00906  7.8.0.0  When a compressed volume mirror copy is taken offline, write response times to the primary copy may reach prohibitively high levels leading to a loss of access to that volume
HU00908  7.6.0.0  Battery can charge too quickly on reconditioning and take the node offline
HU00909  7.5.0.9  Single node warmstart may occur when removing an MDisk group that was using EasyTier
HU00909  7.6.0.0  Single node warmstart may occur when removing an MDisk group that was using EasyTier
HU00910  7.7.0.0  Handling of I/O to compressed volumes can result in a timeout condition that is resolved by a node warmstart
HU00913  7.4.0.6  Multiple node warmstarts when using a Metro Mirror or Global Mirror volume that is greater than 128TB
HU00913  7.5.0.5  Multiple node warmstarts when using a Metro Mirror or Global Mirror volume that is greater than 128TB
HU00913  7.6.0.0  Multiple node warmstarts when using a Metro Mirror or Global Mirror volume that is greater than 128TB
HU00915  7.5.0.9  Loss of access to data when removing volumes associated with a GMCV relationship
HU00915  7.6.0.0  Loss of access to data when removing volumes associated with a GMCV relationship
HU00921  7.8.1.10  A node warmstart may occur when an MDisk state change gives rise to duplicate discovery processes
HU00921  8.2.0.0  A node warmstart may occur when an MDisk state change gives rise to duplicate discovery processes
HU00921  8.2.1.0  A node warmstart may occur when an MDisk state change gives rise to duplicate discovery processes
HU00922  7.4.0.6  Loss of access to data when moving volumes to another I/O group using the GUI
HU00922  7.5.0.5  Loss of access to data when moving volumes to another I/O group using the GUI
HU00922  7.6.0.0  Loss of access to data when moving volumes to another I/O group using the GUI
HU00923  7.4.0.6  Single node warmstart when receiving frame errors on 16Gb Fibre Channel adapters
HU00923  7.5.0.7  Single node warmstart when receiving frame errors on 16Gb Fibre Channel adapters
HU00923  7.6.0.0  Single node warmstart when receiving frame errors on 16Gb Fibre Channel adapters
HU00924  7.4.0.6  The Volumes by Pool display in the GUI shows incorrect EasyTier status
HU00927  7.5.0.8  Single node warmstart may occur while fast formatting a volume
HU00928  7.5.0.9  For certain I/O patterns a SAS firmware issue may lead to transport errors that become so prevalent that they cause a drive to be failed
HU00928  7.6.1.5  For certain I/O patterns a SAS firmware issue may lead to transport errors that become so prevalent that they cause a drive to be failed
HU00928  7.7.0.0  For certain I/O patterns a SAS firmware issue may lead to transport errors that become so prevalent that they cause a drive to be failed
HU00935  7.5.0.8  A single node warmstart may occur when memory is asynchronously allocated for an I/O and the underlying FlashCopy map has changed at exactly the same time
HU00935  7.6.0.1  A single node warmstart may occur when memory is asynchronously allocated for an I/O and the underlying FlashCopy map has changed at exactly the same time
HU00936  7.3.0.13  During the volume repair process the compression engine restores a larger amount of data than required leading to the volume being offline
HU00936  7.4.0.9  During the volume repair process the compression engine restores a larger amount of data than required leading to the volume being offline
HU00936  7.5.0.5  During the volume repair process the compression engine restores a larger amount of data than required leading to the volume being offline
HU00936  7.6.0.2  During the volume repair process the compression engine restores a larger amount of data than required leading to the volume being offline
HU00967  7.5.0.4  Multiple warmstarts due to FlashCopy background copy limitation putting both nodes in service state
HU00967  7.6.0.0  Multiple warmstarts due to FlashCopy background copy limitation putting both nodes in service state
HU00970  7.6.0.1  Node warmstart when upgrading to v7.6.0.0 with volumes using more than 65536 extents
HU00973  7.6.0.0  Single node warmstart when concurrently creating new volume host mappings
HU00975  7.5.0.7  Single node warmstart due to a race condition reordering of the background process when allocating I/O blocks
HU00975  7.6.0.0  Single node warmstart due to a race condition reordering of the background process when allocating I/O blocks
HU00980  7.3.0.13  Enhanced recovery procedure for compressed volumes affected by APAR HU00898. For more details refer to this Flash
HU00980  7.4.0.7  Enhanced recovery procedure for compressed volumes affected by APAR HU00898. For more details refer to this Flash
HU00980  7.5.0.5  Enhanced recovery procedure for compressed volumes affected by APAR HU00898. For more details refer to this Flash
HU00980  7.6.0.2  Enhanced recovery procedure for compressed volumes affected by APAR HU00898. For more details refer to this Flash
HU00982  7.4.0.11  Single node warmstart when software update is attempted on some DH8 nodes
HU00982  7.6.0.0  Single node warmstart when software update is attempted on some DH8 nodes
HU00989  7.6.0.2  Where an array is not experiencing any I/O, a drive initialisation may cause node warmstarts
HU00926  7.6.0.2  Where an array is not experiencing any I/O, a drive initialisation may cause node warmstarts
HU00990  7.4.0.8  A node warmstart on a cluster with Global Mirror secondary volumes can also result in a delayed response to hosts performing I/O to the Global Mirror primary volumes
HU00990  7.5.0.8  A node warmstart on a cluster with Global Mirror secondary volumes can also result in a delayed response to hosts performing I/O to the Global Mirror primary volumes
HU00990  7.6.1.4  A node warmstart on a cluster with Global Mirror secondary volumes can also result in a delayed response to hosts performing I/O to the Global Mirror primary volumes
HU00990  7.7.0.0  A node warmstart on a cluster with Global Mirror secondary volumes can also result in a delayed response to hosts performing I/O to the Global Mirror primary volumes
HU00991  7.5.0.5  Performance impact on read pre-fetch workloads
HU00991  7.6.0.0  Performance impact on read pre-fetch workloads
HU00992  7.6.0.0  Multiple node warmstarts and offline MDisk group during an array resync process
HU00993  7.6.0.0  Event ID 1052 and ID 1032 entries in the eventlog are not being cleared
HU00994  7.6.0.0  Continual VPD updates
HU00995  7.6.0.0  Problems with delayed I/O cause multiple node warmstarts
HU00996  7.6.0.0  T2 system recovery when running svctask chenclosure
HU00997  7.6.0.0  Single node warmstart on PCI events
HU00998  7.6.0.0  Support for Fujitsu Eternus DX100 S3 controller
HU00999  7.6.0.0  FlashCopy volumes may go offline during an upgrade
HU01000  7.5.0.10  SNMP and Call Home stop working when a node reboots and the Ethernet link is down
HU01000  7.6.0.0  SNMP and Call Home stop working when a node reboots and the Ethernet link is down
HU01001  7.6.0.0  CCU checker causes both nodes to warmstart
HU01002  7.6.0.0  16Gb HBA causes multiple node warmstarts when unexpected FC frame content received
HU01003  7.6.0.0  An extremely rapid increase in read IOs, on a single volume, can make it difficult for the cache component to free sufficient memory quickly enough to keep up, resulting in node warmstarts
HU01004  7.6.0.0  Multiple node warmstarts when space efficient volumes are running out of capacity
HU01005  7.6.0.0  Unable to remove ghost MDisks
HU01006  7.6.0.0  Volumes hosted on Hitachi controllers show high latency due to high I/O concurrency
HU01007  7.6.0.0  When, due to an issue within FlashCopy, a node warmstart occurs on one node in an I/O group that is the primary site for GMCV relationships, the other node in that I/O group may also warmstart
HU01007  7.7.0.0  When, due to an issue within FlashCopy, a node warmstart occurs on one node in an I/O group that is the primary site for GMCV relationships, the other node in that I/O group may also warmstart
HU01008  7.6.0.0  Single node warmstart during code upgrade
HU01009  7.6.0.0  Continual increase in fan speeds after replacement
HU01016  7.6.1.3  Node warmstarts can occur when a port scan is received on port 1260
HU01088  7.6.1.3  Node warmstarts can occur when a port scan is received on port 1260
HU01017  7.6.1.5  The results of CLI commands are sometimes not promptly presented in the GUI
HU01017  7.7.0.5  The results of CLI commands are sometimes not promptly presented in the GUI
HU01017  7.7.1.3  The results of CLI commands are sometimes not promptly presented in the GUI
HU01019  7.5.0.8  Customized grids view in the GUI is not being returned after page refreshes
HU01021  7.8.0.0  A fault in a backend controller can cause excessive path state changes leading to node warmstarts and offline volumes
HU01157  7.8.0.0  A fault in a backend controller can cause excessive path state changes leading to node warmstarts and offline volumes
HU01022  7.6.1.7  Fibre Channel adapter encountered a bit parity error resulting in a node warmstart
HU01022  7.7.1.5  Fibre Channel adapter encountered a bit parity error resulting in a node warmstart
HU01023  7.6.1.0  Remote Copy services do not transfer data after upgrade to v7.6
HU01024  7.4.0.10  A single node warmstart may occur when the SAS firmware's ECC checking detects a single bit error. The warmstart clears the error condition in the SAS chip
HU01024  7.5.0.9  A single node warmstart may occur when the SAS firmware's ECC checking detects a single bit error. The warmstart clears the error condition in the SAS chip
HU01024  7.6.1.5  A single node warmstart may occur when the SAS firmware's ECC checking detects a single bit error. The warmstart clears the error condition in the SAS chip
HU01024  7.7.0.3  A single node warmstart may occur when the SAS firmware's ECC checking detects a single bit error. The warmstart clears the error condition in the SAS chip
HU01024  7.7.1.1  A single node warmstart may occur when the SAS firmware's ECC checking detects a single bit error. The warmstart clears the error condition in the SAS chip
HU01027  7.6.0.4  Single node warmstart, or unresponsive GUI, when creating compressed volumes
HU01028  7.5.0.8  Processing of lsnodebootdrive output may adversely impact management GUI performance
HU01028  7.6.1.3  Processing of lsnodebootdrive output may adversely impact management GUI performance
HU01029  7.3.0.13  Where a boot drive has been replaced with a new unformatted one, on a DH8 node, the node may warmstart when the user logs in as superuser to the CLI via its service IP or logs in to the node via the service GUI. Additionally, where the node is the config node, this may happen when the user logs in as superuser to the cluster via the CLI or management GUI
HU01029  7.4.0.9  Where a boot drive has been replaced with a new unformatted one, on a DH8 node, the node may warmstart when the user logs in as superuser to the CLI via its service IP or logs in to the node via the service GUI. Additionally, where the node is the config node, this may happen when the user logs in as superuser to the cluster via the CLI or management GUI
HU01029  7.5.0.7  Where a boot drive has been replaced with a new unformatted one, on a DH8 node, the node may warmstart when the user logs in as superuser to the CLI via its service IP or logs in to the node via the service GUI. Additionally, where the node is the config node, this may happen when the user logs in as superuser to the cluster via the CLI or management GUI
HU01029  7.6.0.4  Where a boot drive has been replaced with a new unformatted one, on a DH8 node, the node may warmstart when the user logs in as superuser to the CLI via its service IP or logs in to the node via the service GUI. Additionally, where the node is the config node, this may happen when the user logs in as superuser to the cluster via the CLI or management GUI
HU01030  7.5.0.8  Incremental FlashCopy always requires a full copy
HU01030  7.6.1.3  Incremental FlashCopy always requires a full copy
HU01032  7.6.0.2  Batteries going online and offline can take a node offline
HU01033  7.5.0.6  After upgrade to v7.5.0.5 both nodes warmstart
HU01034  7.6.0.3  Single node warmstart stalls upgrade
HU01039  7.7.0.0  When volumes, which are still in a relationship, are forcefully removed, a node may experience warmstarts
HU01042  7.5.0.9  Single node warmstart due to 16Gb HBA firmware behaviour
HU01042  7.6.1.3  Single node warmstart due to 16Gb HBA firmware behaviour
HU01043  7.3.0.13  Long pause when upgrading
HU01043  7.6.1.0  Long pause when upgrading
HU01046  7.5.0.8  Free capacity is tracked using a count of free extents. If a child pool is shrunk the counter can wrap causing incorrect free capacity to be reported
HU01046  7.6.1.4  Free capacity is tracked using a count of free extents. If a child pool is shrunk the counter can wrap causing incorrect free capacity to be reported
HU01050  7.6.1.6  DRAID rebuild incorrectly reports event code 988300
HU01050  7.7.0.5  DRAID rebuild incorrectly reports event code 988300
HU01050  7.7.1.1  DRAID rebuild incorrectly reports event code 988300
HU01051  7.4.0.8  Large increase in response time of Global Mirror primary volumes when replicating large amounts of data concurrently to secondary cluster
HU01051  7.5.0.7  Large increase in response time of Global Mirror primary volumes when replicating large amounts of data concurrently to secondary cluster
HU01051  7.6.1.1  Large increase in response time of Global Mirror primary volumes when replicating large amounts of data concurrently to secondary cluster
HU01052  7.5.0.8  GUI operation with large numbers of volumes may adversely impact performance
HU01052  7.6.1.3  GUI operation with large numbers of volumes may adversely impact performance
HU01053  7.5.0.8  An issue in the drive automanage process during a replacement may result in a Tier 2 recovery
HU01053  7.6.1.3  An issue in the drive automanage process during a replacement may result in a Tier 2 recovery
HU01056  7.5.0.7  Both nodes in the same I/O group warmstart when using vVols
HU01056  7.6.0.4  Both nodes in the same I/O group warmstart when using vVols
HU01057  7.8.1.0  Slow GUI performance for some pages as the lsnodebootdrive command generates unexpected output
HU01058  7.5.0.7  Multiple node warmstarts may occur when volumes that are part of FlashCopy maps go offline (e.g. due to insufficient space)
HU01059  7.5.0.8  When a tier in a storage pool runs out of free extents EasyTier can adversely affect performance
HU01059  7.6.1.3  When a tier in a storage pool runs out of free extents EasyTier can adversely affect performance
HU01060  7.5.0.8  Prior warmstarts, perhaps due to a hardware error, can induce a dormant state within the FlashCopy code that may result in further warmstarts
HU01060  7.6.1.4  Prior warmstarts, perhaps due to a hardware error, can induce a dormant state within the FlashCopy code that may result in further warmstarts
HU01060  7.7.0.0  Prior warmstarts, perhaps due to a hardware error, can induce a dormant state within the FlashCopy code that may result in further warmstarts
HU01062  7.5.0.9  Tier 2 recovery may occur when max replication delay is used and remote copy I/O is delayed
HU01062  7.6.1.1  Tier 2 recovery may occur when max replication delay is used and remote copy I/O is delayed
HU01063  7.6.1.6  3PAR controllers do not support OTUR commands resulting in device port exclusions
HU01063  7.7.1.1  3PAR controllers do not support OTUR commands resulting in device port exclusions
HU01064  7.5.0.9  Management GUI incorrectly displays FC mappings that are part of GMCV relationships
HU01064  7.6.1.3  Management GUI incorrectly displays FC mappings that are part of GMCV relationships
HU01067  7.5.0.8  In a HyperSwap topology, where host I/O to a volume is being directed to both volume copies, for specific workload characteristics, I/O received within a small timing window could cause warmstarts on two nodes within separate I/O groups
HU01067  7.6.1.1  In a HyperSwap topology, where host I/O to a volume is being directed to both volume copies, for specific workload characteristics, I/O received within a small timing window could cause warmstarts on two nodes within separate I/O groups
HU01069  7.6.0.4  After upgrade from v7.5 or earlier to v7.6.0 or later all nodes may warmstart at the same time resulting in a Tier 2 recovery
HU01069  7.7.0.0  After upgrade from v7.5 or earlier to v7.6.0 or later all nodes may warmstart at the same time resulting in a Tier 2 recovery
HU01070  7.5.0.8  Increased preparation delay when FlashCopy Manager initiates a backup. This does not impact the performance of the associated data transfer
HU01072  7.5.0.8  In certain configurations throttling too much may result in dropped IOs, which can lead to a single node warmstart
HU01072  7.6.1.4  In certain configurations throttling too much may result in dropped IOs, which can lead to a single node warmstart
HU01073  7.5.0.7  SVC CG8 nodes have internal SSDs but these are not displayed in the internal storage page
HU01073  7.6.1.1  SVC CG8 nodes have internal SSDs but these are not displayed in the internal storage page
HU01074  7.5.0.9  An unresponsive testemail command (possibly due to a congested network) may result in a single node warmstart
HU01074  7.6.1.5  An unresponsive testemail command (possibly due to a congested network) may result in a single node warmstart
HU01074  7.7.0.0  An unresponsive testemail command (possibly due to a congested network) may result in a single node warmstart
HU01075  7.7.0.0  Multiple node warmstarts can occur due to an unstable Remote Copy domain after an upgrade to v7.6.0
HU01076  7.6.1.3  Where hosts share volumes using a particular reservation method, if the maximum number of reservations is exceeded, this may result in a single node warmstart
HU01078  7.6.1.5  When the rmnode command is run it removes persistent reservation data to prevent a stuck reservation. MS Windows and Hyper-V cluster design constantly monitors the reservation table and takes the associated volume offline whilst recovering cluster membership. This can result in a brief outage at the host level
HU01078  7.7.0.0  When the rmnode command is run it removes persistent reservation data to prevent a stuck reservation. MS Windows and Hyper-V cluster design constantly monitors the reservation table and takes the associated volume offline whilst recovering cluster membership. This can result in a brief outage at the host level
HU01080  7.5.0.8  Single node warmstart due to an I/O timeout in cache
HU01080  7.6.1.3  Single node warmstart due to an I/O timeout in cache
HU01081  7.5.0.8  When removing multiple nodes from a cluster a remaining node may warmstart
HU01081  7.6.1.3  When removing multiple nodes from a cluster a remaining node may warmstart
HU01082  7.5.0.10  A limitation in the RAID anti-deadlock page reservation process may lead to an MDisk group going offline
HU01082  7.6.1.5  A limitation in the RAID anti-deadlock page reservation process may lead to an MDisk group going offline
HU01082  7.7.0.0  A limitation in the RAID anti-deadlock page reservation process may lead to an MDisk group going offline
HU01086  7.6.1.1  SVC reports incorrect SCSI TPGS data in an 8-node cluster causing host multi-pathing software to receive errors which may result in host outages
HU01087  7.5.0.8  With a partnership stopped at the remote site, the stop button in the GUI at the local site will be disabled
HU01087  7.6.1.3  With a partnership stopped at the remote site, the stop button in the GUI at the local site will be disabled
HU01089  7.6.1.5  svcconfig backup fails when an I/O group name contains a hyphen
HU01089  7.7.0.0  svcconfig backup fails when an I/O group name contains a hyphen
HU01090  7.6.1.3  Dual node warmstart due to issue with the call home process
HU01091  7.6.1.3  An issue with the CAW lock processing, under high SCSI-2 reservation workloads, may cause node warmstarts
HU01092  7.6.1.3  Systems which have undergone particular upgrade paths may be blocked from upgrading to v7.6
HU01094  7.5.0.8  Single node warmstart due to rare resource locking contention
HU01094  7.6.1.3  Single node warmstart due to rare resource locking contention
HU01096  7.5.0.8  Batteries may be seen to continuously recondition
HU01096  7.6.1.4  Batteries may be seen to continuously recondition
HU01096  7.7.0.0  Batteries may be seen to continuously recondition
HU01097  7.4.0.10  For a small number of node warmstarts the SAS registers retain incorrect values, rendering the debug information invalid
HU01097  7.5.0.9  For a small number of node warmstarts the SAS registers retain incorrect values, rendering the debug information invalid
HU01097  7.6.1.5  For a small number of node warmstarts the SAS registers retain incorrect values, rendering the debug information invalid
HU01097  7.7.0.3  For a small number of node warmstarts the SAS registers retain incorrect values, rendering the debug information invalid
HU01098  7.6.1.8  Some older backend controller code levels do not support C2 commands resulting in 1370 entries in the Event Log for every detectmdisk
HU01098  7.7.1.7  Some older backend controller code levels do not support C2 commands resulting in 1370 entries in the Event Log for every detectmdisk
HU01098  7.8.0.0  Some older backend controller code levels do not support C2 commands resulting in 1370 entries in the Event Log for every detectmdisk
HU01100  7.6.1.3  License information not showing in the GUI after upgrade to v7.6.0.3
HU01103  7.6.1.1  A specific drive type may insufficiently report media events causing a delay to failure handling
HU01104  7.6.1.4  When using GMCV relationships, if a node in an I/O group loses communication with its partner it may warmstart
HU01104  7.7.0.0  When using GMCV relationships, if a node in an I/O group loses communication with its partner it may warmstart
HU01109  7.6.1.6  Multiple nodes can experience a lease expiry when an FC port is having communications issues
HU01109  7.7.0.5  Multiple nodes can experience a lease expiry when an FC port is having communications issues
HU01109  7.7.1.1  Multiple nodes can experience a lease expiry when an FC port is having communications issues
HU01110  7.5.0.9  Spectrum Virtualize supports SSH connections using RC4-based ciphers
HU01110  7.6.1.5  Spectrum Virtualize supports SSH connections using RC4-based ciphers
HU01110  7.7.0.0  Spectrum Virtualize supports SSH connections using RC4-based ciphers
HU01112  7.6.1.3  When upgrading, the quorum lease times are not updated correctly which may cause lease expiries on both nodes
HU01118  7.6.1.3  Due to a firmware issue both nodes in a V7000 Gen 2 may be powered off
HU01118  7.7.1.1  Due to a firmware issue both nodes in a V7000 Gen 2 may be powered off
HU01140  7.5.0.9  EasyTier may unbalance the workloads on MDisks using specific Nearline SAS drives due to incorrect thresholds for their performance
HU01140  7.6.1.5  EasyTier may unbalance the workloads on MDisks using specific Nearline SAS drives due to incorrect thresholds for their performance
HU01140  7.7.0.3  EasyTier may unbalance the workloads on MDisks using specific Nearline SAS drives due to incorrect thresholds for their performance
HU01140  7.7.1.1  EasyTier may unbalance the workloads on MDisks using specific Nearline SAS drives due to incorrect thresholds for their performance
HU01141  7.5.0.9  Node warmstart (possibly due to a network problem) when a mkippartnership CLI command is issued. This may lead to loss of the config node, requiring a Tier 2 recovery
HU01141  7.6.1.5  Node warmstart (possibly due to a network problem) when a mkippartnership CLI command is issued. This may lead to loss of the config node, requiring a Tier 2 recovery
HU01141  7.7.0.3  Node warmstart (possibly due to a network problem) when a mkippartnership CLI command is issued. This may lead to loss of the config node, requiring a Tier 2 recovery
HU01141  7.7.1.1  Node warmstart (possibly due to a network problem) when a mkippartnership CLI command is issued. This may lead to loss of the config node, requiring a Tier 2 recovery
HU01142  7.6.1.4  Single node warmstart due to 16Gb HBA firmware receiving invalid FC frames
HU01143  7.6.1.4  Where nodes are missing config files some services will be prevented from starting
HU01143  7.7.0.0  Where nodes are missing config files some services will be prevented from starting
HU01144  7.5.0.9  Single node warmstart on the config node due to GUI contention
HU01144  7.6.1.4  Single node warmstart on the config node due to GUI contention
HU01144  7.7.0.0  Single node warmstart on the config node due to GUI contention
HU01155  7.7.1.1  When an lsvdisklba or lsmdisklba command is invoked for an MDisk with a backend issue, a node warmstart may occur
HU01156  7.7.0.0  Single node warmstart due to an invalid FCoE frame from an HP-UX host
HU01165  7.6.1.4  When an SE volume goes offline both nodes may experience multiple warmstarts and go to service state
HU01165  7.7.0.0  When an SE volume goes offline both nodes may experience multiple warmstarts and go to service state
HU01177  7.8.0.0  A small timing window issue exists where a node warmstart or power failure can lead to repeated warmstarts of that node until a node rescue is performed
HU01178  7.6.1.5  Battery incorrectly reports zero percent charged
HU01180  7.6.1.4  When creating a snapshot on an ESX host, using vVols, a Tier 2 recovery may occur
HU01180  7.7.0.0  When creating a snapshot on an ESX host, using vVols, a Tier 2 recovery may occur
HU01181  7.6.1.4  Compressed volumes larger than 96 TiB may experience a loss of access to the volume. For more details refer to this Flash
HU01181  7.7.0.0  Compressed volumes larger than 96 TiB may experience a loss of access to the volume. For more details refer to this Flash
HU01182  7.6.1.5  Node warmstarts due to 16Gb HBA firmware receiving an invalid SCSI TUR command
HU01182  7.7.0.3  Node warmstarts due to 16Gb HBA firmware receiving an invalid SCSI TUR command
HU01182  7.7.1.1  Node warmstarts due to 16Gb HBA firmware receiving an invalid SCSI TUR command
HU01183  7.6.1.5  Node warmstart due to 16Gb HBA firmware entering a rare deadlock condition in its ELS frame handling
HU01183  7.7.0.3  Node warmstart due to 16Gb HBA firmware entering a rare deadlock condition in its ELS frame handling
HU01184  7.6.1.5  When removing multiple MDisks node warmstarts may occur
HU01184  7.7.0.5  When removing multiple MDisks node warmstarts may occur
HU01184  7.7.1.1  When removing multiple MDisks node warmstarts may occur
HU01185  7.5.0.10  iSCSI target closes connection when there is a mismatch in sequence number
HU01185  7.6.1.5  iSCSI target closes connection when there is a mismatch in sequence number
HU01185  7.7.0.5  iSCSI target closes connection when there is a mismatch in sequence number
HU01185  7.7.1.1  iSCSI target closes connection when there is a mismatch in sequence number
HU01186  7.7.0.0  Volumes going offline briefly may disrupt the operation of Remote Copy leading to a loss of access by hosts
HU01187  7.6.1.6  Circumstances can arise where more than one array rebuild operation can share the same CPU core resulting in extended completion times
HU01187  7.7.0.5  Circumstances can arise where more than one array rebuild operation can share the same CPU core resulting in extended completion times
HU01187  7.7.1.1  Circumstances can arise where more than one array rebuild operation can share the same CPU core resulting in extended completion times
HU01188  7.7.0.0  Quorum lease times are not set correctly impacting system availability
HU01189  7.7.0.0  Improvement to DRAID dependency calculation when handling multiple drive failures
HU011908.1.1.0Where a controller, which has been assigned to a specific site, has some logins intentionally removed then the system can continue to display the controller as degraded even when the DMP has been followed and errors fixed
HU011927.7.0.1Some V7000 gen1 systems have an unexpected WWNN value which can cause a single node warmstart when upgrading to v7.7
HU011937.6.1.7A drive failure whilst an array rebuild is in progress can lead to both nodes in an I/O group warmstarting
HU011937.7.0.5A drive failure whilst an array rebuild is in progress can lead to both nodes in an I/O group warmstarting
HU011937.7.1.5A drive failure whilst an array rebuild is in progress can lead to both nodes in an I/O group warmstarting
HU011937.8.0.0A drive failure whilst an array rebuild is in progress can lead to both nodes in an I/O group warmstarting
HU011947.6.1.5A single node warmstart may occur if CLI commands are received from the VASA provider in very rapid succession. This is caused by a deadlock condition which prevents the subsequent CLI command from completing
HU011947.7.0.3A single node warmstart may occur if CLI commands are received from the VASA provider in very rapid succession. This is caused by a deadlock condition which prevents the subsequent CLI command from completing
HU011947.7.1.1A single node warmstart may occur if CLI commands are received from the VASA provider in very rapid succession. This is caused by a deadlock condition which prevents the subsequent CLI command from completing
HU011987.6.1.5Running the Comprestimator svctask analyzevdiskbysystem command may cause the config node to warmstart
HU011987.7.0.5Running the Comprestimator svctask analyzevdiskbysystem command may cause the config node to warmstart
HU011987.7.1.1Running the Comprestimator svctask analyzevdiskbysystem command may cause the config node to warmstart
HU012087.7.0.2After upgrading to v7.7 or later from v7.5 or earlier and then creating a DRAID array, with a node reset, the system may encounter repeated node warmstarts which will require a Tier 3 recovery
HU012087.7.1.1After upgrading to v7.7 or later from v7.5 or earlier and then creating a DRAID array, with a node reset, the system may encounter repeated node warmstarts which will require a Tier 3 recovery
HU012098.3.1.7It is possible for the Fibre Channel driver to be offered an unsupported length of data resulting in a node warmstart
HU012098.4.0.7It is possible for the Fibre Channel driver to be offered an unsupported length of data resulting in a node warmstart
HU012098.5.0.0It is possible for the Fibre Channel driver to be offered an unsupported length of data resulting in a node warmstart
HU012107.6.1.5A small number of systems have broken, or disabled, TPMs. For these systems the generation of a new master key may fail preventing the system joining a cluster
HU012107.7.0.3A small number of systems have broken, or disabled, TPMs. For these systems the generation of a new master key may fail preventing the system joining a cluster
HU012107.7.1.1A small number of systems have broken, or disabled, TPMs. For these systems the generation of a new master key may fail preventing the system joining a cluster
HU012127.5.0.10GUI displays an incorrect timezone description for Moscow
HU012127.6.1.5GUI displays an incorrect timezone description for Moscow
HU012127.7.0.3GUI displays an incorrect timezone description for Moscow
HU012137.7.0.5The LDAP password is visible in the auditlog
HU012137.8.0.0The LDAP password is visible in the auditlog
HU012147.6.1.5GUI and snap missing EasyTier heatmap information
HU012147.7.0.5GUI and snap missing EasyTier heatmap information
HU012147.7.1.1GUI and snap missing EasyTier heatmap information
HU012197.6.1.6Single node warmstart due to an issue in the handling of ECC errors within 16G HBA firmware
HU012197.7.0.5Single node warmstart due to an issue in the handling of ECC errors within 16G HBA firmware
HU012197.7.1.1Single node warmstart due to an issue in the handling of ECC errors within 16G HBA firmware
HU012207.8.1.0Changing the type of a RC consistency group when a volume in a subordinate relationship is offline will cause a Tier 2 recovery
HU012217.6.1.6Node warmstarts due to an issue with the state machine transition in 16Gb HBA firmware
HU012217.7.0.5Node warmstarts due to an issue with the state machine transition in 16Gb HBA firmware
HU012217.7.1.1Node warmstarts due to an issue with the state machine transition in 16Gb HBA firmware
HU012228.6.3.0FlashCopy entries in the eventlog always have an object ID of 0, rather than showing the correct object ID
HU012228.7.0.0FlashCopy entries in the eventlog always have an object ID of 0, rather than showing the correct object ID
HU012237.6.1.5The handling of a rebooted node's return to the cluster can occasionally become delayed, resulting in a stoppage of inter-cluster relationships
HU012237.7.0.5The handling of a rebooted node's return to the cluster can occasionally become delayed, resulting in a stoppage of inter-cluster relationships
HU012237.7.1.5The handling of a rebooted node's return to the cluster can occasionally become delayed, resulting in a stoppage of inter-cluster relationships
HU012237.8.0.0The handling of a rebooted node's return to the cluster can occasionally become delayed, resulting in a stoppage of inter-cluster relationships
HU012257.6.1.7Node warmstarts due to inconsistencies arising from the way cache interacts with compression
HU012257.7.1.6Node warmstarts due to inconsistencies arising from the way cache interacts with compression
HU012257.8.0.2Node warmstarts due to inconsistencies arising from the way cache interacts with compression
HU013307.6.1.7Node warmstarts due to inconsistencies arising from the way cache interacts with compression
HU013307.7.1.6Node warmstarts due to inconsistencies arising from the way cache interacts with compression
HU013307.8.0.2Node warmstarts due to inconsistencies arising from the way cache interacts with compression
HU014127.6.1.7Node warmstarts due to inconsistencies arising from the way cache interacts with compression
HU014127.7.1.6Node warmstarts due to inconsistencies arising from the way cache interacts with compression
HU014127.8.0.2Node warmstarts due to inconsistencies arising from the way cache interacts with compression
HU012267.6.1.6Changing max replication delay from the default to a small non-zero number can cause hung IOs leading to multiple node warmstarts and a loss of access
HU012267.7.0.5Changing max replication delay from the default to a small non-zero number can cause hung IOs leading to multiple node warmstarts and a loss of access
HU012267.7.1.3Changing max replication delay from the default to a small non-zero number can cause hung IOs leading to multiple node warmstarts and a loss of access
HU012277.5.0.10High volumes of events may cause the email notifications to become stalled
HU012277.6.1.5High volumes of events may cause the email notifications to become stalled
HU012277.7.0.5High volumes of events may cause the email notifications to become stalled
HU012277.7.1.3High volumes of events may cause the email notifications to become stalled
HU012277.8.1.0High volumes of events may cause the email notifications to become stalled
HU012287.6.1.8Automatic T3 recovery may fail due to the handling of quorum registration generating duplicate entries
HU012287.7.1.7Automatic T3 recovery may fail due to the handling of quorum registration generating duplicate entries
HU012287.8.0.0Automatic T3 recovery may fail due to the handling of quorum registration generating duplicate entries
HU012297.7.1.7The DMP for a 3105 event does not identify the correct problem canister
HU012297.8.0.0The DMP for a 3105 event does not identify the correct problem canister
HU012307.8.0.0A host aborting an outstanding logout command can lead to a single node warmstart
HU012347.6.1.6After upgrade to 7.6 or later iSCSI hosts may incorrectly be shown as offline in the CLI
HU012347.7.0.5After upgrade to 7.6 or later iSCSI hosts may incorrectly be shown as offline in the CLI
HU012347.7.1.3After upgrade to 7.6 or later iSCSI hosts may incorrectly be shown as offline in the CLI
HU012388.4.0.0The mishandling of performance stats may occasionally result in some entries being overwritten
HU012407.5.0.9For some volumes the first write I/O, after a significant period (>120 sec) of inactivity, may experience a slightly elevated response time
HU012407.6.1.5For some volumes the first write I/O, after a significant period (>120 sec) of inactivity, may experience a slightly elevated response time
HU012407.7.0.0For some volumes the first write I/O, after a significant period (>120 sec) of inactivity, may experience a slightly elevated response time
HU012447.7.1.1When a node is transitioning from offline to online it is possible for excessive CPU time to be used on another node in the cluster which may lead to a single node warmstart
HU012457.5.0.11Making any config change that may interact with the primary change volume of a GMCV relationship, whilst data is being actively copied, can result in a node warmstart
HU012457.6.1.6Making any config change that may interact with the primary change volume of a GMCV relationship, whilst data is being actively copied, can result in a node warmstart
HU012457.7.0.0Making any config change that may interact with the primary change volume of a GMCV relationship, whilst data is being actively copied, can result in a node warmstart
HU012477.6.1.7When a FlashCopy consistency group is stopped more than once in rapid succession a node warmstart may result
HU012477.7.0.5When a FlashCopy consistency group is stopped more than once in rapid succession a node warmstart may result
HU012477.7.1.4When a FlashCopy consistency group is stopped more than once in rapid succession a node warmstart may result
HU012477.8.0.0When a FlashCopy consistency group is stopped more than once in rapid succession a node warmstart may result
HU012507.7.1.1When using lsvdisklba to find a bad block on a compressed volume, the volume can go offline
HU012517.6.1.6When following the DMP for a 1685 event, if the option indicating that a drive reseat has already been attempted is selected, the process to replace a drive is not started
HU012517.7.1.3When following the DMP for a 1685 event, if the option indicating that a drive reseat has already been attempted is selected, the process to replace a drive is not started
HU012527.8.1.0Where an SVC is presenting storage from an 8-node V7000, an upgrade to that V7000 can pause I/O long enough for the SVC to take related MDisks offline
HU012547.6.1.7A fluctuation of input AC power can cause a 584 error on a node
HU012547.7.1.5A fluctuation of input AC power can cause a 584 error on a node
HU012547.8.0.0A fluctuation of input AC power can cause a 584 error on a node
HU012557.7.1.7The presence of a faulty SAN component can delay lease messages between nodes leading to a cluster-wide lease expiry and consequential loss of access
HU012557.8.1.2The presence of a faulty SAN component can delay lease messages between nodes leading to a cluster-wide lease expiry and consequential loss of access
HU012558.1.0.0The presence of a faulty SAN component can delay lease messages between nodes leading to a cluster-wide lease expiry and consequential loss of access
HU012397.7.1.7The presence of a faulty SAN component can delay lease messages between nodes leading to a cluster-wide lease expiry and consequential loss of access
HU012397.8.1.2The presence of a faulty SAN component can delay lease messages between nodes leading to a cluster-wide lease expiry and consequential loss of access
HU012398.1.0.0The presence of a faulty SAN component can delay lease messages between nodes leading to a cluster-wide lease expiry and consequential loss of access
HU015867.7.1.7The presence of a faulty SAN component can delay lease messages between nodes leading to a cluster-wide lease expiry and consequential loss of access
HU015867.8.1.2The presence of a faulty SAN component can delay lease messages between nodes leading to a cluster-wide lease expiry and consequential loss of access
HU015868.1.0.0The presence of a faulty SAN component can delay lease messages between nodes leading to a cluster-wide lease expiry and consequential loss of access
HU012577.7.1.3Large (>1MB) write IOs to volumes can lead to a hung I/O condition resulting in node warmstarts
HU012587.4.0.11A compressed volume copy will result in an unexpected 1862 message when site/node fails over in a stretched cluster configuration
HU012587.5.0.10A compressed volume copy will result in an unexpected 1862 message when site/node fails over in a stretched cluster configuration
HU012587.6.1.6A compressed volume copy will result in an unexpected 1862 message when site/node fails over in a stretched cluster configuration
HU012587.7.0.4A compressed volume copy will result in an unexpected 1862 message when site/node fails over in a stretched cluster configuration
HU012587.7.1.1A compressed volume copy will result in an unexpected 1862 message when site/node fails over in a stretched cluster configuration
HU012627.6.1.7Cached data for a HyperSwap volume may only be destaged from a single node in an I/O group
HU012647.8.0.0Node warmstart due to an issue in the compression optimisation process
HU012677.7.1.7An unusual interaction between Remote Copy and FlashCopy can lead to both nodes in an I/O group warmstarting
HU012677.8.0.0An unusual interaction between Remote Copy and FlashCopy can lead to both nodes in an I/O group warmstarting
HU012687.8.0.0Upgrade to 7.7.x fails on Storwize systems in the replication layer where a T3 recovery was performed in the past
HU012697.7.0.5A rare timing conflict between two processes may lead to a node warmstart
HU012697.7.1.5A rare timing conflict between two processes may lead to a node warmstart
HU012697.8.0.0A rare timing conflict between two processes may lead to a node warmstart
HU012727.7.1.2Replacing a drive in a system with a DRAID array can result in T2 recovery warmstarts. For more details refer to this Flash
HU012747.7.0.0DRAID lsarraysyncprogress command may appear to show array synchronisation stuck at 99%
HU012767.8.1.8An issue in the handling of debug data from the FC adapter can cause a node warmstart
HU012768.2.0.0An issue in the handling of debug data from the FC adapter can cause a node warmstart
HU012768.2.1.0An issue in the handling of debug data from the FC adapter can cause a node warmstart
HU012927.7.0.5Under some circumstances the re-calculation of grains to clean can take too long after a FlashCopy done event has been sent resulting in a node warmstart
HU012927.7.1.3Under some circumstances the re-calculation of grains to clean can take too long after a FlashCopy done event has been sent resulting in a node warmstart
HU013047.8.0.0SSH authentication fails if multiple SSH keys are configured on the client
HU013097.8.1.0For FC logins, on a node that is online for more than 200 days, if a fabric event makes a login inactive then the node may be unable to re-establish the login
HU013207.8.0.0A rare timing condition can cause hung I/O leading to warmstarts on both nodes in an I/O group. Probability can be increased in the presence of failing drives.
HU013217.8.1.3Multi-node warmstarts may occur when changing the direction of a remote copy relationship whilst write I/O to the (former) primary volume is still occurring
HU013218.1.0.0Multi-node warmstarts may occur when changing the direction of a remote copy relationship whilst write I/O to the (former) primary volume is still occurring
HU013237.7.1.4Systems using Volume Mirroring that upgrade to v7.7.1.x and have a storage pool go offline may experience a node warmstart
HU013237.8.0.0Systems using Volume Mirroring that upgrade to v7.7.1.x and have a storage pool go offline may experience a node warmstart
HU013327.6.1.8Performance monitor and Spectrum Control show zero CPU utilisation for compression
HU013327.7.1.7Performance monitor and Spectrum Control show zero CPU utilisation for compression
HU013327.8.1.1Performance monitor and Spectrum Control show zero CPU utilisation for compression
HU013407.7.0.5A port translation issue between v7.5 or earlier and v7.7.0 or later requires a Tier 2 recovery to complete an upgrade
HU013407.7.1.5A port translation issue between v7.5 or earlier and v7.7.0 or later requires a Tier 2 recovery to complete an upgrade
HU013407.8.0.0A port translation issue between v7.5 or earlier and v7.7.0 or later requires a Tier 2 recovery to complete an upgrade
HU013468.1.0.0An unexpected error 1036 may appear in the event log even though a canister was never physically removed
HU013477.7.1.4During an upgrade to v7.7.1 a deadlock in node communications can occur leading to a timeout and node warmstarts
HU013477.8.0.0During an upgrade to v7.7.1 a deadlock in node communications can occur leading to a timeout and node warmstarts
HU013537.5.0.11CLI allows the input of carriage return characters into certain fields, after cluster creation, resulting in invalid cluster VPD and failed node adds
HU013537.6.1.6CLI allows the input of carriage return characters into certain fields, after cluster creation, resulting in invalid cluster VPD and failed node adds
HU013537.8.1.1CLI allows the input of carriage return characters into certain fields, after cluster creation, resulting in invalid cluster VPD and failed node adds
HU013707.8.0.0lsfabric command may not list all logins when it is used with parameters
HU013717.7.1.6A remote copy command related to HyperSwap may hang resulting in a warmstart of the config node
HU013717.8.1.0A remote copy command related to HyperSwap may hang resulting in a warmstart of the config node
HU013747.7.0.5Where an issue with Global Mirror causes excessive I/O delay, a timeout may not function resulting in a node warmstart
HU013747.7.1.4Where an issue with Global Mirror causes excessive I/O delay, a timeout may not function resulting in a node warmstart
HU013747.8.0.0Where an issue with Global Mirror causes excessive I/O delay, a timeout may not function resulting in a node warmstart
HU013797.7.1.4Resource leak in the handling of Read Intensive drives leads to offline volumes
HU013797.8.0.0Resource leak in the handling of Read Intensive drives leads to offline volumes
HU013817.7.1.4A rare timing issue in FlashCopy may lead to a node warmstarting repeatedly and then entering a service state
HU013817.8.0.0A rare timing issue in FlashCopy may lead to a node warmstarting repeatedly and then entering a service state
HU013827.7.1.5Mishandling of extent migration following a rmarray command can lead to multiple simultaneous node warmstarts with a loss of access
HU013827.8.0.1Mishandling of extent migration following a rmarray command can lead to multiple simultaneous node warmstarts with a loss of access
HU013857.7.1.7A warmstart may occur if a rmvolumecopy or rmrcrelationship command is issued on a volume while I/O is being forwarded to the associated copy
HU013857.8.1.3A warmstart may occur if a rmvolumecopy or rmrcrelationship command is issued on a volume while I/O is being forwarded to the associated copy
HU013858.1.0.0A warmstart may occur if a rmvolumecopy or rmrcrelationship command is issued on a volume while I/O is being forwarded to the associated copy
HU013867.7.1.3Where latency between sites is greater than 1ms, host write latency can be adversely impacted. This can be more likely in the presence of large I/O transfer sizes or high IOPS
HU013887.8.1.0Where a HyperSwap volume is the source of a FlashCopy mapping and the HyperSwap relationship is out of sync, a switch of direction will occur when the HyperSwap volume comes back online, and the FlashCopy operation may delay I/O leading to node warmstarts
HU013917.5.0.12Storwize systems may experience a warmstart due to an uncorrectable error in the SAS firmware
HU013917.6.1.8Storwize systems may experience a warmstart due to an uncorrectable error in the SAS firmware
HU013917.7.1.7Storwize systems may experience a warmstart due to an uncorrectable error in the SAS firmware
HU013917.8.1.1Storwize systems may experience a warmstart due to an uncorrectable error in the SAS firmware
HU015817.5.0.12Storwize systems may experience a warmstart due to an uncorrectable error in the SAS firmware
HU015817.6.1.8Storwize systems may experience a warmstart due to an uncorrectable error in the SAS firmware
HU015817.7.1.7Storwize systems may experience a warmstart due to an uncorrectable error in the SAS firmware
HU015817.8.1.1Storwize systems may experience a warmstart due to an uncorrectable error in the SAS firmware
HU013927.7.1.5Under certain rare conditions FC mappings not in a consistency group can be added to a special internal consistency group resulting in a Tier 2 recovery
HU013927.8.0.0Under certain rare conditions FC mappings not in a consistency group can be added to a special internal consistency group resulting in a Tier 2 recovery
HU013947.8.1.0Node warmstarts may occur on systems which are performing Global Mirror replication, due to a low-probability timing window
HU013957.8.1.0Malformed URLs sent by security scanners, whilst correctly discarded, can cause considerable exception logging on config nodes leading to performance degradation that can adversely affect remote copy
HU013968.1.0.0HBA firmware resources can become exhausted resulting in node warmstarts
HU013997.6.1.7For certain config nodes the CLI Help commands may not work
HU013997.7.0.5For certain config nodes the CLI Help commands may not work
HU013997.7.1.5For certain config nodes the CLI Help commands may not work
HU013997.8.0.0For certain config nodes the CLI Help commands may not work
HU014027.6.1.7Nodes can power down unexpectedly as they are unable to determine from their partner whether power is available
HU014027.7.0.5Nodes can power down unexpectedly as they are unable to determine from their partner whether power is available
HU014027.7.1.5Nodes can power down unexpectedly as they are unable to determine from their partner whether power is available
HU014027.8.0.0Nodes can power down unexpectedly as they are unable to determine from their partner whether power is available
HU014047.8.1.0A node warmstart may occur when a new volume is created using fast format and foreground I/O is submitted to the volume
HU014097.6.1.7Cisco Nexus 3000 switches at v5.0(3) have a defect which prevents a config node IP address changing in the event of a failover
HU014097.7.1.5Cisco Nexus 3000 switches at v5.0(3) have a defect which prevents a config node IP address changing in the event of a failover
HU014097.8.0.2Cisco Nexus 3000 switches at v5.0(3) have a defect which prevents a config node IP address changing in the event of a failover
HU014107.6.1.7An issue in the handling of FlashCopy map preparation can cause both nodes in an I/O group to be put into service state
HU014107.7.1.5An issue in the handling of FlashCopy map preparation can cause both nodes in an I/O group to be put into service state
HU014107.8.0.2An issue in the handling of FlashCopy map preparation can cause both nodes in an I/O group to be put into service state
HU014137.8.1.0Node warmstarts when establishing an FC partnership between a system on v7.7.1 or later and another system which in turn has a partnership to another system running v6.4.1 or earlier
HU014157.8.0.1When a V3700 with 1GE adapters is upgraded to v7.8.0.0 iSCSI hosts will lose access to volumes
HU014167.7.1.7ISL configuration activity may cause a cluster-wide lease expiry
HU014167.8.1.0ISL configuration activity may cause a cluster-wide lease expiry
HU014207.8.1.6An issue in DRAID can cause repeated node warmstarts in the circumstances of a degraded copyback operation to a drive
HU014208.1.1.0An issue in DRAID can cause repeated node warmstarts in the circumstances of a degraded copyback operation to a drive
HU014267.8.0.2Systems running v7.6.1 or earlier with compressed volumes that upgrade to v7.8.0 or later will have the upgrade fail when the first node warmstarts and enters a service state
HU014287.7.1.7Scheduling issue adversely affects performance resulting in node warmstarts
HU014287.8.1.0Scheduling issue adversely affects performance resulting in node warmstarts
HU014307.6.1.8Memory resource shortages in systems with 8GB of RAM can lead to node warmstarts
HU014307.7.1.7Memory resource shortages in systems with 8GB of RAM can lead to node warmstarts
HU014307.8.1.1Memory resource shortages in systems with 8GB of RAM can lead to node warmstarts
HU014327.6.1.7Node warmstart due to an accounting issue within the cache component
HU014327.7.1.5Node warmstart due to an accounting issue within the cache component
HU014327.8.0.2Node warmstart due to an accounting issue within the cache component
HU014347.6.0.0A node port can become excluded, when its login status changes, leading to a load imbalance across available local ports
HU014427.8.0.2Upgrading to v7.7.1.5 or v7.8.0.1 with encryption enabled will result in multiple Tier 2 recoveries and a loss of access
HU014457.7.1.9Systems with heavily used RAID-1 or RAID-10 arrays may experience a node warmstart
HU014457.8.1.0Systems with heavily used RAID-1 or RAID-10 arrays may experience a node warmstart
HU014467.8.1.6Where host workload overloads the back-end controller and VMware hosts are issuing ATS commands a race condition may be triggered leading to a node warmstart
HU014468.1.0.0Where host workload overloads the back-end controller and VMware hosts are issuing ATS commands a race condition may be triggered leading to a node warmstart
HU014477.5.0.9The management of FlashCopy grains during a restore process can miss some IOs
HU014477.6.1.7The management of FlashCopy grains during a restore process can miss some IOs
HU014477.7.0.5The management of FlashCopy grains during a restore process can miss some IOs
HU014548.1.0.0During an array rebuild a quiesce operation can become stalled leading to a node warmstart
HU014557.8.0.0VMware hosts with ATS enabled can see LUN disconnects to volumes when GMCV is used
HU014577.7.1.7In a hybrid V7000 cluster where one I/O group supports 10k volumes and another does not, some operations on volumes may incorrectly be denied in the GUI
HU014577.8.1.3In a hybrid V7000 cluster where one I/O group supports 10k volumes and another does not, some operations on volumes may incorrectly be denied in the GUI
HU014578.1.0.0In a hybrid V7000 cluster where one I/O group supports 10k volumes and another does not, some operations on volumes may incorrectly be denied in the GUI
HU014588.1.0.0A node warmstart may occur when hosts submit writes to Remote Copy secondary volumes (which are in a read-only mode)
HU014597.8.0.2The event log indicates incorrect enclosure type
HU014608.1.3.0If another drive fails during an array rebuild, the high processing demand in RAID for handling many medium errors during the rebuild can lead to a node warmstart
HU014628.1.1.0Environmental factors can trigger a protection mechanism that causes the SAS chip to freeze, resulting in a single node warmstart
HU014637.8.1.0SSH Forwarding is enabled on the SSH server
HU014667.7.1.7Stretched cluster and HyperSwap I/O routing does not work properly due to incorrect ALUA data
HU014667.8.1.0Stretched cluster and HyperSwap I/O routing does not work properly due to incorrect ALUA data
HU014677.7.1.7Failures in the handling of performance statistics files may lead to missing samples in Spectrum Control and other tools
HU014677.8.1.8Failures in the handling of performance statistics files may lead to missing samples in Spectrum Control and other tools
HU014678.1.0.0Failures in the handling of performance statistics files may lead to missing samples in Spectrum Control and other tools
HU014697.7.1.7Resource exhaustion in the iSCSI component can result in a node warmstart
HU014697.8.1.1Resource exhaustion in the iSCSI component can result in a node warmstart
HU014707.8.1.0T3 might fail during svcconfig recover -execute while running chemail if the email_machine_address contains a comma
HU014717.8.1.1Powering the system down using the GUI on a V5000 causes the fans to run at high speed while the system is offline but power is still applied to the enclosure
HU014727.8.1.6A locking issue in Global Mirror can cause a warmstart on the secondary cluster
HU014728.1.0.0A locking issue in Global Mirror can cause a warmstart on the secondary cluster
HU014737.7.1.6EasyTier migrates an excessive number of cold extents to an overloaded nearline array
HU014737.8.1.0EasyTier migrates an excessive number of cold extents to an overloaded nearline array
HU014747.7.1.6Host writes to a read-only secondary volume trigger I/O timeout warmstarts
HU014747.8.1.0Host writes to a read-only secondary volume trigger I/O timeout warmstarts
HU014767.8.1.6A remote copy relationship may suffer a loss of synchronisation when the relationship is renamed
HU014768.1.0.0A remote copy relationship may suffer a loss of synchronisation when the relationship is renamed
HU014777.7.1.7Due to the way enclosure data is read it is possible for a firmware mismatch between nodes to occur during an upgrade
HU014777.8.1.1Due to the way enclosure data is read it is possible for a firmware mismatch between nodes to occur during an upgrade
HU014797.6.1.8The handling of drive reseats can sometimes allow I/O to occur before the drive has been correctly failed resulting in offline MDisks
HU014797.7.1.6The handling of drive reseats can sometimes allow I/O to occur before the drive has been correctly failed resulting in offline MDisks
HU014797.8.1.0The handling of drive reseats can sometimes allow I/O to occur before the drive has been correctly failed resulting in offline MDisks
HU014807.6.1.8Under some circumstances the config node does not fail over properly when using IPv6 adversely affecting management access via GUI and CLI
HU014807.7.1.6Under some circumstances the config node does not fail over properly when using IPv6 adversely affecting management access via GUI and CLI
HU014807.8.1.0Under some circumstances the config node does not fail over properly when using IPv6 adversely affecting management access via GUI and CLI
HU014817.8.1.3A failed I/O can trigger HyperSwap to unexpectedly change the direction of the relationship leading to node warmstarts
HU014818.1.0.0A failed I/O can trigger HyperSwap to unexpectedly change the direction of the relationship leading to node warmstarts
HU014837.7.1.6mkdistributedarray command may get stuck in the prepare state. Any interaction with the volumes in that array will result in multiple warmstarts
HU014837.8.1.0mkdistributedarray command may get stuck in the prepare state. Any interaction with the volumes in that array will result in multiple warmstarts
HU014847.7.1.7During a RAID array rebuild there may be node warmstarts
HU014847.8.1.1During a RAID array rebuild there may be node warmstarts
HU014857.8.1.9When an SV1 node is started with only one PSU powered, powering up the other PSU will not extinguish the Power Fault LED. Note: To apply this fix (in new BMC firmware) each node will need to be power cycled (i.e. remove AC power and battery), one at a time, after the upgrade has completed
HU014858.1.3.6When an SV1 node is started with only one PSU powered, powering up the other PSU will not extinguish the Power Fault LED. Note: To apply this fix (in new BMC firmware) each node will need to be power cycled (i.e. remove AC power and battery), one at a time, after the upgrade has completed
HU014858.2.1.4When an SV1 node is started with only one PSU powered, powering up the other PSU will not extinguish the Power Fault LED. Note: To apply this fix (in new BMC firmware) each node will need to be power cycled (i.e. remove AC power and battery), one at a time, after the upgrade has completed
HU014877.7.1.6Small increase in read response time for source volumes with additional FlashCopy maps
HU014877.8.1.0Small increase in read response time for source volumes with additional FlashCopy maps
HU014887.6.1.8SAS transport errors on an enclosure slot have the potential to affect an adjacent slot leading to double drive failures
HU014887.7.1.7SAS transport errors on an enclosure slot have the potential to affect an adjacent slot leading to double drive failures
HU014887.8.0.0SAS transport errors on an enclosure slot have the potential to affect an adjacent slot leading to double drive failures
HU014907.6.1.8When attempting to add/remove multiple IQNs to/from a host the tables that record host-wwpn mappings can become inconsistent resulting in repeated node warmstarts across I/O groups
HU014907.7.1.7When attempting to add/remove multiple IQNs to/from a host the tables that record host-wwpn mappings can become inconsistent resulting in repeated node warmstarts across I/O groups
HU014907.8.1.3When attempting to add/remove multiple IQNs to/from a host the tables that record host-wwpn mappings can become inconsistent resulting in repeated node warmstarts across I/O groups
HU014908.1.0.0When attempting to add/remove multiple IQNs to/from a host the tables that record host-wwpn mappings can become inconsistent resulting in repeated node warmstarts across I/O groups
HU014927.8.1.8All ports of a 16Gb HBA can be affected when a single port is congested. This can lead to lease expiries if all ports used for inter-node communication are on the same FC adapter
HU014928.1.3.4All ports of a 16Gb HBA can be affected when a single port is congested. This can lead to lease expiries if all ports used for inter-node communication are on the same FC adapter
HU014928.2.1.0All ports of a 16Gb HBA can be affected when a single port is congested. This can lead to lease expiries if all ports used for inter-node communication are on the same FC adapter
HU020247.8.1.8All ports of a 16Gb HBA can be affected when a single port is congested. This can lead to lease expiries if all ports used for inter-node communication are on the same FC adapter
HU020248.1.3.4All ports of a 16Gb HBA can be affected when a single port is congested. This can lead to lease expiries if all ports used for inter-node communication are on the same FC adapter
HU020248.2.1.0All ports of a 16Gb HBA can be affected when a single port is congested. This can lead to lease expiries if all ports used for inter-node communication are on the same FC adapter
HU014948.1.2.0A change to the FC port mask may fail even though connectivity would be sufficient
HU014967.8.1.1SVC node type SV1 reports wrong FRU part number for compression accelerator
HU014977.8.1.0A drive can still be offline even though the error is showing as corrected in the Event Log
HU014987.5.0.13GUI may be exposed to CVE-2017-5638 (see Section 3.1)
HU014987.7.1.6GUI may be exposed to CVE-2017-5638 (see Section 3.1)
HU014987.8.1.0GUI may be exposed to CVE-2017-5638 (see Section 3.1)
HU014997.6.1.7When an offline volume copy comes back online, under rare conditions, the flushing process can cause the cache to enter an invalid state, delaying I/O, and resulting in node warmstarts
HU015007.7.1.6Node warmstarts can occur when the iSCSI Ethernet MTU is changed
HU015037.6.1.8When the 3PAR host type is set to legacy the round robin algorithm, used to select the MDisk port for I/O submission to 3PAR controllers, does not work correctly and I/O may be submitted to fewer controller ports, adversely affecting performance
HU015037.8.1.1When the 3PAR host type is set to legacy the round robin algorithm, used to select the MDisk port for I/O submission to 3PAR controllers, does not work correctly and I/O may be submitted to fewer controller ports, adversely affecting performance
HU015057.6.1.8A non-redundant drive experiencing many errors can be taken offline obstructing rebuild activity
HU015057.7.1.7A non-redundant drive experiencing many errors can be taken offline obstructing rebuild activity
HU015057.8.1.1A non-redundant drive experiencing many errors can be taken offline obstructing rebuild activity
HU015067.6.1.8Creating a volume copy with the -autodelete option can cause a timer scheduling issue leading to node warmstarts
HU015067.7.1.7Creating a volume copy with the -autodelete option can cause a timer scheduling issue leading to node warmstarts
HU015068.1.0.0Creating a volume copy with the -autodelete option can cause a timer scheduling issue leading to node warmstarts
HU015077.8.1.8Until the initial synchronisation process completes, high system latency may be experienced when a volume is created with two compressed copies or when a space-efficient copy is added to a volume with an existing compressed copy
HU015078.1.3.6Until the initial synchronisation process completes, high system latency may be experienced when a volume is created with two compressed copies or when a space-efficient copy is added to a volume with an existing compressed copy
HU015078.2.1.0Until the initial synchronisation process completes, high system latency may be experienced when a volume is created with two compressed copies or when a space-efficient copy is added to a volume with an existing compressed copy
HU015098.1.0.0Where a drive is generating medium errors, an issue in the handling of array rebuilds can result in an MDisk group being repeatedly taken offline
HU015127.8.1.8During a DRAID MDisk copy-back operation a miscalculation of the remaining work may cause a node warmstart
HU015128.1.1.0During a DRAID MDisk copy-back operation a miscalculation of the remaining work may cause a node warmstart
HU015167.7.1.1When node configuration data exceeds 8K in size some user defined settings may not be stored permanently resulting in node warmstarts
HU015197.7.1.7One PSU may silently fail leading to the possibility of a dual node reboot
HU015197.8.0.0One PSU may silently fail leading to the possibility of a dual node reboot
HU015207.8.1.1Where the system is being used as a secondary site for Remote Copy, the node may warmstart during an upgrade to v7.8.1
HU015218.1.0.0Remote Copy does not correctly handle STOP commands for relationships which may lead to node warmstarts
HU015228.1.0.0A node warmstart may occur when a Fibre Channel frame is received with an unexpected value for host login type
HU015237.8.1.8An issue with FC adapter initialisation can lead to a node warmstart
HU015238.2.0.0An issue with FC adapter initialisation can lead to a node warmstart
HU015238.2.1.0An issue with FC adapter initialisation can lead to a node warmstart
HU015247.8.1.6When a system loses input power, nodes will shut down until power is restored. If a node was in the process of creating a bad block for an MDisk, at the moment it shuts down, then there is a chance that the system will hit repeated Tier 2 recoveries when it powers back up
HU015248.1.0.0When a system loses input power, nodes will shut down until power is restored. If a node was in the process of creating a bad block for an MDisk, at the moment it shuts down, then there is a chance that the system will hit repeated Tier 2 recoveries when it powers back up
HU015257.8.1.3During an upgrade a resource locking issue in the compression component can cause a node to warmstart multiple times and become unavailable
HU015258.1.1.0During an upgrade a resource locking issue in the compression component can cause a node to warmstart multiple times and become unavailable
HU015287.7.1.7Both nodes may warmstart due to Sendmail throttling
HU015317.8.1.1Spectrum Control is unable to receive notifications from SVC/Storwize. Spectrum Control may experience an out-of-memory condition
HU015357.8.1.3An issue with Fibre Channel driver handling of command processing can result in a node warmstart
HU015458.1.0.0A locking issue in the stats collection process may result in a node warmstart
HU015497.6.1.8During a system upgrade HyperV-clustered hosts may experience a loss of access to any iSCSI connected volumes
HU015497.7.1.7During a system upgrade HyperV-clustered hosts may experience a loss of access to any iSCSI connected volumes
HU015497.8.1.3During a system upgrade HyperV-clustered hosts may experience a loss of access to any iSCSI connected volumes
HU015498.1.0.0During a system upgrade HyperV-clustered hosts may experience a loss of access to any iSCSI connected volumes
HU015508.1.0.0Removing a volume with -force while it is still receiving I/O from a host may lead to a node warmstart
HU015548.1.0.0Node warmstart may occur during a livedump collection
HU015557.5.0.12The system may generate duplicate WWPNs
HU015567.8.1.8The handling of memory pool usage by Remote Copy may lead to a node warmstart
HU015568.1.0.0The handling of memory pool usage by Remote Copy may lead to a node warmstart
HU015637.8.1.3Where an IBM SONAS host id is used it can under rare circumstances cause a warmstart
HU015638.1.0.0Where an IBM SONAS host id is used it can under rare circumstances cause a warmstart
HU015647.8.1.8The FlashCopy map cleaning process does not monitor grains correctly, which may cause FlashCopy maps to fail to stop
HU015648.2.0.2The FlashCopy map cleaning process does not monitor grains correctly, which may cause FlashCopy maps to fail to stop
HU015648.2.1.0The FlashCopy map cleaning process does not monitor grains correctly, which may cause FlashCopy maps to fail to stop
HU015667.7.1.7After upgrading, numerous 1370 errors are seen in the Event Log
HU015667.8.1.1After upgrading, numerous 1370 errors are seen in the Event Log
HU015697.6.1.8When compression utilisation is high the config node may exhibit longer I/O response times than non-config nodes
HU015697.7.1.7When compression utilisation is high the config node may exhibit longer I/O response times than non-config nodes
HU015697.8.1.3When compression utilisation is high the config node may exhibit longer I/O response times than non-config nodes
HU015698.1.0.0When compression utilisation is high the config node may exhibit longer I/O response times than non-config nodes
HU015707.8.1.1Reseating a drive in an array may cause the MDisk to go offline
HU015718.2.0.0An upgrade can become stalled due to a node warmstart
HU015718.2.1.0An upgrade can become stalled due to a node warmstart
HU015727.6.1.8SCSI 3 commands from unconfigured WWPNs may result in multiple warmstarts leading to a loss of access
HU015727.7.1.7SCSI 3 commands from unconfigured WWPNs may result in multiple warmstarts leading to a loss of access
HU015727.8.1.8SCSI 3 commands from unconfigured WWPNs may result in multiple warmstarts leading to a loss of access
HU015728.1.0.0SCSI 3 commands from unconfigured WWPNs may result in multiple warmstarts leading to a loss of access
HU015738.1.0.0Node warmstart due to a stats collection scheduling issue
HU015797.7.1.7In systems where all drives are of type HUSMM80xx0ASS20 it will not be possible to assign a quorum drive
HU015797.8.1.8In systems where all drives are of type HUSMM80xx0ASS20 it will not be possible to assign a quorum drive
HU015798.1.0.0In systems where all drives are of type HUSMM80xx0ASS20 it will not be possible to assign a quorum drive
HU015827.7.1.7A compression issue in IP replication can result in a node warmstart
HU015827.8.1.3A compression issue in IP replication can result in a node warmstart
HU015828.1.0.0A compression issue in IP replication can result in a node warmstart
HU015838.1.0.0Running mkhostcluster with duplicate host names or IDs in the seedfromhost argument will cause a Tier 2 recovery
HU015847.8.1.3An issue in array indexing can cause a RAID array to go offline repeatedly
HU015848.1.0.0An issue in array indexing can cause a RAID array to go offline repeatedly
HU016028.1.1.0When security scanners send garbage data to SVC/Storwize iSCSI target addresses a node warmstart may occur
HU016097.6.1.8When the system is busy, the compression component may be paged out of memory resulting in latency that can lead to warmstarts
HU016097.7.1.7When the system is busy, the compression component may be paged out of memory resulting in latency that can lead to warmstarts
HU016097.8.1.1When the system is busy, the compression component may be paged out of memory resulting in latency that can lead to warmstarts
IT153437.6.1.8When the system is busy, the compression component may be paged out of memory resulting in latency that can lead to warmstarts
IT153437.7.1.7When the system is busy, the compression component may be paged out of memory resulting in latency that can lead to warmstarts
IT153437.8.1.1When the system is busy, the compression component may be paged out of memory resulting in latency that can lead to warmstarts
HU016108.1.0.0The handling of the background copy backlog by FlashCopy can cause latency for other unrelated FlashCopy maps
HU016147.7.1.7After a node is upgraded hosts defined as TPGS may have paths set to inactive
HU016147.8.1.3After a node is upgraded hosts defined as TPGS may have paths set to inactive
HU016148.1.0.0After a node is upgraded hosts defined as TPGS may have paths set to inactive
HU016158.1.0.0A timing issue relating to process communication can result in a node warmstart
HU016177.8.1.9Due to a timing window issue, stopping a FlashCopy mapping, with the -autodelete option, may result in a Tier 2 recovery
HU016178.1.3.6Due to a timing window issue, stopping a FlashCopy mapping, with the -autodelete option, may result in a Tier 2 recovery
HU016178.2.1.0Due to a timing window issue, stopping a FlashCopy mapping, with the -autodelete option, may result in a Tier 2 recovery
HU016188.1.1.0When using the charraymember CLI command if a member id is entered that is greater than the maximum number of members in a TRAID array then a T2 recovery will be initiated
HU016197.8.1.6A misreading of the PSU register can lead to failure events being logged incorrectly
HU016198.1.1.2A misreading of the PSU register can lead to failure events being logged incorrectly
HU016198.1.2.0A misreading of the PSU register can lead to failure events being logged incorrectly
HU016207.8.1.5Configuration changes can slow critical processes and, if this coincides with cloud account statistical data being adjusted, a Tier 2 recovery may occur
HU016208.1.1.0Configuration changes can slow critical processes and, if this coincides with cloud account statistical data being adjusted, a Tier 2 recovery may occur
HU016228.1.0.0If a Dense Drawer enclosure is put into maintenance mode during an upgrade of the enclosure management firmware then further upgrades to adjacent enclosures will be prevented
HU016237.8.1.6An issue in the handling of inter-node communications can lead to latency for Remote Copy relationships
HU016238.1.0.0An issue in the handling of inter-node communications can lead to latency for Remote Copy relationships
HU016247.7.1.9GUI response can become very slow in systems with a large number of compressed and uncompressed volumes
HU016247.8.1.3GUI response can become very slow in systems with a large number of compressed and uncompressed volumes
HU016257.8.1.3In systems with a consistency group of HyperSwap or Metro Mirror relationships, if an upgrade attempts to commit whilst a relationship is out of sync then there may be multiple warmstarts and a Tier 2 recovery
HU016267.8.1.2Node downgrade from v7.8.x to v7.7.1 or earlier (e.g. during an aborted upgrade) may prevent the node from rejoining the cluster. Systems that have already completed upgrade to v7.8.x are not affected by this issue
HU016268.1.0.0Node downgrade from v7.8.x to v7.7.1 or earlier (e.g. during an aborted upgrade) may prevent the node from rejoining the cluster. Systems that have already completed upgrade to v7.8.x are not affected by this issue
HU016287.7.1.9In the GUI, on the Volumes page, whilst using the filter function some volume entries may not be displayed until the page has completed loading
HU016287.8.1.6In the GUI, on the Volumes page, whilst using the filter function some volume entries may not be displayed until the page has completed loading
HU016307.8.1.6When a system with FlashCopy mappings is upgraded there may be multiple node warmstarts
HU016308.1.0.0When a system with FlashCopy mappings is upgraded there may be multiple node warmstarts
HU016317.8.1.3A memory leak in EasyTier when pools are in Balanced mode can lead to node warmstarts
HU016318.1.0.0A memory leak in EasyTier when pools are in Balanced mode can lead to node warmstarts
HU016327.8.1.3A congested fabric causes the Fibre Channel adapter firmware to abort I/O resulting in node warmstarts
HU016328.1.1.0A congested fabric causes the Fibre Channel adapter firmware to abort I/O resulting in node warmstarts
HU016338.1.1.0Even though synchronisation has completed a RAID array may still show progress to be at 99%
HU016357.7.1.7A slow memory leak in the host layer can lead to an out-of-memory condition resulting in offline volumes or performance degradation
HU016357.8.0.0A slow memory leak in the host layer can lead to an out-of-memory condition resulting in offline volumes or performance degradation
HU016367.7.1.7A connectivity issue with certain host SAS HBAs can prevent hosts from establishing stable communication with the storage controller
HU016367.8.1.3A connectivity issue with certain host SAS HBAs can prevent hosts from establishing stable communication with the storage controller
HU016368.1.0.0A connectivity issue with certain host SAS HBAs can prevent hosts from establishing stable communication with the storage controller
HU016387.7.1.7When upgrading to v7.6 or later, if there is another cluster in the same zone which is at v5.1 or earlier then nodes will warmstart and the upgrade will fail
HU016387.8.1.3When upgrading to v7.6 or later, if there is another cluster in the same zone which is at v5.1 or earlier then nodes will warmstart and the upgrade will fail
HU016388.1.0.0When upgrading to v7.6 or later, if there is another cluster in the same zone which is at v5.1 or earlier then nodes will warmstart and the upgrade will fail
HU016457.8.1.3After upgrading to v7.8 a reboot of a node will initiate a continual boot cycle
HU016467.7.1.7A new failure mechanism in the 16Gb HBA driver can under certain circumstances lead to a lease expiry of the entire cluster
HU016467.8.1.3A new failure mechanism in the 16Gb HBA driver can under certain circumstances lead to a lease expiry of the entire cluster
HU016468.1.0.0A new failure mechanism in the 16Gb HBA driver can under certain circumstances lead to a lease expiry of the entire cluster
HU016538.1.0.0An automatic Tier 3 recovery process may fail due to a RAID indexing issue
HU016547.8.1.3There may be a node warmstart when a switch of direction, in a HyperSwap relationship, fails to complete properly
HU016548.1.1.0There may be a node warmstart when a switch of direction, in a HyperSwap relationship, fails to complete properly
HU016557.8.1.5The algorithm used to calculate an SSD's replacement date can sometimes produce incorrect results, leading to a premature End-of-Life error being reported
HU016558.1.1.1The algorithm used to calculate an SSD's replacement date can sometimes produce incorrect results, leading to a premature End-of-Life error being reported
HU016577.8.1.8The 16Gb FC HBA firmware may experience an issue, with the detection of unresponsive links, leading to a single node warmstart
HU016578.1.3.4The 16Gb FC HBA firmware may experience an issue, with the detection of unresponsive links, leading to a single node warmstart
HU016578.2.0.0The 16Gb FC HBA firmware may experience an issue, with the detection of unresponsive links, leading to a single node warmstart
HU016578.2.1.0The 16Gb FC HBA firmware may experience an issue, with the detection of unresponsive links, leading to a single node warmstart
HU016597.8.1.9Node Fault LED can be seen to flash in the absence of an error condition. Note: To apply this fix (in new BMC firmware) each node will need to be power cycled (i.e. remove AC power and battery), one at a time, after the upgrade has completed
HU016598.1.3.6Node Fault LED can be seen to flash in the absence of an error condition. Note: To apply this fix (in new BMC firmware) each node will need to be power cycled (i.e. remove AC power and battery), one at a time, after the upgrade has completed
HU016598.2.1.4Node Fault LED can be seen to flash in the absence of an error condition. Note: To apply this fix (in new BMC firmware) each node will need to be power cycled (i.e. remove AC power and battery), one at a time, after the upgrade has completed
HU016617.8.1.8A cache-protection mechanism flag setting can become stuck leading to repeated stops of consistency group synchronisation
HU016618.1.3.4A cache-protection mechanism flag setting can become stuck leading to repeated stops of consistency group synchronisation
HU016618.2.0.0A cache-protection mechanism flag setting can become stuck leading to repeated stops of consistency group synchronisation
HU016618.2.1.0A cache-protection mechanism flag setting can become stuck leading to repeated stops of consistency group synchronisation
HU016647.7.1.9A timing window issue during an upgrade can cause the restarting node to warmstart, stalling the upgrade
HU016647.8.1.6A timing window issue during an upgrade can cause the restarting node to warmstart, stalling the upgrade
HU016648.1.1.2A timing window issue during an upgrade can cause the restarting node to warmstart, stalling the upgrade
HU016648.1.2.0A timing window issue during an upgrade can cause the restarting node to warmstart, stalling the upgrade
HU016658.1.0.1In environments where backend controllers are busy the creation of a new filesystem, with default settings, on a Linux host under conditions of parallel workloads can overwhelm the capabilities of the backend storage MDisk group and lead to warmstarts due to hung I/O on multiple nodes
HU016678.2.0.0A timing-window issue, in the remote copy component, may cause a node warmstart
HU016678.2.1.0A timing-window issue, in the remote copy component, may cause a node warmstart
HU016708.1.0.1Enabling RSA without a valid service IP address may cause multiple node warmstarts
HU016718.1.1.0Metadata between two nodes in an I/O group can become out of step leaving one node unaware of work scheduled on its partner. This can lead to stuck array synchronisation and false 1691 events
HU016738.1.0.1GUI rejects passwords that include special characters
HU016757.8.1.0Memory allocation issues may cause GUI and I/O performance issues
HU016787.8.1.8Entering an invalid parameter in the addvdiskaccess command may initiate a Tier 2 recovery
HU016788.1.1.0Entering an invalid parameter in the addvdiskaccess command may initiate a Tier 2 recovery
HU016797.8.1.5An issue in the RAID component can very occasionally cause a single node warmstart
HU016798.1.0.0An issue in the RAID component can very occasionally cause a single node warmstart
HU016877.7.1.9For the Volumes by Host, Ports by Host and Volumes by Pool pages in the GUI, when the number of items is greater than 50 the item name will not be displayed
HU016877.8.1.5For the Volumes by Host, Ports by Host and Volumes by Pool pages in the GUI, when the number of items is greater than 50 the item name will not be displayed
HU016888.1.1.0Unexpected used_virtualization figure in lslicense output after upgrade
HU016977.8.1.6A timeout issue in RAID member management can lead to multiple node warmstarts
HU016978.1.0.0A timeout issue in RAID member management can lead to multiple node warmstarts
HU016987.7.1.9A node warmstart may occur when deleting a compressed volume if a host has written to the volume minutes before the volume is deleted
HU016987.8.1.6A node warmstart may occur when deleting a compressed volume if a host has written to the volume minutes before the volume is deleted
HU016988.1.1.0A node warmstart may occur when deleting a compressed volume if a host has written to the volume minutes before the volume is deleted
HU017008.1.0.1If a thin-provisioned or compressed volume is deleted, and another volume is immediately created with the same real capacity, warmstarts may occur
HU017018.1.1.0Following loss of all logins to an external controller that is providing quorum, when the controller next logs in it will not be automatically used for quorum
HU017047.8.1.5In systems using HyperSwap a rare timing window issue can result in a node warmstart
HU017048.1.0.0In systems using HyperSwap a rare timing window issue can result in a node warmstart
HU017067.7.1.8Areas of volumes written with all-zero data may contain non-zero data. For more details refer to this Flash
HU017067.8.1.4Areas of volumes written with all-zero data may contain non-zero data. For more details refer to this Flash
HU017068.1.0.2Areas of volumes written with all-zero data may contain non-zero data. For more details refer to this Flash
HU017087.8.1.9A node removal operation during an array rebuild can cause a loss of parity data leading to bad blocks
HU017088.1.3.0A node removal operation during an array rebuild can cause a loss of parity data leading to bad blocks
HU017157.8.1.8Issuing a rmvolumecopy command followed by an expandvdisksize command may result in hung I/O leading to a node warmstart
HU017158.1.2.0Issuing a rmvolumecopy command followed by an expandvdisksize command may result in hung I/O leading to a node warmstart
HU017188.1.2.0Hung I/O due to issues on the inter-site links can lead to multiple node warmstarts
HU017197.8.1.8Node warmstart due to a parity error in the HBA driver firmware
HU017198.1.3.4Node warmstart due to a parity error in the HBA driver firmware
HU017198.2.0.0Node warmstart due to a parity error in the HBA driver firmware
HU017198.2.1.0Node warmstart due to a parity error in the HBA driver firmware
HU017208.1.1.2An issue in the handling of compressed volume shrink operations, in the presence of EasyTier migrations, can cause DRAID MDisk timeouts leading to an offline MDisk group
HU017208.1.2.0An issue in the handling of compressed volume shrink operations, in the presence of EasyTier migrations, can cause DRAID MDisk timeouts leading to an offline MDisk group
HU017237.8.1.9A timing window issue, around nodes leaving and re-joining clusters, can lead to hung I/O and node warmstarts
HU017238.1.2.0A timing window issue, around nodes leaving and re-joining clusters, can lead to hung I/O and node warmstarts
HU017247.8.1.5An I/O lock handling issue between nodes can lead to a single node warmstart
HU017248.1.3.0An I/O lock handling issue between nodes can lead to a single node warmstart
HU017258.1.2.0Snap collection audit log selection filter can, incorrectly, skip some of the latest logs
HU017267.8.1.8A slow RAID member drive in an MDisk may cause node warmstarts and the MDisk to go offline for a short time
HU017268.1.1.0A slow RAID member drive in an MDisk may cause node warmstarts and the MDisk to go offline for a short time
HU017278.1.2.0Due to a memory accounting issue an out of range access attempt will cause a node warmstart
HU017297.8.1.5Remote copy uses multiple streams to send data between clusters. During a stream disconnect a node, unable to progress, may warmstart
HU017298.1.0.0Remote copy uses multiple streams to send data between clusters. During a stream disconnect a node, unable to progress, may warmstart
HU017307.7.1.9When running the DMP for a 1046 error the picture may not indicate the correct position of the failed adapter
HU017307.8.1.5When running the DMP for a 1046 error the picture may not indicate the correct position of the failed adapter
HU017308.1.1.1When running the DMP for a 1046 error the picture may not indicate the correct position of the failed adapter
HU017317.8.1.5When a node is placed into service mode it is possible for all compression cards within the node to be marked as failed
HU017337.8.1.8Canister information, for the High Density Expansion Enclosure, may be incorrectly reported
HU017338.1.3.4Canister information, for the High Density Expansion Enclosure, may be incorrectly reported
HU017338.2.0.0Canister information, for the High Density Expansion Enclosure, may be incorrectly reported
HU017338.2.1.0Canister information, for the High Density Expansion Enclosure, may be incorrectly reported
HU017357.8.1.8Multiple power failures can cause a RAID array to get into a stuck state leading to offline volumes
HU017358.1.2.0Multiple power failures can cause a RAID array to get into a stuck state leading to offline volumes
HU017368.1.0.0A single node warmstart may occur when the topology setting of the cluster is changed
HU017377.8.1.10On the Update System screen, for Test Only, if a valid code image is selected, in the Run Update Test Utility dialog, then clicking the Test button will initiate a system update
HU017378.1.3.6On the Update System screen, for Test Only, if a valid code image is selected, in the Run Update Test Utility dialog, then clicking the Test button will initiate a system update
HU017378.2.0.0On the Update System screen, for Test Only, if a valid code image is selected, in the Run Update Test Utility dialog, then clicking the Test button will initiate a system update
HU017378.2.1.0On the Update System screen, for Test Only, if a valid code image is selected, in the Run Update Test Utility dialog, then clicking the Test button will initiate a system update
HU017407.8.1.6The timeout setting for key server commands may be too brief, when the server is busy, causing those commands to fail
HU017408.1.1.2The timeout setting for key server commands may be too brief, when the server is busy, causing those commands to fail
HU017408.1.2.0The timeout setting for key server commands may be too brief, when the server is busy, causing those commands to fail
HU017438.2.1.0Where hosts are directly attached a mishandling of the login process, by the fabric controller, may result in dual node warmstarts
HU017457.5.0.14testssl.sh identifies Logjam (CVE-2015-4000), fixed in v7.5.0.0, as a vulnerability
HU017468.3.1.0Adding a volume copy may deactivate any associated MDisk throttling
HU017477.8.1.6The incorrect detection of a cache issue can lead to a node warmstart
HU017478.1.1.0The incorrect detection of a cache issue can lead to a node warmstart
HU017508.1.2.0An issue in heartbeat handling between nodes can cause a node warmstart
HU017517.8.1.8When RAID attempts to flag a strip as bad, and that strip has already been flagged, a node may warmstart
HU017518.1.3.0When RAID attempts to flag a strip as bad, and that strip has already been flagged, a node may warmstart
HU017528.1.3.0A problem with the way IBM FlashSystem FS900 handles SCSI WRITE SAME commands (without the Unmap bit set) can lead to port exclusions
HU017568.1.1.2A scheduling issue may cause a config node warmstart
HU017568.1.2.0A scheduling issue may cause a config node warmstart
HU017588.2.0.0After an unexpected power loss, all nodes, in a cluster, may warmstart repeatedly, necessitating a Tier 3 recovery
HU017588.2.1.0After an unexpected power loss, all nodes, in a cluster, may warmstart repeatedly, necessitating a Tier 3 recovery
HU017607.8.1.8FlashCopy map progress appears to be stuck at zero percent
HU017608.1.3.4FlashCopy map progress appears to be stuck at zero percent
HU017608.2.0.2FlashCopy map progress appears to be stuck at zero percent
HU017608.2.1.0FlashCopy map progress appears to be stuck at zero percent
HU017618.1.3.6Entering multiple addmdisk commands, in rapid succession, to more than one storage pool, may cause node warmstarts
HU017618.2.0.0Entering multiple addmdisk commands, in rapid succession, to more than one storage pool, may cause node warmstarts
HU017618.2.1.0Entering multiple addmdisk commands, in rapid succession, to more than one storage pool, may cause node warmstarts
HU017637.7.1.9A single node warmstart may occur on a DH8 config node when inventory email is created. The issue only occurs if this coincides with a very high rate of CLI commands and high I/O workload on the config node
HU017637.8.1.5A single node warmstart may occur on a DH8 config node when inventory email is created. The issue only occurs if this coincides with a very high rate of CLI commands and high I/O workload on the config node
HU017638.1.1.1A single node warmstart may occur on a DH8 config node when inventory email is created. The issue only occurs if this coincides with a very high rate of CLI commands and high I/O workload on the config node
HU017658.2.0.0Node warmstart may occur when there is a delay to I/O at the secondary site
HU017658.2.1.0Node warmstart may occur when there is a delay to I/O at the secondary site
HU017677.5.0.14Reads of 4K/8K from an array can under exceptional circumstances return invalid data. For more details refer to this Flash
HU017677.7.1.9Reads of 4K/8K from an array can under exceptional circumstances return invalid data. For more details refer to this Flash
HU017677.8.1.6Reads of 4K/8K from an array can under exceptional circumstances return invalid data. For more details refer to this Flash
HU017678.1.1.2Reads of 4K/8K from an array can under exceptional circumstances return invalid data. For more details refer to this Flash
HU017678.1.2.0Reads of 4K/8K from an array can under exceptional circumstances return invalid data. For more details refer to this Flash
HU017698.1.1.2Systems with DRAID arrays, with more than 131,072 extents, may experience multiple warmstarts due to a backend SCSI UNMAP issue
HU017698.1.2.1Systems with DRAID arrays, with more than 131,072 extents, may experience multiple warmstarts due to a backend SCSI UNMAP issue
HU017717.7.1.9An issue with the CMOS battery in a node can cause an unexpectedly large log file to be generated by the BMC. At log collection the node may be taken offline
HU017717.8.1.6An issue with the CMOS battery in a node can cause an unexpectedly large log file to be generated by the BMC. At log collection the node may be taken offline
HU017718.1.1.2An issue with the CMOS battery in a node can cause an unexpectedly large log file to be generated by the BMC. At log collection the node may be taken offline
HU017718.1.2.0An issue with the CMOS battery in a node can cause an unexpectedly large log file to be generated by the BMC. At log collection the node may be taken offline
HU017728.2.1.0The mail queue may become blocked preventing the transmission of event log messages
HU017747.8.1.8After a failed mkhost command for an iSCSI host any I/O from that host will cause multiple warmstarts
HU017748.1.3.0After a failed mkhost command for an iSCSI host any I/O from that host will cause multiple warmstarts
HU017778.3.0.0Where not all I/O groups have NPIV enabled, hosts may be shown as Degraded with an incorrect count of node logins
HU017788.1.3.4An issue, in the HBA adapter, is exposed where a switch port keeps the link active but does not respond to link resets resulting in a node warmstart
HU017808.1.3.0Migrating a volume to an image-mode volume on controllers that support SCSI unmap will trigger repeated cluster recoveries
HU017817.8.1.10An issue with workload balancing in the kernel scheduler can deprive some processes of the necessary resource to complete successfully, resulting in node warmstarts that may impact performance, with the possibility of a loss of access to volumes
HU017818.1.3.0An issue with workload balancing in the kernel scheduler can deprive some processes of the necessary resource to complete successfully, resulting in node warmstarts that may impact performance, with the possibility of a loss of access to volumes
HU017828.4.0.10A node warmstart may occur due to a potentially bad SAS hardware component on the system such as a SAS cable, SAS expander or SAS HIC
HU017828.5.0.7A node warmstart may occur due to a potentially bad SAS hardware component on the system such as a SAS cable, SAS expander or SAS HIC
HU017828.5.4.0A node warmstart may occur due to a potentially bad SAS hardware component on the system such as a SAS cable, SAS expander or SAS HIC
HU017828.6.0.0A node warmstart may occur due to a potentially bad SAS hardware component on the system such as a SAS cable, SAS expander or SAS HIC
HU017837.6.1.7Replacing a failed drive in a DRAID array, with a smaller drive, may result in multiple Tier 2 recoveries putting all nodes in service state with error 564 and/or 550
HU017837.7.1.4Replacing a failed drive in a DRAID array, with a smaller drive, may result in multiple Tier 2 recoveries putting all nodes in service state with error 564 and/or 550
HU017837.8.0.0Replacing a failed drive in a DRAID array, with a smaller drive, may result in multiple Tier 2 recoveries putting all nodes in service state with error 564 and/or 550
HU017848.2.1.0If a cluster using IP quorum experiences a site outage, the IP quorum device may become invalid. Restarting the quorum application will resolve the issue
HU017857.8.1.7An issue with memory mapping may lead to multiple node warmstarts
HU017867.8.1.8An issue in the monitoring of SSD write endurance can result in false 1215/2560 errors in the Event Log
HU017868.1.3.4An issue in the monitoring of SSD write endurance can result in false 1215/2560 errors in the Event Log
HU017868.2.0.0An issue in the monitoring of SSD write endurance can result in false 1215/2560 errors in the Event Log
HU017868.2.1.0An issue in the monitoring of SSD write endurance can result in false 1215/2560 errors in the Event Log
HU017907.8.1.8On the Create Volumes page the Accessible I/O Groups selection may not update when the Caching I/O group selection is changed
HU017908.1.3.3On the Create Volumes page the Accessible I/O Groups selection may not update when the Caching I/O group selection is changed
HU017918.1.3.4Using the chhost command will remove stored CHAP secrets
HU017918.2.0.0Using the chhost command will remove stored CHAP secrets
HU017918.2.1.0Using the chhost command will remove stored CHAP secrets
HU017927.8.1.6When a DRAID array has multiple drive failures and the number of failed drives is greater than the number of rebuild areas in the array it is possible that the storage pool will be taken offline during the copyback phase of a rebuild. For more details refer to this Flash
HU017928.1.1.2When a DRAID array has multiple drive failures and the number of failed drives is greater than the number of rebuild areas in the array it is possible that the storage pool will be taken offline during the copyback phase of a rebuild. For more details refer to this Flash
HU017928.1.2.1When a DRAID array has multiple drive failures and the number of failed drives is greater than the number of rebuild areas in the array it is possible that the storage pool will be taken offline during the copyback phase of a rebuild. For more details refer to this Flash
HU017937.8.1.8The Maximum final size value in the Expand Volume dialog can display an incorrect value preventing expansion
HU017958.1.3.0A thread locking issue in the Remote Copy component may cause a node warmstart
HU017968.3.1.0Battery Status LED may not illuminate
HU017977.8.1.8Hitachi G1500 backend controllers may exhibit higher than expected latency
HU017978.1.3.4Hitachi G1500 backend controllers may exhibit higher than expected latency
HU017978.2.0.0Hitachi G1500 backend controllers may exhibit higher than expected latency
HU017978.2.1.0Hitachi G1500 backend controllers may exhibit higher than expected latency
HU017988.1.3.0Manual (user-paced) upgrade to v8.1.2 may invalidate hardened data putting all nodes in service state if they are shut down and then restarted. Automatic upgrade is not affected by this issue. For more details refer to this Flash
HU017997.8.1.8Timing window issue can affect operation of the HyperSwap addvolumecopy command causing all nodes to warmstart
HU017998.2.1.0Timing window issue can affect operation of the HyperSwap addvolumecopy command causing all nodes to warmstart
HU018008.1.3.0Under some rare circumstance a node warmstart may occur whilst creating volumes in a Data Reduction Pool
HU018018.1.3.0An issue in the handling of unmaps for MDisks can lead to a node warmstart
HU018027.8.1.7USB encryption key can become inaccessible after upgrade. If the system is later rebooted then any encrypted volumes will be unavailable
HU018028.1.3.0USB encryption key can become inaccessible after upgrade. If the system is later rebooted then any encrypted volumes will be unavailable
HU018038.1.3.0The garbage collection process in Data Reduction Pool may become stalled resulting in no reclamation of free space from removed volumes
HU018048.1.3.0During a system upgrade the processing required to upgrade the internal mapping between volumes and volume copies can lead to high latency impacting host I/O
HU018078.2.0.0The lsfabric command may show incorrect local node id and local node name for some Fibre Channel logins
HU018078.2.1.0The lsfabric command may show incorrect local node id and local node name for some Fibre Channel logins
HU018098.1.3.0An issue in the handling of extent allocation in Data Reduction Pools can result in volumes being taken offline
HU018108.2.1.0Deleting volumes, or using FlashCopy/Global Mirror with Change Volumes, in a Data Reduction Pool, may impact the performance of other volumes in the pool
HU018118.2.0.0DRAID rebuilds, for large (>10TB) drives, may require lengthy metadata processing leading to a node warmstart
HU018118.2.1.0DRAID rebuilds, for large (>10TB) drives, may require lengthy metadata processing leading to a node warmstart
HU018137.8.1.8An issue with Global Mirror stream recovery handling at secondary sites can adversely impact replication performance
HU018158.1.3.3In Data Reduction Pools, volume size is limited to 96TB
HU018158.2.0.2In Data Reduction Pools, volume size is limited to 96TB
HU018158.2.1.0In Data Reduction Pools, volume size is limited to 96TB
HU018178.2.0.0Volumes used for vVols metadata or cloud backup, that are associated with a FlashCopy mapping, cannot be included in any further FlashCopy mappings
HU018178.2.1.0Volumes used for vVols metadata or cloud backup, that are associated with a FlashCopy mapping, cannot be included in any further FlashCopy mappings
HU018188.1.3.0Excessive debug logging in the Data Reduction Pools component can adversely impact system performance
HU018208.1.3.0When an unusual I/O request pattern is received it is possible for the handling of Data Reduction Pool metadata to become stuck, leading to a node warmstart
HU018218.1.3.4An attempt to upgrade a two-node enhanced stretched cluster fails due to incorrect volume dependencies
HU018218.2.0.3An attempt to upgrade a two-node enhanced stretched cluster fails due to incorrect volume dependencies
HU018218.2.1.0An attempt to upgrade a two-node enhanced stretched cluster fails due to incorrect volume dependencies
HU018247.8.1.8Switching replication direction for HyperSwap relationships can lead to long I/O timeouts
HU018248.1.3.4Switching replication direction for HyperSwap relationships can lead to long I/O timeouts
HU018257.8.1.8Invoking a chrcrelationship command when one of the relationships in a consistency group is running in the opposite direction to the others may cause a node warmstart followed by a Tier 2 recovery
HU018258.1.3.4Invoking a chrcrelationship command when one of the relationships in a consistency group is running in the opposite direction to the others may cause a node warmstart followed by a Tier 2 recovery
HU018258.2.1.0Invoking a chrcrelationship command when one of the relationships in a consistency group is running in the opposite direction to the others may cause a node warmstart followed by a Tier 2 recovery
HU018288.1.3.3Node warmstarts may occur during deletion of deduplicated volumes due to a timing-related issue
HU018288.2.0.2Node warmstarts may occur during deletion of deduplicated volumes due to a timing-related issue
HU018288.2.1.0Node warmstarts may occur during deletion of deduplicated volumes due to a timing-related issue
HU018298.1.3.1An issue in statistical data collection can prevent EasyTier from working with Data Reduction Pools
HU018307.8.1.11Missing security-enhancing HTTP response headers
HU018308.1.3.0Missing security-enhancing HTTP response headers
HU018317.8.0.0Cluster-wide warmstarts may occur when the SAN delivers a FDISC frame with an invalid WWPN
HU018327.8.1.12Creation and distribution of the config file may cause an out-of-memory condition, leading to a node warmstart
HU018328.2.1.0Creation and distribution of the config file may cause an out-of-memory condition, leading to a node warmstart
HU018338.1.3.4If both nodes, in an I/O group, start up together a timing window issue may occur that prevents them from running garbage collection, leading to the related Data Reduction Pool running out of space
HU018338.2.1.0If both nodes, in an I/O group, start up together a timing window issue may occur that prevents them from running garbage collection, leading to the related Data Reduction Pool running out of space
HU018358.1.3.1Multiple warmstarts may be experienced due to an issue with Data Reduction Pool garbage collection where data for a volume is detected after the volume itself has been removed
HU018367.8.1.11When an auxiliary volume is moved an issue with pausing the master volume can lead to node warmstarts
HU018368.2.1.8When an auxiliary volume is moved an issue with pausing the master volume can lead to node warmstarts
HU018368.3.0.0When an auxiliary volume is moved an issue with pausing the master volume can lead to node warmstarts
HU018378.1.3.2In systems where a vVols metadata volume has been created an upgrade to v8.1.3 or later will cause a node warmstart stalling the upgrade
HU018378.2.1.0In systems where a vVols metadata volume has been created an upgrade to v8.1.3 or later will cause a node warmstart stalling the upgrade
HU018397.8.1.8Where a VMware host is being served volumes, from two different controllers, and an issue, on one controller, causes the related volumes to be taken offline then I/O performance, for the volumes from the other controller, will be adversely affected
HU018398.1.3.4Where a VMware host is being served volumes, from two different controllers, and an issue, on one controller, causes the related volumes to be taken offline then I/O performance, for the volumes from the other controller, will be adversely affected
HU018398.2.1.0Where a VMware host is being served volumes, from two different controllers, and an issue, on one controller, causes the related volumes to be taken offline then I/O performance, for the volumes from the other controller, will be adversely affected
HU018408.1.3.1When removing large numbers of volumes each with multiple copies it is possible to hit a timeout condition leading to warmstarts
HU018427.8.1.8Bursts of I/O to Read-Intensive Drives can be interpreted as dropped frames against the resident slots, leading to redundant drives being incorrectly failed
HU018428.1.3.4Bursts of I/O to Read-Intensive Drives can be interpreted as dropped frames against the resident slots, leading to redundant drives being incorrectly failed
HU018428.2.1.0Bursts of I/O to Read-Intensive Drives can be interpreted as dropped frames against the resident slots, leading to redundant drives being incorrectly failed
HU018438.2.1.6A node hardware issue can cause a CLI command to time out resulting in a node warmstart
HU018438.3.0.0A node hardware issue can cause a CLI command to time out resulting in a node warmstart
HU018458.2.1.0If the execution of a rmvdisk -force command, for the FlashCopy target volume in a GMCV relationship, coincides with the start of a GMCV cycle all nodes may warmstart
HU018467.8.1.8Silent battery discharge condition will unexpectedly take an SVC node offline putting it into a 572 service state
HU018468.1.3.4Silent battery discharge condition will unexpectedly take an SVC node offline putting it into a 572 service state
HU018468.2.1.0Silent battery discharge condition will unexpectedly take an SVC node offline putting it into a 572 service state
HU018477.8.1.8FlashCopy handling of medium errors across a number of drives on backend controllers may lead to multiple node warmstarts
HU018478.1.3.3FlashCopy handling of medium errors across a number of drives on backend controllers may lead to multiple node warmstarts
HU018478.2.0.2FlashCopy handling of medium errors across a number of drives on backend controllers may lead to multiple node warmstarts
HU018478.2.1.0FlashCopy handling of medium errors across a number of drives on backend controllers may lead to multiple node warmstarts
HU018488.2.0.0During an upgrade, systems with a large AIX VIOS setup may have multiple node warmstarts with the possibility of a loss of access to data
HU018488.2.1.0During an upgrade, systems with a large AIX VIOS setup may have multiple node warmstarts with the possibility of a loss of access to data
HU018497.8.1.9An excessive number of SSH sessions may lead to a node warmstart
HU018498.1.3.4An excessive number of SSH sessions may lead to a node warmstart
HU018498.2.0.3An excessive number of SSH sessions may lead to a node warmstart
HU018498.2.1.0An excessive number of SSH sessions may lead to a node warmstart
HU018508.1.3.3When the last deduplication-enabled volume copy in a Data Reduction Pool is deleted the pool may go offline temporarily
HU018508.2.0.2When the last deduplication-enabled volume copy in a Data Reduction Pool is deleted the pool may go offline temporarily
HU018508.2.1.0When the last deduplication-enabled volume copy in a Data Reduction Pool is deleted the pool may go offline temporarily
HU018518.1.3.2When a deduplicated volume is deleted there may be multiple node warmstarts and offline pools
HU018518.2.0.1When a deduplicated volume is deleted there may be multiple node warmstarts and offline pools
HU018518.2.1.0When a deduplicated volume is deleted there may be multiple node warmstarts and offline pools
HU018528.1.3.3The garbage collection rate can lead to Data Reduction Pools running out of space even though reclaimable capacity is available
HU018528.2.0.2The garbage collection rate can lead to Data Reduction Pools running out of space even though reclaimable capacity is available
HU018528.2.1.0The garbage collection rate can lead to Data Reduction Pools running out of space even though reclaimable capacity is available
HU018538.1.3.0In a Data Reduction Pool, it is possible for metadata to be assigned incorrect values leading to offline managed disk groups
HU018558.1.3.4Clusters using Data Reduction Pools can experience multiple warmstarts on all nodes putting them in a service state
HU018558.2.1.0Clusters using Data Reduction Pools can experience multiple warmstarts on all nodes putting them in a service state
HU018568.1.3.3A garbage collection process can time out waiting for an event in the partner node resulting in a node warmstart
HU018568.2.0.0A garbage collection process can time out waiting for an event in the partner node resulting in a node warmstart
HU018568.2.1.0A garbage collection process can time out waiting for an event in the partner node resulting in a node warmstart
HU018578.1.3.6Improved validation of user input in GUI
HU018578.2.1.4Improved validation of user input in GUI
HU018588.1.3.3Total used capacity of a Data Reduction Pool within a single I/O group is limited to 256TB. Garbage collection does not correctly recognise this limit. This may lead to a pool running out of free capacity and going offline
HU018588.2.0.2Total used capacity of a Data Reduction Pool within a single I/O group is limited to 256TB. Garbage collection does not correctly recognise this limit. This may lead to a pool running out of free capacity and going offline
HU018588.2.1.0Total used capacity of a Data Reduction Pool within a single I/O group is limited to 256TB. Garbage collection does not correctly recognise this limit. This may lead to a pool running out of free capacity and going offline
HU018608.1.3.6During garbage collection the flushing of extents may become stuck leading to a timeout and a single node warmstart
HU018608.2.1.4During garbage collection the flushing of extents may become stuck leading to a timeout and a single node warmstart
HU018628.1.3.4When a Data Reduction Pool is removed, and the -force option is specified, there may be a temporary loss of access
HU018628.2.0.3When a Data Reduction Pool is removed, and the -force option is specified, there may be a temporary loss of access
HU018628.2.1.0When a Data Reduction Pool is removed, and the -force option is specified, there may be a temporary loss of access
HU018637.8.1.11In rare circumstances, a drive replacement may result in a ghost drive (i.e. a drive with the same ID as the replaced drive stuck in a permanently offline state)
HU018638.2.1.0In rare circumstances, a drive replacement may result in a ghost drive (i.e. a drive with the same ID as the replaced drive stuck in a permanently offline state)
HU018657.8.1.9When creating a HyperSwap relationship, using addvolumecopy (or similar methods), the system should perform a synchronisation operation to copy the data from the original copy to the new copy. In some rare cases this synchronisation is skipped, leaving the new copy with bad data (all zeros)
HU018658.1.3.6When creating a HyperSwap relationship, using addvolumecopy (or similar methods), the system should perform a synchronisation operation to copy the data from the original copy to the new copy. In some rare cases this synchronisation is skipped, leaving the new copy with bad data (all zeros)
HU018658.2.1.4When creating a HyperSwap relationship, using addvolumecopy (or similar methods), the system should perform a synchronisation operation to copy the data from the original copy to the new copy. In some rare cases this synchronisation is skipped, leaving the new copy with bad data (all zeros)
HU018667.7.1.9A faulty PSU sensor, in a node, can fill the SEL log causing the service processor (BMC) to disable logging. If a snap is subsequently taken, from the node, a timeout will occur and it will be taken offline. It is possible for this to affect both nodes in an I/O group
HU018667.8.1.6A faulty PSU sensor, in a node, can fill the SEL log causing the service processor (BMC) to disable logging. If a snap is subsequently taken, from the node, a timeout will occur and it will be taken offline. It is possible for this to affect both nodes in an I/O group
HU018668.1.2.0A faulty PSU sensor, in a node, can fill the SEL log causing the service processor (BMC) to disable logging. If a snap is subsequently taken, from the node, a timeout will occur and it will be taken offline. It is possible for this to affect both nodes in an I/O group
HU018678.1.3.0Expansion of a volume may fail due to an issue with accounting of physical capacity. All nodes will warmstart in order to clear the problem. The expansion may be triggered by writing data to a thin-provisioned or compressed volume.
HU018687.8.1.12After deleting an encrypted external MDisk, it is possible for the encrypted status of volumes to change to no, even though all remaining MDisks are encrypted
HU018688.2.1.11After deleting an encrypted external MDisk, it is possible for the encrypted status of volumes to change to no, even though all remaining MDisks are encrypted
HU018688.3.0.0After deleting an encrypted external MDisk, it is possible for the encrypted status of volumes to change to no, even though all remaining MDisks are encrypted
HU018698.1.3.6Volume copy deletion, in a Data Reduction Pool, triggered by rmvdiskcopy, rmvolumecopy or addvdiskcopy -autodelete (or similar) may become stalled with the copy being left in deleting status
HU018698.2.1.4Volume copy deletion, in a Data Reduction Pool, triggered by rmvdiskcopy, rmvolumecopy or addvdiskcopy -autodelete (or similar) may become stalled with the copy being left in deleting status
HU018708.1.3.3LDAP server communication fails with SSL or TLS security configured
HU018718.2.1.0An issue with bitmap synchronisation can lead to a node warmstart
HU018728.3.0.0An issue with cache partition fairness can favour small I/Os over large ones leading to a node warmstart
HU018738.1.3.4Deleting a volume, in a Data Reduction Pool, while volume protection is enabled and when the volume was not explicitly unmapped, before deletion, may result in simultaneous node warmstarts. For more details refer to this Flash
HU018738.2.1.0Deleting a volume, in a Data Reduction Pool, while volume protection is enabled and when the volume was not explicitly unmapped, before deletion, may result in simultaneous node warmstarts. For more details refer to this Flash
HU018767.8.1.9Where systems are connected to controllers, that have FC ports that are capable of acting as initiators and targets, when NPIV is enabled then node warmstarts can occur
HU018768.1.3.6Where systems are connected to controllers, that have FC ports that are capable of acting as initiators and targets, when NPIV is enabled then node warmstarts can occur
HU018768.2.0.3Where systems are connected to controllers, that have FC ports that are capable of acting as initiators and targets, when NPIV is enabled then node warmstarts can occur
HU018768.2.1.0Where systems are connected to controllers, that have FC ports that are capable of acting as initiators and targets, when NPIV is enabled then node warmstarts can occur
HU018778.1.3.0Where a volume is being expanded, and the additional capacity is to be formatted, the creation of a related volume copy may result in multiple warmstarts and a potential loss of access to data
HU018788.1.3.4During an upgrade from v7.8.1 or earlier to v8.1.3 or later if an MDisk goes offline then at completion all volumes may go offline
HU018788.2.1.0During an upgrade from v7.8.1 or earlier to v8.1.3 or later if an MDisk goes offline then at completion all volumes may go offline
HU018798.2.1.0Latency induced by DWDM inter-site links may result in a node warmstart
HU018808.2.1.8When a write, to a secondary volume, becomes stalled, a node at the primary site may warmstart
HU018808.3.0.0When a write, to a secondary volume, becomes stalled, a node at the primary site may warmstart
HU018818.2.0.2An issue within the compression card in FS9100 systems can result in the card being incorrectly flagged as failed leading to warmstarts
HU018818.2.1.0An issue within the compression card in FS9100 systems can result in the card being incorrectly flagged as failed leading to warmstarts
HU018838.2.1.0Config node processes may consume all available memory, leading to node warmstarts. This can be caused, for example, by large numbers of concurrent SSH connections being opened
HU018858.1.3.4As writes are made to a Data Reduction Pool it is necessary to allocate new physical capacity. Under unusual circumstances it is possible for the handling of an expansion request to stall further I/O leading to node warmstarts
HU018858.2.0.3As writes are made to a Data Reduction Pool it is necessary to allocate new physical capacity. Under unusual circumstances it is possible for the handling of an expansion request to stall further I/O leading to node warmstarts
HU018858.2.1.0As writes are made to a Data Reduction Pool it is necessary to allocate new physical capacity. Under unusual circumstances it is possible for the handling of an expansion request to stall further I/O leading to node warmstarts
HU018868.1.3.6The Unmap function can leave volume extents, that have not been freed, preventing managed disk and pool removal
HU018868.2.1.4The Unmap function can leave volume extents, that have not been freed, preventing managed disk and pool removal
HU018877.8.1.11In circumstances where host configuration data becomes inconsistent, across nodes, an issue in the CLI policing code may cause multiple warmstarts
HU018878.1.3.6In circumstances where host configuration data becomes inconsistent, across nodes, an issue in the CLI policing code may cause multiple warmstarts
HU018878.2.1.4In circumstances where host configuration data becomes inconsistent, across nodes, an issue in the CLI policing code may cause multiple warmstarts
HU018887.8.1.10An issue with restore mappings, in the FlashCopy component, can cause an I/O group to warmstart
HU018888.1.3.6An issue with restore mappings, in the FlashCopy component, can cause an I/O group to warmstart
HU018888.2.1.6An issue with restore mappings, in the FlashCopy component, can cause an I/O group to warmstart
HU018888.3.0.0An issue with restore mappings, in the FlashCopy component, can cause an I/O group to warmstart
HU019977.8.1.10An issue with restore mappings, in the FlashCopy component, can cause an I/O group to warmstart
HU019978.1.3.6An issue with restore mappings, in the FlashCopy component, can cause an I/O group to warmstart
HU019978.2.1.6An issue with restore mappings, in the FlashCopy component, can cause an I/O group to warmstart
HU019978.3.0.0An issue with restore mappings, in the FlashCopy component, can cause an I/O group to warmstart
HU018908.2.1.6FlashCopy mappings, from master volume to primary change volume, may become stalled when a T2 recovery occurs whilst the mappings are in a copying state
HU018908.3.1.0FlashCopy mappings, from master volume to primary change volume, may become stalled when a T2 recovery occurs whilst the mappings are in a copying state
HU018918.3.1.0An issue in DRAID grain process scheduling can lead to a duplicate entry condition that is cleared by a node warmstart
HU018927.8.1.11LUNs of greater than 2TB, presented by HP XP7 storage controllers, are not supported
HU018928.2.1.6LUNs of greater than 2TB, presented by HP XP7 storage controllers, are not supported
HU018928.3.0.0LUNs of greater than 2TB, presented by HP XP7 storage controllers, are not supported
HU018938.2.1.0Excessive reporting frequency of NVMe drive diagnostics generates large numbers of callhome events
HU018948.2.1.11After node reboot, or warmstart, some volumes accessed by AIX, VIO or VMware hosts may experience stuck SCSI2 reservations on the NPIV failover ports of the partner node. This can cause a loss of access to data
HU018948.3.1.0After node reboot, or warmstart, some volumes accessed by AIX, VIO or VMware hosts may experience stuck SCSI2 reservations on the NPIV failover ports of the partner node. This can cause a loss of access to data
HU018958.2.1.0Where a banner has been created, without a new line at the end, any subsequent T4 recovery will fail
HU018997.8.1.8In a HyperSwap cluster, when the primary I/O group has a dead domain, nodes will repeatedly warmstart
HU019008.2.1.4Executing a command, that can result in a shared mapping being created or destroyed, for an individual host, in a host cluster, without that command applying to all hosts in the host cluster, may lead to multiple node warmstarts with the possibility of a T2 recovery
HU019018.2.1.0Enclosure management firmware, in an expansion enclosure, will reset a canister after a certain number of discovery requests have been received, from the controller, for that canister. It is possible simultaneous resets may occur in adjacent canisters causing a temporary loss of access to data
HU019027.8.1.8During an upgrade, an issue with VPD migration, can cause a timeout leading to a stalled upgrade
HU019028.1.3.4During an upgrade, an issue with VPD migration, can cause a timeout leading to a stalled upgrade
HU019028.2.1.4During an upgrade, an issue with VPD migration, can cause a timeout leading to a stalled upgrade
HU019048.3.0.0A timing issue can cause a remote copy relationship to become stuck, in a pausing state, resulting in a node warmstart
HU019068.2.0.3Low-level hardware errors may not be recovered correctly, causing a canister to reboot. If multiple canisters reboot, this may result in loss of access to data
HU019068.2.1.0Low-level hardware errors may not be recovered correctly, causing a canister to reboot. If multiple canisters reboot, this may result in loss of access to data
HU019077.8.1.9An issue in the handling of the power cable sense registers can cause a node to be put into service state with a 560 error
HU019078.1.3.4An issue in the handling of the power cable sense registers can cause a node to be put into service state with a 560 error
HU019078.2.1.0An issue in the handling of the power cable sense registers can cause a node to be put into service state with a 560 error
HU019098.3.0.0Upgrading a system with Read-Intensive drives to 8.2, or later, may result in node warmstarts
HU019108.1.3.6When FlashCopy mappings are created, with a grain size of 64KB, it is possible for an overflow condition in the bitmap to occur. This can result in multiple node warmstarts with a possible loss of access to data
HU019108.2.1.4When FlashCopy mappings are created, with a grain size of 64KB, it is possible for an overflow condition in the bitmap to occur. This can result in multiple node warmstarts with a possible loss of access to data
HU019118.2.1.4The System Overview screen, in the GUI, may display nodes in the wrong site
HU019128.2.1.4Systems with iSCSI-attached controllers may see node warmstarts due to I/O request timeouts
HU019137.8.1.9A timing window issue in the DRAID6 rebuild process can cause node warmstarts with the possibility of a loss of access
HU019138.1.3.6A timing window issue in the DRAID6 rebuild process can cause node warmstarts with the possibility of a loss of access
HU019138.2.0.0A timing window issue in the DRAID6 rebuild process can cause node warmstarts with the possibility of a loss of access
HU019138.2.1.0A timing window issue in the DRAID6 rebuild process can cause node warmstarts with the possibility of a loss of access
HU019157.8.1.10Systems, with encryption enabled, that are using key servers to manage encryption keys, may fail to connect to the key servers if the servers' SSL certificates are part of a chain of trust
HU019158.1.3.6Systems, with encryption enabled, that are using key servers to manage encryption keys, may fail to connect to the key servers if the servers' SSL certificates are part of a chain of trust
HU019158.2.1.4Systems, with encryption enabled, that are using key servers to manage encryption keys, may fail to connect to the key servers if the servers' SSL certificates are part of a chain of trust
IT286547.8.1.10Systems, with encryption enabled, that are using key servers to manage encryption keys, may fail to connect to the key servers if the servers' SSL certificates are part of a chain of trust
IT286548.1.3.6Systems, with encryption enabled, that are using key servers to manage encryption keys, may fail to connect to the key servers if the servers' SSL certificates are part of a chain of trust
IT286548.2.1.4Systems, with encryption enabled, that are using key servers to manage encryption keys, may fail to connect to the key servers if the servers' SSL certificates are part of a chain of trust
HU019168.1.3.6The GUI Dashboard and the CLI lssystem command report physical capacity incorrectly
HU019168.2.1.4The GUI Dashboard and the CLI lssystem command report physical capacity incorrectly
HU019177.8.1.12Chrome browser support requires a self-signed certificate to include subject alternate name
HU019178.2.1.11Chrome browser support requires a self-signed certificate to include subject alternate name
HU019178.3.0.0Chrome browser support requires a self-signed certificate to include subject alternate name
HU019188.1.3.5Where Data Reduction Pools have been created on earlier code levels, upgrading the system, to an affected release, can cause an increase in the level of concurrent flushing to disk. This may result in a loss of access to data. For more details refer to this Flash
HU019188.2.0.4Where Data Reduction Pools have been created on earlier code levels, upgrading the system, to an affected release, can cause an increase in the level of concurrent flushing to disk. This may result in a loss of access to data. For more details refer to this Flash
HU019188.2.1.4Where Data Reduction Pools have been created on earlier code levels, upgrading the system, to an affected release, can cause an increase in the level of concurrent flushing to disk. This may result in a loss of access to data. For more details refer to this Flash
HU019198.3.0.0During an upgrade some components may take too long to initialise causing node warmstarts
HU019208.1.3.5An issue in the garbage collection process can cause node warmstarts and offline pools
HU019208.2.0.4An issue in the garbage collection process can cause node warmstarts and offline pools
HU019208.2.1.1An issue in the garbage collection process can cause node warmstarts and offline pools
HU019218.2.1.11Where FlashCopy mapping targets are also in remote copy relationships there may be node warmstarts with a temporary loss of access to data
HU019218.3.0.0Where FlashCopy mapping targets are also in remote copy relationships there may be node warmstarts with a temporary loss of access to data
HU019237.8.1.11An issue in the way Global Mirror handles write sequence numbers >512 may cause multiple node warmstarts
HU019238.2.1.11An issue in the way Global Mirror handles write sequence numbers >512 may cause multiple node warmstarts
HU019238.3.1.0An issue in the way Global Mirror handles write sequence numbers >512 may cause multiple node warmstarts
HU019248.2.1.11Migrating extents to an MDisk, that is not a member of an MDisk group, may result in a Tier 2 recovery
HU019248.3.0.1Migrating extents to an MDisk, that is not a member of an MDisk group, may result in a Tier 2 recovery
HU019258.2.1.4Systems will incorrectly report offline and unresponsive NVMe drives after an I/O group outage. These errors will fail to auto-fix and must be manually marked as fixed
HU019268.2.1.4When a node, with 32GB of RAM, is upgraded to v8.2.1 it may experience a warmstart resulting in a failed upgrade
HU019288.1.3.6When two I/Os attempt to access the same address, the state of the data may be incorrectly set to invalid causing offline volumes and, possibly, offline pools
HU019288.2.1.4When two I/Os attempt to access the same address, the state of the data may be incorrectly set to invalid causing offline volumes and, possibly, offline pools
HU019298.2.1.4Drive fault type 3 (error code 1686) may be seen in the Event Log for empty slots
HU019308.2.1.4Certain types of FlashCore Module (FCM) failure may not result in a call home, delaying the shipment of a replacement
HU019318.2.1.11Where a high rate of CLI commands is received, it is possible for inter-node processing code to be delayed, which results in a small increase in receive queue time on the config node
HU019318.3.1.2Where a high rate of CLI commands is received, it is possible for inter-node processing code to be delayed, which results in a small increase in receive queue time on the config node
HU019328.2.1.2When a rmvdisk command initiates a Data Reduction Pool rehoming process any I/O to the removed volume may cause multiple warmstarts leading to a loss of access
HU019338.2.1.6Under rare circumstances the Data Reduction Pool deduplication rehoming process can become truncated. Subsequent detection of inconsistent metadata can lead to offline Data Reduction Pools
HU019338.3.0.0Under rare circumstances the Data Reduction Pool deduplication rehoming process can become truncated. Subsequent detection of inconsistent metadata can lead to offline Data Reduction Pools
HU019348.2.0.3An issue in the handling of faulty canister components can lead to multiple node warmstarts for that canister
HU019348.2.1.0An issue in the handling of faulty canister components can lead to multiple node warmstarts for that canister
HU019368.2.1.8When shrinking a volume, that has host mappings, there may be recurring node warmstarts
HU019368.3.0.0When shrinking a volume, that has host mappings, there may be recurring node warmstarts
HU019378.2.1.4DRAID copy-back operation can overload NVMe drives resulting in high I/O latency
HU019398.2.1.4After replacing a canister, and attempting to bring the new canister into the cluster, it may remain offline
HU019407.8.1.8Changing the use of a drive can cause a Tier 2 recovery (warmstarts on all nodes in the cluster). This occurs only if the drive change occurs within a small timing window, so the probability of the issue occurring is low
HU019408.1.0.0Changing the use of a drive can cause a Tier 2 recovery (warmstarts on all nodes in the cluster). This occurs only if the drive change occurs within a small timing window, so the probability of the issue occurring is low
HU019418.2.1.4After upgrading the system to v8.2, or later, when expanding a mirrored volume, the formatting of additional space may become stalled
HU019428.2.1.8NVMe drive ports can go offline, for a very short time, when an upgrade of that drive's firmware commences
HU019428.3.0.0NVMe drive ports can go offline, for a very short time, when an upgrade of that drive's firmware commences
HU019438.3.1.0Stopping a GMCV relationship with the -access flag may result in more processing than is required
HU019447.8.1.11Proactive host failover not waiting for 25 seconds before allowing nodes to go offline during upgrades or maintenance
HU019448.2.1.4Proactive host failover not waiting for 25 seconds before allowing nodes to go offline during upgrades or maintenance
HU019458.2.1.4Systems with Flash Core Modules are unable to upgrade the firmware for those drives
HU019527.8.1.11When the compression accelerator hardware driver detects an uncorrectable error the node will reboot
HU019538.3.1.0Following a Data Reduction Pool recovery, in some circumstances, it may not be possible to create new volumes, via the GUI, due to an incorrect value being returned from the lsmdiskgrp command
HU019558.3.0.0The presence of unsupported configurations, in a Spectrum Virtualize environment, can cause a mishandling of unsupported commands leading to a node warmstart
HU019568.3.0.0The output from a lsdrive command shows the write endurance usage, for new read-intensive SSDs, as blank rather than 0%
HU019578.1.3.6Due to an issue in Data Reduction Pools, when the system attempts an upgrade, there may be node warmstarts
HU019578.2.1.0Due to an issue in Data Reduction Pools, when the system attempts an upgrade, there may be node warmstarts
HU019598.2.1.4A timing window issue in the Thin Provisioning component can cause a node warmstart
HU019618.2.1.4A hardware issue can provoke the system to repeatedly try to collect a statesave, from the enclosure management firmware, causing 1048 errors in the Event Log
HU019628.2.1.4When Call Home servers return an invalid message it can be incorrectly reported as an error 3201 in the Event Log
HU019638.3.0.0A deadlock condition in the deduplication component can lead to a node warmstart
HU019648.3.1.0An issue in the cache component may limit I/O throughput
HU019658.2.1.0A timing window issue in the deduplication component can lead to I/O timeouts, and a node warmstart, with the possibility of an offline MDisk group
HU019678.2.1.8When I/O, in remote copy relationships, experiences delays (1720 and/or 1920 errors are logged) an I/O group may warmstart
HU019678.3.1.0When I/O, in remote copy relationships, experiences delays (1720 and/or 1920 errors are logged) an I/O group may warmstart
HU019688.2.1.12An upgrade may fail due to corrupt hardened data in a node. This can affect an I/O group
HU019688.3.1.2An upgrade may fail due to corrupt hardened data in a node. This can affect an I/O group
HU022158.2.1.12An upgrade may fail due to corrupt hardened data in a node. This can affect an I/O group
HU022158.3.1.2An upgrade may fail due to corrupt hardened data in a node. This can affect an I/O group
HU019698.3.0.0It is possible, after an rmrcrelationship command is run, that the connection to the remote cluster may be lost
HU019707.8.1.12When a GMCV relationship is stopped, with the -access option, and the secondary volume is immediately deleted with -force, then all nodes may repeatedly warmstart
HU019708.2.1.11When a GMCV relationship is stopped, with the -access option, and the secondary volume is immediately deleted with -force, then all nodes may repeatedly warmstart
HU019708.3.1.0When a GMCV relationship is stopped, with the -access option, and the secondary volume is immediately deleted with -force, then all nodes may repeatedly warmstart
HU019718.2.1.4Spurious DIMM over-temperature errors may cause a node to go offline with node error 528
HU019727.8.1.10When an array is in a quiescing state, for example where a member has been deleted, I/O may become pended leading to multiple warmstarts
HU019728.1.3.6When an array is in a quiescing state, for example where a member has been deleted, I/O may become pended leading to multiple warmstarts
HU019728.2.1.4When an array is in a quiescing state, for example where a member has been deleted, I/O may become pended leading to multiple warmstarts
HU019748.2.1.6With all Remote Support Assistant connections closed, the GUI may show that a connection is still in progress
HU019748.3.0.0With all Remote Support Assistant connections closed, the GUI may show that a connection is still in progress
HU019768.2.1.4A new MDisk array may not be encrypted even though encryption is enabled on the system
HU019778.4.0.0CLI commands can produce a return code of 1 even though execution was successful
HU019788.2.1.6Unable to create HyperSwap volumes. The mkvolume command fails with CMMVC7050E error
HU019788.3.0.0Unable to create HyperSwap volumes. The mkvolume command fails with CMMVC7050E error
HU019798.2.1.6The figure for used_virtualization, in the output of a lslicense command, may be unexpectedly large
HU019798.3.0.0The figure for used_virtualization, in the output of a lslicense command, may be unexpectedly large
HU019817.8.1.11Although an issue, in the HBA firmware, is handled correctly it can still cause a node warmstart
HU019818.2.1.0Although an issue, in the HBA firmware, is handled correctly it can still cause a node warmstart
HU019828.2.1.6In an environment, with multiple IP Quorum servers, if the quorum component encounters a duplicate UID then a node may warmstart
HU019828.3.0.0In an environment, with multiple IP Quorum servers, if the quorum component encounters a duplicate UID then a node may warmstart
HU019838.2.1.6Improve debug data capture to assist in determining the reason for a Data Reduction Pool to be taken offline
HU019838.3.0.0Improve debug data capture to assist in determining the reason for a Data Reduction Pool to be taken offline
HU019858.2.1.6As a consequence of a Data Reduction Pool recovery, bad metadata may be created. When the region of disk associated with the bad metadata is accessed there may be I/O group warmstarts
HU019858.3.0.0As a consequence of a Data Reduction Pool recovery, bad metadata may be created. When the region of disk associated with the bad metadata is accessed there may be I/O group warmstarts
HU019868.2.1.6An accounting issue in the FlashCopy component may cause node warmstarts
HU019868.3.0.0An accounting issue in the FlashCopy component may cause node warmstarts
HU019878.2.1.4During SAN fabric power maintenance a cluster may breach resource limits, on the remaining node to node links, resulting in system-wide lease expiry
HU019887.8.1.11In the Monitoring -> 3D view page, the "export to csv" button does not function
HU019898.2.1.6For large drives, bitmap scanning, during an array rebuild, can timeout resulting in multiple node warmstarts, possibly leading to offline I/O groups
HU019898.3.0.0For large drives, bitmap scanning, during an array rebuild, can timeout resulting in multiple node warmstarts, possibly leading to offline I/O groups
HU019908.3.0.0Bad return codes from the partnership compression component can cause multiple node warmstarts taking nodes offline
HU019918.2.1.6An issue in the handling of extent allocation, in the Data Reduction Pool component, can cause a node warmstart
HU019918.3.0.0An issue in the handling of extent allocation, in the Data Reduction Pool component, can cause a node warmstart
HU019988.2.1.6All SCSI command types can set volumes as busy resulting in I/O timeouts and multiple node warmstarts, with the possibility of an offline I/O group. For more details refer to this Flash
HU019988.3.0.1All SCSI command types can set volumes as busy resulting in I/O timeouts and multiple node warmstarts, with the possibility of an offline I/O group. For more details refer to this Flash
HU020008.2.1.4Data Reduction Pools may go offline due to a timing issue in metadata handling
HU020018.2.1.4During a system upgrade an issue in callhome may cause a node warmstart stalling the upgrade
HU020028.2.1.4On busy systems, diagnostic data collection may not complete correctly producing livedumps with missing pages
HU020058.2.1.11An issue in the background copy process prevents grains, above a 128TB limit, from being cleaned properly. As a consequence there may be multiple node warmstarts with the potential for a loss of access to data
HU020058.3.0.0An issue in the background copy process prevents grains, above a 128TB limit, from being cleaned properly. As a consequence there may be multiple node warmstarts with the potential for a loss of access to data
HU020068.3.0.1Garbage collection behaviour can become overzealous, adversely affecting performance
HU020078.2.1.5During volume migration an issue, in the handling of old to new extents transfer, can lead to cluster-wide warmstarts
HU020078.3.0.0During volume migration an issue, in the handling of old to new extents transfer, can lead to cluster-wide warmstarts
HU020088.2.1.4When a DRAID rebuild occurs, occasionally a RAID deadlock condition can be triggered by a particular type of I/O workload. This can lead to repeated node warmstarts and a loss of access to data
HU020098.2.1.5Systems which are using Data Reduction Pools, with the maximum possible extent size of 8GB, and which experience a very specific I/O workload, may experience an issue due to garbage collection. This can cause repeated node warmstarts and loss of access to data
HU020098.3.0.0Systems which are using Data Reduction Pools, with the maximum possible extent size of 8GB, and which experience a very specific I/O workload, may experience an issue due to garbage collection. This can cause repeated node warmstarts and loss of access to data
HU020108.3.1.9A single node warmstart may occur when a drive in a non-distributed RAID array is taken temporarily out-of-sync due to slow performance
HU020108.4.0.10A single node warmstart may occur when a drive in a non-distributed RAID array is taken temporarily out-of-sync due to slow performance
HU020118.2.1.5When a node warmstart occurs on a system using Data Reduction Pools, there is a small possibility that the node will not automatically return online. If the partner node is also offline, this can cause temporary loss of access to data
HU020118.3.0.0When a node warmstart occurs on a system using Data Reduction Pools, there is a small possibility that the node will not automatically return online. If the partner node is also offline, this can cause temporary loss of access to data
HU020128.2.1.5Under certain I/O workloads the garbage collection process can adversely impact volume write response times
HU020128.3.0.0Under certain I/O workloads the garbage collection process can adversely impact volume write response times
HU020138.1.3.6A race condition between the extent invalidation and destruction in the garbage collection process may cause a node warmstart with the possibility of offline volumes
HU020138.2.1.4A race condition between the extent invalidation and destruction in the garbage collection process may cause a node warmstart with the possibility of offline volumes
HU020147.8.1.11After a loss of power, where a node has a dead CMOS battery, it will fail to restart correctly. It is possible for both nodes in an I/O group to experience this issue
HU020148.2.1.6After a loss of power, where a node has a dead CMOS battery, it will fail to restart correctly. It is possible for both nodes in an I/O group to experience this issue
HU020148.3.0.1After a loss of power, where a node has a dead CMOS battery, it will fail to restart correctly. It is possible for both nodes in an I/O group to experience this issue
HU020158.2.1.11Some read-intensive SSDs are incorrectly reporting wear rate thresholds, generating unnecessary errors in the Event Log
HU020158.3.1.2Some read-intensive SSDs are incorrectly reporting wear rate thresholds, generating unnecessary errors in the Event Log
HU020168.2.1.6A memory leak in the component that handles thin-provisioned MDisks can lead to an adverse performance impact with the possibility of offline MDisks. For more details refer to this Flash
HU020168.3.0.1A memory leak in the component that handles thin-provisioned MDisks can lead to an adverse performance impact with the possibility of offline MDisks. For more details refer to this Flash
HU020178.3.1.0Unstable inter-site links may cause a system-wide lease expiry leaving all nodes in a service state - one with error 564 and others with error 551
HU020198.2.1.4When the master and auxiliary volumes, in a relationship, have the same name it is not possible, in the GUI, to determine which is master or auxiliary
HU020208.2.1.6An internal hardware bus, running at the incorrect speed, may give rise to spurious DIMM over-temperature errors
HU020208.3.0.0An internal hardware bus, running at the incorrect speed, may give rise to spurious DIMM over-temperature errors
HU020218.2.1.8Disabling garbage collection may cause a node warmstart
HU020218.3.1.0Disabling garbage collection may cause a node warmstart
HU020238.3.1.0An issue with the processing of FlashCopy map commands may result in a single node warmstart
HU020258.1.3.6An issue with metadata handling, where a pool has been taken offline, may lead to an out of space condition in that pool preventing its return to operation
HU020258.2.1.4An issue with metadata handling, where a pool has been taken offline, may lead to an out of space condition in that pool preventing its return to operation
HU020268.3.1.0A timing window issue in the processing of FlashCopy status listing commands can cause a node warmstart
HU020278.2.1.6Fabric congestion can cause internal resource constraints, in 16Gb HBAs, leading to lease expiries
HU020278.3.0.0Fabric congestion can cause internal resource constraints, in 16Gb HBAs, leading to lease expiries
HU020287.8.1.8An issue, with timer cancellation, in the Remote Copy component may cause a node warmstart
HU020288.1.3.4An issue, with timer cancellation, in the Remote Copy component may cause a node warmstart
HU020288.2.0.0An issue, with timer cancellation, in the Remote Copy component may cause a node warmstart
HU020288.2.1.0An issue, with timer cancellation, in the Remote Copy component may cause a node warmstart
HU020298.2.1.6An issue with the SSMTP process may result in failed callhome, inventory reporting and user notifications. A testemail command will fail with a CMMVC9051E error
HU020298.3.0.0An issue with the SSMTP process may result in failed callhome, inventory reporting and user notifications. A testemail command will fail with a CMMVC9051E error
HU020368.2.1.8It is possible for commands that alter pool-level extent reservations (i.e. migratevdisk or rmmdisk) to conflict with an ongoing EasyTier migration, resulting in a Tier 2 recovery
HU020368.3.0.1It is possible for commands that alter pool-level extent reservations (i.e. migratevdisk or rmmdisk) to conflict with an ongoing EasyTier migration, resulting in a Tier 2 recovery
HU020378.2.1.6A FlashCopy consistency group, with a mix of mappings in different states, cannot be stopped
HU020378.3.1.0A FlashCopy consistency group, with a mix of mappings in different states, cannot be stopped
HU020398.2.1.6An issue in the management steps of Data Reduction Pool recovery may lead to a node warmstart
HU020398.3.0.0An issue in the management steps of Data Reduction Pool recovery may lead to a node warmstart
HU020408.3.1.0VPD contains the incorrect FRU part number for the SAS adapter
HU020428.1.3.4An issue in the handling of metadata, after a Data Reduction Pool recovery operation, can lead to repeated node warmstarts, putting an I/O group into a service state
HU020428.2.0.2An issue in the handling of metadata, after a Data Reduction Pool recovery operation, can lead to repeated node warmstarts, putting an I/O group into a service state
HU020428.2.1.0An issue in the handling of metadata, after a Data Reduction Pool recovery operation, can lead to repeated node warmstarts, putting an I/O group into a service state
HU020437.8.1.11Collecting a snap can cause nodes to run out of boot drive space and go offline with node error 565
HU020438.2.1.6Collecting a snap can cause nodes to run out of boot drive space and go offline with node error 565
HU020438.3.0.1Collecting a snap can cause nodes to run out of boot drive space and go offline with node error 565
HU020448.2.1.8Multiple DRAID arrays can, where one is performing a rebuild, be exposed to a RAID deadlock condition resulting in multiple node warmstarts and a loss of access to data
HU020448.3.0.1Multiple DRAID arrays can, where one is performing a rebuild, be exposed to a RAID deadlock condition resulting in multiple node warmstarts and a loss of access to data
HU020458.2.1.6When a node is removed from the cluster, using the CLI, it may still be shown as online in the GUI. If an attempt is made to shut down this node, from the GUI, whilst it appears to be online, then the whole cluster will shut down
HU020458.3.0.1When a node is removed from the cluster, using the CLI, it may still be shown as online in the GUI. If an attempt is made to shut down this node, from the GUI, whilst it appears to be online, then the whole cluster will shut down
HU020488.2.1.12An issue in the handling of ATS commands from VMware hosts can cause a single node warmstart
HU020488.3.1.0An issue in the handling of ATS commands from VMware hosts can cause a single node warmstart
HU020497.8.1.11GUI session handling has an issue that can generate many exceptions, adversely impacting GUI performance
HU020498.2.1.8GUI session handling has an issue that can generate many exceptions, adversely impacting GUI performance
HU020508.2.1.8Compression hardware can have an issue processing certain types of data resulting in node reboots and marking the compression hardware as faulty even though it is serviceable
HU020508.3.0.1Compression hardware can have an issue processing certain types of data resulting in node reboots and marking the compression hardware as faulty even though it is serviceable
HU020518.3.0.0If unexpected actions are taken during node replacement, node warmstarts and temporary loss of access to data may occur. This issue can only occur if a node is replaced, and then the old node is re-added to the cluster
HU020528.3.1.0During an upgrade an issue, with buffer handling, in Data Reduction Pool can lead to a node warmstart
HU020538.2.1.6An issue with canister BIOS update can stall system upgrades
HU020538.3.0.1An issue with canister BIOS update can stall system upgrades
HU020548.2.1.11The event log handler maintains a second list of events. On rare occasions, for log full events, these lists can get out of step, resulting in a Tier 2 recovery
HU020548.3.1.0The event log handler maintains a second list of events. On rare occasions, for log full events, these lists can get out of step, resulting in a Tier 2 recovery
HU020558.2.1.6Creating a FlashCopy snapshot, in the GUI, does not set the same preferred node for both source and target volumes. This may adversely impact performance
HU020558.3.0.1Creating a FlashCopy snapshot, in the GUI, does not set the same preferred node for both source and target volumes. This may adversely impact performance
HU020588.2.1.12Changing a remote copy relationship from GMCV to MM or GM can result in a Tier 2 recovery
HU020588.3.1.3Changing a remote copy relationship from GMCV to MM or GM can result in a Tier 2 recovery
HU020588.4.0.0Changing a remote copy relationship from GMCV to MM or GM can result in a Tier 2 recovery
HU020598.3.0.0Event Log may display quorum errors even though quorum devices are available
HU020628.3.0.2An issue, with node index numbers for I/O groups, when using 32Gb HBAs may result in host ports incorrectly being reported offline
HU020628.3.1.0An issue, with node index numbers for I/O groups, when using 32Gb HBAs may result in host ports incorrectly being reported offline
HU020637.8.1.11HyperSwap clusters with only two surviving nodes may experience warmstarts on both of those nodes where rcbuffersize is set to 512MB
HU020638.2.1.8HyperSwap clusters with only two surviving nodes may experience warmstarts on both of those nodes where rcbuffersize is set to 512MB
HU020638.3.1.0HyperSwap clusters with only two surviving nodes may experience warmstarts on both of those nodes where rcbuffersize is set to 512MB
HU020648.2.1.8An issue in the firmware for compression accelerator cards can cause offline compressed volumes. For more details refer to this Flash
HU020648.3.0.1An issue in the firmware for compression accelerator cards can cause offline compressed volumes. For more details refer to this Flash
HU020658.2.1.11Mishandling of Data Reduction Pool allocation request rejections can lead to node warmstarts that can take an MDisk group offline
HU020658.3.1.0Mishandling of Data Reduction Pool allocation request rejections can lead to node warmstarts that can take an MDisk group offline
HU020668.3.1.0If, during large (>8KB) reads from a host, a medium error is encountered, on backend storage, then there may be node warmstarts, with the possibility of a loss of access to data
HU020678.2.1.6If multiple recipients are specified, for callhome emails, then no callhome emails will be sent
HU020678.3.0.1If multiple recipients are specified, for callhome emails, then no callhome emails will be sent
HU020698.2.1.11When a SCSI command, containing an invalid byte, is received there may be a node warmstart. This can affect both nodes, in an I/O group, at the same time
HU020728.2.1.6An issue in the handling of email transmission can write a large file to the node boot drive. If this causes the boot drive to become full, the node will go offline with error 565
HU020728.3.0.1An issue in the handling of email transmission can write a large file to the node boot drive. If this causes the boot drive to become full, the node will go offline with error 565
HU020738.3.0.1Detection of an invalid list entry in the parity handling process can lead to a node warmstart
HU020758.3.1.0A FlashCopy snapshot, sourced from the target of an Incremental FlashCopy map, can sometimes, temporarily, present incorrect data to the host
HU020778.2.1.8A node upgrading to v8.2.1 or later will lose access to controllers directly-attached to its FC ports and the upgrade will stall
HU020778.3.0.1A node upgrading to v8.2.1 or later will lose access to controllers directly-attached to its FC ports and the upgrade will stall
HU020788.2.1.8Heavily unbalanced workloads, in stretched-cluster configurations, can bias inter-node traffic through one port, adversely affecting performance
HU020788.3.1.0Heavily unbalanced workloads, in stretched-cluster configurations, can bias inter-node traffic through one port, adversely affecting performance
HU020798.3.0.1Starting a FlashCopy mapping, within a Data Reduction Pool, a large number of times may cause a node warmstart
HU020808.2.1.11When a Data Reduction Pool is running low on free space, the credit allocation algorithm, for garbage collection, can be exposed to a race condition, adversely affecting performance
HU020808.3.0.1When a Data Reduction Pool is running low on free space, the credit allocation algorithm, for garbage collection, can be exposed to a race condition, adversely affecting performance
HU020838.2.1.8During DRAID rebuilds, an issue in the handling of memory buffers can lead to multiple node warmstarts and a loss of access to data. For more details refer to this Flash
HU020838.3.0.1During DRAID rebuilds, an issue in the handling of memory buffers can lead to multiple node warmstarts and a loss of access to data. For more details refer to this Flash
HU020848.3.0.1If a node goes offline, after the firmware of multiple NVMe drives has been upgraded, then incorrect 3090/90021 errors may be seen in the Event Log
HU020857.8.1.11Freeze time of Global Mirror remote copy consistency groups may not be updated correctly in certain scenarios
HU020858.2.1.8Freeze time of Global Mirror remote copy consistency groups may not be updated correctly in certain scenarios
HU020858.3.1.0Freeze time of Global Mirror remote copy consistency groups may not be updated correctly in certain scenarios
HU020868.2.1.8An issue, in IP Quorum, may cause a Tier 2 recovery, during initial connection to a candidate device
HU020868.3.0.1An issue, in IP Quorum, may cause a Tier 2 recovery, during initial connection to a candidate device
HU020878.3.0.1LDAP users with SSH keys cannot create volumes after upgrading to 8.3.0.0
HU020888.4.0.10There can be multiple node warmstarts when no mailservers are configured
HU020888.4.2.0There can be multiple node warmstarts when no mailservers are configured
HU020888.5.0.0There can be multiple node warmstarts when no mailservers are configured
HU020898.2.1.8Due to changes to quorum management, during an upgrade to v8.2.x, or later, there may be multiple warmstarts, with the possibility of a loss of access to data
HU020898.3.0.1Due to changes to quorum management, during an upgrade to v8.2.x, or later, there may be multiple warmstarts, with the possibility of a loss of access to data
HU020908.2.1.8When a failing drive experiences an error, RAID may mishandle it, resulting in a node warmstart
HU020908.3.0.0When a failing drive experiences an error, RAID may mishandle it, resulting in a node warmstart
HU020918.2.1.11Upgrading to v8.2.1.8, or later, may result in a licensing error in the Event Log
HU020918.3.1.2Upgrading to v8.2.1.8, or later, may result in a licensing error in the Event Log
HU020928.4.0.0The effectiveness of slow drain mitigation can become reduced causing fabric congestion to adversely impact all ports on an adapter
HU020938.2.1.8A locking issue in the inter-node communications, of V5030 systems, can lead to a deadlock condition, resulting in a node warmstart
HU020958.2.1.12The effective_used_capacity field of lsarray/lsmdisk commands should be empty for RAID arrays which do not contain overprovisioned drives. However, sometimes this field can be zero even though it should be empty. This can cause incorrect provisioned capacity reporting in the GUI
HU020958.3.1.4The effective_used_capacity field of lsarray/lsmdisk commands should be empty for RAID arrays which do not contain overprovisioned drives. However, sometimes this field can be zero even though it should be empty. This can cause incorrect provisioned capacity reporting in the GUI
HU020958.4.0.2The effective_used_capacity field of lsarray/lsmdisk commands should be empty for RAID arrays which do not contain overprovisioned drives. However, sometimes this field can be zero even though it should be empty. This can cause incorrect provisioned capacity reporting in the GUI
HU020958.5.0.0The effective_used_capacity field of lsarray/lsmdisk commands should be empty for RAID arrays which do not contain overprovisioned drives. However, sometimes this field can be zero even though it should be empty. This can cause incorrect provisioned capacity reporting in the GUI
HU020978.2.1.11Workloads, with data that is highly suited to deduplication, can provoke high CPU utilisation, as multiple destinations try to dedupe to one source. This adversely impacts performance with the possibility of offline MDisk groups
HU020978.3.0.1Workloads, with data that is highly suited to deduplication, can provoke high CPU utilisation, as multiple destinations try to dedupe to one source. This adversely impacts performance with the possibility of offline MDisk groups
HU020998.2.1.8Cloud callhome error 3201 messages may appear in the Event Log
HU020998.3.1.0Cloud callhome error 3201 messages may appear in the Event Log
HU021027.8.1.11Excessive processing time required for FlashCopy bitmap operations, associated with large (> 20TB) Global Mirror change volumes, may lead to a node warmstart
HU021028.2.1.9Excessive processing time required for FlashCopy bitmap operations, associated with large (> 20TB) Global Mirror change volumes, may lead to a node warmstart
HU021028.3.0.2Excessive processing time required for FlashCopy bitmap operations, associated with large (> 20TB) Global Mirror change volumes, may lead to a node warmstart
HU021028.3.1.0Excessive processing time required for FlashCopy bitmap operations, associated with large (> 20TB) Global Mirror change volumes, may lead to a node warmstart
HU021038.2.1.11The system management firmware may, incorrectly, attempt to obtain an IP address, using DHCP, making it accessible via Ethernet
HU021038.3.1.0The system management firmware may, incorrectly, attempt to obtain an IP address, using DHCP, making it accessible via Ethernet
HU021048.2.1.9An issue in the RAID component, in the presence of very high I/O workload and the exhaustion of cache resources, can see a deadlock condition occurring which prevents further I/O processing. The system detects this issue and takes the storage pool offline for a six minute period, to clear the problem. The pool is then brought online automatically, and normal operation resumes. For more details refer to this Flash
HU021048.3.0.2An issue in the RAID component, in the presence of very high I/O workload and the exhaustion of cache resources, can see a deadlock condition occurring which prevents further I/O processing. The system detects this issue and takes the storage pool offline for a six minute period, to clear the problem. The pool is then brought online automatically, and normal operation resumes. For more details refer to this Flash
HU021068.2.1.11Multiple node warmstarts, in quick succession, can cause the partner node to lease expire
HU021068.3.1.2Multiple node warmstarts, in quick succession, can cause the partner node to lease expire
HU021088.2.1.11Deleting a managed disk group, with -force, may cause multiple warmstarts with the possibility of a loss of access to data
HU021088.3.1.0Deleting a managed disk group, with -force, may cause multiple warmstarts with the possibility of a loss of access to data
HU021098.2.1.11Free extents may not be unmapped after volume deletion, or migration, resulting in out-of-space conditions on backend controllers
HU021098.3.0.2Free extents may not be unmapped after volume deletion, or migration, resulting in out-of-space conditions on backend controllers
HU021098.3.1.0Free extents may not be unmapped after volume deletion, or migration, resulting in out-of-space conditions on backend controllers
HU021118.2.1.11An issue with how Data Reduction Pool handles data, at the sub-extent level, may result in a node warmstart
HU021118.3.1.0An issue with how Data Reduction Pool handles data, at the sub-extent level, may result in a node warmstart
HU021148.2.1.11Upgrading FCM firmware on multiple I/O group systems can cause a drive to become stuck at 0% sync with the corresponding array in a 'syncing' state
HU021148.3.0.2Upgrading FCM firmware on multiple I/O group systems can cause a drive to become stuck at 0% sync with the corresponding array in a 'syncing' state
HU021148.3.1.0Upgrading FCM firmware on multiple I/O group systems can cause a drive to become stuck at 0% sync with the corresponding array in a 'syncing' state
HU021158.3.0.2Attempting to upgrade all drive firmware, with an inadequate drive package, may lead to multiple node warmstarts, with the possibility of a loss of access to data
HU021158.3.1.0Attempting to upgrade all drive firmware, with an inadequate drive package, may lead to multiple node warmstarts, with the possibility of a loss of access to data
HU021198.3.1.0NVMe drive replacement on 8.3.0.0 or 8.3.0.1 may result in the GUI, and lsdrive CLI command, showing a ghost drive
HU021218.2.1.8When the system changes from copyback to rebuild, a failure to clear related metadata can cause multiple node warmstarts, with the possibility of a loss of access
HU021218.3.0.0When the system changes from copyback to rebuild, a failure to clear related metadata can cause multiple node warmstarts, with the possibility of a loss of access
HU021238.2.1.11For direct-attached hosts, a race condition between the FLOGI and Link UP processes can result in FC ports not coming online
HU021238.3.0.0For direct-attached hosts, a race condition between the FLOGI and Link UP processes can result in FC ports not coming online
HU021248.2.1.11Due to an issue with FCM thin provisioning calculations the GUI may incorrectly display volume capacity and capacity savings as zero
HU021267.8.1.11There is a low probability that excessive SSH connections may trigger a single node warmstart on the configuration node
HU021268.2.1.9There is a low probability that excessive SSH connections may trigger a single node warmstart on the configuration node
HU021268.3.0.1There is a low probability that excessive SSH connections may trigger a single node warmstart on the configuration node
HU021278.4.2.032Gbps FC ports will auto-negotiate to 8Gbps, if they are connected to a 16Gbps Cisco switch port
HU021278.5.0.032Gbps FC ports will auto-negotiate to 8Gbps, if they are connected to a 16Gbps Cisco switch port
HU021288.3.1.2Deduplication volume lookup can over utilise resources causing an adverse performance impact
HU021298.2.1.6GUI drive filtering fails with 'An error occurred loading table data'
HU021298.3.0.1GUI drive filtering fails with 'An error occurred loading table data'
HU021308.3.0.1An issue with the RAID scrub process can overload Nearline SAS drives causing premature failures
HU021318.2.1.9When changing DRAID configuration, for an array with an active workload, a deadlock condition can occur resulting in a single node warmstart
HU021318.3.0.1When changing DRAID configuration, for an array with an active workload, a deadlock condition can occur resulting in a single node warmstart
HU021328.2.1.12Removing a thin-provisioned volume and then immediately creating one of the same size may cause node warmstarts
HU021328.3.1.0Removing a thin-provisioned volume and then immediately creating one of the same size may cause node warmstarts
HU021338.2.1.9NVMe drives may become degraded after a drive reseat or node reboot
HU021338.3.0.0NVMe drives may become degraded after a drive reseat or node reboot
HU021348.3.0.0A timing issue, in handling chquorum CLI commands, can result in fewer than three quorum devices being available
HU021358.2.1.11Removing multiple IQNs for an iSCSI host can result in a Tier 2 recovery
HU021358.3.1.2Removing multiple IQNs for an iSCSI host can result in a Tier 2 recovery
HU021378.2.1.11An issue with support for target resets in Nimble Storage controllers may cause a node warmstart
HU021378.3.1.2An issue with support for target resets in Nimble Storage controllers may cause a node warmstart
HU021388.2.1.11An issue in Data Reduction Pool garbage collection can cause I/O timeouts leading to an offline pool
HU021388.3.1.0An issue in Data Reduction Pool garbage collection can cause I/O timeouts leading to an offline pool
HU021398.4.0.0When 32Gbps FC adapters are fitted, the maximum supported ambient temperature is decreased, leading to more threshold exceeded errors in the Event Log
HU021418.2.1.11An issue in the max replication delay function may trigger a Tier 2 recovery, after posting multiple 1920 errors in the Event Log. For more details refer to this Flash
HU021418.3.1.0An issue in the max replication delay function may trigger a Tier 2 recovery, after posting multiple 1920 errors in the Event Log. For more details refer to this Flash
HU021428.2.1.12It is possible for a backend unmap process to become stalled, preventing system configuration changes from completing
HU021428.3.1.3It is possible for a backend unmap process to become stalled, preventing system configuration changes from completing
HU021428.4.0.0It is possible for a backend unmap process to become stalled, preventing system configuration changes from completing
HU021438.2.1.10The performance profile, for some enterprise tier drives, may not correctly match the drives' capabilities, leading to that tier being overdriven
HU021438.3.0.3The performance profile, for some enterprise tier drives, may not correctly match the drives' capabilities, leading to that tier being overdriven
HU021438.3.1.0The performance profile, for some enterprise tier drives, may not correctly match the drives' capabilities, leading to that tier being overdriven
HU021468.3.1.0An issue in inter-node message handling may cause a node warmstart
HU021497.8.1.11When an Enhanced Stretch Cluster is using NPIV, in transitional mode, the path priority is not being reported correctly to some hosts
HU021498.2.1.11When an Enhanced Stretch Cluster is using NPIV, in transitional mode, the path priority is not being reported correctly to some hosts
HU021498.3.0.0When an Enhanced Stretch Cluster is using NPIV, in transitional mode, the path priority is not being reported correctly to some hosts
HU021528.3.1.0Due to an issue in RAID there may be I/O timeouts, leading to node warmstarts, with the possibility of a loss of access to data
HU021538.3.1.4Fabric or host issues can cause aborted IOs to block the port throttle queue leading to adverse performance that is cleared by a node warmstart
HU021538.4.0.0Fabric or host issues can cause aborted IOs to block the port throttle queue leading to adverse performance that is cleared by a node warmstart
HU021548.2.1.11If a node is rebooted, when remote support is enabled, then all other nodes will warmstart
HU021548.3.1.2If a node is rebooted, when remote support is enabled, then all other nodes will warmstart
HU021558.2.1.11Upgrading to v8.2.1 may result in offline managed disk groups and OOS events (1685/1687) appearing in the Event Log
HU021568.2.1.12Global Mirror environments may experience more frequent 1920 events due to writedone message queuing
HU021568.3.1.3Global Mirror environments may experience more frequent 1920 events due to writedone message queuing
HU021568.4.0.0Global Mirror environments may experience more frequent 1920 events due to writedone message queuing
HU021578.2.1.12Issuing a mkdistributedarray command may result in a node warmstart
HU021578.3.1.0Issuing a mkdistributedarray command may result in a node warmstart
HU021598.5.0.17A rare issue caused by unexpected I/O in the upper cache can cause a node to warmstart
HU021598.6.0.5A rare issue caused by unexpected I/O in the upper cache can cause a node to warmstart
HU021598.7.0.0A rare issue caused by unexpected I/O in the upper cache can cause a node to warmstart
HU021628.3.1.3When a node warmstart occurs during an upgrade from v8.3.0.0, or earlier, to v8.3.0.1, or later, with dedup enabled, it can lead to repeated node warmstarts across the cluster, necessitating a Tier 3 recovery
HU021648.2.1.12An issue in Remote Copy may cause a loss of hardened data when a node is warmstarted
HU021648.3.1.3An issue in Remote Copy may cause a loss of hardened data when a node is warmstarted
HU021648.4.0.0An issue in Remote Copy may cause a loss of hardened data when a node is warmstarted
HU021668.2.1.4A timing window issue, in RAID code that handles recovery after a drive has been taken out of sync, due to a slow I/O, can cause a single node warmstart
HU021688.2.1.11In the event of unexpected power loss a node may not save system data
HU021688.3.1.2In the event of unexpected power loss a node may not save system data
HU021698.3.1.0After a Tier 3 recovery, different nodes may report different UIDs for a subset of volumes
HU021708.4.0.0During NVMe SSD firmware upgrade processes, peak read latency may reach 10 seconds
HU021718.4.0.7The timezone for Iceland is set incorrectly
HU021718.4.2.0The timezone for Iceland is set incorrectly
HU021718.5.0.0The timezone for Iceland is set incorrectly
HU021728.4.0.0The CLI command lsdependentvdisks -enclosure X causes node warmstarts if no nodes are online in that enclosure
HU021738.2.1.11During a pending fabric login, when an abort is received, it is possible for a related entry in the WWPN table to not be removed. The node will warmstart to clear this condition
HU021738.3.1.0During a pending fabric login, when an abort is received, it is possible for a related entry in the WWPN table to not be removed. The node will warmstart to clear this condition
HU021748.4.0.7A timing window issue related to remote copy memory allocation can result in a node warmstart
HU021748.4.2.0A timing window issue related to remote copy memory allocation can result in a node warmstart
HU021748.5.0.0A timing window issue related to remote copy memory allocation can result in a node warmstart
HU021758.3.1.2A GUI issue can cause drive counts to be inconsistent and crash browsers
HU021768.2.1.12During upgrade a node may limit the number of target ports it reports causing a failover contradiction on hosts
HU021788.3.1.2IP Quorum hosts may not be shown in lsquorum command output
HU021808.3.1.3When a svctask restorefcmap command is run on a VVol that is the target of another FlashCopy mapping both nodes in an I/O group may warmstart
HU021828.3.1.2Cisco MDS switches with old firmware may refuse port logins leading to a loss of access. For more details refer to this Flash
HU021838.2.1.11An issue in the way inter-node communication is handled can lead to a node warmstart
HU021838.3.1.0An issue in the way inter-node communication is handled can lead to a node warmstart
HU021848.2.1.12When a 3PAR controller experiences a fault that prevents normal I/O processing, it may issue a SCSI TARGET RESET command. This command is not supported and may cause multiple node asserts, possibly cluster-wide
HU021848.3.1.3When a 3PAR controller experiences a fault that prevents normal I/O processing, it may issue a SCSI TARGET RESET command. This command is not supported and may cause multiple node asserts, possibly cluster-wide
HU021848.4.0.0When a 3PAR controller experiences a fault that prevents normal I/O processing, it may issue a SCSI TARGET RESET command. This command is not supported and may cause multiple node asserts, possibly cluster-wide
HU021868.2.1.13NVMe drive pulls or firmware upgrades may lead to offline pools with the possibility of a small loss of data integrity. For more details refer to this Flash
HU021868.3.1.5NVMe drive pulls or firmware upgrades may lead to offline pools with the possibility of a small loss of data integrity. For more details refer to this Flash
HU021868.4.0.0NVMe drive pulls or firmware upgrades may lead to offline pools with the possibility of a small loss of data integrity. For more details refer to this Flash
HU021908.2.1.11Error 1046 does not trigger a Call Home even though it is a hardware fault
HU021908.3.1.0Error 1046 does not trigger a Call Home even though it is a hardware fault
HU021948.3.1.3Password reset via USB drive does not work as expected and the user is not able to log in to the Management or Service Assistant GUI with the new password
HU021948.4.0.0Password reset via USB drive does not work as expected and the user is not able to log in to the Management or Service Assistant GUI with the new password
HU021968.3.1.3A particular sequence of internode messaging delays can lead to a cluster-wide lease expiry
HU021968.4.0.0A particular sequence of internode messaging delays can lead to a cluster-wide lease expiry
HU022538.3.1.3A particular sequence of internode messaging delays can lead to a cluster-wide lease expiry
HU022538.4.0.0A particular sequence of internode messaging delays can lead to a cluster-wide lease expiry
HU021977.8.1.12Bulk volume removals can adversely impact related FlashCopy mappings leading to a Tier 2 recovery
HU021978.2.1.11Bulk volume removals can adversely impact related FlashCopy mappings leading to a Tier 2 recovery
HU021978.3.1.0Bulk volume removals can adversely impact related FlashCopy mappings leading to a Tier 2 recovery
HU022008.2.1.12When upgrading from v8.1 or earlier to v8.2.1 or later a remote copy issue may cause a node warmstart, stalling the upgrade
HU022017.8.1.13Shortly after upgrading drive firmware, specific drive models can fail due to 'Too many long IOs to drive for too long' errors
HU022018.2.1.12Shortly after upgrading drive firmware, specific drive models can fail due to 'Too many long IOs to drive for too long' errors
HU022018.3.1.3Shortly after upgrading drive firmware, specific drive models can fail due to 'Too many long IOs to drive for too long' errors
HU022018.4.0.2Shortly after upgrading drive firmware, specific drive models can fail due to 'Too many long IOs to drive for too long' errors
HU022018.5.0.0Shortly after upgrading drive firmware, specific drive models can fail due to 'Too many long IOs to drive for too long' errors
HU022217.8.1.13Shortly after upgrading drive firmware, specific drive models can fail due to 'Too many long IOs to drive for too long' errors
HU022218.2.1.12Shortly after upgrading drive firmware, specific drive models can fail due to 'Too many long IOs to drive for too long' errors
HU022218.3.1.3Shortly after upgrading drive firmware, specific drive models can fail due to 'Too many long IOs to drive for too long' errors
HU022218.4.0.2Shortly after upgrading drive firmware, specific drive models can fail due to 'Too many long IOs to drive for too long' errors
HU022218.5.0.0Shortly after upgrading drive firmware, specific drive models can fail due to 'Too many long IOs to drive for too long' errors
HU022028.3.1.2During a migratevdisk operation, if MDisk tiers in the target pool do not match those in the source pool, then a Tier 2 recovery may occur
HU022038.2.1.11When a node reboots, it is possible for the node to be unable to communicate with some of the NVMe drives in the enclosure
HU022038.3.1.2When a node reboots, it is possible for the node to be unable to communicate with some of the NVMe drives in the enclosure
HU022048.3.1.2After a Tier 2 recovery a node may fail to rejoin the cluster
HU022058.2.1.11Incremental FlashCopy targets can be corrupted when the FlashCopy source is a target of a remote copy relationship
HU022058.3.1.0Incremental FlashCopy targets can be corrupted when the FlashCopy source is a target of a remote copy relationship
HU022068.3.1.0Garbage collection can operate at inappropriate times, generating inefficient backend workload, adversely affecting flash drive write endurance and overloading nearline drives
HU022078.3.1.2If hosts send more concurrent iSCSI commands than a node can handle then it may enter a service state (error 578)
HU022088.3.1.3An issue with the handling of files by quorum can lead to a node warmstart
HU022088.4.0.0An issue with the handling of files by quorum can lead to a node warmstart
HU022108.3.1.3There is a very small timing window where a volume may be reported as offline, to a host, during its conversion from a regular volume to a HyperSwap volume
HU022108.4.0.0There is a very small timing window where a volume may be reported as offline, to a host, during its conversion from a regular volume to a HyperSwap volume
HU022128.2.1.11Remote Copy secondary may have inconsistent data following a stop with -access due to a missing bitmap merge from FlashCopy to Remote Copy. For more details refer to this Flash
HU022128.3.1.2Remote Copy secondary may have inconsistent data following a stop with -access due to a missing bitmap merge from FlashCopy to Remote Copy. For more details refer to this Flash
HU022138.2.1.12A Hot Spare Node (HSN) timing window issue can, during an HSN activation or deactivation, cause the cluster to broadcast an invalid VPD update to other clusters on the SAN. This may trigger a Tier 2 recovery on the other cluster. For more details refer to this Flash
HU022138.3.1.3A Hot Spare Node (HSN) timing window issue can, during an HSN activation or deactivation, cause the cluster to broadcast an invalid VPD update to other clusters on the SAN. This may trigger a Tier 2 recovery on the other cluster. For more details refer to this Flash
HU022138.4.0.0A Hot Spare Node (HSN) timing window issue can, during an HSN activation or deactivation, cause the cluster to broadcast an invalid VPD update to other clusters on the SAN. This may trigger a Tier 2 recovery on the other cluster. For more details refer to this Flash
HU022148.2.1.11Under a certain I/O pattern it is possible for metadata management in Data Reduction Pools to become inconsistent leading to a node warmstart
HU022148.3.1.0Under a certain I/O pattern it is possible for metadata management in Data Reduction Pools to become inconsistent leading to a node warmstart
HU022168.3.1.2When migrating or deleting a Change Volume of an RC relationship, the system might be exposed to a Tier 2 (Automatic Cluster Restart) recovery. When deleting the Change Volume, the T2 will recur, placing the nodes into a 564 state. Migrating the Change Volume will trigger a T2 and then recover. For more details refer to this Flash
HU022178.4.2.0Incomplete re-synchronisation following a Tier 3 recovery can lead to RAID inconsistencies
HU022198.5.0.12Certain tier 1 flash drives report 'SCSI check condition: Aborted command' events
HU022227.8.1.13Where the source volume of an incremental FlashCopy map is also a Metro or Global Mirror target volume that is using a change volume, or is a HyperSwap volume, there is a possibility that not all data will be copied to the FlashCopy target. For more details refer to this Flash
HU022228.2.1.11Where the source volume of an incremental FlashCopy map is also a Metro or Global Mirror target volume that is using a change volume, or is a HyperSwap volume, there is a possibility that not all data will be copied to the FlashCopy target. For more details refer to this Flash
HU022228.3.1.2Where the source volume of an incremental FlashCopy map is also a Metro or Global Mirror target volume that is using a change volume, or is a HyperSwap volume, there is a possibility that not all data will be copied to the FlashCopy target. For more details refer to this Flash
HU022248.3.1.2When the RAID component fails to free up memory quickly enough for I/O processing there can be a single node warmstart
HU022258.4.0.0An issue in the Thin Provisioning feature can lead to multiple warmstarts with the possibility of a loss of access to data
HU022268.3.1.6Due to an issue in DRP a node can repeatedly warmstart whilst rejoining a cluster
HU022268.4.0.6Due to an issue in DRP a node can repeatedly warmstart whilst rejoining a cluster
HU022268.5.0.0Due to an issue in DRP a node can repeatedly warmstart whilst rejoining a cluster
HU022278.2.1.12Certain I/O patterns can cause compression hardware to post errors. When those errors exceed a threshold the node can be taken offline
HU022278.3.1.4Certain I/O patterns can cause compression hardware to post errors. When those errors exceed a threshold the node can be taken offline
HU022278.4.0.2Certain I/O patterns can cause compression hardware to post errors. When those errors exceed a threshold the node can be taken offline
HU022278.5.0.0Certain I/O patterns can cause compression hardware to post errors. When those errors exceed a threshold the node can be taken offline
HU022298.3.1.2An issue in the BIOS firmware of some systems can cause a severe performance impact for iSCSI hosts
HU022308.3.1.3For IBM Flash Core Modules a change of state, from unused to candidate, can lead to a Tier 2 recovery
HU022308.4.0.0For IBM Flash Core Modules a change of state, from unused to candidate, can lead to a Tier 2 recovery
HU022328.4.0.0Forced removal of large volumes in FlashCopy mappings can cause multiple node warmstarts with the possibility of a loss of access
HU022348.3.1.2An issue in HyperSwap Read Passthrough can cause multiple node warmstarts with the possibility of a loss of access to data
HU022358.3.1.2The SSH CLI prompt can contain the characters FB after the cluster name
HU022378.2.1.11Under a rare and complicated set of conditions, a RAID 1 or RAID 10 array may drop a write, causing undetected data corruption. For more details refer to this Flash
HU022378.3.0.2Under a rare and complicated set of conditions, a RAID 1 or RAID 10 array may drop a write, causing undetected data corruption. For more details refer to this Flash
HU022378.3.1.2Under a rare and complicated set of conditions, a RAID 1 or RAID 10 array may drop a write, causing undetected data corruption. For more details refer to this Flash
HU022387.8.1.12Force-stopping a FlashCopy map, where the source volume is a Metro or Global Mirror target volume, may cause other FlashCopy maps to return invalid data if they are not 100% copied, in specific configurations. For more details refer to this Flash
HU022388.2.1.11Force-stopping a FlashCopy map, where the source volume is a Metro or Global Mirror target volume, may cause other FlashCopy maps to return invalid data if they are not 100% copied, in specific configurations. For more details refer to this Flash
HU022388.3.0.2Force-stopping a FlashCopy map, where the source volume is a Metro or Global Mirror target volume, may cause other FlashCopy maps to return invalid data if they are not 100% copied, in specific configurations. For more details refer to this Flash
HU022388.3.1.2Force-stopping a FlashCopy map, where the source volume is a Metro or Global Mirror target volume, may cause other FlashCopy maps to return invalid data if they are not 100% copied, in specific configurations. For more details refer to this Flash
HU022398.4.0.0A rare race condition in the Xcopy function can cause a single node warmstart
HU022418.2.1.12IP Replication can fail to create IP partnerships via the secondary cluster management IP
HU022418.3.1.3IP Replication can fail to create IP partnerships via the secondary cluster management IP
HU022418.4.0.0IP Replication can fail to create IP partnerships via the secondary cluster management IP
HU022428.3.1.2An iSCSI IP address, with a gateway argument of 0.0.0.0, is not properly assigned to each Ethernet port and any previously set iSCSI IP address may be retained
HU022438.4.2.0DMP for the 1670 event (replace CMOS) will shut down a node without confirmation from the user
HU022438.5.0.0DMP for the 1670 event (replace CMOS) will shut down a node without confirmation from the user
HU022448.2.1.12False positive node error 766 (depleted CMOS battery) messages may appear in the Event Log
HU022448.3.1.3False positive node error 766 (depleted CMOS battery) messages may appear in the Event Log
HU022458.4.0.0First support data collection fails to upload successfully
HU022478.2.1.11Unnecessary Ethernet MAC flapping messages reported in switch logs
HU022478.3.1.0Unnecessary Ethernet MAC flapping messages reported in switch logs
HU022488.3.1.3After upgrade the system may be unable to perform LDAP authentication
HU022508.4.0.0Duplicate volume names may cause multiple asserts
HU022518.3.1.3A warmstart may occur when a node receives iSCSI host login/logout requests out of sequence
HU022518.4.0.0A warmstart may occur when a node receives iSCSI host login/logout requests out of sequence
HU022558.3.1.3A timing issue in the processing of login requests can cause a single node warmstart
HU022558.4.0.0A timing issue in the processing of login requests can cause a single node warmstart
HU022618.3.1.4A Data Reduction Pool may be taken offline when metadata is detected to hold an invalid compression flag. For more details refer to this Flash
HU022618.4.0.2A Data Reduction Pool may be taken offline when metadata is detected to hold an invalid compression flag. For more details refer to this Flash
HU022618.5.0.0A Data Reduction Pool may be taken offline when metadata is detected to hold an invalid compression flag. For more details refer to this Flash
HU022628.3.1.3Entering the CLI applydrivesoftware -cancel command may result in cluster-wide warmstarts
HU022628.4.0.0Entering the CLI applydrivesoftware -cancel command may result in cluster-wide warmstarts
HU022638.4.0.6The pool properties dialog in the GUI displays thin-provisioning savings, compression savings and total savings. In Data Reduction Pools, the thin-provisioning savings displayed are actually the total savings instead of the thin-provisioning savings only
HU022638.4.2.0The pool properties dialog in the GUI displays thin-provisioning savings, compression savings and total savings. In Data Reduction Pools, the thin-provisioning savings displayed are actually the total savings instead of the thin-provisioning savings only
HU022638.5.0.0The pool properties dialog in the GUI displays thin-provisioning savings, compression savings and total savings. In Data Reduction Pools, the thin-provisioning savings displayed are actually the total savings instead of the thin-provisioning savings only
HU022658.4.0.0Enhanced inventory can sometimes be missing from callhome data due to the lsfabric command timing out
HU022668.2.1.12An issue in auto-expand can cause expansion to fail and the volume to be taken offline
HU022668.3.1.3An issue in auto-expand can cause expansion to fail and the volume to be taken offline
HU022668.4.0.0An issue in auto-expand can cause expansion to fail and the volume to be taken offline
HU022678.4.0.0After upgrade, it is possible for a node IP address to become a duplicate of the cluster IP address, with access to the config node lost as a consequence
HU022738.4.2.0When write I/O workload to a HyperSwap volume site reaches a certain threshold, the system should switch the primary and secondary copies. There are circumstances where this will not happen
HU022738.5.0.0When write I/O workload to a HyperSwap volume site reaches a certain threshold, the system should switch the primary and secondary copies. There are circumstances where this will not happen
HU022748.4.2.0Due to a timing issue in how events are handled, an active quorum loss and re-acquisition cycle can be triggered with a 3124 error
HU022748.5.0.0Due to a timing issue in how events are handled, an active quorum loss and re-acquisition cycle can be triggered with a 3124 error
HU022758.3.0.0Performing any sort of hardware maintenance during an upgrade may cause a cluster to destroy itself, with nodes entering candidate or service state 550
HU022777.8.1.13RAID parity scrubbing can become stalled causing an accumulation of media errors leading to multiple drive failures with the possibility of data integrity loss. For more details refer to this Flash
HU022778.2.1.12RAID parity scrubbing can become stalled causing an accumulation of media errors leading to multiple drive failures with the possibility of data integrity loss. For more details refer to this Flash
HU022778.3.1.3RAID parity scrubbing can become stalled causing an accumulation of media errors leading to multiple drive failures with the possibility of data integrity loss. For more details refer to this Flash
HU022778.4.0.2RAID parity scrubbing can become stalled causing an accumulation of media errors leading to multiple drive failures with the possibility of data integrity loss. For more details refer to this Flash
HU022778.5.0.0RAID parity scrubbing can become stalled causing an accumulation of media errors leading to multiple drive failures with the possibility of data integrity loss. For more details refer to this Flash
HU022808.3.1.4Spectrum Control or Storage Insights may be unable to collect stats after a Tier 2 recovery or system powerdown
HU022808.4.0.2Spectrum Control or Storage Insights may be unable to collect stats after a Tier 2 recovery or system powerdown
HU022808.5.0.0Spectrum Control or Storage Insights may be unable to collect stats after a Tier 2 recovery or system powerdown
HU022818.3.1.3When upgrading from v8.2.1, or earlier, to v8.3.0, or later, the CLI and GUI may incorrectly show all hosts offline. Checks from the host perspective will show them to be online
HU022828.3.1.4After a code upgrade, the config node may exhibit high write response times. In exceptionally rare circumstances, an MDisk group may be taken offline
HU022828.4.0.2After a code upgrade, the config node may exhibit high write response times. In exceptionally rare circumstances, an MDisk group may be taken offline
HU022828.5.0.0After a code upgrade, the config node may exhibit high write response times. In exceptionally rare circumstances, an MDisk group may be taken offline
HU022858.3.1.0Single node warmstart due to cache resource allocation issue
HU022888.2.1.12A node might fail to come online after a reboot or warmstart such as during an upgrade
HU022888.3.0.0A node might fail to come online after a reboot or warmstart such as during an upgrade
HU022898.3.1.3An issue with internal resource allocation in high-end systems, with 1000s of mirror copies, may cause multiple warmstarts with the possibility of a loss of access
HU022898.4.0.0An issue with internal resource allocation in high-end systems, with 1000s of mirror copies, may cause multiple warmstarts with the possibility of a loss of access
HU022908.5.0.0An issue in the virtualization component can divide up I/O resources incorrectly, adversely impacting queuing times for MDisks and CPU cores, leading to a performance impact
HU022918.4.0.2Internal counters for upper cache stage/destage I/O rates and latencies are not collected and zeroes are usually displayed
HU022918.5.0.0Internal counters for upper cache stage/destage I/O rates and latencies are not collected and zeroes are usually displayed
HU022928.2.1.12The use of maximum replication delay within Global Mirror may occasionally cause a node warmstart
HU022928.3.1.4The use of maximum replication delay within Global Mirror may occasionally cause a node warmstart
HU022928.4.0.2The use of maximum replication delay within Global Mirror may occasionally cause a node warmstart
HU022928.5.0.0The use of maximum replication delay within Global Mirror may occasionally cause a node warmstart
HU023088.2.1.12The use of maximum replication delay within Global Mirror may occasionally cause a node warmstart
HU023088.3.1.4The use of maximum replication delay within Global Mirror may occasionally cause a node warmstart
HU023088.4.0.2The use of maximum replication delay within Global Mirror may occasionally cause a node warmstart
HU023088.5.0.0The use of maximum replication delay within Global Mirror may occasionally cause a node warmstart
HU022938.7.2.0MDisk groups can go offline due to an overall timeout if the backend storage is configured incorrectly after a hot spare node comes online
HU022939.1.0.0MDisk groups can go offline due to an overall timeout if the backend storage is configured incorrectly after a hot spare node comes online
HU022958.2.1.12When upgrading from v8.2.1 or v8.3, in the presence of hot spare nodes, an issue with the handling of node metadata may cause a Tier 2 recovery
HU022958.3.1.3When upgrading from v8.2.1 or v8.3, in the presence of hot spare nodes, an issue with the handling of node metadata may cause a Tier 2 recovery
HU022958.4.2.0When upgrading from v8.2.1 or v8.3, in the presence of hot spare nodes, an issue with the handling of node metadata may cause a Tier 2 recovery
HU022958.5.0.0When upgrading from v8.2.1 or v8.3, in the presence of hot spare nodes, an issue with the handling of node metadata may cause a Tier 2 recovery
HU022968.3.1.6The zero page functionality can become corrupt, causing a volume to be initialised with non-zero data
HU022968.4.0.6The zero page functionality can become corrupt, causing a volume to be initialised with non-zero data
HU022968.4.2.0The zero page functionality can become corrupt, causing a volume to be initialised with non-zero data
HU022968.5.0.0The zero page functionality can become corrupt, causing a volume to be initialised with non-zero data
HU022978.4.0.7Error handling for a failing backend controller can lead to multiple warmstarts
HU022978.4.2.0Error handling for a failing backend controller can lead to multiple warmstarts
HU022978.5.0.0Error handling for a failing backend controller can lead to multiple warmstarts
HU022988.4.0.0A high frequency of 1920 events and restarting of consistency groups may provoke a Tier 2 recovery
HU022998.4.0.0NVMe drives can become locked due to a missing encryption key condition
HU023008.4.0.2Use of Enhanced Callhome in censored mode may lead to adverse performance around 02:00 (2AM)
HU023008.5.0.0Use of Enhanced Callhome in censored mode may lead to adverse performance around 02:00 (2AM)
HU023018.4.0.2iSCSI hosts connected to iWARP 25G adapters may experience adverse performance impacts
HU023018.5.0.0iSCSI hosts connected to iWARP 25G adapters may experience adverse performance impacts
HU023028.7.0.0A cluster-wide warmstart can occur if an unsupported drive is inserted into an enclosure slot and then removed
HU023038.3.1.3Configuration node warmstart will occur if mkhostcluster is run with -ignoreseedvolume and the ignored volumes have an id greater than 256
HU023038.4.0.2Configuration node warmstart will occur if mkhostcluster is run with -ignoreseedvolume and the ignored volumes have an id greater than 256
HU023038.5.0.0Configuration node warmstart will occur if mkhostcluster is run with -ignoreseedvolume and the ignored volumes have an id greater than 256
HU023058.3.1.3Configuration node warmstart will occur if mkhostcluster is run with -ignoreseedvolume and the ignored volumes have an id greater than 256
HU023058.4.0.2Configuration node warmstart will occur if mkhostcluster is run with -ignoreseedvolume and the ignored volumes have an id greater than 256
HU023058.5.0.0Configuration node warmstart will occur if mkhostcluster is run with -ignoreseedvolume and the ignored volumes have an id greater than 256
HU023048.4.0.2Some RAID operations for certain NVMe drives may cause adverse I/O performance
HU023048.5.0.0Some RAID operations for certain NVMe drives may cause adverse I/O performance
HU023068.3.1.9An offline host port can still be shown as active in lsfabric and the associated host can be shown as online despite being offline
HU023068.4.0.4An offline host port can still be shown as active in lsfabric and the associated host can be shown as online despite being offline
HU023068.4.2.0An offline host port can still be shown as active in lsfabric and the associated host can be shown as online despite being offline
HU023068.5.0.0An offline host port can still be shown as active in lsfabric and the associated host can be shown as online despite being offline
HU023098.4.2.0Due to a change in how FlashCopy and remote copy interact, multiple warmstarts may occur with the possibility of lease expiries
HU023098.5.0.0Due to a change in how FlashCopy and remote copy interact, multiple warmstarts may occur with the possibility of lease expiries
HU023108.4.0.2Where a FlashCopy mapping exists between two volumes in the same Data Reduction Pool and the same I/O group, and the target volume has deduplication enabled, then the target may contain invalid data
HU023108.5.0.0Where a FlashCopy mapping exists between two volumes in the same Data Reduction Pool and the same I/O group, and the target volume has deduplication enabled, then the target may contain invalid data
HU023118.3.1.4An issue in volume copy flushing may lead to higher than expected write cache delays
HU023118.4.0.2An issue in volume copy flushing may lead to higher than expected write cache delays
HU023118.5.0.0An issue in volume copy flushing may lead to higher than expected write cache delays
HU023128.4.0.3Changing the preferred node for a volume when it is in a remote copy relationship can result in multiple node warmstarts. For more details refer to this Flash
HU023128.5.0.0Changing the preferred node for a volume when it is in a remote copy relationship can result in multiple node warmstarts. For more details refer to this Flash
HU023138.2.1.12When a FlashCore Module (FCM) fails there is a chance that this can trigger other FCMs in the same control enclosure to also fail. If enough additional drives fail at the same time, this can take the array offline and cause a loss of access to data. For more details refer to this Flash
HU023138.3.1.4When a FlashCore Module (FCM) fails there is a chance that this can trigger other FCMs in the same control enclosure to also fail. If enough additional drives fail at the same time, this can take the array offline and cause a loss of access to data. For more details refer to this Flash
HU023138.4.0.2When a FlashCore Module (FCM) fails there is a chance that this can trigger other FCMs in the same control enclosure to also fail. If enough additional drives fail at the same time, this can take the array offline and cause a loss of access to data. For more details refer to this Flash
HU023138.5.0.0When a FlashCore Module (FCM) fails there is a chance that this can trigger other FCMs in the same control enclosure to also fail. If enough additional drives fail at the same time, this can take the array offline and cause a loss of access to data. For more details refer to this Flash
HU023148.3.1.4Due to a RAID issue, when a bad block is detected on an NVMe drive there may be multiple node warmstarts with a possibility of a loss of access to data
HU023148.4.0.0Due to a RAID issue, when a bad block is detected on an NVMe drive there may be multiple node warmstarts with a possibility of a loss of access to data
HU023158.3.1.4Failover for VMware iSER hosts may pause I/O for more than 120 seconds
HU023158.4.0.2Failover for VMware iSER hosts may pause I/O for more than 120 seconds
HU023158.5.0.0Failover for VMware iSER hosts may pause I/O for more than 120 seconds
HU023178.3.1.4A DRAID expansion can stall shortly after it is initiated
HU023178.4.0.2A DRAID expansion can stall shortly after it is initiated
HU023178.5.0.0A DRAID expansion can stall shortly after it is initiated
HU023188.3.0.0An issue in the handling of iSCSI host I/O may cause a node to kernel panic and go into service with error 578
HU023198.4.0.3The GUI can become unresponsive
HU023198.4.1.0The GUI can become unresponsive
HU023198.5.0.0The GUI can become unresponsive
HU023208.5.0.6A battery fails to perform a recondition. This is identified when 'lsenclosurebattery' shows the 'last_recondition_timestamp' as an empty field on the impacted node
HU023218.3.1.4Where nodes rely on RDMA clustering alone, if a node is removed, warmstarts, or goes down for upgrade, there may be a delay in internode communication leading to lease expiries
HU023218.4.0.2Where nodes rely on RDMA clustering alone, if a node is removed, warmstarts, or goes down for upgrade, there may be a delay in internode communication leading to lease expiries
HU023218.5.0.0Where nodes rely on RDMA clustering alone, if a node is removed, warmstarts, or goes down for upgrade, there may be a delay in internode communication leading to lease expiries
HU023228.3.1.4A deadlock condition in the Data Reduction Pool function may cause multiple node warmstarts and a temporary loss of access to data
HU023228.4.0.0A deadlock condition in the Data Reduction Pool function may cause multiple node warmstarts and a temporary loss of access to data
HU023238.3.1.4Stalled I/O during DRAID expansion can cause node warmstarts and a temporary loss of access to data
HU023238.4.0.0Stalled I/O during DRAID expansion can cause node warmstarts and a temporary loss of access to data
HU023258.4.0.3Tier 2 and Tier 3 recoveries can fail due to node warmstarts
HU023258.4.1.0Tier 2 and Tier 3 recoveries can fail due to node warmstarts
HU023258.5.0.0Tier 2 and Tier 3 recoveries can fail due to node warmstarts
HU023268.3.1.6Delays in passing messages between nodes in an I/O group can adversely impact write performance
HU023268.4.0.3Delays in passing messages between nodes in an I/O group can adversely impact write performance
HU023268.4.1.0Delays in passing messages between nodes in an I/O group can adversely impact write performance
HU023268.5.0.0Delays in passing messages between nodes in an I/O group can adversely impact write performance
HU023278.2.1.15Using addvdiskcopy in conjunction with expandvdisk with format may result in the original being overwritten by the new copy, producing blank copies. For more details refer to this Flash
HU023278.3.1.6Using addvdiskcopy in conjunction with expandvdisk with format may result in the original being overwritten by the new copy, producing blank copies. For more details refer to this Flash
HU023278.4.0.0Using addvdiskcopy in conjunction with expandvdisk with format may result in the original being overwritten by the new copy, producing blank copies. For more details refer to this Flash
HU023288.4.2.0Due to an issue with the handling of NVMe registration keys, changing the node WWNN in an active system will cause a lease expiry
HU023288.5.0.0Due to an issue with the handling of NVMe registration keys, changing the node WWNN in an active system will cause a lease expiry
HU023318.3.1.6Due to a threshold issue an error code 3400 may appear too often in the event log
HU023318.4.0.3Due to a threshold issue an error code 3400 may appear too often in the event log
HU023318.4.1.0Due to a threshold issue an error code 3400 may appear too often in the event log
HU023318.5.0.0Due to a threshold issue an error code 3400 may appear too often in the event log
HU023327.8.1.15When an I/O with invalid or inconsistent SCSI data but a good checksum is received from a host, it may cause a node warmstart
HU023328.2.1.12When an I/O with invalid or inconsistent SCSI data but a good checksum is received from a host, it may cause a node warmstart
HU023328.3.1.6When an I/O with invalid or inconsistent SCSI data but a good checksum is received from a host, it may cause a node warmstart
HU023328.4.0.3When an I/O with invalid or inconsistent SCSI data but a good checksum is received from a host, it may cause a node warmstart
HU023328.4.1.0When an I/O with invalid or inconsistent SCSI data but a good checksum is received from a host, it may cause a node warmstart
HU023328.5.0.0When an I/O with invalid or inconsistent SCSI data but a good checksum is received from a host, it may cause a node warmstart
HU023367.8.1.15When an I/O with invalid or inconsistent SCSI data but a good checksum is received from a host, it may cause a node warmstart
HU023368.2.1.12When an I/O with invalid or inconsistent SCSI data but a good checksum is received from a host, it may cause a node warmstart
HU023368.3.1.6When an I/O with invalid or inconsistent SCSI data but a good checksum is received from a host, it may cause a node warmstart
HU023368.4.0.3When an I/O with invalid or inconsistent SCSI data but a good checksum is received from a host, it may cause a node warmstart
HU023368.4.1.0When an I/O with invalid or inconsistent SCSI data but a good checksum is received from a host, it may cause a node warmstart
HU023368.5.0.0When an I/O with invalid or inconsistent SCSI data but a good checksum is received from a host, it may cause a node warmstart
HU023348.4.0.0Node to node connectivity issues may trigger repeated logins/logouts resulting in a single node warmstart
HU023358.4.0.7Cannot properly set the site for a host in a multi-site configuration (HyperSwap or stretched) via the GUI
HU023358.4.1.0Cannot properly set the site for a host in a multi-site configuration (HyperSwap or stretched) via the GUI
HU023387.8.1.13An issue in the setting up of reverse FlashCopy mappings can cause the background copy to finish prematurely providing an incomplete target image
HU023388.3.1.4An issue in the setting up of reverse FlashCopy mappings can cause the background copy to finish prematurely providing an incomplete target image
HU023388.4.0.2An issue in the setting up of reverse FlashCopy mappings can cause the background copy to finish prematurely providing an incomplete target image
HU023388.5.0.0An issue in the setting up of reverse FlashCopy mappings can cause the background copy to finish prematurely providing an incomplete target image
HU023398.4.0.7Multiple node warmstarts can occur if a system has direct Fibre Channel connections to an IBM i host, causing loss of access to data
HU023398.5.0.5Multiple node warmstarts can occur if a system has direct Fibre Channel connections to an IBM i host, causing loss of access to data
HU023398.5.2.0Multiple node warmstarts can occur if a system has direct Fibre Channel connections to an IBM i host, causing loss of access to data
HU023398.6.0.0Multiple node warmstarts can occur if a system has direct Fibre Channel connections to an IBM i host, causing loss of access to data
HU023408.3.1.4High replication workloads can cause multiple warmstarts with a loss of access at the partner cluster
HU023408.4.0.3High replication workloads can cause multiple warmstarts with a loss of access at the partner cluster
HU023408.4.1.0High replication workloads can cause multiple warmstarts with a loss of access at the partner cluster
HU023408.5.0.0High replication workloads can cause multiple warmstarts with a loss of access at the partner cluster
HU023418.3.1.2Cloud Callhome can become disabled due to an internal issue. A related error may not be recorded in the event log
HU023427.8.1.15Occasionally when an offline drive returns to online state later than its peers in the same RAID array there can be multiple node warmstarts that send nodes into a service state
HU023428.2.1.15Occasionally when an offline drive returns to online state later than its peers in the same RAID array there can be multiple node warmstarts that send nodes into a service state
HU023428.3.1.6Occasionally when an offline drive returns to online state later than its peers in the same RAID array there can be multiple node warmstarts that send nodes into a service state
HU023428.4.0.4Occasionally when an offline drive returns to online state later than its peers in the same RAID array there can be multiple node warmstarts that send nodes into a service state
HU023428.5.0.0Occasionally when an offline drive returns to online state later than its peers in the same RAID array there can be multiple node warmstarts that send nodes into a service state
HU023438.3.1.7For Huawei Dorado V3 Series backend controllers it is possible that not all available target ports will be utilized. This reduces the potential I/O throughput and can cause high read/write backend queue time on the cluster, impacting front-end latency for hosts
HU023438.4.0.6For Huawei Dorado V3 Series backend controllers it is possible that not all available target ports will be utilized. This reduces the potential I/O throughput and can cause high read/write backend queue time on the cluster, impacting front-end latency for hosts
HU023438.5.0.0For Huawei Dorado V3 Series backend controllers it is possible that not all available target ports will be utilized. This reduces the potential I/O throughput and can cause high read/write backend queue time on the cluster, impacting front-end latency for hosts
HU023458.3.1.6When connectivity to nodes in a local or remote cluster is lost, inflight IO can become stuck in an aborting state, consuming system resources and potentially adversely impacting performance
HU023458.4.0.4When connectivity to nodes in a local or remote cluster is lost, inflight IO can become stuck in an aborting state, consuming system resources and potentially adversely impacting performance
HU023458.4.2.0When connectivity to nodes in a local or remote cluster is lost, inflight IO can become stuck in an aborting state, consuming system resources and potentially adversely impacting performance
HU023458.5.0.0When connectivity to nodes in a local or remote cluster is lost, inflight IO can become stuck in an aborting state, consuming system resources and potentially adversely impacting performance
HU023468.4.2.0A mismatch between LBA stored by snapshot and disk allocator processes in the thin-provisioning component may cause a single node warmstart
HU023468.5.0.0A mismatch between LBA stored by snapshot and disk allocator processes in the thin-provisioning component may cause a single node warmstart
HU023478.5.0.0An issue in the handling of boot drive failure can lead to the partner drive also being failed
HU023498.4.2.0Using an incorrect FlashCopy consistency group id to stop a consistency group will result in a Tier 2 recovery if the incorrect id is greater than 501
HU023498.5.0.0Using an incorrect FlashCopy consistency group id to stop a consistency group will result in a Tier 2 recovery if the incorrect id is greater than 501
HU023538.4.0.0The GUI will refuse to start a GMCV relationship if one of the change volumes has an ID of 0
HU023548.2.1.12An issue in the handling of read transfers may cause hung host IOs leading to a node warmstart
HU023588.2.1.12An issue in Remote Copy, that stalls a switch of direction, can cause I/O timeouts leading to a node warmstart
HU023588.3.1.3An issue in Remote Copy, that stalls a switch of direction, can cause I/O timeouts leading to a node warmstart
HU023588.4.0.0An issue in Remote Copy, that stalls a switch of direction, can cause I/O timeouts leading to a node warmstart
HU023608.3.1.5Cloud Callhome may stop working and provide no indication of this in the event log. For more details refer to this Flash
HU023608.4.0.3Cloud Callhome may stop working and provide no indication of this in the event log. For more details refer to this Flash
HU023608.4.1.0Cloud Callhome may stop working and provide no indication of this in the event log. For more details refer to this Flash
HU023608.5.0.0Cloud Callhome may stop working and provide no indication of this in the event log. For more details refer to this Flash
HU023628.3.1.6When the RAID scrub process encounters bad grains, the peak response time for reads and writes can be adversely impacted
HU023628.4.0.3When the RAID scrub process encounters bad grains, the peak response time for reads and writes can be adversely impacted
HU023628.4.1.0When the RAID scrub process encounters bad grains, the peak response time for reads and writes can be adversely impacted
HU023628.5.0.0When the RAID scrub process encounters bad grains, the peak response time for reads and writes can be adversely impacted
HU023648.3.1.9False 989001 Managed Disk Group space warnings can be generated
HU023648.4.0.0False 989001 Managed Disk Group space warnings can be generated
HU023668.2.1.15Slow internal resource reclamation by the RAID component can cause a node warmstart
HU023668.3.1.6Slow internal resource reclamation by the RAID component can cause a node warmstart
HU023668.4.0.3Slow internal resource reclamation by the RAID component can cause a node warmstart
HU023668.4.2.0Slow internal resource reclamation by the RAID component can cause a node warmstart
HU023668.5.0.0Slow internal resource reclamation by the RAID component can cause a node warmstart
HU023678.3.1.9An issue with how RAID handles drive failures may lead to a node warmstart
HU023678.4.0.10An issue with how RAID handles drive failures may lead to a node warmstart
HU023678.4.2.0An issue with how RAID handles drive failures may lead to a node warmstart
HU023678.5.0.0An issue with how RAID handles drive failures may lead to a node warmstart
HU023688.4.2.0When consistency groups from code levels prior to v8.3 are carried through to v8.3 or later, there can be multiple node warmstarts with the possibility of a loss of access
HU023688.5.0.0When consistency groups from code levels prior to v8.3 are carried through to v8.3 or later, there can be multiple node warmstarts with the possibility of a loss of access
HU023708.4.0.7Replacing a drive will start a copyback, which can cause multiple node warmstarts to occur
HU023728.3.1.9Host SAS port 4 is missing from the GUI view on some systems.
HU023728.4.0.10Host SAS port 4 is missing from the GUI view on some systems.
HU023728.5.0.6Host SAS port 4 is missing from the GUI view on some systems.
HU023738.3.1.6An incorrect compression flag in metadata can take a DRP offline
HU023738.4.0.3An incorrect compression flag in metadata can take a DRP offline
HU023738.4.2.0An incorrect compression flag in metadata can take a DRP offline
HU023738.5.0.0An incorrect compression flag in metadata can take a DRP offline
HU023748.2.1.15Hosts with Emulex 16Gbps HBAs may become unable to communicate with a system with 8Gbps Fibre Channel ports, after the host HBA is upgraded to firmware version 12.8.364.11. This does not apply to systems with 16Gb or 32Gb Fibre Channel ports
HU023748.4.0.6Hosts with Emulex 16Gbps HBAs may become unable to communicate with a system with 8Gbps Fibre Channel ports, after the host HBA is upgraded to firmware version 12.8.364.11. This does not apply to systems with 16Gb or 32Gb Fibre Channel ports
HU023748.4.1.0Hosts with Emulex 16Gbps HBAs may become unable to communicate with a system with 8Gbps Fibre Channel ports, after the host HBA is upgraded to firmware version 12.8.364.11. This does not apply to systems with 16Gb or 32Gb Fibre Channel ports
HU023748.5.0.0Hosts with Emulex 16Gbps HBAs may become unable to communicate with a system with 8Gbps Fibre Channel ports, after the host HBA is upgraded to firmware version 12.8.364.11. This does not apply to systems with 16Gb or 32Gb Fibre Channel ports
HU023758.3.1.6An issue in how the GUI handles volume data can adversely impact its responsiveness
HU023758.4.0.3An issue in how the GUI handles volume data can adversely impact its responsiveness
HU023758.5.0.0An issue in how the GUI handles volume data can adversely impact its responsiveness
HU023768.3.1.6FlashCopy maps may get stuck at 99% due to inconsistent metadata accounting between nodes
HU023768.4.0.3FlashCopy maps may get stuck at 99% due to inconsistent metadata accounting between nodes
HU023768.4.1.0FlashCopy maps may get stuck at 99% due to inconsistent metadata accounting between nodes
HU023768.5.0.0FlashCopy maps may get stuck at 99% due to inconsistent metadata accounting between nodes
HU023778.3.1.6A race condition in DRP may stop IO being processed leading to timeouts
HU023788.4.2.0Multiple maximum replication delay events and Remote Copy relationship restarts can cause multiple node warmstarts with the possibility of a loss of access
HU023788.5.0.0Multiple maximum replication delay events and Remote Copy relationship restarts can cause multiple node warmstarts with the possibility of a loss of access
HU023818.4.0.3When the proxy server password is changed to one with more than 40 characters the config node will warmstart
HU023818.4.2.0When the proxy server password is changed to one with more than 40 characters the config node will warmstart
HU023818.5.0.0When the proxy server password is changed to one with more than 40 characters the config node will warmstart
HU023828.4.0.6A complex interaction of tasks, including drive firmware cleanup and syslog reconfiguration, can cause a 10-second delay when each node unpends (e.g. during an upgrade)
HU023828.4.2.0A complex interaction of tasks, including drive firmware cleanup and syslog reconfiguration, can cause a 10-second delay when each node unpends (e.g. during an upgrade)
HU023828.5.0.0A complex interaction of tasks, including drive firmware cleanup and syslog reconfiguration, can cause a 10-second delay when each node unpends (e.g. during an upgrade)
HU023838.4.0.6An additional 20 second IO delay can occur when a system update commits
HU023838.4.2.0An additional 20 second IO delay can occur when a system update commits
HU023838.5.0.0An additional 20 second IO delay can occur when a system update commits
HU023848.3.1.6An inter-node message queue can become stalled, leading to an I/O timeout warmstart, and temporary loss of access
HU023848.4.0.4An inter-node message queue can become stalled, leading to an I/O timeout warmstart, and temporary loss of access
HU023848.4.2.0An inter-node message queue can become stalled, leading to an I/O timeout warmstart, and temporary loss of access
HU023848.5.0.0An inter-node message queue can become stalled, leading to an I/O timeout warmstart, and temporary loss of access
HU023858.4.2.0Unexpected emails from the Inventory Script can be found on the mail server
HU023858.5.0.0Unexpected emails from the Inventory Script can be found on the mail server
HU023868.4.0.7Enclosure fault LED can remain on due to race condition when location LED state is changed
HU023868.4.2.0Enclosure fault LED can remain on due to race condition when location LED state is changed
HU023868.5.0.0Enclosure fault LED can remain on due to race condition when location LED state is changed
HU023878.4.0.3When using the GUI the maximum Data Reduction Pools limitation incorrectly includes child pools
HU023878.5.0.0When using the GUI the maximum Data Reduction Pools limitation incorrectly includes child pools
HU023888.4.0.4GUI can hang randomly due to an out of memory issue after running any task
HU023888.4.2.0GUI can hang randomly due to an out of memory issue after running any task
HU023888.5.0.0GUI can hang randomly due to an out of memory issue after running any task
HU023908.3.1.3A memory handling issue in the REST API may cause an out-of-memory condition when listing a large number of volumes
HU023908.4.0.0A memory handling issue in the REST API may cause an out-of-memory condition when listing a large number of volumes
HU023918.3.1.9An issue with how websockets connections are handled can cause the GUI to become unresponsive requiring a restart of the Tomcat server
HU023918.4.0.10An issue with how websockets connections are handled can cause the GUI to become unresponsive requiring a restart of the Tomcat server
HU023918.5.0.0An issue with how websockets connections are handled can cause the GUI to become unresponsive requiring a restart of the Tomcat server
HU023928.3.1.6Validation in the Upload Support Package feature will reject new case number formats in the PMR field
HU023928.4.0.3Validation in the Upload Support Package feature will reject new case number formats in the PMR field
HU023928.5.0.0Validation in the Upload Support Package feature will reject new case number formats in the PMR field
HU023938.2.1.15Automatic resize of compressed/thin volumes may fail causing warmstarts on both nodes in an I/O group
HU023938.3.1.6Automatic resize of compressed/thin volumes may fail causing warmstarts on both nodes in an I/O group
HU023938.4.0.4Automatic resize of compressed/thin volumes may fail causing warmstarts on both nodes in an I/O group
HU023938.4.2.0Automatic resize of compressed/thin volumes may fail causing warmstarts on both nodes in an I/O group
HU023938.5.0.0Automatic resize of compressed/thin volumes may fail causing warmstarts on both nodes in an I/O group
HU023978.3.1.6A Data Reduction Pool, with deduplication enabled, can retain some stale state after deletion and recreation. This has no immediate effect. However, if a node later goes offline, this condition can cause the pool to be taken offline
HU023978.4.0.4A Data Reduction Pool, with deduplication enabled, can retain some stale state after deletion and recreation. This has no immediate effect. However, if a node later goes offline, this condition can cause the pool to be taken offline
HU023978.4.2.0A Data Reduction Pool, with deduplication enabled, can retain some stale state after deletion and recreation. This has no immediate effect. However, if a node later goes offline, this condition can cause the pool to be taken offline
HU023978.5.0.0A Data Reduction Pool, with deduplication enabled, can retain some stale state after deletion and recreation. This has no immediate effect. However, if a node later goes offline, this condition can cause the pool to be taken offline
HU023998.3.1.6Boot drives may be reported as having invalid state by the GUI, even though they are online
HU024008.2.1.15A problem in the virtualization component of the system can cause a migration IO to be submitted in an incorrect context resulting in a node warmstart. In some cases it is possible that this IO has been submitted to an incorrect location on the backend, which can cause data corruption of an isolated small area
HU024008.3.1.6A problem in the virtualization component of the system can cause a migration IO to be submitted in an incorrect context resulting in a node warmstart. In some cases it is possible that this IO has been submitted to an incorrect location on the backend, which can cause data corruption of an isolated small area
HU024008.4.0.4A problem in the virtualization component of the system can cause a migration IO to be submitted in an incorrect context resulting in a node warmstart. In some cases it is possible that this IO has been submitted to an incorrect location on the backend, which can cause data corruption of an isolated small area
HU024008.5.0.0A problem in the virtualization component of the system can cause a migration IO to be submitted in an incorrect context resulting in a node warmstart. In some cases it is possible that this IO has been submitted to an incorrect location on the backend, which can cause data corruption of an isolated small area
HU024018.2.1.15EasyTier can move extents between identical mdisks until one runs out of space
HU024018.3.1.6EasyTier can move extents between identical mdisks until one runs out of space
HU024018.4.0.4EasyTier can move extents between identical mdisks until one runs out of space
HU024018.5.0.0EasyTier can move extents between identical mdisks until one runs out of space
HU024028.4.0.7The remote support feature may use more memory than expected causing a temporary loss of access
HU024028.5.0.0The remote support feature may use more memory than expected causing a temporary loss of access
HU024058.4.0.4An issue in the zero detection of the new Message Passing (MP) functionality can cause thin volumes to allocate space when writing zeros
HU024058.4.2.0An issue in the zero detection of the new Message Passing (MP) functionality can cause thin volumes to allocate space when writing zeros
HU024058.5.0.0An issue in the zero detection of the new Message Passing (MP) functionality can cause thin volumes to allocate space when writing zeros
HU024067.8.1.15An interoperability issue between Cisco NX-OS firmware and the Spectrum Virtualize Fibre Channel driver can cause a node warmstart on NPIV failback (for example during an upgrade) with the potential for a loss of access. For more details refer to this Flash
HU024068.2.1.15An interoperability issue between Cisco NX-OS firmware and the Spectrum Virtualize Fibre Channel driver can cause a node warmstart on NPIV failback (for example during an upgrade) with the potential for a loss of access. For more details refer to this Flash
HU024068.3.1.6An interoperability issue between Cisco NX-OS firmware and the Spectrum Virtualize Fibre Channel driver can cause a node warmstart on NPIV failback (for example during an upgrade) with the potential for a loss of access. For more details refer to this Flash
HU024068.4.0.4An interoperability issue between Cisco NX-OS firmware and the Spectrum Virtualize Fibre Channel driver can cause a node warmstart on NPIV failback (for example during an upgrade) with the potential for a loss of access. For more details refer to this Flash
HU024068.4.2.1An interoperability issue between Cisco NX-OS firmware and the Spectrum Virtualize Fibre Channel driver can cause a node warmstart on NPIV failback (for example during an upgrade) with the potential for a loss of access. For more details refer to this Flash
HU024068.4.3.1An interoperability issue between Cisco NX-OS firmware and the Spectrum Virtualize Fibre Channel driver can cause a node warmstart on NPIV failback (for example during an upgrade) with the potential for a loss of access. For more details refer to this Flash
HU024068.5.0.0An interoperability issue between Cisco NX-OS firmware and the Spectrum Virtualize Fibre Channel driver can cause a node warmstart on NPIV failback (for example during an upgrade) with the potential for a loss of access. For more details refer to this Flash
HU024098.3.1.7If the rmhost command is executed with -force for an MS Windows server, an issue in the iSCSI driver can cause the relevant target initiator to become unresponsive
HU024098.4.0.6If the rmhost command is executed with -force for an MS Windows server, an issue in the iSCSI driver can cause the relevant target initiator to become unresponsive
HU024098.5.0.0If the rmhost command is executed with -force for an MS Windows server, an issue in the iSCSI driver can cause the relevant target initiator to become unresponsive
HU024108.3.1.7A timing window issue in the transition to a spare node can cause a cluster-wide Tier 2 recovery
HU024108.4.0.6A timing window issue in the transition to a spare node can cause a cluster-wide Tier 2 recovery
HU024108.4.2.0A timing window issue in the transition to a spare node can cause a cluster-wide Tier 2 recovery
HU024108.5.0.0A timing window issue in the transition to a spare node can cause a cluster-wide Tier 2 recovery
HU024118.4.2.0An issue in the NVMe drive presence checking can result in a node warmstart
HU024118.5.0.0An issue in the NVMe drive presence checking can result in a node warmstart
HU024148.3.1.6Under a specific sequence and timing of circumstances, the garbage collection process can time out and take a pool offline temporarily
HU024148.4.0.4Under a specific sequence and timing of circumstances, the garbage collection process can time out and take a pool offline temporarily
HU024148.4.2.0Under a specific sequence and timing of circumstances, the garbage collection process can time out and take a pool offline temporarily
HU024148.5.0.0Under a specific sequence and timing of circumstances, the garbage collection process can time out and take a pool offline temporarily
HU024158.5.0.0An issue in garbage collection IO flow logic can take a pool offline temporarily
HU024168.5.0.0A timing window issue in DRP can cause a valid condition to be deemed invalid triggering a single node warmstart
HU024178.5.0.0Restoring a reverse FlashCopy mapping to a volume that is also the source of an incremental FlashCopy mapping can take longer than expected
HU024188.3.1.6During a DRAID array rebuild data can be written to an incorrect location. For more details refer to this Flash
HU024188.4.0.5During a DRAID array rebuild data can be written to an incorrect location. For more details refer to this Flash
HU024188.4.2.1During a DRAID array rebuild data can be written to an incorrect location. For more details refer to this Flash
HU024188.5.0.0During a DRAID array rebuild data can be written to an incorrect location. For more details refer to this Flash
HU024198.3.1.6During creation of a drive FRU id the resulting unique number can contain a space character, which can lead to CLI commands that return this value presenting it as a truncated string
HU024198.4.0.2During creation of a drive FRU id the resulting unique number can contain a space character, which can lead to CLI commands that return this value presenting it as a truncated string
HU024198.4.2.0During creation of a drive FRU id the resulting unique number can contain a space character, which can lead to CLI commands that return this value presenting it as a truncated string
HU024198.5.0.0During creation of a drive FRU id the resulting unique number can contain a space character, which can lead to CLI commands that return this value presenting it as a truncated string
HU024208.4.0.10During an array copyback it is possible for a memory leak to result in the progress stalling and a warmstart of all nodes, resulting in a temporary loss of access
HU024208.5.0.6During an array copyback it is possible for a memory leak to result in the progress stalling and a warmstart of all nodes, resulting in a temporary loss of access
HU024208.5.2.0During an array copyback it is possible for a memory leak to result in the progress stalling and a warmstart of all nodes, resulting in a temporary loss of access
HU024208.6.0.0During an array copyback it is possible for a memory leak to result in the progress stalling and a warmstart of all nodes, resulting in a temporary loss of access
HU024218.4.2.1A logic fault in the socket communication sub-system can cause multiple node warmstarts when more than 8 external clients attempt to connect. It is possible for this to lead to a loss of access
HU024218.5.0.0A logic fault in the socket communication sub-system can cause multiple node warmstarts when more than 8 external clients attempt to connect. It is possible for this to lead to a loss of access
HU024228.3.1.6GUI performance can be degraded when displaying large numbers of volumes or other objects
HU024228.4.0.4GUI performance can be degraded when displaying large numbers of volumes or other objects
HU024228.4.2.0GUI performance can be degraded when displaying large numbers of volumes or other objects
HU024228.5.0.0GUI performance can be degraded when displaying large numbers of volumes or other objects
HU024238.4.0.6Volume copies may be taken offline even though there is sufficient free capacity
HU024238.4.2.0Volume copies may be taken offline even though there is sufficient free capacity
HU024238.5.0.0Volume copies may be taken offline even though there is sufficient free capacity
HU024248.3.1.6Frequent GUI refreshing adversely impacts usability on some screens
HU024248.4.0.0Frequent GUI refreshing adversely impacts usability on some screens
HU024258.3.1.6An issue in the handling of internal messages, when the system has a high IO workload to two or more different FlashCopy maps in the same dependency chain, can result in incorrect counters. The node will warmstart to clear this condition.
HU024258.4.0.3An issue in the handling of internal messages, when the system has a high IO workload to two or more different FlashCopy maps in the same dependency chain, can result in incorrect counters. The node will warmstart to clear this condition.
HU024258.4.2.0An issue in the handling of internal messages, when the system has a high IO workload to two or more different FlashCopy maps in the same dependency chain, can result in incorrect counters. The node will warmstart to clear this condition.
HU024258.5.0.0An issue in the handling of internal messages, when the system has a high IO workload to two or more different FlashCopy maps in the same dependency chain, can result in incorrect counters. The node will warmstart to clear this condition.
HU024268.4.0.4Where an email server accepts the STARTTLS command during the initial handshake, if TLS v1.2 is disabled or not supported the system will be unable to send email alerts
HU024268.4.2.0Where an email server accepts the STARTTLS command during the initial handshake, if TLS v1.2 is disabled or not supported the system will be unable to send email alerts
HU024268.5.0.0Where an email server accepts the STARTTLS command during the initial handshake, if TLS v1.2 is disabled or not supported the system will be unable to send email alerts
HU024288.4.0.6Issuing a movevdisk CLI command immediately after removing an associated GMCV relationship can trigger a Tier 2 recovery
HU024288.5.0.0Issuing a movevdisk CLI command immediately after removing an associated GMCV relationship can trigger a Tier 2 recovery
HU024297.8.1.14System can go offline shortly after changing the SMTP settings using the chemailserver command via the GUI
HU024298.2.1.12System can go offline shortly after changing the SMTP settings using the chemailserver command via the GUI
HU024298.3.1.6System can go offline shortly after changing the SMTP settings using the chemailserver command via the GUI
HU024298.4.0.2System can go offline shortly after changing the SMTP settings using the chemailserver command via the GUI
HU024298.5.0.0System can go offline shortly after changing the SMTP settings using the chemailserver command via the GUI
HU024308.4.2.1Expanding or shrinking the real size of FlashCopy target volumes can cause recurring node warmstarts and may cause nodes to revert to candidate state
HU024308.5.0.0Expanding or shrinking the real size of FlashCopy target volumes can cause recurring node warmstarts and may cause nodes to revert to candidate state
HU024328.5.0.17Problems caused by the internal SAS indexes on the SAS expansion connection can lead to SAS host degradation
HU024338.2.1.15When a BIOS upgrade occurs excessive tracefile entries can be generated
HU024338.3.1.7When a BIOS upgrade occurs excessive tracefile entries can be generated
HU024348.4.0.6An issue in the internal accounting of FlashCopy resources can lead to multiple node warmstarts taking a cluster offline
HU024348.5.0.0An issue in the internal accounting of FlashCopy resources can lead to multiple node warmstarts taking a cluster offline
HU024358.4.2.1The removal of deduplicated volumes can cause repeated node warmstarts and the possibility of offline Data Reduction Pools
HU024358.5.0.0The removal of deduplicated volumes can cause repeated node warmstarts and the possibility of offline Data Reduction Pools
HU024378.5.0.0Error 2700 is not reported in the Event Log when an incorrect NTP server IP is entered
HU024388.4.0.6Certain conditions can provoke a cache behaviour that unbalances workload distribution across CPU cores leading to performance impact
HU024388.5.0.0Certain conditions can provoke a cache behaviour that unbalances workload distribution across CPU cores leading to performance impact
HU024398.4.0.10An IP partnership between a pre-v8.4.2 system and v8.4.2 or later system may be disconnected because of a keepalive timeout
HU024398.5.0.0An IP partnership between a pre-v8.4.2 system and v8.4.2 or later system may be disconnected because of a keepalive timeout
HU024408.4.0.6Using the migrateexts command when both source and target mdisks are unmanaged can trigger a Tier 2 recovery
HU024408.5.0.0Using the migrateexts command when both source and target mdisks are unmanaged can trigger a Tier 2 recovery
HU024418.4.2.1Safeguarded Copy with DRP can cause node warmstarts and mdisk timeouts
HU024418.4.3.0Safeguarded Copy with DRP can cause node warmstarts and mdisk timeouts
HU024418.5.0.3Safeguarded Copy with DRP can cause node warmstarts and mdisk timeouts
HU024418.5.1.0Safeguarded Copy with DRP can cause node warmstarts and mdisk timeouts
HU024418.5.3.0Safeguarded Copy with DRP can cause node warmstarts and mdisk timeouts
HU024418.6.0.0Safeguarded Copy with DRP can cause node warmstarts and mdisk timeouts
HU024868.4.2.1Safeguarded Copy with DRP can cause node warmstarts and mdisk timeouts
HU024868.4.3.0Safeguarded Copy with DRP can cause node warmstarts and mdisk timeouts
HU024868.5.0.3Safeguarded Copy with DRP can cause node warmstarts and mdisk timeouts
HU024868.5.1.0Safeguarded Copy with DRP can cause node warmstarts and mdisk timeouts
HU024868.5.3.0Safeguarded Copy with DRP can cause node warmstarts and mdisk timeouts
HU024868.6.0.0Safeguarded Copy with DRP can cause node warmstarts and mdisk timeouts
HU024428.4.0.6Issuing a lspotentialarraysize CLI command with an invalid drive class can trigger a Tier 2 recovery
HU024428.5.0.0Issuing a lspotentialarraysize CLI command with an invalid drive class can trigger a Tier 2 recovery
HU024438.3.1.9An inefficiency in the RAID code that processes requests to free memory can cause the request to timeout leading to a node warmstart
HU024438.4.0.10An inefficiency in the RAID code that processes requests to free memory can cause the request to timeout leading to a node warmstart
HU024438.5.0.0An inefficiency in the RAID code that processes requests to free memory can cause the request to timeout leading to a node warmstart
HU024448.4.0.6Some security scanners can report unauthenticated targets against all the iSCSI IP addresses of a node
HU024448.5.0.0Some security scanners can report unauthenticated targets against all the iSCSI IP addresses of a node
HU024458.5.0.0When attempting to expand a volume, if the volume size is greater than 1TB the GUI may not display the expansion pop-up window
HU024468.5.0.17An invalid alert relating to GMCV freeze time can be displayed
HU024468.5.1.0An invalid alert relating to GMCV freeze time can be displayed
HU024468.6.0.0An invalid alert relating to GMCV freeze time can be displayed
HU024488.5.0.0IP Replication statistics displayed in the GUI and XML can be incorrect
HU024498.5.0.6Due to a timing issue, it is possible (but very unlikely) that maintenance on a SAS 92F/92G expansion enclosure could cause multiple node warmstarts, leading to a loss of access
HU024508.4.0.7A defect in the frame switching functionality of 32Gbps HBA firmware can cause a node warmstart
HU024508.5.0.0A defect in the frame switching functionality of 32Gbps HBA firmware can cause a node warmstart
HU024518.3.1.7An incorrect IP Quorum lease extension setting can lead to a node warmstart
HU024528.4.0.7An issue in NVMe I/O write functionality can cause a single node warmstart
HU024528.5.0.0An issue in NVMe I/O write functionality can cause a single node warmstart
HU024538.3.1.9It may not be possible to connect to GUI or CLI without a restart of the Tomcat server
HU024538.4.0.10It may not be possible to connect to GUI or CLI without a restart of the Tomcat server
HU024538.5.0.2It may not be possible to connect to GUI or CLI without a restart of the Tomcat server
HU024538.6.0.0It may not be possible to connect to GUI or CLI without a restart of the Tomcat server
HU024548.5.0.0Large numbers of 2251 errors are recorded in the Event Log even though LDAP appears to be working
HU024558.3.1.7After converting a system from 3-site to 2-site a timing window issue can trigger a cluster tier 2 recovery
HU024558.4.0.7After converting a system from 3-site to 2-site a timing window issue can trigger a cluster tier 2 recovery
HU024558.5.0.0After converting a system from 3-site to 2-site a timing window issue can trigger a cluster tier 2 recovery
HU024568.5.0.10Unseating an NVMe drive after an automanage failure can cause a node to warmstart
HU024568.5.2.0Unseating an NVMe drive after an automanage failure can cause a node to warmstart
HU024608.3.1.7Multiple node warmstarts triggered by ports on the 32Gb Fibre Channel adapter failing
HU024608.5.0.0Multiple node warmstarts triggered by ports on the 32Gb Fibre Channel adapter failing
HU024618.5.0.0Livedump collection can fail multiple times
HU024628.5.0.12A node can warmstart when a FlashCopy volume is flushing, quiesces, and has pinned data
HU024628.5.2.0A node can warmstart when a FlashCopy volume is flushing, quiesces, and has pinned data
HU024628.6.0.0A node can warmstart when a FlashCopy volume is flushing, quiesces, and has pinned data
HU024638.4.0.10LDAP user accounts can become locked out because of multiple failed login attempts
HU024638.5.0.6LDAP user accounts can become locked out because of multiple failed login attempts
HU024638.5.1.0LDAP user accounts can become locked out because of multiple failed login attempts
HU024638.6.0.0LDAP user accounts can become locked out because of multiple failed login attempts
HU024648.5.0.5An issue in the processing of NVMe host logouts can cause multiple node warmstarts
HU024648.5.1.0An issue in the processing of NVMe host logouts can cause multiple node warmstarts
HU024648.6.0.0An issue in the processing of NVMe host logouts can cause multiple node warmstarts
HU024668.3.1.7An issue in the handling of drive failures can result in multiple node warmstarts
HU024668.4.0.7An issue in the handling of drive failures can result in multiple node warmstarts
HU024668.5.0.6An issue in the handling of drive failures can result in multiple node warmstarts
HU024678.3.1.9When one node disappears from the cluster the surviving node can be unable to achieve quorum allegiance in a timely manner causing it to lease expire
HU024678.4.0.0When one node disappears from the cluster the surviving node can be unable to achieve quorum allegiance in a timely manner causing it to lease expire
HU024688.5.0.6The lsvdisk preferred_node_id filter does not work correctly
HU024688.5.1.0The lsvdisk preferred_node_id filter does not work correctly
HU024688.6.0.0The lsvdisk preferred_node_id filter does not work correctly
HU024717.8.1.15After starting a FlashCopy map with -restore in a graph with a GMCV secondary disk that was stopped with -access there can be a data integrity issue
HU024718.3.1.9After starting a FlashCopy map with -restore in a graph with a GMCV secondary disk that was stopped with -access there can be a data integrity issue
HU024718.4.0.10After starting a FlashCopy map with -restore in a graph with a GMCV secondary disk that was stopped with -access there can be a data integrity issue
HU024718.5.1.0After starting a FlashCopy map with -restore in a graph with a GMCV secondary disk that was stopped with -access there can be a data integrity issue
HU024718.6.0.0After starting a FlashCopy map with -restore in a graph with a GMCV secondary disk that was stopped with -access there can be a data integrity issue
HU024748.3.1.9An SFP failure can cause a node warmstart
HU024748.4.0.7An SFP failure can cause a node warmstart
HU024748.5.0.6An SFP failure can cause a node warmstart
HU024758.4.0.9Power outage can cause reboots on nodes with 25Gb Ethernet adapters, necessitating a Tier 3 recovery
HU024758.5.0.6Power outage can cause reboots on nodes with 25Gb Ethernet adapters, necessitating a Tier 3 recovery
HU024758.5.2.0Power outage can cause reboots on nodes with 25Gb Ethernet adapters, necessitating a Tier 3 recovery
HU024758.6.0.0Power outage can cause reboots on nodes with 25Gb Ethernet adapters, necessitating a Tier 3 recovery
HU024798.4.0.7If an NVMe host cancels a large number of I/O requests, multiple node warmstarts might occur
HU024798.5.0.5If an NVMe host cancels a large number of I/O requests, multiple node warmstarts might occur
HU024828.4.0.7An issue with 25Gb Ethernet adapter card firmware can cause a node to warmstart if a specific signal is received from the iSER switch. It is possible for this signal to be propagated to all nodes, resulting in a loss of access to data
HU024838.5.2.0A Tier 2 recovery may occur after the mkrcrelationship command is run
HU024838.6.0.0A Tier 2 recovery may occur after the mkrcrelationship command is run
HU024848.5.0.5The GUI does not allow expansion of DRP thin or compressed volumes
HU024848.5.2.0The GUI does not allow expansion of DRP thin or compressed volumes
HU024848.6.0.0The GUI does not allow expansion of DRP thin or compressed volumes
HU024858.3.1.9Recurring node warmstarts on systems with DRP that have been upgraded to 8.3.1.7 or 8.3.1.8
HU024878.5.0.6Problems expanding the size of a volume using the GUI
HU024878.5.2.0Problems expanding the size of a volume using the GUI
HU024878.6.0.0Problems expanding the size of a volume using the GUI
HU024888.5.0.3Remote Copy partnerships disconnect every 15 minutes with error 987301 (Connection to a configured remote cluster has been lost)
HU024888.5.1.0Remote Copy partnerships disconnect every 15 minutes with error 987301 (Connection to a configured remote cluster has been lost)
HU024888.6.0.0Remote Copy partnerships disconnect every 15 minutes with error 987301 (Connection to a configured remote cluster has been lost)
HU024908.5.0.6Upon first or subsequent boots of an FS9500, a 1034 error may appear in the event log stating that the CPU PCIe link is degraded
HU024908.5.2.0Upon first or subsequent boots of an FS9500, a 1034 error may appear in the event log stating that the CPU PCIe link is degraded
HU024908.6.0.0Upon first or subsequent boots of an FS9500, a 1034 error may appear in the event log stating that the CPU PCIe link is degraded
HU024918.5.0.5On upgrade from v8.3.x, v8.4.0 or v8.4.1 to v8.5, if the system has Global Mirror with Change Volumes relationships, a single node warmstart can occur
HU024918.5.2.0On upgrade from v8.3.x, v8.4.0 or v8.4.1 to v8.5, if the system has Global Mirror with Change Volumes relationships, a single node warmstart can occur
HU024918.6.0.0On upgrade from v8.3.x, v8.4.0 or v8.4.1 to v8.5, if the system has Global Mirror with Change Volumes relationships, a single node warmstart can occur
HU024928.5.0.5Configuration backup can fail after upgrade to v8.5. This only occurs on a very small number of systems that have a particular internal cluster state. If a system is running v8.5 and does not have an informational eventlog entry with error ID 988100 (CRON job failed), then it is not affected.
HU024928.5.2.0Configuration backup can fail after upgrade to v8.5. This only occurs on a very small number of systems that have a particular internal cluster state. If a system is running v8.5 and does not have an informational eventlog entry with error ID 988100 (CRON job failed), then it is not affected.
HU024928.6.0.0Configuration backup can fail after upgrade to v8.5. This only occurs on a very small number of systems that have a particular internal cluster state. If a system is running v8.5 and does not have an informational eventlog entry with error ID 988100 (CRON job failed), then it is not affected.
HU024938.7.2.0On certain controllers that have more than 511 LUNs configured, mdisks may go offline
HU024939.1.0.0On certain controllers that have more than 511 LUNs configured, mdisks may go offline
HU024948.5.0.5A system with a DNS server configured, which cannot ping the server, will log information events in the eventlog. In some environments the firewall blocks ping packets but allows DNS lookup, so this APAR disables these events.
HU024948.5.2.0A system with a DNS server configured, which cannot ping the server, will log information events in the eventlog. In some environments the firewall blocks ping packets but allows DNS lookup, so this APAR disables these events.
HU024948.6.0.0A system with a DNS server configured, which cannot ping the server, will log information events in the eventlog. In some environments the firewall blocks ping packets but allows DNS lookup, so this APAR disables these events.
HU024978.4.0.7A system with direct Fibre Channel connections to a host, or to another Spectrum Virtualize system, might experience multiple node warmstarts
HU024978.5.0.5A system with direct Fibre Channel connections to a host, or to another Spectrum Virtualize system, might experience multiple node warmstarts
HU024978.5.2.0A system with direct Fibre Channel connections to a host, or to another Spectrum Virtualize system, might experience multiple node warmstarts
HU024978.6.0.0A system with direct Fibre Channel connections to a host, or to another Spectrum Virtualize system, might experience multiple node warmstarts
HU024988.5.0.5If a host object with no ports exists on upgrade to v8.5, the GUI volume mapping panel may fail to load.
HU024988.5.2.0If a host object with no ports exists on upgrade to v8.5, the GUI volume mapping panel may fail to load.
HU024988.6.0.0If a host object with no ports exists on upgrade to v8.5, the GUI volume mapping panel may fail to load.
HU024998.3.1.9A pop-up with the message 'The server was unable to process the request' may occur due to an invalid timestamp in the file used to provide the pop-up reminder
HU025008.5.0.5If a volume in a FlashCopy mapping is deleted, and the deletion fails (for example because the user does not have the correct permissions to delete that volume), node warmstarts can occur, leading to loss of access
HU025018.5.0.5If an internal I/O timeout occurs in a RAID array, a node warmstart can occur
HU025018.5.2.0If an internal I/O timeout occurs in a RAID array, a node warmstart can occur
HU025018.6.0.0If an internal I/O timeout occurs in a RAID array, a node warmstart can occur
HU025028.5.0.5On upgrade to v8.4.2 or later with FlashCopy active, a node warmstart can occur, leading to a loss of access
HU025028.5.2.0On upgrade to v8.4.2 or later with FlashCopy active, a node warmstart can occur, leading to a loss of access
HU025028.6.0.0On upgrade to v8.4.2 or later with FlashCopy active, a node warmstart can occur, leading to a loss of access
HU025038.5.0.5The Date / Time panel can fail to load in the GUI when a timezone set via the CLI is not supported by the GUI
HU025038.5.1.0The Date / Time panel can fail to load in the GUI when a timezone set via the CLI is not supported by the GUI
HU025038.6.0.0The Date / Time panel can fail to load in the GUI when a timezone set via the CLI is not supported by the GUI
HU025048.5.0.5The Date / Time panel can display an incorrect timezone and default to manual time setting rather than NTP
HU025048.5.1.0The Date / Time panel can display an incorrect timezone and default to manual time setting rather than NTP
HU025048.6.0.0The Date / Time panel can display an incorrect timezone and default to manual time setting rather than NTP
HU025058.5.0.5A single node warmstart can occur on v8.5 systems running DRP, due to a low-probability timing window during normal running
HU025058.5.2.0A single node warmstart can occur on v8.5 systems running DRP, due to a low-probability timing window during normal running
HU025058.6.0.0A single node warmstart can occur on v8.5 systems running DRP, due to a low-probability timing window during normal running
HU025068.5.0.4On a system where NPIV is disabled or in transitional mode, certain hosts may fail to log in after a node warmstart or reboot (for example during an upgrade), leading to loss of access.
HU025068.5.2.0On a system where NPIV is disabled or in transitional mode, certain hosts may fail to log in after a node warmstart or reboot (for example during an upgrade), leading to loss of access.
HU025068.6.0.0On a system where NPIV is disabled or in transitional mode, certain hosts may fail to log in after a node warmstart or reboot (for example during an upgrade), leading to loss of access.
HU025078.5.0.6A timing window exists in the code that handles host aborts for an ATS (Atomic Test and Set) command, if the host is NVMe-attached. This can cause repeated node warmstarts.
HU025078.5.2.0A timing window exists in the code that handles host aborts for an ATS (Atomic Test and Set) command, if the host is NVMe-attached. This can cause repeated node warmstarts.
HU025078.6.0.0A timing window exists in the code that handles host aborts for an ATS (Atomic Test and Set) command, if the host is NVMe-attached. This can cause repeated node warmstarts.
HU025088.5.0.6The mkippartnership CLI command does not allow a portset with a space in the name as a parameter
HU025088.5.2.0The mkippartnership CLI command does not allow a portset with a space in the name as a parameter
HU025088.6.0.0The mkippartnership CLI command does not allow a portset with a space in the name as a parameter
HU025098.5.0.5Upgrade to v8.5 can cause a single node warmstart, if nodes previously underwent a memory upgrade while DRP was in use
HU025098.5.2.0Upgrade to v8.5 can cause a single node warmstart, if nodes previously underwent a memory upgrade while DRP was in use
HU025098.6.0.0Upgrade to v8.5 can cause a single node warmstart, if nodes previously underwent a memory upgrade while DRP was in use
HU025118.4.0.9Code version 8.5.0 includes a change in the driver setting for the 25Gb Ethernet adapter. This change can cause port errors, which in turn can cause iSCSI path loss symptoms
HU025118.5.0.6Code version 8.5.0 includes a change in the driver setting for the 25Gb Ethernet adapter. This change can cause port errors, which in turn can cause iSCSI path loss symptoms
HU025118.5.2.0Code version 8.5.0 includes a change in the driver setting for the 25Gb Ethernet adapter. This change can cause port errors, which in turn can cause iSCSI path loss symptoms
HU025118.6.0.0Code version 8.5.0 includes a change in the driver setting for the 25Gb Ethernet adapter. This change can cause port errors, which in turn can cause iSCSI path loss symptoms
HU025128.4.0.7An FS5000 system with a Fibre Channel direct-attached host can experience multiple node warmstarts
HU025128.5.0.5An FS5000 system with a Fibre Channel direct-attached host can experience multiple node warmstarts
HU025128.5.2.0An FS5000 system with a Fibre Channel direct-attached host can experience multiple node warmstarts
HU025128.6.0.0An FS5000 system with a Fibre Channel direct-attached host can experience multiple node warmstarts
HU025138.5.0.6When one side of a cluster has been upgraded from 8.4.2 to either 8.5.0 or 8.5.2 while the other side is still running 8.4.2, running either the 'mkippartnership' or 'rmippartnership' command from the side running 8.5.0 or 8.5.2 can cause an iplink node warmstart
HU025148.5.0.5Firmware upgrade may fail for certain drive types, with the error message CMMVC6567E The Apply Drive Software task cannot be initiated because no download images were found in the package file
HU025148.5.2.0Firmware upgrade may fail for certain drive types, with the error message CMMVC6567E The Apply Drive Software task cannot be initiated because no download images were found in the package file
HU025148.6.0.0Firmware upgrade may fail for certain drive types, with the error message CMMVC6567E The Apply Drive Software task cannot be initiated because no download images were found in the package file
HU025158.5.0.5Fan speed on FlashSystem 9500 can be higher than expected, if a high drive temperature is detected
HU025158.5.2.0Fan speed on FlashSystem 9500 can be higher than expected, if a high drive temperature is detected
HU025158.6.0.0Fan speed on FlashSystem 9500 can be higher than expected, if a high drive temperature is detected
HU025188.4.0.8Certain hardware platforms running 8.4.0.7 have an issue with the Trusted Platform Module (TPM). This can cause failures communicating with encryption keyservers, and invalid SSL certificates
HU025198.5.0.6Safeguarded copy source vdisks go offline when their mappings and target vdisks are deleted and then recreated in rapid succession
HU025198.5.2.0Safeguarded copy source vdisks go offline when their mappings and target vdisks are deleted and then recreated in rapid succession
HU025198.6.0.0Safeguarded copy source vdisks go offline when their mappings and target vdisks are deleted and then recreated in rapid succession
HU025208.5.0.6Safeguarded copy source vdisks go offline when their mappings and target vdisks are deleted and then recreated in rapid succession
HU025208.5.2.0Safeguarded copy source vdisks go offline when their mappings and target vdisks are deleted and then recreated in rapid succession
HU025208.6.0.0Safeguarded copy source vdisks go offline when their mappings and target vdisks are deleted and then recreated in rapid succession
HU025228.5.0.6When upgrading from 8.4.1 or lower to a level that uses IP portsets (8.4.2 or higher), there is an issue when the port ID on each node has a different remote copy use
HU025238.5.2.0Host WWPN state is falsely shown as degraded for a direct-attached host after upgrading to 8.5.0.2
HU025238.6.0.0Host WWPN state is falsely shown as degraded for a direct-attached host after upgrading to 8.5.0.2
HU025258.5.0.6Code versions 8.4.2.x, 8.5.0.0 - 8.5.0.5 and 8.5.1.0 permitted the use of an iSCSI prefix of 0. However, during an upgrade to 8.5.x, this can prevent all iSCSI hosts from re-establishing iSCSI sessions, thereby causing access loss
HU025258.5.3.0Code versions 8.4.2.x, 8.5.0.0 - 8.5.0.5 and 8.5.1.0 permitted the use of an iSCSI prefix of 0. However, during an upgrade to 8.5.x, this can prevent all iSCSI hosts from re-establishing iSCSI sessions, thereby causing access loss
HU025258.6.0.0Code versions 8.4.2.x, 8.5.0.0 - 8.5.0.5 and 8.5.1.0 permitted the use of an iSCSI prefix of 0. However, during an upgrade to 8.5.x, this can prevent all iSCSI hosts from re-establishing iSCSI sessions, thereby causing access loss
HU025288.5.0.6When upgrading to 8.5.0 or higher, a situation may occur whereby a variable is not locked at the correct point, resulting in a mismatch. The system code detects this and initiates a warmstart to reset any erroneous values
HU025288.5.3.0When upgrading to 8.5.0 or higher, a situation may occur whereby a variable is not locked at the correct point, resulting in a mismatch. The system code detects this and initiates a warmstart to reset any erroneous values
HU025288.6.0.0When upgrading to 8.5.0 or higher, a situation may occur whereby a variable is not locked at the correct point, resulting in a mismatch. The system code detects this and initiates a warmstart to reset any erroneous values
HU025298.6.0.0A single node warmstart may occur due to a rare timing window, when a disconnection occurs between two systems in an IP replication partnership
HU025308.5.0.6Upgrades from 8.4.2 or 8.5 fail to start on some platforms
HU025308.5.2.0Upgrades from 8.4.2 or 8.5 fail to start on some platforms
HU025308.6.0.0Upgrades from 8.4.2 or 8.5 fail to start on some platforms
HU025328.4.0.9Nodes that are running 8.4.0.7 or 8.4.0.8, or upgrading to either of these levels, may suffer asserts if NVMe hosts are configured
HU025348.5.0.6When upgrading from 7.8.1.5 to 8.5.0.4, PowerHA stops working due to SSH configuration changes
HU025348.5.3.0When upgrading from 7.8.1.5 to 8.5.0.4, PowerHA stops working due to SSH configuration changes
HU025348.6.0.0When upgrading from 7.8.1.5 to 8.5.0.4, PowerHA stops working due to SSH configuration changes
HU025388.5.2.0Some systems may suffer a thread locking issue caused by the background copy / cleaning process for FlashCopy maps
HU025388.6.0.0Some systems may suffer a thread locking issue caused by the background copy / cleaning process for FlashCopy maps
HU025398.5.0.10If an IP address is moved to a different port on a node, the old routing table entries are not refreshed. Therefore, the IP address may be inaccessible through the new port
HU025398.5.4.0If an IP address is moved to a different port on a node, the old routing table entries are not refreshed. Therefore, the IP address may be inaccessible through the new port
HU025398.6.0.0If an IP address is moved to a different port on a node, the old routing table entries are not refreshed. Therefore, the IP address may be inaccessible through the new port
HU025408.5.0.6Deleting a HyperSwap volume copy with dependent FlashCopy mappings can trigger repeated node warmstarts
HU025408.5.2.1Deleting a HyperSwap volume copy with dependent FlashCopy mappings can trigger repeated node warmstarts
HU025408.6.0.0Deleting a HyperSwap volume copy with dependent FlashCopy mappings can trigger repeated node warmstarts
HU025418.5.0.6In some circumstances, the deduplication replay process on a data reduction pool can become stuck. During this process, IO to the pool is quiesced and must wait for the replay to complete. Because it does not complete, IO to the entire storage pool hangs, which can eventually lead to a loss of access to data.
HU025418.5.3.0In some circumstances, the deduplication replay process on a data reduction pool can become stuck. During this process, IO to the pool is quiesced and must wait for the replay to complete. Because it does not complete, IO to the entire storage pool hangs, which can eventually lead to a loss of access to data.
HU025418.6.0.0In some circumstances, the deduplication replay process on a data reduction pool can become stuck. During this process, IO to the pool is quiesced and must wait for the replay to complete. Because it does not complete, IO to the entire storage pool hangs, which can eventually lead to a loss of access to data.
HU025428.5.0.6On systems that are running 8.4.2 or 8.5.0, when deleting a Hyperswap volume, or Hyperswap volume copy, that has Safeguarded copy snapshots configured, a T2 recovery can occur causing loss of access to data.
HU025438.5.0.6After upgrading to 8.5.0, the 'lshost -delim' command shows hosts in an offline state, while 'lshost' shows them online
HU025448.5.2.2On systems running 8.5.2.1, if you are not logged in as superuser and you try to create a partnership for policy-based replication, or enable policy-based replication on an existing partnership, then this can trigger a single node warmstart.
HU025448.6.0.0On systems running 8.5.2.1, if you are not logged in as superuser and you try to create a partnership for policy-based replication, or enable policy-based replication on an existing partnership, then this can trigger a single node warmstart.
HU025458.6.0.0When following the 'removing and replacing a faulty node canister' procedure, the 'satask chbootdrive -replacecanister' command fails to clear the reported 545 error; instead the replacement canister reboots into a 525 / 522 service state
HU025468.5.2.2On systems running 8.5.2.1 with Policy-based replication configured, creating more than 1PB of replicated volumes can lead to a loss of hardened data
HU025468.6.0.0On systems running 8.5.2.1 with Policy-based replication configured, creating more than 1PB of replicated volumes can lead to a loss of hardened data
HU025498.5.0.6When upgrading from a lower level, to 8.5 or higher for the first time, an unexpected node warmstart may occur that can lead to a stalled upgrade
HU025498.5.3.0When upgrading from a lower level, to 8.5 or higher for the first time, an unexpected node warmstart may occur that can lead to a stalled upgrade
HU025498.6.0.0When upgrading from a lower level, to 8.5 or higher for the first time, an unexpected node warmstart may occur that can lead to a stalled upgrade
HU025518.5.0.6When creating multiple volumes with a high mirroring sync rate, a node warmstart may be triggered due to internal resource constraints
HU025518.5.3.0When creating multiple volumes with a high mirroring sync rate, a node warmstart may be triggered due to internal resource constraints
HU025518.6.0.0When creating multiple volumes with a high mirroring sync rate, a node warmstart may be triggered due to internal resource constraints
HU025538.5.0.7Remote copy relationships may not correctly display the name of the vdisk on the remote cluster
HU025538.5.3.0Remote copy relationships may not correctly display the name of the vdisk on the remote cluster
HU025538.6.0.0Remote copy relationships may not correctly display the name of the vdisk on the remote cluster
HU025558.4.0.10A node may warmstart if the system is configured for remote authorization, but no remote authorization service, such as LDAP, has been configured
HU025558.5.0.7A node may warmstart if the system is configured for remote authorization, but no remote authorization service, such as LDAP, has been configured
HU025558.5.3.0A node may warmstart if the system is configured for remote authorization, but no remote authorization service, such as LDAP, has been configured
HU025558.6.0.0A node may warmstart if the system is configured for remote authorization, but no remote authorization service, such as LDAP, has been configured
HU025568.6.0.0In rare circumstances, a FlashSystem 9500 (or SV3) node might be unable to boot, requiring a replacement of the boot drive and TPM
HU025578.5.0.7Systems may be unable to upgrade from pre-8.5.0 to 8.5.0 due to a previous node upgrade and certain DRP conditions existing
HU025588.5.0.6A timing window exists if a node encounters repeated timeouts on I/O compression requests. This can cause two threads to conflict with each other, thereby causing a deadlock condition to occur.
HU025588.5.4.0A timing window exists if a node encounters repeated timeouts on I/O compression requests. This can cause two threads to conflict with each other, thereby causing a deadlock condition to occur.
HU025588.6.0.0A timing window exists if a node encounters repeated timeouts on I/O compression requests. This can cause two threads to conflict with each other, thereby causing a deadlock condition to occur.
HU025598.4.0.10A GUI resource issue may cause an out-of-memory condition, leading to the CIMOM and GUI becoming unresponsive, or showing incomplete information
HU025598.5.0.6A GUI resource issue may cause an out-of-memory condition, leading to the CIMOM and GUI becoming unresponsive, or showing incomplete information
HU025598.5.3.0A GUI resource issue may cause an out-of-memory condition, leading to the CIMOM and GUI becoming unresponsive, or showing incomplete information
HU025598.6.0.0A GUI resource issue may cause an out-of-memory condition, leading to the CIMOM and GUI becoming unresponsive, or showing incomplete information
HU025608.5.0.6When creating a SAS host using the GUI, a portset is incorrectly added. The command fails with CMMVC9777E because the portset parameter is not supported for the given type of host.
HU025618.3.1.9If there is a large number of FC mappings sharing the same target, the internal array that is used to track the FC mappings is mishandled, causing it to overrun. This will cause a cluster-wide warmstart to occur
HU025618.4.0.10If there is a large number of FC mappings sharing the same target, the internal array that is used to track the FC mappings is mishandled, causing it to overrun. This will cause a cluster-wide warmstart to occur
HU025618.5.0.6If there is a large number of FC mappings sharing the same target, the internal array that is used to track the FC mappings is mishandled, causing it to overrun. This will cause a cluster-wide warmstart to occur
HU025618.5.3.0If there is a large number of FC mappings sharing the same target, the internal array that is used to track the FC mappings is mishandled, causing it to overrun. This will cause a cluster-wide warmstart to occur
HU025618.6.0.0If there is a large number of FC mappings sharing the same target, the internal array that is used to track the FC mappings is mishandled, causing it to overrun. This will cause a cluster-wide warmstart to occur
HU025628.4.0.10A node can warmstart when a 32 Gb Fibre Channel adapter receives an unexpected asynchronous event via internal mailbox commands. This is a transient failure caused during DMA operations
HU025628.5.0.6A node can warmstart when a 32 Gb Fibre Channel adapter receives an unexpected asynchronous event via internal mailbox commands. This is a transient failure caused during DMA operations
HU025628.5.3.0A node can warmstart when a 32 Gb Fibre Channel adapter receives an unexpected asynchronous event via internal mailbox commands. This is a transient failure caused during DMA operations
HU025628.6.0.0A node can warmstart when a 32 Gb Fibre Channel adapter receives an unexpected asynchronous event via internal mailbox commands. This is a transient failure caused during DMA operations
HU025638.4.0.10Improve DIMM slot identification for memory errors
HU025638.5.0.6Improve DIMM slot identification for memory errors
HU025638.5.3.0Improve DIMM slot identification for memory errors
HU025638.6.0.0Improve DIMM slot identification for memory errors
HU025648.3.1.9The 'charraymember' command fails with a degraded DRAID array, even though the syntax of the command is correct
HU025648.4.0.10The 'charraymember' command fails with a degraded DRAID array, even though the syntax of the command is correct
HU025648.5.0.6The 'charraymember' command fails with a degraded DRAID array, even though the syntax of the command is correct
HU025658.5.0.8Node warmstart when generating data compression savings data for 'lsvdiskanalysis'
HU025658.6.0.0Node warmstart when generating data compression savings data for 'lsvdiskanalysis'
HU025678.5.3.0Due to a low-probability timing window, FlashCopy reads can occur indefinitely to an offline vdisk. This can cause host write delays to FlashCopy target volumes that can exceed 6 minutes
HU025678.6.0.0Due to a low-probability timing window, FlashCopy reads can occur indefinitely to an offline vdisk. This can cause host write delays to FlashCopy target volumes that can exceed 6 minutes
HU025688.6.0.0Unable to create remote copy relationship with 'mkrcrelationship' with Aux volume ID greater than 10,000 when one of the systems in the set of partnered systems is limited to 10,000 volumes, either due to the limits of the platform (hardware) or the installed software version
HU025698.5.3.0Due to a low-probability timing window, when processing I/O from both SCSI and NVMe hosts, a node may warmstart to clear the condition
HU025698.6.0.0Due to a low-probability timing window, when processing I/O from both SCSI and NVMe hosts, a node may warmstart to clear the condition
HU025718.4.0.10In a Hyperswap cluster, a Tier 2 recovery may occur after manually shutting down both nodes that are in one IO group
HU025728.5.0.7When controllers running specified code levels with SAS storage are power cycled or rebooted, there is a chance that 56 bytes of data will be incorrectly restored into the cache, leading to undetected data corruption. The system will attempt to flush the cache before an upgrade, so this defect is less likely during an upgrade.
HU025728.5.4.0When controllers running specified code levels with SAS storage are power cycled or rebooted, there is a chance that 56 bytes of data will be incorrectly restored into the cache, leading to undetected data corruption. The system will attempt to flush the cache before an upgrade, so this defect is less likely during an upgrade.
HU025728.6.0.0When controllers running specified code levels with SAS storage are power cycled or rebooted, there is a chance that 56 bytes of data will be incorrectly restored into the cache, leading to undetected data corruption. The system will attempt to flush the cache before an upgrade, so this defect is less likely during an upgrade.
HU025738.5.0.10HBA firmware can cause a port to appear to be flapping. The port will not work again until the HBA is restarted by rebooting the node.
HU025738.6.0.0HBA firmware can cause a port to appear to be flapping. The port will not work again until the HBA is restarted by rebooting the node.
HU025798.5.0.7The GUI 'Add External iSCSI Storage' wizard does not work with portsets. The ports are shown but are not selectable
HU025798.5.3.0The GUI 'Add External iSCSI Storage' wizard does not work with portsets. The ports are shown but are not selectable
HU025798.6.0.0The GUI 'Add External iSCSI Storage' wizard does not work with portsets. The ports are shown but are not selectable
HU025808.5.3.0If FlashCopy mappings are force stopped, and the targets are in a remote copy relationship, then a node may warmstart
HU025808.6.0.0If FlashCopy mappings are force stopped, and the targets are in a remote copy relationship, then a node may warmstart
HU025818.5.3.0Due to a low probability timing window, a node warmstart might occur when I/O is sent to a partner node and before the partner node recognizes that the disk is online
HU025818.6.0.0Due to a low probability timing window, a node warmstart might occur when I/O is sent to a partner node and before the partner node recognizes that the disk is online
HU025838.5.3.0FCM drive ports may be excluded after a failed drive firmware download. Depending on the number of drives impacted, this may take the RAID array offline
HU025838.6.0.0FCM drive ports may be excluded after a failed drive firmware download. Depending on the number of drives impacted, this may take the RAID array offline
HU025848.6.0.0If a HyperSwap volume is created with cache disabled in a Data Reduction Pool (DRP), multiple node warmstarts may occur.
HU025858.5.0.12An unstable connection between the Storage Virtualize system and an external virtualized storage system can sometimes result in a cluster recovery occurring
HU025858.6.0.1An unstable connection between the Storage Virtualize system and an external virtualized storage system can sometimes result in a cluster recovery occurring
HU025858.6.1.0An unstable connection between the Storage Virtualize system and an external virtualized storage system can sometimes result in a cluster recovery occurring
HU025858.7.0.0An unstable connection between the Storage Virtualize system and an external virtualized storage system can sometimes result in a cluster recovery occurring
HU025868.5.0.8When deleting a safeguarded copy volume that is related to a restore operation while another related volume is offline, the system may warmstart repeatedly
HU025868.5.4.0When deleting a safeguarded copy volume that is related to a restore operation while another related volume is offline, the system may warmstart repeatedly
HU025868.6.0.0When deleting a safeguarded copy volume that is related to a restore operation while another related volume is offline, the system may warmstart repeatedly
HU025898.5.4.0Reducing the expiration date of snapshots can cause volume creation and deletion to stall
HU025898.6.0.0Reducing the expiration date of snapshots can cause volume creation and deletion to stall
HU025918.5.0.12Multiple node asserts can occur when running commands with the 'preferred node' filter during an upgrade to 8.5.0.0 and above.
HU025928.6.0.0In some scenarios DRP can request RAID to attempt a read by reconstructing data from other strips. In certain cases this can result in a node warmstart
HU025938.3.1.9NVMe drive is incorrectly reporting end of life due to flash degradation
HU025938.4.0.10NVMe drive is incorrectly reporting end of life due to flash degradation
HU025938.5.0.0NVMe drive is incorrectly reporting end of life due to flash degradation
HU025948.5.0.8Initiating drive firmware update via management user interface for one drive class can prompt all drives to be updated
HU025948.5.4.0Initiating drive firmware update via management user interface for one drive class can prompt all drives to be updated
HU025948.6.0.0Initiating drive firmware update via management user interface for one drive class can prompt all drives to be updated
HU025978.4.0.10A single node may warmstart to recover from a situation where different fibres update the completed count for the allocation extent in question
HU026008.6.0.0Single node warmstart caused by a rare race condition triggered by multiple aborts and I/O issues
IC576427.8.1.5A complex combination of failure conditions in the fabric connecting nodes can result in lease expiries, possibly cluster-wide
IC576428.1.0.0A complex combination of failure conditions in the fabric connecting nodes can result in lease expiries, possibly cluster-wide
IC802307.4.0.0Both nodes warmstart due to Ethernet storm
IC859317.5.0.8When the user is copying iostats files between nodes, the automatic clean-up process may occasionally result in a failure message (ID 980440) in the event log
IC859317.6.0.0When the user is copying iostats files between nodes, the automatic clean-up process may occasionally result in a failure message (ID 980440) in the event log
IC895627.3.0.1Node warmstart when handling a large number of XCOPY commands
IC896087.3.0.1Node warmstart when handling a large number of XCOPY commands
IC903747.3.0.5Node warmstart due to an I/O deadlock when using FlashCopy functions
IC907997.3.0.8Node warmstart when drive medium error detected at the same time as drive is changing state to offline
IC923567.5.0.0Improve DMP for handling 2500 event for V7000 using Unified storage
IC926657.3.0.1Multiple node warmstarts caused by iSCSI initiator using the same IQN as SVC or Storwize
IC929937.3.0.1Fix Procedure for 1686 not replacing drive correctly
IC947817.3.0.1GUI Health status pod still showing red after offline node condition has been recovered
II147678.3.1.3An issue with how cache handles ownership of volumes across multiple sites can lead to cross-site destage, adversely impacting write latency. For more details refer to this Flash
II147678.4.0.0An issue with how cache handles ownership of volumes across multiple sites can lead to cross-site destage, adversely impacting write latency. For more details refer to this Flash
II147717.3.0.9Node warmstart due to compression/index re-writing timing condition
II147717.4.0.3Node warmstart due to compression/index re-writing timing condition
II147787.4.0.6Reduced performance for volumes that have the configuration node as their preferred node, due to the GUI processing volume attribute updates when a large number of changes is required
II147787.5.0.5Reduced performance for volumes that have the configuration node as their preferred node, due to the GUI processing volume attribute updates when a large number of changes is required
II147787.6.0.0Reduced performance for volumes that have the configuration node as their preferred node, due to the GUI processing volume attribute updates when a large number of changes is required
IT012507.3.0.1Loss of access to data when node or node canister goes offline during drive update
IT033547.3.0.8Poor read performance with iSCSI single threaded read workload
IT041057.3.0.8EasyTier does not promote extents between different tiers when using release v7.3.0
IT049117.3.0.9Node warmstarts due to RAID synchronisation inconsistency
IT052197.3.0.8Compressed volumes offline on systems running v7.3 release due to decompression issue
IT064077.3.0.9Node warmstart due to compressed volume metadata
IT102517.6.0.0Freeze time update delayed after reduction of cycle period
IT104707.5.0.13Noisy/high speed fan
IT104707.6.0.0Noisy/high speed fan
IT120887.7.0.0If a node encounters a SAS-related warmstart, the node can remain in service with a 504/505 error, indicating that it was unable to pick up the necessary VPD to become active again
IT149177.4.0.10Node warmstarts due to a timing window in the cache component. For more details refer to this Flash
IT149177.5.0.8Node warmstarts due to a timing window in the cache component. For more details refer to this Flash
IT149177.6.1.7Node warmstarts due to a timing window in the cache component. For more details refer to this Flash
IT149177.7.1.5Node warmstarts due to a timing window in the cache component. For more details refer to this Flash
IT149177.8.0.0Node warmstarts due to a timing window in the cache component. For more details refer to this Flash
IT149227.6.1.3A memory issue, related to the email feature, may cause nodes to warmstart or go offline
IT153667.6.1.4CLI command lsportsas may show unexpected port numbering
IT153667.7.0.0CLI command lsportsas may show unexpected port numbering
IT160127.6.1.6Internal node boot drive RAID scrub process at 1am every Sunday can impact system performance
IT160127.7.0.5Internal node boot drive RAID scrub process at 1am every Sunday can impact system performance
IT160127.8.0.0Internal node boot drive RAID scrub process at 1am every Sunday can impact system performance
IT161487.7.1.1When accelerate mode is enabled, EasyTier only demotes 1 extent every 5 minutes, due to the way promote/swap plans are prioritized over demotes
IT163377.5.0.10Hardware offloading in 16G FC adapters has introduced a deadlock condition that causes many driver commands to time out leading to a node warmstart. For more details refer to this Flash
IT163377.6.1.5Hardware offloading in 16G FC adapters has introduced a deadlock condition that causes many driver commands to time out leading to a node warmstart. For more details refer to this Flash
IT163377.7.0.4Hardware offloading in 16G FC adapters has introduced a deadlock condition that causes many driver commands to time out leading to a node warmstart. For more details refer to this Flash
IT163377.7.1.1Hardware offloading in 16G FC adapters has introduced a deadlock condition that causes many driver commands to time out leading to a node warmstart. For more details refer to this Flash
IT171027.7.0.4Where the maximum number of I/O requests for an FC port has been exceeded, receiving a SCSI command with an unsupported opcode from a host may cause the node to warmstart
IT171027.7.1.3Where the maximum number of I/O requests for an FC port has been exceeded, receiving a SCSI command with an unsupported opcode from a host may cause the node to warmstart
IT173027.7.0.5Unexpected 45034 1042 entries in the Event Log
IT173027.7.1.5Unexpected 45034 1042 entries in the Event Log
IT173027.8.0.0Unexpected 45034 1042 entries in the Event Log
IT175647.7.1.7All nodes in an I/O group may warmstart when a DRAID array experiences drive failures
IT175647.8.0.0All nodes in an I/O group may warmstart when a DRAID array experiences drive failures
IT179197.8.1.6A rare timing window issue in the handling of Remote Copy state can result in multi-node warmstarts
IT179198.1.0.0A rare timing window issue in the handling of Remote Copy state can result in multi-node warmstarts
IT180867.7.1.5When a volume is moved between I/O groups a node may warmstart
IT180867.8.0.0When a volume is moved between I/O groups a node may warmstart
IT187527.7.1.6When the config node processes an lsdependentvdisks command, issued via the GUI, that has a large number of objects in its parameters, it may warmstart
IT187527.8.0.2When the config node processes an lsdependentvdisks command, issued via the GUI, that has a large number of objects in its parameters, it may warmstart
IT190197.8.1.0V5000 control enclosure midplane FRU replacement may fail leading to both nodes reporting a 506 error
IT191927.8.1.5An issue in the handling of GUI certificates may cause warmstarts leading to a Tier 2 recovery
IT191928.1.1.1An issue in the handling of GUI certificates may cause warmstarts leading to a Tier 2 recovery
IT192327.8.1.0Storwize systems can report unexpected drive location errors as a result of a RAID issue
IT193878.1.0.0When two Storwize I/O groups are connected to each other (via direct connect) 1550 errors will be logged and reappear when marked as fixed
IT195617.8.1.8An issue with register clearance in the FC driver code may cause a node warmstart
IT195618.2.0.0An issue with register clearance in the FC driver code may cause a node warmstart
IT195618.2.1.0An issue with register clearance in the FC driver code may cause a node warmstart
IT197267.6.1.8Warmstarts may occur when the attached SAN fabric is congested and HBA transmit paths become stalled preventing the HBA firmware from generating the completion for a FC command
IT197267.7.1.7Warmstarts may occur when the attached SAN fabric is congested and HBA transmit paths become stalled preventing the HBA firmware from generating the completion for a FC command
IT197267.8.1.1Warmstarts may occur when the attached SAN fabric is congested and HBA transmit paths become stalled preventing the HBA firmware from generating the completion for a FC command
IT199737.8.1.1Call home emails may not be sent due to a failure to retry
IT205868.1.1.0Due to an issue in Lancer G5 firmware, after a node reboot the LED of the 10GbE port may remain amber even though the port is working normally
IT206277.7.1.7When Read-Intensive drives are used as quorum disks a drive outage can occur. Under some circumstances this can lead to a loss of access
IT206277.8.1.1When Read-Intensive drives are used as quorum disks a drive outage can occur. Under some circumstances this can lead to a loss of access
IT213837.7.1.7Heavy I/O may provoke inconsistencies in resource allocation leading to node warmstarts
IT218968.3.1.0Where encryption keys have been lost it will not be possible to remove an empty MDisk group
IT223767.7.1.7Upgrade of V5000 Gen 2 systems, with 16GB node canisters, can become stalled with multiple warmstarts on first node to be upgraded
IT225917.8.1.8An issue in the HBA adapter firmware may result in node warmstarts
IT225918.1.3.4An issue in the HBA adapter firmware may result in node warmstarts
IT228028.1.0.1A memory management issue in cache may cause multiple node warmstarts possibly leading to a loss of access and necessitating a Tier 3 recovery
IT230347.8.1.3With HyperSwap volumes and mirrored copies at a single site, using rmvolumecopy to remove a copy from an auxiliary volume may result in a cluster-wide warmstart necessitating a Tier 2 recovery
IT230348.1.0.1With HyperSwap volumes and mirrored copies at a single site, using rmvolumecopy to remove a copy from an auxiliary volume may result in a cluster-wide warmstart necessitating a Tier 2 recovery
IT231407.8.1.5When viewing the licensed functions GUI page, the individual calculations of SCUs for each tier may be wrong. However, the total is correct
IT237477.8.1.5For large drive sizes the DRAID rebuild process can consume significant CPU resource adversely impacting system performance
IT237478.1.1.1For large drive sizes the DRAID rebuild process can consume significant CPU resource adversely impacting system performance
IT249007.8.1.8Whilst replacing a control enclosure midplane, an issue at boot can prevent VPD from being assigned, delaying a return to service
IT249008.1.3.0Whilst replacing a control enclosure midplane, an issue at boot can prevent VPD from being assigned, delaying a return to service
IT253677.8.1.12A T2 recovery may occur when an attempt is made to upgrade, or downgrade, the firmware for an unsupported drive type
IT253678.2.1.11A T2 recovery may occur when an attempt is made to upgrade, or downgrade, the firmware for an unsupported drive type
IT253678.3.0.0A T2 recovery may occur when an attempt is made to upgrade, or downgrade, the firmware for an unsupported drive type
IT254578.1.3.4Attempting to remove a copy of a volume, which has at least one image mode copy and at least one thin/compressed copy, in a Data Reduction Pool will always fail with a CMMVC8971E error
IT254578.2.0.3Attempting to remove a copy of a volume, which has at least one image mode copy and at least one thin/compressed copy, in a Data Reduction Pool will always fail with a CMMVC8971E error
IT254578.2.1.0Attempting to remove a copy of a volume, which has at least one image mode copy and at least one thin/compressed copy, in a Data Reduction Pool will always fail with a CMMVC8971E error
IT258507.8.1.8I/O performance may be adversely affected towards the end of DRAID rebuilds. For some systems there may be multiple warmstarts leading to a loss of access
IT258508.1.3.6I/O performance may be adversely affected towards the end of DRAID rebuilds. For some systems there may be multiple warmstarts leading to a loss of access
IT258508.2.0.0I/O performance may be adversely affected towards the end of DRAID rebuilds. For some systems there may be multiple warmstarts leading to a loss of access
IT258508.2.1.0I/O performance may be adversely affected towards the end of DRAID rebuilds. For some systems there may be multiple warmstarts leading to a loss of access
IT259708.2.1.0After a FlashCopy consistency group is started a node may warmstart
IT260497.8.1.9An issue with CPU scheduling may cause the GUI to respond slowly
IT260498.1.3.4An issue with CPU scheduling may cause the GUI to respond slowly
IT260498.2.0.3An issue with CPU scheduling may cause the GUI to respond slowly
IT260498.2.1.0An issue with CPU scheduling may cause the GUI to respond slowly
IT262577.8.1.11Starting a relationship, when the remote volume is offline, may result in a T2 recovery
IT262578.2.1.8Starting a relationship, when the remote volume is offline, may result in a T2 recovery
IT262578.3.0.0Starting a relationship, when the remote volume is offline, may result in a T2 recovery
IT268367.8.1.8Loading drive firmware may cause a node warmstart
IT274607.8.1.9Lease expiry can occur between local nodes when remote connection is lost, due to the mishandling of messaging credits
IT274608.1.3.6Lease expiry can occur between local nodes when remote connection is lost, due to the mishandling of messaging credits
IT274608.2.1.0Lease expiry can occur between local nodes when remote connection is lost, due to the mishandling of messaging credits
IT284338.1.3.6Timing window issue in the Data Reduction Pool rehoming component can cause a single node warmstart
IT284338.2.1.4Timing window issue in the Data Reduction Pool rehoming component can cause a single node warmstart
IT287288.2.1.4Email alerts will not work where the mail server does not allow unqualified client host names
IT290407.8.1.9Occasionally a DRAID rebuild, with drives of 8TB or more, can encounter an issue which causes node warmstarts and potential loss of access
IT290408.1.3.6Occasionally a DRAID rebuild, with drives of 8TB or more, can encounter an issue which causes node warmstarts and potential loss of access
IT290408.2.1.0Occasionally a DRAID rebuild, with drives of 8TB or more, can encounter an issue which causes node warmstarts and potential loss of access
IT298538.1.3.4After upgrading to v8.1.1, or later, V5000 Gen 2 systems, with Gen 1 expansion enclosures, may experience multiple node warmstarts leading to a loss of access
IT298538.2.1.0After upgrading to v8.1.1, or later, V5000 Gen 2 systems, with Gen 1 expansion enclosures, may experience multiple node warmstarts leading to a loss of access
IT298678.3.1.0If a change volume for a remote copy relationship in a consistency group runs out of space whilst properties of the consistency group are being changed, a Tier 2 recovery may occur
IT299758.3.0.1During Ethernet port configuration, netmask validation will only accept a fourth octet of zero. Non-zero values will cause the interface to remain inactive
IT303068.3.1.0A timing issue in callhome function initialisation may cause a node warmstart
IT304488.2.1.8If an IP Quorum app is killed, during the commit phase of a code upgrade, then that offline IP Quorum device cannot be removed, post upgrade
IT304488.3.0.1If an IP Quorum app is killed, during the commit phase of a code upgrade, then that offline IP Quorum device cannot be removed, post upgrade
IT304498.2.1.8Attempting to activate USB encryption on a new V5030E will fail with a CMMVCU6054E error
IT305958.2.1.8A resource shortage in the RAID component can cause MDisks to be taken offline
IT305958.3.0.1A resource shortage in the RAID component can cause MDisks to be taken offline
IT311138.2.1.11After a manual power off and on of a system, both nodes in an I/O group may repeatedly assert into a service state
IT311138.3.1.0After a manual power off and on of a system, both nodes in an I/O group may repeatedly assert into a service state
IT313008.3.1.0When a snap collection reads the status of PCI devices a CPU can be stalled leading to a cluster-wide lease expiry
IT323388.3.1.3Testing LDAP Authentication fails if username & password are supplied
IT323388.4.0.0Testing LDAP Authentication fails if username & password are supplied
IT324408.3.1.2Under heavy I/O workload the processing of deduplicated I/O may cause a single node warmstart
IT325198.3.1.2Changing an LDAP user's password in the directory, whilst that user is logged in to the GUI of a Spectrum Virtualize system, may result in an account lockout in the directory, depending on the account lockout policy configured for the directory. Existing CLI logins via SSH are not affected
IT326318.3.1.2Whilst upgrading the firmware for multiple drives an issue in the firmware checking can initiate a Tier 2 recovery
IT337348.4.0.0Lower cache partitions may fill up even though higher destage rates are available
IT338688.4.0.0Non-FCM NVMe drives may exhibit high write response times with the Spectrum Protect Blueprint script
IT339128.4.0.7A multi-drive code download may fail resulting in a Tier 2 recovery
IT339968.3.1.7An issue in RAID where unreserved resources fail to be freed up can result in a node warmstart
IT339968.4.0.7An issue in RAID where unreserved resources fail to be freed up can result in a node warmstart
IT339968.4.2.0An issue in RAID where unreserved resources fail to be freed up can result in a node warmstart
IT339968.5.0.0An issue in RAID where unreserved resources fail to be freed up can result in a node warmstart
IT349498.4.0.2lsnodevpd may show DIMM information in the wrong positions
IT349498.5.0.0lsnodevpd may show DIMM information in the wrong positions
IT349588.4.2.0During a system update, a node returning to the cluster after upgrade may warmstart
IT349588.5.0.0During a system update, a node returning to the cluster after upgrade may warmstart
IT355558.3.1.4Storwize V5030 systems running v8.3.1.3 may experience an offline pool under heavy I/O workloads
IT366198.4.0.0After a node warmstart, system CPU utilisation may show an increase
IT367928.3.1.6EasyTier can select a default performance profile for a drive which could cause too much hot data to be moved to lower tiers
IT376548.4.0.4When creating a new encrypted array the CMMVC8534E error (Node has insufficient entropy to generate key material) can appear preventing array creation
IT376548.4.2.0When creating a new encrypted array the CMMVC8534E error (Node has insufficient entropy to generate key material) can appear preventing array creation
IT376548.5.0.0When creating a new encrypted array the CMMVC8534E error (Node has insufficient entropy to generate key material) can appear preventing array creation
IT380158.2.1.15During RAID rebuild or copyback on systems with 16GB or less of memory, cache handling can lead to a deadlock which results in timeouts
IT380158.3.1.6During RAID rebuild or copyback on systems with 16GB or less of memory, cache handling can lead to a deadlock which results in timeouts
IT380158.4.0.6During RAID rebuild or copyback on systems with 16GB or less of memory, cache handling can lead to a deadlock which results in timeouts
IT380158.5.0.0During RAID rebuild or copyback on systems with 16GB or less of memory, cache handling can lead to a deadlock which results in timeouts
IT388588.4.2.0Unable to resume Enable USB Encryption wizard via the GUI. The GUI will display error CMMVC9231E
IT388588.5.0.0Unable to resume Enable USB Encryption wizard via the GUI. The GUI will display error CMMVC9231E
IT400598.4.0.7Port to node metrics can appear inflated due to an issue in performance statistics aggregation
IT400598.5.0.2Port to node metrics can appear inflated due to an issue in performance statistics aggregation
IT403708.4.2.0An issue in the PCI fault recovery mechanism may cause a node to constantly reboot
IT410888.3.1.9On systems with low memory, a large number of RAID arrays resyncing at the same time can cause the system to run out of RAID rebuild control blocks
IT410888.4.0.10On systems with low memory, a large number of RAID arrays resyncing at the same time can cause the system to run out of RAID rebuild control blocks
IT410888.5.0.6On systems with low memory, a large number of RAID arrays resyncing at the same time can cause the system to run out of RAID rebuild control blocks
IT410888.5.2.0On systems with low memory, a large number of RAID arrays resyncing at the same time can cause the system to run out of RAID rebuild control blocks
IT410888.6.0.0On systems with low memory, a large number of RAID arrays resyncing at the same time can cause the system to run out of RAID rebuild control blocks
IT411738.4.0.7If the temperature sensor in an FS5200 system fails in a particular way, it is possible for drives to be powered off, causing a loss of access to data. This type of temperature sensor failure is very rare.
IT411738.5.0.5If the temperature sensor in an FS5200 system fails in a particular way, it is possible for drives to be powered off, causing a loss of access to data. This type of temperature sensor failure is very rare.
IT411738.5.2.0If the temperature sensor in an FS5200 system fails in a particular way, it is possible for drives to be powered off, causing a loss of access to data. This type of temperature sensor failure is very rare.
IT411738.6.0.0If the temperature sensor in an FS5200 system fails in a particular way, it is possible for drives to be powered off, causing a loss of access to data. This type of temperature sensor failure is very rare.
IT411918.5.0.5If a REST API client authenticates as an LDAP user, a node warmstart can occur
IT411918.5.2.0If a REST API client authenticates as an LDAP user, a node warmstart can occur
IT411918.6.0.0If a REST API client authenticates as an LDAP user, a node warmstart can occur
IT414478.4.0.10When removing the DNS server configuration, a node may discover unexpected metadata and warmstart
IT414478.5.0.6When removing the DNS server configuration, a node may discover unexpected metadata and warmstart
IT414478.6.0.3When removing the DNS server configuration, a node may discover unexpected metadata and warmstart
IT418358.3.1.9A T2 recovery may occur when a failed drive in the system is replaced with an unsupported drive type
IT418358.4.0.10A T2 recovery may occur when a failed drive in the system is replaced with an unsupported drive type
IT418358.5.0.6A T2 recovery may occur when a failed drive in the system is replaced with an unsupported drive type
IT418358.5.2.0A T2 recovery may occur when a failed drive in the system is replaced with an unsupported drive type
IT418358.6.0.0A T2 recovery may occur when a failed drive in the system is replaced with an unsupported drive type
IT424038.4.0.10A limit is in place to prevent the use of 8TB drives or larger in RAID5 arrays, due to the risk of data loss during an extended rebuild. This limit was intended to be 8TiB; however, it was implemented as 8TB. A 7.3TiB drive has a capacity of 8.02TB and as a result was incorrectly prevented from being used in RAID5
IT424038.5.0.6A limit is in place to prevent the use of 8TB drives or larger in RAID5 arrays, due to the risk of data loss during an extended rebuild. This limit was intended to be 8TiB; however, it was implemented as 8TB. A 7.3TiB drive has a capacity of 8.02TB and as a result was incorrectly prevented from being used in RAID5
SVAPAR-1001278.5.0.10The Service Assistant GUI Node rescue option incorrectly performs the node rescue on the local node instead of the node selected in the GUI.
SVAPAR-1001278.6.0.1The Service Assistant GUI Node rescue option incorrectly performs the node rescue on the local node instead of the node selected in the GUI.
SVAPAR-1001278.6.1.0The Service Assistant GUI Node rescue option incorrectly performs the node rescue on the local node instead of the node selected in the GUI.
SVAPAR-1001278.7.0.0The Service Assistant GUI Node rescue option incorrectly performs the node rescue on the local node instead of the node selected in the GUI.
SVAPAR-1001628.5.0.10Some host operating systems, such as Windows, have recently started to use 'mode select page 7'. IBM Storage does not support this mode. If the storage receives this mode level, a warmstart can occur
SVAPAR-1001628.6.0.1Some host operating systems, such as Windows, have recently started to use 'mode select page 7'. IBM Storage does not support this mode. If the storage receives this mode level, a warmstart can occur
SVAPAR-1001628.6.1.0Some host operating systems, such as Windows, have recently started to use 'mode select page 7'. IBM Storage does not support this mode. If the storage receives this mode level, a warmstart can occur
SVAPAR-1001628.7.0.0Some host operating systems, such as Windows, have recently started to use 'mode select page 7'. IBM Storage does not support this mode. If the storage receives this mode level, a warmstart can occur
SVAPAR-1001728.6.0.1During the enclosure component upgrade, which occurs after the cluster upgrade has committed, a system can experience spurious 'The PSU has indicated DC failure' events (error code 1126). The event will automatically fix itself after several seconds and no user action is required
SVAPAR-1005648.6.0.1On code level 8.6.0.0, multiple node warmstarts will occur if a user attempts to remove the site ID from a host that has Hyperswap volumes mapped to it.
SVAPAR-1005648.6.1.0On code level 8.6.0.0, multiple node warmstarts will occur if a user attempts to remove the site ID from a host that has Hyperswap volumes mapped to it.
SVAPAR-1005648.7.0.0On code level 8.6.0.0, multiple node warmstarts will occur if a user attempts to remove the site ID from a host that has Hyperswap volumes mapped to it.
SVAPAR-1008718.7.0.0Removing an NVMe host followed by running the 'lsnvmefabric' command causes a recurring single node warmstart
SVAPAR-1009248.6.2.0After the battery firmware is updated, either using the utility or by upgrading to a version with newer firmware, the battery LED may be falsely illuminated.
SVAPAR-1009248.7.0.0After the battery firmware is updated, either using the utility or by upgrading to a version with newer firmware, the battery LED may be falsely illuminated.
SVAPAR-1009588.5.0.10A single FCM may incorrectly report multiple medium errors for the same LBA
SVAPAR-1009588.6.0.1A single FCM may incorrectly report multiple medium errors for the same LBA
SVAPAR-1009588.6.1.0A single FCM may incorrectly report multiple medium errors for the same LBA
SVAPAR-1009588.7.0.0A single FCM may incorrectly report multiple medium errors for the same LBA
SVAPAR-1009778.5.0.10When a zone containing NVMe devices is enabled, a node warmstart might occur.
SVAPAR-1009778.6.0.1When a zone containing NVMe devices is enabled, a node warmstart might occur.
SVAPAR-1009778.6.1.0When a zone containing NVMe devices is enabled, a node warmstart might occur.
SVAPAR-1009778.7.0.0When a zone containing NVMe devices is enabled, a node warmstart might occur.
SVAPAR-1022718.6.0.2Enable IBM Storage Defender integration for Data Reduction Pools
SVAPAR-1022718.6.1.0Enable IBM Storage Defender integration for Data Reduction Pools
SVAPAR-1022718.7.0.0Enable IBM Storage Defender integration for Data Reduction Pools
SVAPAR-1023828.6.0.3Fibre Channel Read Diagnostic Parameters (RDP) indicate that a short wave SFP is installed when in fact a long wave SFP is installed.
SVAPAR-1023828.6.2.0Fibre Channel Read Diagnostic Parameters (RDP) indicate that a short wave SFP is installed when in fact a long wave SFP is installed.
SVAPAR-1023828.7.0.0Fibre Channel Read Diagnostic Parameters (RDP) indicate that a short wave SFP is installed when in fact a long wave SFP is installed.
SVAPAR-1025738.6.0.1On systems using Policy-Based Replication and Volume Group Snapshots, some CPU cores may have high utilization due to an issue with the snapshot cleaning algorithm. This can impact performance for replication and host I/O
SVAPAR-1025738.6.1.0On systems using Policy-Based Replication and Volume Group Snapshots, some CPU cores may have high utilization due to an issue with the snapshot cleaning algorithm. This can impact performance for replication and host I/O
SVAPAR-1025738.7.0.0On systems using Policy-Based Replication and Volume Group Snapshots, some CPU cores may have high utilization due to an issue with the snapshot cleaning algorithm. This can impact performance for replication and host I/O
SVAPAR-1036968.6.0.1When taking a snapshot of a volume that is being replicated to another system using Policy Based Replication, the snapshot may contain data from an earlier point in time than intended
SVAPAR-1041598.6.0.2Nodes configured with 32GB or less of RAM, and specific 25Gb Ethernet adapters, under some circumstances may run out of memory. This can cause a single node warmstart.
SVAPAR-1041598.6.2.0Nodes configured with 32GB or less of RAM, and specific 25Gb Ethernet adapters, under some circumstances may run out of memory. This can cause a single node warmstart.
SVAPAR-1041598.7.0.0Nodes configured with 32GB or less of RAM, and specific 25Gb Ethernet adapters, under some circumstances may run out of memory. This can cause a single node warmstart.
SVAPAR-1042508.5.0.12There is an issue whereby NVMe CaW (Compare and Write) commands can incorrectly go into an invalid state, thereby causing the node to assert to clear the bad condition
SVAPAR-1042508.6.0.2There is an issue whereby NVMe CaW (Compare and Write) commands can incorrectly go into an invalid state, thereby causing the node to assert to clear the bad condition
SVAPAR-1042508.6.2.0There is an issue whereby NVMe CaW (Compare and Write) commands can incorrectly go into an invalid state, thereby causing the node to assert to clear the bad condition
SVAPAR-1042508.7.0.0There is an issue whereby NVMe CaW (Compare and Write) commands can incorrectly go into an invalid state, thereby causing the node to assert to clear the bad condition
SVAPAR-1045338.5.0.10Systems that encounter multiple node asserts, followed by a system T3 recovery, may experience errors repairing Data Reduction Pools
SVAPAR-1045338.6.0.2Systems that encounter multiple node asserts, followed by a system T3 recovery, may experience errors repairing Data Reduction Pools
SVAPAR-1045338.6.2.0Systems that encounter multiple node asserts, followed by a system T3 recovery, may experience errors repairing Data Reduction Pools
SVAPAR-1045338.7.0.0Systems that encounter multiple node asserts, followed by a system T3 recovery, may experience errors repairing Data Reduction Pools
SVAPAR-1054308.6.0.2When hardware compression is suspended mid IO to a DRP compressed volume, it may cause the IO to hang until an internal timeout is hit and a node warmstarts.
SVAPAR-1054308.6.2.0When hardware compression is suspended mid IO to a DRP compressed volume, it may cause the IO to hang until an internal timeout is hit and a node warmstarts.
SVAPAR-1054308.7.0.0When hardware compression is suspended mid IO to a DRP compressed volume, it may cause the IO to hang until an internal timeout is hit and a node warmstarts.
SVAPAR-1057278.5.0.10An upgrade within the 8.5.0 release stream from 8.5.0.5 or below, to 8.5.0.6 or above, can cause an assert of down-level nodes during the upgrade, if volume mirroring is heavily utilised
SVAPAR-1057278.6.0.2An upgrade within the 8.5.0 release stream from 8.5.0.5 or below, to 8.5.0.6 or above, can cause an assert of down-level nodes during the upgrade, if volume mirroring is heavily utilised
SVAPAR-1057278.6.2.0An upgrade within the 8.5.0 release stream from 8.5.0.5 or below, to 8.5.0.6 or above, can cause an assert of down-level nodes during the upgrade, if volume mirroring is heavily utilised
SVAPAR-1057278.7.0.0An upgrade within the 8.5.0 release stream from 8.5.0.5 or below, to 8.5.0.6 or above, can cause an assert of down-level nodes during the upgrade, if volume mirroring is heavily utilised
SVAPAR-1058618.6.0.2A cluster recovery may occur when an attempt is made to create a mirrored snapshot with insufficient volume mirroring bitmap space in the IO group
SVAPAR-1058618.6.2.0A cluster recovery may occur when an attempt is made to create a mirrored snapshot with insufficient volume mirroring bitmap space in the IO group
SVAPAR-1058618.7.0.0A cluster recovery may occur when an attempt is made to create a mirrored snapshot with insufficient volume mirroring bitmap space in the IO group
SVAPAR-1059558.6.0.3Single node warmstart during link recovery when using a secured IP partnership.
SVAPAR-1059558.6.2.0Single node warmstart during link recovery when using a secured IP partnership.
SVAPAR-1059558.7.0.0Single node warmstart during link recovery when using a secured IP partnership.
SVAPAR-1066938.6.0.2Remote Support Assistance (RSA) cannot be enabled on FS9500 systems with MTM 4983-AH8
SVAPAR-1066938.6.2.0Remote Support Assistance (RSA) cannot be enabled on FS9500 systems with MTM 4983-AH8
SVAPAR-1066938.7.0.0Remote Support Assistance (RSA) cannot be enabled on FS9500 systems with MTM 4983-AH8
SVAPAR-1068748.6.0.2A timing window may cause a single node warmstart, while recording debug information about a replicated host write. This can only happen on a system using Policy Based Replication.
SVAPAR-1068748.6.2.0A timing window may cause a single node warmstart, while recording debug information about a replicated host write. This can only happen on a system using Policy Based Replication.
SVAPAR-1068748.7.0.0A timing window may cause a single node warmstart, while recording debug information about a replicated host write. This can only happen on a system using Policy Based Replication.
SVAPAR-1072708.6.0.2If an upgrade from a level below 8.6.x, to 8.6.0 or 8.6.1 commits, whilst FlashCopy is preparing to start a map, a bad state is introduced that prevents the FlashCopy maps from starting.
SVAPAR-1072708.6.2.0If an upgrade from a level below 8.6.x, to 8.6.0 or 8.6.1 commits, whilst FlashCopy is preparing to start a map, a bad state is introduced that prevents the FlashCopy maps from starting.
SVAPAR-1072708.7.0.0If an upgrade from a level below 8.6.x, to 8.6.0 or 8.6.1 commits, whilst FlashCopy is preparing to start a map, a bad state is introduced that prevents the FlashCopy maps from starting.
SVAPAR-1075478.5.0.11If there are more than 64 logins to a single Fibre Channel port, and a switch zoning change is made, a single node warmstart may occur.
SVAPAR-1075478.6.0.3If there are more than 64 logins to a single Fibre Channel port, and a switch zoning change is made, a single node warmstart may occur.
SVAPAR-1075478.6.2.0If there are more than 64 logins to a single Fibre Channel port, and a switch zoning change is made, a single node warmstart may occur.
SVAPAR-1075478.7.0.0If there are more than 64 logins to a single Fibre Channel port, and a switch zoning change is made, a single node warmstart may occur.
SVAPAR-1075588.6.0.2A Volume Group Snapshot (VGS) trigger may collide with a GMCV or Policy based Replication cycle causing the VGS trigger to fail.
SVAPAR-1075588.6.2.0A Volume Group Snapshot (VGS) trigger may collide with a GMCV or Policy based Replication cycle causing the VGS trigger to fail.
SVAPAR-1075588.7.0.0A Volume Group Snapshot (VGS) trigger may collide with a GMCV or Policy based Replication cycle causing the VGS trigger to fail.
SVAPAR-1075958.5.0.10Improve maximum throughput for Global Mirror, Metro Mirror and Hyperswap by providing more inter-node messaging resources
SVAPAR-1075958.6.0.2Improve maximum throughput for Global Mirror, Metro Mirror and Hyperswap by providing more inter-node messaging resources
SVAPAR-1075958.6.2.0Improve maximum throughput for Global Mirror, Metro Mirror and Hyperswap by providing more inter-node messaging resources
SVAPAR-1075958.7.0.0Improve maximum throughput for Global Mirror, Metro Mirror and Hyperswap by providing more inter-node messaging resources
SVAPAR-1077338.5.0.17The 'mksnmpserver' command fails with 'CMMVC5711E [####] is not valid data' if the auth passphrase contains special characters, such as '!'
SVAPAR-1077338.6.0.2The 'mksnmpserver' command fails with 'CMMVC5711E [####] is not valid data' if the auth passphrase contains special characters, such as '!'
SVAPAR-1077338.6.2.0The 'mksnmpserver' command fails with 'CMMVC5711E [####] is not valid data' if the auth passphrase contains special characters, such as '!'
SVAPAR-1077338.7.0.0The 'mksnmpserver' command fails with 'CMMVC5711E [####] is not valid data' if the auth passphrase contains special characters, such as '!'
SVAPAR-1077348.5.0.11Issuing IO to an incremental fcmap volume that is in a stopped state but has recently been expanded, and also has a partner fcmap, may cause the nodes to restart.
SVAPAR-1077348.6.0.2Issuing IO to an incremental fcmap volume that is in a stopped state but has recently been expanded, and also has a partner fcmap, may cause the nodes to restart.
SVAPAR-1077348.6.2.0Issuing IO to an incremental fcmap volume that is in a stopped state but has recently been expanded, and also has a partner fcmap, may cause the nodes to restart.
SVAPAR-1077348.7.0.0Issuing IO to an incremental fcmap volume that is in a stopped state but has recently been expanded, and also has a partner fcmap, may cause the nodes to restart.
SVAPAR-1078158.6.2.0An issue in 3-Site configurations, whilst adding snapshots on the AuxFar site, can cause the node to warmstart
SVAPAR-1078158.7.0.0An issue in 3-Site configurations, whilst adding snapshots on the AuxFar site, can cause the node to warmstart
SVAPAR-1078528.6.2.0A Policy-Based High Availability node may warmstart during IP quorum disconnect and reconnect operations.
SVAPAR-1078528.7.0.0A Policy-Based High Availability node may warmstart during IP quorum disconnect and reconnect operations.
SVAPAR-1084698.6.0.4A single node warmstart may occur on nodes configured to use a secured IP partnership
SVAPAR-1084698.6.2.0A single node warmstart may occur on nodes configured to use a secured IP partnership
SVAPAR-1084698.7.0.0A single node warmstart may occur on nodes configured to use a secured IP partnership
SVAPAR-1084768.6.0.4Remote users with public SSH keys configured cannot fall back to password authentication.
SVAPAR-1084768.6.2.0Remote users with public SSH keys configured cannot fall back to password authentication.
SVAPAR-1084768.7.0.0Remote users with public SSH keys configured cannot fall back to password authentication.
SVAPAR-1307298.6.0.4Remote users with public SSH keys configured cannot fall back to password authentication.
SVAPAR-1307298.6.2.0Remote users with public SSH keys configured cannot fall back to password authentication.
SVAPAR-1307298.7.0.0Remote users with public SSH keys configured cannot fall back to password authentication.
SVAPAR-1085518.5.0.11An expired token in the GUI file upload process can cause the upgrade to not start automatically after the file is successfully uploaded.
SVAPAR-1085518.6.0.3An expired token in the GUI file upload process can cause the upgrade to not start automatically after the file is successfully uploaded.
SVAPAR-1085518.6.2.0An expired token in the GUI file upload process can cause the upgrade to not start automatically after the file is successfully uploaded.
SVAPAR-1085518.7.0.0An expired token in the GUI file upload process can cause the upgrade to not start automatically after the file is successfully uploaded.
SVAPAR-1087158.5.0.12The Service Assistant GUI on 8.5.0.0 and above incorrectly performs actions on the local node instead of the node selected in the GUI.
SVAPAR-1087158.6.0.4The Service Assistant GUI on 8.5.0.0 and above incorrectly performs actions on the local node instead of the node selected in the GUI.
SVAPAR-1087158.6.2.0The Service Assistant GUI on 8.5.0.0 and above incorrectly performs actions on the local node instead of the node selected in the GUI.
SVAPAR-1087158.7.0.0The Service Assistant GUI on 8.5.0.0 and above incorrectly performs actions on the local node instead of the node selected in the GUI.
SVAPAR-1088318.6.2.0FS9500 and SV3 nodes may not boot with the minimum configuration consisting of at least 2 DIMMs.
SVAPAR-1088318.7.0.0FS9500 and SV3 nodes may not boot with the minimum configuration consisting of at least 2 DIMMs.
SVAPAR-1092898.5.0.10Buffer overflow may occur when handling the maximum length of 55 characters for either Multi-Factor Authentication (MFA) or Single Sign On (SSO) client secrets
SVAPAR-1092898.6.0.2Buffer overflow may occur when handling the maximum length of 55 characters for either Multi-Factor Authentication (MFA) or Single Sign On (SSO) client secrets
SVAPAR-1092898.6.2.0Buffer overflow may occur when handling the maximum length of 55 characters for either Multi-Factor Authentication (MFA) or Single Sign On (SSO) client secrets
SVAPAR-1092898.7.0.0Buffer overflow may occur when handling the maximum length of 55 characters for either Multi-Factor Authentication (MFA) or Single Sign On (SSO) client secrets
SVAPAR-1093858.6.3.0When one node has a hardware fault involving a faulty PCI switch, the partner node can repeatedly assert until it enters a 564 status, resulting in an outage
SVAPAR-1093858.7.0.0When one node has a hardware fault involving a faulty PCI switch, the partner node can repeatedly assert until it enters a 564 status, resulting in an outage
SVAPAR-1100598.6.0.1When using Storage Insights without a data collector, an attempt to collect a snap using Storage Insights may fail.
SVAPAR-1100598.6.2.0When using Storage Insights without a data collector, an attempt to collect a snap using Storage Insights may fail.
SVAPAR-1100598.7.0.0When using Storage Insights without a data collector, an attempt to collect a snap using Storage Insights may fail.
SVAPAR-1102348.5.0.11A single node warmstart can occur due to fibre channel adapter resource contention during 'chpartnership -stop' or 'mkfcpartnership' actions
SVAPAR-1103098.7.0.0When a volume group is assigned to an ownership group, and has a snapshot policy associated, running the 'lsvolumegroupsnapshotpolicy' or 'lsvolumegrouppopulation' command whilst logged in as an ownership group user, will cause a Config node to warmstart.
SVAPAR-1104268.6.0.3When a security admin other than superuser runs the security patch-related commands 'lspatch' and 'lssystempatches', this can cause a node to warmstart
SVAPAR-1104268.6.2.0When a security admin other than superuser runs the security patch-related commands 'lspatch' and 'lssystempatches', this can cause a node to warmstart
SVAPAR-1104268.7.0.0When a security admin other than superuser runs the security patch-related commands 'lspatch' and 'lssystempatches', this can cause a node to warmstart
SVAPAR-1107358.6.2.0Additional policing has been introduced to ensure that FlashCopy target volumes are not used with policy-based replication. Commands 'chvolumegroup -replicationpolicy' will fail if any volume in the group is the target of a FlashCopy map. 'chvdisk -volumegroup' will fail if the volume is the target of a FlashCopy map, and the volume group has a replication policy.
SVAPAR-1107358.7.0.0Additional policing has been introduced to ensure that FlashCopy target volumes are not used with policy-based replication. Commands 'chvolumegroup -replicationpolicy' will fail if any volume in the group is the target of a FlashCopy map. 'chvdisk -volumegroup' will fail if the volume is the target of a FlashCopy map, and the volume group has a replication policy.
SVAPAR-1107428.6.2.0A system is unable to send email to the email server because the password contains a hash '#' character.
SVAPAR-1107428.7.0.0A system is unable to send email to the email server because the password contains a hash '#' character.
SVAPAR-1078668.6.2.0A system is unable to send email to the email server because the password contains a hash '#' character.
SVAPAR-1078668.7.0.0A system is unable to send email to the email server because the password contains a hash '#' character.
SVAPAR-1107438.6.0.4Email becoming stuck in the mail queue can delay the 'upgrade commit was finished' message being sent, causing 3 out of 4 nodes to warmstart and then rejoin the cluster automatically within less than three minutes.
SVAPAR-1107438.6.2.0Email becoming stuck in the mail queue can delay the 'upgrade commit was finished' message being sent, causing 3 out of 4 nodes to warmstart and then rejoin the cluster automatically within less than three minutes.
SVAPAR-1107438.7.0.0Email becoming stuck in the mail queue can delay the 'upgrade commit was finished' message being sent, causing 3 out of 4 nodes to warmstart and then rejoin the cluster automatically within less than three minutes.
SVAPAR-1107458.6.2.0Policy-based Replication (PBR) snapshots and Change Volumes are factored into the preferred node assignment. This can lead to a perceived imbalance of the distribution of preferred node assignments.
SVAPAR-1107458.7.0.0Policy-based Replication (PBR) snapshots and Change Volumes are factored into the preferred node assignment. This can lead to a perceived imbalance of the distribution of preferred node assignments.
SVAPAR-1107498.6.2.0When configuring volumes using the wizard, the underlying command called is 'mkvolume' rather than the previous 'mkvdisk' command. With 'mkvdisk' it was possible to format the volumes, whereas with 'mkvolume' it is not possible
SVAPAR-1107498.7.0.0When configuring volumes using the wizard, the underlying command called is 'mkvolume' rather than the previous 'mkvdisk' command. With 'mkvdisk' it was possible to format the volumes, whereas with 'mkvolume' it is not possible
SVAPAR-1107658.5.0.12In a 3-Site configuration, the Config node can be lost if the 'stopfcmap' or 'stopfcconsistgrp' commands are run with the '-force' parameter
SVAPAR-1107658.6.0.4In a 3-Site configuration, the Config node can be lost if the 'stopfcmap' or 'stopfcconsistgrp' commands are run with the '-force' parameter
SVAPAR-1107658.6.2.0In a 3-Site configuration, the Config node can be lost if the 'stopfcmap' or 'stopfcconsistgrp' commands are run with the '-force' parameter
SVAPAR-1107658.7.0.0In a 3-Site configuration, the Config node can be lost if the 'stopfcmap' or 'stopfcconsistgrp' commands are run with the '-force' parameter
SVAPAR-1110218.5.0.12Unable to load the resource page in the GUI if IO group ID 0 does not have any nodes.
SVAPAR-1110218.6.0.4Unable to load the resource page in the GUI if IO group ID 0 does not have any nodes.
SVAPAR-1110218.6.2.0Unable to load the resource page in the GUI if IO group ID 0 does not have any nodes.
SVAPAR-1110218.7.0.0Unable to load the resource page in the GUI if IO group ID 0 does not have any nodes.
SVAPAR-1111738.5.0.17Loss of access when two drives experience slowness at the same time
SVAPAR-1111738.6.0.5Loss of access when two drives experience slowness at the same time
SVAPAR-1111738.7.0.1Loss of access when two drives experience slowness at the same time
SVAPAR-1111738.7.1.0Loss of access when two drives experience slowness at the same time
SVAPAR-1111878.6.2.0If the browser language is set to French, an issue can cause the SNMP server creation wizard not to be displayed.
SVAPAR-1111878.7.0.0If the browser language is set to French, an issue can cause the SNMP server creation wizard not to be displayed.
SVAPAR-1112398.6.2.0In rare situations it is possible for a node running Global Mirror with Change Volumes (GMCV) to assert
SVAPAR-1112398.7.0.0In rare situations it is possible for a node running Global Mirror with Change Volumes (GMCV) to assert
SVAPAR-1112578.6.2.0If many drive firmware upgrades are performed in quick succession, multiple nodes may go offline with node error 565 due to a full boot drive.
SVAPAR-1112578.7.0.0If many drive firmware upgrades are performed in quick succession, multiple nodes may go offline with node error 565 due to a full boot drive.
SVAPAR-1114448.6.0.4Direct attached fibre channel hosts may not log into the NPIV host port due to a timing issue with the Registered State Change Notification (RSCN).
SVAPAR-1117058.6.0.3If a Volume Group Snapshot fails and the system has 'snapshotpreserveparent' set to 'yes', this may trigger multiple node warmstarts.
SVAPAR-1117058.6.2.0If a Volume Group Snapshot fails and the system has 'snapshotpreserveparent' set to 'yes', this may trigger multiple node warmstarts.
SVAPAR-1117058.7.0.0If a Volume Group Snapshot fails and the system has 'snapshotpreserveparent' set to 'yes', this may trigger multiple node warmstarts.
SVAPAR-1118128.6.0.3Systems with 8.6.0 or later software may fail to complete lsvdisk commands, if a single SSH session runs multiple lsvdisk commands piped to each other. This can lead to failed login attempts for the GUI and CLI, and is more likely to occur if the system has more than 400 volumes.
SVAPAR-1118128.6.3.0Systems with 8.6.0 or later software may fail to complete lsvdisk commands, if a single SSH session runs multiple lsvdisk commands piped to each other. This can lead to failed login attempts for the GUI and CLI, and is more likely to occur if the system has more than 400 volumes.
SVAPAR-1118128.7.0.0Systems with 8.6.0 or later software may fail to complete lsvdisk commands, if a single SSH session runs multiple lsvdisk commands piped to each other. This can lead to failed login attempts for the GUI and CLI, and is more likely to occur if the system has more than 400 volumes.
SVAPAR-1119898.6.2.0Downloading software with a Fix ID longer than 64 characters fails with an error
SVAPAR-1119898.7.0.0Downloading software with a Fix ID longer than 64 characters fails with an error
SVAPAR-1119918.6.0.5Attempting to create a truststore fails with a CMMVC5711E error if the certificate file does not end with a newline character
SVAPAR-1119918.6.2.0Attempting to create a truststore fails with a CMMVC5711E error if the certificate file does not end with a newline character
SVAPAR-1119918.7.0.0Attempting to create a truststore fails with a CMMVC5711E error if the certificate file does not end with a newline character
SVAPAR-1119928.6.0.4Unable to configure policy-based Replication using the GUI, if the truststore contains blank lines or CRLF line endings
SVAPAR-1119928.6.2.0Unable to configure policy-based Replication using the GUI, if the truststore contains blank lines or CRLF line endings
SVAPAR-1119928.7.0.0Unable to configure policy-based Replication using the GUI, if the truststore contains blank lines or CRLF line endings
SVAPAR-1119948.6.2.0Certain writes to deduplicated and compressed DRP vdisks may return a mismatch, leading to a DRP pool going offline.
SVAPAR-1119948.7.0.0Certain writes to deduplicated and compressed DRP vdisks may return a mismatch, leading to a DRP pool going offline.
SVAPAR-1119968.5.0.12After upgrading to a level which contains new battery firmware, the battery may be offline after the upgrade.
SVAPAR-1119968.6.2.0After upgrading to a level which contains new battery firmware, the battery may be offline after the upgrade.
SVAPAR-1119968.7.0.0After upgrading to a level which contains new battery firmware, the battery may be offline after the upgrade.
SVAPAR-1120078.6.2.0Running the 'chsystemlimits' command with no parameters can cause multiple node warmstarts.
SVAPAR-1120078.7.0.0Running the 'chsystemlimits' command with no parameters can cause multiple node warmstarts.
SVAPAR-1121078.5.0.11There is an issue that affects PSU firmware upgrades in FS9500 and SV3 systems that can cause an outage. This happens when one PSU fails to download the firmware and another PSU starts to download the firmware. It is a very rare timing window that can be triggered if two PSUs are reseated close in time during the firmware upgrade process.
SVAPAR-1121078.6.0.3There is an issue that affects PSU firmware upgrades in FS9500 and SV3 systems that can cause an outage. This happens when one PSU fails to download the firmware and another PSU starts to download the firmware. It is a very rare timing window that can be triggered if two PSUs are reseated close in time during the firmware upgrade process.
SVAPAR-1121078.6.2.0There is an issue that affects PSU firmware upgrades in FS9500 and SV3 systems that can cause an outage. This happens when one PSU fails to download the firmware and another PSU starts to download the firmware. It is a very rare timing window that can be triggered if two PSUs are reseated close in time during the firmware upgrade process.
SVAPAR-1121078.7.0.0There is an issue that affects PSU firmware upgrades in FS9500 and SV3 systems that can cause an outage. This happens when one PSU fails to download the firmware and another PSU starts to download the firmware. It is a very rare timing window that can be triggered if two PSUs are reseated close in time during the firmware upgrade process.
SVAPAR-1121198.6.2.0Volumes can go offline due to out of space issues. This can cause the node to warmstart.
SVAPAR-1121198.7.0.0Volumes can go offline due to out of space issues. This can cause the node to warmstart.
SVAPAR-1122038.6.2.0A node warmstart may occur when removing a volume from a volume group which uses policy-based Replication.
SVAPAR-1122038.7.0.0A node warmstart may occur when removing a volume from a volume group which uses policy-based Replication.
SVAPAR-1125258.5.0.11A node assert can occur due to a resource allocation issue in a small timing window when using Remote Copy
SVAPAR-1125258.6.0.3A node assert can occur due to a resource allocation issue in a small timing window when using Remote Copy
SVAPAR-1125258.6.2.0A node assert can occur due to a resource allocation issue in a small timing window when using Remote Copy
SVAPAR-1125258.7.0.0A node assert can occur due to a resource allocation issue in a small timing window when using Remote Copy
SVAPAR-1127078.5.0.11Marking error 3015 as fixed on a SVC cluster containing SV3 nodes may cause a loss of access to data. For more details refer to this Flash
SVAPAR-1127078.6.0.3Marking error 3015 as fixed on a SVC cluster containing SV3 nodes may cause a loss of access to data. For more details refer to this Flash
SVAPAR-1127078.6.3.0Marking error 3015 as fixed on a SVC cluster containing SV3 nodes may cause a loss of access to data. For more details refer to this Flash
SVAPAR-1127078.7.0.0Marking error 3015 as fixed on a SVC cluster containing SV3 nodes may cause a loss of access to data. For more details refer to this Flash
SVAPAR-1127118.5.0.11IBM Storage Virtualize user interface code will not respond to a malformed HTTP POST with the expected HTTP 401 message.
SVAPAR-1127118.6.0.3IBM Storage Virtualize user interface code will not respond to a malformed HTTP POST with the expected HTTP 401 message.
SVAPAR-1127118.6.2.0IBM Storage Virtualize user interface code will not respond to a malformed HTTP POST with the expected HTTP 401 message.
SVAPAR-1127118.7.0.0IBM Storage Virtualize user interface code will not respond to a malformed HTTP POST with the expected HTTP 401 message.
SVAPAR-1127128.6.0.3The Cloud Call Home function will not restart on SVC clusters that were initially created with CG8 hardware and upgraded to 8.6.0.0 and above.
SVAPAR-1127128.6.3.0The Cloud Call Home function will not restart on SVC clusters that were initially created with CG8 hardware and upgraded to 8.6.0.0 and above.
SVAPAR-1127128.7.0.0The Cloud Call Home function will not restart on SVC clusters that were initially created with CG8 hardware and upgraded to 8.6.0.0 and above.
SVAPAR-1128568.6.0.4Conversion of Hyperswap volumes to 3 site consistency groups will increase write response time of the Hyperswap volumes.
SVAPAR-1128568.6.3.0Conversion of Hyperswap volumes to 3 site consistency groups will increase write response time of the Hyperswap volumes.
SVAPAR-1128568.7.0.0Conversion of Hyperswap volumes to 3 site consistency groups will increase write response time of the Hyperswap volumes.
SVAPAR-1129398.6.0.4A loss of disk access on one pool may cause IO to hang on a different pool due to a cache messaging hang.
SVAPAR-1129398.6.3.0A loss of disk access on one pool may cause IO to hang on a different pool due to a cache messaging hang.
SVAPAR-1129398.7.0.0A loss of disk access on one pool may cause IO to hang on a different pool due to a cache messaging hang.
SVAPAR-1131228.6.0.3A single-node warmstart may occur when a Fibre Channel port is disconnected from one fabric, and added to another. This is caused by a timing window in the FDMI discovery process.
SVAPAR-1131228.6.2.0A single-node warmstart may occur when a Fibre Channel port is disconnected from one fabric, and added to another. This is caused by a timing window in the FDMI discovery process.
SVAPAR-1131228.7.0.0A single-node warmstart may occur when a Fibre Channel port is disconnected from one fabric, and added to another. This is caused by a timing window in the FDMI discovery process.
SVAPAR-1108198.6.0.3A single-node warmstart may occur when a Fibre Channel port is disconnected from one fabric, and added to another. This is caused by a timing window in the FDMI discovery process.
SVAPAR-1108198.6.2.0A single-node warmstart may occur when a Fibre Channel port is disconnected from one fabric, and added to another. This is caused by a timing window in the FDMI discovery process.
SVAPAR-1108198.7.0.0A single-node warmstart may occur when a Fibre Channel port is disconnected from one fabric, and added to another. This is caused by a timing window in the FDMI discovery process.
SVAPAR-1137928.6.0.4Node assert may occur when an outbound IPC message, such as an nslookup to a DNS server, times out
SVAPAR-1137928.6.3.0Node assert may occur when an outbound IPC message, such as an nslookup to a DNS server, times out
SVAPAR-1137928.7.0.0Node assert may occur when an outbound IPC message, such as an nslookup to a DNS server, times out
SVAPAR-1140818.6.0.4The lsfabric command may show FC port logins which no longer exist. In large environments with many devices attached to the SAN, this may result in an incorrect 1800 error being reported, indicating that a node has too many logins.
SVAPAR-1140818.6.2.0The lsfabric command may show FC port logins which no longer exist. In large environments with many devices attached to the SAN, this may result in an incorrect 1800 error being reported, indicating that a node has too many logins.
SVAPAR-1140818.7.0.0The lsfabric command may show FC port logins which no longer exist. In large environments with many devices attached to the SAN, this may result in an incorrect 1800 error being reported, indicating that a node has too many logins.
SVAPAR-1140868.6.3.0Incorrect IO group memory policing for volume mirroring in the GUI for SVC SV3 hardware.
SVAPAR-1140868.7.0.0Incorrect IO group memory policing for volume mirroring in the GUI for SVC SV3 hardware.
SVAPAR-1141458.6.0.5A timing issue triggered by disabling an IP partnership's compression state while replication is running may cause a node to warmstart.
SVAPAR-1141458.7.0.0A timing issue triggered by disabling an IP partnership's compression state while replication is running may cause a node to warmstart.
SVAPAR-1147588.7.2.0Following a cluster recovery, the names of some back end storage controllers can be lost resulting in default names such as controller0.
SVAPAR-1147589.1.0.0Following a cluster recovery, the names of some back end storage controllers can be lost resulting in default names such as controller0.
SVAPAR-1148998.6.0.2Out of order snapshot stopping can cause stuck cleaning processes to occur, following Policy-based Replication cycling. This manifests as extremely high CPU utilization on multiple CPU cores, causing excessively high volume response times.
SVAPAR-1148998.6.2.0Out of order snapshot stopping can cause stuck cleaning processes to occur, following Policy-based Replication cycling. This manifests as extremely high CPU utilization on multiple CPU cores, causing excessively high volume response times.
SVAPAR-1148998.7.0.0Out of order snapshot stopping can cause stuck cleaning processes to occur, following Policy-based Replication cycling. This manifests as extremely high CPU utilization on multiple CPU cores, causing excessively high volume response times.
SVAPAR-1150218.6.0.4Software validation checks can trigger a T2 recovery when attempting to move a Hyperswap vdisk into and out of the nocachingiogrp state.
SVAPAR-1150218.6.3.0Software validation checks can trigger a T2 recovery when attempting to move a Hyperswap vdisk into and out of the nocachingiogrp state.
SVAPAR-1150218.7.0.0Software validation checks can trigger a T2 recovery when attempting to move a Hyperswap vdisk into and out of the nocachingiogrp state.
SVAPAR-1151298.5.0.17A node can warmstart when its I/O group partner node is removed due to an internal software counter discrepancy. This can lead to temporary loss of access.
SVAPAR-1151298.6.0.5A node can warmstart when its I/O group partner node is removed due to an internal software counter discrepancy. This can lead to temporary loss of access.
SVAPAR-1151298.6.3.0A node can warmstart when its I/O group partner node is removed due to an internal software counter discrepancy. This can lead to temporary loss of access.
SVAPAR-1151298.7.0.0A node can warmstart when its I/O group partner node is removed due to an internal software counter discrepancy. This can lead to temporary loss of access.
SVAPAR-1151368.5.0.12Failure of an NVMe drive has a small probability of triggering a PCIe credit timeout in a node canister, causing the node to reboot.
SVAPAR-1151368.6.0.3Failure of an NVMe drive has a small probability of triggering a PCIe credit timeout in a node canister, causing the node to reboot.
SVAPAR-1154788.6.0.4An issue in the thin-provisioning component may cause a node warmstart during upgrade from pre-8.5.4 to 8.5.4 or later.
SVAPAR-1154788.6.2.0An issue in the thin-provisioning component may cause a node warmstart during upgrade from pre-8.5.4 to 8.5.4 or later.
SVAPAR-1154788.7.0.0An issue in the thin-provisioning component may cause a node warmstart during upgrade from pre-8.5.4 to 8.5.4 or later.
SVAPAR-1155058.6.0.4Expanding a volume in a Flashcopy map and then creating a dependent incremental forward and reverse Flashcopy map may cause a dual node warmstart when the incremental map is started.
SVAPAR-1155058.6.3.0Expanding a volume in a Flashcopy map and then creating a dependent incremental forward and reverse Flashcopy map may cause a dual node warmstart when the incremental map is started.
SVAPAR-1155058.7.0.0Expanding a volume in a Flashcopy map and then creating a dependent incremental forward and reverse Flashcopy map may cause a dual node warmstart when the incremental map is started.
SVAPAR-1155208.7.0.0An unexpected sequence of NVMe host IO commands may trigger a node warmstart.
SVAPAR-1162658.6.3.0When upgrading memory on a node, it may repeatedly reboot if not removed from the cluster before shutting the node down and adding additional memory.
SVAPAR-1162658.7.0.0When upgrading memory on a node, it may repeatedly reboot if not removed from the cluster before shutting the node down and adding additional memory.
SVAPAR-1165928.5.0.12If a V5000E or a Flashsystem 5000 is configured with multiple compressed IP partnerships, and one or more of the partnerships is with a non V5000E or FS5000, it may repeatedly warmstart due to a lack of compression resources.
SVAPAR-1165928.6.0.4If a V5000E or a Flashsystem 5000 is configured with multiple compressed IP partnerships, and one or more of the partnerships is with a non V5000E or FS5000, it may repeatedly warmstart due to a lack of compression resources.
SVAPAR-1165928.7.0.0If a V5000E or a Flashsystem 5000 is configured with multiple compressed IP partnerships, and one or more of the partnerships is with a non V5000E or FS5000, it may repeatedly warmstart due to a lack of compression resources.
SVAPAR-1170058.6.0.5A system may run an automatic cluster recovery, and warmstart all nodes, if Policy-based Replication is disabled on the partnership before removing the replication policy.
SVAPAR-1171798.5.0.11Snap data collection does not collect an error log if the superuser password requires a change
SVAPAR-1171798.6.0.3Snap data collection does not collect an error log if the superuser password requires a change
SVAPAR-1171798.6.2.0Snap data collection does not collect an error log if the superuser password requires a change
SVAPAR-1171798.7.0.0Snap data collection does not collect an error log if the superuser password requires a change
SVAPAR-1173188.5.0.11A faulty SFP in a 32Gb Fibre Channel adapter may cause a single node warmstart, instead of reporting the port as failed.
SVAPAR-1174578.6.0.5A hung condition in Remote Receive IOs (RRI) for volume groups can lead to warmstarts on multiple nodes.
SVAPAR-1174578.6.3.0A hung condition in Remote Receive IOs (RRI) for volume groups can lead to warmstarts on multiple nodes.
SVAPAR-1174578.7.0.0A hung condition in Remote Receive IOs (RRI) for volume groups can lead to warmstarts on multiple nodes.
SVAPAR-1176638.6.3.0The last backup time for a safeguarded volume group within the Volume Groups view does not display the correct time.
SVAPAR-1176638.7.0.0The last backup time for a safeguarded volume group within the Volume Groups view does not display the correct time.
SVAPAR-1177278.6.2.0Node warmstarts may occur when a system using Policy-based High-Availability is upgraded to 8.6.2.
SVAPAR-1177278.7.0.0Node warmstarts may occur when a system using Policy-based High-Availability is upgraded to 8.6.2.
SVAPAR-1177388.6.2.1The configuration node may go offline with node error 565, due to a full /tmp partition on the boot drive.
SVAPAR-1177388.6.3.0The configuration node may go offline with node error 565, due to a full /tmp partition on the boot drive.
SVAPAR-1177388.7.0.0The configuration node may go offline with node error 565, due to a full /tmp partition on the boot drive.
SVAPAR-1177688.6.0.3Cloud Callhome may stop working without logging an error
SVAPAR-1177688.6.3.0Cloud Callhome may stop working without logging an error
SVAPAR-1177688.7.0.0Cloud Callhome may stop working without logging an error
SVAPAR-1177818.6.0.3A single node warmstart may occur during Fabric Device Management Interface (FDMI) discovery if a virtual WWPN is discovered on a different physical port than it was previously.
SVAPAR-1177818.6.2.0A single node warmstart may occur during Fabric Device Management Interface (FDMI) discovery if a virtual WWPN is discovered on a different physical port than it was previously.
SVAPAR-1177818.7.0.0A single node warmstart may occur during Fabric Device Management Interface (FDMI) discovery if a virtual WWPN is discovered on a different physical port than it was previously.
SVAPAR-1197998.6.0.5Inter-node resource queuing on SV3 I/O groups causes high write response time.
SVAPAR-1197998.7.0.0Inter-node resource queuing on SV3 I/O groups causes high write response time.
SVAPAR-1201568.6.0.4An internal process introduced in 8.6.0 to collect iSCSI port statistics can cause host performance to be affected
SVAPAR-1201568.7.0.0An internal process introduced in 8.6.0 to collect iSCSI port statistics can cause host performance to be affected
SVAPAR-1203598.6.3.0Single node warmstart when using FlashCopy maps on volumes configured for Policy-based Replication
SVAPAR-1203598.7.0.0Single node warmstart when using FlashCopy maps on volumes configured for Policy-based Replication
SVAPAR-1203918.6.3.0Removing an incremental Flashcopy mapping from a consistency group, after there was a previous error when starting the Flashcopy consistency group that caused a node warmstart, may trigger additional node asserts.
SVAPAR-1203918.7.0.0Removing an incremental Flashcopy mapping from a consistency group, after there was a previous error when starting the Flashcopy consistency group that caused a node warmstart, may trigger additional node asserts.
SVAPAR-1203978.6.0.5A node may not shutdown cleanly on loss of power if it contains 25Gb Ethernet adapters, necessitating a system recovery.
SVAPAR-1203978.6.3.0A node may not shutdown cleanly on loss of power if it contains 25Gb Ethernet adapters, necessitating a system recovery.
SVAPAR-1203978.7.0.0A node may not shutdown cleanly on loss of power if it contains 25Gb Ethernet adapters, necessitating a system recovery.
SVAPAR-1203998.5.0.12A host WWPN incorrectly shows as being still logged into the storage when it is not.
SVAPAR-1203998.6.0.4A host WWPN incorrectly shows as being still logged into the storage when it is not.
SVAPAR-1203998.6.3.0A host WWPN incorrectly shows as being still logged into the storage when it is not.
SVAPAR-1203998.7.0.0A host WWPN incorrectly shows as being still logged into the storage when it is not.
SVAPAR-1204958.6.0.4A node can experience performance degradation, if using the embedded VASA provider, thereby leading to a potential single node warmstart.
SVAPAR-1204958.6.3.0A node can experience performance degradation, if using the embedded VASA provider, thereby leading to a potential single node warmstart.
SVAPAR-1204958.7.0.0A node can experience performance degradation, if using the embedded VASA provider, thereby leading to a potential single node warmstart.
SVAPAR-1205998.6.3.0On systems handling a large number of concurrent host I/O requests, a timing window in memory allocation may cause a single node warmstart.
SVAPAR-1205998.7.0.0On systems handling a large number of concurrent host I/O requests, a timing window in memory allocation may cause a single node warmstart.
SVAPAR-1206108.5.0.12Loss of access to data when changing the properties of a FlashCopy Map while the map is being deleted
SVAPAR-1206108.6.0.4Loss of access to data when changing the properties of a FlashCopy Map while the map is being deleted
SVAPAR-1206108.6.3.0Loss of access to data when changing the properties of a FlashCopy Map while the map is being deleted
SVAPAR-1206108.7.0.0Loss of access to data when changing the properties of a FlashCopy Map while the map is being deleted
SVAPAR-1206318.5.0.12Loss of access to data when changing the properties of a FlashCopy Map while the map is being deleted
SVAPAR-1206318.6.0.4Loss of access to data when changing the properties of a FlashCopy Map while the map is being deleted
SVAPAR-1206318.6.3.0Loss of access to data when changing the properties of a FlashCopy Map while the map is being deleted
SVAPAR-1206318.7.0.0Loss of access to data when changing the properties of a FlashCopy Map while the map is being deleted
SVAPAR-1206308.6.0.5An MDisk may go offline due to I/O timeouts caused by an imbalanced workload distribution towards the resources in DRP, whilst FlashCopy is running at a high copy rate within DRP, and the target volume is deduplicated.
SVAPAR-1206308.6.3.0An MDisk may go offline due to I/O timeouts caused by an imbalanced workload distribution towards the resources in DRP, whilst FlashCopy is running at a high copy rate within DRP, and the target volume is deduplicated.
SVAPAR-1206308.7.0.0An MDisk may go offline due to I/O timeouts caused by an imbalanced workload distribution towards the resources in DRP, whilst FlashCopy is running at a high copy rate within DRP, and the target volume is deduplicated.
SVAPAR-1206398.5.0.12The vulnerability scanner claims cookies were set without the HttpOnly flag.
SVAPAR-1206398.6.0.4The vulnerability scanner claims cookies were set without the HttpOnly flag.
SVAPAR-1206398.6.3.0The vulnerability scanner claims cookies were set without the HttpOnly flag.
SVAPAR-1206398.7.0.0The vulnerability scanner claims cookies were set without the HttpOnly flag.
SVAPAR-1206498.6.0.10Node warmstart triggered by a small timing window when temporarily pausing IO in response to a configuration change.
SVAPAR-1206498.7.0.7Node warmstart triggered by a small timing window when temporarily pausing IO in response to a configuration change.
SVAPAR-1206499.1.0.0Node warmstart triggered by a small timing window when temporarily pausing IO in response to a configuration change.
SVAPAR-1207328.6.3.0Unable to expand a vdisk from the GUI because the constant values for the compressed and regular pool volume disk maximum capacity were incorrect in the constant file.
SVAPAR-1207328.7.0.0Unable to expand a vdisk from the GUI because the constant values for the compressed and regular pool volume disk maximum capacity were incorrect in the constant file.
SVAPAR-1209258.6.3.0A single node assert may occur due to a timing issue related to thin provisioned volumes in a traditional pool.
SVAPAR-1209258.7.0.0A single node assert may occur due to a timing issue related to thin provisioned volumes in a traditional pool.
SVAPAR-1213348.6.0.4Packets with unexpected size are received on the ethernet interface. This causes the internal buffers to become full, thereby causing a node to warmstart to clear the condition
SVAPAR-1213348.7.0.0Packets with unexpected size are received on the ethernet interface. This causes the internal buffers to become full, thereby causing a node to warmstart to clear the condition
SVAPAR-1224118.5.0.12A node may assert when a vdisk has been expanded and rehome has not been made aware of the possible change in the number of regions it may have to rehome.
SVAPAR-1224118.6.0.4A node may assert when a vdisk has been expanded and rehome has not been made aware of the possible change in the number of regions it may have to rehome.
SVAPAR-1224118.7.0.0A node may assert when a vdisk has been expanded and rehome has not been made aware of the possible change in the number of regions it may have to rehome.
SVAPAR-1236148.6.0.61300 Error in the error log when a node comes online, caused by a delay between bringing up the physical FC ports and the virtual FC ports
SVAPAR-1236148.7.0.31300 Error in the error log when a node comes online, caused by a delay between bringing up the physical FC ports and the virtual FC ports
SVAPAR-1236148.7.2.01300 Error in the error log when a node comes online, caused by a delay between bringing up the physical FC ports and the virtual FC ports
SVAPAR-1236448.5.0.12A system with NVMe drives may falsely log an error indicating a Flash drive has high write endurance usage. The error cannot be cleared.
SVAPAR-1236448.6.0.4A system with NVMe drives may falsely log an error indicating a Flash drive has high write endurance usage. The error cannot be cleared.
SVAPAR-1236448.7.0.0A system with NVMe drives may falsely log an error indicating a Flash drive has high write endurance usage. The error cannot be cleared.
SVAPAR-1238748.6.0.4There is a timing window when using async-PBR or RC GMCV, with Volume Group snapshots, which results in the new snapshot VDisk mistakenly being taken offline, forcing the production volume offline for a brief period.
SVAPAR-1238748.7.0.0There is a timing window when using async-PBR or RC GMCV, with Volume Group snapshots, which results in the new snapshot VDisk mistakenly being taken offline, forcing the production volume offline for a brief period.
SVAPAR-1239458.6.0.4If a system SSL certificate is installed with the extension CA True, it may trigger multiple node warmstarts.
SVAPAR-1239458.7.0.0If a system SSL certificate is installed with the extension CA True, it may trigger multiple node warmstarts.
SVAPAR-1254168.7.0.0If the vdisk with ID 0 is deleted and then recreated, and is added to a volume group with an HA replication policy, its internal state may become invalid. If a node warmstart or upgrade occurs in this state, this may trigger multiple node warmstarts and loss of access.
SVAPAR-1267378.7.0.0If a user that does not have SecurityAdmin role runs the command 'rmmdiskgrp -force' on a pool with mirrored VDisks, a T2 recovery may occur.
SVAPAR-1267428.5.0.12A 3400 error (too many compression errors) may be logged incorrectly, due to an incorrect threshold. The error can be ignored on code levels which do not contain this fix.
SVAPAR-1267428.6.0.4A 3400 error (too many compression errors) may be logged incorrectly, due to an incorrect threshold. The error can be ignored on code levels which do not contain this fix.
SVAPAR-1267428.7.0.0A 3400 error (too many compression errors) may be logged incorrectly, due to an incorrect threshold. The error can be ignored on code levels which do not contain this fix.
SVAPAR-1267678.6.0.4Upgrading to 8.6.0 when iSER clustering is configured, may cause multiple node warmstarts to occur, if node canisters have been swapped between slots since the system was manufactured.
SVAPAR-1267678.7.0.0Upgrading to 8.6.0 when iSER clustering is configured, may cause multiple node warmstarts to occur, if node canisters have been swapped between slots since the system was manufactured.
SVAPAR-1270638.5.0.12Degraded Remote Copy performance on systems with multiple IO groups running 8.5.0.11 or 8.6.0.3 after a node restarts
SVAPAR-1270638.6.0.4Degraded Remote Copy performance on systems with multiple IO groups running 8.5.0.11 or 8.6.0.3 after a node restarts
SVAPAR-1270638.7.0.0Degraded Remote Copy performance on systems with multiple IO groups running 8.5.0.11 or 8.6.0.3 after a node restarts
SVAPAR-1278258.7.0.0Due to an issue with the Fibre Channel adapter firmware, the node may warmstart
SVAPAR-1278338.7.0.0Temperature warning is reported against the incorrect Secondary Expander Module (SEM)
SVAPAR-1278358.6.0.5A node may warmstart due to an invalid RDMA receive size of zero.
SVAPAR-1278358.7.0.0A node may warmstart due to an invalid RDMA receive size of zero.
SVAPAR-1278368.6.0.4Running some Safeguarded Copy commands can cause a cluster recovery on some platforms.
SVAPAR-1278368.7.0.0Running some Safeguarded Copy commands can cause a cluster recovery on some platforms.
SVAPAR-1278418.5.0.12A slow I/O resource leak may occur when using FlashCopy, and the system is under high workload. This may cause a node warmstart to occur
SVAPAR-1278418.6.0.4A slow I/O resource leak may occur when using FlashCopy, and the system is under high workload. This may cause a node warmstart to occur
SVAPAR-1278418.7.0.0A slow I/O resource leak may occur when using FlashCopy, and the system is under high workload. This may cause a node warmstart to occur
SVAPAR-1278448.6.0.4The user is informed that a snapshot policy cannot be assigned. The error message CMMVC9893E is displayed.
SVAPAR-1278448.7.0.0The user is informed that a snapshot policy cannot be assigned. The error message CMMVC9893E is displayed.
SVAPAR-1278698.6.0.5Multiple node warmstarts may occur, due to a rarely seen timing window, when quorum disk I/O is submitted but there is no backend mdisk Logical Unit association that has been discovered by the agent for that quorum disk.
SVAPAR-1278698.7.0.0Multiple node warmstarts may occur, due to a rarely seen timing window, when quorum disk I/O is submitted but there is no backend mdisk Logical Unit association that has been discovered by the agent for that quorum disk.
SVAPAR-1278718.7.0.0When performing a manual upgrade of the AUX cluster from 8.1.1.2 to 8.2.1.12, 'lsupdate' incorrectly reports that the code level is 7.7.1.5
SVAPAR-1279088.5.0.12A volume mapped to an NVMe host cannot be mapped to another NVMe host via the GUI; however, it is possible via the CLI. In addition, when a host is removed from a host cluster, it is not possible to add it back using the GUI
SVAPAR-1279088.6.0.4A volume mapped to an NVMe host cannot be mapped to another NVMe host via the GUI; however, it is possible via the CLI. In addition, when a host is removed from a host cluster, it is not possible to add it back using the GUI
SVAPAR-1279088.6.3.0A volume mapped to an NVMe host cannot be mapped to another NVMe host via the GUI; however, it is possible via the CLI. In addition, when a host is removed from a host cluster, it is not possible to add it back using the GUI
SVAPAR-1279088.7.0.0A volume mapped to an NVMe host cannot be mapped to another NVMe host via the GUI; however, it is possible via the CLI. In addition, when a host is removed from a host cluster, it is not possible to add it back using the GUI
SVAPAR-1206168.5.0.12A volume mapped to an NVMe host cannot be mapped to another NVMe host via the GUI; however, it is possible via the CLI. In addition, when a host is removed from a host cluster, it is not possible to add it back using the GUI
SVAPAR-1206168.6.0.4A volume mapped to an NVMe host cannot be mapped to another NVMe host via the GUI; however, it is possible via the CLI. In addition, when a host is removed from a host cluster, it is not possible to add it back using the GUI
SVAPAR-1206168.6.3.0A volume mapped to an NVMe host cannot be mapped to another NVMe host via the GUI; however, it is possible via the CLI. In addition, when a host is removed from a host cluster, it is not possible to add it back using the GUI
SVAPAR-1206168.7.0.0A volume mapped to an NVMe host cannot be mapped to another NVMe host via the GUI; however, it is possible via the CLI. In addition, when a host is removed from a host cluster, it is not possible to add it back using the GUI
SVAPAR-1280108.7.0.0A node warmstart can sometimes occur due to a timeout on certain fibre channel adapters
SVAPAR-1280528.5.0.12A node assert may occur if a host sends a login request to a node when the host is being removed from the cluster with the '-force' parameter.
SVAPAR-1280528.6.0.4A node assert may occur if a host sends a login request to a node when the host is being removed from the cluster with the '-force' parameter.
SVAPAR-1280528.7.0.0A node assert may occur if a host sends a login request to a node when the host is being removed from the cluster with the '-force' parameter.
SVAPAR-1282288.5.0.12The NTP daemon may not synchronise after upgrading from 8.3.x to 8.5.x
SVAPAR-1282288.6.0.4The NTP daemon may not synchronise after upgrading from 8.3.x to 8.5.x
SVAPAR-1282288.6.2.0The NTP daemon may not synchronise after upgrading from 8.3.x to 8.5.x
SVAPAR-1282288.7.0.0The NTP daemon may not synchronise after upgrading from 8.3.x to 8.5.x
SVAPAR-1122438.5.0.12The NTP daemon may not synchronise after upgrading from 8.3.x to 8.5.x
SVAPAR-1122438.6.0.4The NTP daemon may not synchronise after upgrading from 8.3.x to 8.5.x
SVAPAR-1122438.6.2.0The NTP daemon may not synchronise after upgrading from 8.3.x to 8.5.x
SVAPAR-1122438.7.0.0The NTP daemon may not synchronise after upgrading from 8.3.x to 8.5.x
SVAPAR-1283798.5.0.12When collecting the debug data from a 16Gb or 32Gb Fibre Channel adapter, node warmstarts may occur, due to the firmware dump file exceeding the maximum size.
SVAPAR-1284018.7.0.0Upgrade to 8.6.3 may cause loss of access to iSCSI hosts, on FlashSystem 5015 and FlashSystem 5035 systems with a 4-port 10Gb ethernet adapter.
SVAPAR-1284148.6.0.6Thin-clone volumes in a Data Reduction Pool will incorrectly have compression disabled, if the source volume was uncompressed.
SVAPAR-1284148.7.0.0Thin-clone volumes in a Data Reduction Pool will incorrectly have compression disabled, if the source volume was uncompressed.
SVAPAR-1286268.6.0.4A node may warmstart or fail to start FlashCopy maps, in volume groups that contain Remote Copy primary and secondary volumes, or both copies of a Hyperswap volume.
SVAPAR-1286268.7.0.0A node may warmstart or fail to start FlashCopy maps, in volume groups that contain Remote Copy primary and secondary volumes, or both copies of a Hyperswap volume.
SVAPAR-1289128.6.0.5A T2 recovery may occur when attempting to take a snapshot from a volume group that contains volumes from multiple I/O groups, and one of the I/O groups is offline.
SVAPAR-1289128.7.0.0A T2 recovery may occur when attempting to take a snapshot from a volume group that contains volumes from multiple I/O groups, and one of the I/O groups is offline.
SVAPAR-1289138.5.0.17Multiple node asserts after a VDisk copy in a data reduction pool was removed while an IO group was offline and a T2 recovery occurred
SVAPAR-1289138.6.0.10Multiple node asserts after a VDisk copy in a data reduction pool was removed while an IO group was offline and a T2 recovery occurred
SVAPAR-1289138.7.0.0Multiple node asserts after a VDisk copy in a data reduction pool was removed while an IO group was offline and a T2 recovery occurred
SVAPAR-1289148.6.0.5A CMMVC9859E error will occur when trying to use 'addvolumecopy' to create a Hyperswap volume from a VDisk with existing snapshots
SVAPAR-1289148.7.0.0A CMMVC9859E error will occur when trying to use 'addvolumecopy' to create a Hyperswap volume from a VDisk with existing snapshots
SVAPAR-1291118.6.0.4When using the GUI, the IPv6 field is not wide enough, requiring the user to scroll right to see the full IPv6 address.
SVAPAR-1291118.7.0.0When using the GUI, the IPv6 field is not wide enough, requiring the user to scroll right to see the full IPv6 address.
SVAPAR-1319938.6.0.4When using the GUI, the IPv6 field is not wide enough, requiring the user to scroll right to see the full IPv6 address.
SVAPAR-1319938.7.0.0When using the GUI, the IPv6 field is not wide enough, requiring the user to scroll right to see the full IPv6 address.
SVAPAR-1292748.6.0.5When running the 'mkvolumegroup' command, a warmstart of the Config node may occur.
SVAPAR-1292748.6.2.0When running the 'mkvolumegroup' command, a warmstart of the Config node may occur.
SVAPAR-1292748.7.0.0When running the 'mkvolumegroup' command, a warmstart of the Config node may occur.
SVAPAR-1292988.6.0.4A managed disk group went offline during queueing of fibre rings on the overflow list, causing the node to assert.
SVAPAR-1292988.7.0.0A managed disk group went offline during queueing of fibre rings on the overflow list, causing the node to assert.
SVAPAR-1293188.6.0.5A Storage Virtualize cluster configured without I/O group 0 is unable to send performance metrics
SVAPAR-1293188.7.0.0A Storage Virtualize cluster configured without I/O group 0 is unable to send performance metrics
SVAPAR-1304388.6.0.5Upgrading a system to 8.6.2 or higher with a single portset assigned to an IP replication partnership may cause all nodes to warmstart when making a change to the partnership.
SVAPAR-1304388.7.0.0Upgrading a system to 8.6.2 or higher with a single portset assigned to an IP replication partnership may cause all nodes to warmstart when making a change to the partnership.
SVAPAR-1305538.6.0.5Converting a 3-Site AuxFar volume to HyperSwap results in multiple node asserts
SVAPAR-1305538.7.0.0Converting a 3-Site AuxFar volume to HyperSwap results in multiple node asserts
SVAPAR-1306468.7.0.0False positive Recovery Point Objective (RPO) exceeded events (52004) reported for volume groups configured with Policy-Based Replication
SVAPAR-1307318.6.0.4During installation, a single node assert at the end of the software upgrade process may occur
SVAPAR-1307318.6.3.0During installation, a single node assert at the end of the software upgrade process may occur
SVAPAR-1307318.7.0.0During installation, a single node assert at the end of the software upgrade process may occur
SVAPAR-1309848.7.1.0Configuring Policy-based replication in the GUI fails if the system authentication service type is unused.
SVAPAR-1309849.1.0.0Configuring Policy-based replication in the GUI fails if the system authentication service type is unused.
SVAPAR-1312128.6.0.5The GUI partnership properties dialog crashes if the issuer certificate does not have an organization field
SVAPAR-1312128.7.0.0The GUI partnership properties dialog crashes if the issuer certificate does not have an organization field
SVAPAR-1312288.6.0.5A RAID array temporarily goes offline due to delays in fetching the encryption key when a node starts up.
SVAPAR-1312288.7.0.1A RAID array temporarily goes offline due to delays in fetching the encryption key when a node starts up.
SVAPAR-1312288.7.2.0A RAID array temporarily goes offline due to delays in fetching the encryption key when a node starts up.
SVAPAR-1312338.7.0.0In an SVC stretched-cluster configuration with multiple I/O groups and policy-based replication, an attempt to create a new volume may fail due to an incorrect automatic I/O group assignment.
SVAPAR-1312508.7.0.0The system may not correctly balance fibre channel workload over paths to a back end controller.
SVAPAR-1312598.6.0.5Removal of the replication policy after the volume group was set to be independent exposed an issue that resulted in the FlashCopy internal state becoming incorrect, which meant subsequent FlashCopy actions failed incorrectly.
SVAPAR-1312598.7.0.0Removal of the replication policy after the volume group was set to be independent exposed an issue that resulted in the FlashCopy internal state becoming incorrect, which meant subsequent FlashCopy actions failed incorrectly.
SVAPAR-1315678.6.0.4Node goes offline and enters service state when collecting diagnostic data for 100Gb/s adapters.
SVAPAR-1315678.7.0.0Node goes offline and enters service state when collecting diagnostic data for 100Gb/s adapters.
SVAPAR-1316488.6.0.5Multiple node warmstarts may occur when starting an incremental FlashCopy map that uses a replication target volume as its source, and the change volume is used to keep a consistent image.
SVAPAR-1316488.7.0.0Multiple node warmstarts may occur when starting an incremental FlashCopy map that uses a replication target volume as its source, and the change volume is used to keep a consistent image.
SVAPAR-1316518.6.0.5Policy-based Replication got stuck after both nodes in the I/O group on a target system restarted at the same time
SVAPAR-1316518.7.0.0Policy-based Replication got stuck after both nodes in the I/O group on a target system restarted at the same time
SVAPAR-1318078.6.0.5The orchestrator for Policy-Based Replication is not running, preventing replication from being configured. Attempting to configure replication may cause a single node warmstart.
SVAPAR-1318078.7.0.0The orchestrator for Policy-Based Replication is not running, preventing replication from being configured. Attempting to configure replication may cause a single node warmstart.
SVAPAR-1318658.5.0.17A system may encounter communication issues when being configured with IPv6.
SVAPAR-1318658.6.0.5A system may encounter communication issues when being configured with IPv6.
SVAPAR-1318658.7.0.0A system may encounter communication issues when being configured with IPv6.
SVAPAR-1319948.5.0.17When implementing Safeguarded Copy, the associated child pool may run out of space, which can cause multiple Safeguarded Copies to go offline. This can cause the node to warmstart.
SVAPAR-1319948.6.0.5When implementing Safeguarded Copy, the associated child pool may run out of space, which can cause multiple Safeguarded Copies to go offline. This can cause the node to warmstart.
SVAPAR-1319948.7.0.0When implementing Safeguarded Copy, the associated child pool may run out of space, which can cause multiple Safeguarded Copies to go offline. This can cause the node to warmstart.
SVAPAR-1319998.6.0.6Single node warmstart when an NVMe host disconnects from the storage
SVAPAR-1319998.7.0.3Single node warmstart when an NVMe host disconnects from the storage
SVAPAR-1319998.7.2.0Single node warmstart when an NVMe host disconnects from the storage
SVAPAR-1320018.7.0.0Unexpected lease expiries may occur when half of the nodes in the system start up, one after another in a short time.
SVAPAR-1320038.7.0.0A node may warmstart when an internal process to collect information from Ethernet ports takes longer than expected.
SVAPAR-1320118.5.0.17In rare situations, a host's WWPN may show incorrectly as still logged into the storage even though it is not. This can cause the host to incorrectly appear as degraded.
SVAPAR-1320118.6.0.5In rare situations, a host's WWPN may show incorrectly as still logged into the storage even though it is not. This can cause the host to incorrectly appear as degraded.
SVAPAR-1320118.7.0.0In rare situations, a host's WWPN may show incorrectly as still logged into the storage even though it is not. This can cause the host to incorrectly appear as degraded.
SVAPAR-1320138.7.0.0On a Hyperswap system, the preferred site node can lease expire if the remote site nodes suffered a warmstart.
SVAPAR-1320278.7.0.0An incorrect 'acknowledge' status for an initiator SCSI command is sent from the SCSI target side when no sense data was actually transferred. This may cause a node to warmstart.
SVAPAR-1320628.7.0.0vVols are reported as inaccessible due to a 30 minute timeout if the VASA provider is unavailable
SVAPAR-1320728.5.0.17A node may assert due to a Fibre Channel port constantly flapping between the FlashSystem and the host.
SVAPAR-1320728.6.0.5A node may assert due to a Fibre Channel port constantly flapping between the FlashSystem and the host.
SVAPAR-1320728.7.0.0A node may assert due to a Fibre Channel port constantly flapping between the FlashSystem and the host.
SVAPAR-1321238.5.0.12Vdisks can go offline after a T3 recovery with an expanding DRAID1 array, which can cause IO errors and data corruption
SVAPAR-1321238.6.0.4Vdisks can go offline after a T3 recovery with an expanding DRAID1 array, which can cause IO errors and data corruption
SVAPAR-1333928.7.0.0In rare situations involving multiple concurrent snapshot restore operations, an undetected data corruption may occur.
SVAPAR-1334428.7.0.0When using asynchronous policy based replication in DR test mode, if the DR volume group is put into production use (the volume group is made independent), an undetected data corruption may occur.
SVAPAR-1345898.6.0.6A problem with NVMe drives on FlashSystem 9500 may impact node to node communication over the PCIe bus. This may lead to a temporary array offline.

The fix for SVAPAR-134589 was found to be incomplete. SVAPAR-170657 provides a full fix for this issue.

SVAPAR-1345898.7.0.3A problem with NVMe drives on FlashSystem 9500 may impact node to node communication over the PCIe bus. This may lead to a temporary array offline.

The fix for SVAPAR-134589 was found to be incomplete. SVAPAR-170657 provides a full fix for this issue.

SVAPAR-1345898.7.2.0A problem with NVMe drives on FlashSystem 9500 may impact node to node communication over the PCIe bus. This may lead to a temporary array offline.

The fix for SVAPAR-134589 was found to be incomplete. SVAPAR-170657 provides a full fix for this issue.

SVAPAR-1350008.6.0.5A low-probability timing window in memory management code may cause a single-node warmstart at upgrade completion.
SVAPAR-1350008.6.3.0A low-probability timing window in memory management code may cause a single-node warmstart at upgrade completion.
SVAPAR-1350008.7.0.0A low-probability timing window in memory management code may cause a single-node warmstart at upgrade completion.
SVAPAR-1350228.7.0.1When using Policy Based High Availability, a storage partition can become suspended due to a disagreement in the internal quorum race state between two systems, causing a loss of access to data.
SVAPAR-1350228.7.1.0When using Policy Based High Availability, a storage partition can become suspended due to a disagreement in the internal quorum race state between two systems, causing a loss of access to data.
SVAPAR-1357428.6.0.5A temporary network issue may cause unexpected 1585 DNS connection errors after upgrading to 8.6.0.4, 8.6.3.0 or 8.7.0.0. This is due to a shorter DNS request timeout in these PTFs.
SVAPAR-1357428.7.0.1A temporary network issue may cause unexpected 1585 DNS connection errors after upgrading to 8.6.0.4, 8.6.3.0 or 8.7.0.0. This is due to a shorter DNS request timeout in these PTFs.
SVAPAR-1357428.7.1.0A temporary network issue may cause unexpected 1585 DNS connection errors after upgrading to 8.6.0.4, 8.6.3.0 or 8.7.0.0. This is due to a shorter DNS request timeout in these PTFs.
SVAPAR-1361728.6.0.5VMware vCenter reports a disk expansion failure, prior to changing the provisioning policy.
SVAPAR-1362568.7.0.1Each Ethernet port can only have a single management IP address. Attempting to add a second management IP to the same port may cause multiple node warmstarts and a loss of access to data.
SVAPAR-1362568.7.1.0Each Ethernet port can only have a single management IP address. Attempting to add a second management IP to the same port may cause multiple node warmstarts and a loss of access to data.
SVAPAR-1364278.6.0.5When deleting multiple older snapshot versions, whilst simultaneously creating new snapshots, the system can run out of bitmap space, resulting in a bad snapshot map, repeated asserts, and a loss of access.
SVAPAR-1364278.7.0.1When deleting multiple older snapshot versions, whilst simultaneously creating new snapshots, the system can run out of bitmap space, resulting in a bad snapshot map, repeated asserts, and a loss of access.
SVAPAR-1364278.7.1.0When deleting multiple older snapshot versions, whilst simultaneously creating new snapshots, the system can run out of bitmap space, resulting in a bad snapshot map, repeated asserts, and a loss of access.
SVAPAR-1366778.6.0.6An unresponsive DNS server may cause a single node warmstart and the email process to get stuck.
SVAPAR-1366778.7.0.3An unresponsive DNS server may cause a single node warmstart and the email process to get stuck.
SVAPAR-1366778.7.2.0An unresponsive DNS server may cause a single node warmstart and the email process to get stuck.
SVAPAR-1370968.7.0.0An issue with the TPM on FS50xx may cause a chsystemcert command to fail.
SVAPAR-1372418.6.0.5When attempting to create a HyperSwap volume via the GUI with the preferred site in the secondary data centre, a CMMVC8709E 'the iogroups of cache memory storage are not in the same site as the storage groups' failure occurs.
SVAPAR-1372418.7.0.0When attempting to create a HyperSwap volume via the GUI with the preferred site in the secondary data centre, a CMMVC8709E 'the iogroups of cache memory storage are not in the same site as the storage groups' failure occurs.
SVAPAR-1278458.6.0.5When attempting to create a HyperSwap volume via the GUI with the preferred site in the secondary data centre, a CMMVC8709E 'the iogroups of cache memory storage are not in the same site as the storage groups' failure occurs.
SVAPAR-1278458.7.0.0When attempting to create a HyperSwap volume via the GUI with the preferred site in the secondary data centre, a CMMVC8709E 'the iogroups of cache memory storage are not in the same site as the storage groups' failure occurs.
SVAPAR-1372658.6.0.5Error when attempting to delete a HyperSwap volume with snapshots
SVAPAR-1372658.7.0.5Error when attempting to delete a HyperSwap volume with snapshots
SVAPAR-1372658.7.3.2Error when attempting to delete a HyperSwap volume with snapshots
SVAPAR-1373228.7.0.5A false 1627 event will be reported on an SVC stretched cluster after adding connectivity to more ports on a backend controller.
SVAPAR-1373228.7.2.0A false 1627 event will be reported on an SVC stretched cluster after adding connectivity to more ports on a backend controller.
SVAPAR-1373618.7.2.0A battery may incorrectly enter a failed state, if input power is removed within a small timing window
SVAPAR-1373619.1.0.0A battery may incorrectly enter a failed state, if input power is removed within a small timing window
SVAPAR-1374858.7.0.1Reseating a FlashSystem 50xx node canister at 8.7.0.0 may cause the partner node to reboot, causing temporary loss of access to data.
SVAPAR-1374858.7.1.0Reseating a FlashSystem 50xx node canister at 8.7.0.0 may cause the partner node to reboot, causing temporary loss of access to data.
SVAPAR-1375128.7.0.1A single-node warmstart may occur during a shrink operation on a thin-provisioned volume. This is caused by a timing window in the cache component.
SVAPAR-1375128.7.2.0A single-node warmstart may occur during a shrink operation on a thin-provisioned volume. This is caused by a timing window in the cache component.
SVAPAR-1379068.5.0.17A node warmstart may occur due to a timeout caused by FlashCopy bitmap cleaning, leading to a stalled software upgrade.
SVAPAR-1379068.6.0.5A node warmstart may occur due to a timeout caused by FlashCopy bitmap cleaning, leading to a stalled software upgrade.
SVAPAR-1379068.7.0.1A node warmstart may occur due to a timeout caused by FlashCopy bitmap cleaning, leading to a stalled software upgrade.
SVAPAR-1379068.7.2.0A node warmstart may occur due to a timeout caused by FlashCopy bitmap cleaning, leading to a stalled software upgrade.
SVAPAR-1382148.6.0.6When a volume group is assigned to an ownership group, creating a snapshot and populating a new volume group from the snapshot will cause a warmstart of the configuration node when 'lsvolumepopulation' is run.
SVAPAR-1382148.7.0.1When a volume group is assigned to an ownership group, creating a snapshot and populating a new volume group from the snapshot will cause a warmstart of the configuration node when 'lsvolumepopulation' is run.
SVAPAR-1382148.7.1.0When a volume group is assigned to an ownership group, creating a snapshot and populating a new volume group from the snapshot will cause a warmstart of the configuration node when 'lsvolumepopulation' is run.
SVAPAR-1382868.6.0.6If a direct-attached controller has NPIV enabled, 1625 errors will incorrectly be logged, indicating a controller misconfiguration.
SVAPAR-1382868.7.0.2If a direct-attached controller has NPIV enabled, 1625 errors will incorrectly be logged, indicating a controller misconfiguration.
SVAPAR-1382868.7.2.0If a direct-attached controller has NPIV enabled, 1625 errors will incorrectly be logged, indicating a controller misconfiguration.
SVAPAR-1384188.6.0.5Snap collections triggered by Storage Insights over cloud callhome time out before they have completed
SVAPAR-1384188.7.0.1Snap collections triggered by Storage Insights over cloud callhome time out before they have completed
SVAPAR-1384188.7.1.0Snap collections triggered by Storage Insights over cloud callhome time out before they have completed
SVAPAR-1388328.6.0.7Nodes using IP replication with compression may experience multiple node warmstarts due to a timing window in error recovery.
SVAPAR-1388328.7.0.4Nodes using IP replication with compression may experience multiple node warmstarts due to a timing window in error recovery.
SVAPAR-1388328.7.2.0Nodes using IP replication with compression may experience multiple node warmstarts due to a timing window in error recovery.
SVAPAR-1388598.6.0.6Collecting a Type 4 support package (Snap Type 4: Standard logs plus new statesaves) in the GUI can trigger an out of memory event causing the GUI process to be killed.
SVAPAR-1388598.7.0.1Collecting a Type 4 support package (Snap Type 4: Standard logs plus new statesaves) in the GUI can trigger an out of memory event causing the GUI process to be killed.
SVAPAR-1388598.7.2.0Collecting a Type 4 support package (Snap Type 4: Standard logs plus new statesaves) in the GUI can trigger an out of memory event causing the GUI process to be killed.
SVAPAR-1391188.5.0.17When logged into GUI as a user that is a member of the FlashCopy Administrator group, the GUI does not allow flashcopies to be created and options are greyed out.
SVAPAR-1391188.6.0.10When logged into GUI as a user that is a member of the FlashCopy Administrator group, the GUI does not allow flashcopies to be created and options are greyed out.
SVAPAR-1391188.7.0.3When logged into GUI as a user that is a member of the FlashCopy Administrator group, the GUI does not allow flashcopies to be created and options are greyed out.
SVAPAR-1391188.7.1.0When logged into GUI as a user that is a member of the FlashCopy Administrator group, the GUI does not allow flashcopies to be created and options are greyed out.
SVAPAR-1392058.5.0.17A node warmstart may occur due to a race condition between Fibre Channel adapter I/O processing and a link reset.
SVAPAR-1392058.6.0.10A node warmstart may occur due to a race condition between Fibre Channel adapter I/O processing and a link reset.
SVAPAR-1392058.7.0.1A node warmstart may occur due to a race condition between Fibre Channel adapter I/O processing and a link reset.
SVAPAR-1392058.7.1.0A node warmstart may occur due to a race condition between Fibre Channel adapter I/O processing and a link reset.
SVAPAR-1392478.6.0.6Very heavy write workload to a thin-provisioned volume may cause a single-node warmstart, due to a low-probability deadlock condition.
SVAPAR-1392478.7.0.1Very heavy write workload to a thin-provisioned volume may cause a single-node warmstart, due to a low-probability deadlock condition.
SVAPAR-1392478.7.2.0Very heavy write workload to a thin-provisioned volume may cause a single-node warmstart, due to a low-probability deadlock condition.
SVAPAR-1392608.6.0.10Heavy write workloads to thin-provisioned volumes may result in poor performance on those volumes, due to a lack of destage resources.
SVAPAR-1392608.7.0.1Heavy write workloads to thin-provisioned volumes may result in poor performance on those volumes, due to a lack of destage resources.
SVAPAR-1392608.7.2.0Heavy write workloads to thin-provisioned volumes may result in poor performance on those volumes, due to a lack of destage resources.
SVAPAR-1394918.7.0.7VMware hosts attached via NVMe may log errors related to opcode 0x5
SVAPAR-1394919.1.0.2VMware hosts attached via NVMe may log errors related to opcode 0x5
SVAPAR-1394919.1.1.0VMware hosts attached via NVMe may log errors related to opcode 0x5
SVAPAR-1399438.6.0.6A single node warmstart may occur when a host sends a high number of unexpected Fibre Channel frames.
SVAPAR-1399438.7.0.3A single node warmstart may occur when a host sends a high number of unexpected Fibre Channel frames.
SVAPAR-1399438.7.2.0A single node warmstart may occur when a host sends a high number of unexpected Fibre Channel frames.
SVAPAR-1400798.7.0.1The internal scheduler is blocked after requesting more flashcopy bitmap memory. This will cause the creation of new snapshots and removal of expired snapshots to fail.
SVAPAR-1400798.7.2.0The internal scheduler is blocked after requesting more flashcopy bitmap memory. This will cause the creation of new snapshots and removal of expired snapshots to fail.
SVAPAR-1400808.7.0.1Tier 2 warmstarts ending with nodes in service state while processing a long list of expired snapshots.
SVAPAR-1400808.7.1.0Tier 2 warmstarts ending with nodes in service state while processing a long list of expired snapshots.
SVAPAR-1405888.6.0.6A node warmstart may occur due to incorrect processing of NVMe host I/O offload commands
SVAPAR-1405888.7.0.3A node warmstart may occur due to incorrect processing of NVMe host I/O offload commands
SVAPAR-1405888.7.2.0A node warmstart may occur due to incorrect processing of NVMe host I/O offload commands
SVAPAR-1407818.6.0.6Successful login attempts to the configuration node via SSH are not communicated to the remote syslog server. Service assistant and GUI logins are correctly reported.
SVAPAR-1407818.7.0.3Successful login attempts to the configuration node via SSH are not communicated to the remote syslog server. Service assistant and GUI logins are correctly reported.
SVAPAR-1407818.7.2.0Successful login attempts to the configuration node via SSH are not communicated to the remote syslog server. Service assistant and GUI logins are correctly reported.
SVAPAR-1408928.6.0.10Excessive numbers of informational battery reconditioning events may be logged.
SVAPAR-1408928.7.0.3Excessive numbers of informational battery reconditioning events may be logged.
SVAPAR-1408928.7.2.1Excessive numbers of informational battery reconditioning events may be logged.
SVAPAR-1409268.7.0.2When a cluster partnership is removed, a timing window can result in an I/O timeout and a node warmstart.
SVAPAR-1409268.7.1.0When a cluster partnership is removed, a timing window can result in an I/O timeout and a node warmstart.
SVAPAR-1409948.5.0.17Expanding a volume via the GUI fails with CMMVC7019E because the volume size is not a multiple of 512 bytes.
SVAPAR-1409948.6.0.5Expanding a volume via the GUI fails with CMMVC7019E because the volume size is not a multiple of 512 bytes.
SVAPAR-1409948.7.0.1Expanding a volume via the GUI fails with CMMVC7019E because the volume size is not a multiple of 512 bytes.
SVAPAR-1409948.7.1.0Expanding a volume via the GUI fails with CMMVC7019E because the volume size is not a multiple of 512 bytes.
SVAPAR-1410018.6.0.5Unexpected error CMMVC9326E when adding a port to a host or creating a host.
SVAPAR-1410198.5.0.17The GUI may crash when a user group with the 3SiteAdmin role and remote users exist
SVAPAR-1410198.6.0.5The GUI may crash when a user group with the 3SiteAdmin role and remote users exist
SVAPAR-1410198.7.0.1The GUI may crash when a user group with the 3SiteAdmin role and remote users exist
SVAPAR-1410198.7.1.0The GUI may crash when a user group with the 3SiteAdmin role and remote users exist
SVAPAR-1410948.6.0.5On power failure, FS50xx systems with 25Gb ROCE adapters may fail to shut down gracefully, causing loss of cache data.
SVAPAR-1410948.6.3.0On power failure, FS50xx systems with 25Gb ROCE adapters may fail to shut down gracefully, causing loss of cache data.
SVAPAR-1410948.7.0.0On power failure, FS50xx systems with 25Gb ROCE adapters may fail to shut down gracefully, causing loss of cache data.
SVAPAR-1410988.7.0.1High peak latency causing access loss after recovering from SVAPAR-140079 and SVAPAR-140080.
SVAPAR-1410988.7.2.0High peak latency causing access loss after recovering from SVAPAR-140079 and SVAPAR-140080.
SVAPAR-1411128.7.0.1When using policy-based high availability and volume group snapshots, it is possible for an I/O timeout condition to trigger node warmstarts. This can happen if a system is disconnected for an extended period, and is then brought back online after a large amount of host I/O to the HA volumes.
SVAPAR-1411128.7.1.0When using policy-based high availability and volume group snapshots, it is possible for an I/O timeout condition to trigger node warmstarts. This can happen if a system is disconnected for an extended period, and is then brought back online after a large amount of host I/O to the HA volumes.
SVAPAR-1413068.6.0.6Changing the preferred node of a volume could trigger a cluster recovery causing brief loss of access to data
SVAPAR-1414678.7.0.1SNMPv3 traps may not be processed properly by the SNMP server configured in the system.
SVAPAR-1414678.7.1.0SNMPv3 traps may not be processed properly by the SNMP server configured in the system.
SVAPAR-1415598.6.0.5GUI shows: 'error occurred loading table data.' in the volume view after the first login attempt to the GUI. Volumes will still be visible in the 'Volumes by Pool' view. This is triggered if volumes are created with names containing a number between dashes, numbers after dashes, or other characters after numbers
SVAPAR-1415598.7.0.1GUI shows: 'error occurred loading table data.' in the volume view after the first login attempt to the GUI. Volumes will still be visible in the 'Volumes by Pool' view. This is triggered if volumes are created with names containing a number between dashes, numbers after dashes, or other characters after numbers
SVAPAR-1415598.7.1.0GUI shows: 'error occurred loading table data.' in the volume view after the first login attempt to the GUI. Volumes will still be visible in the 'Volumes by Pool' view. This is triggered if volumes are created with names containing a number between dashes, numbers after dashes, or other characters after numbers
SVAPAR-1372448.6.0.5GUI shows: 'error occurred loading table data.' in the volume view after the first login attempt to the GUI. Volumes will still be visible in the 'Volumes by Pool' view. This is triggered if volumes are created with names containing a number between dashes, numbers after dashes, or other characters after numbers
SVAPAR-1372448.7.0.1GUI shows: 'error occurred loading table data.' in the volume view after the first login attempt to the GUI. Volumes will still be visible in the 'Volumes by Pool' view. This is triggered if volumes are created with names containing a number between dashes, numbers after dashes, or other characters after numbers
SVAPAR-1372448.7.1.0GUI shows: 'error occurred loading table data.' in the volume view after the first login attempt to the GUI. Volumes will still be visible in the 'Volumes by Pool' view. This is triggered if volumes are created with names containing a number between dashes, numbers after dashes, or other characters after numbers
SVAPAR-1416848.7.0.1Prevent drive firmware upgrade with both '-force' and '-all' parameters, to avoid multiple drives going offline due to lack of redundancy.
SVAPAR-1416848.7.1.0Prevent drive firmware upgrade with both '-force' and '-all' parameters, to avoid multiple drives going offline due to lack of redundancy.
SVAPAR-1418768.7.0.1The GUI does not offer the option to create GM or GMCV relationships, even after remote_copy compatibility mode has been enabled.
SVAPAR-1419208.6.0.6Under specific scenarios, adding a snapshot to a volume group could trigger a cluster recovery causing brief loss of access to data.
SVAPAR-1419208.7.0.1Under specific scenarios, adding a snapshot to a volume group could trigger a cluster recovery causing brief loss of access to data.
SVAPAR-1419208.7.2.0Under specific scenarios, adding a snapshot to a volume group could trigger a cluster recovery causing brief loss of access to data.
SVAPAR-1419378.7.0.1In a Policy-based high availability configuration, when a SCSI Compare and Write command is sent to the non-Active Management System, and communication is lost between the systems while it is being processed, a node warmstart may occur.
SVAPAR-1419378.7.1.0In a Policy-based high availability configuration, when a SCSI Compare and Write command is sent to the non-Active Management System, and communication is lost between the systems while it is being processed, a node warmstart may occur.
SVAPAR-1419968.6.0.5Policy-based replication may not perform the necessary background synchronization to maintain an up-to-date copy of data at the DR site.
SVAPAR-1419968.7.0.0Policy-based replication may not perform the necessary background synchronization to maintain an up-to-date copy of data at the DR site.
SVAPAR-1420408.7.0.1A timing window related to logging of capacity warnings may cause multiple node warmstarts on a system with low free physical capacity on an FCM array.
SVAPAR-1420408.7.1.0A timing window related to logging of capacity warnings may cause multiple node warmstarts on a system with low free physical capacity on an FCM array.
SVAPAR-1420458.7.0.1A system which was previously running pre-8.6.0 software, and is now using policy-based high availability, may experience multiple node warmstarts when a PBHA failover is requested by the user.
SVAPAR-1420458.7.1.0A system which was previously running pre-8.6.0 software, and is now using policy-based high availability, may experience multiple node warmstarts when a PBHA failover is requested by the user.
SVAPAR-1420818.7.0.3If an error occurs during creation of a replication policy, multiple node warmstarts may occur, causing a temporary loss of access to data.
SVAPAR-1420818.7.1.0If an error occurs during creation of a replication policy, multiple node warmstarts may occur, causing a temporary loss of access to data.
SVAPAR-1420938.7.0.1The 'Upload support package' option is missing from the Support Package GUI
SVAPAR-1420938.7.1.0The 'Upload support package' option is missing from the Support Package GUI
SVAPAR-1421908.7.1.0The user role descriptions in the GUI are wrong for the CopyOperator.
SVAPAR-1421909.1.0.0The user role descriptions in the GUI are wrong for the CopyOperator.
SVAPAR-1421918.7.0.0When a child pool contains thin-provisioned volumes, running out of space in the child pool may cause volumes outside the child pool to be taken offline.
SVAPAR-1421938.7.0.3If an IP Replication partnership only has link2 configured, then the GUI Partnership shows type Fibre Channel for the IPv4 connection.
SVAPAR-1421938.7.1.0If an IP Replication partnership only has link2 configured, then the GUI Partnership shows type Fibre Channel for the IPv4 connection.
SVAPAR-1421948.6.0.6GUI volume creation does not honour the preferred node that was selected.
SVAPAR-1421948.7.0.3GUI volume creation does not honour the preferred node that was selected.
SVAPAR-1421948.7.1.0GUI volume creation does not honour the preferred node that was selected.
SVAPAR-1422878.6.0.6Loss of access to data when running certain snapshot commands at the exact time that a volume group snapshot is stopping
SVAPAR-1422878.7.0.3Loss of access to data when running certain snapshot commands at the exact time that a volume group snapshot is stopping
SVAPAR-1422878.7.2.0Loss of access to data when running certain snapshot commands at the exact time that a volume group snapshot is stopping
SVAPAR-1429398.7.2.0Upgrade to 8.7.1 on FS5045 with policy-based replication or high availability is not supported.
SVAPAR-1429399.1.0.0Upgrade to 8.7.1 on FS5045 with policy-based replication or high availability is not supported.
SVAPAR-1429408.7.0.7IO processing unnecessarily stalled for several seconds following a node coming online
SVAPAR-1429409.1.0.0IO processing unnecessarily stalled for several seconds following a node coming online
SVAPAR-1434808.7.0.1When using asynchronous policy based replication on low bandwidth links with snapshot clone/restore, an undetected data corruption may occur. This issue only affects 8.7.0.0.
SVAPAR-1434808.7.1.0When using asynchronous policy based replication on low bandwidth links with snapshot clone/restore, an undetected data corruption may occur. This issue only affects 8.7.0.0.
SVAPAR-1435748.5.0.17It is possible for a battery register read to fail, causing a battery to unexpectedly be reported as offline. The issue will persist until the node is rebooted.
SVAPAR-1435748.6.0.6It is possible for a battery register read to fail, causing a battery to unexpectedly be reported as offline. The issue will persist until the node is rebooted.
SVAPAR-1435748.7.0.0It is possible for a battery register read to fail, causing a battery to unexpectedly be reported as offline. The issue will persist until the node is rebooted.
SVAPAR-1436218.7.0.4REST API returns HTTP status 502 after a timeout of 30 seconds instead of 180 seconds
SVAPAR-1436218.7.2.0REST API returns HTTP status 502 after a timeout of 30 seconds instead of 180 seconds
SVAPAR-1438908.6.0.6If a HyperSwap volume is expanded shortly after disabling 3-site replication, the expandvolume command may fail to complete. This will lead to a loss of configuration access.
SVAPAR-1438908.7.0.3If a HyperSwap volume is expanded shortly after disabling 3-site replication, the expandvolume command may fail to complete. This will lead to a loss of configuration access.
SVAPAR-1439978.7.2.0A single node warmstart may occur when the upper cache reaches 100% full while the partner node in the I/O group is offline
SVAPAR-1439979.1.0.0A single node warmstart may occur when the upper cache reaches 100% full while the partner node in the I/O group is offline
SVAPAR-1440008.6.0.6A high number of abort commands from an NVMe host in a short time may cause a Fibre Channel port on the storage to go offline, leading to degraded hosts.
SVAPAR-1440008.7.0.3A high number of abort commands from an NVMe host in a short time may cause a Fibre Channel port on the storage to go offline, leading to degraded hosts.
SVAPAR-1440008.7.2.0A high number of abort commands from an NVMe host in a short time may cause a Fibre Channel port on the storage to go offline, leading to degraded hosts.
SVAPAR-1440338.7.2.0Spurious 1370 events against SAS drives which are not members of an array.
SVAPAR-1440339.1.0.0Spurious 1370 events against SAS drives which are not members of an array.
SVAPAR-1440368.6.0.6Replacement of an industry standard NVMe drive may fail until both nodes are warmstarted.
SVAPAR-1440368.7.0.3Replacement of an industry standard NVMe drive may fail until both nodes are warmstarted.
SVAPAR-1440368.7.2.0Replacement of an industry standard NVMe drive may fail until both nodes are warmstarted.
SVAPAR-1440628.7.2.0A node may warmstart due to a problem with IO buffer management in the cache component.
SVAPAR-1440629.1.0.0A node may warmstart due to a problem with IO buffer management in the cache component.
SVAPAR-1440688.6.0.6If a volume group snapshot is created at the same time as an existing snapshot is deleting, all nodes may warmstart, causing a loss of access to data. This can only happen if there is insufficient FlashCopy bitmap space for the new snapshot.
SVAPAR-1440688.7.0.1If a volume group snapshot is created at the same time as an existing snapshot is deleting, all nodes may warmstart, causing a loss of access to data. This can only happen if there is insufficient FlashCopy bitmap space for the new snapshot.
SVAPAR-1440688.7.2.0If a volume group snapshot is created at the same time as an existing snapshot is deleting, all nodes may warmstart, causing a loss of access to data. This can only happen if there is insufficient FlashCopy bitmap space for the new snapshot.
SVAPAR-1440698.7.2.0On a system with SAS drives, if a node canister is replaced while an unsupported drive is in the enclosure, all nodes may warmstart simultaneously, causing a loss of access to data.
SVAPAR-1440699.1.0.0On a system with SAS drives, if a node canister is replaced while an unsupported drive is in the enclosure, all nodes may warmstart simultaneously, causing a loss of access to data.
SVAPAR-1440708.6.0.6After changing the system name, the iSCSI IQNs may still contain the old system name.
SVAPAR-1440708.7.0.3After changing the system name, the iSCSI IQNs may still contain the old system name.
SVAPAR-1440708.7.2.0After changing the system name, the iSCSI IQNs may still contain the old system name.
SVAPAR-1442718.6.0.6An offline node that is protected by a spare node may take longer than expected to come online. This may result in a temporary loss of Fibre Channel connectivity to the hosts
SVAPAR-1442718.7.0.3An offline node that is protected by a spare node may take longer than expected to come online. This may result in a temporary loss of Fibre Channel connectivity to the hosts
SVAPAR-1442718.7.2.0An offline node that is protected by a spare node may take longer than expected to come online. This may result in a temporary loss of Fibre Channel connectivity to the hosts
SVAPAR-1442728.6.0.6IO processing unnecessarily stalled for several seconds following a node coming online
SVAPAR-1442728.7.0.3IO processing unnecessarily stalled for several seconds following a node coming online
SVAPAR-1442728.7.2.0IO processing unnecessarily stalled for several seconds following a node coming online
SVAPAR-1443898.7.2.0In an SVC stretched cluster, adding a second vdisk copy to a PBR-enabled volume using the GUI does not automatically add a copy to the change volume. This can cause subsequent vdisk migration requests to fail.
SVAPAR-1443899.1.0.0In an SVC stretched cluster, adding a second vdisk copy to a PBR-enabled volume using the GUI does not automatically add a copy to the change volume. This can cause subsequent vdisk migration requests to fail.
SVAPAR-1445158.7.2.0When trying to increase FlashCopy or volume mirroring bitmap memory, the GUI may incorrectly report that the new value exceeds combined memory limits.
SVAPAR-1445159.1.0.0When trying to increase FlashCopy or volume mirroring bitmap memory, the GUI may incorrectly report that the new value exceeds combined memory limits.
SVAPAR-1452788.7.2.0Upgrade from 8.7.0 to 8.7.1 may cause an invalid internal state, if policy-based replication is in use. This may lead to node warmstarts on the recovery system, or cause replication to stop.
SVAPAR-1452789.1.0.0Upgrade from 8.7.0 to 8.7.1 may cause an invalid internal state, if policy-based replication is in use. This may lead to node warmstarts on the recovery system, or cause replication to stop.
SVAPAR-1453558.7.0.2On FS5045 with policy-based high availability or replication, an out-of-memory issue may cause frequent 4110 events.
SVAPAR-1453558.7.1.0On FS5045 with policy-based high availability or replication, an out-of-memory issue may cause frequent 4110 events.
SVAPAR-1458928.7.3.0An unfixed error might display an incorrect fixed timestamp
SVAPAR-1458929.1.0.0An unfixed error might display an incorrect fixed timestamp
SVAPAR-1459768.7.0.3On FlashSystem 7300, fan speeds can vary at 3 second intervals even at a constant temperature
SVAPAR-1459768.7.2.2On FlashSystem 7300, fan speeds can vary at 3 second intervals even at a constant temperature
SVAPAR-1459768.7.3.0On FlashSystem 7300, fan speeds can vary at 3 second intervals even at a constant temperature
SVAPAR-1460648.7.0.3Systems using asynchronous policy-based replication may incorrectly log events indicating the recovery point objective (RPO) has been exceeded.
SVAPAR-1460648.7.2.0Systems using asynchronous policy-based replication may incorrectly log events indicating the recovery point objective (RPO) has been exceeded.
SVAPAR-1460978.7.2.0On systems running 8.7.0 or 8.7.1 software with NVMe drives, at times of particularly high workload, there is a low probability of a single-node warmstart.
SVAPAR-1460979.1.0.0On systems running 8.7.0 or 8.7.1 software with NVMe drives, at times of particularly high workload, there is a low probability of a single-node warmstart.
SVAPAR-1465228.7.0.2FlashCopy background copy and cleaning may get stuck after a node restarts. This can also affect Global Mirror with Change Volumes, volume group snapshots, and policy-based replication
SVAPAR-1465228.7.2.0FlashCopy background copy and cleaning may get stuck after a node restarts. This can also affect Global Mirror with Change Volumes, volume group snapshots, and policy-based replication
SVAPAR-1465768.7.0.4When a quorum device is disconnected from a system twice in a short period, multiple node warmstarts can occur.
SVAPAR-1465768.7.3.1When a quorum device is disconnected from a system twice in a short period, multiple node warmstarts can occur.
SVAPAR-1465918.7.0.3Single node asserts may occur in systems using policy-based high availability if the active quorum application is restarted.
SVAPAR-1465918.7.3.0Single node asserts may occur in systems using policy-based high availability if the active quorum application is restarted.
SVAPAR-1466408.7.2.0When volume latency increases from below 1ms to above 1ms, the units in the GUI performance monitor will be incorrect.
SVAPAR-1466409.1.0.0When volume latency increases from below 1ms to above 1ms, the units in the GUI performance monitor will be incorrect.
SVAPAR-1472239.1.0.0System does not notify hosts of ALUA changes prior to hot spare node failback. This may prevent host path failover, leading to loss of access to data.
SVAPAR-1473618.6.0.6If a software upgrade completes at the same time as performance data is being sent to IBM Storage Insights, a single node warmstart may occur.
SVAPAR-1473618.7.0.3If a software upgrade completes at the same time as performance data is being sent to IBM Storage Insights, a single node warmstart may occur.
SVAPAR-1473618.7.3.0If a software upgrade completes at the same time as performance data is being sent to IBM Storage Insights, a single node warmstart may occur.
SVAPAR-1476468.6.0.6Node goes offline when a non-fatal PCIe error on the Fibre Channel adapter is encountered. It's possible for this to occur on both nodes simultaneously.
SVAPAR-1476468.7.0.3Node goes offline when a non-fatal PCIe error on the Fibre Channel adapter is encountered. It's possible for this to occur on both nodes simultaneously.
SVAPAR-1476468.7.2.0Node goes offline when a non-fatal PCIe error on the Fibre Channel adapter is encountered. It's possible for this to occur on both nodes simultaneously.
SVAPAR-1476478.7.3.0A 32Gb Fibre Channel adapter may unexpectedly reset, causing a delay in communication via that adapter's ports.
SVAPAR-1476479.1.0.0A 32Gb Fibre Channel adapter may unexpectedly reset, causing a delay in communication via that adapter's ports.
SVAPAR-1478708.7.0.3Occasionally, deleting a thin-clone volume that is deduplicated may result in a single node warmstart and a 1340 event, causing a pool to temporarily go offline.
SVAPAR-1478708.7.2.0Occasionally, deleting a thin-clone volume that is deduplicated may result in a single node warmstart and a 1340 event, causing a pool to temporarily go offline.
SVAPAR-1479068.6.0.6All nodes may warmstart in a SAN Volume Controller cluster consisting of SV3 nodes under heavy load, if a reset occurs on a Fibre Channel adapter used for local node to node communication.
SVAPAR-1479068.7.0.3All nodes may warmstart in a SAN Volume Controller cluster consisting of SV3 nodes under heavy load, if a reset occurs on a Fibre Channel adapter used for local node to node communication.
SVAPAR-1479068.7.3.0All nodes may warmstart in a SAN Volume Controller cluster consisting of SV3 nodes under heavy load, if a reset occurs on a Fibre Channel adapter used for local node to node communication.
SVAPAR-1479788.7.0.3A system running 8.7.0 and using policy-based replication may experience additional warmstarts during the recovery from a single node warmstart
SVAPAR-1479788.7.1.0A system running 8.7.0 and using policy-based replication may experience additional warmstarts during the recovery from a single node warmstart
SVAPAR-1480328.7.0.3When using policy-based high availability, a specific, unusual sequence of configuration commands can cause a node warmstart and prevent configuration commands from completing.
SVAPAR-1480328.7.3.0When using policy-based high availability, a specific, unusual sequence of configuration commands can cause a node warmstart and prevent configuration commands from completing.
SVAPAR-1480498.7.0.1A config node may warmstart during the failback process of the online_spare node to the spare node after executing 'swapnode -failback' command, resulting in a loss of access.
SVAPAR-1480498.7.2.0A config node may warmstart during the failback process of the online_spare node to the spare node after executing 'swapnode -failback' command, resulting in a loss of access.
SVAPAR-1482368.7.0.4iSER hosts are unable to access volumes on systems running 8.7.0.2
SVAPAR-1482518.7.2.0Merging partitions on 8.7.1.0 software may trigger a single-node warmstart.
SVAPAR-1482519.1.0.0Merging partitions on 8.7.1.0 software may trigger a single-node warmstart.
SVAPAR-1482878.7.0.2On FS9500, FS5xxx or SV3 systems running 8.7.x software, it is not possible to enable USB encryption using the GUI, because the system does not correctly report how many USB devices the key has been written to. The command-line interface is not affected by this issue.
SVAPAR-1482878.7.3.0On FS9500, FS5xxx or SV3 systems running 8.7.x software, it is not possible to enable USB encryption using the GUI, because the system does not correctly report how many USB devices the key has been written to. The command-line interface is not affected by this issue.
SVAPAR-1483648.7.0.2Systems whose certificate uses a SHA1 signature may experience node warmstarts. LDAP authentication using servers returning an OCSP response signed with SHA1 may fail.
SVAPAR-1483648.7.3.1Systems whose certificate uses a SHA1 signature may experience node warmstarts. LDAP authentication using servers returning an OCSP response signed with SHA1 may fail.
SVAPAR-1484668.7.0.3During upgrade from 8.6.1 or earlier, to 8.6.2 or later, the lsvdiskfcmappings command can result in a node warmstart if ownership groups are in use. This may result in an outage if the partner node is offline.
SVAPAR-1484958.7.0.3Multiple node warmstarts on systems running v8.7 due to a small timing window when starting a snapshot if the source volume does not have any other snapshots. This issue is more likely to occur on systems using policy-based replication with no user-created snapshots.
SVAPAR-1484958.7.3.0Multiple node warmstarts on systems running v8.7 due to a small timing window when starting a snapshot if the source volume does not have any other snapshots. This issue is more likely to occur on systems using policy-based replication with no user-created snapshots.
SVAPAR-1485048.7.0.3On a system using asynchronous policy-based replication, a timing window during volume creation may cause node warmstarts on the recovery system.
SVAPAR-1485048.7.3.0On a system using asynchronous policy-based replication, a timing window during volume creation may cause node warmstarts on the recovery system.
SVAPAR-1486068.7.0.7Storage Insights log collection may fail with the message "another wrapper is running:". Sometimes this prevents future log upload requests from starting.
SVAPAR-1486069.1.0.0Storage Insights log collection may fail with the message "another wrapper is running:". Sometimes this prevents future log upload requests from starting.
SVAPAR-1486438.7.0.3Changing the management IP address on 8.7.0 software does not update the console_IP field in lspartnership and lssystem output. This can cause the management GUI and Storage Insights to display the wrong IP address.
SVAPAR-1486438.7.1.0Changing the management IP address on 8.7.0 software does not update the console_IP field in lspartnership and lssystem output. This can cause the management GUI and Storage Insights to display the wrong IP address.
SVAPAR-1489878.5.0.17SVC model SV1 nodes running 8.5.0.13 may be unable to access keys from USB sticks when using USB encryption
SVAPAR-1489878.6.0.6SVC model SV1 nodes running 8.5.0.13 may be unable to access keys from USB sticks when using USB encryption
SVAPAR-1489878.7.3.0SVC model SV1 nodes running 8.5.0.13 may be unable to access keys from USB sticks when using USB encryption
SVAPAR-1489879.1.0.0SVC model SV1 nodes running 8.5.0.13 may be unable to access keys from USB sticks when using USB encryption
SVAPAR-1499838.6.0.6During an upgrade from 8.5.0.10 or higher to 8.6.0 or higher, a medium error on a quorum disk may cause a node warmstart. If the partner node is offline at the same time, this may cause loss of access.
SVAPAR-1499838.7.0.3During an upgrade from 8.5.0.10 or higher to 8.6.0 or higher, a medium error on a quorum disk may cause a node warmstart. If the partner node is offline at the same time, this may cause loss of access.
SVAPAR-1499838.7.3.0During an upgrade from 8.5.0.10 or higher to 8.6.0 or higher, a medium error on a quorum disk may cause a node warmstart. If the partner node is offline at the same time, this may cause loss of access.
SVAPAR-1501988.7.2.1Multiple node warmstarts (causing loss of access to data) may occur on 8.7.1 and 8.7.2 software, when deleting a volume that is in a volume group, and previously received a persistent reservation request.
SVAPAR-1501989.1.0.0Multiple node warmstarts (causing loss of access to data) may occur on 8.7.1 and 8.7.2 software, when deleting a volume that is in a volume group, and previously received a persistent reservation request.
SVAPAR-1504338.7.2.1In certain policy-based 3-site replication configurations, a loss of connectivity between HA systems may cause I/O timeouts and loss of access to data.
SVAPAR-1504339.1.0.0In certain policy-based 3-site replication configurations, a loss of connectivity between HA systems may cause I/O timeouts and loss of access to data.
SVAPAR-1506638.7.2.1Some FCM3 drives may go offline on upgrade to 8.7.2.0.
SVAPAR-1506639.1.0.0Some FCM3 drives may go offline on upgrade to 8.7.2.0.
SVAPAR-1507648.7.2.1At 8.7.2.0, loss of access to vVols may occur after node failover if the host rebind operation fails.
SVAPAR-1507649.1.0.0At 8.7.2.0, loss of access to vVols may occur after node failover if the host rebind operation fails.
SVAPAR-1508328.7.0.3Upgrading a FlashSystem 5200 to a FlashSystem 5300 may fail if policy-based replication is enabled. The first node to upgrade may assert repeatedly, preventing a concurrent upgrade.
SVAPAR-1508328.7.3.0Upgrading a FlashSystem 5200 to a FlashSystem 5300 may fail if policy-based replication is enabled. The first node to upgrade may assert repeatedly, preventing a concurrent upgrade.
SVAPAR-1511018.7.0.3Unable to create new volumes in a volume group with a Policy-based High Availability replication policy using the GUI. The error returned is "The selected volume group is a recovery copy and no new volumes can be created in the group."
SVAPAR-1511018.7.2.1Unable to create new volumes in a volume group with a Policy-based High Availability replication policy using the GUI. The error returned is "The selected volume group is a recovery copy and no new volumes can be created in the group."
SVAPAR-1512858.5.0.17In a system with multiple I/O groups and a Data Reduction Pool, multiple nodes may warmstart (causing loss of access to data) if all thin or compressed volumes are deleted in one of the I/O groups, and a new volume is then created. This can only happen if the I/O group's node hardware has been upgraded to a different model.
SVAPAR-1512858.6.0.10In a system with multiple I/O groups and a Data Reduction Pool, multiple nodes may warmstart (causing loss of access to data) if all thin or compressed volumes are deleted in one of the I/O groups, and a new volume is then created. This can only happen if the I/O group's node hardware has been upgraded to a different model.
SVAPAR-1512858.7.0.5In a system with multiple I/O groups and a Data Reduction Pool, multiple nodes may warmstart (causing loss of access to data) if all thin or compressed volumes are deleted in one of the I/O groups, and a new volume is then created. This can only happen if the I/O group's node hardware has been upgraded to a different model.
SVAPAR-1512858.7.3.0In a system with multiple I/O groups and a Data Reduction Pool, multiple nodes may warmstart (causing loss of access to data) if all thin or compressed volumes are deleted in one of the I/O groups, and a new volume is then created. This can only happen if the I/O group's node hardware has been upgraded to a different model.
SVAPAR-1516398.6.0.6If Two-Person Integrity is in use, multiple node warmstarts may occur when removing a user with remote authentication and an SSH key.
SVAPAR-1516398.7.0.3If Two-Person Integrity is in use, multiple node warmstarts may occur when removing a user with remote authentication and an SSH key.
SVAPAR-1516398.7.2.2If Two-Person Integrity is in use, multiple node warmstarts may occur when removing a user with remote authentication and an SSH key.
SVAPAR-1516398.7.3.0If Two-Person Integrity is in use, multiple node warmstarts may occur when removing a user with remote authentication and an SSH key.
SVAPAR-1519658.5.0.17The time zone in performance XML files is displayed incorrectly for some time zones during daylight saving time. This can impact performance monitoring tools such as Storage Insights.
SVAPAR-1519658.6.0.6The time zone in performance XML files is displayed incorrectly for some time zones during daylight saving time. This can impact performance monitoring tools such as Storage Insights.
SVAPAR-1519658.7.0.3The time zone in performance XML files is displayed incorrectly for some time zones during daylight saving time. This can impact performance monitoring tools such as Storage Insights.
SVAPAR-1519658.7.3.0The time zone in performance XML files is displayed incorrectly for some time zones during daylight saving time. This can impact performance monitoring tools such as Storage Insights.
SVAPAR-1519758.6.0.6In systems using IP replication, a CPU resource allocation change introduced in 8.6.0.0 release could cause delays in node to node communication, affecting overall write performance.
SVAPAR-1520198.6.0.6A single node assert may occur, potentially leading to the loss of the config node, when running the rmfcmap command with the force flag enabled. This can happen if a vdisk used by both FlashCopy and Remote Copy was previously moved between I/O groups.
SVAPAR-1520198.7.0.3A single node assert may occur, potentially leading to the loss of the config node, when running the rmfcmap command with the force flag enabled. This can happen if a vdisk used by both FlashCopy and Remote Copy was previously moved between I/O groups.
SVAPAR-1520768.7.0.3The GUI may notify the user about all new releases, even if the system is configured to notify only for Long-Term Support releases.
SVAPAR-1520768.7.3.0The GUI may notify the user about all new releases, even if the system is configured to notify only for Long-Term Support releases.
SVAPAR-1521188.7.0.5Large FlashCopy dependency chains with many mappings stopping at the same time could cause a loss of access to data on systems running 8.7.0
SVAPAR-1521188.7.3.0Large FlashCopy dependency chains with many mappings stopping at the same time could cause a loss of access to data on systems running 8.7.0
SVAPAR-1523798.7.0.4When using policy-based high availability with multiple partitions, where partitions are replicating in both directions, it is possible to see a single node warmstart on each system. This is due to a deadlock condition related to I/O forwarding, triggered by a large write or unmap spike at the non-preferred site for the partition.
SVAPAR-1523798.7.3.0When using policy-based high availability with multiple partitions, where partitions are replicating in both directions, it is possible to see a single node warmstart on each system. This is due to a deadlock condition related to I/O forwarding, triggered by a large write or unmap spike at the non-preferred site for the partition.
SVAPAR-1524728.7.3.0Volume group details view in the GUI might show a blank page for FlashSystem 5045 systems running 8.7.1
SVAPAR-1524729.1.0.0Volume group details view in the GUI might show a blank page for FlashSystem 5045 systems running 8.7.1
SVAPAR-1528808.7.0.3The service IP may not be available after a node reboot, if a timeout occurs when the system tries to bring up the IP address.
SVAPAR-1528808.7.2.0The service IP may not be available after a node reboot, if a timeout occurs when the system tries to bring up the IP address.
SVAPAR-1529028.7.1.0If a system is using asynchronous policy-based replication, certain unusual host I/O workloads can cause an I/O timeout to be incorrectly detected, triggering node warmstarts at the recovery site.
SVAPAR-1529029.1.0.0If a system is using asynchronous policy-based replication, certain unusual host I/O workloads can cause an I/O timeout to be incorrectly detected, triggering node warmstarts at the recovery site.
SVAPAR-1529128.7.0.3The finderr CLI command may produce no output on 8.7.0, 8.7.1 or 8.7.2.
SVAPAR-1529128.7.3.0The finderr CLI command may produce no output on 8.7.0, 8.7.1 or 8.7.2.
SVAPAR-1529258.7.0.5The lsfabricport command may fail to return output, if a remote Fibre Channel port has an unknown port speed. In 8.7.2, this will prevent configuration backup from completing.
SVAPAR-1529258.7.2.2The lsfabricport command may fail to return output, if a remote Fibre Channel port has an unknown port speed. In 8.7.2, this will prevent configuration backup from completing.
SVAPAR-1529258.7.3.0The lsfabricport command may fail to return output, if a remote Fibre Channel port has an unknown port speed. In 8.7.2, this will prevent configuration backup from completing.
SVAPAR-1532368.7.0.3Upgrade from 8.5.x or 8.6.x to 8.7.0 may cause a single node warmstart on systems with USB encryption enabled. This can cause the upgrade to stall and require manual intervention to complete the upgrade - however during the warmstart the partner handles I/O, so there is no loss of access.
SVAPAR-1532468.6.0.7A hung condition in Remote Receive IOs (RRI) for volume groups can lead to warmstarts on multiple nodes.
SVAPAR-1532468.7.0.3A hung condition in Remote Receive IOs (RRI) for volume groups can lead to warmstarts on multiple nodes.
SVAPAR-1532468.7.1.0A hung condition in Remote Receive IOs (RRI) for volume groups can lead to warmstarts on multiple nodes.
SVAPAR-1532698.7.0.3A node warmstart may occur due to a stalled FlashCopy mapping, when policy-based replication is used with FlashCopy or snapshots
SVAPAR-1532698.7.2.0A node warmstart may occur due to a stalled FlashCopy mapping, when policy-based replication is used with FlashCopy or snapshots
SVAPAR-1533108.7.3.0Adding an HA policy to a partition may fail with a CMMVC1249E error. This will only happen if a DR partition in a 3-site configuration is deleted, a new partition is created with the same ID, and the user attempts to add an HA policy to that partition.
SVAPAR-1533109.1.0.0Adding an HA policy to a partition may fail with a CMMVC1249E error. This will only happen if a DR partition in a 3-site configuration is deleted, a new partition is created with the same ID, and the user attempts to add an HA policy to that partition.
SVAPAR-1535848.7.2.0A node warmstart may occur when a system using policy-based high availability loses connectivity to the remote system. In rare instances, both nodes in the system can warmstart at the same time. This is due to an inter-node messaging timing window.
SVAPAR-1535849.1.0.0A node warmstart may occur when a system using policy-based high availability loses connectivity to the remote system. In rare instances, both nodes in the system can warmstart at the same time. This is due to an inter-node messaging timing window.
SVAPAR-1541008.7.3.0A node warmstart may occur to clear the condition when Fibre Channel adapter firmware has started processing a target I/O request, but has failed the request with status "Invalid Receive Exchange Address".
SVAPAR-1541009.1.0.0A node warmstart may occur to clear the condition when Fibre Channel adapter firmware has started processing a target I/O request, but has failed the request with status "Invalid Receive Exchange Address".
SVAPAR-1543879.1.0.0Running multiple supportupload commands in quick succession may cause an out of memory condition, which leads to a node warmstart.
SVAPAR-1543998.7.3.0Policy-based high availability may be suspended and unable to restart after an upgrade to 8.7.2.x, due to a timing window.
SVAPAR-1543999.1.0.0Policy-based high availability may be suspended and unable to restart after an upgrade to 8.7.2.x, due to a timing window.
SVAPAR-1545028.6.0.6A single node warmstart may occur on systems with a very large number of vdisk-host maps, caused by a timeout during host Unit Attention processing.
SVAPAR-1545028.7.0.5A single node warmstart may occur on systems with a very large number of vdisk-host maps, caused by a timeout during host Unit Attention processing.
SVAPAR-1545028.7.3.0A single node warmstart may occur on systems with a very large number of vdisk-host maps, caused by a timeout during host Unit Attention processing.
SVAPAR-1547758.7.0.5The user role requirements for the restorefromsnapshot and refreshfromsnapshot commands are incorrect, meaning a command run by a user may be rejected when it should be accepted.
SVAPAR-1547758.7.1.0The user role requirements for the restorefromsnapshot and refreshfromsnapshot commands are incorrect, meaning a command run by a user may be rejected when it should be accepted.
SVAPAR-1547758.7.3.2The user role requirements for the restorefromsnapshot and refreshfromsnapshot commands are incorrect, meaning a command run by a user may be rejected when it should be accepted.
SVAPAR-1549638.7.3.0On systems with 8.7.2 software, a single node assert may occur due to a race condition when deleting volumes, hosts or host mappings that are part of a policy-based high availability partition.
SVAPAR-1549639.1.0.0On systems with 8.7.2 software, a single node assert may occur due to a race condition when deleting volumes, hosts or host mappings that are part of a policy-based high availability partition.
SVAPAR-1553958.6.0.10Hardware failure of a node at the exact moment that a volume is being created can result in an invalid cache state. If I/O is received by that volume before the failed node recovers, node warmstarts may cause loss of access to data.
SVAPAR-1553958.7.0.7Hardware failure of a node at the exact moment that a volume is being created can result in an invalid cache state. If I/O is received by that volume before the failed node recovers, node warmstarts may cause loss of access to data.
SVAPAR-1553959.1.0.0Hardware failure of a node at the exact moment that a volume is being created can result in an invalid cache state. If I/O is received by that volume before the failed node recovers, node warmstarts may cause loss of access to data.
SVAPAR-1554378.7.3.0When enabling replication for a volume group, there is a very low probability that the DR system might detect an invalid state, due to a timing window between creation of the volume group and the volumes. If this happens, both nodes at the DR system might warmstart at the same time.
SVAPAR-1554379.1.0.0When enabling replication for a volume group, there is a very low probability that the DR system might detect an invalid state, due to a timing window between creation of the volume group and the volumes. If this happens, both nodes at the DR system might warmstart at the same time.
SVAPAR-1555688.6.0.7On FS9500 or SV3 systems, batteries may prematurely hit end of life and go offline.
SVAPAR-1555688.7.0.4On FS9500 or SV3 systems, batteries may prematurely hit end of life and go offline.
SVAPAR-1555688.7.3.0On FS9500 or SV3 systems, batteries may prematurely hit end of life and go offline.
SVAPAR-1556568.7.3.0Multiple node asserts when removing a VDisk copy (or adding a copy with the autodelete parameter) from a policy-based replication recovery volume
SVAPAR-1556569.1.0.0Multiple node asserts when removing a VDisk copy (or adding a copy with the autodelete parameter) from a policy-based replication recovery volume
SVAPAR-1556978.6.0.10Loss of access to data caused by a partial failure of an internal PCI express bus
SVAPAR-1556978.7.0.6Loss of access to data caused by a partial failure of an internal PCI express bus
SVAPAR-1556978.7.3.3Loss of access to data caused by a partial failure of an internal PCI express bus
SVAPAR-1556979.1.0.0Loss of access to data caused by a partial failure of an internal PCI express bus
SVAPAR-1557708.7.0.5Changing the replication policy on a volume group to a new one might require a full resynchronization
SVAPAR-1557708.7.3.2Changing the replication policy on a volume group to a new one might require a full resynchronization
SVAPAR-1558248.7.0.4FS5200 and FS5300 systems with iWARP adapters and 8.7.0.3 or 8.7.3.0 software may experience an out-of-memory condition.
SVAPAR-1558248.7.3.1FS5200 and FS5300 systems with iWARP adapters and 8.7.0.3 or 8.7.3.0 software may experience an out-of-memory condition.
SVAPAR-1561418.6.0.10Fibre Channel host ports might ignore PLOGI requests for a very short duration after port startup
SVAPAR-1561418.7.0.5Fibre Channel host ports might ignore PLOGI requests for a very short duration after port startup
SVAPAR-1561418.7.3.0Fibre Channel host ports might ignore PLOGI requests for a very short duration after port startup
SVAPAR-1561428.7.3.0Persistent reservation requests for volumes configured with Policy-based High Availability might be rejected during a small timing window after a node comes online
SVAPAR-1561429.1.0.0Persistent reservation requests for volumes configured with Policy-based High Availability might be rejected during a small timing window after a node comes online
SVAPAR-1561468.7.3.0The GUI's encryption panel displays "Encryption is not fully enabled" when encryption is enabled but the encryption recovery key has not been configured.
SVAPAR-1561469.1.0.0The GUI's encryption panel displays "Encryption is not fully enabled" when encryption is enabled but the encryption recovery key has not been configured.
SVAPAR-1561558.6.0.10Repeated node warmstarts may occur if an unsupported direct-attached Fibre-channel host sends unsolicited frames to the nodes.
SVAPAR-1561558.7.0.5Repeated node warmstarts may occur if an unsupported direct-attached Fibre-channel host sends unsolicited frames to the nodes.
SVAPAR-1561558.7.3.0Repeated node warmstarts may occur if an unsupported direct-attached Fibre-channel host sends unsolicited frames to the nodes.
SVAPAR-1561688.6.0.10A single node warmstart may occur after running the svcinfo traceroute CLI command.
SVAPAR-1561688.7.0.5A single node warmstart may occur after running the svcinfo traceroute CLI command.
SVAPAR-1561688.7.2.2A single node warmstart may occur after running the svcinfo traceroute CLI command.
SVAPAR-1561688.7.3.0A single node warmstart may occur after running the svcinfo traceroute CLI command.
SVAPAR-1561748.6.0.10A single node warmstart may happen due to unexpected duplicate OXID values from a Fibre Channel host
SVAPAR-1561748.7.0.5A single node warmstart may happen due to unexpected duplicate OXID values from a Fibre Channel host
SVAPAR-1561748.7.3.0A single node warmstart may happen due to unexpected duplicate OXID values from a Fibre Channel host
SVAPAR-1561798.6.0.6The supported length of client secret for SSO and MFA configurations is limited to 64 characters.
SVAPAR-1561798.7.0.3The supported length of client secret for SSO and MFA configurations is limited to 64 characters.
SVAPAR-1561798.7.3.0The supported length of client secret for SSO and MFA configurations is limited to 64 characters.
SVAPAR-1561828.7.0.5A single node warmstart has a low probability of occurring on a system using policy-based high availability, if another node goes offline.
SVAPAR-1561828.7.3.0A single node warmstart has a low probability of occurring on a system using policy-based high availability, if another node goes offline.
SVAPAR-1562258.6.0.10Hot Spare node protection is unavailable for the second half of a software upgrade without this APAR being installed first. The affected upgrades are from 8.5 or 8.6 to 8.7.0 or later on systems with multiple IO groups.
SVAPAR-1563328.7.0.3Using the GUI to create a clone or thin-clone from a snapshot may fail with a CMMVC1243E error, if the snapshot is in an HA partition.
SVAPAR-1563328.7.2.2Using the GUI to create a clone or thin-clone from a snapshot may fail with a CMMVC1243E error, if the snapshot is in an HA partition.
SVAPAR-1563328.7.3.0Using the GUI to create a clone or thin-clone from a snapshot may fail with a CMMVC1243E error, if the snapshot is in an HA partition.
SVAPAR-1563458.7.0.4A node warmstart on a system using policy-based replication has a low probability of causing another node to also warmstart.
SVAPAR-1563458.7.3.0A node warmstart on a system using policy-based replication has a low probability of causing another node to also warmstart.
SVAPAR-1563588.7.0.5A temporary failure to download the list of available patches may cause a 4401 event ("Patch auto-update server communication error") to be logged.
SVAPAR-1563588.7.3.0A temporary failure to download the list of available patches may cause a 4401 event ("Patch auto-update server communication error") to be logged.
SVAPAR-1565228.7.0.4Compare and Write (CAW) I/O requests might be rejected with a SCSI Busy status after a failure to create volumes in the second location of a PBHA partition
SVAPAR-1565228.7.2.2Compare and Write (CAW) I/O requests might be rejected with a SCSI Busy status after a failure to create volumes in the second location of a PBHA partition
SVAPAR-1565228.7.3.0Compare and Write (CAW) I/O requests might be rejected with a SCSI Busy status after a failure to create volumes in the second location of a PBHA partition
SVAPAR-1565868.7.0.4Cloud callhome stops working after downloading software directly to the system, or upgrading to 8.7.0.2 or later.
SVAPAR-1565868.7.3.1Cloud callhome stops working after downloading software directly to the system, or upgrading to 8.7.0.2 or later.
SVAPAR-1566608.7.0.4Compare and Write (CAW) I/O requests sent to volumes configured with policy-based high availability may get stuck in a timing window after a node comes online, causing a single-node warmstart.
SVAPAR-1566608.7.1.0Compare and Write (CAW) I/O requests sent to volumes configured with policy-based high availability may get stuck in a timing window after a node comes online, causing a single-node warmstart.
SVAPAR-1568498.7.0.4Removing a replication policy from a partition operating in single location mode, following a Policy-based High Availability failover, may leave an incorrect residual state on the partition that could impact future T3 recoveries. For more details refer to this Flash
SVAPAR-1568498.7.3.1Removing a replication policy from a partition operating in single location mode, following a Policy-based High Availability failover, may leave an incorrect residual state on the partition that could impact future T3 recoveries. For more details refer to this Flash
SVAPAR-1569768.5.0.17Volumes in a data reduction pool with deduplication enabled may be taken offline due to a metadata inconsistency
SVAPAR-1569768.6.0.10Volumes in a data reduction pool with deduplication enabled may be taken offline due to a metadata inconsistency
SVAPAR-1569768.7.0.7Volumes in a data reduction pool with deduplication enabled may be taken offline due to a metadata inconsistency
SVAPAR-1569768.7.3.2Volumes in a data reduction pool with deduplication enabled may be taken offline due to a metadata inconsistency
SVAPAR-1569769.1.0.0Volumes in a data reduction pool with deduplication enabled may be taken offline due to a metadata inconsistency
SVAPAR-1570078.6.0.7On heavily loaded systems, a dual node warmstart may occur after an upgrade to 8.7.3.0, 8.7.0.3, or 8.6.0.6, due to an internal memory allocation issue. This causes a brief loss of access to data.
SVAPAR-1570078.7.0.4On heavily loaded systems, a dual node warmstart may occur after an upgrade to 8.7.3.0, 8.7.0.3, or 8.6.0.6, due to an internal memory allocation issue. This causes a brief loss of access to data.
SVAPAR-1570078.7.3.1On heavily loaded systems, a dual node warmstart may occur after an upgrade to 8.7.3.0, 8.7.0.3, or 8.6.0.6, due to an internal memory allocation issue. This causes a brief loss of access to data.
SVAPAR-1571648.7.3.2Removing a volume group with a 3-site disaster recovery link may cause the volume group to have a state that prevents configuring it for 2-site asynchronous disaster recovery in the future
SVAPAR-1571649.1.0.0Removing a volume group with a 3-site disaster recovery link may cause the volume group to have a state that prevents configuring it for 2-site asynchronous disaster recovery in the future
SVAPAR-1573178.7.0.5Loss of access to data after a drive failure in a DRAID array with a very sparse extent allocation table
SVAPAR-1573178.7.3.2Loss of access to data after a drive failure in a DRAID array with a very sparse extent allocation table
SVAPAR-1573558.7.3.1Multiple node warmstarts under very heavy workloads from NVMe over FC hosts.
SVAPAR-1573559.1.0.0Multiple node warmstarts under very heavy workloads from NVMe over FC hosts.
SVAPAR-1575618.6.0.10Single node warmstart when collecting data from Ethernet SFPs times out.
SVAPAR-1575619.1.0.0Single node warmstart when collecting data from Ethernet SFPs times out.
SVAPAR-1575938.7.0.4Mapping an HA volume to a SAN Volume Controller or FlashSystem is not supported. This may cause loss of access to data on the system presenting the HA volume.
SVAPAR-1575938.7.3.1Mapping an HA volume to a SAN Volume Controller or FlashSystem is not supported. This may cause loss of access to data on the system presenting the HA volume.
SVAPAR-1576018.7.0.5Attempting to migrate a partition using the 'chpartition -location' command may fail with an error, if the system has nodes in an I/O group with an ID other than 0.
SVAPAR-1576018.7.3.2Attempting to migrate a partition using the 'chpartition -location' command may fail with an error, if the system has nodes in an I/O group with an ID other than 0.
SVAPAR-1577008.7.3.1Systems on 8.7.3 may be unable to establish a partnership for policy-based replication or high availability, with systems on lower code levels that have volume group snapshots.
SVAPAR-1577009.1.0.0Systems on 8.7.3 may be unable to establish a partnership for policy-based replication or high availability, with systems on lower code levels that have volume group snapshots.
SVAPAR-1589158.7.3.1A single node warmstart may occur if a host issues more than 65535 write requests to a single 128KB region of a policy-based replication or HA volume within a short period of time.
SVAPAR-1589159.1.0.0A single node warmstart may occur if a host issues more than 65535 write requests to a single 128KB region of a policy-based replication or HA volume within a short period of time.
SVAPAR-1590978.7.0.5When a partnership has been created with the Long Distance (TCP) option, the GUI may incorrectly report the partnership as Short Distance (RDMA) instead.
SVAPAR-1592849.1.0.0Multiple node asserts may occur on a system using IP Replication, if a low-probability timing window results in a node receiving an IP replication login from itself.
SVAPAR-1594308.7.3.2Multiple node warmstarts may occur if a PBHA volume receives more than 100 persistent reserve registrations from hosts.
SVAPAR-1594309.1.0.0Multiple node warmstarts may occur if a PBHA volume receives more than 100 persistent reserve registrations from hosts.
SVAPAR-1597958.7.0.2On systems using iSER clustering, an issue in the iSER driver could cause simultaneous node warmstarts followed by kernel panics, due to a timing window during disconnect/reconnect.
SVAPAR-1597958.7.2.0On systems using iSER clustering, an issue in the iSER driver could cause simultaneous node warmstarts followed by kernel panics, due to a timing window during disconnect/reconnect.
SVAPAR-1598678.6.0.10A successful drive firmware update can report a 3090 event indicating that the update has failed. This is caused by some types of SAS drives taking longer to update.
SVAPAR-1598678.7.0.7A successful drive firmware update can report a 3090 event indicating that the update has failed. This is caused by some types of SAS drives taking longer to update.
SVAPAR-1598679.1.0.0A successful drive firmware update can report a 3090 event indicating that the update has failed. This is caused by some types of SAS drives taking longer to update.
SVAPAR-1602429.1.0.0If an FCM array is offline due to an out-of-space condition during array expansion, and a T2 recovery takes place, the recovery may fail, resulting in both nodes being offline with node error 564.
SVAPAR-1605948.7.0.5A single node warmstart may occur while deleting a FlashCopy mapping or snapshot.
SVAPAR-1605948.7.3.2A single node warmstart may occur while deleting a FlashCopy mapping or snapshot.
SVAPAR-1608528.7.0.5A software issue may leave a vdisk in a paused state, causing a CLI timeout warmstart if the vdisk is deleted. If the pause was stuck on the non-config node, one or more IO timeout warmstarts may also occur.
SVAPAR-1608528.7.3.2A software issue may leave a vdisk in a paused state, causing a CLI timeout warmstart if the vdisk is deleted. If the pause was stuck on the non-config node, one or more IO timeout warmstarts may also occur.
SVAPAR-1608688.7.0.5A node warmstart may occur to clear a hung I/O condition caused by requests waiting for a chunk lock.
SVAPAR-1608688.7.3.2A node warmstart may occur to clear a hung I/O condition caused by requests waiting for a chunk lock.
SVAPAR-1609119.1.1.0Following an FCM array expansion, the array will temporarily report more physical capacity than expected. If all of the expanded array capacity, plus some of this extra capacity, is occupied by written data, the array will go offline out-of-space.
SVAPAR-1610168.7.0.5Upgrade to 8.7.0.4 on FS5015 / FS5035 may cause node warmstarts on a single node, if VMware hosts are in use.
SVAPAR-1610168.7.3.2Upgrade to 8.7.0.4 on FS5015 / FS5035 may cause node warmstarts on a single node, if VMware hosts are in use.
SVAPAR-1612638.7.3.2Systems with NVMe hosts may experience multiple node warmstarts on 8.7.3.x software
SVAPAR-1612639.1.0.0Systems with NVMe hosts may experience multiple node warmstarts on 8.7.3.x software
SVAPAR-1615078.7.0.5Higher than expected I/O pause during the commit phase of code upgrade on systems with high I/O rate
SVAPAR-1615078.7.3.2Higher than expected I/O pause during the commit phase of code upgrade on systems with high I/O rate
SVAPAR-1615179.1.0.0Exported trust store file cannot be read
SVAPAR-1615188.7.3.2Configuration backup and support data collection may create files with invalid JSON encoding
SVAPAR-1615189.1.0.0Configuration backup and support data collection may create files with invalid JSON encoding
SVAPAR-1615208.7.0.5Outgoing Remote Support Assistance (RSA) connections are blocked by Zscaler proxy
SVAPAR-1615208.7.3.2Outgoing Remote Support Assistance (RSA) connections are blocked by Zscaler proxy
SVAPAR-1619329.1.0.0Creating 20480 thin-provisioned volumes may cause multiple node warmstarts and loss of access to data, due to an invalid internal count of the number of thin-provisioned volumes.
SVAPAR-1624508.7.0.5A single-node warmstart may occur, if a timing window in the background vdisk analysis process causes an invalid file access.
SVAPAR-1624508.7.3.2A single-node warmstart may occur, if a timing window in the background vdisk analysis process causes an invalid file access.
SVAPAR-1628369.1.0.0Node warmstarts when starting a Volume Group Snapshot, if there are multiple legacy FlashCopy maps with the same target volume.
SVAPAR-1629588.7.0.5Node warmstart caused by a SAS chip failure on systems with encryption enabled
SVAPAR-1630749.1.1.0A single node restart may occur if the connectivity between systems in a policy-based replication partnership is unstable.
SVAPAR-1633928.7.0.5A single node restart may occur due to a timing issue when adding a snapshot to a volume group that is part of policy-based replication.
SVAPAR-1633928.7.3.2A single node restart may occur due to a timing issue when adding a snapshot to a volume group that is part of policy-based replication.
SVAPAR-1634058.7.0.5A single node warmstart may occur due to a small timing window shortly after PBR stops replicating a volume group.
SVAPAR-1634058.7.3.2A single node warmstart may occur due to a small timing window shortly after PBR stops replicating a volume group.
SVAPAR-1635028.6.0.8Node warmstarts may occur due to a replication credit issue where a host has an unsupported high queue depth setting, and sends a large burst of unmaps to a single volume using policy-based replication.
SVAPAR-1635028.7.0.5Node warmstarts may occur due to a replication credit issue where a host has an unsupported high queue depth setting, and sends a large burst of unmaps to a single volume using policy-based replication.
SVAPAR-1635028.7.3.2Node warmstarts may occur due to a replication credit issue where a host has an unsupported high queue depth setting, and sends a large burst of unmaps to a single volume using policy-based replication.
SVAPAR-1635658.7.0.5A single node warmstart and a loss of configuration access to the system may occur if a FlashCopy target volume is deleted immediately after its FlashCopy mapping is stopped.
SVAPAR-1635658.7.3.2A single node warmstart and a loss of configuration access to the system may occur if a FlashCopy target volume is deleted immediately after its FlashCopy mapping is stopped.
SVAPAR-1635668.7.0.5Multiple node warmstarts and loss of access to data may occur, if a host issues Persistent Reserve commands with the APTPL bit set, and the system has previously undergone T2 recovery.
SVAPAR-1635668.7.3.2Multiple node warmstarts and loss of access to data may occur, if a host issues Persistent Reserve commands with the APTPL bit set, and the system has previously undergone T2 recovery.
SVAPAR-1635738.7.0.5Node warmstarts when adding the first snapshot to a volume group when using a 64k snapshot grain size.
SVAPAR-1635738.7.3.2Node warmstarts when adding the first snapshot to a volume group when using a 64k snapshot grain size.
SVAPAR-1637888.7.0.5The GUI may not display the correct 'last backup time' for safeguarded copies of a volume group. The CLI can be used to view the correct information.
SVAPAR-1637888.7.3.2The GUI may not display the correct 'last backup time' for safeguarded copies of a volume group. The CLI can be used to view the correct information.
SVAPAR-1638318.7.0.5Loss of access to data, triggered by a single node experiencing a critical hardware error on systems running 8.7.0 or later.
SVAPAR-1638318.7.3.2Loss of access to data, triggered by a single node experiencing a critical hardware error on systems running 8.7.0 or later.
SVAPAR-1640468.7.0.5A node warmstart may occur when processing an NVMe Compare command from a host.
SVAPAR-1640468.7.3.2A node warmstart may occur when processing an NVMe Compare command from a host.
SVAPAR-1640629.1.0.0lsvdisk does not accept the filter values 'is_safeguarded_snapshot' and 'safeguarded_snapshot_count'.
SVAPAR-1640788.7.0.5Partnership IP quorum will not discover and connect to service IP addresses for a system that does not have nodes in IO group 0. This will result in a persistent 3129 error code with error ID 52006.
SVAPAR-1640788.7.3.2Partnership IP quorum will not discover and connect to service IP addresses for a system that does not have nodes in IO group 0. This will result in a persistent 3129 error code with error ID 52006.
SVAPAR-1640828.7.3.0When configuring 3-site replication by adding a policy-based HA copy to an existing volume that already has DR configured, any writes that arrive during a small timing window will not be mirrored to the new HA copy, causing an undetected data corruption.
SVAPAR-1640829.1.0.0When configuring 3-site replication by adding a policy-based HA copy to an existing volume that already has DR configured, any writes that arrive during a small timing window will not be mirrored to the new HA copy, causing an undetected data corruption.
SVAPAR-1642068.7.3.2A short loss of access can occur due to a cluster warmstart when deleting a volume with a persistent reservation on systems protected by policy-based high availability.
SVAPAR-1642069.1.0.0A short loss of access can occur due to a cluster warmstart when deleting a volume with a persistent reservation on systems protected by policy-based high availability.
SVAPAR-1643688.5.0.17Recurring node warmstarts within an IO group after starting DRP recovery.
SVAPAR-1643688.6.0.10Recurring node warmstarts within an IO group after starting DRP recovery.
SVAPAR-1643688.7.0.7Recurring node warmstarts within an IO group after starting DRP recovery.
SVAPAR-1643689.1.0.0Recurring node warmstarts within an IO group after starting DRP recovery.
SVAPAR-1644308.7.3.2Repeated single node warmstarts may occur in a 3-site policy-based HA+DR configuration, due to a timing window during an HA failover scenario.
SVAPAR-1644309.1.0.0Repeated single node warmstarts may occur in a 3-site policy-based HA+DR configuration, due to a timing window during an HA failover scenario.
SVAPAR-1646418.5.0.17Single Sign On (SSO) login fails if the HTTP proxy password exceeds 20 characters
SVAPAR-1646418.6.0.10Single Sign On (SSO) login fails if the HTTP proxy password exceeds 20 characters
SVAPAR-1646418.7.0.5Single Sign On (SSO) login fails if the HTTP proxy password exceeds 20 characters
SVAPAR-1646418.7.2.0Single Sign On (SSO) login fails if the HTTP proxy password exceeds 20 characters
SVAPAR-1647028.7.0.5When a node on the HA Active Management System comes back online (for example during a software upgrade), the partner node may experience a single-node warmstart due to I/O timeout. If all partitions are replicating in the same direction, a warmstart may also occur on the partner system.
SVAPAR-1647028.7.3.2When a node on the HA Active Management System comes back online (for example during a software upgrade), the partner node may experience a single-node warmstart due to I/O timeout. If all partitions are replicating in the same direction, a warmstart may also occur on the partner system.
SVAPAR-1647779.1.0.0Failure to collect snaps from systems running 8.7.2 or higher, when using Storage Insights without a data collector
SVAPAR-1648378.7.0.3When merging two policy-based HA partitions, any writes that arrive during a small window will not be mirrored to the second copy, causing an undetected data corruption.
SVAPAR-1648378.7.2.0When merging two policy-based HA partitions, any writes that arrive during a small window will not be mirrored to the second copy, causing an undetected data corruption.
SVAPAR-1648388.7.0.4When asynchronous policy-based replication operates in low-bandwidth mode, data on the recovery volumes may be inconsistent until the replication next operates in low-latency mode. For more details refer to this Flash
SVAPAR-1648388.7.3.0When asynchronous policy-based replication operates in low-bandwidth mode, data on the recovery volumes may be inconsistent until the replication next operates in low-latency mode. For more details refer to this Flash
SVAPAR-1648399.1.0.0A node containing iWARP adapters may fail to reboot.
SVAPAR-1651658.7.0.5The event log might show an unresolved error related to a VASA provider failure, even if the VASA provider has already restarted successfully
SVAPAR-1651658.7.3.2The event log might show an unresolved error related to a VASA provider failure, even if the VASA provider has already restarted successfully
SVAPAR-1651948.7.3.2A node warmstart may occur when a host issues a persistent reserve command to an HA volume, and another I/O is received at the exact time the persistent reserve command completes.
SVAPAR-1651949.1.0.0A node warmstart may occur when a host issues a persistent reserve command to an HA volume, and another I/O is received at the exact time the persistent reserve command completes.
SVAPAR-1655838.7.0.7Unable to set an Ethernet port MTU below 1500. This APAR reduces the minimum MTU to 1200.
SVAPAR-1656438.6.0.10Adding a certificate to a truststore may cause multiple node warmstarts, if the truststore does not have enough space. This can happen if trying to replace a certificate for an existing syslog server using the GUI.
SVAPAR-1656438.7.0.5Adding a certificate to a truststore may cause multiple node warmstarts, if the truststore does not have enough space. This can happen if trying to replace a certificate for an existing syslog server using the GUI.
SVAPAR-1664298.7.0.7After migration from remote copy to policy-based replication, volume groups may be unable to be set to "production" replication mode after they were made "independent". This is due to the migrated volumes having inconsistent configuration data.
SVAPAR-1664299.1.0.0After migration from remote copy to policy-based replication, volume groups may be unable to be set to "production" replication mode after they were made "independent". This is due to the migrated volumes having inconsistent configuration data.
SVAPAR-1665428.7.0.5A system using policy-based HA may be unable to communicate correctly with the quorum device, resulting in loss of access to data if failover is required. This can occur if multiple quorum apps are connected, one of those apps is decommissioned, and then both nodes go offline and come online at the same time (for example due to a power failure).
SVAPAR-1669719.1.0.0Unable to change the replication mode of a volume group due to invalid internal state.
SVAPAR-1670408.7.3.2Single node warmstart triggered by making the DR site into an independent copy whilst replication is still active.
SVAPAR-1670409.1.0.0Single node warmstart triggered by making the DR site into an independent copy whilst replication is still active.
SVAPAR-1671068.7.0.5In rare cases around a network event such as a node warmstart, communications glitch, or quorum interruption, one node of the PBR production system may cease replicating IO to the DR site. The DR site will detect the situation and warmstart nodes to clear the problem.
SVAPAR-1671068.7.3.2In rare cases around a network event such as a node warmstart, communications glitch, or quorum interruption, one node of the PBR production system may cease replicating IO to the DR site. The DR site will detect the situation and warmstart nodes to clear the problem.
SVAPAR-1671338.7.0.5When GMCV and PBR are configured on the same system, there is a small chance of a single node warmstart if internal state changes for both PBR and GMCV coincide.
SVAPAR-1673909.1.0.0A T2 recovery may occur when manually adding a node to the system while an upgrade with the -pause flag is in progress
SVAPAR-1676959.1.1.0If two-person integrity (TPI) is enabled, an LDAP user that is in multiple remote groups may not be able to remove safeguarded snapshots, even if a role elevation request has been approved.
SVAPAR-1678658.7.0.7Systems with heavy iSCSI I/O workload may show a decrease in performance after upgrade to 8.7.0 or later
SVAPAR-1678659.1.0.0Systems with heavy iSCSI I/O workload may show a decrease in performance after upgrade to 8.7.0 or later
SVAPAR-1680789.1.1.0IO related to Data Reduction Pools can stall during array rebuild operations, resulting in a single node assert. It is possible for this to occur repeatedly, resulting in a loss of access to data.
SVAPAR-1681028.6.0.10Service GUI multi-factor authentication as superuser may not work, if the authentication provider is using PKCE (Proof Key for Code Exchange).
SVAPAR-1681028.7.0.7Service GUI multi-factor authentication as superuser may not work, if the authentication provider is using PKCE (Proof Key for Code Exchange).
SVAPAR-1681029.1.0.0Service GUI multi-factor authentication as superuser may not work, if the authentication provider is using PKCE (Proof Key for Code Exchange).
SVAPAR-1683978.7.0.5If a volume group snapshot has a thin clone, and the thin clone is read at exactly the same time as consistency protection is triggered for policy-based replication or high availability, it is possible for the data returned to the host to be zero instead of the correct data.
SVAPAR-1683978.7.3.0If a volume group snapshot has a thin clone, and the thin clone is read at exactly the same time as consistency protection is triggered for policy-based replication or high availability, it is possible for the data returned to the host to be zero instead of the correct data.
SVAPAR-1684118.7.0.7A single-node warmstart caused by the IO statistics processing exceeding a timeout
SVAPAR-1684119.1.0.0A single-node warmstart caused by the IO statistics processing exceeding a timeout
SVAPAR-1686398.6.0.10The Licensed Functions view in the GUI may show incomplete information for restricted users.
SVAPAR-1686398.7.0.7The Licensed Functions view in the GUI may show incomplete information for restricted users.
SVAPAR-1686399.1.0.0The Licensed Functions view in the GUI may show incomplete information for restricted users.
SVAPAR-1692508.7.0.6Adding a GKLM server at version 5.x may result in an error saying the key server is not supported. This is due to a change in the way that GKLM identifies itself when the storage system connects to it.
SVAPAR-1692509.1.0.0Adding a GKLM server at version 5.x may result in an error saying the key server is not supported. This is due to a change in the way that GKLM identifies itself when the storage system connects to it.
SVAPAR-1692558.7.0.7High peak response times when adding snapshots for a volume group containing mirrored vdisks.
SVAPAR-1692559.1.0.2High peak response times when adding snapshots for a volume group containing mirrored vdisks.
SVAPAR-1692559.1.1.0High peak response times when adding snapshots for a volume group containing mirrored vdisks.
SVAPAR-1703099.1.0.0The IP quorum application might not connect to a system if the request to discover node IP addresses fails
SVAPAR-1703409.1.0.0On systems using PBR/PBHA as well as FlashCopy, a node may warmstart due to a problem during background copy processing in the FlashCopy component.
SVAPAR-1703518.6.0.10A loss of access to data may occur if the 'charraymember' command is used to replace a drive while its rebuild is in progress.
SVAPAR-1703518.7.0.7A loss of access to data may occur if the 'charraymember' command is used to replace a drive while its rebuild is in progress.
SVAPAR-1703519.1.0.0A loss of access to data may occur if the 'charraymember' command is used to replace a drive while its rebuild is in progress.
SVAPAR-1703589.1.0.0The system may experience unexpected 1585 DNS connection errors due to a DNS request timeout.
SVAPAR-1703688.6.0.10FS50xx systems may incorrectly report failed batteries (node error 652) after a cold boot.
SVAPAR-1703688.7.0.7FS50xx systems may incorrectly report failed batteries (node error 652) after a cold boot.
SVAPAR-1703689.1.0.0FS50xx systems may incorrectly report failed batteries (node error 652) after a cold boot.
SVAPAR-1704298.7.0.6Policy-based replication or HA may suspend after upgrade to 8.7.0.5, due to a change volume on the recovery system being in an invalid state.
SVAPAR-1704298.7.3.3Policy-based replication or HA may suspend after upgrade to 8.7.0.5, due to a change volume on the recovery system being in an invalid state.
SVAPAR-1704299.1.0.0Policy-based replication or HA may suspend after upgrade to 8.7.0.5, due to a change volume on the recovery system being in an invalid state.
SVAPAR-1704389.1.0.0After a power outage, FCM drives may fail to come online when power is restored.
SVAPAR-1704988.7.0.7A timing window may cause a single-node warmstart, if a volume is deleted at the same time that volume compressibility is being measured.
SVAPAR-1704989.1.0.0A timing window may cause a single-node warmstart, if a volume is deleted at the same time that volume compressibility is being measured.
SVAPAR-1705119.1.1.0Node warmstarts caused by a race condition during NVMe host reset, if the host is using Compare and Write commands
SVAPAR-1706578.6.0.9A problem with NVMe drives on FlashSystem 9500 may impact node to node communication over the PCIe bus. This may lead to a temporary array offline. SVAPAR-134589 previously addressed the same issue, but that fix was found to be incomplete. For more details refer to this Flash
SVAPAR-1706578.7.0.6A problem with NVMe drives on FlashSystem 9500 may impact node to node communication over the PCIe bus. This may lead to a temporary array offline. SVAPAR-134589 previously addressed the same issue, but that fix was found to be incomplete. For more details refer to this Flash
SVAPAR-1706578.7.3.3A problem with NVMe drives on FlashSystem 9500 may impact node to node communication over the PCIe bus. This may lead to a temporary array offline. SVAPAR-134589 previously addressed the same issue, but that fix was found to be incomplete. For more details refer to this Flash
SVAPAR-1706579.1.0.0A problem with NVMe drives on FlashSystem 9500 may impact node to node communication over the PCIe bus. This may lead to a temporary array offline. SVAPAR-134589 previously addressed the same issue, but that fix was found to be incomplete. For more details refer to this Flash
SVAPAR-1709589.1.0.2Policy-based asynchronous replication may not correctly balance the available bandwidth between nodes after a node goes offline, potentially causing a degradation of the recovery point
SVAPAR-1709589.1.1.0Policy-based asynchronous replication may not correctly balance the available bandwidth between nodes after a node goes offline, potentially causing a degradation of the recovery point
SVAPAR-1717908.7.0.7Triggering a livedump may cause a single node warmstart
SVAPAR-1724789.1.0.1Systems running 9.1.0.0 may incorrectly report 1585 "Could not connect to DNS server" errors
SVAPAR-1724789.1.1.0Systems running 9.1.0.0 may incorrectly report 1585 "Could not connect to DNS server" errors
SVAPAR-1725409.1.1.0Single node warmstart after loss of connection to a remote cluster when using secured IP partnerships
SVAPAR-1725729.1.1.0Node warmstart after a host submits a persistent reserve command, in a small timing window immediately after mapping a PBHA volume to the host
SVAPAR-1727459.1.0.1Systems using policy based high availability (PBHA) on 9.1.0.0 may experience a detected data loss after performing configuration changes and multiple failovers of the active management system.
SVAPAR-1727459.1.1.0Systems using policy based high availability (PBHA) on 9.1.0.0 may experience a detected data loss after performing configuration changes and multiple failovers of the active management system.
SVAPAR-1729669.1.1.0The real capacity of a thin-provisioned volume in a standard pool cannot be shrunk if the new real capacity is not a multiple of the grain size
SVAPAR-1733109.1.0.2If both nodes in an IO Group go down unexpectedly, invalid snapshots may remain in the system which cannot be removed
SVAPAR-1733109.1.1.0If both nodes in an IO Group go down unexpectedly, invalid snapshots may remain in the system which cannot be removed
SVAPAR-1735488.7.0.7Removing and then adding vdisk-host mappings may cause multiple node warmstarts, leading to a loss of access to data.
SVAPAR-1735489.1.1.0Removing and then adding vdisk-host mappings may cause multiple node warmstarts, leading to a loss of access to data.
SVAPAR-1738589.1.0.2Expanding a production volume that is using asynchronous replication may trigger multiple node warmstarts and an outage on the recovery system.
SVAPAR-1738589.1.1.0Expanding a production volume that is using asynchronous replication may trigger multiple node warmstarts and an outage on the recovery system.
SVAPAR-1739368.5.0.17A timing window can lead to a resource leak in the thin-provisioning component. This can lead to higher volume response times, and eventually a node warmstart caused by an I/O timeout.
SVAPAR-1739368.6.0.10A timing window can lead to a resource leak in the thin-provisioning component. This can lead to higher volume response times, and eventually a node warmstart caused by an I/O timeout.
SVAPAR-1739368.7.0.7A timing window can lead to a resource leak in the thin-provisioning component. This can lead to higher volume response times, and eventually a node warmstart caused by an I/O timeout.
SVAPAR-1739369.1.0.2A timing window can lead to a resource leak in the thin-provisioning component. This can lead to higher volume response times, and eventually a node warmstart caused by an I/O timeout.
SVAPAR-1744448.7.0.7The change password dialog box does not allow the user to input a new password, when changing an expired password.
SVAPAR-1758079.1.0.2Multiple node warmstarts may cause loss of access to data after upgrade to 8.7.2 or later, on a system that was once an AuxFar site in a 3-site replication configuration. This is due to invalid FlashCopy configuration state after removal of 3-site replication with HyperSwap or Metro Mirror, and does not apply to 3-site policy-based replication.
SVAPAR-1758079.1.1.0Multiple node warmstarts may cause loss of access to data after upgrade to 8.7.2 or later, on a system that was once an AuxFar site in a 3-site replication configuration. This is due to invalid FlashCopy configuration state after removal of 3-site replication with HyperSwap or Metro Mirror, and does not apply to 3-site policy-based replication.
SVAPAR-1758558.7.0.7A new volume may incorrectly show a source_volume_name and source_volume_id, when it inherits the vdisk ID of a deleted clone volume.
SVAPAR-1758559.1.0.2A new volume may incorrectly show a source_volume_name and source_volume_id, when it inherits the vdisk ID of a deleted clone volume.
SVAPAR-1758559.1.1.0A new volume may incorrectly show a source_volume_name and source_volume_id, when it inherits the vdisk ID of a deleted clone volume.
SVAPAR-1762389.1.0.2A node may go offline with node error 566, due to excessive logging related to DIMM errors.
SVAPAR-1762389.1.1.0A node may go offline with node error 566, due to excessive logging related to DIMM errors.
SVAPAR-1771279.1.0.2The system may fail to install an externally signed system certificate via the GUI.
SVAPAR-1771279.1.1.0The system may fail to install an externally signed system certificate via the GUI.
SVAPAR-1773599.1.0.2Users may need to log out of iSCSI sessions individually, as simultaneous logout is not supported.
SVAPAR-1773599.1.1.0Users may need to log out of iSCSI sessions individually, as simultaneous logout is not supported.
SVAPAR-1775088.5.0.17Multiple node warmstarts may occur on systems where a maximum failed login count has been configured.
SVAPAR-1775088.6.0.10Multiple node warmstarts may occur on systems where a maximum failed login count has been configured.
SVAPAR-1775088.7.0.7Multiple node warmstarts may occur on systems where a maximum failed login count has been configured.
SVAPAR-1775089.1.0.0Multiple node warmstarts may occur on systems where a maximum failed login count has been configured.
SVAPAR-1776399.1.0.2Deletion of volumes in 3-site (HA+DR) replication may cause multiple node warmstarts. This can only occur if the volume previously used 2-site asynchronous replication, and was then converted to 3-site (HA+DR).
SVAPAR-1776399.1.1.0Deletion of volumes in 3-site (HA+DR) replication may cause multiple node warmstarts. This can only occur if the volume previously used 2-site asynchronous replication, and was then converted to 3-site (HA+DR).
SVAPAR-1777719.1.0.2Encryption with internal key management may be unable to perform the scheduled daily re-key of the internal key. The event log will show a daily repeating information event when this occurs. The current internal recovery key will continue to function.
SVAPAR-1777719.1.1.0Encryption with internal key management may be unable to perform the scheduled daily re-key of the internal key. The event log will show a daily repeating information event when this occurs. The current internal recovery key will continue to function.
SVAPAR-1782088.6.0.10A race condition in I/O processing for NVMe over RDMA/TCP hosts may lead to a single node warmstart.
SVAPAR-1782088.7.0.7A race condition in I/O processing for NVMe over RDMA/TCP hosts may lead to a single node warmstart.
SVAPAR-1782089.1.0.2A race condition in I/O processing for NVMe over RDMA/TCP hosts may lead to a single node warmstart.
SVAPAR-1782089.1.1.0A race condition in I/O processing for NVMe over RDMA/TCP hosts may lead to a single node warmstart.
SVAPAR-1782498.7.0.7A node may warmstart following a config node failover, if an encrypted cloud account was inaccessible at the time of the failover.
SVAPAR-1782508.7.0.7Node warmstarts may be triggered by a race condition during NVMe host reset, if the host is using Compare and Write commands. This can cause a loss of access to data.
SVAPAR-1782509.1.0.2Node warmstarts may be triggered by a race condition during NVMe host reset, if the host is using Compare and Write commands. This can cause a loss of access to data.
SVAPAR-1782509.1.1.0Node warmstarts may be triggered by a race condition during NVMe host reset, if the host is using Compare and Write commands. This can cause a loss of access to data.
SVAPAR-1782578.5.0.17During OpenShift version upgrade, when the last IQN is removed from a host object, this incorrectly causes the portset to be reset to the default value. This can cause a loss of access.
SVAPAR-1782578.6.0.10During OpenShift version upgrade, when the last IQN is removed from a host object, this incorrectly causes the portset to be reset to the default value. This can cause a loss of access.
SVAPAR-1782578.7.0.7During OpenShift version upgrade, when the last IQN is removed from a host object, this incorrectly causes the portset to be reset to the default value. This can cause a loss of access.
SVAPAR-1782579.1.1.0During OpenShift version upgrade, when the last IQN is removed from a host object, this incorrectly causes the portset to be reset to the default value. This can cause a loss of access.
SVAPAR-1782588.7.0.7System may experience a performance issue when configured with iSCSI hosts
SVAPAR-1782589.1.0.2System may experience a performance issue when configured with iSCSI hosts
SVAPAR-1782589.1.1.0System may experience a performance issue when configured with iSCSI hosts
SVAPAR-1782628.7.0.7System may experience a performance issue when configured with iSCSI hosts
SVAPAR-1782629.1.0.2System may experience a performance issue when configured with iSCSI hosts
SVAPAR-1782629.1.1.0System may experience a performance issue when configured with iSCSI hosts
SVAPAR-1782608.7.0.7chportethernet and mkip commands may fail if Ethernet adapters have not been configured uniformly on all nodes. This is possible in systems with multiple IO groups
SVAPAR-1782618.7.0.7Incorrect snapshot information displayed under the snapshot properties of a volume in GUI.
SVAPAR-1782619.1.0.0Incorrect snapshot information displayed under the snapshot properties of a volume in GUI.
SVAPAR-1782629.1.0.0System may experience a performance issue when configured with iSCSI hosts
SVAPAR-1782988.7.0.7On systems using both volume group snapshots and legacy FlashCopy (for example remote copy with change volumes), a failed snapshot command may lead to multiple node warmstarts, or loss of configuration access.
SVAPAR-1783209.1.0.2When an invalid subject alternative name is entered for a mksystemcertstore command, the system returns "CMMVC5786E The action failed because the cluster is not in a stable state".
SVAPAR-1783209.1.1.0When an invalid subject alternative name is entered for a mksystemcertstore command, the system returns "CMMVC5786E The action failed because the cluster is not in a stable state".
SVAPAR-1783238.6.0.10The system may attempt to authenticate an LDAP user who is not in any remote user group, using a null password.
SVAPAR-1783239.1.0.2The system may attempt to authenticate an LDAP user who is not in any remote user group, using a null password.
SVAPAR-1783239.1.1.0The system may attempt to authenticate an LDAP user who is not in any remote user group, using a null password.
SVAPAR-1784008.5.0.17Collecting support logs via Storage Insights may time out.
SVAPAR-1784008.6.0.10Collecting support logs via Storage Insights may time out.
SVAPAR-1784008.7.0.7Collecting support logs via Storage Insights may time out.
SVAPAR-1784028.6.0.10Multiple node warmstarts may occur when there are a high number of errors on the fibre channel network.
SVAPAR-1784028.7.0.7Multiple node warmstarts may occur when there are a high number of errors on the fibre channel network.
SVAPAR-1784029.1.0.2Multiple node warmstarts may occur when there are a high number of errors on the fibre channel network.
SVAPAR-1784029.1.1.0Multiple node warmstarts may occur when there are a high number of errors on the fibre channel network.
SVAPAR-1786488.6.0.10Single node warmstart triggered by transient fault on inter canister link
SVAPAR-1786488.7.0.7Single node warmstart triggered by transient fault on inter canister link
SVAPAR-1786488.7.2.0Single node warmstart triggered by transient fault on inter canister link
SVAPAR-1786679.1.1.0Node warmstarts caused by hung NVMe Compare and Write commands.
SVAPAR-1788079.1.1.0A single node may warmstart if a volume group being replicated with PBR (Policy-Based Replication) is deleted during its initial synchronization.
SVAPAR-1790309.1.0.2The CIMOM configuration interface is no longer supported in 9.1.0. Attempting to manually restart the cimserver service may cause a node warmstart, and loss of configuration access.
SVAPAR-1790309.1.1.0The CIMOM configuration interface is no longer supported in 9.1.0. Attempting to manually restart the cimserver service may cause a node warmstart, and loss of configuration access.
SVAPAR-1790869.1.1.0Ransomware threat detection process does not always send an alert when a threat is detected.
SVAPAR-1791289.1.0.2A single-node warmstart may occur on a system using policy-based replication or HA, due to a timing window triggered by disconnection of a partnership.
SVAPAR-1791289.1.1.0A single-node warmstart may occur on a system using policy-based replication or HA, due to a timing window triggered by disconnection of a partnership.
SVAPAR-1791849.1.1.0GUI allows DR linking to a partner system which does not support DR linking
SVAPAR-1791969.1.1.0When partnership creation is attempted using the GUI, for a remote system which already has a partnership, an error is produced but a new truststore is incorrectly created.
SVAPAR-1792969.1.0.2In 9.1.0.0 and 9.1.0.1, the "chvolume -size" command has no effect on FS5015 and FS5035. This prevents GUI volume resizing from working correctly.
SVAPAR-1792969.1.1.0In 9.1.0.0 and 9.1.0.1, the "chvolume -size" command has no effect on FS5015 and FS5035. This prevents GUI volume resizing from working correctly.
SVAPAR-1798129.1.1.0Event ID 86014 (encryption recovery key is not configured) is shown with the message for event ID 86015 (internal key management rekey failed), and vice-versa.
SVAPAR-1798749.1.0.2GUI displays old partition name after renaming.
SVAPAR-1798749.1.1.0GUI displays old partition name after renaming.
SVAPAR-1799309.1.0.2Node warmstarts when backend IO is active on a fibre channel login that experiences a logout on code level 9.1.0.0 or 9.1.0.1.
SVAPAR-1799309.1.1.0Node warmstarts when backend IO is active on a fibre channel login that experiences a logout on code level 9.1.0.0 or 9.1.0.1.
SVAPAR-1805308.7.0.8A volume group may get removed when configuring policy-based asynchronous disaster recovery if that volume group was previously migrated from another system to this system using policy-based high availability
SVAPAR-1805519.1.0.0Upgrade from 8.7.0 or 8.7.1, to 8.7.2 or 8.7.3, may cause a node warmstart if IPv6 addresses are in use.
SVAPAR-1808389.1.0.2Multiple node warmstarts on systems running 9.1 if High Availability (PBHA) is added to an existing asynchronous replication (PBR) setup and the volumes are using persistent reserves.
SVAPAR-1815569.1.0.2Configuration backups and support data collection (snap) can fail on systems running 9.1 if there are any invalid UTF-8 characters in the login banner
SVAPAR-1816409.1.0.2After expanding a volume that is being asynchronously replicated, data written to the recently expanded region of the disk may not get replicated to the remote site if the replication is running in low bandwidth mode. This can lead to an undetected data loss at the DR site.
SVAPAR-1816409.1.1.0After expanding a volume that is being asynchronously replicated, data written to the recently expanded region of the disk may not get replicated to the remote site if the replication is running in low bandwidth mode. This can lead to an undetected data loss at the DR site.
SVAPAR-1821889.1.0.2A single node warmstart may occur when the Ransomware Threat Detection process stops functioning
SVAPAR-1829099.1.0.2FlashSystem CLI, GUI and REST API are inaccessible on releases 9.1.0.0 and 9.1.0.1
SVAPAR-1835779.1.0.2Upgrade from 9.1.0.0 or 9.1.0.1, to 9.1.1.0 or later, is not supported if host clusters exist, or a partition is associated with a management portset.
SVAPAR-1840158.5.0.17In the GUI, under Secure Communications the system certificate details may not be displayed and the state incorrectly indicates expired.
SVAPAR-1840218.5.0.17FlashSystem 50xx enclosure midplane replacement may fail with node error 502 or 556.
SVAPAR-1842838.5.0.17Node warmstarts due to a timing issue when processing aborts on systems with 32G FC adapters
SVAPAR-1842838.7.0.0Node warmstarts due to a timing issue when processing aborts on systems with 32G FC adapters
SVAPAR-1842848.5.0.17Node warmstarts triggered by timing window when processing Fibre Channel logouts
SVAPAR-1842848.6.0.10Node warmstarts triggered by timing window when processing Fibre Channel logouts
SVAPAR-1842848.6.1.0Node warmstarts triggered by timing window when processing Fibre Channel logouts
SVAPAR-1843398.5.0.17Node warmstart due to a timing issue when processing configuration changes
SVAPAR-1848608.6.0.10On a system using 3-site replication with Metro Mirror or HyperSwap, a timing window in the configuration component may cause a single-node warmstart.
SVAPAR-1849288.6.0.10On a system using 3-site replication with Metro Mirror or HyperSwap, a timing window in the configuration component may cause a single-node warmstart.
SVAPAR-1849288.7.2.0On a system using 3-site replication with Metro Mirror or HyperSwap, a timing window in the configuration component may cause a single-node warmstart.
SVAPAR-1849308.6.0.10If a snapshot clone is added to a 3-site HyperSwap or Metro Mirror configuration, it may cause multiple node warmstarts.
SVAPAR-1849308.7.2.0If a snapshot clone is added to a 3-site HyperSwap or Metro Mirror configuration, it may cause multiple node warmstarts.
SVAPAR-1851168.6.0.10Multiple node warmstarts may occur when taking a snapshot of a volume group that contains both the master and auxiliary copy of a Hyperswap volume.
SVAPAR-829508.5.0.8If a FlashSystem 9500 or SV3 node had a USB Flash Drive present at boot, upgrading to either 8.5.0.7 or 8.5.3.0 may cause the node to become unresponsive. Systems already running 8.5.0.7 or 8.5.3.0 are not affected by this issue
SVAPAR-829508.5.3.1If a FlashSystem 9500 or SV3 node had a USB Flash Drive present at boot, upgrading to either 8.5.0.7 or 8.5.3.0 may cause the node to become unresponsive. Systems already running 8.5.0.7 or 8.5.3.0 are not affected by this issue
SVAPAR-829508.6.0.0If a FlashSystem 9500 or SV3 node had a USB Flash Drive present at boot, upgrading to either 8.5.0.7 or 8.5.3.0 may cause the node to become unresponsive. Systems already running 8.5.0.7 or 8.5.3.0 are not affected by this issue
SVAPAR-832908.4.0.10An issue with the Trusted Platform Module (TPM) in FlashSystem 50xx nodes may cause the TPM to become unresponsive. This can happen after a number of weeks of continuous runtime.
SVAPAR-832908.5.0.7An issue with the Trusted Platform Module (TPM) in FlashSystem 50xx nodes may cause the TPM to become unresponsive. This can happen after a number of weeks of continuous runtime.
SVAPAR-832908.5.4.0An issue with the Trusted Platform Module (TPM) in FlashSystem 50xx nodes may cause the TPM to become unresponsive. This can happen after a number of weeks of continuous runtime.
SVAPAR-832908.6.0.0An issue with the Trusted Platform Module (TPM) in FlashSystem 50xx nodes may cause the TPM to become unresponsive. This can happen after a number of weeks of continuous runtime.
SVAPAR-840998.5.0.7An NVMe codepath exists whereby strict state checking incorrectly decides that a software flag state is invalid, thereby triggering a node warmstart
SVAPAR-840998.6.0.0An NVMe codepath exists whereby strict state checking incorrectly decides that a software flag state is invalid, thereby triggering a node warmstart
SVAPAR-841168.4.0.11The background delete processing for deduplicated volumes might not operate correctly if the preferred node for a deduplicated volume is changed while a delete is in progress. This can result in data loss which will be detected by the cluster when the data is next accessed
SVAPAR-841168.5.0.8The background delete processing for deduplicated volumes might not operate correctly if the preferred node for a deduplicated volume is changed while a delete is in progress. This can result in data loss which will be detected by the cluster when the data is next accessed
SVAPAR-841168.6.0.0The background delete processing for deduplicated volumes might not operate correctly if the preferred node for a deduplicated volume is changed while a delete is in progress. This can result in data loss which will be detected by the cluster when the data is next accessed
SVAPAR-841808.6.2.0A login to a backend target port that has been slandered will be re-used, leading to one or more mdisks being excluded. This will cause the relevant storage pool to go offline.
SVAPAR-841808.7.0.0A login to a backend target port that has been slandered will be re-used, leading to one or more mdisks being excluded. This will cause the relevant storage pool to go offline.
SVAPAR-843058.4.0.10A node may warmstart when attempting to run the 'chsnmpserver -community' command without any additional parameter
SVAPAR-843058.5.0.7A node may warmstart when attempting to run the 'chsnmpserver -community' command without any additional parameter
SVAPAR-843058.5.4.0A node may warmstart when attempting to run the 'chsnmpserver -community' command without any additional parameter
SVAPAR-843058.6.0.0A node may warmstart when attempting to run the 'chsnmpserver -community' command without any additional parameter
SVAPAR-843318.4.0.10A node may warmstart when the 'lsnvmefabric -remotenqn' command is run
SVAPAR-843318.5.0.7A node may warmstart when the 'lsnvmefabric -remotenqn' command is run
SVAPAR-843318.6.0.0A node may warmstart when the 'lsnvmefabric -remotenqn' command is run
SVAPAR-850938.5.4.0Systems that are using Policy-Based Replication may experience node warmstarts, if host I/O consists of large write I/Os with a high queue depth
SVAPAR-850938.6.0.0Systems that are using Policy-Based Replication may experience node warmstarts, if host I/O consists of large write I/Os with a high queue depth
SVAPAR-853968.4.0.10Replacement Samsung NVMe drives may show as unsupported, or they may fail during a firmware upgrade as unsupported, due to a VPD read problem
SVAPAR-853968.5.0.7Replacement Samsung NVMe drives may show as unsupported, or they may fail during a firmware upgrade as unsupported, due to a VPD read problem
SVAPAR-853968.5.4.0Replacement Samsung NVMe drives may show as unsupported, or they may fail during a firmware upgrade as unsupported, due to a VPD read problem
SVAPAR-853968.6.0.0Replacement Samsung NVMe drives may show as unsupported, or they may fail during a firmware upgrade as unsupported, due to a VPD read problem
SVAPAR-856408.5.0.12If new nodes/iogroups are added to an SVC cluster that is virtualizing a clustered SpecV system, an attempt to add the SVC node host objects to a host cluster on the backend SpecV system will fail with CLI error code CMMVC8278E due to incorrect policing
SVAPAR-856408.6.0.0If new nodes/iogroups are added to an SVC cluster that is virtualizing a clustered SpecV system, an attempt to add the SVC node host objects to a host cluster on the backend SpecV system will fail with CLI error code CMMVC8278E due to incorrect policing
SVAPAR-856588.5.0.12When replacing a boot drive, the new drive needs to be synchronized with the existing drive. The command to do this appears to run and does not return an error, but the new drive does not actually get synchronized.
SVAPAR-859808.4.0.10iSCSI response times may increase on some systems with 25Gb ethernet adapters, after upgrade to 8.4.0.9 or 8.5.x
SVAPAR-859808.5.0.8iSCSI response times may increase on some systems with 25Gb ethernet adapters, after upgrade to 8.4.0.9 or 8.5.x
SVAPAR-860358.4.0.10Whilst completing a request, a DRP pool attempts to allocate additional metadata space, but there is no free space available. This causes the node to warmstart
SVAPAR-860358.5.0.7Whilst completing a request, a DRP pool attempts to allocate additional metadata space, but there is no free space available. This causes the node to warmstart
SVAPAR-860358.6.0.0Whilst completing a request, a DRP pool attempts to allocate additional metadata space, but there is no free space available. This causes the node to warmstart
SVAPAR-861398.4.0.10Failover for VMware iSER hosts may pause I/O for more than 120 seconds
SVAPAR-861828.6.0.0A node may warmstart if there is an encryption key error that prevents a distributed raid array from being created
SVAPAR-864778.6.0.0In some situations, ordered processes must be replayed to ensure continued management of user workloads. If this processing fails to be scheduled, the work remains locked; software timers that check for this continued activity will detect the stall and force a recovery warmstart
SVAPAR-877298.5.0.8After a system has logged '3201 : Unable to send to the cloud callhome servers', the system may end up with an inconsistency in the Event Log. This inconsistency can cause a number of symptoms, including node warmstarts
SVAPAR-877298.5.4.0After a system has logged '3201 : Unable to send to the cloud callhome servers', the system may end up with an inconsistency in the Event Log. This inconsistency can cause a number of symptoms, including node warmstarts
SVAPAR-877298.6.0.0After a system has logged '3201 : Unable to send to the cloud callhome servers', the system may end up with an inconsistency in the Event Log. This inconsistency can cause a number of symptoms, including node warmstarts
SVAPAR-878468.5.4.0Node warmstarts with unusual workload pattern on volumes with Policy-based replication
SVAPAR-878468.6.0.0Node warmstarts with unusual workload pattern on volumes with Policy-based replication
SVAPAR-882758.6.0.0A single-node warmstart may occur due to a very low-probability timing window in the thin-provisioning component. This can occur when the partner node has just gone offline, causing a loss of access to data
HU022718.6.0.0A single-node warmstart may occur due to a very low-probability timing window in the thin-provisioning component. This can occur when the partner node has just gone offline, causing a loss of access to data
SVAPAR-882798.6.0.0A low probability timing window exists in the Fibre Channel login management code. If there are many logins, and two nodes go offline in a very short time, this may cause other nodes in the cluster to warmstart
SVAPAR-888878.5.0.12Loss of access to data after replacing all boot drives in system
SVAPAR-888878.6.0.0Loss of access to data after replacing all boot drives in system
SVAPAR-891728.6.0.0Snapshot volumes created by running the 'addsnapshot' command from the CLI can be slow to come online, which causes the production volumes to incorrectly go offline
SVAPAR-892718.7.0.0Policy-based Replication is not achieving the link_bandwidth_mbits configured on the partnership if only a single volume group is replicating in an I/O group, or workload is not balanced equally between volume groups owned by both nodes.
SVAPAR-892968.5.0.8Immediately after upgrade from pre-8.4.0 to 8.4.0 or later, EasyTier may stop promoting hot data to the tier0_flash tier if it contains non-FCM storage. This issue will automatically resolve on the next upgrade
SVAPAR-892968.5.4.0Immediately after upgrade from pre-8.4.0 to 8.4.0 or later, EasyTier may stop promoting hot data to the tier0_flash tier if it contains non-FCM storage. This issue will automatically resolve on the next upgrade
SVAPAR-892968.6.0.0Immediately after upgrade from pre-8.4.0 to 8.4.0 or later, EasyTier may stop promoting hot data to the tier0_flash tier if it contains non-FCM storage. This issue will automatically resolve on the next upgrade
SVAPAR-893318.6.0.5Systems running 8.5.2 or higher using IP replication with compression may have low replication bandwidth and high latency due to an issue with the way the data is compressed.
SVAPAR-893318.7.0.2Systems running 8.5.2 or higher using IP replication with compression may have low replication bandwidth and high latency due to an issue with the way the data is compressed.
SVAPAR-893318.7.1.0Systems running 8.5.2 or higher using IP replication with compression may have low replication bandwidth and high latency due to an issue with the way the data is compressed.
SVAPAR-896928.5.0.8Battery back-up units may reach end of life prematurely on FS9500 / SV3 systems, despite the batteries being in good physical health, which will result in node errors and potentially nodes going offline if both batteries are affected
SVAPAR-896928.6.0.0Battery back-up units may reach end of life prematurely on FS9500 / SV3 systems, despite the batteries being in good physical health, which will result in node errors and potentially nodes going offline if both batteries are affected
SVAPAR-896948.4.0.11Kernel panics might occur on a subset of Spectrum Virtualize Hardware Platforms with a 10G Ethernet adapter running 8.4.0.10, 8.5.0.7 and 8.5.3.1 when taking a snap. For more details refer to this Flash
SVAPAR-896948.5.0.8Kernel panics might occur on a subset of Spectrum Virtualize Hardware Platforms with a 10G Ethernet adapter running 8.4.0.10, 8.5.0.7 and 8.5.3.1 when taking a snap. For more details refer to this Flash
SVAPAR-897648.6.0.0An issue with the asynchronous background delete behavior of Safeguarded Copy VDisks can cause an unexpected internal state in the FlashCopy component, which can cause a single node assert
SVAPAR-897808.5.4.0A node may warmstart after running the flashcopy command 'stopfcconsistgrp' due to the flashcopy maps in the consistency group being in an invalid state
SVAPAR-897808.6.0.0A node may warmstart after running the flashcopy command 'stopfcconsistgrp' due to the flashcopy maps in the consistency group being in an invalid state
SVAPAR-897818.5.4.0The 'lsportstats' command does not work via the REST API until code level 8.5.4.0
SVAPAR-897818.6.0.0The 'lsportstats' command does not work via the REST API until code level 8.5.4.0
SVAPAR-899518.5.4.0A single node warmstart might occur when a volume group with a replication policy switches the replication to cycling mode.
SVAPAR-899518.6.0.0A single node warmstart might occur when a volume group with a replication policy switches the replication to cycling mode.
SVAPAR-903958.5.0.8FS9500 and SV3 might suffer from poor Remote Copy performance due to a lack of internal messaging resources
SVAPAR-903958.5.4.0FS9500 and SV3 might suffer from poor Remote Copy performance due to a lack of internal messaging resources
SVAPAR-903958.6.0.0FS9500 and SV3 might suffer from poor Remote Copy performance due to a lack of internal messaging resources
SVAPAR-904388.5.0.8A conflict of host IO on one node, with an array resynchronisation task on the partner node, can result in some regions of parity inconsistency. This is due to the asynchronous parity update behaviour leaving invalid parity in the RAID internal cache
SVAPAR-904388.6.0.0A conflict of host IO on one node, with an array resynchronisation task on the partner node, can result in some regions of parity inconsistency. This is due to the asynchronous parity update behaviour leaving invalid parity in the RAID internal cache
SVAPAR-904598.5.4.0Possible undetected data corruption or multiple node warmstarts if a Traditional FlashCopy Clone of a volume is created before adding Volume Group Snapshots to the volume
SVAPAR-904598.6.0.0Possible undetected data corruption or multiple node warmstarts if a Traditional FlashCopy Clone of a volume is created before adding Volume Group Snapshots to the volume
SVAPAR-911118.5.4.0USB devices connected to an FS5035 node may be formatted on upgrade to 8.5.3 software
SVAPAR-911118.6.0.0USB devices connected to an FS5035 node may be formatted on upgrade to 8.5.3 software
SVAPAR-913578.6.2.0Cancelling a drive software upgrade may cause multiple node warmstarts, due to a timing window.
SVAPAR-913578.7.0.0Cancelling a drive software upgrade may cause multiple node warmstarts, due to a timing window.
SVAPAR-918608.5.0.10If an upgrade is started with the pause flag and then aborted, the pause flag may not be cleared. This can trigger the system to encounter an unexpected code path on the next upgrade, thereby causing a loss of access to data
SVAPAR-918608.6.0.0If an upgrade is started with the pause flag and then aborted, the pause flag may not be cleared. This can trigger the system to encounter an unexpected code path on the next upgrade, thereby causing a loss of access to data
SVAPAR-919378.6.2.0Externally virtualized managed disks may be taken offline during SCSI queue full conditions, when there are an excessive number of other SCSI errors in the system
SVAPAR-919378.7.0.0Externally virtualized managed disks may be taken offline during SCSI queue full conditions, when there are an excessive number of other SCSI errors in the system
SVAPAR-920668.6.0.0Node warmstarts can occur after running the 'lsvdiskfcmapcopies' command if Safeguarded Copy is used
SVAPAR-925798.6.0.0If Volume Group Snapshots are in use on a Policy-Based Replication DR system, a timing window may result in a node warmstart for one or both nodes in the I/O group
SVAPAR-928048.7.2.0SAS direct attach host path is not recovered after a node reboot, causing a persistent loss of redundant paths.
SVAPAR-928049.1.0.0SAS direct attach host path is not recovered after a node reboot, causing a persistent loss of redundant paths.
SVAPAR-929838.6.0.0An issue prevents remote users with an SSH key from connecting to the storage system if BatchMode is enabled
SVAPAR-930548.5.0.12Backend systems on 8.2.1 and beyond have an issue that causes capacity information updates to stop after a T2 or T3 is performed. This affects all backend systems with FCM arrays
SVAPAR-930548.6.0.0Backend systems on 8.2.1 and beyond have an issue that causes capacity information updates to stop after a T2 or T3 is performed. This affects all backend systems with FCM arrays
SVAPAR-933098.5.0.12A node may briefly go offline after a battery firmware update
SVAPAR-933098.6.0.0A node may briefly go offline after a battery firmware update
SVAPAR-934428.6.0.0User ID does not have the authority to submit a command in some LDAP environments
SVAPAR-934458.5.0.17A single node warmstart may occur due to a very low-probability timing window related to NVMe drive management.
SVAPAR-934458.6.0.10A single node warmstart may occur due to a very low-probability timing window related to NVMe drive management.
SVAPAR-934458.7.2.0A single node warmstart may occur due to a very low-probability timing window related to NVMe drive management.
SVAPAR-934459.1.0.0A single node warmstart may occur due to a very low-probability timing window related to NVMe drive management.
SVAPAR-937098.6.0.4A problem with NVMe drives may impact node to node communication over the PCIe bus. This may lead to a temporary array offline.
SVAPAR-937098.6.2.0A problem with NVMe drives may impact node to node communication over the PCIe bus. This may lead to a temporary array offline.
SVAPAR-937098.7.0.0A problem with NVMe drives may impact node to node communication over the PCIe bus. This may lead to a temporary array offline.
SVAPAR-939878.5.0.6A timeout may cause a single node warmstart, if a FlashCopy configuration change occurs while there are many I/O requests outstanding for a source volume which has multiple FlashCopy targets
SVAPAR-939878.5.2.0A timeout may cause a single node warmstart, if a FlashCopy configuration change occurs while there are many I/O requests outstanding for a source volume which has multiple FlashCopy targets
SVAPAR-939878.6.0.0A timeout may cause a single node warmstart, if a FlashCopy configuration change occurs while there are many I/O requests outstanding for a source volume which has multiple FlashCopy targets
SVAPAR-941798.4.0.12Faulty hardware within or connected to the CPU can result in a reboot of the affected node. However, this can sometimes also cause a reboot of the partner node
SVAPAR-941798.5.0.9Faulty hardware within or connected to the CPU can result in a reboot of the affected node. However, this can sometimes also cause a reboot of the partner node
SVAPAR-941798.6.0.1Faulty hardware within or connected to the CPU can result in a reboot of the affected node. However, this can sometimes also cause a reboot of the partner node
SVAPAR-941798.6.1.0Faulty hardware within or connected to the CPU can result in a reboot of the affected node. However, this can sometimes also cause a reboot of the partner node
SVAPAR-941798.7.0.0Faulty hardware within or connected to the CPU can result in a reboot of the affected node. However, this can sometimes also cause a reboot of the partner node
SVAPAR-946828.6.0.0SMTP fails if the email server's domain name is longer than 40 characters
SVAPAR-946868.5.0.10The GUI can become slow and unresponsive due to a steady stream of configuration updates such as 'svcinfo' queries for the latest configuration data
SVAPAR-946868.6.0.0The GUI can become slow and unresponsive due to a steady stream of configuration updates such as 'svcinfo' queries for the latest configuration data
SVAPAR-947038.6.0.0The estimated compression savings value shown in the GUI for a single volume is incorrect; the total savings for all volumes in the system is shown instead
SVAPAR-949028.6.0.0When attempting to enable local port masking for a specific subset of control enclosure-based clusters, this may fail with the following message: 'The specified port mask cannot be applied because insufficient paths would exist for node communication'
SVAPAR-949568.6.0.0When iSER clustering is configured with a default gateway of 0.0.0.0, the node IPs will not be activated during boot after a reboot or warmstart, and the node will remain offline in 550/551 state
SVAPAR-953498.6.0.0Adding a hyperswap volume copy to a clone of a Volume Group Snapshot may cause all nodes to warmstart, causing a loss of access
SVAPAR-953848.6.0.1In very rare circumstances, a timing window may cause a single node warmstart when creating a volume using policy-based replication
SVAPAR-953848.6.1.0In very rare circumstances, a timing window may cause a single node warmstart when creating a volume using policy-based replication
SVAPAR-953848.7.0.0In very rare circumstances, a timing window may cause a single node warmstart when creating a volume using policy-based replication
SVAPAR-966568.6.0.0VMware hosts may experience errors creating snapshots, due to an issue in the VASA Provider
SVAPAR-967778.6.1.0Policy-based Replication uses journal resources to handle replication. If these resources become exhausted, the volume groups with the highest RPO and the most resources should be purged to free up resources for other volume groups. The decision about which volume groups to purge is made incorrectly, potentially causing too many volume groups to exceed their target RPO
SVAPAR-967778.7.0.0Policy-based Replication uses journal resources to handle replication. If these resources become exhausted, the volume groups with the highest RPO and the most resources should be purged to free up resources for other volume groups. The decision about which volume groups to purge is made incorrectly, potentially causing too many volume groups to exceed their target RPO
SVAPAR-969528.6.2.0A single node warmstart may occur when updating the login counts associated with a backend controller.
SVAPAR-969528.7.0.0A single node warmstart may occur when updating the login counts associated with a backend controller.
SVAPAR-975028.6.0.1Configurations that use Policy-based Replication with standard pool change volumes will raise space usage warnings
SVAPAR-975028.6.1.0Configurations that use Policy-based Replication with standard pool change volumes will raise space usage warnings
SVAPAR-975028.7.0.0Configurations that use Policy-based Replication with standard pool change volumes will raise space usage warnings
SVAPAR-981288.6.0.1A single node warmstart may occur on upgrade to 8.6.0.0, on SA2 nodes with 25Gb ethernet adapters
SVAPAR-981288.6.1.0A single node warmstart may occur on upgrade to 8.6.0.0, on SA2 nodes with 25Gb ethernet adapters
SVAPAR-981288.7.0.0A single node warmstart may occur on upgrade to 8.6.0.0, on SA2 nodes with 25Gb ethernet adapters
SVAPAR-981848.6.0.1When a Volume Group Snapshot clone is added to a replication policy before the clone is complete, the system may repeatedly warmstart when the Policy-based Replication volume group is changed to independent access
SVAPAR-981848.6.1.0When a Volume Group Snapshot clone is added to a replication policy before the clone is complete, the system may repeatedly warmstart when the Policy-based Replication volume group is changed to independent access
SVAPAR-981848.7.0.0When a Volume Group Snapshot clone is added to a replication policy before the clone is complete, the system may repeatedly warmstart when the Policy-based Replication volume group is changed to independent access
SVAPAR-984978.5.0.17Excessive SSH logging may cause the Configuration node boot drive to become full. The node will go offline with error 565, indicating a boot drive failure
SVAPAR-984978.6.0.1Excessive SSH logging may cause the Configuration node boot drive to become full. The node will go offline with error 565, indicating a boot drive failure
SVAPAR-984978.6.1.0Excessive SSH logging may cause the Configuration node boot drive to become full. The node will go offline with error 565, indicating a boot drive failure
SVAPAR-984978.7.0.0Excessive SSH logging may cause the Configuration node boot drive to become full. The node will go offline with error 565, indicating a boot drive failure
SVAPAR-985678.5.0.9In FS50xx nodes, the TPM may become unresponsive after a number of weeks' runtime. This can lead to encryption or mdisk group CLI commands failing, or in some cases node warmstarts. This issue was partially addressed by SVAPAR-83290, but is fully resolved by this second fix.
SVAPAR-985678.6.0.0In FS50xx nodes, the TPM may become unresponsive after a number of weeks' runtime. This can lead to encryption or mdisk group CLI commands failing, or in some cases node warmstarts. This issue was partially addressed by SVAPAR-83290, but is fully resolved by this second fix.
SVAPAR-985768.5.0.10Customers cannot edit certain properties of a flashcopy mapping via the GUI flashcopy mappings panel as the edit modal does not appear.
SVAPAR-985768.6.0.2Customers cannot edit certain properties of a flashcopy mapping via the GUI flashcopy mappings panel as the edit modal does not appear.
SVAPAR-985768.6.1.0Customers cannot edit certain properties of a flashcopy mapping via the GUI flashcopy mappings panel as the edit modal does not appear.
SVAPAR-985768.7.0.0Customers cannot edit certain properties of a flashcopy mapping via the GUI flashcopy mappings panel as the edit modal does not appear.
SVAPAR-986118.5.0.12The system returns an incorrect retry delay timer for a SCSI BUSY status response to AIX hosts when an attempt is made to access a VDisk that is not mapped to the host
SVAPAR-986118.6.0.1The system returns an incorrect retry delay timer for a SCSI BUSY status response to AIX hosts when an attempt is made to access a VDisk that is not mapped to the host
SVAPAR-986118.6.1.0The system returns an incorrect retry delay timer for a SCSI BUSY status response to AIX hosts when an attempt is made to access a VDisk that is not mapped to the host
SVAPAR-986118.7.0.0The system returns an incorrect retry delay timer for a SCSI BUSY status response to AIX hosts when an attempt is made to access a VDisk that is not mapped to the host
SVAPAR-986128.6.0.1Creating a volume group snapshot with an invalid I/O group value may trigger multiple node warmstarts
SVAPAR-986128.6.1.0Creating a volume group snapshot with an invalid I/O group value may trigger multiple node warmstarts
SVAPAR-986128.7.0.0Creating a volume group snapshot with an invalid I/O group value may trigger multiple node warmstarts
SVAPAR-986728.5.0.9VMware hosts may crash on servers connected using NVMe over Fibre Channel with the host_unmap setting disabled
SVAPAR-986728.6.0.1VMware hosts may crash on servers connected using NVMe over Fibre Channel with the host_unmap setting disabled
SVAPAR-988938.6.0.1If an external storage controller has over-provisioned storage (for example a FlashSystem with an FCM array), the system may incorrectly display usable capacity data for mdisks from that controller. If connectivity to the storage controller is lost, node warmstarts may occur
SVAPAR-988938.6.1.0If an external storage controller has over-provisioned storage (for example a FlashSystem with an FCM array), the system may incorrectly display usable capacity data for mdisks from that controller. If connectivity to the storage controller is lost, node warmstarts may occur
SVAPAR-988938.7.0.0If an external storage controller has over-provisioned storage (for example a FlashSystem with an FCM array), the system may incorrectly display usable capacity data for mdisks from that controller. If connectivity to the storage controller is lost, node warmstarts may occur
SVAPAR-989718.5.0.9The GUI may show repeated invalid pop-ups stating configuration node failover has occurred
SVAPAR-991758.5.0.10A node may warmstart due to an invalid queuing mechanism in cache, which can cause I/O in cache to be placed in the same processing queue more than once.
SVAPAR-991758.6.0.1A node may warmstart due to an invalid queuing mechanism in cache, which can cause I/O in cache to be placed in the same processing queue more than once.
SVAPAR-991758.6.2.0A node may warmstart due to an invalid queuing mechanism in cache, which can cause I/O in cache to be placed in the same processing queue more than once.
SVAPAR-991758.7.0.0A node may warmstart due to an invalid queuing mechanism in cache, which can cause I/O in cache to be placed in the same processing queue more than once.
SVAPAR-992738.5.0.10If a SAN switch's Fabric Controller issues an abort (ABTS) command, and then issues an RSCN command before the abort has completed, this unexpected switch behaviour can trigger a single-node warmstart.
SVAPAR-992738.5.2.0If a SAN switch's Fabric Controller issues an abort (ABTS) command, and then issues an RSCN command before the abort has completed, this unexpected switch behaviour can trigger a single-node warmstart.
SVAPAR-992738.6.0.0If a SAN switch's Fabric Controller issues an abort (ABTS) command, and then issues an RSCN command before the abort has completed, this unexpected switch behaviour can trigger a single-node warmstart.
SVAPAR-993548.6.0.1Missing policing in the 'startfcconsistgrp' command for volumes using volume group snapshots, resulting in node warmstarts when creating a new volume group snapshot
SVAPAR-993548.6.2.0Missing policing in the 'startfcconsistgrp' command for volumes using volume group snapshots, resulting in node warmstarts when creating a new volume group snapshot
SVAPAR-993548.7.0.0Missing policing in the 'startfcconsistgrp' command for volumes using volume group snapshots, resulting in node warmstarts when creating a new volume group snapshot
SVAPAR-995378.5.0.12If a hyperswap volume copy is created in a DRP child pool, and the parent pool has FCM storage, the change volumes will be created as thin-provisioned instead of compressed
SVAPAR-995378.6.0.1If a hyperswap volume copy is created in a DRP child pool, and the parent pool has FCM storage, the change volumes will be created as thin-provisioned instead of compressed
SVAPAR-995378.6.1.0If a hyperswap volume copy is created in a DRP child pool, and the parent pool has FCM storage, the change volumes will be created as thin-provisioned instead of compressed
SVAPAR-995378.7.0.0If a hyperswap volume copy is created in a DRP child pool, and the parent pool has FCM storage, the change volumes will be created as thin-provisioned instead of compressed
SVAPAR-998558.6.0.1After battery firmware is upgraded on SV3 or FS9500 as part of a software upgrade, there is a small probability that the battery may remain permanently offline
SVAPAR-999978.6.0.2Creating a volume group from a snapshot whose index is greater than 255 may cause incorrect output from 'lsvolumegroup'
SVAPAR-999978.6.1.0Creating a volume group from a snapshot whose index is greater than 255 may cause incorrect output from 'lsvolumegroup'
SVAPAR-999978.7.0.0Creating a volume group from a snapshot whose index is greater than 255 may cause incorrect output from 'lsvolumegroup'

[{"Line of Business":{"code":"LOB71","label":"Storage HW"},"Business Unit":{"code":"BU070","label":"IBM Infrastructure"},"Product":{"code":"STPVGU","label":"SAN Volume Controller"},"ARM Category":[],"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"All Version(s)"},{"Type":"MASTER","Line of Business":{"code":"LOB71","label":"Storage HW"},"Business Unit":{"code":"BU070","label":"IBM Infrastructure"},"Product":{"code":"ST3FR7","label":"IBM Storwize V7000"},"ARM Category":[{"code":"a8m3p000000GoMdAAK","label":"APARs"}],"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"All Versions"},{"Type":"MASTER","Line of Business":{"code":"LOB26","label":"Storage"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"STSLR9","label":"IBM FlashSystem 9x00"},"ARM Category":[{"code":"a8m3p000000GoMdAAK","label":"APARs"}],"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"All Versions"},{"Type":"MASTER","Line of Business":{"code":"LOB26","label":"Storage"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"STHGUJ","label":"IBM Storwize V5000"},"ARM Category":[{"code":"a8m3p000000GoMdAAK","label":"APARs"}],"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"All Versions"},{"Type":"MASTER","Line of Business":{"code":"LOB26","label":"Storage"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"ST3FR9","label":"IBM FlashSystem 5x00"},"ARM Category":[{"code":"a8m3p000000GoMdAAK","label":"APARs"}],"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"All Versions"},{"Type":"MASTER","Line of Business":{"code":"LOB26","label":"Storage"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"SSA76Z4","label":"IBM FlashSystem 7x00"},"ARM Category":[{"code":"a8m3p000000GoMdAAK","label":"APARs"}],"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"All Versions"}]

Document Information

Modified date:
25 November 2025

UID

ibm16340241