Summary of changes
This topic summarizes changes to the IBM Storage Scale licensed program and the IBM Storage Scale library. Within each topic, change markers surrounding text or illustrations indicate technical changes or additions that were made to the previous edition of the information.
for IBM Storage Scale 6.0.0
as updated, December 2025
- IBM Storage Scale Big Data and Analytics changes
- For information on changes in IBM Storage Scale Big Data and Analytics support and the HDFS protocol, see IBM Storage Scale Big Data and Analytics - summary of changes.
- IBM Storage Scale Erasure Code Edition changes
- For information on changes in IBM Storage Scale Erasure Code Edition, see IBM Storage Scale Erasure Code Edition - Summary of changes.
- IBM Storage Scale Container Native Storage Access
- For information on changes in IBM Storage Scale Container Native Storage Access, see What's new? in the IBM Storage Scale Container Native documentation.
- IBM Storage Scale Container Storage Interface driver
- For information on changes in the IBM Storage Scale Container Storage Interface driver, see Summary of changes in the IBM Storage Scale Container Storage Interface Driver documentation.
- AFM changes
-
- Asynchronous notifications to detect changes on a cloud object store instead of polling. For more information, see Asynchronous notifications and Amazon SQS integration.
- Configuration of an AFM to cloud object storage fileset by using proxy endpoints. For more information, see Creating a fileset by using proxy endpoints.
- AFM to cloud object storage fileset with S3 Glacier storage classes support for all fileset modes. For more information, see Configuring an AFM to cloud object storage fileset with S3 Glacier storage classes.
- Support for custom certificates for AFM cache filesets with cloud object storage. For more information, see Configuring AFM filesets with custom SSL certificates.
- Removal of the afmEnableADR parameter. AFM DR is now enabled by default. If it is disabled, the AFM DR relationship does not work.
- Call home changes
- EU Data Act compliance implemented.
- Cloudkit changes
The following features are supported:
- On Microsoft Azure, automated deployment and configuration of CES or protocol nodes.
- Extended support for RHEL 9.6.
- Cloudkit binary packaging and its dependencies as an RPM package.

- Commands, data types, and programming APIs
- The following section lists the modifications to the documented commands, structures, and subroutines:
- Changed commands
-
- mmaddcallback
- mmadddisk
- mmafmcosconfig
- mmcallhome
- mmces
- mmchattr
- mmchconfig
- mmchfileset
- mmchfs
- mmchlicense
- mmchnode
- mmcrfs
- mmfsck
- mmlsattr
- mmlscluster
- mmlsfs
- mmlspool
- mmperfmon
- mmpmon
- mmqos
- mms3
- scalectl filesystem
- scalectl node
- Decentralized deployment of DAT
-
Starting with 6.0.0.1, decentralized deployment of the data acceleration tier (DAT) is supported in IBM Storage Scale System 6000. For more information, see Decentralized DAT deployment in the IBM Storage Scale System 6000: Hardware Planning and Installation Guide.
- File system core improvements
-
- Enhanced online file system checks through improvements to the mmfsckx command. The mmfsckx command can now detect and report inconsistencies in a directory block structure. This feature only detects and reports the directory block inconsistency but does not repair it. For more information, see mmfsckx command.
- Starting with IBM Storage Scale 6.0.0, for newly created clusters only, the default value of prefetchLargeBlockThreshold changes to 4194304. This change causes prefetching to speed up more gradually for file systems whose block sizes are equal to or greater than 4 MiB. Typically, this more gradual increase in prefetching speed reduces read amplification effects. The prefetchLargeBlockThreshold option of the mmchconfig command can be changed without restarting the IBM Storage Scale mmfsd daemon by using the mmchconfig -i option. When the prefetchLargeBlockThreshold option is set, it defines the minimum block size of a file system for which the new, more gradual (less abrupt) prefetching mode is enabled. For clusters created before version 6.0.0, the default value of prefetchLargeBlockThreshold remains 33554432.
- New parameter for the mmchconfig command to set the TCP communications mode. Specifies the TCP connections mode: multi-rail over TCP (MROT) or multiple connections over TCP (MCOT). For more information, see mmchconfig command in the IBM Storage Scale: Command and Programming Reference Guide.
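The prefetchLargeBlockThreshold semantics described above can be sketched as follows; the function and constant names are illustrative and are not part of the product:

```python
# Illustrative sketch of the prefetchLargeBlockThreshold semantics;
# names here are hypothetical, not IBM Storage Scale APIs.

DEFAULT_THRESHOLD_NEW_CLUSTERS = 4194304    # 4 MiB, clusters created at 6.0.0 or later
DEFAULT_THRESHOLD_OLD_CLUSTERS = 33554432   # 32 MiB, clusters created before 6.0.0

def uses_gradual_prefetch(fs_block_size: int, threshold: int) -> bool:
    """The gradual (less abrupt) prefetching mode applies to file
    systems whose block size is at least the configured threshold."""
    return fs_block_size >= threshold

# A file system with a 4 MiB block size on a newly created cluster:
print(uses_gradual_prefetch(4 * 1024 * 1024, DEFAULT_THRESHOLD_NEW_CLUSTERS))  # True
```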
- Expelled nodes:
  - When a GPFS node is expelled by the cluster manager, the mmfs.log message now provides clearer details about the reason for the expel decision.
  - A new postExpel event is introduced. If a GPFS node leaves the cluster unexpectedly (for example, due to lost leases, mmexpelnode invocations, or inter-node RPC failures), this event allows users to optionally trigger customized logging and post-expel actions through a callback.
  - The mmhealth component also reports such expels through its own postExpel callback.
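A callback registered for the postExpel event is an arbitrary executable; a minimal handler might simply log whatever arguments it receives. The parameters GPFS passes depend on how the callback is registered, so the argument handling below is an illustrative assumption, not the documented interface:

```python
# Hypothetical handler for the postExpel callback event. The actual
# parameters passed to a registered callback depend on the variables
# chosen at registration time; this sketch just records them.
import sys
from datetime import datetime, timezone

def format_expel_record(args):
    """Build one log line from whatever arguments the callback received."""
    stamp = datetime.now(timezone.utc).isoformat()
    return f"{stamp} postExpel {' '.join(args)}"

def main(argv, log_path="/var/log/postexpel.log"):
    # Append one record per invocation; log_path is an assumed location.
    with open(log_path, "a") as log:
        log.write(format_expel_record(argv) + "\n")

if __name__ == "__main__" and len(sys.argv) > 1:
    main(sys.argv[1:])
```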
- Bash completion added for mmhealth, mmcallhome, and mmperfmon.
- IBM Storage Scale native REST API updates
- Added the scalectl node batchAdd command and the /scalemgmt/v3/nodes:batchAdd: POST endpoint for native REST API-enabled clusters to add multiple nodes with JSON or node descriptor file input. For more information, see scalectl node command and /scalemgmt/v3/nodes:batchAdd: POST.
- Added the scalectl filesystem quota command and the /scalemgmt/v3/filesystems/{filesystem}/quotas endpoints for file system quota management. For more information, see scalectl filesystem command and Quota management endpoints.
- Added support for ARM architecture.
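The :batchAdd endpoint accepts multiple nodes in a single POST request. The exact JSON schema is not reproduced here, so the payload shape in this sketch is an assumption for illustration; consult the endpoint reference for the actual fields:

```python
# Sketch of assembling a request body for the
# /scalemgmt/v3/nodes:batchAdd POST endpoint. The payload field names
# ("nodes", "adminNodeName") are assumptions, not the documented schema.
import json

def build_batch_add_payload(node_names):
    """Assemble a JSON document listing the nodes to add in one batch."""
    return json.dumps({"nodes": [{"adminNodeName": n} for n in node_names]})

body = build_batch_add_payload(["node1.example.com", "node2.example.com"])
print(body)
```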

- Installation toolkit changes
-
The following features and characteristics are supported:
- Extended certification and support for operating systems.
- Enhanced problem determination during installation and upgrade procedures.
- Deployment of IBM Storage Scale native REST API on ARM 64 architectures.
- Deployment of IBM Storage Scale native REST API for IBM Storage Scale Erasure Code Edition.
- Upgrade of IBM Storage Scale native REST API.
- Exclusion of nodes during installation and deployment procedures.
- Deployment of HDFS Transparency 3.3.6 during installation and upgrade.
- Management API changes
- New endpoints are added to support content-aware storage and AFM.
For more information, see IBM Storage Scale management API endpoints.
- Management GUI Changes
-
- Modernization of the GUI toward IBM's Carbon design standard, with enhancements to the node and events pages.
- Support for ARM.
- Support for nodes booted in FIPS mode.
- GUI support for JBOF and DAT configurations.
- Improvements to support the creation of many filesets through the REST API.
For more information, see Introduction to IBM Storage Scale GUI.
- Performance monitoring changes
-
- Perfmon performance is improved for large-scale queries involving many filesets.
- Clock alignment added for sensors to synchronize measurements across nodes.
- REST API now supports measurement queries with aggregation and group-by.
- Prometheus exporter now supports query filtering to reduce data load.
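Conceptually, a measurement query with aggregation and group-by reduces many per-node samples to one value per group. The data and helper below are an illustrative sketch of that behavior, not the REST API itself:

```python
# Conceptual sketch of an aggregated, grouped measurement query of the
# kind the performance monitoring REST API now supports; the sample
# data and helper function are illustrative, not a product API.
from collections import defaultdict

def aggregate(samples, group_key, agg="sum"):
    """samples: iterable of dicts, each with a grouping key and a 'value'."""
    groups = defaultdict(list)
    for s in samples:
        groups[s[group_key]].append(s["value"])
    if agg == "sum":
        return {k: sum(v) for k, v in groups.items()}
    if agg == "avg":
        return {k: sum(v) / len(v) for k, v in groups.items()}
    raise ValueError(f"unsupported aggregation: {agg}")

samples = [
    {"node": "n1", "value": 10},
    {"node": "n1", "value": 30},
    {"node": "n2", "value": 5},
]
print(aggregate(samples, "node", "sum"))  # {'n1': 40, 'n2': 5}
```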
- Protocol changes
-
- Updated Samba on IBM Storage Scale to the upstream Samba 4.22 version.
- Implemented SMB file listing optimization by using asynchronous threads.
- If slowness is observed when listing a directory with a large number of files or directories (>= 50000), the optimization can be enabled by changing the smbd async dosmode option to "yes" at the global or share level by using the mmsmb command.
- A new SMB configuration option, vfs mkdir use tmp name, is set to no by default to keep behavior consistent with older versions and to avoid errors in notifications.
- The mmuserauth command in IBM Storage Scale is tightly coupled with the SMB 4.22 version when you use the --enable-nfs-kerberos option. You cannot use this command with older SMB versions.
- Python changes
- Python 3.10 or later is now required for IBM Storage Scale. Earlier releases supported Python 3.8 or later.
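A quick way to confirm that a node's interpreter meets the new minimum level before upgrading:

```python
# Check that the running interpreter satisfies the Python 3.10 minimum
# required by IBM Storage Scale 6.0.0 (earlier releases accepted 3.8+).
import sys

def meets_minimum(version_info=sys.version_info, minimum=(3, 10)):
    return tuple(version_info[:2]) >= minimum

print(meets_minimum((3, 8)))   # False: old minimum, too low for 6.0.0
print(meets_minimum((3, 12)))  # True
```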
- System health monitoring changes
-
- AFM replication errors now detected and reported via mmhealth.
- Node expel monitoring with expel details added.
- TIPs added for filesystem -n mismatch and quorum node redundancy.
- New default threshold for network errors.
- Added S3 webhook connection health monitoring and warnings when exceeding S3 account or bucket limits.
- mmpmon socket and proxy failures are now detected and reported.
- The mmhealth node show --unhealthy command now hides headers of healthy nodes.
- S3 protocol
- Authentication by using LDAP is supported.

- Quality of service for I/O operations (QoS)
-
- The QoS feature to rebalance I/O limits among nodes has been renamed to Dynamic QoS Rebalancing (DQR). DQR was previously known as Mount Dynamic I/O (MDIO) in IBM Storage Scale versions before 6.0.0. The existing MDIO configuration setting mdio-enabled for user classes and the throttle for mdio-all-sharing to enable QoS class sharing remain valid and compatible with DQR. However, it is recommended to use the new DQR attributes for any future changes. If the cluster's minReleaseLevel is 6.0.0 or higher, attributes and class names display the dqr prefix; otherwise, they display the legacy mdio prefix.
- Filesystem-level QoS feature is available as a technology preview in IBM Storage Scale 6.0.0. For instructions on how to enable and run the Filesystem-level QoS feature, sign up for the technology preview by contacting scale@us.ibm.com.
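The prefix rule described for DQR attribute names can be sketched as follows; the version-parsing helper is illustrative, not a product API:

```python
# Sketch of the attribute-name prefix rule described above: clusters at
# minReleaseLevel 6.0.0 or higher display the dqr prefix, older ones
# the legacy mdio prefix. The helper function is illustrative only.
def attribute_prefix(min_release_level: str) -> str:
    major, minor, *_ = (int(p) for p in min_release_level.split("."))
    return "dqr" if (major, minor) >= (6, 0) else "mdio"

print(attribute_prefix("5.1.9"))  # mdio
print(attribute_prefix("6.0.0"))  # dqr
```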