IBM Tivoli Storage Productivity Center, Version 5.2.6

New for Tivoli Storage Productivity Center Version 5.2.6

New features and enhancements are available in IBM® Tivoli® Storage Productivity Center Version 5.2.6.

The following features and enhancements are available for all product licenses except where noted:

Monitoring IBM FlashSystem™ V9000 storage systems
You can add FlashSystem V9000 storage systems for monitoring.
See Adding storage systems.
Viewing cache performance metrics for IBM System Storage® SAN Volume Controller, IBM Storwize®, and IBM FlashSystem storage systems
You can now view performance metrics for the new cache architecture that was introduced in SAN Volume Controller V7.3. For example, you can view new metrics for the volume cache and the volume copy cache. These performance metric updates are available for SAN Volume Controller, Storwize, and FlashSystem block storage systems whose firmware version is V7.3 or later.
The existing metrics are also updated to support the new cache architecture.
Monitoring IBM Spectrum Scale™ storage systems
IBM Spectrum Scale, formerly known as IBM General Parallel File System (GPFS™), is a software-defined storage solution that processes large amounts of data in a distributed environment on a single GPFS cluster. IBM Spectrum Scale is a member of the IBM Spectrum Storage™ family.
Monitoring IBM Spectrum Scale without root privileges
You can add an IBM Spectrum Scale storage system without specifying a user that has root privileges. Before you add the storage system for monitoring, you must add the user to the sudoers file on the GPFS cluster node that is used for authentication.
See Monitoring IBM Spectrum Scale without requiring root privileges.
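The sudoers entries below are an illustrative sketch only: the user name tpcmon and the list of GPFS commands are assumptions for the example, not the documented requirement. Check the product documentation for the exact command set that the monitoring user must be allowed to run.

```
# Illustrative /etc/sudoers entries (edit with visudo).
# "tpcmon" and the command list are assumptions, not the documented set.
tpcmon  ALL=(root) NOPASSWD: /usr/lpp/mmfs/bin/mmlscluster, \
                             /usr/lpp/mmfs/bin/mmlsfs, \
                             /usr/lpp/mmfs/bin/mmlsfileset

# Allow sudo without a controlling TTY, so commands can run over SSH.
Defaults:tpcmon !requiretty
```

With entries like these in place, the monitoring user can run the listed GPFS administration commands through sudo without a password prompt, which is what allows monitoring without root credentials.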
Viewing the inodes by independent fileset charts
You can now see more information about the inode usage for your filesets. You can view charts on the Overview page for storage systems that show the following information about inode usage:
  • The filesets with the fewest available inodes
  • The filesets with the largest number of inodes
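The ranking behind these two charts can be sketched as follows; the fileset names and inode counts here are hypothetical sample data, not output from the product.

```python
# Hypothetical fileset inode data; the fields mirror the two chart rankings.
filesets = [
    {"name": "fs_home",    "max_inodes": 100_000, "used_inodes": 99_500},
    {"name": "fs_data",    "max_inodes": 500_000, "used_inodes": 120_000},
    {"name": "fs_scratch", "max_inodes": 50_000,  "used_inodes": 10_000},
]

# Available inodes = allocated maximum minus inodes already in use.
for f in filesets:
    f["available_inodes"] = f["max_inodes"] - f["used_inodes"]

# Chart 1: filesets with the fewest available inodes (closest to running out).
fewest_available = sorted(filesets, key=lambda f: f["available_inodes"])

# Chart 2: filesets with the largest number of inodes.
largest = sorted(filesets, key=lambda f: f["max_inodes"], reverse=True)

print(fewest_available[0]["name"])  # fs_home (only 500 inodes left)
print(largest[0]["name"])           # fs_data (500,000 inodes allocated)
```

The first ranking highlights filesets at risk of exhausting their inodes; the second shows where inodes are concentrated.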
Viewing dependencies between GPFS clusters
You can view information about the following monitored resources:
  • GPFS clusters that share file systems with the storage system
  • File systems that are remotely mounted from related GPFS clusters
You can also view all the nodes across all the monitored clusters that mount a file system.
Viewing information about NPV switches
You can view information about Cisco NPV switches from the Fabrics page. You can also view information about the routes from ports that are connected to a hypervisor, server, or storage system to ports that are connected to fabrics. You can view the internal routes from Fibre Channel ports to proxy node ports, that is, from F_ports to NP_ports.
NPV switches are switches that are configured to work in N_Port Virtualization (NPV) mode. NPV switches use N_Port ID Virtualization (NPIV) technology to allow multiple logical connections to a fabric switch port.
The following figure shows an NPV switch on the NPV Switches page:
Maximizing storage efficiency and performance by tiering storage and balancing pools
You analyze the tiering of your block storage to place volumes on the tiers that match their performance requirements. It is now much easier to define the thresholds that determine the placement of volumes and the thresholds that prevent re-tiered volumes from overloading destination pools.
You can define two types of thresholds, I/O rate or I/O density rate, which determine whether a volume is up-tiered or down-tiered. To ensure that pools do not become overloaded when volumes are added, you define maximum I/O rates for the pools on each tier.
See Optimizing storage tiering.
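The way the two kinds of thresholds interact can be sketched as follows. This is a hypothetical illustration of the decision logic, not the product's actual tiering algorithm; the function names and numbers are assumptions.

```python
# Hypothetical sketch of how tiering thresholds interact.
# Not the product's algorithm; names and rates are illustrative.

def tier_decision(volume_io_rate, up_threshold, down_threshold):
    """Classify a volume by its I/O rate (ops/s) against tier thresholds."""
    if volume_io_rate > up_threshold:
        return "up-tier"      # too hot for the current tier
    if volume_io_rate < down_threshold:
        return "down-tier"    # cold enough for a cheaper tier
    return "stay"

def can_place(pool_current_io, volume_io_rate, pool_max_io):
    """Reject a move that would push the destination pool past its maximum I/O rate."""
    return pool_current_io + volume_io_rate <= pool_max_io

print(tier_decision(1200, up_threshold=1000, down_threshold=100))          # up-tier
print(can_place(pool_current_io=900, volume_io_rate=1200, pool_max_io=1500))  # False
```

The first threshold pair decides whether a volume moves; the per-pool maximum I/O rate then vetoes moves that would overload the destination pool, which is the role of the second kind of threshold described above.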
You balance pools to distribute volume workload across pools on the same tier in your storage environment. Now, to balance pools, you right-click two or more pools on the same tier and the same storage virtualizer, and click Balance Pools.
See Optimizing storage pools.
Viewing status changes of disks on AIX® servers in near real time
You can now view status changes of disks on AIX servers in near real time. If a Storage Resource agent is deployed as a daemon service, the agent monitors disks and paths in near real time to detect errors. If the agent detects disk errors, the errors are included in the status of the disks, which is available from the Overview page for monitored servers.
See Viewing the status of internal resources.
Scheduling predefined reports for all resources
For most predefined reports about resources, you can now schedule a report without specifying the resources that it runs on. For example, you can schedule a Most Active Storage Systems report without specifying the storage systems to include. The report runs for all resources by default, and new resources are included automatically, so you do not need to update the report schedule when you add resources.
Managing the Alert server
A dedicated alert server was added to the product framework to manage the complex event processing that is related to alerting on the condition of resources and their attributes. You can view the status of the Alert server by clicking Component Servers on the Home > System Management page in the web-based GUI.
For information about how to start and stop the Alert server, see Starting and stopping servers.
Enhanced support for products and platforms
The following new products and platforms are supported:
  • FlashSystem V9000
  • All AIX systems that are supported by GPFS 4.1
  • IBM Spectrum Scale 4.1.1
  • SAN Volume Controller 7.5

Limitations and known issues

For information about limitations and known issues that you might encounter when you use IBM Tivoli Storage Productivity Center Version 5.2.6, see the limitations and known issues documentation for this release.

Product licenses

Tivoli Storage Productivity Center offers product licenses to meet your storage management needs. For information about these licenses, see Product licenses.
