Performance statistics
Real-time performance statistics provide short-term status information for the system.
To access these performance statistics, click Performance in the management GUI. In addition, the management GUI displays an overview of system performance in the Performance section on the Dashboard.
You can use system statistics to monitor the bandwidth of all the volumes, protocols, and MDisks that are being used on your system. You can also monitor the overall System CPU (total) for the system. These statistics summarize the overall performance health of the system and can be used to monitor trends in bandwidth and System CPU (total). You can monitor changes to stable values or differences between related statistics, such as the latency between volumes and MDisks. These differences can then be further evaluated by performance diagnostic tools.
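As an illustration of that kind of comparison, the following Python sketch flags samples where volume latency diverges from MDisk latency by more than a threshold. The sample values, field names, and threshold are illustrative assumptions, not the product's data format.

    # Hypothetical sketch: compare volume and MDisk latency samples.
    # The sample data and threshold are illustrative assumptions; real
    # statistics would come from the system's monitoring interface.

    samples = [
        {"time": "12:00", "volume_latency_ms": 1.8, "mdisk_latency_ms": 1.2},
        {"time": "12:05", "volume_latency_ms": 6.4, "mdisk_latency_ms": 1.3},
        {"time": "12:10", "volume_latency_ms": 2.0, "mdisk_latency_ms": 1.4},
    ]

    THRESHOLD_MS = 3.0  # gap worth evaluating with diagnostic tools

    for s in samples:
        delta = s["volume_latency_ms"] - s["mdisk_latency_ms"]
        if delta > THRESHOLD_MS:
            print(f'{s["time"]}: volume latency exceeds MDisk latency by '
                  f'{delta:.1f} ms; evaluate with performance diagnostics')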
Additionally, with system-level statistics, you can quickly view the bandwidth of volumes, protocols, and MDisks. Each of these charts displays the current bandwidth in megabytes per second (MBps) and a view of bandwidth over time. Each data point can be accessed to determine its individual bandwidth use and to evaluate whether a specific data point might represent a performance impact. For example, you can monitor the protocols to determine whether the host data-transfer rate is different from the expected rate.
You can also select node-level statistics, which can help you determine the performance impact of a specific node. As with system statistics, node statistics help you to evaluate whether the node is operating within normal performance expectations.
CPU Utilization
The System CPU (total) chart shows the current percentage of CPU usage and peaks in utilization.
A single spike often does not indicate a performance impact on the system; however, if a data point is consistently above 95% utilization and I/O input is high, the system might be overloaded, which might indicate a need for more back-end storage. For compression utilization alone, however, it can be normal to see rates at 100%, especially if compression is used frequently on the system. If both compression utilization and I/O input are high, the system might need more storage to accommodate the utilization.
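To make the distinction between a transient spike and sustained overload concrete, here is a minimal Python sketch; the sample values and the consecutive-sample window are illustrative assumptions rather than product behavior.

    # Hypothetical sketch: distinguish a single CPU spike from sustained
    # high utilization. Sample values are illustrative assumptions.

    cpu_samples = [42, 97, 51, 96, 97, 98, 96, 99]  # percent, one per interval

    THRESHOLD = 95   # percent
    WINDOW = 4       # consecutive samples that count as "sustained"

    def sustained_overload(samples, threshold=THRESHOLD, window=WINDOW):
        """Return True if `window` consecutive samples exceed `threshold`."""
        run = 0
        for value in samples:
            run = run + 1 if value > threshold else 0
            if run >= window:
                return True
        return False

    if sustained_overload(cpu_samples):
        print("Sustained high CPU utilization; check I/O input and "
              "consider additional back-end storage.")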
Interface
The Protocols chart shows all possible protocol types that can be configured on different models of the system. Depending on the model of your system and the protocol adapters that are installed, data points might not be available for all the displayed protocols. To view data points for a protocol, select the type of protocol to display its performance data in the chart. You can use this information to help determine connectivity issues that might impact performance.
The Fibre Channel protocol is also used to communicate within the system. The iSCSI protocol is used for read and write workloads from iSCSI-attached hosts.
The SAS protocol is used for read and write operations to drives. Because of FlashCopy® operations or background RAID activity, such as data scrubbing and array rebuilding, the SAS protocol can show activity even when there is no incoming workload on the Fibre Channel or iSCSI protocols. The workload on the SAS protocol can also be higher than the workload from hosts because of the additional write operations that are necessary for the different RAID types. For example, a write operation to a volume that uses a RAID-10 array requires twice the SAS protocol bandwidth to accommodate the RAID mirroring.
The IP Replication protocol displays read and write workloads for replication traffic over IP connections. The IP Replication (Compressed) protocol displays read and write workloads for replication traffic over compressed IP connections; data is compressed as it is sent between systems in the replication partnership. Compression for replication can reduce the amount of bandwidth that is required for the IP connection, and it must be enabled on both systems in the partnership to compress data over the IP connection.
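The RAID-10 example generalizes: back-end write bandwidth is roughly host write bandwidth multiplied by a write-amplification factor for the RAID level. The following Python sketch uses commonly cited small-write factors as assumptions; it ignores full-stripe writes, parity reads, and cache effects, so treat it as a rough planning aid, not a product formula.

    # Hypothetical sketch: estimate back-end write bandwidth from host
    # write bandwidth, using assumed small-write factors per RAID level.
    # Real systems vary (full-stripe writes, caching), so this is only a
    # rough planning aid.

    WRITE_AMPLIFICATION = {
        "raid10": 2,  # each write is mirrored
        "raid5": 2,   # data write + parity write (plus extra reads)
        "raid6": 3,   # data write + two parity writes (plus extra reads)
    }

    def backend_write_mbps(host_write_mbps: float, raid_level: str) -> float:
        """Estimate SAS-side write bandwidth for a given host write load."""
        return host_write_mbps * WRITE_AMPLIFICATION[raid_level]

    # Example: 150 MBps of host writes to a RAID-10 array needs roughly
    # 300 MBps of SAS write bandwidth to accommodate the mirroring.
    print(backend_write_mbps(150, "raid10"))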
MDisks and volumes
The MDisk and Volumes charts on the Performance section each display the same six metrics: Read IOPS, Write IOPS, Read bandwidth, Write bandwidth, Read latency, and Write latency. You can use these metrics to help determine the overall performance health of the volumes and MDisks on your system. Consistent unexpected results can indicate errors in configuration, system faults, or connectivity issues. Because both charts contain the same metrics, you can compare them to evaluate performance; however, the data points for these metrics can differ because of the impact of system cache, RAID overhead, and Copy Services functions. You can view data points in megabytes per second (MBps) and I/O operations per second (IOPS).
The ratio of read to write IOPS shows the mix of the workload that the system is executing. You can determine the average transfer size of the data that the system is handling by dividing the read and write bandwidth in MBps by the read and write operations in IOPS. This information can be used to validate and predict disk configuration for the system, or as input to a disk provisioning application. Read latency is the average time (in milliseconds) that the system takes to complete read operations on volumes or MDisks. Write latency is the average time (in milliseconds) that the system takes to write data to volumes or MDisks, but it does not include the time for write operations that are used to keep volumes in Replication relationships synchronized. As with read latency, MDisk write latency tends to be higher than volume write latency because of write caching and RAID overheads. For example, a write operation to a volume can result in additional read and write operations on the MDisk, depending on the RAID type of the array.
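As an illustration of the transfer-size calculation described above, here is a minimal Python sketch; the function name and sample figures are assumptions for the example, and it treats 1 MB as 1024 KB.

    # Hypothetical sketch: derive average transfer size from the bandwidth
    # and IOPS metrics shown in the charts. Sample figures are assumptions.

    def average_transfer_size_kb(bandwidth_mbps: float, iops: float) -> float:
        """Average transfer size in KB, from bandwidth (MBps) and IOPS."""
        if iops == 0:
            return 0.0
        return bandwidth_mbps * 1024 / iops

    # Example: 400 MBps of read bandwidth at 12,800 read IOPS implies an
    # average read transfer size of 32 KB.
    print(average_transfer_size_kb(400, 12_800))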