IBM Support

How to schedule Quick Performance Overview (qperf) tool via cron



Abstract

By Kiran Ghag, Storage Consultant, IBM Systems Lab Services

The Quick Performance Overview (qperf) tool provides a quick performance overview of an SVC/Storwize cluster using only the product's CLI, accessed from a Unix host. Written and maintained by Christian Karpp, it is a collection of scripts along with an MS Excel-based worksheet for easy visualization and analysis of the collected data.

qperf can be installed on AIX, Linux and Windows. It can collect performance statistics from IBM SAN Volume Controller (SVC) and all Storwize family members, viz. V35xx, V5xxx, V7xxx and V840 arrays.

This article provides additional information on running qperf via the cron job scheduler on AIX and Linux.

Content

Getting Started

The latest version of qperf can be downloaded from https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD105947.

The downloaded ZIP contains detailed instructions for setting up data collection.

All patches discussed in this article can be obtained from https://github.com/kiranghag/qperfpatch.

qperf requires ksh (KornShell) to be installed on the host. It may not be present by default, and installing it may require administrator assistance.
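A quick way to check for ksh before installing qperf (a sketch; the install commands in the comment vary by distribution):

```shell
# Check whether ksh is available; qperf's scripts expect it on the host.
if command -v ksh >/dev/null 2>&1; then
    echo "ksh found: $(command -v ksh)"
else
    # On RHEL/CentOS: yum install ksh ; on Debian/Ubuntu: apt-get install ksh
    echo "ksh not found, ask an administrator to install it"
fi
```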

A Windows/Unix host with a supported browser should be available for connecting to the Storwize GUI.

 

Setting up qperf

Follow the quick procedure below to set up qperf for manual data collection, then schedule it via cron.

 

3.1 Assumptions

This document has been tested with qperf release 2018-05.

The setup steps have been performed as the root user. Any other user can be used to set up qperf by modifying the directory paths in the script.

 

3.2 Create qperf directory

Login to the host and create qperf directory:

[root@localhost ~]# pwd
/root
[root@localhost ~]# mkdir qperf
[root@localhost ~]# cd qperf
[root@localhost qperf]# pwd
/root/qperf

 

3.3 Download latest version of qperf

The latest version of qperf can be downloaded from https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD105947

Extract the ZIP file and transfer the qperf2018-05.tar file to the host (assuming the /root directory).

 

3.4 Extract qperf scripts

Extract qperf into the qperf directory created earlier:

[root@localhost qperf]# tar xvf ../qperf2018-05.tar
readme.txt
all
basic
minimum
cust
collect
drive_totals
mdisk_totals
fs840/
fs840/jmet
fs840/readme.txt
v72/
v72/drive_stats
v72/flash_candidates
...

 

3.5 Generate SSH keys for UNIX user

Check if the host already has an SSH key pair set up. If not, generate a new one:

[root@localhost qperf]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:dMF9vJTon2MdiQQ/566z9mQo/yAazRi9N9bQAByY+jI root@localhost
The key's randomart image is:
+---[RSA 2048]----+
|         .  .+=o |
+----[SHA256]-----+

This generates a public key for the current user as /root/.ssh/id_rsa.pub. Download this file from the host to the workstation that will access the Storwize GUI.
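If the host's default key pair should be left untouched, a dedicated key for qperf can be generated non-interactively (a sketch; the file name id_rsa_qperf is an arbitrary choice, not part of qperf):

```shell
# Generate a dedicated 2048-bit RSA key pair with no passphrase for qperf,
# leaving the host's default key (~/.ssh/id_rsa) untouched.
mkdir -p "$HOME/.ssh"
ssh-keygen -t rsa -b 2048 -N "" -f "$HOME/.ssh/id_rsa_qperf"
# The public half goes to the Storwize GUI user; print it for copy/paste.
cat "$HOME/.ssh/id_rsa_qperf.pub"
```

If a dedicated key is used, pass -i ~/.ssh/id_rsa_qperf to ssh when testing CLI access, or reference it in ~/.ssh/config.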

 

3.6 Create GUI user “qperf” on Storwize

  • Connect to the Storwize GUI.
  • Create a new user named “qperf”.
  • Use the downloaded id_rsa.pub file and assign it as the SSH key for the qperf user.
  • Provide admin access to the user.
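The same user can also be created from the Storwize CLI instead of the GUI (a sketch, assuming superuser CLI access and that the public key is first copied to the cluster; the user group name may differ on your code level):

```shell
# Copy the public key to the config node, then create the qperf user
# with that key and administrator rights.
scp /root/.ssh/id_rsa.pub superuser@<storwize_management_ip>:/tmp/
ssh superuser@<storwize_management_ip> \
    'svctask mkuser -name qperf -usergrp Administrator -keyfile /tmp/id_rsa.pub'
```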

 

3.7 Test passwordless CLI access from qperf host to Storwize

From the qperf host, test whether the Storwize CLI can be reached without a password prompt. If the local username differs from the Storwize user, specify it explicitly (for example, ssh qperf@<storwize_management_ip>):

[root@localhost qperf]# ssh <storwize_management_ip>
The authenticity of host '1.113.54.228 (1.113.54.228)' can't be established.
ECDSA key fingerprint is 05:2d:e4:fa:b3:25:8b:71:fd:6d:a4:d3:44:4d:86:80.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '1.113.54.228' (ECDSA) to the list of known hosts.
Password:
STORWIZE>exit
[root@localhost qperf]#

If a Password: prompt appears, as in the example above, the SSH key was not picked up; verify the key assigned to the qperf user before proceeding.
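For use in scripts, the same check can be made non-interactive (a sketch; the address below is the example one from this article, overridable via the assumed STORWIZE_IP variable):

```shell
# With BatchMode, ssh exits non-zero instead of prompting for a password,
# so a broken key setup is detectable from a script.
storwize="${STORWIZE_IP:-1.113.54.228}"    # example address from this article
if ssh -o BatchMode=yes -o ConnectTimeout=5 -o StrictHostKeyChecking=no \
       "qperf@$storwize" exit >/dev/null 2>&1; then
    echo "passwordless CLI access OK"
else
    echo "key-based login failed, check the key assigned to the qperf user"
fi
```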

 

3.8 Edit qperf collect script

The collect script must be edited to set the Storwize management IP address. Edit the file and replace the IP on the following line with the Storwize management IP address:

cnode=<storwize_management_ip>

 

3.9 Test “collect” script

Execute the collect script with the “-c” parameter. This collects the Storwize configuration information only.

[root@localhost qperf]# ./collect -c
no interval given, collecting config only.
all files will be stored in run_1.113.54.228_20180720_1245.
The authenticity of host '1.113.54.228 (1.113.54.228)' can't be established.
ECDSA key fingerprint is SHA256:SgNyQUoBW59EqEWwJlx89sQLkurSvkqSQw6bJ/ghF1I.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '1.113.54.228' (ECDSA) to the list of known hosts.

gathering configuration information:
getting system details...
getting I/O group/s information...
getting node details...
getting drive/array details...
getting mdisk details...
getting pool details...
getting volume details...
getting flashcopy mapping details...
getting remote copy relationship details...
getting port details...
getting fabric details...
getting host details...
getting event log...
getting environmental stats...
[root@localhost qperf]#

 

3.10 Set up wrapper script for cron schedule

A simple wrapper script is required to schedule qperf via crontab: cron jobs start in the user's home directory, while qperf here is installed in /root/qperf, and qperf produces no execution log when scheduled via cron as-is.

The qperf_start.sh script (latest version available from https://github.com/kiranghag/qperfpatch) provides an easy way to overcome these challenges. The script requires minor customization before use.

The execution interval and the number of iterations need to be updated as required. Take care that all iterations finish before the next cron instance starts.

#!/usr/bin/ksh

# This wrapper script can be used to schedule qperf via crontab
# The script is assumed to be installed in the current user's home directory in the qperf folder
# interval and iterations can be changed as required
# number of iterations should not exceed crontab duration between two executions
# i.e. if cron executes two instances at a 2 hour interval, iterations should be less than 120
#
# qperf is developed and maintained by Christian Karpp
# It can be downloaded from https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD105947
#
# Author: Kiran Ghag, kiran.ghag@in.ibm.com
# Version: v0.2 20180723

# update this if qperf is installed in a different directory
qperf_dir="/root/qperf"
# interval between two samples, in minutes
interval=1
# number of iterations
iterations=120

cd $qperf_dir
./collect $interval $iterations >> qperf_start.log 2>&1

Download this script into the qperf folder and make it executable (chmod +x qperf_start.sh).
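The interval/iteration constraint from the comments above can be verified with quick shell arithmetic (a sketch; it uses slightly fewer iterations than the wrapper's default to leave headroom before the next cron run):

```shell
# Total collection time must be shorter than the gap between cron runs,
# otherwise two qperf instances overlap.
interval=1        # minutes between samples
iterations=110    # keep below the cron gap to leave headroom
cron_gap=120      # minutes between cron executions (2-hour schedule)
runtime=$((interval * iterations))
if [ "$runtime" -lt "$cron_gap" ]; then
    echo "OK: runtime ${runtime}m fits in the ${cron_gap}m cron gap"
else
    echo "WARNING: runtime ${runtime}m overlaps the next cron run"
fi
```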

 

3.11 Add crontab job entry for qperf_start.sh

The qperf_start.sh script needs to be added to crontab for automatic execution. A typical qperf run is recommended to be limited to two hours.

The following entry runs qperf every two hours. Run “crontab -e” to edit the schedule and add the entry at the end (if it doesn’t exist already):

0 0,2,4,6,8,10,12,14,16,18,20,22 * * * /root/qperf/qperf_start.sh

Every two hours, a new instance runs and a new data collection directory is created.
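On cron implementations that support step values (common on Linux; classic AIX cron does not), the same schedule can be written more compactly:

```shell
# Equivalent crontab entry using step syntax; verify support on your platform.
0 */2 * * * /root/qperf/qperf_start.sh
```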

 

3.12 Checking execution log

The qperf_start.sh script creates a log in the qperf directory as qperf_start.log. This log can be viewed to follow execution progress across all instances:

[root@localhost qperf]# tail -f qperf_start.log
interval used will be 1 minute/s.
running for approx. 0 hrs and 2 mins.
deleting old statistic files first.
waiting for data from 1st interval...
all files will be stored in run_1.113.54.228_20180720_1653.
setting up statistics collection on config node 1.113.54.228.
interval used will be 1 minute/s.
running for approx. 0 hrs and 2 mins.
deleting old statistic files first.

 

3.13 Optimizing disk space during scheduled execution

qperf stores raw performance data for every execution in a separate directory. The amount of disk space required for the raw data depends on factors such as the number of volumes, ports, controllers and other configuration items in the target storage system.

If qperf is run for an extended duration, the raw data can consume significant disk space and, worse yet, fill up the filesystem. This can disrupt applications running on the host.

After every execution, the “all” script provided in the qperf directory is typically run to generate the TXT files required by the qperf_analyzer worksheet. At times this step is omitted, and you may receive just the raw data. The customer may not compress it either, and may then complain about having to upload a huge amount of data. Processing the raw data through the “all” script also requires a Unix machine.

The qperf “collect” script can be patched to overcome this inconvenience. The patch below modifies the collect script so that at the end of every execution the “all” script is run and all files are compressed. Compressed files take up much less space, so qperf can be run for an extended duration.

The patch for release 2018-05 can be downloaded from https://github.com/kiranghag/qperfpatch

The patch adds the following steps to the qperf collect script and can also be used to patch future versions.

  • Locate raw data files from current execution.
  • Run “all” against the data to generate TXT files.
  • Compress all files and remove originals to reduce disk space usage.
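The post-run steps above can be sketched in shell as follows (an illustration of the assumed behaviour, not the literal patch; the demo creates a dummy run directory with stand-in raw files so it is self-contained):

```shell
# Create a dummy run directory with stand-in raw statistics files.
mkdir -p run_demo
touch run_demo/Nn_stats_sample run_demo/Nv_stats_sample

# 1) Locate the newest run directory from the current execution.
latest=$(ls -dt run_*/ | head -1)

# 2) Run "all" against it to generate TXT files (needs a real qperf install):
# ( cd "$latest" && ../all )

# 3) Compress the raw files to reclaim disk space.
gzip "$latest"N*_stats_*
ls "$latest"
```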

Download collect.patch, copy it to the qperf directory and apply it as below:

[root@localhost qperf]# patch collect < collect.patch

patching file collect

[root@localhost qperf]#

Subsequent runs should now log additional events indicating successful creation of TXT files and compression of data.

 

Consolidating results

Analyzing multiple qperf runs individually can be cumbersome, for example when building a 24-hour I/O profile of a storage system. In such a scenario, the following technique can be used:

  • Create a temp directory inside the qperf directory
  • Copy the raw data from each run's data folder into the temp directory
  • Run the “all” script in the temp directory
  • Import the TXT files into qperf_analyzer
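The steps above can be sketched as follows (self-contained with dummy run directories; with a real install, run this inside the qperf directory and finish by executing the “all” script in temp):

```shell
# Dummy run directories standing in for real qperf output.
mkdir -p run_A run_B temp
touch run_A/Nn_stats_a run_B/Nn_stats_b

# Gather the raw statistics files from every run into temp.
for d in run_*/; do
    cp "$d"N*_stats_* temp/
done

# With a real install: ( cd temp && ../all )  then import the resulting
# TXT files into qperf_analyzer.
ls temp
```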

 

Disclaimer

qperf is not meant to replace historical data collection tools such as IBM Spectrum Control. However, extended qperf runs might be needed in critical situations in the absence of IBM Spectrum Control.

This patch and the information provided are supplied as-is. Use them at your own risk; the author is not responsible for any negative impact they may cause.


Document Information

Modified date:
28 March 2023

UID

ibm11125537