Elastic Storage with GPFS Native RAID performance test results with Tivoli Storage Manager over 40 Gbit Ethernet
By Nils Haustein, Andre Gaschler, Sven Oehme and Joerg Walter
We would like to share the first preliminary results (see Figure 1) of our Proof of Concept (PoC) running Tivoli Storage Manager (TSM) workloads on Elastic Storage with GPFS Native RAID (GNR) over 40 Gbit Ethernet connections. Thanks to Sven Oehme from IBM Research, who provided us with the infrastructure and expertise for configuring the GNR and TSM server systems.
The peak throughput results are amazing, as you can see below:
Figure 1: Summary of the 40 Gbit Ethernet tests with TSM and GNR
The test setup is shown in Figure 2. The TSM software (client and server) was installed on separate x86 servers, which were configured as GPFS NSD clients and connected via Ethernet to the Elastic Storage GNR system. Each TSM server used a single dedicated 40 Gbit link, and the Ethernet connection to the GNR system was likewise 40 Gbit/s per GNR server. The GNR system used for this test was comparable to an Elastic Storage Server model GL2, comprising two GNR servers and 116 NL-SAS drives.
Figure 2: Proof of Concept Setup Overview
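Before measuring TSM throughput in such a setup, it is useful to verify the raw bandwidth of the 40 Gbit links. The sketch below uses iperf3 for that purpose; the host names tsm1 and gnr1 are hypothetical, and this only validates the underlying TCP path, not the GPFS data path itself.

```
# On one GNR server (hypothetical host name gnr1), start a listener:
[root@gnr1]# iperf3 -s

# On a TSM server node (hypothetical host name tsm1), drive the
# 40 Gbit link with 4 parallel TCP streams for 30 seconds:
[root@tsm1]# iperf3 -c gnr1 -P 4 -t 30
```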
The TSM client workload was generated on the same servers as the TSM servers, leveraging shared memory resources for the client/server communication. In addition, the TSM client workload was generated artificially in memory, using a special workload generator program. This eliminated TSM client bottlenecks caused by limited disk and network performance.
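The workload generator itself is a special-purpose tool, but the idea can be approximated with standard components: stage source files on a RAM-backed tmpfs and back them up with the regular TSM backup-archive client, so that no client disk I/O enters the measurement. A rough sketch with made-up paths and sizes, assuming the client option file uses COMMMethod SHAREDMEM to reach the local TSM server over shared memory:

```
# Create a RAM-backed file system so source data is read from memory:
[root@tsm1]# mkdir -p /ramdata
[root@tsm1]# mount -t tmpfs -o size=32g tmpfs /ramdata

# Stage synthetic files in memory (100 files of 100 MB, illustrative):
[root@tsm1]# for i in $(seq 1 100); do
>   dd if=/dev/urandom of=/ramdata/file$i bs=1M count=100 status=none
> done

# Back up the in-memory files with the TSM backup-archive client:
[root@tsm1]# dsmc selective "/ramdata/*"
```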
The GNR system provided a single shared GPFS file system for the TSM servers. Both TSM servers were configured as GPFS NSD clients; as members of the GNR cluster, they mounted the file system provided by the GNR system.
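A minimal sketch of that client-side configuration, assuming the hypothetical node name tsm1 and file system name gpfs0 (the actual names used in the PoC were not published):

```
# Run from an existing node of the GNR cluster:
# add the TSM server as a cluster node with a GPFS client license
[root@gnr1]# mmaddnode -N tsm1
[root@gnr1]# mmchlicense client --accept -N tsm1

# start GPFS on the new node and mount the shared file system there
[root@gnr1]# mmstartup -N tsm1
[root@gnr1]# mmmount gpfs0 -N tsm1
```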
As shown in Figure 3, the single GPFS file system used by the TSM servers comprised two GPFS pools, each configured on GNR virtual disks (vdisks) that tolerate up to two concurrent disk failures. One pool, the system pool, was configured on two vdisks with 3-way replication to store the file system metadata. The other pool, the data pool, was configured on two vdisks with 8+2P RAID protection to store the file system data, including the TSM database and storage pools. Thus, the TSM DB and storage pools were stored on the same GPFS file system pool and on the same vdisks, which were distributed across all physical disk drives in the GNR system.
For each GPFS pool, two vdisks were configured, one vdisk per recovery group. Each of the two recovery groups was configured with one declustered array using half of the physical disks in the GNR system (58 disks per recovery group), and each recovery group was managed by one GNR server.
Figure 3: GNR file system layout
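Expressed as a GNR vdisk stanza file, the layout described above might look roughly like the sketch below. All names, block sizes and capacities are illustrative assumptions; only the structure (one metadata and one data vdisk per recovery group, 3-way replication for metadata, 8+2p RAID for data) follows the PoC description.

```
# vdisk.stanza -- two vdisks per recovery group (rg1, rg2);
# names, block sizes and capacities are illustrative:
%vdisk: vdiskName=rg1meta rg=rg1 da=DA1 blocksize=1m size=200g raidCode=3WayReplication diskUsage=metadataOnly pool=system
%vdisk: vdiskName=rg2meta rg=rg2 da=DA1 blocksize=1m size=200g raidCode=3WayReplication diskUsage=metadataOnly pool=system
%vdisk: vdiskName=rg1data rg=rg1 da=DA1 blocksize=8m size=100t raidCode=8+2p diskUsage=dataOnly pool=data
%vdisk: vdiskName=rg2data rg=rg2 da=DA1 blocksize=8m size=100t raidCode=8+2p diskUsage=dataOnly pool=data
```

A file placement policy then directs file data (the TSM DB and storage pools) to the data pool, since GPFS would otherwise place it in the system pool:

```
# create the vdisks, register them as NSDs, and build the file system:
[root@gnr1]# mmcrvdisk -F vdisk.stanza
[root@gnr1]# mmcrnsd -F vdisk.stanza
[root@gnr1]# mmcrfs gpfs0 -F vdisk.stanza -B 8M --metadata-block-size 1M

# route all file data to the 8+2p data pool:
[root@gnr1]# echo "RULE 'default' SET POOL 'data'" > policy.txt
[root@gnr1]# mmchpolicy gpfs0 policy.txt
```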
The important findings for the tests with TSM on the Elastic Storage GPFS Native RAID system using 40 Gbit Ethernet connections are summarized in Figure 1 above.
As a next step, we plan to test the TSM server workloads over 10 Gbit Ethernet connections. Stay tuned!