Five years ago, I sprayed coffee all over my screen from something I read on a blog post from fellow blogger Hu Yoshida from HDS. You can read what caused my reaction in my now infamous post [Hu Yoshida should know better]. Subsequently, over the years, I have disagreed with Hu on a variety of topics, as documented in my 2010 blog post [Hu Yoshida Does It Again].
(Apparently, I am not alone, as the process of spraying one's coffee onto one's computer screen while reading other blog posts has been referred to as "Pulling a Tony" or "Doing a Tony" by other bloggers!)
Fortunately, my IBM colleague David Sacks doesn't drink coffee. Last month, David noticed that Hu had posted a graph in a recent blog entry titled [Additional Storage Performance Efficiencies for Mainframes], comparing the performance of HDS's Virtual Storage Platform (VSP) to IBM's DS8000.
For those not familiar with disk performance graphs, flatter is better: lower response times and higher IOPS are always desired. This graph implies that the HDS disk system is astonishingly faster than IBM's DS8000 series disk system. Certainly, the HDS VSP qualifies as a member of the elite [Super High-End club] with impressive SPC benchmark numbers, and is generally recognized as a device that works in IBM mainframe environments. But this new comparison graph is just ridiculous!
(Note: While SPC benchmarks are useful for making purchase decisions, different disk systems respond differently to different workloads. As the former lead architect of DFSMS for z/OS, I am often brought in to consult on mainframe performance issues in complex situations. Several times, we have fixed performance problems for our mainframe clients by replacing their HDS systems with IBM DS8000 series!)
Since Hu's blog entry contained very little information about the performance test used to generate the graph, David submitted a comment directly to Hu's blog asking a few simple questions to help IBM and Hu's readers determine whether the test was fair. Here is David's comment as submitted:
Unlike my blog on IBM's site, blogs at HDS allow authors like Hu to reject comments before they appear. We were disappointed that HDS never posted David's comment nor responded to it. That certainly raises questions about the quality of the comparison.
So, perhaps this is yet another case of [Hitachi Math], a phrase coined by fellow blogger Barry Burke from EMC back in 2007 in reference to outlandish HDS claims. My earliest mention was in my blog post [Not letting the Wookie Win].
By the way, since the test was about z/OS Extended Address Volumes (EAV), it is worth mentioning that IBM's DS8700 and DS8800 support 3390 volume capacities up to 1 TB each, while the HDS VSP is limited to only 223 GB per volume. Larger volume capacities help support ease-of-growth and help reduce the number of volumes storage administrators need to manage; that's just one example of how the DS8000 series continues to provide the best storage system support for z/OS environments.
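To put the volume-capacity difference in concrete terms, here is a minimal sketch of the administration arithmetic. The volume limits (1 TB for the DS8700/DS8800, 223 GB for the VSP) come from the paragraph above; the 50 TB pool size is a hypothetical figure chosen purely for illustration.

```python
import math

# Hypothetical illustration: how many 3390 volumes would a storage
# administrator need to manage for a 50 TB z/OS pool at each maximum
# volume size? Limits per the post: 1 TB (IBM DS8700/DS8800 EAV)
# versus 223 GB (HDS VSP). The 50 TB pool is an assumed example.

POOL_GB = 50_000  # 50 TB, in decimal GB

def volumes_needed(pool_gb, max_volume_gb):
    """Minimum number of volumes required to hold pool_gb of data."""
    return math.ceil(pool_gb / max_volume_gb)

ibm_volumes = volumes_needed(POOL_GB, 1000)  # 1 TB EAV volumes
hds_volumes = volumes_needed(POOL_GB, 223)   # 223 GB volumes

print(ibm_volumes)  # 50
print(hds_volumes)  # 225
```

Fewer, larger volumes mean fewer UCBs to define and fewer objects for storage administrators to track, which is the ease-of-growth point above.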
Personally, I am all for running both IBM and HDS boxes side-by-side and publishing the methodology, the workload characteristics, the configuration details, and the results. Sunshine is always the best disinfectant!
Last week, in Computer Technology Review's article [Tiering: Scale Up? Scale Out? Do Both], Mark Ferelli interviews fellow blogger Hu Yoshida, CTO of Hitachi Data Systems (HDS). Here's an excerpt:
This is not the first time I have had to correct misperceptions by Hu and others about IBM's SAN Volume Controller (SVC). This month marks my four-year "blogoversary", and I seem to spend a large portion of my blogging time setting the record straight. Here are just a few of my favorite posts doing exactly that for SVC back in 2007:
Since day 1, the SAN Volume Controller has focused primarily on external storage. Initially, the early models had just battery-protected DRAM cache memory, but the most recent model of the SVC, the 2145-CF8, adds support for internal SLC NAND flash solid state drives. To fully appreciate how SVC can help improve the performance of the disks it manages, I need to use some visual aids.
In this example, the SVC reduced the burden of the managed disk from 100,000 IOPS down to 55,000, which is 35,000 reads and 20,000 writes. Some have argued against putting one level of cache (SVC) in front of another level of cache (managed disk arrays). However, CPU processor designers have long recognized the value of hierarchical cache with L1, L2, L3 and sometimes even L4 caches. The cache-hits on SVC are faster than most disk system's cache-hits.
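The arithmetic behind that example can be sketched in a few lines. The 70/30/50 mix (70 percent reads, 30 percent writes, 50 percent read-cache hits) and the 100,000 host IOPS come from the discussion above; the write-destage factor of 2/3 is my assumption, chosen to reproduce the 20,000 backend writes in the example.

```python
# A minimal sketch of the SVC cache-offload arithmetic for the
# 70/30/50 example. The read-hit rate comes from the workload mix;
# the destage factor (fraction of host writes that actually reach
# the managed disks) is an assumed value for illustration.

HOST_IOPS = 100_000
READ_FRACTION = 0.70     # 70% of host I/Os are reads
WRITE_FRACTION = 0.30    # 30% are writes
READ_HIT_RATE = 0.50     # 50% of reads are satisfied from SVC cache
DESTAGE_FACTOR = 2 / 3   # assumed: 2 of every 3 writes destaged to disk

backend_reads = HOST_IOPS * READ_FRACTION * (1 - READ_HIT_RATE)
backend_writes = HOST_IOPS * WRITE_FRACTION * DESTAGE_FACTOR

print(round(backend_reads))                    # 35000
print(round(backend_writes))                   # 20000
print(round(backend_reads + backend_writes))   # 55000
```

The managed disk array sees only 55,000 of the original 100,000 host IOPS, which is the offload the example describes.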
Of course, not all workloads are 70/30/50, and not every disk array is driven to its maximum capability, so your mileage may vary. As we slide down to the left of the curve, where things are flatter, the performance improvement diminishes.
Hitachi's offerings, including the HDS USP-V, USP-VM and their recently announced Virtual Storage Platform (VSP), sold also by HP under the name P9500, have an architecture similar to the SVC's and can offer similar benefits, but oddly, Hitachi's engineers have decided to treat externally attached storage as a second-class citizen. Hu mentions data that "ages out or becomes less active we can move it to the external storage." IBM has chosen not to impose this "caste" system onto its design of the SAN Volume Controller.
The SVC has been around since 2003, before the USP-V came to market, and IBM has sold over 20,000 SVC nodes over the past seven years. The SVC can indeed improve the performance of managed disk systems, in some cases by a substantial amount. The 0.06 msec of added latency on read-miss requests amounts to less than 1 percent of total response time in production workloads. SVC nearly always improves performance, and in the worst case provides the same performance with added functionality and flexibility. For the most part, the performance boost comes as a delightful surprise to most people who start using the SVC.