Inside System Storage -- by Tony Pearson

Tony Pearson is a Master Inventor and Senior IT Specialist for the IBM System Storage product line at the IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor to IBM's developerWorks. In 2011, Tony celebrated his 25th anniversary with IBM Storage on the same day as IBM's Centennial. He is the author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software, and services. You can also follow him on Twitter @az990tony.
(Short URL for this blog: ibm.co/Pearson )

Comments (4)

1 localhost commented Trackback

Thanks for the response, Tony. I am waiting for the dust to settle between you and Jeff over at Sun. I will echo here what I just wrote over on his blog.

1. I was a mite sore when a qualified guy like Jeff suggested I was all wet for quoting IBM's press release and conversations with IBM techs.

2. I appreciate it when a knowledgeable guy gets in the middle and submits a view that does not attack me, but rather questions the facts in an intelligent way.

3. I really appreciate it when Tony P. jumps in and answers my blegs, as you have so thoughtfully done, in a timely way.

I was (and still am) inclined to believe that I am going to get more resiliency, performance, and a better price if I am gung ho about virtualizing a bunch of x86 machines and use a mainframe LPAR rather than a tinkertoy hypervisor approach. That is my mainframe bias showing through. From what I've learned from x86 engineers and from my own testing in my labs, VMware is a wonderful piece of technology from the standpoint of its respect for x86 extents. However, not only the hypervisor but also the applications must respect the underlying extent code for everything to be shiny. Many apps don't, which seems to put a burden on VMware to catch all the crazy calls and prevent them from destabilizing the stack. That is a technically non-trivial task, and one that seems to account for the many abends we have had in our labs and the poor record of crash-recovery failover, even when both VM servers are in the same subnet.

After reading your post and Jeff's, there is still some confusion about the number (and type) of machines we can virtualize, both on a VMware server and in a z10 LPAR. You and I agree that we are limited to 16-20 VMs on a virtual x86 server, but Jeff says it is two to three times that many. Jeff's initial objection to IBM's claims was that z didn't provide sufficient LPARs to host 1500 VMs.

Also, some of the services I was counting on to deliver resiliency (e.g., multiple processors with failover) were not, in Jeff's view, part of the configuration priced to come up with the "1500 VMs at $600-odd per" calculation proffered by Big Blue.

Thirdly, I argued in my Mainframe Exec piece that you were going to realize greater resource efficiency -- especially storage efficiency -- behind the z because of its superior management paradigm (SMS and HSM). Distributed computing just doesn't have these tools, or a common standard (de facto or de jure) for storage attachment and management that approximates mainframe DASD rules. As a result, the storage vendors duke it out at the expense of the consumer in terms of common management and, ultimately, efficiency.

Jeff said these tools had not been ported to z/OS, or that if they had (I need to go check my notes on this), they were not part of the suite of tools that would be available for use in a z/VM environment (which you must use in order to support LPARs).

These three issues seem pretty key to me. And frankly, I remain a tad confused.
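The host-count arithmetic being debated here is easy to make concrete. Below is a back-of-envelope sketch in Python; the per-host densities are the contested figures from this thread (16-20 VMs per host on one reading, two to three times that on Jeff's), not vendor-verified numbers.

```python
import math

# Back-of-envelope host-count arithmetic for the 1500-VM figure
# debated in this thread. The densities are the contested estimates,
# not vendor-verified numbers.
TOTAL_VMS = 1500

def hosts_needed(vms_per_host: int) -> int:
    """Physical hosts required to run TOTAL_VMS at the given density."""
    return math.ceil(TOTAL_VMS / vms_per_host)

for density in (16, 20, 40, 60):
    print(f"{density:2d} VMs/host -> {hosts_needed(density):3d} hosts")
```

At 16-20 VMs per host you need 75-94 physical x86 servers for 1500 VMs; even at Jeff's higher densities you still need 25-38, which is the gap the two sides are arguing over.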

2 localhost commented Trackback

Jeff, you bring up some good points, and so I have made updates to my post. I had not seen your post when it first came out.

(1) Workload Manager (WLM) relates to managing workloads within a single image, and Intelligent Resource Director (IRD) relates to managing workloads across LPARs. These are both beyond the scope of this post, so I will just change this to "hypervisor" technology, to refer to the combination of the System z hardware that supports LPARs and the z/VM technology that supports individual guests.

(2) I have clarified the discussion of "Test Plan Charlie" to indicate that tens of thousands of guests are indeed possible, but a more realistic number is several hundred to a few thousand images.

(3) Sorry to confuse "sysplex timer" with "parallel sysplex". I have clarified the paragraph to explain the difference. Yes, Linux does I/O timestamps the same as z/OS, and yes, both Linux and z/OS data can be in the same consistency group for XRC (z/OS Global Mirror) purposes. I led the team that tested this way back when.

(4) Because single z10 EC processors are so powerful, IBM offers sub-capacity pricing, but that is beyond the scope of this post. I will rephrase to say "six figures" instead.

Additional savings are achieved from reduced software licenses, reduced power consumption, and reduced administration. We have clients who have reduced their total cost of ownership by migrating workloads from x86 onto System z.

I was unable to locate any press releases discussing "760 cores" with 26 IFL engines, but perhaps this was a z9 calculation, or perhaps an actual client case study.

The "3rd footnote" you mentioned was removed at the request of the trademark holder to eliminate "implied endorsement". I have updated my post to remove references to same.

I am not suggesting that IBM is working on reverse-engineering the AMD processor to develop an emulator. Any such emulator would only happen if IBM and AMD collaborated for that purpose.

I agree it is possible to configure application workloads on x86 hardware to run at full capacity, but using VMware or migrating to a mainframe just makes reaching 90 percent utilization substantially easier. If the x86 servers are managed to achieve "server utilization" improvements, then your administration costs will probably increase. In general, I find that "x86 system admin" staff do not have enough spare time for this added effort.

-- Tony
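The utilization point in that last paragraph rests on simple consolidation arithmetic, sketched below. Only the 90 percent target comes from the comment; the 8 percent legacy-utilization figure is an illustrative assumption, not a measured value.

```python
# Rough consolidation arithmetic behind the utilization argument.
# AVG_UTIL is an illustrative assumption; TARGET_UTIL is the
# "90 percent utilization" figure mentioned above.
N_SERVERS = 1500       # older, underutilized servers being replaced
AVG_UTIL = 0.08        # assumed average utilization of each legacy box
TARGET_UTIL = 0.90     # utilization achievable after consolidation

# Aggregate real work, measured in "fully busy server" equivalents.
used_capacity = N_SERVERS * AVG_UTIL
# Capacity needed to carry that same work at the target utilization.
consolidated = used_capacity / TARGET_UTIL

print(f"{used_capacity:.0f} server-equivalents of actual work")
print(f"~{consolidated:.0f} fully utilized servers' worth after consolidation")
```

Under these assumptions, 1500 lightly loaded boxes carry only about 120 servers' worth of real work, which a consolidated platform run at 90 percent utilization could absorb in roughly 133 servers' worth of capacity.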

3 localhost commented Trackback

More comments on DrunkenData here: http://www.drunkendata.com/?p=1759

4 localhost commented Trackback

Jeff, I guess we will just have to agree to disagree, as I don't think I can convince you that I did not write the original press release, and you cannot convince me that the mainframe is not a great product that can reduce total cost of ownership, as IBM itself is reducing its costs in this manner.

I never quote Mr. Barton, but it is nice to know that others have thought about MIPS-to-GHz conversion factors as well.

Since IBM is able to swap guests in and out to disk, and many Linux guests can share OS disk images, the result is that you get to enjoy high-performance use of disk cache for much of this.

If you can show a TCO case where consolidating 1500 servers onto a single Sun SPARC-based high-end system is less expensive than an IBM z10 EC, then show it. You mention you have no idea about the price of IBM high-end storage (which I can accept fully, by the way), but Sun sells StorageTek tape and resells HDS USP arrays, so you can at least use Sun gear for those comparisons. Please include the costs of cables, network switches, PDUs, racks, software licenses, and power and cooling to make it a fair comparison, as IBM has done.

My argument has been that if you are replacing 1500 older, underutilized 1-way and 2-way rack-mounted servers, each running a single application, you might be better off with an IBM z10 EC mainframe -- both in tangible costs and in the intangible benefits of flexibility and simplicity -- than with newer rack-mounted servers.

Your efforts to convince everyone that virtualization and consolidation of servers are impractical or cost-prohibitive do not seem to match the results IBM has helped our clients achieve.

Enjoy your 4th of July weekend!
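The fair-comparison checklist in this comment can be captured as a small sanity check. The category names come straight from the comment; the function itself is a hypothetical illustration of rejecting an incomplete TCO comparison, and no real pricing is implied.

```python
# Skeleton of the apples-to-apples TCO comparison requested above.
# Category names are taken from the comment; no real prices are
# implied -- callers supply their own per-category cost figures.
CATEGORIES = (
    "servers", "storage", "tape", "cables", "network switches",
    "PDUs", "racks", "software licenses", "power and cooling",
)

def tco(costs: dict) -> float:
    """Total a per-category cost dict, rejecting incomplete comparisons."""
    missing = [c for c in CATEGORIES if c not in costs]
    if missing:
        raise ValueError(f"incomplete comparison, missing: {missing}")
    return sum(costs[c] for c in CATEGORIES)
```

Calling `tco` on a cost dict that omits, say, power and cooling raises an error instead of silently producing a smaller total, which is exactly the "fair comparison" complaint being made here.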
