Inside System Storage -- by Tony Pearson

Tony Pearson is a Master Inventor and Senior IT Specialist for the IBM System Storage product line at the IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor to IBM's developerWorks. In 2011, Tony celebrated his 25th anniversary with IBM Storage on the same day as IBM's Centennial. He is the author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services. You can also follow him on Twitter @az990tony.

Comments (7)

6 localhost commented Permalink

When we looked at IBM SVC for block-level storage virtualization, we couldn't understand the logic of putting multiple Tier 1 or Tier 2 frames behind a commodity PC running Linux with a couple of HBAs. We went with HDS USP and currently have over 300 TB of external storage.

7 localhost commented Trackback

Why does IBM have to continue to apply 1960s mainframe terminology to the 21st century open systems world?

If Hu had said "DASD can only be virtualized at the controller", Tony would still be working on the response.

It's the 21st century now. Computer virtual memory management and SAN disk storage virtualization are too different to lump together.

That said, Hu's comment is ridiculous. First, it seems nobody can really define storage virtualization. There are RAID and RAID LUNs, but the LUN is a by-product of the duplication of data, and was not invented to virtualize disks. There are host-based volume managers, but those were not invented with virtualization in mind; the original goal was to aggregate disks, back when disks were small, so a single file system could span multiple disks.

Richard's comment is strange. Parity RAID should be done on the controller, where dedicated XOR engines can work their magic. Introducing a SAN-based volume manager (which is what SVC is) may be a better option than using the RAID controller as the LUN slicer, but functionally the RAID controller/SVC combo is little different from Hu's larger, more functional controller. Certainly the POWER5 server-based controllers on IBM's 8000 series can do the same thing. The problem with this is that you now have two levels of abstraction to manage. Actually, you have three, because the file system (and perhaps another volume manager) exists on the server.

What I think could bring value to the two-layers-of-abstraction approach is to use the SVC-like layer as a file system. A combination NAS gateway (or SAN FS gateway) and SVC, baked into the array, could be useful because it moves all elements of storage management out of the server. Parallel NFS could be the industry-standard SAN file system we have been waiting for, and pNFS over RDMA 10Gb Ethernet could make it perform like a SAN over a single wire. That holds some real promise, and would require some serious rethinking of array designs.

The more I study storage, the more I think the NetApp machines are almost ideal. They in essence combine the functions of the SVC and the RAID controller. They can be feature-rich, because the design point for the controller is a powerful server that can serve NAS. Compare this to the traditional midrange modular RAID box, which is designed to run an embedded OS on an embedded chip.

It's all an interesting discussion; however, the storage industry has failed miserably, and continues to fail miserably, at effectively utilizing capacity. SCSI RAID on open systems is two decades old now. Shared FC SAN is a decade old. Yet we still do not have intelligent storage capacity management. We buy hundreds of large, expensive FC disks to provide performance to our ERP systems, leaving 80% of the capacity unused, and then go buy FC SATA arrays as second-tier storage. We have 80% unused capacity, yet lose one disk of a RAID5 set and you keep your fingers crossed for hours while the rebuild to the hot-spare happens. Then, when you swap the failed disk, your performance dives again for hours.

Maybe Tony is right. Virtual memory can have page storms (similar to a RAID rebuild), but they are rare events, because we have learned and innovated pretty well on the memory management front. Disk storage, though, is still stuck in the past.
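[Editor's note: the XOR parity and RAID5 rebuild mechanics this commenter refers to can be sketched in a few lines. This is only an illustration of the arithmetic, not how any real controller or XOR engine is implemented; the block contents are made-up example data.]

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length blocks byte-for-byte (the core of RAID5 parity)."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Three data blocks, as if striped across three disks; parity goes on a fourth.
data = [b"\x01\x02\x03", b"\x04\x05\x06", b"\x07\x08\x09"]
parity = xor_blocks(data)

# "Lose" disk 1: its block is recoverable by XORing the survivors with parity.
# This is why a rebuild must read every surviving disk in the set.
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data[1]
```

The same property that makes recovery possible (any block equals the XOR of all the others) is what makes the rebuild window painful: reconstructing one disk means reading the entire contents of every other disk in the set.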
