Inside System Storage -- by Tony Pearson

Tony Pearson Tony Pearson is a Master Inventor and Senior IT Specialist for the IBM System Storage product line at the IBM Executive Briefing Center in Tucson Arizona, and featured contributor to IBM's developerWorks. In 2011, Tony celebrated his 25th year anniversary with IBM Storage on the same day as the IBM's Centennial. He is author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services. You can also follow him on Twitter @az990tony.

Comments (7)

1 localhost commented Trackback

Tony, great post. I'd also include (although they didn't invent it) the virtualisation in RVA, which enabled virtual LUNs to be created for thin provisioning. This was an STK product, if anyone is interested. One other point: you indicate that Hitachi uses 1:1 mapping versus more complicated mapping in SVC. In fact, I see Hu push the merits of 1:1 mapping in his posts, but it isn't the only way to use virtualisation in the USP. For the implementations I have done, I didn't have enough addressable LUNs in the lower-tier AMS to present the LUNs 1:1 through the USP, so I chose to present larger 400GB LUNs and use the USP to slice them. This meant I could address more storage; however, I lost the ability to just unhook the USP and go direct to the storage. The question is: do I care? Probably not. My implementation gets better performance anyway.
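The slicing approach described above can be sketched in a few lines. This is purely illustrative: the class, names and sizes are hypothetical, not any vendor's API; real arrays do this mapping in firmware.

```python
# Illustrative sketch of LUN slicing at a virtualization layer.
# All names and sizes are hypothetical.

class BackendLUN:
    def __init__(self, name, size_gb):
        self.name = name
        self.size_gb = size_gb
        self.allocated_gb = 0

    def carve(self, slice_gb):
        """Carve a virtual LUN slice out of this backend LUN."""
        if self.allocated_gb + slice_gb > self.size_gb:
            raise ValueError(f"{self.name}: not enough free capacity")
        offset = self.allocated_gb
        self.allocated_gb += slice_gb
        return (self.name, offset, slice_gb)

# Present one large 400GB backend LUN (as in the comment above)
# and slice it into eight smaller virtual LUNs for hosts.
backend = BackendLUN("AMS-LUN-0", 400)
vluns = [backend.carve(50) for _ in range(8)]
```

Each virtual LUN is just a (backend name, offset, length) mapping, which captures the trade-off mentioned above: the mapping is no longer 1:1, so the backend LUN cannot simply be unhooked and presented directly to a host.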

2 localhost commented Permalink

Tony,

One of the major tasks of a RAID controller is to present (on its host ports) an error-free and 'perfectly' emulated disk drive... usually as a number of LUNs.

In the proposed scheme, the SVC & USP provide another level of "virtualization" in front of the existing & already "virtualized" third-party backends containing RAID controllers. This is sometimes called a "RAID of RAIDs".

Hence, the principle of "virtualization" in SVC is similar to that implemented in a typical RAID controller driving a number of generic disks. However, this SVC "virtualization" task is much simplified by the fact that there is no support for RAID 5/6 algorithms. Why not?

RAID 5/6 are very computationally and backend-I/O intensive algorithms, difficult to implement at the required performance level. This is particularly true on 'commodity' PC hardware, on which the SVC is based. Performance is one of the major reasons justifying purpose-built RAID controllers.

If you remove the need for RAID 5/6, then all you have is RAID 0/1... i.e. no extra level of protection... and more computing power available for LVM, replication and data migration tasks.

SVC seems to be similar to what is currently available under Linux, which is very efficient at striping over multiple host adapters, supports good LVM tools... and also has a problem with performance under RAID 5/6.

There are some other issues to explore...

How does one manage & support multi-vendor RAID-protected backend enclosures... with different controllers, disks & management interfaces? How does one justify the cost & complexity of "split" support responsibility? How does SVC scale, and how is it a better solution than a large centralized system?

All said and done... I suspect that it may be cheaper to migrate all of the backend data to more 'uniform' hardware... to multiple RAID-protected backends, or perhaps to one large array. Both IBM & HDS can provide this.

The extra level of virtualization (be it SVC or USP) is an excellent tool for such "as you go", uninterrupted data migration.

In practical terms, this is a very thinly disguised marketing campaign to facilitate another case of vendor lock-in... but is there any other alternative?
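The computational cost of RAID 5/6 mentioned above comes down to parity math. A minimal, purely illustrative sketch of the XOR parity idea (real controllers do this per stripe, in dedicated XOR engines, at wire speed):

```python
# Minimal illustration of RAID 5 XOR parity; real controllers do this
# per stripe in dedicated hardware, which is why offloading it from
# general-purpose CPUs matters for performance.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # data strips on three disks
parity = xor_blocks(data)             # parity strip on a fourth disk

# If one data disk fails, its strip is rebuilt by XORing the
# survivors with the parity strip -- the read-intensive rebuild
# that runs after a disk failure.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

The same XOR recovers any single lost strip, which is why every write to a RAID 5 set also forces extra reads and a parity update (the "read-modify-write" penalty).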

3 localhost commented Trackback

Chris, Richard, you both bring up good points. Rather than respond here, I will address them in future blog posts.

4 localhost commented Trackback

Hello Tony, hope you were able to clean up your computer screen. Good comments from Chris and Richard. I felt the need to clarify my statement due to your post. Please check it out on my blog. By the way, I like your blog and appreciate that you keep it open for comments.

Regards,
Hu

5 localhost commented Trackback

Hu, thanks for the clarification. The SVC now has 8GB per node, up from 4GB, as of September 2006.

6 localhost commented Permalink

When we looked at the IBM SVC for block-level storage virtualization, we couldn't understand the logic of putting multiple Tier 1 or Tier 2 frames behind a commodity PC with a couple of HBAs running Linux. We went with the HDS USP and currently have over 300 TB of external storage.

7 localhost commented Trackback

Why does IBM have to continue to apply 1960s mainframe terminology to the 21st-century open systems world?

If Hu had said "DASD can only be virtualized at the controller", Tony would still be working on the response.

It's the 21st century now. Computer virtual memory management and SAN disk storage virtualization are too different to lump together.

That said, Hu's comment is ridiculous. First, it seems nobody can really define storage virtualization. There are RAID and RAID LUNs, but the LUN is a by-product of duplication of data, and was not invented to virtualize disks. There are host-based volume managers, but those were not invented with virtualization in mind. The original goal was to aggregate disks, back when disks were small, so a single file system could span multiple disks.

Richard's comment is strange. Parity RAID should be done on the controller, where dedicated XOR engines can work their magic. Introducing a SAN-based volume manager (which is what SVC is) may be a better option than using the RAID controller as the LUN slicer, but functionally the RAID controller/SVC combo is little different from Hu's larger, more functional controller. Certainly the POWER5 server-based controllers in IBM's 8000 series can do the same thing. The problem is that you now have two levels of abstraction to manage. Actually, you have three, because the file system (and perhaps another volume manager) exists on the server.

What I think could bring value to the two-layers-of-abstraction approach is to use the SVC-like layer as a file system. A combination NAS gateway (or SAN FS gateway) and SVC, baked into the array, could be useful, as it moves all elements of storage management out of the server. Parallel NFS could be the industry-standard SAN file system we have been waiting for. pNFS over RDMA 10Gb Ethernet could make it perform like a SAN over a single wire. That holds some real promise, and would require some serious rethinking of array designs.

The more I study storage, the more I think the NetApp machines are almost ideal. They in essence combine the functions of the SVC and the RAID controller. They can be feature-rich, because the design point for the controller is a powerful server which can serve NAS. Compare this to the traditional midrange modular RAID box, which is designed to run an embedded OS on an embedded chip.

It's all an interesting discussion; however, the storage industry has failed miserably, and continues to fail miserably, at effectively utilizing capacity. SCSI RAID on open systems is two decades old now. Shared FC SAN is a decade old. Yet we do not have intelligent storage capacity management. We buy hundreds of large, expensive FC disks to provide performance for our ERP systems, leaving 80% of the capacity unused, and then go buy FC SATA arrays as second-tier storage. We have 80% unused capacity, but lose one disk of a RAID 5 set and you keep your fingers crossed for hours as the rebuild to the hot spare happens. Then, when you swap the failed disk, your performance dives again for hours.

Maybe Tony is right. Virtual memory can have page storms (similar to a RAID rebuild), but they are rare events, as we have learned and innovated pretty well on the memory management front. But disk storage is still stuck in the past.
