Are you trying to find the right way to explain Storage Management concepts to your friends and family at the next holiday cocktail party?
One of my readers made the following request:
I had mentioned this video in my 2007 blog post [Re-arranging the Sock Drawer], so I felt I needed to at least make an effort to track it down.
As it turns out, the IBM sales executive in the video, Charles "C.D." Larson, now works for another company (Hitachi Data Systems). Thanks to social media, I was able to get in contact with him, and he sent me a copy of this 1989 video and granted me permission to post it on YouTube.
To put it on YouTube, I had to convert the VOB file to something YouTube could understand. Since I run Linux, I was able to use the [ffmpeg] utility to do this. The result is now an [18-minute video], uploaded for all to enjoy.
The concepts discussed back then still apply today. Yes, we still have DFSMS for the mainframe mentioned in the video, but we have also extended these concepts to the Active Cloud Engine in the SONAS and Storwize V7000 Unified, as well as the hierarchy management included in the Linear Tape File System (LTFS) Enterprise Edition (LTFS-EE) solutions.
Happy Winter Solstice, or whatever holiday you may choose to celebrate this season!
Happy [Thanksgiving], everyone! Yes, I mean everyone. Even if you are not an American in the United States eating turkey and stuffing this week, it wouldn't hurt to take stock of what you are thankful for.
And before I forget, I want to thank all of you, my readers, for making this blog the #1 most read blog at IBM developerWorks, and one of the top blogs in the IT storage industry!
A few years ago, I was stuck in Venice, Italy for the holiday. Not by choice, but because I was the victim of a car accident on my business trip. My neck in a brace, I was unable to fly home in time to celebrate Thanksgiving.
The local IBMers directed me to a wonderful restaurant where I would dine alone, on Thanksgiving, and insisted that I ask the waiter for some butter for my bread. The joke was on me! A collection of waiters came out, banging on pot lids, with a huge six-pound "cube" of butter on a tray, with a fork and knife stuck into the top. They do like to make fun of the tourists in Venice, don't they?
In other years past, I have found myself spending the holiday working at client locations, baby-sitting their datacenter. Why? Since Thanksgiving always lands on a Thursday in the USA, the day after is known as "Black Friday", the official kick-off of consumerism craziness.
The next day is "Small Business Saturday", to give small local businesses a chance to compete for some revenue. Two days after that is "Cyber Monday", where many people shop online from their office, rather than fight all the crowds in the traditional brick-and-mortar stores. My job was to make sure the systems ran smoothly, from Thursday to Monday, for our largest clients in the retail industry.
(Note: This is not the first time I have mentioned [Cyber Monday] on my blog. For the past few years, I have reminded people that the perfect holiday gift is one or more of my books from the Inside System Storage series, volumes I through V, available in hardcover, paperback and eBook formats from my [Spotlight page on Lulu.com].)
This year, I am thankful that I will be in Tucson with my friends and family. The weather here is expected to be a beautiful 72 degrees Fahrenheit.
Last year, I hosted my friends and family in my home for the big meal. It went so well that I invited everyone back again this year. In August, I started the contractual process to remodel my kitchen, and the company I hired assured me repeatedly it would be ready by Thanksgiving.
Unfortunately, due to a combination of sloppy project management by the company I hired to do the work and a few unforeseen circumstances that caused some delays, I have no kitchen.
I have an empty room where my kitchen used to be, the floor partially tiled, the walls clean and freshly panelled and painted. My new cabinets and sink are stacked up inside cardboard boxes in my garage.
So, instead, I am taking everyone out. I am thankful there are restaurants open tomorrow, and I was able to make a last-minute reservation for the six of us. Construction will resume on Friday.
Wrapping up my week on All-Flash arrays, I thought I would cover some of the Enterprise Reliability features of the IBM FlashSystem.
On Monday, [IBM FlashSystem versus EMC XtremeIO all-Flash Arrays], I discussed some of the features of the IBM FlashSystem that differentiate it from EMC's XtremeIO and other all-Flash arrays. On Tuesday, [IBM 2013 Storage Announcements for November 19] included discussion of the all-Flash model of the IBM System Storage DS8870 disk system.
Just as light bulbs burn out eventually after repeatedly being turned on and off, Flash does not last forever either.
A set of transistors can represent a single bit of information (Single-Level Cell, or SLC for short), or multiple bits (Multi-Level Cell, MLC). MLC typically refers to two bits, with a newer "Triple-Level Cell" or TLC technology able to store three bits per set of transistors.
SLC is faster and can endure more "Program-erase" write cycles, but MLC is less expensive to manufacture and therefore used in most consumer products, like digital cameras, smart phones, music players and USB memory sticks. To learn more on this, see this 6-page IBM whitepaper on [Comparison of NAND Flash Technologies Used in Solid-State Storage].
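The relationship between bits per cell and density is simple to see: each extra bit doubles the number of distinct charge levels a cell must reliably distinguish, which is why density goes up while speed and endurance go down. A quick sketch (the cell names are standard; the loop is just for illustration):

```python
# Bits stored per NAND cell for each technology mentioned above.
CELL_TYPES = {"SLC": 1, "MLC": 2, "TLC": 3}

for name, bits in CELL_TYPES.items():
    states = 2 ** bits  # distinct voltage levels the cell must distinguish
    print(f"{name}: {bits} bit(s)/cell -> {states} voltage states")
```

SLC cells only need to tell two levels apart, while TLC must resolve eight, leaving far less margin for charge leakage over repeated program-erase cycles.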
In between, "Enterprise MLC" (or eMLC for short) refers to a different grade of chips IBM gets from the flash manufacturer. eMLC chips use a similar MLC bit arrangement, but are typically selected from higher bins and, most importantly, use longer program-erase cycle times, which yield greater chip endurance at the expense of shorter data retention when power is off (but seriously, when is anything off for very long in a data center?).
As a result, eMLC has 10x the endurance of regular MLC, approaching parity with SLC at half the cost!
In the IBM FlashSystem, DRAM cache is used to buffer writes, which are then destaged to Flash. This helps to further improve endurance.
For enterprise reliability, each Flash chip on the IBM FlashSystem has Error Correcting Codes (ECC), and then each set of 10 chips is placed in a 9+P RAID-5 configuration.
The chips are sub-divided into 16 planes. In the event a cell fails, the data for that plane can be reconstructed from parity and written to spare space on the other planes of that same chip set. The stripe is then reformatted as an 8+P RAID-5, bypassing the failed plane.
In this manner, a cell failure only results in losing a small portion of one chip. If the corresponding plane later fails on another chip in the set, the stripe drops down to 7+P, then 6+P, 5+P, and finally 4+P. This is known as "Variable Stripe RAID" or VSR for short.
IBM FlashSystem can survive over 1,000 such cell failures without an outage. By comparison, a single cell failure on an SSD often marks the entire drive as a failure.
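The parity math behind a stripe like this can be sketched in a few lines of Python. This is a toy illustration of XOR-based RAID-5 reconstruction, not IBM's implementation: it builds a 9+P stripe, "loses" one block, and rebuilds it from the survivors plus parity, just as VSR does before rewriting the stripe at a narrower width.

```python
from functools import reduce

def xor(blocks):
    """XOR a list of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# 9 data blocks plus 1 parity block, as in the 9+P stripe described above.
data = [bytes([i] * 4) for i in range(1, 10)]
parity = xor(data)

# Simulate losing the plane holding block 3: rebuild its contents from
# the surviving data blocks plus parity, after which the stripe could be
# rewritten as 8+P, bypassing the failed plane.
survivors = data[:3] + data[4:] + [parity]
rebuilt = xor(survivors)
assert rebuilt == data[3]
```

Because XOR is its own inverse, XOR-ing parity with every surviving block cancels them out and leaves exactly the missing block, which is why a single failure per stripe is always recoverable.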
But wait, there's more. Why stop at just RAID-5 across 10 chips? The chips are organized into modules, and IBM FlashSystem can perform RAID-5 across modules, in a 10+P+S RAID-5 configuration. This is referred to as "Two-Dimensional RAID" or 2D-RAID for short.
Even if you lost an entire module, the system will automatically rebuild on the spare module, and you can replace the bad one non-disruptively.
Many use cases for all-Flash arrays do not require such high levels of Enterprise reliability. Several of the all-Flash competitors have adopted a simpler design philosophy instead.
The idea is to assume that the data stored on them is just a copy from some other storage medium. In the event of a Flash failure, it can easily be restored from a mirrored copy or backup.
For the IBM FlashSystem, the newer 800 series is based on eMLC, ideal for the majority of business applications, databases and virtual machine images placed on all-Flash arrays. The older 700 series is based on more expensive SLC, designed specifically for sustained write-intensive workloads.
Within each series, the "tens" models (710, 810) offer RAID-0 striping across ECC and VSR protected modules. For higher levels of availability, the "twenties" models (720, 820) offer ECC, VSR and 2D-RAID protection.
Well it's Tuesday again, and you know what that means? IBM Announcements!
You might be thinking, didn't IBM just have a [huge storage announcement October 8, 2013]? You would be right! IBM's $1B additional investment in Storage has been like a shot of adrenaline, getting new features and functions out sooner to our clients.
These new models will help our clients deploy new workloads and consolidate existing workloads.
If Eskimos have 37 words for "snow", then EMC has perhaps a similar number of names for "failure". I have already covered a few of their past attempts, including [ATMOS], [Invista], and [VPLEX]. Last week, EMC introduced its latest, called XtremeIO.
But rather than focus on XtremeIO's many shortcomings, I thought it would be better to point out the highlights of IBM's All-Flash array, IBM FlashSystem.
But first, a quick story.
Two years ago, I worked the booth at [Oracle OpenWorld 2011]. After a conference attendee had visited the booths of Violin Memory and Pure Storage, he asked me why IBM did not have an all-Flash array.
Since then, IBM has added 800GB support to the Storwize V7000, doubling the capacity. More importantly, IBM acquired Texas Memory Systems, and offers a much better all-Flash array.
Flash can be deployed at three levels. The first is in the server itself, such as with PCIe cards containing Flash chips, limited to applications running on that server only.
The second option is a hybrid disk system, that can intermix Flash-based Solid State Drives (SSD) with regular spinning hard disk drives (HDD). These can be attached to many servers.
The problem with this approach is that when Flash is packaged to pretend to be spinning disk, it undermines some of the performance benefits. Traditional disk system architectures, using SCSI commands over device adapter loops, can introduce added latency.
The third fits snugly in the middle: all-Flash arrays designed from the ground up to be only Flash.
Whereas SSD can typically achieve an I/O latency in the 300 to 1000 microseconds range, IBM FlashSystem can process I/O in the 25 to 110 microsecond range. That is a huge difference!
(FTC Disclosure: The U.S. Federal Trade Commission requires that I mention that I am an IBM employee, and that this post may be considered a paid, celebrity endorsement of both the IBM FlashSystem and IBM Storwize family of products. I have no financial interest in EMC, do not endorse the XtremeIO mentioned here, and was not paid to mention their company or products in any manner.)
Fellow blogger and IBM Master Inventor Barry Whyte has a great comparison table in his blog post [Extreme Blogging]. I thought I would add a column for the Storwize V7000 with 18 Solid State Drives.
While it is easy to show that EMC's XtremeIO does not hold a candle to IBM FlashSystems, I think it is more amusing that it is not even as good as a Storwize V7000 with SSD that IBM offered two years ago, long before [EMC acquired XtremeIO company] back in May 2012.
But don't just take my word for it: fellow blogger Robin Harris, on his StorageMojo blog, has several posts, including [EMC's Xtreme embarrassment] and [XtremLY late XtremIO launch]. Both are worth a read.
Earlier this year, [IBM announced it is investing $1 Billion USD in Flash technology]. EMC's announcement last week shows that they are at least 18 months behind IBM in Flash technology solutions.