This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years with IBM Systems, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently, he has been leading efforts across the Communication/CSI Market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, Lloyd supported industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this solution, part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and on LinkedIn as Lloyd Dean.
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Last week, in Computer Technology Review's article [Tiering: Scale Up? Scale Out? Do Both], Mark Ferelli interviews fellow blogger Hu Yoshida, CTO of Hitachi Data Systems (HDS). Here's an excerpt:
"MF/CTR: A global cache should be required to implement that common pool that you’re talking about going across all tiers.
Hu/HDS: Right. So that is needed to get to all the resources. Now with our system, we can also attach external storage behind it for capacity so that as the storage ages out or becomes less active we can move it to the external storage. They would certainly have less performance capability, but you don’t need it for the stale data that we’re aging down. Right now we’re the only vendor that can provide this type of tiering.
If you look at other people who do virtualization like IBM’s SVC, the SVC has no storage within it because it’s sitting so if you attach any storage behind it, there is some performance degradation because you have this appliance sitting in front. That appliance is also very limited in cache and very limited in the number of storage boards on it. It cannot really provide you additional performance than what is attached behind it. And in fact, it will always degrade what is attached behind it because it’s not storage, where as our USP is storage and it has a global cache and it has thousands of port connections, load balancing and all that. So our front end can enhance existing storage that sits behind it."
This is not the first time I have had to correct misperceptions about IBM's SAN Volume Controller (SVC) held by Hu and others. This month marks my four-year "blogoversary", and I seem to spend a large portion of my blogging time setting the record straight. Here are just a few of my favorite posts on SVC from back in 2007:
Since day one, the SAN Volume Controller has focused primarily on external storage. The early models had just battery-protected DRAM cache memory, but the most recent model, the 2145-CF8, adds support for internal SLC NAND flash solid-state drives. To fully appreciate how SVC can improve the performance of the disks it manages, I need to use some visual aids.
In this first chart, we look at a 70/30/50 workload. This indicates that 70 percent of the IOPS are reads, 30 percent writes, and 50 percent can be satisfied as cache hits directly from the SVC. For the reads, this means that 50 percent are read-hits satisfied from SVC DRAM cache, and 50 percent are read-miss that have to get the data from the managed disk, either from the managed disk's own cache, or from the actual spinning drives inside that managed disk array.
For writes, all are cache-hits, but some must eventually be destaged to the managed disk. Typically, we find that a third of writes are over-written before this happens, so only two-thirds are written down to the managed disk.
In this example, the SVC reduced the burden of the managed disk from 100,000 IOPS down to 55,000, which is 35,000 reads and 20,000 writes. Some have argued against putting one level of cache (SVC) in front of another level of cache (managed disk arrays). However, CPU processor designers have long recognized the value of hierarchical cache with L1, L2, L3 and sometimes even L4 caches. The cache-hits on SVC are faster than most disk system's cache-hits.
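The arithmetic above is simple enough to sketch in a few lines of Python. This is just a back-of-the-envelope model of the 70/30/50 example, using the percentages assumed in the text, not measured values from any particular configuration:

```python
# Model of how a cache in front of managed disk reduces back-end IOPS.
# Read misses pass through to the managed disk; writes that are
# over-written while still in cache never need to be destaged.

def backend_iops(total_iops, read_fraction, read_hit_fraction, overwrite_fraction):
    """Return (read_misses, destaged_writes) sent to the managed disk."""
    reads = total_iops * read_fraction
    writes = total_iops * (1 - read_fraction)
    read_misses = reads * (1 - read_hit_fraction)
    destaged_writes = writes * (1 - overwrite_fraction)
    return read_misses, destaged_writes

# 70% reads, 50% read-hit ratio, one third of writes over-written in cache
reads, writes = backend_iops(100_000, 0.70, 0.50, 1 / 3)
print(f"back end: {reads:,.0f} reads + {writes:,.0f} writes = {reads + writes:,.0f} IOPS")
# 35,000 reads + 20,000 writes = 55,000 IOPS at the managed disk
```

Plug in your own workload mix and hit ratio to see how much (or how little) a front-end cache would offload your back-end arrays.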
This is a Ponder curve, mapping millisecond response (MSR) times at different levels of I/O per second, named after the IBM scientist John Ponder who created them. Most disk array vendors publish similar curves for each of their products. In this case, we see that 100,000 IOPS would cause a 25 msec response time, but when the load is reduced to 55,000 IOPS, the average response time drops to only 7 msec.
To be fair, the SVC does introduce 0.06 msec of additional latency on read-misses, so let's call this 7.06 msec. This tiny amount of latency could be what Hu Yoshida was referring to when he said there was "some performance degradation". There are other storage virtualization products in the market that do not provide caching to boost performance, but rather just map incoming requests to outgoing requests, and these can indeed slow down every I/O they process. Perhaps Hu was thinking of those instead of IBM's SVC when he made his comments.
Of course, not all workloads are 70/30/50, and not every disk array is driven to its maximum capability, so your mileage may vary. As we slide down to the left of the curve where things are flatter, the improvement in performance diminishes.
(Chart: Ponder curve showing IOPS and millisecond response [MSR] times before and after SVC)
Hitachi's offerings, including the HDS USP-V, USP-VM and the recently announced Virtual Storage Platform (VSP), also sold by HP under the name P9500, have a similar architecture to the SVC and can offer similar benefits, but oddly the Hitachi engineers have decided to treat externally attached storage as a second-class citizen. Hu mentions data that "ages out or becomes less active we can move it to the external storage." IBM has chosen not to impose this "caste" system on the design of the SAN Volume Controller.
The SVC has been around since 2003, before the USP-V came to market, and IBM has sold over 20,000 SVC nodes over the past seven years. The SVC can indeed improve performance of managed disk systems, in some cases by a substantial amount. The 0.06 msec latency on read-miss requests represents less than 1 percent of total performance in production workloads. SVC nearly always improves performance, and in the worst case provides the same performance with added functionality and flexibility. For most people who start using the SVC, the performance boost comes as a delightful surprise.
To learn more about IBM's upcoming products and how IBM will lead in storage this decade, register for next week's webcast "Taming the Information Explosion with IBM Storage" featuring Dan Galvan, IBM Vice President, and Steve Duplessie, Senior Analyst and Founder of Enterprise Strategy Group (ESG).
In my presentations in Australia and New Zealand, I mentioned that people were re-discovering the benefits of removable media. While floppy diskettes were a convenient way of passing information from one person to another, they unfortunately did not have enough capacity. In today's world, you may need Gigabytes or Terabytes of re-writeable storage with a file system interface that can easily be passed from one person to another. In this post, I explore three options.
(FCC Disclaimer: I work for IBM, and IBM has no business relationship with Cirago at the time of this writing. Cirago has not paid me to mention their product, but instead provided me a free loaner that I promised to return to them after my evaluation is completed. This post should not be considered an endorsement for Cirago's products. List prices for Cirago and IBM products were determined from publicly available sources for the United States, and may vary in different countries. The views expressed herein may not necessarily reflect the views and opinions of either IBM or Cirago.)
I took a few photos so you can see what exactly this device looks like. Basically, it is a plastic box that holds a single naked disk drive. It has four little rubber feet so that it does not slip on your desk surface.
The inside is quite simple. The power and SATA connections match those of either a standard 3.5 inch drive or the smaller form factor (SFF) 2.5 inch drive. However, to my dismay, it does not handle EIDE drives, of which I have a ton. After taking apart six different computer systems, I found only one with SATA drives to try this unit out with.
The unit comes with a USB cable and AC/DC power adapter. In my case, I found the USB 3.0 cable too short for my liking. My tower systems are under my desk, but I like keeping docking stations like this on top of the desk, within easy reach; the cable would not stretch that far.
Instead, I ended up putting it half-way in between, behind my desk, sitting on another spare system. Not ideal, but in theory there are USB-extension cables that probably could fix this.
Here it is with the drive inside. I had a 3.5 inch Western Digital [1600AAJS drive] 160 GB, SATA 3 Gbps, 8 MB Cache, 7200 RPM.
To compare the performance, I used a dual-core AMD [Athlon X2] system that I had built for my 2008 [One Laptop Per Child] project, running the same tests first with the drive externally in the Cirago docking station, then with the same drive internally on the native SATA controller. Although the Cirago documentation indicated that Windows was required, I used Ubuntu Linux 10.04 LTS just fine, running the flexible I/O [fio] benchmarking tool against an ext3 file system.
Sequential Write - a common use for external disk drive is backup.
Random read - randomly read files ranging from 5KB to 10MB in size.
Random mixed - randomly read/write files (50/50 mix) ranging from 5KB to 10MB in size.
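For readers who want to try something similar, a fio job file along these lines would approximate the three workloads above. I am not reproducing my exact job file here; the option names come from fio's standard job-file format, but the directory, file counts and sizes shown are assumptions for illustration:

```ini
; Hypothetical fio job file approximating the three tests above
[global]
directory=/mnt/cirago     ; mount point of the ext3 file system under test
ioengine=sync
nrfiles=100
filesize=5k-10m           ; files randomly sized between 5KB and 10MB

[seq-write]
rw=write                  ; sequential write, typical of backup
stonewall                 ; run this job to completion before the next

[rand-read]
rw=randread               ; random reads across the file set
stonewall

[rand-mixed]
rw=randrw                 ; random mixed reads and writes
rwmixread=50              ; 50/50 read/write mix
stonewall
```

Running `fio jobfile.fio` reports latency and bandwidth per job, which is where the read/write figures in the table below come from.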
(Table: read and write latency [msec] and bandwidth [KB/s] for the sequential write, random read, and Random Mixed [50/50] workloads, Cirago versus native SATA)
For sequential write, the Cirago performed well, only about 15 percent slower than native SATA. For random workloads, however, it was 30 to 40 percent slower. If you are wondering why I did not get USB 3.0 speeds, there are several factors involved. First, with overheads, 5 Gbps USB 3.0 is expected to deliver only about 400 MB/sec. My SATA 2.0 controller maxes out at 375 MB/sec, and the USB 2.0 ports on my system are rated for 57 MB/sec, but with overheads will only get 20-25 MB/sec. Most spinning drives only get 75 to 110 MB/sec. Even solid-state drives top out at around 250 MB/sec for sustained activity. Despite all that, my internal SATA drive only got 16 MB/sec in sustained write activity, and externally with the Cirago, 14 MB/sec.
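The lesson in those numbers is that a transfer chain runs at the speed of its slowest link, so a fast port cannot make a slow drive faster. A quick sketch of that arithmetic, using the figures quoted above (the USB 2.0 effective rate is my rough estimate within the 20-25 MB/sec range):

```python
# A transfer chain is only as fast as its slowest link (MB/sec).

def chain_throughput(*links_mb_s):
    """Effective throughput of a transfer chain: its slowest link."""
    return min(links_mb_s)

# Internal path: SATA 2.0 controller (375) vs. drive sustained write (16)
internal = chain_throughput(375, 16)

# External path measured 14 MB/sec through the Cirago dock
external = 14

print(f"internal: {internal} MB/sec, external: {external} MB/sec")
print(f"Cirago penalty: {(internal - external) / internal:.1%}")
```

The computed penalty on these two measurements comes out to 12.5 percent, in the same ballpark as the roughly 15 percent sequential-write gap I observed across the full test runs.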
Here is the mess that is inside my system. The slot for drive 2 was blocked by cables, memory chips and the heat sink for my processor. It is possible to damage a system just trying to squeeze between these obstacles.
However, the point of this post is "removable media". Having to open up the case and insert the second drive and wire it up to the correct SATA port was a pain, and certainly a more difficult challenge than the average PC user wishes to tackle.
Price-wise, the Cirago lists for $49 USD, and the 160GB drive I used lists for $69, so the $118 combination is about what you would pay for a fully integrated external USB drive. However, if you have lots of loose drives, this could be more convenient and start to save you some money.
IBM RDX disk backup system
Another problem with the Cirago approach is that the disk drives are naked, with printed circuit board (PCB) exposed. When not in the docking station, where do you put your drive? Did you keep the [anti-static ESD bag] that it came in when you bought it? And once inside the bag, now what? Do you want to just stack it up in a pile with your other pieces of equipment?
To solve this, IBM offers the RDX backup system. These are fully compatible with other RDX systems from Dell, HP, Imation, NEC, Quantum, and Tandberg Data. The concept is to have a docking station that takes removable, rugged plastic-coated disk-enclosed cartridges. The docking station can be part of the PC itself, similar to how CD/DVD drives are installed, or a stand-alone USB 2.0 system, capable of processing data up to 25 MB/sec.
The idea is not new; about 10 years ago we had [Iomega "zip" drives] that offered disk-enclosed cartridges with capacities of 100, 250 and 750MB. Iomega had its fair share of problems with the zip drive, which was ranked in 2006 as the 15th worst technology product of all time, and the company was eventually bought out by EMC two years later (as if EMC has not had enough failures of its own!)
The problem with zip drives was that they did not hold as much as CD or DVD media, and were more expensive. By comparison, IBM RDX cartridges come in sizes from 160GB to 750GB, at list prices starting at $127 USD.
IBM LTO tape with Long-Term File System
Removable media is not just for backup. Disk cartridges, like the IBM RDX above, have the advantage of being random access, but most tape is accessed sequentially. IBM has solved this as well, with the new IBM Long Term File System [LTFS], available for LTO-5 tape cartridges.
With LTFS, the LTO-5 tape cartridge can now act as a super-large USB memory stick for passing information from one person to the next. The LTO-5 cartridge can hold up to 3TB of compressed data and transfer at SAS speeds of up to 140 MB/sec. An LTO-5 tape cartridge lists for only $87 USD.
The LTO-5 drives, such as the IBM [TS2250 drive], can read LTO-3, LTO-4 and LTO-5 cartridges, and can write LTO-4 and LTO-5 cartridges, in a manner that is fully compatible with LTO drives from HP or Quantum. LTO-3, LTO-4 and LTO-5 cartridges are available in WORM or rewriteable formats. LTO-4 and LTO-5 cartridges can be encrypted with built-in 256-bit AES encryption. With three drive manufacturers and seven cartridge manufacturers, there is no threat of vendor lock-in with this approach.
These three options offer various trade-offs in price, performance, security and convenience. Not surprisingly, tape continues to be the cheapest option.
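To make the price comparison concrete, here is a rough cost-per-GB calculation using the US list prices quoted in this post. The LTO-5 line uses the 1.5TB native capacity rather than the 3TB compressed figure, and the RDX and LTO entries price only the media, since their docks and drives are bought separately:

```python
# Rough cost-per-GB for the three removable-media options discussed
# above, based on the list prices quoted in this post (USD, circa 2010).

options = {
    # name: (hardware $, media $, capacity in GB)
    "Cirago dock + bare 160GB SATA drive": (49, 69, 160),
    "IBM RDX 160GB cartridge (media only)": (0, 127, 160),
    "LTO-5 cartridge, 1.5TB native (media only)": (0, 87, 1500),
}

for name, (hardware, media, capacity_gb) in options.items():
    cost_per_gb = (hardware + media) / capacity_gb
    print(f"{name}: ${cost_per_gb:.2f}/GB")
```

Even ignoring compression, the tape cartridge works out to roughly six cents per GB, versus about 74 to 79 cents per GB for the two disk options, which is why tape keeps winning on price.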
Wrapping up my seven-city romp through Australia and New Zealand, the final city was Canberra, which is the capital of Australia. As with Wellington, this meant many of the clients in the audience work in government agencies.
I had not taken any photos of Anna Wells, IBM Storage Sales Leader for ANZ, but I was able to find this caricature of her on a poster from an award she won within IBM.
I also did not have a picture of Robert, my videographer for this trip, who was always behind the camera himself.
The event went smoothly, just like the rest of them. Anna presented IBM's storage strategy and highlighted specific IBM storage solutions.
I had several emails asking whether this event was called "Storage Optimisation Breakfast" because it was held in the mornings, or whether we actually served food at these events. The answer is we actually served food, a variation of the [Full English Breakfast], and most of the attendees gobbled it down while Anna spoke.
The fare was quite similar across all seven locations: scrambled or poached eggs, on toast or an English muffin, ham/bacon/sausages, potatoes or mushrooms, and half of a baked tomato with bits of something toasted on top.
One morning, for a change, I decided instead to have a bowl of Weet-Bix cereal. Tasted like cardboard. I learned my lesson.
Next, we had Will Quodling, Manager of Infrastructure Operations at Australia's Department of Innovation, Industry, Science and Research. The Department consists of 3,200 staff who strive to encourage the sustainable growth of Australian industries. It is committed to developing policies and delivering programs that provide lasting economic benefits to ensure Australia's competitive future, and it undertakes analysis and provides services and advice to the business, science and research community. U.S. President Barack Obama visited Australia and was interested in adopting a similar concept for the United States.
The department was looking to replace their existing IBM System Storage DS4800 disk systems with something more energy efficient. They selected the IBM XIV Storage System, with an expected savings of 10 kW in power draw. They are able to run 800 VMware images and 150 VDI workstations using storage on one XIV, replicate the data to a second XIV at a remote location, and use a third XIV for their Web serving environment. They tested both single-drive and full-module failures, and experienced better-than-expected rebuild times, with no impact to users and no impact to performance.
After 17 days without a functioning government, Australia finally selected a prime minister. Her name is Julia Gillard, shown here. She won in part by promising to build a National Broadband Network (NBN) for the entire country, including the rural areas.
[Canberra] is an interesting town, a fully planned community designed in 1913 by Chicago's husband-and-wife architect team of Walter Burley Griffin and Marion Mahony Griffin. The location was selected as a half-way compromise between Australia's two largest cities, Sydney and Melbourne.
I would like to thank all the wonderful people in both Australia and New Zealand for making this a successful trip!
Continuing my romp through Australia and New Zealand, this is city 6 - Wellington, which is the capital of New Zealand. This meant many of the clients in the audience work in government agencies.
Here is my view of Wellington from my hotel room at the Duxton Hotel. I have been to Wellington before; it has that "small town" feel.
The event went smoothly, just like the rest of them. Anna Wells presented IBM's storage strategy and highlighted specific IBM storage solutions.
Replacing Natalie from GPJ Australia is Megan, who coordinated our events in both Auckland and Wellington, NZ.
Next, we had Glen Mitchell again from Telecom NZ, presenting his success story going from an EMC-only environment to a dual IBM-and-EMC mixed environment managed by IBM SAN Volume Controller.
Someone mentioned that my job as a public speaker in different cities was akin to "busking". I had no idea what "busking" was, until I was shown two buskers "in the act" in front of a bank. Americans call these "street performers", which shows we appreciate this art form perhaps more than the Kiwis do.
Lastly, I covered future trends in storage. This is particularly interesting to government agencies, which are keen to reduce costs, manage risks, and improve service delivery.
Lastly, this is Aisel Giumali, IBM storage marketing manager for Australia and New Zealand. She managed my calendar, all of my events and one-on-one client briefings. I could not have handled these past two weeks without her.
Since the first big earthquake on Saturday, there have been several smaller aftershocks, including one in Wellington itself. It is a good thing I was heading back to Australia for the rest of the trip.
While I was in Auckland, New Zealand, for the IBM Storage Optimisation Breakfast series of events, I agreed to also talk at the [Ingram Micro Showcase 2010] held there the same week. David Bird, who was scheduled to speak, was down in Christchurch taking care of his family after the big 7.1 magnitude earthquake.
The marketing team did a great job putting up a "Smarter Planet" ball up near the ceiling. It had to be "enhanced" with some extra black ink to include the outline of the islands of New Zealand.
Basically, I had 25 minutes to present "Future Storage Trends" to a packed room with standing room only. This was a shortened version of my 40-minute talk that I had been already giving at the Storage Optimisation Breakfast events. This presentation was based on three key trends:
There is a shift in the role each storage media type is going to be used for. Rising energy costs, performance and economics are causing the IT industry to re-evaluate their use of solid-state drives, spinning disk, tape cartridge, paper and analog film. IBM Easy Tier and blended disk-and-tape solutions are paving the way for these future trends.
Advancements in communications technology and bandwidth are driving a convergence of SANs and LANs into a single Data Center Network (DCN) based on Converged Enhanced Ethernet (CEE). IBM's top-of-rack switches and converged network adapters (CNA) are the first step in this process.
Cloud computing is driving new levels of standardization, automation and management that will impact the way internal IT departments manage their own IT equipment as well. IBM's five different levels of cloud computing offerings, from private cloud to public cloud, provide every individual or company a level of service that is just right.
Here is the IBM booth. As is often the case, we get a prestigious corner booth that maximizes foot traffic to see our solutions.
While I was walking around, the folks at the Samsung booth noticed my Samsung Galaxy S smartphone. These are not yet available in the New Zealand market, so they thought I was a Samsung employee. I explained that I am an American, and that these have been available for weeks now in the States.
The Samsung team then showed me their latest 3D television. Basically, you wear special 3D glasses that sync up electronically with the TV screen to give the appearance of a 3D image on anything you play. I believe the TV comes with two pairs of glasses, and additional pairs can be purchased at substantial extra cost. It works with any movie or TV show; there is no requirement that it be filmed in 3D mode. The 3D TV automatically analyzes what is moving on the screen, makes that item clearer and sharper, and renders things considered background fuzzier and out of focus. The effect is really incredible.
One of the storage solutions on display was the entry-level IBM System Storage DS3524 disk system, which is a small 2U high cabinet that holds 24 drives. These are the small form factor 2.5 inch drives. It's amazing we can pack so many drives in such a compact rack-optimized enclosure!
Ingram Micro is one of IBM's technology distributors, and it was good to see it was a well-attended event.