Tony Pearson is a Master Inventor and Senior IT Architect for the IBM Storage product line at the
IBM Systems Client Experience Center in Tucson Arizona, and featured contributor
to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is
author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
My books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Continuing this week's theme on customer references of IBM's solutions, today I will discuss the success at Kantana Animation Studios.
Here is a 3-minute video from the good folks at Kantana Animation Studios, part of the [Kantana Group]. They produced the animated movie [Khan Kluay] using IBM Scale-out File Services (SoFS), a product IBM announced in November 2007.
As a film-maker myself (see this sample [Highlights clip]) and active member of the Tucson Film Society, I am pleased to see IBM so greatly involved in the film industry. I've had the pleasure to visit some of these animation studios myself and meet with other film-makers at various conferences.
For more details on Kantana's implementation, see the [Case Study].
A faithful reader of this blog, Tom, sent me a link to Orson Scott Card's article titled [PROGRAMMERS AS BEES (or, how to kill a software company)]. "Is there any truth in this?" Tom asked. Having worked both sides of this fence as I approach my 22nd anniversary at IBM, I guess I can venture some opinions on this piece. Let's start with this excerpt:
"The environment that nurtures creative programmers kills management and marketing types - and vice versa."
By this, he means "kills" in the UNIX sense, I imagine, and not the "Grand Theft Auto IV" sense. Different people solve problems differently. Some programmers have the luxury that they can often focus on a single platform, single chipset, single OS, and so on, but Marketing types are trying to come up with messaging that appeals to a broad audience, from people with business backgrounds to others with more technical backgrounds, and that can be more challenging. For programmers, "creative" is an adjective; for marketers, it's a noun.
"Programming is the Great Game. It consumes you, body and soul. When you're caught up in it, nothing else matters."
True. As a storage consultant, I find myself writing code a lot, from small programs and scripts to HTML code for this blog. When you are in your zone, working on something, you can easily lose track of time.
"Here's the secret that every successful software company is based on: You can domesticate programmers the way beekeepers tame bees. You can't exactly communicate with them, but you can get them to swarm in one place and when they're not looking, you can carry off the honey. You keep these bees from stinging by paying them money. More money than they know what to do with. But that's less than you might think."
I have never tamed bees, but many of my friends who are still programmers are motivated by factors other than maximizing their income, such as: friendly co-workers, job security, casual attire, and interesting challenges. A few make more than they know what to do with; the rest have "significant others" who solve that problem for them.
"One way or another, marketers get control. But...control of what? Instead of finding assembly lines of productive workers, they quickly discover that their product is produced by utterly unpredictable, uncooperative, disobedient, and worst of all, unattractive people who resist all attempts at management."
False. Either marketing had control in the first place (a la Apple, Inc.) or it never did. "Control of what?" is the key phrase here.
"The shock is greater for the coder, though. He suddenly finds that alien creatures control his life. Meetings, Schedules, Reports. And now someone demands that he PLAN all his programming and then stick to the plan, never improving, never tweaking, and never, never touching some other team's code."
True. But if you don't like surprises, perhaps software engineering is not the right career path for you.
"The hive has been ruined. The best coders leave. And the marketers, comfortable now because they're surrounded by power neckties and they have things under control, are baffled that each new iteration of their software loses market share as the code bloats and the bugs proliferate. Got to get some better packaging. Yeah, that's it."
This one depends. I've seen teams survive and manage, with junior programmers stepping up to backfill leadership roles, and other times, projects are scrapped, or started anew elsewhere. As for marketers, it doesn't take much to get one baffled, does it?
(Note: IBM [Guidelines] prevent me from picking blogfights, so this post is only to set the record straight on some misunderstandings, point to some positive press about IBM's leadership in this area, and for me to provide a different point of view.)
First, let's set the record straight on a few things. The [RedPaper is still in draft form] and under review, so some information has not yet been updated to reflect the current situation.
You can have 16 or 32 SSD per DA pair. However, you can only have a maximum of 128 SSD drives total in any DS8100 or DS8300. In the case of the IBM DS8300 with 8 DA pairs, it makes more sense to spread the SSD out across all 8 pairs, and perhaps this is what confused BarryB.
Yes, you can order an all-SSD model of the IBM DS8000 disk system. I don't see anywhere in the RedPaper that suggests otherwise, and I have confirmed with our offering manager that this is the case.
The 73GB and 146GB drives are freshly manufactured from STEC. The 146GB and 200GB drives are actually the same drive, just formatted differently. The 200GB format does not offer as much spare capacity for wear-leveling, and is therefore intended only for read-intensive workloads. (Perhaps EMC wants you to find this out the hard way so that you replace them more often???) These reduced-spare-capacity formats may not be appropriate for some write-intensive workloads. Don't let anyone from EMC try to misrepresent the 73GB or 146GB drives from STEC as older, obsolete, collecting dust in a warehouse, or otherwise no longer manufactured by STEC.
You can relocate data from HDD to SSD using "Data Set FlashCopy", a feature that does not involve host-based copy services, does not consume any MIPS on your System z mainframe, and is performed inside the DS8000 disk system. You can also use host-based copy services as well, but it is not the only way.
You can use any supported level of z/OS with SSD in the IBM DS8000. There is ENHANCED support mentioned in the RedPaper that you get only with z/OS 1.8 and above, allowing you to create automation policies that place data sets onto SSD or non-SSD storage pools. This synergy makes SSD with the IBM DS8000 superior to EMC's initial offerings, which lacked this OS support.
I find it amusing that BarryB's basic argument is that IBM's initial release of SSD on the DS8000 delivers less than what the underlying architecture could be extended to support. Actually, if you look at EMC's November release of Atmos, as well as their most recent announcement of V-Max, they basically say the same thing: "Stay tuned, this is just our initial release, with various restrictions and limitations, but more will follow." Architecturally, the IBM DS8000 could support a mix of SSD and non-SSD on the same DA pairs, could support RAID6 and RAID10 as well, and could support larger capacity drives or use higher-capacity read-intensive formats. These could all be done via RPQ if needed, or in a follow-on release.
BarryB's second argument is that IBM is somehow "throwing cold water" on SSD technology, and that IBM is trying to discourage people from using SSD even while offering disk systems with this technology. IBM offered SSD storage on BladeCenter servers LONG BEFORE any EMC disk system offering, and IBM continues to innovate in ways that deliver the best business value from this new technology. Take for example this 24-page IBM Technical Brief: [IBM System z® and System Storage DS8000: Accelerating the SAP® Deposits Management Workload With Solid State Drives]. It is full of example configurations that show that SSD on the IBM DS8000 can help in practical business applications. IBM takes a solution view, working with DB2, DFSMS, z/OS, High Performance FICON (zHPF), and down the stack to optimize performance and provide real business value. Thanks to this synergy, IBM can provide 90 percent of the performance improvement with only 10 percent of the SSD capacity of EMC offerings. Now that's innovative!
The price and performance differences between FC and SATA (what EMC was mostly used to) are only 30-50 percent. But the price and performance differences between SSD and HDD are more than an order of magnitude, in some cases 10-30x, similar to the differences between HDD and tape. Of course, if you want hybrid solutions that take best advantage of SSD+HDD, it makes more sense to go to IBM, the leading storage vendor that has been doing HDD+Tape hybrid solutions for the past 30 years. IBM understands this better, and has more experience dealing with these orders of magnitude, than EMC.
But don't just take my word for it. Here is an excerpt from Jim Handy, from [Objective Analysis] market research firm, in a recent Weekly Review from [Pund-IT] (Volume 5, Issue 23--May 6, 2009):
"What about IBM? One thing that we are finding is that IBM really “Gets It” in the area of flash in the data center. Readers of the Pund-IT Review will not only recall that IBM Research pushed its SSD-based “Quicksilver” storage system to one million IOPS using Fusion-io flash-based storage, but they also may have noticed that the recent MySQL and mem-cached appliances recently introduced by Schooner Information Technology are both flash-enabled devices introduced in partnership with IBM. Ironically, while other OEMs are taking the cautious approach of introducing a standard SSD option to their systems first, IBM appears to have been working on several approaches simultaneously to bring flash to the data center not only in SSDs, but in innovative ways as well."
As for why STEC put out a press release on their own this week without a corresponding IBM press release, I can only say that IBM already announced all of this support back in February, and I blogged about it in my post [Dynamic Infrastructure - Disk Announcements 1Q09]. This is not the first time one of IBM's suppliers has tried to drum up business in this manner. Intel often funds promotions for IBM System x servers (the leading Intel-based servers in the industry) to help drive more business for their Xeon processor.
So, BarryB, perhaps it's time for you to take out your green pen and work up another one of your all-too-common retractions and corrections.
Many thanks to the 186 people who registered for yesterday's webcast "Solving the Storage Capacity Crisis -- Tools and Practices for Effective Management!" We had some excellent questions posed during the live Q&A:
Do you recommend moving to a SAN before implementing the management techniques you described, or will these tactics work just as well on direct-attached storage?
How does data center tiering differ from hierarchical storage management?
How do you recommend decisions about data priority be made when there are multiple stakeholders competing for attention?
You didn't mention deduplication. Does that have much impact on capacity management?
When outsourcing to a storage service provider, do you have any recommendations on the merits of wholesale outsourcing vs. partial outsourcing?
What are the dangers of giving end-users the ability to manage their own storage? What kind of education should be put in place?
The webcast was recorded, so in case you missed it, or just want to hear it again, the recording is now available in the [On24 archives].
The latest update to the IBM Storage channel on YouTube is fellow IBMer Bob Dalton presenting IBM Scale-Out Network Attached Storage (SONAS) at the NAB 2010 conference. Here is the quick [2-minute YouTube video].
My post last week [Solid State Disk on DS8000 Disk Systems] kicked up some dust in the comment section. Fellow blogger BarryB (a member of the elite [Anti-Social Media gang from EMC]) tried to imply that 200GB solid state disk (SSD) drives were different or better than the 146GB drives used in IBM System Storage DS8000 disk systems. I pointed out that they are actually the same physical drive, just formatted differently.
To explain the difference, I will first have to go back to regular spinning Hard Disk Drives (HDD). There are variances in manufacturing, so how do you make sure that a spinning disk has AT LEAST the amount of space you are selling it as? The solution is to include extra. This is the same way that rice, flour, and a variety of other commodities are sold. Legally, if it says you are buying a pound or kilo of flour, then it must be AT LEAST that much to be legal labeling. Including some extra is a safe way to comply with the law. In the case of disk capacity, having some spare capacity and the means to use it follows the same general concept.
(Disk capacity is measured in multiples of 1000, in this case a Gigabyte (GB) = 1,000,000,000 bytes, not to be confused with [Gibibyte (GiB)] = 1,073,741,824 bytes, based on multiples of 1024.)
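A minimal Python sketch of the distinction, using a 146GB drive only as an example:

```python
GB = 1000**3    # decimal gigabyte, used to label disk capacity
GiB = 1024**3   # binary gibibyte

capacity_bytes = 146 * GB              # a "146GB" drive
capacity_gib = capacity_bytes / GiB    # the same bytes expressed in GiB

print(f"146 GB = {capacity_bytes:,} bytes = {capacity_gib:.1f} GiB")
```

The same drive looks about 7 percent "smaller" when expressed in binary units, which is why operating systems and drive labels often appear to disagree.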
Let's say a manufacturer plans to sell 146GB HDD. We know that in some cases there might be bad sectors on the disk that won't accept written data on day 1, and there are other marginally-bad sectors that might fail to accept written data a few years later, after wear and tear. A manufacturer might design a 156GB drive with 10GB of spare capacity and format this with a defective-sector table that redirects reads/writes of known bad sectors to good ones. When a bad sector is discovered, it is added to the table, and a new sector is assigned out of the spare capacity. Over time, the amount of space that a drive can store diminishes year after year, and once it drops below its rated capacity, it fails to meet its legal requirements. Based on averages of manufacturing runs and material variances, these could then be sold as 146GB drives, with a life expectancy of 3-5 years.
With Solid State Disk, the technology requires a lot of tricks and techniques to stay above the rated capacity. For example, you can format a 256GB drive as a conservative 146GB usable, with an additional 110GB (75 percent) spare capacity to handle all of the wear-leveling. You could lose up to 22GB of cells per year, and still have the rated capacity for the full five-year life expectancy.
Alternatively, you could take a more aggressive format, say 200GB usable, with only 56GB (28 percent) of spare capacity. If you lost 22GB of cells per year, then sometime during the third year, hopefully under warranty, your vendor could replace the drive with a fresh new one, and it should last the rest of the five year time frame. The failed drive, having 190GB or so usable capacity, could then be re-issued legally as a refurbished 146GB drive to someone else.
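The arithmetic behind the two formats can be sketched in a few lines; the 22GB-per-year wear figure is the illustrative number used above, not a measured specification:

```python
def years_until_below_rated(raw_gb, rated_gb, wear_gb_per_year):
    """Years until cell wear eats through the spare capacity and the
    drive's usable space drops below its rated (sold) capacity."""
    spare_gb = raw_gb - rated_gb
    return spare_gb / wear_gb_per_year

# Conservative format: 256GB raw sold as 146GB, leaving 110GB spare
print(years_until_below_rated(256, 146, 22))   # 5.0 years

# Aggressive format: same raw capacity sold as 200GB, only 56GB spare
print(years_until_below_rated(256, 200, 22))   # about 2.5 years
```

Same physical drive, same wear rate; only the spare-capacity budget, and therefore the useful life at the rated capacity, differs.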
The wear and tear on SSD happens mostly during erase-write cycles, so for read-intensive workloads, such as boot disks for operating system images, the aggressive 200GB format might be fine, and might last the full five years. For traditional business applications (70 percent read, 30 percent write) or more write-intensive workloads, IBM feels the more conservative 146GB format is a safer bet.
This should be of no surprise to anyone. When it comes to the safety, security and integrity of our clients' data, IBM has always emphasized the conservative approach.
Wrapping up this week's exploration of disk system performance, today I will cover the Storage Performance Council (SPC) benchmarks, and why I feel they are relevant in helping customers make purchase decisions. This all started as a response to a comment from EMC blogger Chuck Hollis, who expressed his disappointment in IBM as follows:
You've made representations that SPC testing is somehow relevant to customers' environments, but offered nothing more than platitudes in support of that statement.
Apparently, while everyone else in the blogosphere merely states their opinions and moves on, IBM is held to a higher standard. Fair enough, we're used to that. Let's recap what we covered so far this week:
Monday, I explained how seemingly simple questions like "Which is the tallest building?" or "Which is the fastest disk system?" can be steeped in controversy.
Tuesday, I explored what constitutes a disk system. While there are special storage systems that include HDD that offer tape-emulation, file-oriented access, or non-erasable non-rewriteable protection, it is difficult to get apples-to-apples comparisons with storage systems that don't offer these special features. I focused on the majority of general-purpose disk systems, those that are block-oriented, direct-access.
Today, I will explore ways to apply these metrics to measure and compare storage performance.
Let's take, for example, an IBM System Storage DS8000 disk system. This has a controller that supports various RAID configurations, cache memory, and HDD inside one or more frames. Engineers who are testing individual components of this system might run specific types of I/O requests to test out the performance or validate certain processing.
100% read-hit: all the I/O requests read data expected to be in the cache.
100% read-miss: all the I/O requests read data expected NOT to be in the cache, which must be fetched from HDD.
100% write-hit: all the I/O requests write data into cache.
100% write-miss: all the I/O requests bypass the cache, and are immediately de-staged to HDD. Depending on the RAID configuration, this can result in actually reading or writing several blocks of data on HDD to satisfy a single I/O request.
This is known affectionately in the industry as the "four corners" test, because you can show the four workloads on a box: writes on the left, reads on the right, hits on the top, and misses on the bottom. Engineers are proud of these results, but these workloads do not reflect any practical production workload. At best, since all I/O requests are one of these four types, the four corners provide an expectation range, from the worst performance (most often write-miss, in the lower left corner) to the best performance (most often read-hit, in the upper right corner) you might get with a real workload.
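To see how the four corners bound a real workload, here is a minimal sketch; the per-corner service times and the workload mix are hypothetical numbers for illustration, not DS8000 measurements:

```python
# Hypothetical per-corner service times in milliseconds.
corners_ms = {
    "read_hit":   0.5,   # best case (upper right corner)
    "read_miss":  8.0,
    "write_hit":  1.0,
    "write_miss": 12.0,  # worst case (lower left corner)
}

# A sample workload: 70% reads / 30% writes, with a 50% read cache-hit
# ratio and the write cache absorbing 90% of writes.
mix = {
    "read_hit":   0.70 * 0.50,
    "read_miss":  0.70 * 0.50,
    "write_hit":  0.30 * 0.90,
    "write_miss": 0.30 * 0.10,
}

# Weighted average of the four corners for this mix.
estimate = sum(corners_ms[c] * mix[c] for c in corners_ms)
print(f"Estimated average response time: {estimate:.2f} ms")
```

Whatever mix you plug in, the estimate always lands between the read-hit floor and the write-miss ceiling, which is exactly the "expectation range" the four corners provide.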
To understand what is needed to design a test that is more reflective of real business conditions, let's go back to yesterday's discussion of fuel economy of vehicles, with mileage measured in miles per gallon. The How Stuff Works website offers the following description of the two measurements taken by the EPA:
The "city" program is designed to replicate an urban rush-hour driving experience in which the vehicle is started with the engine cold and is driven in stop-and-go traffic with frequent idling. The car or truck is driven for 11 miles and makes 23 stops over the course of 31 minutes, with an average speed of 20 mph and a top speed of 56 mph.
The "highway" program, on the other hand, is created to emulate rural and interstate freeway driving with a warmed-up engine, making no stops (both of which ensure maximum fuel economy). The vehicle is driven for 10 miles over a period of 12.5 minutes with an average speed of 48 mph and a top speed of 60 mph.
Why two different measurements? Not everyone drives in a city in stop-and-go traffic. Having only one measurement may not reflect the reality that you may travel long distances on the highway. Offering both city and highway measurements allows the consumers to decide which metric relates closer to their actual usage.
Should you expect your actual mileage to be the exact same as the standardized test? Of course not. Nobody drives exactly 11 miles in the city every morning with 23 stops along the way, or 10 miles on the highway at the exact speeds listed. The EPA's famous phrase "your mileage may vary" has been quickly adopted into popular culture's lexicon. All kinds of factors, like weather, distance, and driving style, can cause people to get better or worse mileage than the standardized tests would estimate.
Want more accurate results that reflect your driving pattern, in the specific conditions you are most likely to drive in? You could rent different vehicles for a week and drive them around yourself, keeping track of where you go, how fast you drove, and how many gallons of gas you purchased, so that you can then repeat the process with another rental, and so on, and then base your comparisons on your own findings. Perhaps you find that your results are always 20% worse than EPA estimates when you drive in the city, and 10% worse when you drive on the highway. Perhaps you have many mountains and hills where you drive, you drive too fast, you run the air conditioner too cold, or whatever.
If you did this with five or more vehicles, and ranked them best to worst from your own findings, and also ranked them best to worst based on the standardized results from the EPA, you would likely find the order to be the same. The vehicle with the best standardized result will likely also have the best result from your own experience with the rental cars. The vehicle with the worst standardized result will likely match the worst result from your rental cars.
(This will be one of my main points: standardized estimates don't have to be accurate to be useful in making comparisons. The comparisons and decisions you would make with estimates are the same as you would have made with actual results, or with customized estimates based on current workloads. Because the rankings are in the same order, they are relevant and useful for making decisions based on those comparisons.)
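That point can be sketched in a few lines; the mileage figures are made up, and the 20% penalty stands in for the systematic offset described above:

```python
# Five hypothetical vehicles with their standardized mpg estimates.
standardized = {"car_a": 32, "car_b": 28, "car_c": 24, "car_d": 35, "car_e": 21}

# Suppose your real-world results always come in 20% worse.
actual = {car: mpg * 0.80 for car, mpg in standardized.items()}

# Rank both sets best to worst.
rank_std = sorted(standardized, key=standardized.get, reverse=True)
rank_act = sorted(actual, key=actual.get, reverse=True)

print(rank_std == rank_act)   # True -- the order is identical
```

Any offset or scaling that applies uniformly leaves the ranking unchanged, so the inaccurate estimates still lead you to the same purchase decision.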
Most people shopping around for a new vehicle do not have the time or patience to do this with rental cars. They can use the EPA-certified standardized results to make a ball-park estimate of how much they will spend on gasoline per year, to decide only on cars that might go a certain distance between two cities on a single tank of gas, or merely to rank the vehicles being considered. While mileage may not be the only metric used in making a purchase decision, it can certainly help reduce your consideration set and factor in with other attributes, like the number of cup-holders, or leather seats.
In this regard, the Storage Performance Council has developed two benchmarks that attempt to reflect normal business usage, similar to "City" and "Highway" driving measurements.
SPC-1 consists of a single workload designed to demonstrate the performance of a storage subsystem while performing the typical functions of business critical applications. Those applications are characterized by predominately random I/O operations and require both queries as well as update operations. Examples of those types of applications include OLTP, database operations, and mail server implementations.
SPC-2 consists of three distinct workloads designed to demonstrate the performance of a storage subsystem during the execution of business critical applications that require the large-scale, sequential movement of data. Those applications are characterized predominately by large I/Os organized into one or more concurrent sequential patterns. A description of each of the three SPC-2 workloads is listed below as well as examples of applications characterized by each workload.
Large File Processing: Applications in a wide range of fields, which require simple sequential process of one or more large files such as scientific computing and large-scale financial processing.
Large Database Queries: Applications that involve scans or joins of large relational tables, such as those performed for data mining or business intelligence.
Video on Demand: Applications that provide individualized video entertainment to a community of subscribers by drawing from a digital film library.
The SPC-2 benchmark was added when people suggested that not everyone runs OLTP and database transactional update workloads, just as the "Highway" measurement was added to address the fact that not everyone drives in the city.
If you are one of the customers out there willing to spend the time and resources to do your own performance benchmarking, either at your own data center or with the assistance of a storage provider, I suspect most, if not all, of the major vendors (including IBM, EMC and others), and perhaps even some of the smaller start-ups, would be glad to work with you.
If you want to gather performance data on your actual workloads, and use this to estimate how your performance might be with a new or different storage configuration, IBM has tools to make these estimates, and I suspect (again) that most, if not all, of the other storage vendors have developed similar tools.
For the rest of you who are just looking to decide which storage vendors to invite to your next RFP, and which products you might like to investigate that match the level of performance you need for your next project or application deployment, then the SPC benchmarks might help you with this decision. If performance is important to you, factor these benchmark comparisons in with the rest of the attributes you are looking for in a storage vendor and a storage system.
In my opinion, the SPC benchmarks provide some value in this decision-making process for some people. They are proportionally correct: even if your workload gets only a portion of the SPC estimate, storage systems with faster benchmarks will provide you better performance than storage systems with lower benchmark results. That is why I feel they can be relevant in making valid comparisons for purchase decisions.
Hopefully, I have provided enough "food for thought" on this subject to explain why IBM participates in the Storage Performance Council, why the performance of the SAN Volume Controller can be compared to the performance of other disk systems, and why we at IBM are proud of the benchmark results in our recent press release.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
New Nearline expansion enclosures for FlashSystem V9000 and SAN Volume Controller (SVC)
The new 12 Gb SAS expansion enclosure expands total capacity and delivers a tiered data solution. Each LFF expansion enclosure supports twelve 3.5-inch 8 TB NL-SAS drives. Up to two expansion enclosures are supported by a FlashSystem V9000 or SVC controller pair, delivering up to twenty-four drives and 192 TB of raw capacity. The capacity can be compressed up to 5x (80 percent savings) using IBM Real-time Compression.
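The capacity figures in the announcement work out as follows (a quick arithmetic check, nothing more):

```python
# Each LFF expansion enclosure holds twelve 3.5-inch 8 TB NL-SAS drives,
# and up to two enclosures attach to a V9000 or SVC controller pair.
drives_per_enclosure = 12
drive_tb = 8
enclosures = 2

raw_tb = drives_per_enclosure * drive_tb * enclosures
print(raw_tb)   # 192 TB raw, across 24 drives

# "Up to 5x" Real-time Compression corresponds to an 80% space saving.
compression_ratio = 5
savings = 1 - 1 / compression_ratio
print(f"{savings:.0%}")   # 80%
```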
IBM Spectrum Control and IBM Virtual Storage Center V5.2.10 release
IBM Spectrum Control continues its quarterly continuous delivery model with the version 5.2.10 release. This is also included in all variants of IBM Virtual Storage Center, which bundles IBM Spectrum Control with IBM Spectrum Virtualize products. New features include:
View more details about capacity growth over time for storage systems, pools, volumes, filesets, and file systems. This helps capacity planners plan for future purchases and procurement.
Aggregate basic information across multiple Spectrum Control servers into a single place. This roll-up capability was temporarily removed in Spectrum Control V5.2.8 and is now available in the web-based GUI for the first time. It is intended for clients who manage multiple data center sites, but can also be used by Cloud Service Providers and Managed Service Providers to generate reporting across a group of clients.
Compare the workload and performance characteristics of IBM SAN Volume Controller and IBM Storwize systems against best practice performance guidelines. This is especially useful synergy for the IBM Virtual Storage Center bundles.
Export performance data for storage systems and fabrics using a new Create Performance Support Package wizard. This is helpful in case you observe a performance problem with your IBM storage system, and IBM support for that device would like to receive the measured performance statistics for further analysis. In the same manner that IBM Spectrum Control drastically reduces troubleshooting time for clients, it has also proven useful for IBM support teams.
Understand how the capacity of storage systems is used when storage virtualization is implemented in the environment, by looking at the information about virtualized and non-virtualized capacity. This allows storage administrators to show upper management the return on their investment in IBM Spectrum Virtualize (SVC, Storwize, etc.).
Launch the IBM Spectrum Scale GUI from IBM Spectrum Control to deliver an even better integration of the two products.
It is hard to believe I was the "Technical Evangelist" for SAN Volume Controller when it launched in 2003. That was 13 years ago! Since then, a variety of products using the shared code base (IBM Spectrum Virtualize) have launched, including the IBM Storwize family and the IBM FlashSystem V9000 mentioned above. The new IBM Spectrum Virtualize Software V7.7 delivers the following improvements:
Reliability, availability, and serviceability with NPIV host port fabric virtualization. This is actually a pretty cool feature. NPIV stands for "N-port ID Virtualization". Every Spectrum Virtualize port has an N-port ID, and if one node fails, multi-pathing software must scramble to look up its partner node and redirect traffic to the ports of that other node. With NPIV, the partner node takes on the N-ports of both its own node and the failed node, handles all the traffic, and then gives the N-ports back to the other node when it is back up and running.
Distributed RAID (DRAID) support for encryption. IBM added support for distributed RAID-5 and RAID-6 in the previous release, but at the time did not include the built-in encryption feature for these new kinds of RAID ranks. Now it supports encryption.
Graphical User Interface (GUI) enhancements to manage your IP-based quorums. Previously, if you had a two-site configuration like Stretched Cluster or HyperSwap, best practices would require a third location as "tie breaker". Thus, people ran fiber optic cables from both sites to a third location, with a small disk system in a closet somewhere. The IP-based quorum is a little Java program you can run on any system; so long as both sites have LAN or WAN access, it serves the same role.
Graphical User Interface (GUI) enhancements to run the Comprestimator tool. The Comprestimator tool can run against existing volumes (vDisks) to identify estimated compression savings.
Flexibility with virtualization of iSCSI-attached and Fibre Channel-attached external storage arrays. Previously, only Fibre Channel (FCP and FCoE) back-end devices could be virtualized. Initially, this will support iSCSI virtualization of IBM Storwize and Dell EqualLogic.
Performance with 64 GB read cache. The software code was enhanced to take advantage of 64-bit memory addressing to support larger read cache.
Software licensing metrics to better align the value of SVC software with storage use cases through Differential Licensing, based on the Storage Capacity Unit (SCU). SVC licensing has base and compression licenses based on the back-end capacity (managed physical usable capacity), and then various features based on a subset of the front-end capacity (virtual volumes). The new Differential Licensing applies SCUs to the back-end licenses (base and compression). The front-end features continue to be TB-based.
IP Link compression to improve usage of IP networks for remote-copy data transmission
Differential Licensing based on Storage Capacity Unit (SCU)
Differential licensing is based on a new concept IBM calls the Storage Capacity Unit (SCU). Previously, software was licensed per Terabyte (TB), but that treated all TB the same, from Flash to Nearline disks. The new license method divides storage media into three categories:
1 SCU equals 1.00 TB of Flash and Solid-State Drives (SSDs), and any other storage not listed in the categories below.
1 SCU equals 1.18 TB of 10K and 15K rpm drives, such as Serial Attached SCSI (SAS) and Fibre Channel drives, as well as systems using "Category 3" (Nearline or SATA) drives with advanced architectures to deliver high-end storage performance, such as the IBM XIV Storage System, HP 3PAR or Infinidat.
1 SCU equals 4.00 TB of 7200 rpm Nearline SAS (NL-SAS) and Serial ATA (SATA) Drives
This new licensing is experimental. I would be interested in your feedback.
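To make the arithmetic concrete, here is a rough SCU calculator in Python. The category names and the final rounding are my own assumptions for illustration, not IBM's published licensing terms:

```python
# TB of managed capacity that one SCU covers, per the three categories above.
TB_PER_SCU = {
    "flash_ssd": 1.00,   # category 1: Flash/SSD and anything unlisted
    "10k_15k":   1.18,   # category 2: 10K/15K SAS and FC (plus XIV-style systems)
    "nearline":  4.00,   # category 3: 7200 rpm NL-SAS and SATA
}

def scus_required(capacity_by_category):
    """capacity_by_category: dict of category -> managed TB."""
    return sum(tb / TB_PER_SCU[cat]
               for cat, tb in capacity_by_category.items())

# e.g. 10 TB of flash, 118 TB of 15K SAS, 400 TB of NL-SAS:
total = scus_required({"flash_ssd": 10, "10k_15k": 118, "nearline": 400})
print(round(total))   # 10 + 100 + 100 = 210 SCUs
```

The intent is clear from the ratios: a TB of cheap Nearline capacity costs a quarter of the license of a TB of flash.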
Wrapping up my week's theme of "diversity", with posts on a diverse set of topics, today I will suggest ways to spend your time while you are walking 10,000 steps per day, as recommended by the authors of the book "You: On a Diet".
(If you thought this was about the 10,000 steps it might take to implement a storage solution, you should switch over to IBM as your storage vendor. For example, the DS3200 and DS3400 can be implemented in as little as SIX steps. That's pretty cool.)
Blogs like Lifehacker are an excellent resource for neat little tips and tricks to help you throughout your day, like how to use your iPod, cell phone or computer better, for example. These suggestions are based on the idea that you can walk your 10,000 steps with access to an iPod and cell phone.
Learning a language
... or refreshing yourself on a language you might not have spoken in a while. In addition to formal audio-based lessons from Pimsleur, there are podcasts you can get for various languages. In preparation for my upcoming trip to Japan and China, I have been listening to JapanesePod101.com and ChinesePod.com, which have quick lessons that complement the formal training. This Lifehacker post indicates there are similar ones for French, Spanish, Italian, and Brazilian Portuguese.
Practicing your presentation
Walking while practicing your 30-60 minute presentation would be good exercise. MicroPersuasion explains how to turn your iPod into the ultimate PowerPoint accessory, and this article in PlayListmag.com provides the steps to get a PowerPoint presentation onto your iPod. I did this, and the slides are found under Photos->Photo Library. The images are small, but heck, they are your charts and you should recognize them well enough to remind yourself what to say on each slide. Also, I am able to record my practice sessions using MP3 Recorder and listen as I page through each slide. (In theory, you can use your iPod to present your slides to your audience, plugging the iPod directly into the laptop projector, instead of a laptop, using cables available at your local Apple store, and use the iPod controls as your forward/backward remote.)
Working your To-Do list
You can download your to-do list to your iPod. I use BackPackIt from 37 Signals. You can sign up for a free account, or upgrade to a paid account, and have an amazingly simple browser-based tool to develop your to-do lists, one for each project or aspect of your life. Once done, the list can be emailed to you as plain text. Enable your iPod as an "external disk drive" and copy this text file to your NOTES directory on the iPod drive. Voila! You can now read your to-do list! (I could also send it to my cell phone, using firstname.lastname@example.org, but I find the iPod easier to read and navigate.)
Think of something to add? Send an email from your cell phone. With BackPackit, I can send an email that will directly add my text as a note or todo list item. On my phone, this is simply sending a text message to "500" with text like:
"email@example.com todo # buy bread".
The hash mark (#) separates the subject line from the body of the email, and this is how Backpackit knows whether it's a to-do item or a note. If you pre-program the huge email address in advance on your phone, then it isn't as bad as it looks. The item will be on your Backpackit page the next time you log in.
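The "#" convention is simple enough to mimic in a couple of lines of Python. This is my own sketch of the parsing idea, not Backpackit's actual code:

```python
# Split a '#'-delimited message into (subject, body), mirroring the
# convention described above: everything before the first '#' is the
# subject line, everything after it is the body of the email.

def split_message(text):
    subject, _, body = text.partition("#")
    return subject.strip(), body.strip()

print(split_message("todo # buy bread"))   # ('todo', 'buy bread')
```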
Well, that's three suggestions. The next time you complain that there is no time to walk, you now have no excuse.
On his The Storage Architect blog, Chris Evans wrote [Two for the Price of One]. He asks: why use RAID-1 compared to, say, a 14+2 RAID-6 configuration, which would be much cheaper in terms of disk cost? Perhaps without realizing it, he answers it with his post today [XIV part II]:
So, as a drive fails, all drives could be copying to all drives in an attempt to ensure the recreated lost mirrors are well distributed across the subsystem. If this is true, all drives would become busy for read/writes for the rebuild time, rather than rebuild overhead being isolated to just one RAID group.
Let me try to explain. (Note: This is an oversimplification of the actual algorithm in an effort to make it more accessible to most readers, based on written materials I have been provided as part of the acquisition.)
In a typical RAID environment, say 7+P RAID-5, you might have to read 7 drives to rebuild one drive, and in the case of a 14+2 RAID-6, read 15 drives to rebuild one drive. It turns out the performance bottleneck is the one drive being written, and today's systems can rebuild faster Fibre Channel (FC) drives at about 50-55 MB/sec, and slower ATA disk at around 40-42 MB/sec. At these rates, a 750GB SATA rebuild would take at least 5 hours.
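The arithmetic behind that 5-hour figure is worth making explicit. Here is the back-of-the-envelope calculation in Python, using the drive-write rates quoted above (the rates are this post's estimates, not measured benchmarks):

```python
# Rebuild time is gated by the single drive being written, so:
#   hours = capacity / write_rate

def rebuild_hours(capacity_gb, write_mb_per_sec):
    return capacity_gb * 1000 / write_mb_per_sec / 3600   # 1 GB = 1000 MB

print(round(rebuild_hours(750, 42), 1))   # 750GB SATA at 42 MB/s -> ~5.0 hours
print(round(rebuild_hours(750, 55), 1))   # even at FC speeds -> ~3.8 hours
```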
In the IBM XIV Nextra architecture, let's say we have 100 drives. We lose drive 13, and we need to re-replicate any at-risk 1MB objects. An object is at-risk if it is the last and only remaining copy on the system. A 750GB drive that is 90 percent full would have 700,000 or so at-risk object re-replications to manage. These can be sorted by drive. Drive 1 might have about 7,000 objects that need re-replication, drive 2 might have slightly more or slightly less, and so on, up to drive 100. The re-replication of objects on these other 99 drives goes through three waves.
Select 49 drives as "source volumes", and pair each randomly with a "destination volume". For example, drive 1 mapped to drive 87, drive 2 to drive 59, and so on. Initiate 49 tasks in parallel; each will re-replicate the blocks that need to be copied from the source volume to the destination volume.
That leaves 50 volumes. Select another 49 drives as "source volumes", and pair each with a "destination volume". For example, drive 87 mapped to drive 15, drive 59 to drive 42, and so on. Initiate 49 tasks in parallel; each will re-replicate the blocks that need to be copied from the source volume to the destination volume.
Only one drive left. We select the last volume as the source volume, pair it off with a random destination volume,and complete the process.
Each wave can take as little as 3-5 minutes. The actual algorithm is more complicated than this; as tasks complete early, the source and destination drives become available for re-assignment to another task, but you get the idea. XIV has demonstrated that the entire process, identifying all at-risk objects, sorting them by drive location, randomly selecting drive pairs, and then performing most of these tasks in parallel, can be done in 15-20 minutes. Over 40 customers have been using this architecture over the past 2 years, and by now all have probably experienced at least a drive failure to validate this methodology.
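The three waves above can be sketched in Python. This is my own illustration of the oversimplified description, not XIV's actual algorithm (which, as noted, re-assigns drives as tasks finish rather than working in strict waves):

```python
import random

def rereplication_waves(source_drives, all_drives, tasks_per_wave=49):
    """Return waves of (source, destination) pairs covering every source drive."""
    pending = list(source_drives)
    random.shuffle(pending)
    waves = []
    while pending:
        # each wave runs up to tasks_per_wave copy tasks in parallel
        batch = [pending.pop() for _ in range(min(tasks_per_wave, len(pending)))]
        wave = []
        for src in batch:
            # pick a random destination drive other than the source itself
            dst = random.choice([d for d in all_drives if d != src])
            wave.append((src, dst))
        waves.append(wave)
    return waves

drives = [d for d in range(1, 101) if d != 13]   # drive 13 failed, 99 survive
waves = rereplication_waves(drives, drives)
print([len(w) for w in waves])   # [49, 49, 1] -- the three waves described above
```

Because every wave copies from ~49 drives onto ~49 other drives at once, the aggregate rebuild bandwidth scales with the number of drives instead of being capped by a single spare.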
In the unlikely event that a second drive fails during this short time, only one of the 99 tasks fails. The other 98 tasks continue to help protect the data. By comparison, in a RAID-5 rebuild, no data is protected until all the blocks are copied.
As for requiring spare capacity on each drive to handle this case, the best disks in production environments are typically only 85-90 percent full, leaving plenty of spare capacity to handle the re-replication process. On average, Linux, UNIX and Windows systems tend to fill disks only 30 to 50 percent full, so the fear that there is not enough spare capacity should not be an issue.
The difference in cost between RAID-1 and RAID-5 becomes minimal as hardware gets cheaper and cheaper. For every $1 you spend on storage hardware, you spend $5-$8 managing the environment. As hardware gets cheaper still, it might even be worth making three copies of every 1MB object; the parallel process to perform re-replications would be the same. This could be done using policy-based management: some data gets triple-copied, and other data gets only double-copied, based on whether the user selected "premium" or "basic" service.
The beauty of this approach is that it works with 100 drives, 1000 drives, or even a million drives. Parallel processing is how supercomputers are able to perform feats of amazing mathematical computation so quickly, and how Web 2.0 services like Google and Yahoo can perform web searches so quickly. Spreading the re-replication process across many drives in parallel, rather than performing it serially onto a single drive, is just one of the many unique features of this new architecture.
This week I'm in Los Angeles for the Systems Technology Conference (STC '08). We have over 1900 IT professionals attending, of which 1200 are IBMers from the North America, Latin America, and Asia Pacific regions, and another 350 are IBM Business Partners. The rest, including me, are worldwide or from other areas.
Last January, IBM reorganized its team to be more client-focused. Instead of being focused on products, we are now client-centric, and have teams to cover our large enterprise systems through the direct sales force, business systems for sales through our channel business partners, and industry systems for specific areas like deep computing, digital surveillance and retail systems solutions.
In addition to 788 sessions to attend these next four days, we had a few main tent sessions. My third-line manager (my boss' boss' boss) David Gelardi presented Enterprise Systems. This is the group I am in.
Akemi Watanabe presented for Business Systems. Her native language is Japanese, so to deliver an entire talk in English was quite impressive. Her focus is on SMB accounts, those customers with fewer than 1000 employees that are looking for easy-to-use solutions. She mentioned IBM's new [Blue Business Platform] which includes Lotus Foundation Start, an Application Integration Toolkit, and the Global Application Marketplace.
Part of this process is the merger of System p and System i into "POWER" systems, and then offering both midrange and enterprise versions of these that run AIX, i5/OS and Linux on POWER. It turns out that only 9 percent of our System i customers are on this platform exclusively. Another 87 percent have Windows, so it makes sense to offer i5/OS on BladeCenter, to consolidate Windows servers from HP, Dell or Sun over to IBM.
Meanwhile, IBM's strategy to support Linux has proven successful. 25 percent of x86 servers now run Linux. IBM has 600 full-time developers for Linux, over 500 of whom contributed to the latest 2.6 kernel development. Our ["chiphopper"] program has successfully ported over 900 applications. There are now over 6500 applications that run on Linux, on our strategic alliances with the Red Hat (RHEL) and Novell (SUSE) distributions of Linux.
Her recommendation to SMB reps: learn POWER systems, BladeCenter, and Linux. I agree!
Mary Coucher presented Industry Systems. In addition to the game chips for the Sony Playstation, Nintendo Wii, and Microsoft Xbox 360, this segment focuses on Digital Video Surveillance (DVS), Retail Solutions, Healthcare and Life Sciences (HCLS), OEM and embedded solutions, and Deep Computing. She mentioned our recently announced iDataPlex solution.
IBM is focused on "real-world-aware" applications, which include traffic, crime, surveillance, fraud, and RFID enablement. These are streams of data that happen in real time, and need to be dealt with now, not later.
Most people know that IBM has the majority of the top 500 supercomputers, but few realize that IBM has also delivered solutions to the top 100 green companies. IBM's success is explained in more detail in this [Press Release].
The group split up into four different platform meetings: Storage, Modular, Power, and Mainframe. Barry Rudolph presented for the Storage platform. He talked about the explosion in information, business opportunities, risk and cost management. IBM has shifted from being product-focused, to the stack of servers and storage, to our latest focus on solutions across the infrastructure. He mentioned our DARPA win for [PERCS], which stands for productive, easy-to-use, reliable computing system.
My session was the first in the morning, at 8:30am, but managed to pack the room full of people. A few looked like they had just rolled in from Brocade's special get-together at Casey's Irish Pub the night before. I presented how IBM's storage strategy for the information infrastructure fits into the greater corporate-wide themes. To liven things up, I gave out copies of my book [Inside System Storage: Volume I] to those who asked or answered the toughest questions.
Data Deduplication and IBM Tivoli Storage Manager (TSM)
IBM's Toby Marek compared and contrasted the various data deduplication technologies and products available, and how to deploy them as the repository for TSM workloads. She is a software engineer for our TSM software product, and gave a fair comparison between IBM System Storage N series Advanced Single Instance Storage (A-SIS), IBM Diligent, and other solutions in the marketplace. If you are going to combine technologies, it is best to dedupe first, then compress, and finally encrypt the data. She also explained the many clever ways that TSM performs data reduction at the client side, which greatly reduces the bandwidth traffic over the LAN, as well as reducing disk and tape resources for storage. These include progressive "incremental forever" backup for file selection, incremental backups for databases, and adaptive sub-file backup. Because of these data reduction techniques, you may not get as much benefit as deduplication vendors claim.
The Business Value of Energy Efficiency Data Centers
Scott Barielle did a great job presenting the issues related to the Green IT data center. He is part of IBM's "STG Lab Services" team that does energy efficiency studies for customers. It is not unusual for his team to find potential savings of up to 80 percent of the watts consumed in a client's data center.
IBM has done a lot to make its products more energy efficient. For example, in the United States, most data centers are supplied three-phase 480V AC current, but this is often stepped down to 208V or 110V with power distribution units (PDUs). IBM's equipment allows for direct connection to this 480V, eliminating the step-down loss. This is available for the IBM System z mainframe, the IBM System Storage DS8000 disk system, and larger full-frame models of our POWER-based servers, and will probably be rolled out to some of our other offerings later this year. The end result saves 8 to 14 percent in energy costs.
Scott had some interesting statistics. Typical US data centers spend only about 9 percent of their IT budget on power and cooling costs. The majority of clients that engage IBM for an energy efficiency study are not trying to reduce their operational expenditures (OPEX); rather, they have run out, or are close to running out, of the total kW rating of their current facility, and have been turned down by their upper management to spend the average $20 million USD needed to build a new one. The cost of electricity in the USA has risen very slowly over the past 35 years, and is tied more to fluctuations in natural gas prices than to oil prices. (A recent article in the Dallas News confirmed this: ["As electricity rates go up, natural gas' high prices, deregulation blamed"])
Cognos v8 - Delivering Operational Business Intelligence (BI) on Mainframe
Mike Biere, author of the book [Business Intelligence for the Enterprise], presented Cognos v8 and how it is being deployed for the IBM System z mainframe. Typically, customers do their BI processing on distributed systems, but 70 percent of the world's business data is on mainframes, so it makes sense to do your BI there as well. Cognos v8 runs on Linux for System z, connecting to z/OS via [Hypersockets].
There are a variety of other BI applications on the mainframe already, including DataQuant, AlphaBlox, IBI WebFocus and SAS Enterprise Business Intelligence. In addition to accessing traditional online transaction processing (OLTP) repositories like DB2, IMS and VSAM, using the [IBM WebSphere Classic Federation Server], Cognos v8 can also read Lotus databases.
Business Intelligence is traditionally query, reporting and online analytical processing (OLAP) for the top 10 to 15 percent of the company, mostly executives and analysts, for activities like business planning, budgeting and forecasting. Cognos PowerPlay stores numerical data in an [OLAP cube] for faster processing. OLAP cubes are typically constructed with a batch cycle, using either "Extract, Transform, Load" [ETL] or "Change Data Capture" [CDC], which plays to the strength of IBM System z mainframe batch processing capabilities. If you are not familiar with OLAP, Nigel Pendse has an article [What is OLAP?] for background information.
Over the past five years, BI has increasingly been deployed for the rest of the company: knowledge workers tasked with doing day-to-day operations. This phenomenon is being called "Operational" Business Intelligence.
IBM's Glen Corneau, who is on the Advanced Technical Support team for AIX and System p, presented the IBM General Parallel File System (GPFS), which is available for AIX, Linux-x86 and Linux on POWER. Unfortunately, many of the questions were related to Scale Out File Services (SOFS), which my colleague Glenn Hechler was presenting in another room during this same time slot.
GPFS is now in its 11th release since its introduction in 1997. All of the IBM supercomputers on the [Top 500 list] use GPFS. The largest deployment of GPFS is 2241 nodes. A GPFS environment can support up to 256 file systems, and each file system can have up to 2 billion files across 2 PB of storage. GPFS supports "Direct I/O", making it a great candidate for Oracle RAC deployments. Oracle 10g automatically detects if it is using GPFS, and sets the appropriate DIO bits in the stream to take advantage of GPFS features.
Glen also covered the many new features of GPFS, such as the ability to place data on different tiers of storage, with policies to move files to lower tiers of storage, or delete them after a certain time period, all concepts we call Information Lifecycle Management. GPFS also supports access across multiple locations and offers a variety of choices for disaster recovery (DR) data replication.
Perhaps the only problem with conferences like this is that it can be an overwhelming ["fire hose"] of information!
Continuing this week in Los Angeles, I went to some interesting sessions today at the Systems Technical Conference (STC08).
System Storage Productivity Center (SSPC) - Install and Configuration
Dominic Pruitt, an IBM IT specialist on our Advanced Technical Support team, presented SSPC and how to install and configure it. For those confused about the difference between TotalStorage Productivity Center and System Storage Productivity Center: the former is pure software that you install on a Windows or Linux server, and the latter is an IBM server pre-installed with Windows 2003, TotalStorage Productivity Center software, the TPCTOOL command line interface, DB2 Universal Database, the DS8000 Element Manager, the SVC GUI and CIMOM, and the [PuTTY] rlogin/SSH/Telnet terminal application software.
Of course, the problem with having a server pre-installed with a lot of software is that there is always someone who wants to customize it further. For those who just want to manage their DS8000 disk systems, for example, it is possible to uninstall the SVC GUI, CIMOM and PuTTY, and re-install them later if you change your mind. As a general rule, it is not wise to mix CIMOMs on the same machine, as it might cause conflicts with TCP ports or Java level requirements, so if you want a different CIMOM than SVC's, uninstall the SVC CIMOM first. For those who have SVC, the SSPC replaces the SVC Master Console, so you can safely turn off the SVC CIMOM on your existing SVC Master Consoles.
The base level is TotalStorage Productivity Center "Basic Edition", but you can upgrade to the Productivity Center for Disk, Data and Fabric components with license keys. You can also run Productivity Center for Replication, but IBM recommends adding processor and memory to do this (IBM offers this as an orderable option). Whether you have the TotalStorage software or SSPC hardware, Productivity Center has a cool role-to-groups mapping feature. You can create user groups, either on the Windows server, the Active Directory, or another LDAP, and then map which roles should be assigned to users in each group.
Since Productivity Center manages a variety of different disk systems, it has made an attempt to standardize some terminology. The term "storage pool" refers to an extent pool on the DS8000, or a managed disk group on the SAN Volume Controller. Since the DS8000 can support both mainframe CKD volumes and LUNs for distributed systems, the term "volume" refers to a CKD volume or LUN, and "disk" refers to the hard disk drive (HDD).
To help people learn Productivity Center, IBM offers single-day "remote workshops" that use Windows Remote Desktop to allow participants to install, customize and use the software with no travel required.
IBM Integrated Approach to Archiving
Dan Marshall, IBM global program manager for storage and data services on our Global Technology Services team, presented IBM's corporate-wide integration to support archive across systems, software and services. One attendee asked me why I was there, given that "archive" is one of my areas of subject matter expertise that I present often at the Tucson Executive Briefing Center. I find it useful to watch others present the material, even material that I helped to develop, to see a different slant or spin on each talking point.
Archive is one area that brings all parts of IBM together: systems, software and services. Dan provided a look at archive from the services angle, providing an objective, unbiased view of the different software and systems available to solve specific challenges.
Encryption Key Manager (EKM) Design and Implementation
Jeff Ziehm, IBM tape technical sales specialist, presented IBM's EKM software, how it works in a tape environment, and how to deploy it in various environments. Since IBM is all about being open and non-proprietary, the EKM software runs on Java on a variety of IBM and non-IBM operating systems. IBM offers the "keytool" command line interface (CLI) for the LTO4 and TS1120 tape systems, and the "iKeyMan" graphical user interface (GUI) for the TS1120. Since it runs on Java, IBM Business Partners and technical support personnel often just [download and install EKM] onto their own laptops to learn how to use it.
Virtual Tape Update
We had three presenters at this one. First, Jeff Mulliken, formerly from Diligent and now a full IBM employee, presented the current ProtecTier software with the HyperFactor technology. Then Abbe Woodcock, IBM tape systems, compared Diligent with IBM's TS7520 and the just-announced TS7530 virtual tape libraries. Finally, Randy Fleenor, IBM tape sales leader, presented IBM's strategy going forward in tape virtualization.
Let's start with Diligent. The ProtecTier software runs on any x86-64 server with at least four cores and the correct Emulex host bus adapter (HBA) cards. Using Red Hat Enterprise Linux (RHEL) as a base, the ProtecTier software performs its deduplication entirely in-line at an "ingest rate" of 400-450 MB/sec. This is all possible using a 4GB memory-resident "dictionary table" that can map up to 1 PB of back-end physical storage, which could represent as much as 25 PB of "nominal" storage. The server is then point-to-point or SAN-attached to Fibre Channel disk systems.
As we learned yesterday from Toby Marek's session, there are four ways to perform deduplication:
full-file comparisons. Store only one copy of identical files.
fixed-chunk comparisons. Files are carved up into fixed-size chunks, and each chunk is compared or hashed against existing chunks to eliminate duplicates.
variable-chunk comparisons. Variable-length chunks are hashed or diffed to eliminate duplicate data.
content-aware comparisons. If you knew data was in PowerPoint format, for example, you could compare text, photos or charts against other existing PowerPoint files to eliminate duplicates.
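The difference between the fixed-chunk and variable-chunk approaches is easiest to see in code. Here is a toy illustration in Python; this is greatly simplified (real products use rolling hashes such as Rabin fingerprints, not a single-byte test), but it shows why content-defined boundaries survive insertions while fixed boundaries do not:

```python
def fixed_chunks(data, size=8):
    """Carve data into fixed-size chunks at fixed byte offsets."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def variable_chunks(data, mask=0x0F):
    """Cut wherever the content itself hits a boundary condition, so chunk
    boundaries follow the data rather than fixed offsets."""
    chunks, start = [], 0
    for i, byte in enumerate(data):
        if byte & mask == 0:               # content-defined boundary
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

data = bytes(range(1, 40))
shifted = b"\x01" + data   # insert one byte at the front

# Every fixed chunk after the insertion shifts, so none match...
print(fixed_chunks(data)[1] == fixed_chunks(shifted)[1])          # False
# ...but the content-defined chunks after the insertion point still match:
print(variable_chunks(data)[-1] == variable_chunks(shifted)[-1])  # True
```

That resilience to shifted data is why variable-chunk (and content-aware) schemes generally find more duplicates in backup streams than fixed-chunk schemes.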
IBM System Storage N series Advanced Single Instance Storage (A-SIS) uses the fixed-chunk method, and Diligent uses variable-chunk comparisons. Diligent does this using "data profiling". For example, let's say most of my photographs are pictures of people, buildings, landscapes, flowers and IT equipment. When I back these up, the Diligent server "profiles" each, and determines if any existing data has a similar profile that might have at least 50 percent similar content. Diligent then reads in the data that is most likely similar, does a byte-for-byte ["diff" comparison], and creates variable-length chunks that are either identical or unique to sections of the existing data. The unique data is compressed with LZH and written to disk, and the sequential series of pointer segments representing the ingested file is written in a separate section on disk.
That Diligent can represent profiles for 1 PB of data in as little as a 4GB memory-resident dictionary is incredible. By comparison, 10 TB of data would require 10 million entries in a content-aware solution, and 1.25 billion entries in one based on hash codes.
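For what it's worth, those entry counts fall out of simple division. The 1MB object size and 8KB chunk size below are my own assumed unit sizes that make the quoted numbers work, not figures published by the vendors:

```python
# Index entries = data size / unit tracked per entry (decimal units for simplicity)
TB, MB, KB = 10**12, 10**6, 10**3

total = 10 * TB
print(total // (1 * MB))   # 10,000,000 entries at ~1MB per content-aware object
print(total // (8 * KB))   # 1,250,000,000 entries at ~8KB per hashed chunk
```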
Abbe Woodcock presented the TS7530 tape system that IBM announced on Tuesday. It has some advantages over the current Diligent offering:
Hardware-based compression (TS7520 and Diligent use software-based compression)
1200 MB/sec (faster ingest rate than Diligent)
1.7PB of SATA disk (more disk capacity than Diligent)
Support for i5/OS (Diligent's emulation of ATL P3000 with DLT7000 tapes not supported on IBM's POWER systems running i5/OS)
Ability to attach a real tape library
NDMP backup to tape
tape "shredding" (virtual equivalent of degaussing a physical tape to erase all previously stored data)
Randy Fleenor wrapped up the session by telling us IBM's strategy going forward with all of the virtual tape systems technologies. Until then, IBM is working on "recipes" or "bundles", putting Diligent software with specific models of IBM System x servers and IBM System Storage DS4000 disk systems to avoid the "do-it-yourself" problems of its current software-only packaging.
Understanding Web 2.0 and Digital Archive Workloads
I got to present this in the last time slot of the day, just before everyone headed off to the [Westin Bonaventure hotel] for our big fancy barbecue dinner. Like my previous session on IBM strategy, this session was more oriented toward a sales audience, but both garnered a huge turn-out and were well-received by the technical attendees.
This session was requested because these new applications and workloads are what is driving IBM to acquire small start-ups like XIV, deploy Scale-Out File Services (SOFS), and develop the innovative iDataPlex server rack.
The session was fun because it was a mix of explanation of the characteristics of Web 2.0 services; my own experience as a blogger and user of Google Docs, Flickr, Second Life and TiVo; and an exploration of how databases and digital archives will impact the growth in computing and storage requirements.
I'll expand on some of these topics in later blog posts.
I'm glad this is the final day of the IBM Systems Technical Conference (STC08) here in Los Angeles. While I enjoyed the conference, one quickly reaches the saturation point with all the information presented.
XIV Architecture Overview
Before this conference, many of the attendees didn't understand IBM's strategy, didn't understand Web 2.0 and digital archive workloads, and didn't understand why IBM acquired XIV to offer "yet another disk system that serves LUNs to distributed server platforms." Brian Sherman changed all that!
Brian Sherman, IBM Advanced Technical Support (ATS), is part of the exclusive dedicated XIV technical team that installs these boxes at client locations, so he is very knowledgeable about the technical aspects of the architecture. He presented the current XIV-branded model that clients can purchase now in select countries, and what will change with the IBM-branded model when it becomes available worldwide.
Those who missed my earlier series on XIV can find them here:
Beyond this, Brian gave additional information on how thin provisioning, storage pools, disk mirroring, consistency groups, management consoles, and microcode updates are implemented.
N series and VMware Deep Dive
Norm Bogard, IBM Advanced Technical Support, presented why the IBM N series makes such great disk storage for VMware deployments. This was clearly labeled as a "deep dive", so anyone who got lost in all of the acronyms could not blame Norm for misrepresentation.
IBM has been doing server virtualization for over 40 years, so it makes sense that it happens to be the number one reseller of VMware offerings. VMware ESX server is a hypervisor that runs on an x86 host, and provides an emulation layer for "guest operating systems". Each guest can have one or more virtual disks, which are represented by VMware as VMDK files. VMware ESX server accepts read/write requests from the guests, and forwards them on to physical storage. Many of VMware's most exciting features require storage to be external to the host machine. [VMotion] allows guests to move from one host to another, [Distributed Resource Scheduler (DRS)] allows a set of hosts to load-balance the guests across the hosts, and [High Availability (HA)] allows the guests on a failed host to be resurrected on a surviving host. All of these require external disk storage.
ESX server allows up to 256 LUNs, attached via FCP and/or iSCSI, and up to 32 NFS mount points. Across LUNs, ESX server uses the VMFS file system, which is a clustered file system, like IBM GPFS, that allows multiple hosts to access the same LUNs. ESX server has its own built-in native multipathing driver, and even provides FCP-to-iSCSI multipathing. In other words, you can have a LUN on an IBM System Storage N series that is attached over both FCP and iSCSI, so if the SAN switch or HBA fails, ESX server can fail over to the iSCSI connection.
ESX server can use the NFS protocol to access the VMDK files instead. While the default is only 8 NFS mount points, you can increase this to 32 mount points. NAS can take advantage of Link Aggregation Control Protocol [LACP] groups, what some call "trunking" or "EtherChannel". This is the ability to consolidate multiple streams onto fewer inter-switch Ethernet links, similar to what happens on SAN switches. For the IBM N series, IBM recommends a "fixed" path policy, rather than "most recently used".
IBM recommends disabling Snapshot schedules and setting the Snap reserve to 0 percent. Why? A Snapshot of an ESX server datastore contains the VMDK files of many guests, all of which would have had to quiesce or stop for a Snapshot of the datastore to be "crash consistent" and make any sense. So, if you want to take Snapshots, it should be something you coordinate with the ESX server and its guest OS images, not something scheduled by the N series itself.
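On a Data ONTAP (7-mode) filer, both recommendations are one-liners. A sketch, assuming a datastore volume named vol_esx (a hypothetical name for illustration):

```
# Disable the weekly/nightly/hourly Snapshot schedule on the datastore volume
snap sched vol_esx 0 0 0

# Set the Snapshot reserve to 0 percent
snap reserve vol_esx 0
```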
If you are running the NFS protocol to the N series, you can turn off the "access time" updates. In a normal file system, reading a file updates its "access time" in the file directory. This can be useful if you are looking for files that haven't been read in a while, such as software that migrates infrequently accessed files to tape. Assuming you are not doing that on your N series, you might as well turn off this feature and reduce unnecessary write activity to the IBM N series box.
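Access-time updates are a volume option on Data ONTAP. A minimal sketch, again assuming a hypothetical volume named vol_esx:

```
# Stop updating file access times on reads for this NFS datastore volume
vol options vol_esx no_atime_update on
```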
ESX server can also support "thin provisioning" on the IBM N series. There is a checkbox for "space reserved": checked means "thick provisioning" and unchecked means "thin provisioning". If you decide to use thin provisioning with VMware, you should consider setting AutoSize to automatically grow your datastore when needed, and auto-delete to remove your oldest Snapshots first.
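A hedged sketch of those AutoSize and auto-delete settings in Data ONTAP 7-mode, assuming the same hypothetical volume vol_esx; the sizes here are illustrative only:

```
# Grow the volume automatically in 20g increments, up to a 600g ceiling
vol autosize vol_esx -m 600g -i 20g on

# When space runs low, delete Snapshots automatically, oldest first
snap autodelete vol_esx on
snap autodelete vol_esx delete_order oldest_first
```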
The key advantage of using NFS rather than FCP or iSCSI is that it eliminates the use of the VMFS file system. The IBM N series has the WAFL file system instead, so you don't have to worry about the VMFS partition alignment issue. Most VMDK files are misaligned, so performance is sub-optimal. If you align each VMDK to a 32KB or 64KB boundary (depending on the guest OS), you get better performance. WAFL does this for you automatically, but VMFS does not. For Windows guests, use "Windows PE" to configure correctly-aligned disks. For UNIX or Linux guests, use the "fdisk" utility.
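For a Linux guest, the usual fdisk procedure is to start the partition's data at sector 128, since 128 sectors of 512 bytes each lands exactly on a 64KB boundary. A sketch of the interactive key sequence (the device name /dev/sdb is a placeholder; this repartitions the disk, so only use it on a new, empty virtual disk):

```
# Inside the guest, on a new (empty!) virtual disk:
fdisk /dev/sdb
#   n   -> create a new primary partition spanning the disk
#   x   -> enter expert mode
#   b   -> set the beginning of data for partition 1
#   128 -> sector 128 x 512 bytes = 64KB boundary
#   w   -> write the partition table and exit
```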
What Industry Analysts are saying about IBM
Vic Peltz gave a presentation highlighting the accolades from securities analysts, IT analysts, and news agencies about IBM and IBM storage products. For example, analysts like that IBM offers many of the exciting new technologies their clients are demanding, like "thin provisioning", RAID-6 double-drive protection, SATA, and Solid State Disk (SSD) drive technology. Analysts also like that IBM is open to non-IBM heterogeneous environments. Whereas EMC Celerra gateways support only EMC disk, IBM N series gateways and IBM SAN Volume Controller support a mix of IBM and non-IBM equipment.
Analysts also like IBM's "datacenter-wide" approach to issues like security and "Green IT". Rather than focusing on these issues with individual point solutions, IBM attacks these challenges with a complete "end-to-end" solution approach. A typical 25,000 square foot data center consumes $2.6 million USD in power and cooling today, and IBM has proven technologies to cut this cost in half. IBM's DS8000 on average consumes 26.5 to 27.8 percent less electricity than a comparable EMC DMX-4 disk system. IBM's tape systems consume less energy than comparable Sun or HP models.
IBM iDataPlex product technical presentation
Vallard Benincosa, IBM Technical Sales Specialist, presented the recently-announced [IBM System x iDataPlex]. This is designed for clients that have thousands of x86 servers and buy them "racks at a time" to support Web 2.0 and digital archive workloads. The iDataPlex is designed for efficient power and cooling, rapid scalability, and usable server density.
iDataPlex is such a radical design departure that it might be difficult to describe in words. Most racks take up two floor tiles, each tile a 2 foot by 2 foot square. In that space, a traditional rack has servers 19 inches wide that slide in horizontally, with flashing lights and hot-swappable disks in the front, and all the power supplies, fans, and networking connections in the back. Even with IBM BladeCenter, you have chassis in these racks, with servers that slide in vertically in the front, and all of the power supply, fan, and networking connections in the back. To access these racks, you have to be able to open the door on both the front and back. And the cooling air has to travel at least 26.5 inches from the front of the equipment to the back.
iDataPlex turns the rack sideways. Instead of two feet wide, and four feet deep, it is four feet wide, and two feet deep.This gives you two 19 inch columns to slide equipment into, and the air only has to travel 15 inches from frontto back. Less distance makes cooling more efficient.
Next, iDataPlex makes the power cord the only thing in the back, controlled by an intelligent power distribution unit (iPDU), so you can turn the power off without having to physically pull the plug. Everything else is serviced from the front. This means that the back door can now be an optional "Rear Door Heat Exchanger" [RDHX], filled with running water, to make cooling the rack extremely efficient. Water from a cooling distribution unit (CDU) can supply about three to four RDHX doors.
Let's say you wanted to compare traditional racks with iDataPlex for 84 servers. You can put 42 "1U" servers in each of two racks; each rack requires 10 kVA (kilovolt-amperes), so you give each rack two 8.6 kVA feeds, that is four feeds total, and at $1500-2000 USD per feed per month, those will cost you $6000-8000. With iDataPlex you can fit all 84 servers in one 20 kVA rack with only three 8.6 kVA feeds, saving you $1500-2000 USD per month.
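The feed arithmetic above is easy to check. A quick sketch using the numbers from the talk (the per-feed price is the quoted $1500-2000 range):

```shell
FEED_LOW=1500; FEED_HIGH=2000   # USD per 8.6 kVA feed per month
TRAD_FEEDS=4                    # two traditional racks, two feeds each
IDPX_FEEDS=3                    # one iDataPlex rack

echo "Traditional: \$$((TRAD_FEEDS*FEED_LOW))-\$$((TRAD_FEEDS*FEED_HIGH)) per month"
echo "Savings: \$$(((TRAD_FEEDS-IDPX_FEEDS)*FEED_LOW))-\$$(((TRAD_FEEDS-IDPX_FEEDS)*FEED_HIGH)) per month"
```

This prints the $6000-8000 traditional cost and the $1500-2000 monthly savings from dropping one feed.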
Fans are also improved. Fan efficiency is related to diameter, so the small fans in 1U servers aren't as effective as iDataPlex's 2U fans, saving about 12-49W per server. Whereas typical 1U server racks spend 10-20 percent of their energy on fans, the iDataPlex spends only about 1 percent, saving 8 to 36 kWh per year per rack.
Each 2U chassis snaps into a single power supply and a bank of 2U fans. A "Y" power cord allows you to have one cord for two power supplies. A chassis can hold either two small server "flexnodes" or one big "flexnode". An iDataPlex rack can hold up to 84 small servers or 42 big servers. Since each "Y" cord can power up to four "flexnode" servers, you greatly reduce the number of PDU sockets taken, leaving some sockets available for traditional 1U switches.
The small "flexnode" server can have one 3.5 inch HDD or two 2.5 inch HDDs, either SAS or SATA, and the big "flexnode" can have twice that. If you need more storage, there is a 2U chassis that holds five 3.5 inch HDDs or eight 2.5 inch HDDs. These are all "simple-swappable" (servers must be powered down to pull out the drives). For hot-swappable drives, there is a 3U chassis with twelve 3.5 inch SAS or SATA drives.
The small "flexnode" server has one [PCI Express] slot; the big servers have two. These could be used for [Myrinet] clustering. With only 25W of power, the PCI Express slots cannot support graphics cards.
The iDataPlex is managed using the "Extreme Cluster Administration Toolkit" [XCAT]. This is an open source project under Eclipse that IBM contributes to.
Finally, there was the concept of "pitch", the distance from the center of one "cold aisle" to the next. In typical data centers, a pitch is 9 to 11 tiles. With the iDataPlex it is only three tiles when using the RDHX doors, or six tiles without. Most data centers run out of power and cooling before they run out of floor space, so denser equipment doesn't help if it doesn't also use less electricity. Since the iDataPlex uses 40 percent less power and cooling, you can pack more racks per square foot of an existing data center floor with the existing power and cooling available. That is what IBM calls "usable density"!
What Did You Say? Effective Questioning and Listening Techniques
Maria L. Anderson, IBM Human Resources Learning, gave this "professional development" talk. I deal with different clients every week, so I fully understand that there is a mix of art and science in crafting the right questions and listening to the responses. The focus was on how to ask better questions and improve understanding and communication during consultative engagements. This involves the appropriate mix of closed and open-ended questions, exchanging or prefacing as needed. This was a good overview of the ERIC technique (Explore, Refine, Influence, and Confirm).
Well, that wraps up my week here in Los Angeles. Special thanks to my two colleagues, Jack Arnold and Glenn Hechler, both from the Tucson Executive Briefing Center, who helped me prepare and review my presentations!