Registration is now open for our next "Meet the Storage Experts" event in Second Life. All IBMers, clients and IBM Business Partners are welcome to attend. We will focus this time on DS3000 and N series disk systems, tape systems, and IBM storage networking gear.
Inside System Storage -- by Tony PearsonTony Pearson is a Master Inventor and Senior IT Specialist for the IBM System Storage product line at the IBM Executive Briefing Center in Tucson Arizona, and featured contributor to IBM's developerWorks. In 2011, Tony celebrated his 25th year anniversary with IBM Storage on the same day as the IBM's Centennial. He is author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services. You can also follow him on Twitter @az990tony.
(Short URL for this blog: ibm.co/Pearson )
I have arrived safely in Las Vegas for the IBM System Storage and Storage Networking Symposium. This event is held once every year. The gold sponsors were: Brocade, Cisco, Finisar, Servergraph, and VMware. Our silver sponsor was Qlogic.
Barry Rudolph was the keynote speaker with "Storage for the Green Data Center", similar to his presentation for Storage Networking World in April, but with new and improved slides.
I myself had a busy day. Here's a quick recap:
The last session I attended was "Storage ... to Optimize your ECM Deployments" by Jerry Bower, now working for IBM as part of our recent acquisition of the Filenet company. ECM stands for Enterprise Content Management, and IBM is the market leader in this space. Jerry gave a great overview of the IBM Content Manager software suite, our newly acquired Filenet portfolio, and the storage supported.
After the sessions was a reception at the Solution Center with dozens of exhibitor booths. For example, Optica Technologies had their PRIZM products, which are able to connect FICON servers to ESCON storage devices.
technorati tags: IBM, storage, networking, symposium, Brocade, Cisco, Finisar, Servergraph, VMware, Qlogic, Barry Rudolph, green, datacenter, strategy, ILM, ITIL, SNIA, SMI-S, offering, disk, tape, software, SAN Volume Controller, SVC, David Snyder, Mark Prybylski, Jerry Bower, Filenet, ECM, Optica, FICON, ESCON
I am back at "the Office" for a single day today. This happens often enough that I need a name for it. Air Force pilots who practice landings and take-offs call them "Touch and Go", but I think I need something better. If you can think of a better phrase, let me know.
This week, I was in Hartford, CT, Somers, NY and our Corporate Headquarters in Armonk, in a variety of meetings, some with editors of magazines, others with IBMers I have only spoken to over the phone and finally got a chance to meet face to face.
I got back to Tucson last night, had meetings this morning in Second Life, then presented "Information Lifecycle Management" in Spanish to a group of customers from Mexico, Chile, and Brazil. We have a great Tucson Executive Briefing Center, and plenty of foreign-language speakers to draw from our local employees here at the lab site.
Sunday, I leave for Las Vegas for our upcoming IBM Storage and Storage Networking Symposium. We will cover the latest in our disk, tape, storage networking and related software. Do you have your tickets? If you plan to attend, and want to meet up with me, let me know.
Last week, a writer for a magazine contacted us at IBM to confirm a quote that writing a Terabyte (TB) on disk saves 50,000 trees. I explained that this was cited from UC Berkeley's famous How Much Information? 2003 study.
I thought of this today as I read Jefferson Graham's article "How many trees did your iPhone bill kill?" in the USA Today newspaper. Apparently, new Apple iPhone users were sent AT&T billing statements that detailed their every phone call, text message or internet access. Here's a video on YouTube from Justine Ezarik that shows the absurdity of a 300-page monthly phone bill:
To be fair, the USA Today article explains that AT&T also offers "summary billing" as well as "on-line billing", but apparently neither of these is the default choice. I can understand that phone companies send out bills on paper because not everyone who has a phone has internet access, but in the case of its iPhone customers, internet access is in the palm of your hands! Since all iPhone customers have internet access, and AT&T knows which customers are using an iPhone, it would make sense for either on-line billing or summary billing to be the default choice, and let only those that hate trees explicitly request the full billing option.
Sending a box of 300 pages of printed paper is expensive, both for the sender and the recipient. This information could have been shipped less expensively on computer media, a single floppy diskette or CD-ROM for example. For those who prefer getting this level of detail, a searchable digitized version might be more useful to the consumer.
Which brings me to the concept of Information Lifecycle Management (ILM). You can read my recent posts on ILM by clicking the Lifecycle tab on the right panel, or my now infamous post from last year about ILM for my iPod.
His recollection of the history and evolution of ILM fairly matches mine:
While the SNIA definition provides a vendor-independent platform to start the conversation, it can be intimidating to some, and is difficult to memorize word for word. When I am briefing clients, especially high-level executives, they often ask for ILM to be explained in simpler terms. My simplified version is:
So ILM is not just a good idea to save a company money, it can keep them out of the court room, as well as help save the environment and not kill so many trees. Now that 100 percent of iPhone customers have internet access, and a good number of non-iPhone customers have internet access at home, work, school or public library, it makes sense for companies to ask people to "opt-in" to getting their statements on paper, rather than forcing them to "opt-out".
technorati tags: IBM, Terabyte, TB, 50,000 trees, Jefferson Graham, USAtoday, Apple, iPhone, iPod, AT&T, Justine Ezarik, YouTube, Information, Lifecycle, Management, ILM, SNIA, EMC, Sun, StorageTek, HP, asset, laptops, expense, employees, privacy, exposure, liability, unethical tampering, unexpected loss, unauthorized access, opt-in, opt-out
Jon W Toigo over at Drunkendata has had a great set of posts on his skepticism of storage vendors touting their "green storage" solutions. My apologies for my "unnecessary" use of quotation marks.
The ones I liked specifically were:
The last of which refers to this ComputerWorld article "EPA: U.S. needs more power plants to support data centers", which claims "from a technology perspective, the systems most responsible for gobbling up power are the relatively low-cost x86 servers ..." The article is based on the recent EPA report that was just released.
Last month, in my post How Many Watts per Terabyte, I mentioned:
Some people find it surprising that it is often more cost-effective, and power-efficient, to run workloads on mainframe logical partitions (LPARs) than a stack of x86 servers running VMware.
Perhaps they won't be surprised any more. Here is an article in eWeek that explains how IBM is reducing energy costs 80% by consolidating 3,900 rack-optimized servers to 33 IBM System z mainframe servers, running Linux, in its own data centers. Since 1997, IBM has consolidated its 155 strategic worldwide data center locations down to just seven.
I am very pleased that IBM has invested heavily into Linux, with support across servers, storage, software and services. Linux is allowing IBM to deliver clever, innovative solutions that may not be possible with other operating systems. If you are in storage, you should consider becoming more knowledgeable in Linux.
The older systems won't just end up in a landfill somewhere. Instead, the details are spelled out in the IBM Press Release:
As part of the effort to protect the environment, IBM Global Asset Recovery Services, the refurbishment and recycling unit of IBM, will process and properly dispose of the 3,900 reclaimed systems. Newer units will be refurbished and resold through IBM's sales force and partner network, while older systems will be harvested for parts or sold for scrap. Prior to disposition, the machines will be scrubbed of all sensitive data. Any unusable e-waste will be properly disposed following environmentally compliant processes perfected over 20 years of leading environmental skill and experience in the area of IT asset disposition.
Whereas other vendors might think that some operational improvements will be enough, such as switching to higher-capacity SATA drives, or virtualizing x86 servers, IBM recognizes that sometimes more fundamental changes are required to effect real changes and real results.
I would like to welcome IBMer Barry Whyte to the blogosphere!
From his bio:
Barry Whyte is a 'Master Inventor' working in the Systems & Technology Group based in IBM Hursley, UK. Barry primarily works on the IBM SAN Volume Controller virtualization appliance. Barry graduated from The University of Glasgow in 1996 with a B.Sc (Hons) in Computing Science. In his 10 years at IBM he has worked on the successful Serial Storage Architecture (SSA) range of products and the follow-on Fibre Channel products used in the IBM DS8000 series. Barry joined the SVC development team soon after its inception and has held many positions before taking on his current role as SVC performance architect. Outside of work, Barry enjoys playing golf and all things to do with Rotary Engines.
To avoid confusion in future posts, I will refer to Barry Whyte as BarryW, and fellow EMC blogger Barry Burke (aka the Storage Anarchist) as BarryB.
I'm in Chicago this week, but it is actually HOTTER here than in my home town of Tucson, Arizona.
Perhaps I wrapped up my exploration of disk system performance one day too early. (While it is Friday here in Malaysia, it is still only Thursday back home)
Barry Burke, EMC blogger (aka The Storage Anarchist) writes:
Aren't you mixing metrics here?
This is a fair question, Barry, so I will try to address it here.
It was not a typo, I did mean MPG (miles per gallon) and not MPH (miles per hour). It is always challenging to find an analogy that everyone can relate to when explaining concepts in Information Technology that might be harder to grasp. I chose MPG because it is closely related to IOPS and MB/s in four ways:
It seemed that if I was going to explain why standardized benchmarks were relevant, I should find an analogy that has similar features to compare to. I thought about MPH, since it is based on time units like IOPS and MB/s, but decided against it based on an earlier comment you made, Barry, about NASCAR:
Let's imagine that a Dodge Charger wins the overwhelming majority of NASCAR races. Would that prove that a stock Charger is the best car for driving to work, or for a cross-country trip?
Your comparison, Barry, to car-racing brings up three reasons why I felt MPH is a bad metric to use for an analogy:
You also mention, Barry, the term "efficiency", but mileage is about "fuel economy". Wikipedia is quick to point out that while the fuel efficiency of petroleum engines has improved markedly in recent decades, this does not necessarily translate into fuel economy of cars. The same can be said of disk systems: better internal bandwidth on the backplane between controllers, and faster HDDs, do not necessarily translate into external performance of the disk system as a whole. You correctly point this out in your blog about the DMX-4:
Complementing the 4Gb FC and FICON front-end support added to the DMX-3 at the end of 2006, the new 4Gb back-end allows the DMX-4 to support the latest in 4Gb FC disk drives.
This also explains why the IBM DS8000, with its clever "Adaptive Replacement Cache" algorithm, has such high SPC-1 benchmark results despite the fact that it still uses 2Gbps drives inside. Given that it doesn't matter much between 2Gbps and 4Gbps on the back-end, why would it matter which vendor came first, second or third, and why call it a "distant 3rd" for IBM? How soon would IBM need to announce similar back-end support for it to be a "close 3rd" in your mind?
I'll wrap up with your excellent comment that Watts per GB is a typical "green" metric. I strongly support the whole "green initiative", and I used "Watts per GB" last month to explain how tape is less energy-consumptive than paper. I see on your blog you have used it yourself here:
The DMX-3 requires less Watts/GB in an apples-to-apples comparison of capacity and ports against both the USP and the DS8000, using the same exact disk drives
It is not clear whether "requires less" means "slightly less" or "substantially less" in this context, and I have no facts from my own folks within IBM to confirm or deny it. Given that tape is orders of magnitude less energy-consumptive than anything EMC manufactures today, the point is probably moot.
I find it refreshing, nonetheless, to have agreed-upon "energy consumption" metrics to make such apples-to-apples comparisons between products from different storage vendors. This is exactly what customers want to do with performance as well, without necessarily having to run their own benchmarks or work with specific storage vendors. Of course, Watts/GB consumption varies by workload, so to make such comparisons truly apples-to-apples, you would need to run the same workload against both systems. Why not use the SPC-1 or SPC-2 benchmarks to measure the Watts/GB consumption? That way, EMC can publish the DMX performance numbers at the same time as the energy consumption numbers, and then HDS can follow suit for its USP-V.
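To make the apples-to-apples idea concrete, here is a minimal sketch of the Watts/GB calculation. The power and capacity figures are hypothetical, purely for illustration; they are not measured vendor numbers.

```python
def watts_per_gb(total_watts, usable_gb):
    """Energy-consumption metric: sustained power draw divided by usable capacity."""
    return total_watts / usable_gb

# Hypothetical systems with the same usable capacity, so only power differs.
# These numbers are made up for the example.
system_a = watts_per_gb(total_watts=6000, usable_gb=50000)
system_b = watts_per_gb(total_watts=4500, usable_gb=50000)

print(f"System A: {system_a:.2f} W/GB")  # 0.12 W/GB
print(f"System B: {system_b:.2f} W/GB")  # 0.09 W/GB
```

As the post argues, the comparison is only truly apples-to-apples when both systems are measured at the same capacity, port count, and workload.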
I'm on my way back to the USA soon, but wanted to post this now so I can relax on the plane.
technorati tags: IBM, EMC, Storage Anarchist, MPG, MPH, IOPS, NASCAR, Malaysia, Watts, GB, green, back-end, DMX-3, DMX-4, HDS, USP, USP-V, SPC, SPC-1, SPC-2, standardized, benchmarks, workload, DS8000, disk, storage, tape
Wrapping up this week's exploration on disk system performance, today I will cover the Storage Performance Council (SPC) benchmarks, and why I feel they are relevant to help customers make purchase decisions. This all started to address a comment from EMC blogger Chuck Hollis, who expressed his disappointment in IBM as follows:
You've made representations that SPC testing is somehow relevant to customers' environments, but offered nothing more than platitudes in support of that statement.
Apparently, while everyone else in the blogosphere merely states their opinions and moves on, IBM is held to a higher standard. Fair enough, we're used to that. Let's recap what we covered so far this week:
Today, I will explore ways to apply these metrics to measure and compare storage performance.
Let's take, for example, an IBM System Storage DS8000 disk system. This has a controller that supports various RAID configurations, cache memory, and HDD inside one or more frames. Engineers who are testing individual components of this system might run specific types of I/O requests to test out the performance or validate certain processing.
This is known affectionately in the industry as the "four corners" test, because you can show the results on the four corners of a box, with writes on the left, reads on the right, hits on the top, and misses on the bottom. Engineers are proud of these results, but these workloads do not reflect any practical production workload. At best, since every I/O request is one of these four types, the four corners provide an expectation range, from the worst performance (most often write-miss, in the lower left corner) to the best performance (most often read-hit, in the upper right corner) you might get with a real workload.
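A small sketch of how four-corners numbers bound a real workload. The corner rates and the workload mix below are hypothetical, for illustration only; the blend averages per-request service times (the harmonic approach), not the raw rates.

```python
# Hypothetical "four corners" measurements, in IOPS.
corners = {
    ("read", "hit"): 200000,   # best case
    ("read", "miss"): 15000,
    ("write", "hit"): 120000,
    ("write", "miss"): 8000,   # worst case
}

best = max(corners.values())
worst = min(corners.values())
print(f"A real workload should fall between {worst} and {best} IOPS")

# A blended estimate for an assumed mix of request types (weights sum to 1.0).
mix = {("read", "hit"): 0.5, ("read", "miss"): 0.2,
       ("write", "hit"): 0.2, ("write", "miss"): 0.1}

# Average the service times (1/rate), weighted by the mix, then invert.
service_time = sum(weight / corners[kind] for kind, weight in mix.items())
print(f"Blended estimate: {1 / service_time:.0f} IOPS")
```

Note how the miss corners dominate the blend even at modest weights, which is exactly why four-corners numbers alone overstate what a production workload will see.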
To understand what is needed to design a test that is more reflective of real business conditions, let's go back to yesterday's discussion of fuel economy of vehicles, with mileage measured in miles per gallon. The How Stuff Works website offers the following description for the two measurements taken by the EPA:
Why two different measurements? Not everyone drives in a city in stop-and-go traffic. Having only one measurement may not reflect the reality that you may travel long distances on the highway. Offering both city and highway measurements allows the consumers to decide which metric relates closer to their actual usage.
Should you expect your actual mileage to be the exact same as the standardized test? Of course not. Nobody drives exactly 11 miles in the city every morning with 23 stops along the way, or 10 miles on the highway at the exact speeds listed. The EPA's famous phrase "your mileage may vary" has been quickly adopted into popular culture's lexicon. All kinds of factors, like weather, distance, and driving style, can cause people to get better or worse mileage than the standardized tests would estimate.
Want more accurate results that reflect your driving pattern, in specific conditions that you are most likely to drive in? You could rent different vehicles for a week and drive them around yourself, keeping track of where you go, how fast you drove, and how many gallons of gas you purchased, so that you can then repeat the process with another rental, and so on, and then use your own findings to base your comparisons. Perhaps you find that your results are always 20% worse than EPA estimates when you drive in the city, and 10% worse when you drive on the highway. Perhaps you have many mountains and hills where you drive, you drive too fast, you run the air conditioner too cold, or whatever.
If you did this with five or more vehicles, and ranked them best to worst from your own findings, and also ranked them best to worst based on the standardized results from the EPA, you likely will find the order to be the same. The vehicle with the best standardized result will likely also have the best result from your own experience with the rental cars. The vehicle with the worst standardized result will likely match the worst result from your rental cars.
(This will be one of my main points: standardized estimates don't have to be accurate to be useful in making comparisons. The comparisons and decisions you would make with estimates are the same as you would have made with actual results, or customized estimates based on current workloads. Because the rankings are in the same order, they are relevant and useful for making decisions based on those comparisons.)
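The rental-car argument can be sketched in a few lines: if your real-world results run a consistent percentage below the standardized estimates, the ranking of the vehicles is unchanged. The MPG figures below are hypothetical.

```python
# Hypothetical EPA city-mileage estimates, in MPG.
epa_city_mpg = {"sedan": 32, "suv": 22, "truck": 17}

# Suppose your own city driving always comes in 20% below the EPA estimate.
observed = {car: mpg * 0.8 for car, mpg in epa_city_mpg.items()}

def rank(results):
    """Return the vehicles ordered best-to-worst by mileage."""
    return sorted(results, key=results.get, reverse=True)

print(rank(epa_city_mpg))  # ['sedan', 'suv', 'truck']
print(rank(observed))      # same order: a uniform bias cancels out in comparisons
```

The same logic is the post's case for SPC results: even a biased estimate supports valid comparisons, as long as the bias applies roughly evenly across the systems being compared.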
Most people shopping around for a new vehicle do not have the time or patience to do this with rental cars. They can use the EPA-certified standardized results to make a "ball-park" estimate on how much they will spend in gasoline per year, decide only on cars that might go a certain distance between two cities on a single tank of gas, or merely to provide a ranking of the vehicles being considered. While mileage may not be the only metric used in making a purchase decision, it can certainly be used to help reduce your consideration set and factor in with other attributes, like number of cup-holders, or leather seats.
In this regard, the Storage Performance Council has developed two benchmarks that attempt to reflect normal business usage, similar to "City" and "Highway" driving measurements.
The SPC-2 benchmark was added when people suggested that not everyone runs OLTP anddatabase transactional update workloads, just as the "Highway" measurement was addedto address the fact that not everyone drives in the City.
If you are one of the customers out there willing to spend the time and resources to do your own performance benchmarking, either at your own data center, or with theassistance of a storage provider, I suspect most, if not all, the major vendors(including IBM, EMC and others), and perhaps even some of the smaller start-ups, would be glad to work with you.
If you want to gather performance data of your actual workloads, and use this to estimate how your performance might be with a new or different storage configuration, IBMhas tools to make these estimates, and I suspect (again) that most, if not all, of theother storage vendors have developed similar tools.
For the rest of you who are just looking to decide which storage vendors to invite on your next RFP, and which products you might like to investigate that match the level of performance you need for your next project or application deployment, then the SPC benchmarks might help you with this decision. If performance is important to you, factor these benchmark comparisons in with the rest of the attributes you are looking for in a storage vendor and a storage system.
In my opinion, for some people the SPC benchmarks provide real value in this decision-making process. They are proportionally correct, in that even if your workload achieves only a portion of the SPC result, storage systems with faster benchmarks will provide you better performance than storage systems with lower benchmark results. That is why I feel they can be relevant in making valid comparisons for purchase decisions.
Hopefully, I have provided enough "food for thought" on this subject to support why IBM participates in the Storage Performance Council, why the performance of the SAN Volume Controller can be compared to the performance of other disk systems, and why we at IBM are proud of the benchmark results in our recent press release.
Enjoy the weekend!
technorati tags: IBM, SPC, EMC, Chuck Hollis, fastest, disk, system, SVC, HDD, storage, four corners, read-hit, read-miss, write-hit, write-miss, City, Highway, MPG, OLTP, SPC-1, SPC-2, benchmarks, file, database, video
Continuing our exploration this week into the performance of disk systems, today I will cover the metrics to measure performance. Why do people have metrics?
Several bloggers suggested that perhaps an analogy to vehicles would be reasonable, given that cars and trucks are expensive pieces of engineering equipment, and people make purchase decisions between different makes and models.
In the United States, the Environmental Protection Agency (EPA) government entity is responsible for measuring fuel economy of vehicles using the metric Miles Per Gallon (mpg). Specifically, these are U.S. miles (not nautical miles) and U.S. gallons, not imperial gallons. It is important when defining metrics that you are precise on the units involved.
Since nearly all vehicles burn gallons of gasoline and travel miles of distance, this is a great metric to use for comparing all kinds of vehicles, including motorcycles, cars, trucks and airplanes. The EPA has a fuel economy website to help people make these comparisons. Manufacturers are required by law to post their vehicles' fuel-economy ratings, as certified by the federal Environmental Protection Agency (EPA), on the window stickers of most every new vehicle sold in the U.S. -- vehicles that have gross-vehicle-weight ratings over 8,500 pounds are the exception.
What about storage performance? What could we use as the "MPG"-like metric that would allow you to compare different makes and models of storage?
The two most commonly used are I/O requests per second (IOPS) and Megabytes transferred per second (MB/s). To understand the difference in each one, let's go back to our analogy from yesterday's post.
(A woman calls the local public library. She picks up the phone, and dials the phone number of the one down the street. A man working at the library hears the phone ring, answers it with "Welcome to the Public Library! How can I help you?" She asks "What is the capital city of Ethiopia?" He replies "Addis Ababa" and hangs up. Satisfied with this response, she hangs up. In this example, the query for information was the I/O request, initiated by the lady, to the public library target)
In this example, it might have only taken 1 second to actually provide the answer, but it might have taken 10-30 seconds to pick up the phone, hear the request, respond, and then hang up the phone. If one person is able to do this in 10 seconds, on average, then he can handle 360 questions per hour. If another person takes 30 seconds, then only 120 questions per hour. Many business applications read or write less than 4KB of information per I/O request, and as such the dominant factor is not the amount of time to transfer the data, but how quickly the disk system can respond to each request. IOPS is very much like counting "Questions handled per hour" at the public library. To be more specific on units, we may specify the specific block size of the request, say 512 bytes or 4096 bytes, to make comparisons consistent.
Now suppose that instead of asking for something with a short answer, you ask the public library to read you the article from a magazine, identify all the movies and show times of a local theatre, or recite a work from Shakespeare. In this case, the time it took to pick up the phone and respond is very small compared to the time it takes to deliver the information, and could be measured instead in words per minute. Some employees of the library may be faster talkers, having perhaps worked in auction houses in a prior job, and can deliver more words per minute than other employees. MB/s is very much like counting "Spoken words per minute" at the public library. To be more specific on units, we may request a specific amount of information, say the words contained in "Romeo and Juliet", to make comparisons consistent.
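The two library analogies translate directly into the two metrics. A minimal sketch, using the 10-second figure from the example above and an assumed 4KB block size (any specific IOPS numbers here are illustrative):

```python
def iops(avg_response_seconds):
    """Small-block workloads: requests completed per second is what matters."""
    return 1.0 / avg_response_seconds

def mb_per_s(block_size_kb, achieved_iops):
    """Large-block workloads: throughput = block size x request rate."""
    return block_size_kb * achieved_iops / 1024.0

# 10 seconds per question works out to 360 questions per hour,
# matching the public-library example.
print(iops(10) * 3600)        # 360.0

# A hypothetical 4KB workload at 20,000 IOPS moves about 78 MB/s.
print(mb_per_s(4, 20000))     # 78.125
```

This is also why the two metrics favor different designs: IOPS rewards low per-request latency, while MB/s rewards raw transfer bandwidth.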
Now that we understand the metrics involved, tomorrow we can discuss how to use these in the measurement process.
Yesterday, I started this week's topic discussing the various areas of exploration to help understand our recent press release of the IBM System Storage SAN Volume Controller and its impressive SPC-1 and SPC-2 benchmark results that rank it the fastest disk system in the industry.
Some have suggested that since the SVC has a unique design, it should be placed in its own category, and not compared to other disk systems. To address this, I would like to define what IBM means by "disk system" and how it is comparable to other disk systems.
When I say "disk system", I am going to focus specifically on block-oriented direct-access storage systems, which I will define as:
One or more IT components, connected together, that function as a whole, to serve as a target for read and write requests for specific blocks of data.
Clarification: One could argue, and several do in various comments below, that there are other types of storage systems that contain disks, some that emulate sequential access tape libraries, some that emulate file-systems through CIFS or NFS protocols, and some that support the storage of archive objects and other fixed content. At the risk of looking like I may be including or excluding such to fit my purposes, I wanted to avoid apples-to-oranges comparisons between very different access methods. I will limit this exploration to block-oriented, direct-access devices. We can explore these other types of storage systems in later posts.
People who have been working a long time in the storage industry might be satisfied by this definition, thinking of all the disk systems it would include, and recognizing that other types of storage, like tape systems, are appropriately excluded.
Others might be scratching their heads, thinking to themselves "Huh?" So, I will provide some background, history, and additional explanation. Let's break up the definition into different phrases, and handle each separately.
So, the SAN Volume Controller is a disk system comprising one to four node-pairs. Each node is a piece of IT equipment that has processors and cache. These node-pairs are connected to a pair of UPS power supplies to protect the cache memory holding writes that have not yet been de-staged. The combination of node-pairs and UPS, acting as a whole, is able to serve as a target for SCSI commands sent over Fibre Channel cables on a Storage Area Network (SAN). To read some blocks of data, it uses its internal cache storage to satisfy the request, and for others, it goes out to external disk systems that contain the data required. All writes are satisfied immediately in cache on the SVC, and later de-staged to external disk when appropriate.
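The write path described above is classic write-back caching: acknowledge the write once it is safely in cache, de-stage to backing disk later. Here is a toy model of that behavior, purely a sketch of the general technique and not the actual SVC implementation (all class and method names are invented for illustration):

```python
from collections import deque

class WriteBackCache:
    """Toy write-back cache: writes are acknowledged from cache
    and de-staged to the backing disk later."""

    def __init__(self):
        self.cache = {}        # block number -> data held in cache
        self.dirty = deque()   # blocks written but not yet de-staged
        self.backing_disk = {} # stand-in for the external disk system

    def write(self, block, data):
        self.cache[block] = data
        self.dirty.append(block)
        return "ack"           # acknowledged immediately, before any disk I/O

    def read(self, block):
        if block in self.cache:                  # cache hit: fast path
            return self.cache[block]
        return self.backing_disk.get(block)      # miss: go out to external disk

    def destage(self):
        """De-stage all dirty blocks to the backing disk."""
        while self.dirty:
            block = self.dirty.popleft()
            self.backing_disk[block] = self.cache[block]

cache = WriteBackCache()
print(cache.write(7, "payload"))   # ack
cache.destage()
print(cache.backing_disk[7])       # payload
```

This also shows why the UPS matters in the real product: dirty data lives only in cache until de-stage completes, so the cache memory must be protected against power loss.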
As of the end of 2Q07, having reached our four-year anniversary for this product, IBM has sold over 9000 SVC nodes, which are part of more than 3100 SVC disk systems. These things are flying off the shelves, clocking in at 100% year-to-year growth over the amount we sold twelve months ago. Congratulations go to the SVC development team for their impressive feat of engineering that is starting to catch the attention of many customers and return astounding results!
So, now that I have explained why the SVC is considered a disk system, tomorrow I'll discuss metrics to measure performance.
Continuing my business trip through Asia, I have left Chengdu, China, and am now in Kuala Lumpur, Malaysia.
On Sunday, a colleague and I went to the famous Petronas Twin Towers, which a few years ago were officially the tallest buildings in the world. If you get there early enough in the day, and wait in line for a few hours, you can get a ticket permitting you to go up to the "Skybridge" on the 41st floor that connects the two buildings. The views are stunning, and I am glad to have done this. (If you are afraid of heights, get cured by facing your fears with skydiving.)
You would think that a question as simple as "Which is the tallest building in the world?" could easily be answered, given that buildings remain fixed in one place and do not drastically shrink or get taller over time or weather conditions, and the unit of height, the "meter", is an officially accepted standard in all countries, defined as the distance traveled by light in absolute vacuum in 1/299,792,458 of a second.
The controversy stems around two key areas of dispute:
To bring some sanity to these comparisons, the Council on Tall Buildings and Urban Habitat has tried to standardize the terms and definitions to make comparisons between buildings fair. Why does it matter whose building is tallest? It matters in two ways:
What does any of this have to do with storage? Two weeks ago, IBM and the Storage Performance Council answered the question "Which is the fastest disk system?" with a press release. Customers that care about performance of their most mission-critical applications are often willing to pay a premium to run their applications on the fastest disk system, and the IBM System Storage SAN Volume Controller, built through a global collaboration of architects and engineers across several countries, is (in my opinion at least) an impressive feat of storage engineering.
EMC blogger Chuck Hollis was the first to question the relevance of these results, and I failed to "turn the other cheek" and responded accordingly. The blogosphere erupted, with more opinions piled on by others, many from EMC and IBM, found in comments on these posts or other blogs; some have since been retracted or deleted, while others remain for historical purposes.
At the heart of all this opinionated debate, lies a few areas of exploration:
I will try to address some of these issues in a series of posts this week.
technorati tags: IBM, KL, Kuala Lumpur, Malaysia, Petronas, Twin Towers, SkyBridge, tallest, building, structure, tower, fastest, disk, system, SVC, SAN Volume Controller, EMC, Chuck Hollis, SPC, Storage Performance Council
For those in the US, a comedian named Carlos Mencia has a great TV show, Mind of Mencia, and one of my favorite segments is "Why the @#$% is this news!", where he goes about showing blatantly obvious things that were reported in various channels.
So, when I saw that IBM once again, for the third year in a row, has the fastest disk system, the IBM System Storage SAN Volume Controller (SVC), based on widely-accepted industry benchmarks representing typical business workloads, I thought, "Do I really want to blog about this, and sound like a broken record, repeating my various statements of the past about how great SVC is?" It's like reminding people that IBM has earned more US patents than any other company, every year, for the past 14 years.
(Last year, I received comments from Woody Hutsell, VP of Texas Memory Systems, because I pointed out that their "World's Fastest Storage"® cache-only system was not as fast as IBM's SVC. You can read my opinions, and the various comments that ensued, here and here.)
That all changed when EMC uber-blogger Chuck Hollis forgot his own Lessons in Marketing when he posted his rant Does Anyone Take The SPC Seriously? That's like asking "Does anyone take book and movie reviews seriously?" Of course they do! In fact, if a movie doesn't make a big deal of its "Two thumbs up!" rating, you know it did not sit well with the reviewers. It's even more critical for books. I guess this latest news from SPC really got under EMC's skin.
For medium and large size businesses, storage is expensive, and customers want to do as much research as possible ahead of time to make informed decisions. A lot of money is at stake, and often, once you choose a product, you are stuck with that vendor for many years to come, sometimes paying software renewals after only 90 days, and hardware maintenance renewals after only a year when the warranty runs out.
Customers shopping for storage like the idea of a standardized test that is representative, so they can compare one vendor's claims with another. The Storage Performance Council (SPC), much like the Transaction Processing Performance Council (TPC) for servers, requires full disclosure of the test environment so people can see what was measured and make their own judgement on whether or not it reflects their workloads. Chuck pours scorn on the SPC, but I would point to TPC-C as a great success story and ask why he thinks the same can't happen for storage. Server performance is also a complicated subject, but people compare TPC-C and TPC-H benchmarks all the time.
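To make the "standardized test" idea concrete, here is a toy sketch of how a shopper might rank SPC-1 style results by price-performance. All vendor names, IOPS figures, and prices below are made up for illustration; they are not actual SPC disclosures.

```python
# Toy ranking of hypothetical SPC-1 style results by price-performance.
# All vendor names, IOPS figures, and prices are invented for illustration.
results = [
    {"system": "Vendor A array", "spc1_iops": 155_000, "total_price_usd": 1_300_000},
    {"system": "Vendor B array", "spc1_iops": 200_000, "total_price_usd": 2_000_000},
    {"system": "Vendor C array", "spc1_iops": 120_000, "total_price_usd": 800_000},
]

for r in results:
    # SPC reports price-performance as total tested price divided by SPC-1 IOPS
    r["usd_per_iops"] = r["total_price_usd"] / r["spc1_iops"]

# Lower $/IOPS is better
for r in sorted(results, key=lambda r: r["usd_per_iops"]):
    print(f'{r["system"]}: {r["usd_per_iops"]:.2f} $/SPC-1 IOPS')
```

The point is not the arithmetic, but that full disclosure of both price and throughput lets anyone redo this comparison themselves.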
Note: This blog post has been updated. I am retracting comments that were unfair generalizations. The next two paragraphs are different than originally posted.
Chuck states that "Anyone is free, however, to download the SPC code, lash it up to their CLARiiON, and have at it." I encourage every customer to do this with whatever disk systems they already have installed. Judge for yourself how each benchmark compares to your experience with your application workload, and consider publishing the results for the benefit of others, or at least send me the results, so that I can better understand all of these "use cases" that Chuck talks about so often. I agree that real-world performance measurements using real applications and real data are always going to be more accurate and more relevant to that particular customer. Unfortunately, few or no such results are made public. They are noticeably absent. With thousands of customers running with storage from all the major storage vendors, as well as storage from smaller start-up companies, I would expect more performance comparison data to be readily available.
In my opinion, customers would benefit by seeing the performance results obtained by others. SPC benchmarks help to fill this void for customers who have not yet purchased the equipment and are looking for guidance on which vendors to work with and which products to put into their consideration set.
Truth is, benchmarks are just one of the many ways to evaluate storage vendors and their products. There are also customer references, industry awards, and corporate statements of a company's financial health, strategy and vision. Like anything else, it is information to weigh against other factors when making expensive decisions. And I am sure the SPC would be glad to hear suggestions for a third SPC-3 benchmark, if the first two don't provide you enough guidance.
So, if you are not delighted with the performance you are getting from your storage now, or would benefit by having even faster I/O, consider improving its performance by adding SAN Volume Controller. SVC is like salt or soy sauce: it makes everything taste better. IBM would be glad to help you with a try-and-buy or proof-of-concept approach, and even help you compare the performance, before and after, with whatever gear you have now. You might just be surprised how much better life is with SVC. And if, for some reason, the performance boost you experience for your unique workload is only 10-30% better with SVC, you are free to tell the world about your disappointment.
technorati tags: Carlos Mencia, Mind of Mencia, IBM, system, storage, SVC, SAN Volume Controller, Storage Performance Council, SPC, benchmarks, Texas Memory Systems, Woody Hutsell, EMC, Chuck Hollis, movie, book, reviews, awards, salt, soy sauce
Continuing this week's theme on Business Continuity, I will use this post to discuss this week's IBM solid state disk announcement. This new offering provides a new way to separate programs from data, to help minimize downtime and outages normally associated with disk drive failures.
Until now, the method most people used to minimize the amount of data on internal storage was to use disk-less servers with Boot-over-SAN; however, not all operating systems, and not all disk systems, supported this.
Windows, however, is not supported, because of the small 4GB size and USB protocol limitations. For Windows, you would add a SAS drive, boot from that hard drive, and use the 4GB Flash drive for data only.
So what's new this time? Here's a quick recap of the July 17 announcement. For the IBM BladeCenter HS21 XM blade servers, new models of internal "disk" storage:
Until recently, solid state storage was available at a price premium only. Flash prices have dropped 50% annually while capacities have doubled. This trend is expected to continue through 2009.
Flash drives use non-volatile memory instead of moving parts, so they are less likely to break down under high external environmental stress conditions, like vibration and shock, or extreme temperature ranges (0°C to +70°C) that would make traditional hard disks prone to failure. This is especially important for our telecommunications clients, who are always looking for solutions that are NEBS Level 3 compliant.
Last year, I mentioned that flash drives could provide only a limited number of write and erase cycles, but today's new advances in wear-leveling algorithms have nearly eliminated this limitation.
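The idea behind wear-leveling can be illustrated with a toy allocator that always directs the next erase/program cycle to the least-worn block, so no single block exhausts its write budget early. This is purely a conceptual sketch of my own, not how any particular flash controller is actually implemented.

```python
# Toy illustration of wear-leveling: spread erase cycles evenly by always
# writing to the block with the fewest erases so far. Purely illustrative;
# real flash controllers use far more sophisticated mapping schemes.
class ToyWearLeveler:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks

    def write(self):
        # Pick the least-worn block for the next erase/program cycle
        block = min(range(len(self.erase_counts)),
                    key=self.erase_counts.__getitem__)
        self.erase_counts[block] += 1
        return block

leveler = ToyWearLeveler(4)
for _ in range(100):
    leveler.write()
# With leveling, wear stays even: 100 writes over 4 blocks is 25 each
print(leveler.erase_counts)  # [25, 25, 25, 25]
```

Without leveling, a "hot" block would absorb all 100 cycles while the others stayed idle; spreading the wear is what stretches the usable life of the drive.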
As with any SATA drive, performance depends on workload. Solid state drives perform best as OS boot devices, taking only a few seconds longer to boot an OS than from a traditional 73GB SAS drive. Flash drives also excel in applications featuring random read workloads, such as web servers. For random and sequential write workloads, use SAS drives instead for higher levels of performance.
So, even though this is not part of the System Storage product line, I am very excited for IBM. To find out if this will work in your environment, go to the IBM ServerProven website that lists compatibility with hardware, applications and middleware, or review the latest Configuration and Options Guide (COG).
technorati tags: IBM, Business, Continuity, solid, state, flash, disk, drive, announcement, blade, server, BladeCenter, H21, XM, 4GB, Flash, Memory, Device, USB2.0, Linux, RedHat, RHEL, Novell, SUSE, SLES, Windows, Project, Big Green, SATA, SAS, energy, efficient, efficiency, performance, NEBS, telecommunications, boot-over-SAN, Google, Carnegie Mellon, study, Vmware
This week and next I am touring Asia, meeting with IBM Business Partners and sales reps about our July 10 announcements.
Clark Hodge might want to figure out where I am, given the nuclear reactor shutdowns from an earthquake in Japan. His theory is that you can follow my whereabouts just by following the news of major power outages throughout the world.
So I thought this would be a good week to cover the topic of Business Continuity, which includes disaster recovery planning. When making Business Continuity plans, I find it best to work backwards. Think of the scenarios that would require such recovery actions to take place, then figure out what you need to have at hand to perform the recovery, and then work out the tasks and processes to make sure those things are created and available when and where needed.
I will use my IBM Thinkpad T60 as an example of how this works. Last week, I was among several speakers making presentations to an audience in Denver, and this involved carrying my laptop from the back of the room, up to the front of the room, several times. When I got my new T60 laptop a year ago, it specifically stated NOT to carry the laptop while the disk drive was spinning, to avoid vibrations and gyroscopic effects. It suggested always putting the laptop in standby, hibernate or shutdown mode prior to transportation, but I haven't yet gotten in the habit of doing this. After enough trips back and forth, I had somehow corrupted my C: drive. It wasn't a complete corruption: I could still use Microsoft PowerPoint to show my slides, but other things failed, sometimes the fatal BSOD and other times less drastically. Perhaps the biggest annoyance was that I lost a few critical DLL files needed for my VPN software to connect to IBM networks, so I was unable to download or access e-mail or files inside IBM's firewall.
Fortunately, I had planned for this scenario, and was able to recover my laptop myself, which is important when you are on the road and your help desk is thousands of miles away. (In theory, I am now thousands of miles closer to our help desk folks in India and China, but perhaps further away from those in Brazil.) Not being able to respond to e-mail for two days was one thing, but no access for two weeks would have been a disaster! The good news: My system was up and running before leaving for the trip I am on now to Asia.
Following my three-step process, here's how this looks:
technorati tags: IBM, July, announcements, earthquake, Japan, nuclear reactor, power, outage, business, continuity, disaster, recovery, plan, plans, planning, IBM, Thinkpad, T60, laptop, Windows, Denver, BSOD, VPN, India, China, Brazil, help desk, Asia, Tivoli, Storage, Manager, TSM, BMR, external, USB, bootable, CD, DVD, separating, programs, data, Clark Hodge
It's Tuesday, which means IBM makes its announcements. We had several for the IBM System Storage product line. Here's a quick recap.
I'm off to Denver, Colorado this week. I hope it is cooler there than it is down here in Tucson, Arizona.
technorati tags: IBM, disk, system, storage, SAS, FC, DS3000, DS3200, DS3400, EXP3000, NAS, EXN1000, tape, virtualization, library, TS7740, grid, Copy Export, throughput, TS3400, TS3200, mainframe, LTO, Ultrium, Cisco, MDS, 9124, Express, Advantage, DS4000, DS4700, TS3200, GAM, Grid Archive Manager, 3996, optical, WORM, Denver, Colorado, Tucson, Arizona, announcements
Avi Bar-Zeev of RealityPrime has an interesting post about How Google Earth [really] Works. Normally, people who are very knowledgeable in a topic have a hard time describing concepts in basic terms. Avi was one of the co-founders of Keyhole, the company that built the predecessor for Google Earth, and also worked with Linden Lab on the 3D rendering in its virtual world, so he certainly knows what he is talking about. While he sometimes drops down into techno-talk about patents, the post overall is a good read.
It is perhaps human nature to be curious on how things are put together and how they function, leading to the popularity of web sites like www.howstuffworks.com that cover a wide range of topics.
Many things can be used without understanding their inner workings. You can put on a pair of blue jeans without knowing how the cotton was made into denim fabric; lace up your favorite pair of running shoes without understanding the chemical make-up of the plastic that cushions your feet; or drink a glass of beer after your five mile run without knowing how alcohol is processed by your liver.
For technology, however, some people insist they need to know how it works in order for them to get the most use of it. When shopping for a car, for example, a guy might look under the hood, and ask questions about how the engine works, while his wife sits inside the vehicle, counting cup holders and making sure the radio has all the right buttons.
Not all technology suffers from need-to-know-itis. For example, the Apple iPod music player and the Canon PowerShot digital camera, are both just disk systems that read and write data, with knobs and dials on one end, and ports for connectivity on the other. Everyone just asks how to use their controls, and might read the manual to understand how to connect the cables. Few people who use these devices ask how they work before they buy them.
Other disk systems, the kind designed for data centers for the medium and large enterprise, apparently aren't there yet. Storage admins who might happily own both an iPod player and a PowerShot camera, insist they need to know how the technologies inside various storage offerings work. Is this just curiosity talking? Or are there some tasks like configuration, tuning, and support that just can't be done without this knowledge? Does knowing the inner workings somehow make the job more enjoyable, easier, or performed with less stress?
I'm curious what you think, send me a comment on this.
technorati tags: Avi Bar-Zeev, Google, Earth, cotton, denim, plastic, shoes, beer, alcohol, liver, IBM, disk, system, storage, technology, Apple, iPod, music, player, Canon, PowerShot, digital, camera
Chris Evans over at Storage Architect posts about Hardware Replacement Lifecycle Update, on how storage virtualization can help with storage hardware replacement. He makes two points that I would like to comment on.
In a typical four-year lifecycle of storage arrays, it might take six months or so to fill up the box, and might take as much as a year at the end to move the data out to other equipment. SVC can greatly reduce both of these, so that you can take immediate advantage of new equipment as soon as possible, and keep using it for close to the full four years, migrating weeks or days before your lease expires.
NetworkWorld has compiled an interlude with storage videos, a follow-up to last year's Yikes! Exploding Servers.
I've blogged about some of these videos already, but since there are probably a few out there buying the brand new Apple iPhone looking for YouTube videos to play on them, these links might provide some example entertainment on your new handheld device.
Next week has the "Fourth of July" Independence Day holiday in the USA smack in the middle of the week, so I expect the blogosphere to quiet down a bit. So whether you are working next week or not, in the USA or elsewhere, take some time to enjoy your friends and family.
Chuck Hollis makes some excellent points about Green Data Center Goes Marketing Mainstream. He does a great job summarizing EMC's strategy in this area:
Both are great recommendations, but why limit yourself to what EMC offers? Your x86-based machines are only a subset of your servers, and disk is only a subset of your storage. IBM takes a more holistic approach, looking at the entire data center.
technorati tags: IBM, EMC, Chuck Hollis, VMware, FC, SAS, SATA, FATA, disk, storage, logical partition, energy, power, cooling, Steve Duplessie, dynamic, persistent, data, Lawrence Berkeley National Laboratory, megawatt, paper, optical, microfiche, LTO, 3592, Project Big Green, Secondlife
I'm in the Malev lounge at the Budapest Airport, waiting for my flight to return back to Tucson.
Back in the late 1980's and early 1990's, I was one of the architects for DFSMS on z/OS, and customers always asked, "What is the clip level?", in other words, how big does a customer have to be to take advantage of DFSMS? We worked it out that if you had more than 100GB of disk data, DFSMS was worthwhile. DFSMS is now just standard by default, as everyone easily has more than 100GB of data.
Later, in the late 1990's, I worked on Linux for System z. Again, customers asked how many Linux guest images would justify deploying applications on a mainframe. We worked it out to about 10 images. 10 Linux logical partitions, or Linux guests under z/VM was enough to cost justify the entire investment.
So what is the "clip level" for SANs? How many servers does an SMB need to justify deploying a SAN? IBM announced the new BladeCenter S designed specifically for mid-sized companies, 100 to 1000 employees, typically running 25 to 45 servers. However, I suspect companies with as few as 7-10 servers would probably benefit from deploying an FC or IP SAN.
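One way to estimate a clip level like this is a simple break-even calculation: find the server count where the fixed cost of shared SAN infrastructure is paid back by the lower per-server cost. Every dollar figure below is a hypothetical placeholder of my own, not IBM pricing.

```python
# Hypothetical break-even estimate for a small SAN: at how many servers does
# shared SAN storage become cheaper than direct-attached storage (DAS)?
# All dollar figures are made-up placeholders for illustration only.
def san_clip_level(das_cost_per_server, san_fixed_cost, san_cost_per_server):
    """Smallest server count where total SAN cost <= total DAS cost."""
    n = 1
    while san_fixed_cost + n * san_cost_per_server > n * das_cost_per_server:
        n += 1
    return n

# e.g. $6,000/server for DAS, versus $25,000 of shared switches and arrays
# plus $2,000/server for HBAs and cabling
print(san_clip_level(das_cost_per_server=6_000,
                     san_fixed_cost=25_000,
                     san_cost_per_server=2_000))  # prints 7
```

With these invented numbers the break-even lands at 7 servers; real answers depend entirely on your actual hardware, software, and staffing costs.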
What do you think? Send me a comment on how many servers should be the clip level.
This week I am off to Budapest, Hungary, for business meetings. It is the closest major city to IBM's manufacturing plant in a small town called Vac (rhymes with "knots") where the IBM System Storage DS8000 series and SAN Volume Controller are assembled.
One of the differences between IBM and the other storage vendors is that IBM is also in the business of middleware, application-aware backup software, and advanced copy services. This allows IBM to put together solutions that work to address specific challenges for our clients.
IBM has written a whitepaper on a clever VSS Snapshot Backup for Exchange using IBM Tivoli Storage Manager and the point-in-time copy capabilities of IBM System Storage disk systems.
A problem in the past was that each vendor's point-in-time copy method had its own unique proprietary interface. Microsoft developed the Volume Shadow Copy Service (VSS) as a common interface front-end to resolve this concern. IBM Tivoli Storage Manager for Mail can invoke standard VSS interfaces, and these in turn can invoke FlashCopy on the IBM System Storage SAN Volume Controller, DS8000 series, or DS6000 series disk system.
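The value of such a common front-end can be sketched with a toy adapter pattern: the backup application codes to one snapshot interface, and each disk system plugs in its own provider behind it. All class and method names below are hypothetical and are not the actual VSS or FlashCopy APIs.

```python
# Toy sketch of why a common snapshot interface (like VSS) helps: the backup
# application codes to one interface, and each disk system supplies a provider.
# All names are hypothetical; this is not the real VSS or FlashCopy API.
from abc import ABC, abstractmethod

class SnapshotProvider(ABC):
    @abstractmethod
    def snapshot(self, volume: str) -> str:
        """Create a point-in-time copy and return its identifier."""

class FlashCopyProvider(SnapshotProvider):
    # Stands in for a disk system's own point-in-time copy function
    def snapshot(self, volume: str) -> str:
        return f"flashcopy:{volume}"

class GenericProvider(SnapshotProvider):
    # Stands in for any other vendor's VSS-compatible hardware provider
    def snapshot(self, volume: str) -> str:
        return f"snap:{volume}"

def backup(provider: SnapshotProvider, volume: str) -> str:
    # The backup application never needs vendor-specific code
    return provider.snapshot(volume)

print(backup(FlashCopyProvider(), "E:"))  # flashcopy:E:
print(backup(GenericProvider(), "E:"))    # snap:E:
```

Swapping the provider changes which hardware takes the snapshot without touching the backup application, which is exactly the flexibility the common interface buys.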
You might be thinking: Wouldn't it have been less effort to just have TSM for Mail invoke IBM proprietary interfaces, rather than having to put full VSS support into TSM for Mail, and then full VSS support into IBM's various disk systems? Perhaps, but IBM doesn't decide to do things because it is the cheapest way; we focus on what is the right way. In this case, customers now have more choices: they can use TSM for Mail with IBM or non-IBM disk systems that support the VSS interface, and IBM disk systems can be employed for other VSS snapshot uses.
Of course, we would like our clients to consider both TSM and IBM System Storage disk systems for a combined solution, not because they are required to make the solution work, but because both are best-of-breed, and whitepapers like this show how they can provide synergy working together.
A recent blog by Chris Mellor advances the outlandish conspiracy theory that IBM and HDS copied virtualisation technology from small start-up company DataCore.
(Chris doesn't actually name the source making such a claim, whether that someone was employed by any of the parties involved at the time the events occurred, or is currently employed by a competitor like EMC, bitterly jealous of the success IBM and HDS currently enjoy with their offerings.)
As I already posted before about IBM's long history of storage virtualization, SAN Volume Controller was really part of a sequence of major products in this area, after the successful 3850 MSS and 3494 VTS block virtualization products.
In the late 1990's, our research teams in Almaden, California and Hursley, UK were exploring storage technologies that could take advantage of commodity hardware parts and the industry-leading Linux operating system.
As is often the case, while IBM was working on "the perfect product", small start-ups announced "not-yet-perfect" products into the marketplace. Partnering with DataCore was a smart tactical move, for the following reasons:
The partnership proved worthwhile, not just to prove to IBM that this was a worthwhile market to enter, but also to show how "NOT" to package a solution. Specifically, DataCore SANsymphony was software that you had to install on your own Windows-based server. The client was left with the task of ordering a suitable Intel-based server, with the right amount of CPU cycles, RAM and host bus adapter ports, and configuring the Windows operating system and DataCore software.
It didn't go well. Basically, customers were expected to be their own "hardware engineers", having to knowway too much about storage hardware and software to design a combination that worked for theirworkloads. Most clients were disappointed with the amount of effort involved, and the resulting poor performance.
To fix this, IBM delivered the SAN Volume Controller, with an optimized Linux operating system and internally-written software that runs on IBM System x(tm) server hardware optimized for performance.
I can't speak for HDS, but I suspect they came to similar conclusions that resulted in a similar decision to build their product in-house. I welcome Hu Yoshida to correct me if I am wrong on this.
This week I was in Palm Springs in meetings with clients, prospects, business partners and IBM sales reps.
Tuesday consisted of "outdoor meetings", but the high winds caused some people to arrive late, and others to land in the various sand traps and water hazards. A "welcome reception" event allowed everyone to socialize and get to know the IBM experts and executives. Two of my colleagues, Mike Stanek and Dave Wyatt, were also with me in Australia last week, and so the three of us were discussing recovery from jet lag.
Wednesday was organized as a main tent event, where everyone met in one large room to hear our strategy, latest set of offerings, and customer testimonials. This was done indoors, of course, which was a good thing as the winds were now gusting up to 50 miles per hour, knocking over windmills and making the local news.
Here's a quick sample from the testimonials:
The event got great reviews, and I look forward to the next one. Until then, enjoy the weekend!
IDC announced that IBM was number #1 in storage hardware (disk and tape combined)for 2006. Here are some excerpts from the IBM press release:
The newly released May 2007 report  by leading industry analyst firm IDC, "Worldwide Combined Disk and Tape Storage 2006 Market Share Update," shows IBM in the #1 overall position for all disk and tape storage hardware for the full year 2006.
Five years ago, IBM was only #3 in this area, but is this new standing from IBM doing things better, or HP and EMC doing things poorly? Probably a little of both, but since it's not polite to point out the flaws of others in a blog, I will focus on what IBM is doing right, and I think our leadership in tape accounts for a good measure of this.
The resurgence of tape comes from a variety of factors:
For more details, see IBM's press release.
In a recent post, ESG Analyst Tony Asaro asks What happened to CAS?
Many often associate CAS with EMC's Centera offering, but with IBM's comprehensive set of compliance storage offerings, EMC doesn't talk about CAS or Centera much anymore. I covered the confusion around CAS in a previous post. When clients ask for "CAS", what they really are looking for is storage designed for fixed content, unstructured data that doesn't change once written. A lot of data falls under this category, such as scanned documents, audio and video recordings, medical images, and so on. Some laws and regulations further require enforcement that the data is not deleted or tampered with, until some time after an event or expiration date is met.
In the past, clients used write-once read-many (WORM) optical media, but today we have disk and tape offerings instead. Since the term "WORM" is inappropriate for disk-based solutions, IBM has standardized on the term "non-erasable, non-rewriteable" (NENR) to describe today's solutions and offerings.
Let's recap what IBM has to offer:
As you see, IBM doesn't limit itself to disk-only offerings. Our leadership in tape allows us to innovate tape and disk-and-tape offerings that can provide more cost-effective solutions to store fixed-content, retention-managed data. The next time you have a conversation with a storage vendor, don't ask for CAS; ask instead for archive and compliance storage. Broaden your mind, and broaden the set of options and choices that might provide a better fit for your requirements.
technorati tags: ESG, analyst, Tony Asaro, EMC, Centera, CAS, IBM, system, storage, DR550, Express, N series, GAM, grid, GMAS, medical, archive, WORM, TS1120, LTO, LTO3, LTO4, NENR, fixed, content, retention
Yesterday, IBM announced a variety of new storage offerings. Our theme this time around was "Policies and Performance". Here's a quick recap.
Our clients tell us they need performance to meet their dynamic business demands, and policies to help them manage the ever growing size of their storage infrastructure. We listened!
technorati tags: IBM, disk, storage, system, May, 2007, announcement, N5300, N5600, Advanced Single Instance Storage, EXN4000, DS8000, SAN Volume Controller, DCS9550, TotalStorage, Productivity Center, System z, Replication, FlashCopy, SVC, policy, performance, HPC, genome, research, rich media
The results are finally in. IBMer Wolfgang Singer was awarded "Top Speaker" award for his NAS and iSCSI tutorial at last year's Orlando 2006 conference. Here he is receiving the award from SNIA Executive Director Leo Leger.
Of course, NAS and iSCSI technologies have been around for a while, but they are still new for many customers, which is why tutorials like this are so important.
Not everyone is clear on these technologies. For example, Dave Hitz asks Is iSCSI SAN or is iSCSI NAS? I Don't Know.
To avoid this confusion, IBM adopted clarifying terminology.
Today was the "First Ever Live Virtual Virtualization Tech Fair" sponsored by IBM and VMware. This was a 1-day event hosted by Unisfair.
The day included presentations delivered over a conference call, along with exhibition booths.
We had an exhibition booth exclusively for "storage virtualization" featuring our IBM System Storage SAN Volume Controller (disk virtualization) and IBM System Storage TS7520 Virtualization Engine (a virtual tape library, or VTL).
People who were logged in were represented in silhouette form. When someone walked into the booth, our army of "booth reps" were able to chat with them and answer their questions. They could also peruse the various online materials we made available about each product.
Here are some of my observations:
technorati tags: IBM, SAN Volume Controller, SVC, TS7520, VTL, disk, system, virtualization, tape, library, EMC, Invista, VMware, SecondLife, Xen, Microsoft, Virtual Server, mainframe, silhouette, IPO
We had a great event today! This was a first-of-a-kind product launch, using Second Life as the medium. We invited IBM Business Partners, industry analysts and reporters from the Press to have their "avatars" in-world to watch us launch new tape systems, archive and retention systems, and disk systems announced this month.
Andy Monshaw, IBM System Storage General Manager, welcomed everyone to the event, and introduced our three speakers. He mentioned that this was a great innovative way to meet, collaborate and forge relationships without the carbon pollution associated with travel required by a more traditional face-to-face meeting. We had attendees from the USA, UK, Germany, Sweden, Italy, Colombia, and Brazil.
All the attendees were given a "goody bag" that contained IBM BP-logo clothing, animations and gestures to be used during the meeting.
Eric Buckley, one of our marketing managers for tape systems, introduced our complete line of LTO 4 tape systems, as well as the TS7520 Virtualization Engine, a virtual tape library for Windows, UNIX and Linux servers. Eric had a virtual 3-D version of an LTO cartridge that is photo-realistic and dimensionally correct.
Funda Eceral, our solutions manager for archive and retention offerings, presented the new version of the IBM System Storage DR550, the DR550 file system gateway, and the IBM System Storage Multilevel Grid Archive Manager. At first we thought we would "pass the microphone" from speaker to speaker, but it turned out to be easier just to give all three speakers their own microphone.
Last, but not least, was David Tareen, marketing manager for disk systems, covering the entry-level DS3000 Express disk system bundles designed for our SMB client. David used a black-and-brown pointer stick to point out specific things on the charts.
After the presentations, Kristie Bell, VP of Marketing for IBM System Storage, hosted a Question & Answer (Q&A) panel. Avatars raised their left hand to indicate they had a question.
We thought it would be a good idea to have a few minutes at the end to socialize over a cup of coffee. This involved making a "coffee machine" that dispensed coffee, and the appropriate animations and gestures so that everyone could sip the coffee, and hold the coffee at waist level when they were talking.
The event was held upstairs in one of the conference rooms of the IBM Briefing Center, located on "IBM 8" island. Many people went to the ground floor to look at the many IBM System Storage products on display. Unlike a picture on a web page, Second Life gives you a 3-D view in which you can walk around each product and get a feel for the size and shape of the hardware.
If you missed the event, you can still visit the IBM Briefing Center. Here is the SLURL: http://slurl.com/secondlife/IBM%208/114/242/23/
We had four photographers and camera-persons on hand to capture still shots, video, audio, and chat text, and are working now to combine them for marketing collateral. I want to thank the builders, script programmers, animators, clothing designers, speakers, editors, and channel enablement team for making this event such a great success!
technorati tags: IBM, tape, LTO4, cartridge, systems, TS7520, VTL, DR550, GAM, GMAS, DS3000, Express, SMB, Andy Monshaw, Eric Buckley, Funda Eceral, David Tareen, Kristie Bell, coffee, socialization, display, floor, briefing center, SecondLife
IBM had some big announcements today. The theme for today's announcement was "Protected Information", as there are many reasons to protect your most strategic asset, your information. Let's do a quick run-down of a few of them.
I've provided all the links, so that you can delve deeply into all the data sheets.
technorati tags: IBM, Tape, TS3500, TS3310, TS3200, TS3100, TS7520, LTO4, LTO3, CIFS, NFS, LTO, Linear Tape Open, DR550, File System Gateway, SAN, switch, SAN32B-3, System Storage, SOX, HIPAA, compliance, regulation, archiving, retention
SNW wrapped up Thursday. As is often the case, a lot of people have left already.
I saw two presentations worth discussing here in this blog.
Continuing my coverage of SNW Spring 2007, Ron and Vincent kicked off Wednesday's main tent sessions with more survey questions:
Q1. How secure is your storage network?
Q2. What was the cause of most downtime in last 12 months?
Thornton May, futurist and columnist for ComputerWorld, presented "Storage 3.0: What Comes After, What Comes Next." I have seen several "futurists" present at conferences like this. They all feel the need to explain what their job is, and what it takes to be one. This time, Thornton indicated he was "ridiculously well-travelled, amazingly well-connected, pathologically observant, and brutally honest." His insights:
Gabriel Broner, General Manager of the newly created "Storage Solutions" division of Microsoft, presented "The Drive to Unified Storage". The people sitting around me asked "What does Microsoft have to do with storage?" He defined "Unified Storage" the way we use it for the IBM System Storage N series: "a storage unit that provides both file and block level protocol support." Microsoft is using "e-mail" as the model for data access, identifying the need to have "off-line" copies on your PC or laptop that are synced up with "on-line" sources. Features that were typically only available for high-end applications are now being made available to the masses, like the "Volume Snapshot" capability in Windows Vista. On the home front, Microsoft recognizes that typically one person acts as the "IT manager" for the family.
He shared their survey of storage spending at Fortune 1000 companies. It was not clear if this covered only Windows environments, or how the data was collected. These numbers don't match what we hear from our UNIX or mainframe customers.
Microsoft is implementing application changes, such as Office 2007, to simplify storage issues. Storage virtualization is the key for the future, he says, stating that Microsoft's "iSCSI target" software support makes files look like block-oriented volumes. Virtualization is now mainstream, and deploying software on standard hardware is the new storage business model. The end goal is to simplify provisioning, device and resource management, without reducing functionality, narrowing the gap between general IT tasks and specific storage tasks.
Craig Lau, of NBC's Olympics coverage, presented their success story. Look at the number of "hours" of TV Olympic coverage over the years:
NBC is now able to deliver 70 hours of TV programming per day, shown across their seven channels (NBC, CNBC, MSNBC, Bravo, USA Network, Telemundo, and HDTV). The Olympics in Torino, Italy generated 25,000 tapes in 17 days. Their 100,000-tape Olympic repository is starting to deteriorate, and they need to consider conversion to digital format. Their challenge was that footage was difficult to find and producers needed immediate access to time-sensitive/critical content.
Their solution was Digital Asset Management: automating indexing and logging, using IP-based workflows that reduce the number of people at the Olympics location, and allowing content to be sent back to the USA for remote editing. The facilities at Torino involved:
NBC is frustrated by the lack of compatibility and interoperability in the video format industry. They have been testing MPEG-1 (1.5 Mbps) formats, and plan to deploy a new system using 1080i for the upcoming 2008 Olympics in Beijing. With the new system, they can index footage by athlete, by event, and by human emotional reaction. They can review and edit footage within 30-45 seconds of live coverage, allowing rough edits to be documented as "Edit Decision Lists" that can be e-mailed or put on a USB key for others to review.
Although I missed Anil Gupta's "Blogger Event" on Monday, several bloggers did stop by to visit me at the IBM booth.
Robin Harris, Tony Pearson, Clark Hodge
The evening finished off with a Gala Dinner, with an award ceremony for Best Practices. Here were the "Honorees":
I survived my first day at SNW Spring 2007. This is my first time at SNW, but it is very much like many of the other conferences I have been to. It officially started Monday morning with pre-conference tutorials and primer break-out sessions that covered storage fundamentals, but I didn't arrive until late Monday night due to high wind conditions at the Phoenix airport that delayed my travel.
Tuesday started out with main tent sessions. Ron Milton, VP of ComputerWorld, which puts on this conference, and Vincent Franceschini, Chairman of the Board for SNIA, kicked off the event. It didn't take them long to get into the alphabet soup: ILM, ITIL, SMI-S, XAM, IMA, MMA, DDF, MF, DMF, IPSF, SSIF, and SRM. Several hundred people had "voting devices" so that they could participate in "informal" surveys.
Q1. What was the greatest need?
The first keynote speaker was Cora Carmody, CIO of SAIC. In the late 1980s and early 1990s, I did a lot of work with SAIC here in San Diego, and so IBM sent me to San Diego quite frequently for face-to-face meetings with them. Her talk was cryptically titled "Jumbo Shrimp, Information Management, and the Mark of the Beast." Coming up with good titles is important. Some of her key points:
IBM's own Barry Rudolph presented "Storage in an Age of Inconvenient Truths", dressed up like Oscar winner and former USA Vice President Al Gore. Barry's focus was on the growing concern over environmental power and cooling issues in the data center. According to IDC, the cost of powering and cooling an individual server, over its lifetime, now exceeds its acquisition cost. Storage devices are not as bad as servers in this regard. Data centers now consume 1.2% of the world's energy.
Over lunch, I heard Tony Asaro from ESG present "The Need for Highly Virtualized Storage Systems within a Virtualized Data Center." His concern is that there is still a "heavy touch" required to manage storage. Without virtualization, your data center is less than the sum of its parts. Although IBM has been doing storage virtualization since 1974, Tony mentioned that most storage vendors were "late to the party". He argues that "internal virtualization" inside storage arrays is not enough; you need "external virtualization" (like the IBM System Storage SAN Volume Controller) to virtualize your entire infrastructure. What storage administrators would like is for storage to have consumer levels of "ease of use", and today's non-virtualized storage environments are nowhere near that.
"The great advantage [the telephone] possesses over every other form of electrical apparatus consists in the fact that it requires no skill to operate the instrument."
I attended a few break-out sessions in the afternoon.
The day ended at the "Expo". I hung out at the IBM booth to help answer questions and network with others.
technorati tags: IBM, SNW, Ron Milton, ComputerWorld, Vincent Franceschini, SNIA, SAIC, Barry Rudolph, Al Gore, Inconvenient Truth, presence awareness, Tony Asaro, ESG, Alexander Graham Bell, Ralph Wescott, Pacific Northwest National Library, Terry+Yoshi, Intel
Today I'm sitting in an airport, delayed due to weather.
Dick Benton of Glasshouse Technologies has an article on SearchStorage.com titled Justifying your storage staffing.
The concept that there should be a linear "Storage Administrators per TB" rule-of-thumb has been around for a while. Back in 1992, I went to visit a customer in Germany who had FIVE storage admins for a 90 GB (yes, GB, not TB) disk array. I told them they only needed three admins, but they cited German laws that prohibited "overtime" work on evenings and weekends.
Later, in 1996, I visited an insurance company in Ohio to talk about IBM Tivoli Storage Manager. They had TWO admins to manage 7TB on their mainframe, and another 45 people managing the 7TB across their distributed systems running Linux, UNIX, and Windows. My first question: why TWO? Only one would be needed for the mainframe, but they responded that they back each other up when one takes a 2-week vacation. My second question to the rest of the audience was: "When was the last time you guys took a 2-week vacation?"
Today, admins manage many TBs of storage. But TBs are turning out not to be a fair ruler for estimating the number of admins you need. It's a moving target, and other factors have more influence than sheer quantity of data. Let's take a look at some of those factors, which we call "the three V's":
So, the key is that there is no simple rule-of-thumb. Fewer admins are needed per TB for mainframe data than for distributed systems data. Fewer admins per TB are needed when you deploy productivity software, like IBM TotalStorage Productivity Center. Fewer admins per TB are needed when you deploy storage virtualization, like IBM SAN Volume Controller or IBM virtual tape libraries.
technorati tags: IBM, disk, storage, infrastructure, SearchStorage.com, Dick Benton, Glasshouse, variety, volume, velocity, storage+administrators, TB, GB, TotalStorage, Productivity Center, SAN Volume Controller, virtual tape library, mainframe, distributed, systems
The "corporate bloggers" from the various storage vendors often mention their opinions about IBM products. Sometimes they say something nice, and other times they poke fun. It's good to read the various opinions. Most are well thought out and well written.
EMC blogger Chuck Hollis has a post about the various categories that industry analyst IDC used for external controller-based disk in their most recent Q4 Storage Scorecard. I agree with Chuck that it is good to have independent analysts that take an objective look across all storage vendors to provide the facts on various makes and models. Both IBM and EMC took marketshare in 4Q, so we can congratulate ourselves and each other for the efforts needed to make this happen.
Chuck mentions that while EMC and HDS high-end boxes are similar, perhaps IBM's "DS" series is different enough to question putting it in the same "high-end" category. It's not clear if Chuck is poking fun at the fact that the IBM DS family spans multiple categories, or making an admission that the IBM DS8300 Turbo is faster than the EMC DMX-3 and HDS USP offerings. Perhaps we need a new category called "super high-end"?
IDC doesn't publish their data by price band, but we can infer from the products in each category how they decided which products were grouped together. Let's examine the entire IBM DS family in the various categories.
Storage is a competitive marketplace. Both EMC and HDS are reputable companies that make quality products that attach to IBM System z mainframe servers. Not all workloads are mission-critical or performance-sensitive. For less critical workloads, perhaps you may find EMC or HDS performance is "good enough".
But if performance is important to you, you should consider IBM on your list of vendors for your next purchase decision. Let IBM help you prove it to yourself, running your specific workloads side by side with your existing equipment.
technorati tags: IBM, EMC, Chuck Hollis, IDC, Q4, storage, disk, scorecard, z/OS, AIX, Linux, Java, DB2, HDS, USP, DMX, SPC, benchmarks, mainframe, System Storage, DS3000, DS4000, DS6000, DS8000, DS8300, Turbo
In case you missed it, IBM unveiled a new digital video surveillance service yesterday. This "marks an important shift in the industry's approach to security, applying advanced analytics to video data and signaling the ability to converge physical and information technology (IT) security."
The IBM Smart Surveillance Solution is designed to provide the unique capability to carry out efficient data analysis of video sequences either in real time or from recordings. These recordings can be on disk or tape storage.
The problem with today's existing "analog" surveillance is that the analog cameras record onto traditional VHS tapes, and these are rotated through, re-written after a few hours or days. To review tapes often involves human intervention, and must be done before the VHS tapes are re-used. Many shoplifters, thieves, and other law-breakers take a chance that their actions will not be caught on tape, or that they will be long gone by the time the video is analyzed.
The IBM Smart Surveillance Solution can provide a number of advantages over traditional video solutions, including:
With real-time analytics capabilities, the new DVS service can open up a wide array of new applications that go far beyond the traditional security aspects of surveillance systems. Early adopter industries in this rapidly evolving market include retail, public sector and financial services. The retail industry estimates nearly $50 billion is lost annually to fraud, theft and administrative errors.
Once in digital format, video surveillance can be sent further, processed quicker, and stored for longer periods of time, than traditional media makes practical today.
Beyond fraud and theft, this kind of solution could also help identify bullies who make death threats in high school.
Today was our annual "State of the Site" meeting for the IBM Tucson site. This facility was completed in 1978, and I started my career here in 1986.
Various employees and teams were recognized for the contributions and dedication. For example:
Our site manager, Terri Mitchell, did a recap of all our recent awards and accomplishments. Of the nine Design Innovation awards won by IBM this year at the CeBIT conference, eight were for IBM System Storage products!
A representative from Tucson's Brewster Center presented Terri an award, thanking IBM for its strong support for the community through various charity initiatives.
The final speaker was a new IBM client, Tony Casella, the IT Director of the town of Marana. The town's recent selection of IBM products made big news. Arizona is the fastest growing state in the USA, and the town of Marana, just north of Tucson, is one of the fastest growing communities in Arizona. The town is growing so large that it will soon spill over from Pima into Pinal county, and will be the first town in Arizona authorized to span county boundaries.
Marana is most famous for its Gallery Golf Club on Dove Mountain that is the new home of the World Golf Championships-Accenture Match Play Championship.
His decision was based on conversations he had with IT directors of other towns and cities, and this November 2006 article in Network World. He held up his copy of the magazine.
Tony was delighted with IBM's solution-oriented approach, rather than just selling more boxes of hardware. He found IBM easy to do business with, and committed to his success.
technorati tags: IBM, Tucson, Tom Beglin, Jack Arnold, Michael Scott, Second Life, Terri Mitchell, CeBIT, design, awards, NEBS, disk, tape, NAS, Tony Casella, Marana, Arizona, Accenture, Golf, Championship, Network World, HP
As an alumnus of the University of Arizona, I am always glad to see any of the Arizona schools try something new and innovative. This time, it was our arch-rivals at Arizona State University (in Tempe, AZ, near Phoenix).
An article in InformationWeek reports that 40,000 ASU Students Leap to Google Apps; University Pays Zero. The ASU president, Michael Crow, wants to make IT the primary driver in his ambitious "New American University" project. Last October, ASU became the first large institution to deploy Google Apps, a comprehensive suite of productivity applications that includes e-mail, search, calendars, instant messaging, and even word processing and spreadsheets. I've tried them out; they work. Nothing fancy, but certainly good enough for college homework assignments.
Already 40,000 students and faculty have switched their e-mail to Google, while keeping their asu.edu designation (out of a student population of 65,000, which Mr. Crow is trying to raise to 90,000!).
E-mail is a thorn in the side of storage administrators. Because e-mail systems are "semi-structured" repositories, administrators cannot just delete or move files around, as there is context between notes and their attachments that shouldn't be broken. E-mail systems are often the fastest growing consumer of storage for many organizations.
Switching from maintaining their own mail servers to Google is saving ASU $500,000 alone, not including the administrator labor savings. Again, some corporations might feel their e-mail is too "secret" to be outsourced like this, but for college students who spend all their creative talent posting things on MySpace and YouTube, and faculty who spend their careers TRYING to get published, they have nothing to hide from the rest of the world. It makes perfect sense.
Best of all, Google isn't charging ASU anything for this service. Google is able to cover the costs from advertising revenue instead. I can think of a lot of companies that might want to advertise to a demographic of "40,000 students who are mostly 18-25 years old and all live in or near Tempe, AZ".
On the news today, they mentioned it was "Happy Pi Day". Today is the 14th day of the 3rd month, and "pi" is about 3.14159, the ratio of the circumference of a circle to its diameter. So, in Tucson it is celebrated on 3/14, at 1:59pm MST.
The ratio has a lot to do with storage.
The value of "pi" has been calculated to over a billion significant digits. Here is a cute applet to use if you ever need the value to any level of accuracy.
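If you'd rather compute the digits yourself, here is a small illustrative sketch in Python (my own, not the applet mentioned above) that computes pi to an arbitrary number of significant digits using Machin's 1706 formula and the standard decimal module:

```python
from decimal import Decimal, getcontext

def arctan_recip(x, prec):
    # arctan(1/x) by Taylor series: sum of (-1)^n / ((2n+1) * x^(2n+1))
    power = Decimal(1) / x          # holds x^-(2n+1), starting at n=0
    total = power
    x2 = x * x
    eps = Decimal(10) ** -prec      # stop once terms fall below precision
    n = 1
    while power > eps:
        power /= x2
        term = power / (2 * n + 1)
        total += -term if n % 2 else term
        n += 1
    return total

def pi_digits(digits=50):
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
    getcontext().prec = digits + 10     # guard digits during the sums
    pi = (16 * arctan_recip(Decimal(5), digits + 10)
          - 4 * arctan_recip(Decimal(239), digits + 10))
    getcontext().prec = digits          # round to the requested size
    return +pi                          # unary plus applies context rounding

print(pi_digits(30))
```

The same idea, with faster-converging series, is how the billion-digit records are computed.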
The blogosphere has quieted down a bit over the two papers on MTBF estimates for Disk Drive Modules (DDM). One article on SearchStorage.com by Arun Taneja asks Is RAID passé? Disk capacity is growing at a faster rate than DDM reliability. During the hours to rebuild a DDM, companies are at risk of additional failures that could require recovery from a copy, or result in data loss, depending on how well your Business Continuity (BC) plan is written and followed.
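To put rough numbers on that risk, here is a back-of-envelope sketch in Python (my own illustration, not from either paper; it assumes independent, exponentially distributed failures, which real drives famously violate):

```python
import math

def p_second_failure(n_remaining, mtbf_hours, rebuild_hours):
    """Chance that at least one of the surviving drives in the array
    fails during the rebuild window, under an exponential model."""
    p_survive_one = math.exp(-rebuild_hours / mtbf_hours)
    return 1 - p_survive_one ** n_remaining

# A 7+1 RAID-5 array has 7 survivors during a rebuild. Compare a
# vendor-rated 1,000,000-hour MTBF against a ~250,000-hour effective
# MTBF (roughly the 3-4% annual failure rates the papers observed).
print(p_second_failure(7, 1_000_000, 24))   # rated spec
print(p_second_failure(7, 250_000, 24))     # field-observed
```

Small as these probabilities look, they add up across hundreds of arrays, and they grow as bigger drives stretch the rebuild window.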
I'll discuss two comments in particular.
Both are fair comments. Disk arrays do run microcode to assist or perform the RAID function, detect failures and start the rebuild process, and so clever designs to support spare disks, process the rebuild quickly, and so on, can differentiate one vendor's offering from another.
On the issue of what IBM provides to help its clients make the right decisions for their environments, Jon William Toigo at DrunkenData points his readers to IBM's Business Continuity Self-Assessment tool. In normal data center conditions, DDMs will fail, and a Business Continuity plan should be written and developed to handle this fact. Using 2-site and 3-site mirroring, complemented with versions of tape backups, can help address some of these concerns and mitigate some of the risks involved with using disk systems.
For those who want a more technical answer, IBM has just published a series of IBM Redbooks.
Tuesday is always good for announcements. Today, Gartner, Inc. announced that IBM has overtaken HP in its climb to the top. I'll quote directly from today's press release:
STAMFORD, Conn., March 6, 2007 — Worldwide external controller-based (ECB) disk storage revenue totaled $15.2 billion in 2006, a 4.1 percent increase over 2005 revenue of $14.6 billion, according to Gartner, Inc. IBM overtook Hewlett-Packard for the No. 2 position in 2006 (see Table 1). IBM’s worldwide ECB market share increased to 15.8 percent, while HP’s market share dropped to 13.1 percent.
IBM beat HP both in 4Q06 and for the 2006 full year. You can read more about it in the Gartner Dataquest report “Market Share: Disk Array Storage, All Regions, All Countries, 1Q05-4Q06" on their website. (Note: non-IBMers might need an account with Gartner to access this, not sure)
The focus was on external controller-based disk: not external controller-less SCSI/SAS disk, not disk arrays posing as virtual tape libraries, nor any disk sold inside HP, Sun, IBM or Dell servers. This is to compare with disk-only vendors such as EMC and HDS. The revenues reflect hardware only, including hardware-related parts of financial leases and managed services. Revenues from optionally priced software features such as multi-pathing drivers, management software, or advanced copy services were excluded. I discussed these types of analyst reports back in a blog post last September: Space Race Heats Up.
These marketshare numbers are based on revenues, not units or terabytes. When a box gets sold, the revenue was counted toward the vendor that sold it, not the manufacturer that built it. In this last report:
Well, this week I am in Maryland, just outside of Washington DC. It's a bit cold here.
Robin Harris over at StorageMojo put out this Open Letter to Seagate, Hitachi GST, EMC, HP, NetApp, IBM and Sun about the results of two academic papers, one from Google, and another from Carnegie Mellon University (CMU). The papers imply that the disk drive module (DDM) manufacturers have perhaps misrepresented their reliability estimates, and asks major vendors to respond. So far, NetApp and EMC have responded.
I will not bother to reiterate what others have said already, but will make just a few points. Robin, you are free to consider this "my" official response if you'd like to post it on your blog, or point to mine, whichever is easier for you. Given that IBM no longer manufactures the DDMs we use inside our disk systems, there may not be any reason for a more formal response.
Tonight I had dinner with Henry Daboub (an SVC expert from Houston, TX) and some clients, who asked what I would blog about tonight, and I figured it made sense to blog about the SVC.
Hu Yoshida clarifies his position about storage virtualization, including the statement: "As a result they can not provide the availability, scalability, and performance of a DS8300. If they could, there would be no need for a DS8300."
Of course, if humans descended from apes, why are there still apes? Now that we have cars, why are there still trains? But perhaps a better question is: now that there are supercomputers, why are there still mainframe servers?
The issue is the difference between scale-up versus scale-out. Scale-up is making a single box as big and beefy as possible. When the SVC was introduced, the major vendors all had scale-up designs: IBM ESS 800, HDS Lightning, EMC Symmetrix. Like the mainframe, they were for customers that wanted everything in a single monolithic container.
SAN Volume Controller was the result of IBM Research asking the question: if you could put anyone's software (feature and functionality) on anyone's hardware (monolithic scale-up design), what combination would you choose? What if the brains inside today's monolithic systems could be snapped into another vendor's frame? What if you could run SRDF on an HDS box, or ShadowImage on an IBM box? The surprising response was that most customers would want a single software stack for consistency, but wanted the option to choose hardware from different vendors, to negotiate the best price on the commodity iron. Based on this feedback, the SVC was born.
The idea was simple: put all the brains in a separate appliance. The appliance would do the non-disruptive migrations, the caching, the striping, and all the copy services. This then lets the customer choose the hardware they want, any mix of FC and ATA disk, from any vendor.
The SVC design was based on IBM's long history in supercomputers. Using the same "scale-out" technology, the power comes not from having it all in one monolithic box, but rather in a design that combines small nodes together. While the cache is not globally shared, the data is shared between node-pairs, and the logical-to-physical mapping is routed around to all nodes in a cluster. Each SVC node talks to each other SVC node through the FCP ports, eliminating the need for additional wiring. For the most part, each node does its own separate work, but when it needs to, they can communicate across, just like nodes in a supercomputer.
Well, I'm back from Mexico.
The flight back was uneventful, except for the leg from Houston to Tucson. The lady in the window seat had "overallocated storage" and required a "distance extension" on her safety belt. To accommodate her, her husband and I flipped up the "logical partitions" between the seats, and "compressed" ourselves to take up less space. Luckily, it was only for two hours.
On the flight to Houston, I was asked what kind of drink I wanted, in Spanish, as the crew were all from Mexico. Here's a quick Spanish lesson:
Before IBM got into an OEM agreement with Network Appliance, I used to indicate that EMC and NetApp were the "Coke and Pepsi" of the NAS marketplace. IBM had a presence, but it was in the single digits, whereas these two major players had roughly equal marketshare, just as Coke and Pepsi dominate equally the US marketplace. That analogy doesn't work in other countries, as in some cases the country might be more heavily in favor of one or the other.
On my flight over from Houston to Tucson, however, I was asked what kind of "pop" I wanted. I always say "soda" to refer generically to soft drinks, but realize that others say "pop" instead. Not only can Americans detect what part of the country people are from by their accent, but also by the words they use.
Now I see a blog that explores in great detail the issue of Pop vs Soda vs Coke.
So, it looks like I'll need to "retire" my Coke vs. Pepsi analogy, not because their marketshare has changed, but because IBM's partnering with NetApp greatly skews the advantage over EMC.
Today, I went looking for reading glasses. Unfamiliar with my surroundings, I asked several people where I might be able to find and purchase these, and was sent in various directions. My first stop was a bookstore. It would make sense that since many people need reading glasses to read the books, they would sell them there, but no. The staff didn't know where I could go, but pointed me in the direction of a mall. At the mall, I found a pharmacy. Many pharmacies sell reading glasses, so I stopped in, but no, not this one. The pharmacists suggested the super-store nearby. I walked in to the super-store, and asked the first employee where they keep their reading glasses, and they said the other corner. The other corner was the electronics department. It made sense that they sold CDs and DVDs in the same section as the equipment that plays them, but reading glasses? Skeptical, I went to the pharmacy department, and the young and beautiful lady (everyone is young, thin and beautiful here) had me follow her, and she led me back to the electronics department, whereupon she pointed to a rack of sunglasses. I indicated that I need reading glasses, not sunglasses. She pulled one out, and it was indeed a pair of reading glasses, 1.25, just what I was looking for. Others were tinted, so you can read the newspaper out in the sunlight. The pair I chose cost only $97 in the local currency.
After reading the last sentence, you might be thinking I am describing my "avatar" in Second Life, but no, I am talking about my search for reading glasses on the streets of Mexico. I am here this week in meetings with IBM Business Partners and sales reps to discuss IBM's latest System Storage products and offerings.
We used to tell people they should "clothe" servers with storage. IBM offers both, so yes, it makes sense to offer both as part of a complete solution. However, when you look up the dictionary definition of "to clothe," you learn it means to dress, wrap or cover with clothing, with an implication that it is external, and perhaps temporary, easily changed, like switching from sunglasses to reading glasses. In Second Life, objects can be "worn", simply by attaching or detaching them to your "avatar". Sometimes clothing serves a purpose, like reading glasses; sometimes it provides protection, like raincoats; and other times it is more decorative, like "icing on the cake" or "gold plating".
This concept was fine 50 years ago, when we were in a server-centric world, and dumb storage devices were attached to very intelligent servers. Back then, we used the derogatory term "subsystems" to emphasize that storage was just part of the server, not a system of its own.
Today, we live in an information-centric world. The information outlives the media, and the media outlives the servers that access it. It is not unreasonable to attach dozens or hundreds of servers to a single storage system, or collection of storage systems. Over 20 percent of IBM System Storage DS8000 series systems, for example, are attached to Windows rack-optimized or blade servers. Imagine a refrigerator surrounded by dozens or hundreds of pizza boxes. Storage is no longer a subsystem, but a system in its own right, dressed, wrapped or covered by servers that deliver the right information, to the right people, at the right time.
So perhaps we should reverse it, telling people they should "clothe" their storage with servers!
I am still wiping the coffee off my computer screen, inadvertently sprayed when I took a sip while reading HDS' uber-blogger Hu Yoshida's post on storage virtualization and vendor lock-in. This blog appears to be the text version of their funny video.
While most of the post is accurate and well-stated, two opinions in particular caught my eye. I'll be nice and call them opinions, since these are blogs, and always subject to interpretation. I'll put quotes around them so that people will correctly relate these to Hu, and not me.
"Storage virtualization can only be done in a storage controller. Currently Hitachi is the only vendor to provide this."
Hu, I enjoy all of your blog entries, but you should know better. HDS is a fairly recent newcomer to the storage virtualization arena, and since IBM has been doing this for decades, I will bring you and the rest of the readers up to speed. I am not starting a blog-fight, just trying to provide some additional information for clients to consider when making choices in the marketplace.
First, let's clarify the terminology. I will use 'storage' in the broad sense, including anything that can hold 1's and 0's, including memory, spinning disk media, and plastic tape media. These all have different mechanisms and access methods, based on their physical geometry and characteristics. The concept of 'virtualization' is any technology that makes one set of resources look like another set of resources with more preferable characteristics, and this applies to storage as well as servers and networks. Finally, 'storage controller' is any device with the intelligence to talk to a server and handle its read and write requests.
Second, let's take a look at all the different flavors of storage virtualization that IBM has developed over the past 30 years.
So, bottom line, storage virtualization can, and has, been delivered in the operating system software, in the server's host bus adapter, inside SAN switches, and in storage controllers. It can be delivered anywhere in the path between application and physical media. Today, the two major vendors that provide disk virtualization "in the storage controller" are IBM and HDS, and the three major vendors that provide tape virtualization "in the storage controller" are IBM, Sun/STK, and EMC. All of these involve a mapping of logical to physical resources. Hitachi uses a one-for-one mapping, whereas IBM additionally offers more sophisticated mappings as well.
In case you haven't noticed, IBM System Storage makes most of their announcements on Tuesdays. IBM announced a lot today, so here is a quick run-down.
IBM continues its market leadership with this new set of features and offerings!
I am back from China, and now glad to be back in the old USA. Last week, someone asked me what would it take to add a specific feature to the IBM System Storage DS8300. The what-would-it-take question is well-known among development circles informally as a "sizing" effort, or more formally as "Development Expense" estimate.
For software engineering projects, the process was simply that an architect would estimate the number of "Lines of Code" (LOC), typically represented in thousands of lines of code (KLOC). This single number would convert to another single number, "person-months", which would then translate to yet another single number, "dollars". Once you had KLOC, the rest followed directly from a formula, average, or rule-of-thumb.
More amazing is that this single number could then determine a variety of other numbers: the total number of months for the schedule, the number of developers, testers, publication writers and quality assurance team members needed, and so on. Again, these were derived using formulas based on past experience with similar projects.
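As an illustration of that chain of single numbers, here is a short Python sketch in the spirit of Barry Boehm's basic COCOMO model. The effort and schedule coefficients are COCOMO's published "organic mode" values; the dollar rate is a made-up placeholder, and none of these are IBM's actual rules-of-thumb:

```python
def sizing_estimate(kloc, cost_per_pm=15_000):
    """KLOC -> person-months -> dollars -> schedule, basic-COCOMO style.
    cost_per_pm (fully loaded $ per person-month) is a placeholder."""
    effort_pm = 2.4 * kloc ** 1.05              # organic-mode effort equation
    schedule_months = 2.5 * effort_pm ** 0.38   # COCOMO schedule equation
    return {
        "person_months": round(effort_pm, 1),
        "dollars": round(effort_pm * cost_per_pm),
        "schedule_months": round(schedule_months, 1),
        "avg_staff": round(effort_pm / schedule_months, 1),
    }

# A hypothetical 50 KLOC feature:
print(sizing_estimate(50))
```

Note the nonlinearity: doubling the KLOC more than doubles the effort, which is part of what makes these single-number estimates so treacherous.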
Earlier in my career, I was the lead architect for DFSMS for the z/OS operating system, and later for IBM TotalStorage Productivity Center, performing these sizing efforts. A famous IBM architect, Frederick P. Brooks, wrote a now-classic book that was required reading when I started at IBM, which was just re-released as The Mythical Man-Month: Essays on Software Engineering, 20th Anniversary Edition. In addition to sound advice, he also offered a formula or two that help with these estimating tasks.
Hardware design introduces a different set of challenges. When I was getting my Master's Degree in Electrical Engineering, it took me and four other grad students a full semester just to design a six-layer, 900-transistor silicon chip that could perform only a single function: multiply two numbers together. At IBM, another book I was given to read was The Soul of a New Machine, documenting six hardware engineers and six software engineers working long hours on a tight schedule to produce a new computer for Data General.
So why do I bring this up now? IBM architects William Goddard and John Lynott are being inducted posthumously this year into the prestigious National Inventors Hall of Fame for their disk system innovation.
Under the leadership of Reynold Johnson, the team developed an air-bearing head to "float" above the disk without crashing into it. Imagine a fighter airplane flying full speed across the countryside at 50 feet off the ground. If you have ever heard the term "my disk crashed", it originally referred to the read/write head touching the disk surface, causing terrible damage.
A uniformly flat disk surface was created by spinning the coating onto the rapidly rotating disk, leaving many of those in lab coats covered with coating liquid at waist level. Developing disk-to-disk and track-to-track access mechanisms proved more challenging, and nearly halted the project. The team, however, was adamant that this problem could be solved, and customers were increasingly asking for random access technology. The result was the "350 Disk Storage Unit" designed for the "305 RAMAC computer", which I talked about a lot last year as part of our "50 years of disk systems innovation" celebration.
Neither Goddard nor Lynott had computing experience prior to joining IBM. Goddard was a former science teacher who briefly worked in aerospace; Lynott had been a mechanic in the Navy and later a mechanical engineer. They didn't have a nice formula based on past experience; they didn't have the benefit of Fred Brooks' advice, or the rules-of-thumb and averages now used to estimate the size of projects. They had to break new ground.
Now that's innovation!
technorati tags: IBM, DS8300, disk, KLOC, sizing, estimate, DFSMS, z/OS, TotalStorage Productivity Center, Frederick Brooks, William Goddard, John Lynott, Mythical Man-Month, Reynold Johnson, RAMAC, 305, 350
In Storage Technology News, Marc Staimer makes his Seven network storage predictions for 2007. Let's take a closer look at each one.
technorati tags: IBM, FRCP, SOX, TotalStorage, Productivity Center, Microsoft, Exchange, Lotus, Domino, DR550, SnapLock, unified storage, NAS, iSCSI, FCP, ROBO, Tivoli, Storage Manager, TSM, Ethernet, AoE, CDP, DB2, Oracle, SAP, VTL, TS7700, TS7510, GPFS, DFSMS, Optical, 3995, 3996, Blue-Ray, D2D2T, DVD