Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Tony Pearson is a Master Inventor and Senior Software Engineer for the IBM Storage product line at the
IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor
to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is
author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services. You can also follow him on Twitter @az990tony.
(Short URL for this blog: ibm.co/Pearson)
Today I spoke at the IBM Think Green Roadshow in Phoenix, Arizona. This is just one stop on a 15-city tour to help make people aware of Green data center issues. Here is the schedule for the remaining cities. Contact your local IBM rep for details.
Victor Ferreira was our moderator and host. He is the site-level executive for the 2000 IBM employees in the Phoenix area, and manages the Public Sector for our Western region.
The first speaker was Dave McCoy, IBM principal in our Data Center services group. He explained IBM's Project Big Green and the Energy Efficiency Initiative, and went into details on how IBM can act as general contractor to design, plan and build the ideal Green Data Center for you. IBM can also retrofit existing buildings with new technologies like stored cooling, optimized airflow assessments, and modular data center floorspace. Not related to energy, but still important to our environment, is IBM Asset Recovery Services, where IBM can take all those old PC monitors, keyboards and other outdated equipment, refurbish them or melt them down to recapture useful metals and plastics, and dispose of the rest in an environmentally-friendly, non-toxic manner.
I was the second speaker, covering "How to get it done". While Dave covered the issues and technologies available, I explained how to put it all into practice. This includes IT systems assessments, health audits, and thermal profiling. Using server and storage virtualization, you can increase resource utilization and reduce energy waste. I also covered IBM's Cool Blue product line, which includes the IBM PowerExecutive software to monitor your IT environment, and the "Rear Door Heat Exchanger" that uses chilled water to remove as much as 60% of the heat coming out of the back of a server rack, greatly reducing hot-spots on the data center floor and allowing you to run the entire room at warmer, less-expensive temperatures.
On the server side, I covered IBM's System z mainframe and the BladeCenter as examples of how innovative technologies can be used to run more applications with less energy. The new System p570, based on the energy-intelligent POWER6 processor, has twice the performance for the same amount of power as its POWER5 predecessor. On the storage side, I explained how Information Lifecycle Management (ILM), storage virtualization, and the use of a blended disk and tape environment can greatly reduce energy costs.
Reps from our many technology partners Eaton, APC, Schneider Electric, Liebert, and Anixter were there to support this event.
The session ended with a Q&A panel with Dave McCoy, myself, and Greg Briner from IBM Global Financing. IBM is able to offer creative "project financing" that can often match the actual monthly savings, resulting in net zero cost to your operational budget, with payback periods as short as 2.5 years.
To learn more about IBM's efforts to help clients create "Green" data centers, click Green Data Center.
To get beyond the simple statistics of vendor popularity, we looked at the number and combinations of vendors with which enterprises work. Many were customers of one or two storage providers, but the rest were customers of up to six storage providers. More than one-third were customers of systems vendors only, bypassing storage specialists.
Comparisons between solutions vendors and storage component vendors are not new. One could argue that this can be compared to supermarkets and specialty shops.
Supermarkets offer everything you need to prepare a meal. You can buy your meat, bread, cheese, and extras all with one-stop shopping. In a sense, IBM, HP, Sun and Dell are offering this to clients who prefer this approach. Not surprisingly, the two leaders in overall storage hardware, IBM and HP, are also the two best positioned to offer a complete set of software, services, servers and storage.
IBM and HP are also the leaders in tape. While Forrester reports that many large enterprises in North America prefer to buy disk from storage specialists, others have found that customers prefer to buy their tape from solution providers. Recently, Byte and Switch reported that LTO Hits New Milestones, where the LTO consortium (IBM, HP, and Quantum) has collectively shipped over 2 million LTO tape drives and over 80 million LTO tape cartridges. Perhaps this is because tape is part of an overall backup, archive or space management solution, and customers trust a solution vendor over a storage specialist.
Where possible, IBM brings synergy between its servers and storage. For example, we just announced the IBM BladeCenter Boot Disk System, a 2U high unit that supports up to 28 blade servers, ideal for applications running under Windows or Linux, and helping to reduce the energy consumption for those interested in a "Green" data center.
Some people prefer buying their meat at the slaughterhouse, bread at the French pastry shop, and so on. Storage specialists focus on just storage, leaving the rest of the solution, like servers, to be purchased separately from someone else. Storage vendors like NetApp, EMC, HDS and others offer storage components to customers that like to do their own "system integration", or to those that are large enough to hire their own "systems integrator".
Storage specialists recognize that not everybody is a "specialty shop" shopper. HDS has done well selling their disk through solution vendors like HP and Sun. EMC sells its gear through solution vendor Dell.
Interestingly, I have met clients who prefer to buy IBM System Storage N series from IBM, because IBM is a solution vendor, and others that prefer to buy comparable NetApp equipment directly from NetApp, because they are a storage component vendor.
I mostly buy my groceries at a supermarket, but have, on occasion, bought something from the local butcher, baker or candlestick maker. And if you are ever in Tucson, you might be able to find Mexican tamales sold by a complete stranger standing outside of a Walgreens pharmacy, the ultimate extreme of specialization. You can get a dozen tamales for ten bucks, and in my experience they are usually quite good. Theoretically, if you get sick, or they don't taste right, you have no recourse, and will probably never see that stranger again to complain to. (And no, before I get flamed, I am not implying any major vendor mentioned above is like this tamale vendor.)
Of course, nothing is starkly black and white, and comparisons like this are just to help provide context and perspective, but if you are looking for a complete IT solution that works, from software and servers to storage and financing, come to the vendor you can trust, IBM.
A few weeks ago, my Tivo(R) digital video recorder (DVR) died. All of my digital clocks in the house were flashing 12:00, so I suspect it was a power spike while I was at the office. The only other item to die was the surge protector, and so it did what it was supposed to do: give up its own life to protect the rest of my equipment. Although somehow, it did not protect my Tivo.
I opened a problem ticket with Sony, and they sent me instructions on how to send it over to another state to get it repaired. Amusingly, the instructions included "Please make a backup of the drive contents before sending the unit in for repair." Excuse me? How am I supposed to do that, exactly?
My model has only a single 80GB drive, so my friend and I removed the drive and attached it to one of our other systems to see if anything was salvageable. It failed every diagnostic test. There was simply not enough readable data for it to be usable elsewhere.
This is typical of many home systems. They are not designed for robust usage, high availability, or any form of backup/recovery process. Some of the newer models have two drives in a RAID-1 configuration, but most have many single points of failure.
And certainly, it is not mission critical data. Life goes on without the last few episodes of Jack Bauer on "24", or the various Food Network shows that I recorded for items I plan to bake some day. For the past few weeks, I have spent more time listening to the radio and reading books. Somehow, even though my television runs fine without my Tivo, watching TV in "real time" just isn't the same.
I suspect that if you gave someone a method to do the backup, most would not bother to use it. People are relying more and more heavily on their home-based information storage systems for digital music, video and cherished photographs. Perhaps experiencing a "loss" will help them appreciate backup/recovery systems much more than they do today.
The smart people at the University of Pittsburgh manage five campuses and over 33,000 students, and needed to create an enterprise storage solution that would give it three key benefits. Of course, they turned to IBM, the number one overall storage hardware vendor, to deliver:
A new storage infrastructure with the capacity to grow with the University of Pittsburgh as needed
Improved system reliability with reduced downtime, and availability 24/7/365
A significantly more manageable storage solution that could lower costs and provide better system efficiency through virtualization
As a result, IBM shipped its 25,000th high-end disk storage system, in this case two IBM System Storage DS8300 models, along with storage virtualization, and other related hardware, software and services, to provide a complete end-to-end solution.
Here is what Jinx Walton, Director of Computing Services and Systems Development at the University of Pittsburgh, had to say about it...
"The University of Pittsburgh supports large enterprise systems, and the number and complexity of new systems continue to grow. To effectively manage these systems it was necessary to identify an enterprise storage solution that would leverage our existing investments in storage, make allocation of storage flexible and responsive to project needs, provide centralized management, and offer the reliability and stability we require. The integrated IBM storage solution met these requirements"
ESG Analyst Tony Asaro talks about the many small storage startups having a Billion Dollar Impact on the storage system industry. Tony has counted over 50 storage system vendors that are now in the marketplace. Is it really that many? Most of the time, the media only focus on the top seven major players, but I agree that big players like IBM should take trends about small startups like this seriously.
EMC blogger Chuck Hollis suggests that this trend might be the start of a squeeze play, where top players and new upstarts squeeze out the middle players like Sun and HDS, in his post Desperate Times In Storage Land?
(His statement that IDC and Gartner have listed EMC as number one in "almost all" market segments is perhaps a bit misleading. IBM is number one in overall storage hardware, as well as leading in tape drives, tape libraries, tape virtualization, and for that matter, disk virtualization. I don't know if IDC or Gartner count EMC Disk Library in the "tape virtualization" category, or if either analyst distinguishes between "cache-based" versus "switch-based" disk virtualization as separate categories. Perhaps Chuck should have qualified this to say "almost all of the market segments that EMC does business in," which of course is better than the other vendors in the middle.)
Often, when looking at disk storage it is easy to focus on comparisons to other disk storage, but disruptive technologies cross boundaries. Already we have seen Flash Memory drives on the IBM BladeCenter, replacing traditional disk drives internal to each blade server. They are smaller than regular disk drives, but big enough to hold the operating system to boot from.
The New York Times has an article by John Markoff, Redefining the Architecture of Memory, that talks about IBM's research on "Racetrack Memory". The article is a good read, but here are some interesting excerpts:
Now, if an idea that Stuart S. P. Parkin is kicking around in an I.B.M. lab here is on the money, electronic devices could hold 10 to 100 times the data in the same amount of space.
Currently the flash storage chip business is exploding. Used as storage in digital cameras, cellphones and PCs, the commercially available flash drives with multiple memory chips store up to 64 gigabytes of data.
However, flash memory has an Achilles’ heel. Although it can read data quickly, it is very slow at storing it. That has led the industry on a frantic hunt for alternative storage technologies that might unseat flash.
Mr. Parkin’s new approach, referred to as “racetrack memory,” could outpace both solid-state flash memory chips as well as computer hard disks, making it a technology that could transform not only the storage business but the entire computing industry.
But ultimately, the technology may have even more dramatic implications than just smaller music players or wristwatch TVs, said Mark Dean, vice president for systems at I.B.M. Research. “Something along these lines will be very disruptive,” he said. “It will not only change the way we look at storage, but it could change the way we look at processing information. We’re moving into a world that is more data-centric than computing-centric.”
This technology has the potential to break some of the physical limitations that are currently worrying disk drive designers. I look forward to seeing how this plays out.
Registration for the "Meet the Storage Experts" event in Second Life will close this week for next week's September 20 event. All IBMers, clients and IBM Business Partners are welcome to attend. We will focus this time on DS3000 and N series disk systems, tape systems, and IBM storage networking gear.
If you miss this one, we plan to have another one in November!
Array-based replication does have drawbacks; all externalised storage becomes dependent on the virtualising array. This makes replacement potentially complex. To date, HDS have not provided tools to seamlessly migrate away from one USP to another (as far as I am aware). In addition, there's the problem of "all your eggs in one basket"; any issue with the array (e.g. physical intervention like fire, loss of power, microcode bug etc) could result in loss of access to all of your data. Consider the upgrade scenario of moving to a higher level of code; if all data was virtualised through one array, you would want to be darn sure that both the upgrade process and the new code are going to work seamlessly...
The final option is to use fabric-based virtualisation and at the moment this means Invista and SVC. SVC is an interesting one as it isn't an array and it isn't a fabric switch, but it does effectively provide switching capabilities. Although I think SVC is a good product, there are inevitably going to be some drawbacks, most notably those similar issues to array-based virtualisation (Barry/Tony, feel free to correct me if SVC has a non-disruptive replacement path).
I would argue that the IBM System Storage SAN Volume Controller (SVC) is more like the HDS USP, and less like the Invista. Both SVC and USP provide a common look and feel to the application server, both provide additional cache to external disk, both are able to provide a consistent set of copy services.
IBM designed the SVC so that upgrades can occur non-disruptively. You can replace the hardware nodes, one node at a time, while the SVC system is up and running, without disruption to reading and writing data on virtual disk. You can upgrade the software, one node at a time, while the SVC system is up and running, without disruption to reading and writing data on virtual disk. You can upgrade the firmware on the managed disk arrays behind the SVC, again, without disruption to reading and writing data on virtual disk.
More importantly, SVC has the ultimate "un-do" feature. It is called "image mode". If for any reason you want to take a virtual disk out of SVC management, you migrate it over to an "image mode" LUN, and then disconnect it from SVC. The "image mode" LUN can then be used directly, with all the file system data intact.
I define "virtualization" as technology that makes one set of resources look and feel like a different set of resources with more desirable characteristics. For SVC, the more desirable characteristics include choice of multi-pathing driver, consistent copy services, improved performance, etc. For EMC Invista, the question is "more desirable for whom?" EMC Invista seems more designed to meet EMC's needs, not its customers. EMC profits greatly from its EMC PowerPath multi-pathing driver, and from its SRDF copy services, so it appears to have designed a virtualization offering that:
Continues the use of EMC PowerPath as a multi-pathing driver. SVC supports drivers that are provided at no charge to the customer, as well as those built into each operating system, like MPIO.
Continues the use of array-based copy services, like SRDF, on the underlying disk. SVC provides consistent copy services regardless of the storage vendor being managed.
A post from Dan over at Architectures of Control explains the anti-social nature of public benches. City planners, in an effort to discourage homeless people from sleeping on benches in parks or sidewalks, design benches that are so uncomfortable to use that nobody uses them. These include benches made of metal that are too hot or too cold during certain months, benches slanted at an angle that dumps you on the ground if you lie down, or benches that have dividers so that you must be in an upright seated position to use them.
This is not a disparagement of split-path switch-based designs. Rather, EMC's specific implementation appears to be designed to continue vendor lock-in for its multi-pathing driver, continue vendor lock-in for its copy services when used with EMC disk, and provide only slightly improved data migration capability for heterogeneous storage environments. Other switch-based solutions, such as those from Incipient or StoreAge, had different goals in mind.
Sadly, my IBM colleague BarryW and I have probably spent more words discussing Invista than all eleven EMC bloggers combined this year. While everyone in the industry is impressed by how often EMC can sell "me, too" products with an incredibly large marketing budget, EMC appears not to have set aside funds for the Invista.
If a customer could design the ideal "storage virtualization" solution that would provide them the characteristics they desire the most from storage resources, it would not be anything like an Invista. While there are pros and cons between IBM's SVC and HDS's TagmaStore offerings, the reason both IBM and HDS are the market leaders in storage virtualization is because both companies are trying to provide value to the customer, just in different ways, and with different implementations.
When new technologies are introduced to the marketplace, it is normal for customers to be skeptical.
My sister is a mechanical engineer, so when she needs to configure a part or component, she can design it on the computer, and then use a "Rapid Prototyping Machine" that acts like a 3D printer to generate a plastic part that matches the specifications. Some machines do this by taking a hunk of plastic and cutting it down to the appropriate shape, and others use glue and powder to assemble the piece.
But not everything is that simple. Harry Beckwith deals with the issue of selling services and software features in his book "Selling the Invisible". How do you sell a service before it is performed? How do you sell a software feature based on new technology that the customer is not familiar with?
Our good friends over at NetApp, our technology partners for the IBM System Storage N series, developed a "storage savings estimator" tool that can provide good insight into the benefits of the Advanced Single Instance Storage (A-SIS) deduplication feature.
I decided to run the tool to analyze my own IBM Thinkpad C: drive (Windows operating system and programs) and D: drive ("My Documents" folder containing all my data files) to see how much storage savings the tool would estimate. Here are my results:
WINXP-C-07G (C: drive)
Total Number of Directories: 1272
Total Number of Files: 56265
Total Number of Symbolic Links: 0
Total Number of Hard Links: 41996
Total Number of 4k Blocks: 2395884
Total Number of 512b Blocks: 18944730
Total Number of Blocks: 2395884
Total Number of Hole Blocks: 290258
Total Number of Unique Blocks: 1611792
Percentage of Space Savings: 20.61
Scan Start Time: Wed Sep 5 14:37:06 2007
Scan End Time: Wed Sep 5 14:53:51 2007

WINXP-D-07H (D: drive)
Total Number of Directories: 507
Total Number of Files: 7242
Total Number of Symbolic Links: 0
Total Number of Hard Links: 11744
Total Number of 4k Blocks: 3954712
Total Number of 512b Blocks: 31610595
Total Number of Blocks: 3954712
Total Number of Hole Blocks: 3204
Total Number of Unique Blocks: 3524605
Percentage of Space Savings: 10.79
Scan Start Time: Wed Sep 5 14:21:16 2007
Scan End Time: Wed Sep 5 14:34:30 2007
I am impressed with the results, and have a better understanding of the way A-SIS works. A-SIS looks at every 4KB block of data, and creates a "fingerprint", a type of hash code of the contents. If two blocks have different "fingerprints", then the contents are known to be different. If two blocks have the same fingerprint, it is still mathematically possible for their contents to differ, so A-SIS schedules a byte-for-byte comparison to be sure they are indeed the same. This might happen hours after the block is initially written to disk, but it is a much safer implementation, and it does not slow down the applications writing data.
(In an effort to provide support in "real time" as data was being written, earlier versions of deduplication had either to assume that two blocks with the same fingerprint were identical, or to take the time to perform the byte-for-byte comparison during the write process. Doing this byte-for-byte comparison when the device is busiest doing write activities causes excessive, undesirable load on the CPU.)
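To make the two-phase approach concrete, here is a minimal Python sketch of fingerprint-then-verify deduplication, assuming 4KB blocks and an SHA-1 fingerprint. It is purely illustrative and not the actual A-SIS implementation; the class and function names are my own.

import hashlib
from collections import defaultdict

BLOCK_SIZE = 4096  # assume 4 KB blocks, as described above

def fingerprint(block: bytes) -> str:
    # Cheap content fingerprint; identical blocks always produce the same value
    return hashlib.sha1(block).hexdigest()

class DedupStore:
    def __init__(self):
        self.blocks = []                # physical blocks actually stored
        self.by_fp = defaultdict(list)  # fingerprint -> candidate block ids

    def write(self, block: bytes) -> int:
        # Phase 1 (write path): store the block and record its fingerprint; no comparisons here
        self.blocks.append(block)
        block_id = len(self.blocks) - 1
        self.by_fp[fingerprint(block)].append(block_id)
        return block_id

    def dedup_pass(self) -> int:
        # Phase 2 (deferred): byte-for-byte verify fingerprint matches, then release duplicates
        freed = 0
        for ids in self.by_fp.values():
            keep = ids[0]
            for dup in ids[1:]:
                if self.blocks[dup] == self.blocks[keep]:  # guards against hash collisions
                    self.blocks[dup] = None                # model releasing the duplicate block
                    freed += 1
        return freed

store = DedupStore()
store.write(b"A" * BLOCK_SIZE)
store.write(b"A" * BLOCK_SIZE)
store.write(b"B" * BLOCK_SIZE)
print(store.dedup_pass())  # 1 duplicate block reclaimed, later and off the write path

The point of the sketch is simply that the expensive comparison happens in the deferred pass, not while the application is writing.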
The estimator tool runs on any x86-based Laptop, personal computer or server, and can scan direct-attached, SAN-attached, or NAS-attached file systems. If you are a customer shopping around for deduplication, ask your IBM pre-sales technical support, storage sales rep, or IBM Business Partner to analyze your data. Tools like this can help make a simple cost-benefit analysis: the cost of licensing the A-SIS software feature versus the amount of storage savings.
I can't believe I have been blogging for a year now!
I have Jennifer Jones from IBM to thank for getting this started. She was my predecessor in the job I have now, and as she was moving on to bigger and better things, during the transition for me to take over she suggested that we start a blog, podcast, or something similar. While there are many blogs and podcasts inside the firewall of IBM, I wanted something accessible to all of our IBM sales team, IBM Business Partners, and existing and prospective clients, and to enable comments for two-way communication. Podcasts are very one-way, so we chose a blog instead. Getting it set up took a while: convincing our own management that this was worthwhile, and working with our legal department on the IBM blogging guidelines of what we can and cannot write about. We finally got it going last year, launching September 1, just in time for our 50 years of disk systems innovation campaign.
It has been a wild ride, a great learning experience, and has proven quite fulfilling for job satisfaction. Here are some observations and lessons I have learned along the way.
Roller is the open source blog server that drives Sun Microsystems' blogs.sun.com employee blogging site, the IBM developerWorks blogs that this blog exists on, thousands of internal blogs at IBM Blog Central, the JRoller Java community site, and hundreds of others worldwide. Whereas there might be fancier blog systems elsewhere that I could have chosen, hosting my blog with IBM developerWorks seemed like a good choice. I can access it from any web-browser capable machine, and enter my blog posts in native HTML, which I develop in the tool itself, or offline with a standard basic text editor like Microsoft Notepad that I can then cut-and-paste back in.
One lesson I learned the hard way was that Roller generates the Permalink URL for each blog post based on the first five words of the title. For that reason, it is important to choose an appropriate and unique title, avoiding the use of punctuation, quotation marks, or pharmaceutical "enhancement products" that might get rejected by SPAM filters. Once chosen, you can't change the title afterwards, as it won't match the Permalink anymore. My blog post "Aperi is (enhancement product) for SMI-S" caused no end of grief to our Press Release team.
Writing blog posts in native HTML is not as hard as it sounds. I am limited to hosting a maximum of 24MB of files, and they can only be jpg, jpeg, gif, png, mp3, pdf or ppt format. So, wherever possible, I point to other websites for content. For those new to blogging, I recommend The Barebones Guide to HTML.
Roller also generates for me a spreadsheet of all my page views for the week. Tracking blog traffic closely is as crazy as checking your company's stock price every day. These "web-stat" e-mails get filed directly into my Bacn folder on Lotus Notes.
In my early advice to bloggers, I mentioned my choice of Bloglines as my RSS feed reader. When I subscribe to a new blog, I specify Full entries, not Partial, which allows me to scan it quickly, but filters out much of the non-text content like videos. It also allowed me to see what my own blog posts looked like from within a reader, so that I can write them appropriately.
I find it valuable to read other blogs, including those written by employees of our toughest competitors. Even if you don't blog yourself, following blogs can be extremely valuable. Be careful what you leave as comments on other blogs; they may come back to haunt you later.
Currently, I track 55 blogs, some about storage, marketing, Web 2.0 issues, Second Life, Linux, or other areas of interest. I prefer blogs that make only 1-5 posts per week, so blogs like LifeHacker and LifeRemix are off my Bloglines list, but are excellent resources when I am searching for something specific. If you think 55 is a lot of blogs, consider Timothy Ferriss' post on How Robert Scoble reads 622 RSS feeds each morning.
I have quite an international readership, so I have to be careful using American idioms and pop culture references. For example, in my blog post IBM acquires Softek, I mentioned "shotgun weddings" and had various responses asking what exactly that meant, all from readers outside the USA. I've learned that sometimes you need to link to an American slang dictionary, or a Wikipedia encyclopedia entry, to explain these terms and phrases.
Technorati currently tracks over 100 million blogs and over 250 million pieces of tagged social media. Getting my blog tracked had some issues. You have to join, then post a "claim" on your own blog. My mistake was having a case-sensitive URL with a mix of upper and lower case letters, but Technorati prefers all lower case. IBM worked with Technorati to get this resolved.
Del.icio.us is a social bookmarking website -- the primary use is to store your bookmarks online, which allows you to access the same bookmarks from any computer and add bookmarks from anywhere, too. On del.icio.us, you can use tags to organize and remember your bookmarks, which is a much more flexible system than folders.
I use Firefox, Safari, Dillo and Internet Explorer web browsers, so it is nice that I have access to all my bookmarks in the same consistent manner. When I see content on a website that I might like to reference later in a blog, I tag it with del.icio.us so that I can get to it later.
Fellow GTD-ers will quickly recognize this acronym, but for the rest of you, it refers to David Allen's book "Getting Things Done®". This is a great book! I learned about it reading other people's blogs, and found it incredibly useful in helping me organize my time. There are various online tools available to help employ this method. I use Lotus Connections Activities for group projects with co-workers at IBM, and BackPack for projects with my friends outside of work.
The success of YouTube encouraged IBM to launch IBM TV, a portal for IBM's video and multimedia assets, to make it easier for IBM employees, customers, partners and prospects to access and view IBM multimedia. The plan is to have eight anchor episodes per year, professionally hosted by TV personality Joe Washington, and to point to related offers and other resources for viewers to learn more.
Blogging also introduced me to Second Life. I asked around if anyone else within IBM was using Second Life, and discovered quite a few. I got invited to join our internal Eightbar group, and participated in various events, including an IBM Holiday party that I discussed in my blog post "Building a Snowman in Second Life".
In April, we had a launch of our newest products in Second Life, and we plan to have two more Second Life events, September 20 and another in November, staged as "Meet the Experts" question and answer panels.
I wrap up with Facebook. Actually, whereas most of my Web 2.0 efforts have been work-related, I have quite a few friends and family who follow my blog. Several were inspired to start their own blogs, such as Passages from Pam and Barry Whyte on Storage Virtualization. Bridging the gap is Facebook, something I can use to keep tabs on my friends, as well as my storage industry-related contacts.
Wow, that's quite a lot in one year. Well, I am done with my meetings down here in Sao Paulo, Brazil. My colleagues and I are returning tonight to enjoy the long Labor Day weekend.
August 31 is my good friend Jim Cosentino's retirement day as a full-time employee at IBM. After over 30 years at IBM, in various marketing, sales and consulting roles, he is going to be thinking about happy things instead of working. His last seven years have been at the IBM Poughkeepsie Customer Executive Briefing Center as the lead System Storage presenter.
The past few years, I've traveled with him around the world on various business trips, teaching our IBM sales force and IBM Business Partners about our System Storage offerings, and presenting to clients. He is a class act, always positive, laughing, seeing the bright side of things.
While "spend more time with his family" has become a business cliche, I know Jim will actually enjoy his retirement years, spend more time with his family, take on other pursuits and hobbies, and perhaps do some more traveling.
Jim, if you are reading this, I have one suggestion. I know you have lots of friends within IBM, and count myself as one of them, but may I suggest that your first goal be to make at least three new friends, to help you in your transition to retirement.
Congratulations Jim! Enjoy your well-deserved retirement!
If you are ever down in Sao Paulo, Brazil, may I suggest not drinking "American amounts" of their "Brazilian Coffee". The coffee here is "robust", to say the least.
Yesterday, my blog focused on IBM iSCSI offerings that were announced in August. Also announced earlier this month, the Integrated Removable Media Manager (IRMM) on System z has been years in the making. IRMM is a new robust systems management product for Linux® on IBM System z™ that manages open system media in heterogeneous distributed environments and virtualizes physical tape libraries. IRMM combines the capacity of multiple heterogeneous libraries into a single reservoir of tape storage that can be managed from a central point. By providing an integrated solution with the opportunity for both mainframe z/OS DFSMSrmm and distributed Tivoli® Storage Manager™ environments to be managed by IRMM, System z can now be a hub for the management of removable media.
The people who thought the "Mainframe is obsolete", and those that thought "Tape is dead", are both proven wrong again with this announcement. People are looking to deploy robust tape automation for backup and archive, and this convergence with mainframe makes perfect sense by providing business value that extends to other distributed systems.
The proof-of-concept that IBM Haifa research center developed back in 1998 became what we now call the iSCSI protocol. The book iSCSI: The Universal Storage Connection introduces the history as follows:
In the fall of 1999 IBM and Cisco met to discuss the possibility of combining their SCSI-over-TCP/IP efforts. After Cisco saw IBM's demonstration of SCSI over TCP/IP, the two companies agreed to develop a proposal that would be taken to the IETF for standardization.
There are three ways to introduce iSCSI into your data center:
Through a gateway, like the IBM System Storage N series gateway, that allows iSCSI-based servers to connect to FC-based storage devices
Through a SAN switch or director, where an FC-based server can access iSCSI-based storage, an iSCSI-based server can access FC-based storage, or iSCSI-based servers can attach to iSCSI-based storage.
Directly through the storage controller.
IBM has been delivering the first method with its successful IBM System Storage N series gateway products, but today we have announced additional support for the second and third methods. Here's a quick recap.
New SAN director blades
Supporting the second method, the IBM TotalStorage SAN256B Director is enhanced to deliver iSCSI functionality with a new M48 iSCSI Blade, which includes 16 ports (8 Fibre Channel ports and 8 Ethernet ports for iSCSI connectivity). We also announced a new Fibre Channel M48 Blade, which provides 10 Gbps Fibre Channel Inter Switch Link (ISL) connectivity between SAN256B Directors.
With support for Boot-over-iSCSI, diskless rack-optimized and blade servers can boot Windows or Linux over Ethernet, eliminating the management hassles of internal disk.
All of this is part of IBM's overall push into the Small and Medium size Business marketplace, making it easier to shop for and buy from IBM and its many IBM Business Partners, easier to deploy and install storage, and easier to manage the storage once you have it.
In his blog Rough Type, Nick Carr asks Where is my CloudBook? and points to John Markoff's 2-part series in the New York Times on computing in the clouds. (Read it here: Part 1, Part 2)
At first, I thought he meant computing while in an airplane, but instead, he is talking about computing on a laptop or other hand-held device that has no internal disk drive, no installed operating system, and no internal data storage. Instead, the idea is that you boot from a CD, and access your data, and even some of your programs, over the internet. John used an Ubuntu Linux LiveCD in his example.
This week, I am in Sao Paulo, Brazil, and was "in the clouds" for over 10 hours flying from Dallas to here. The one time I am guaranteed "off-line" from the internet is on the plane, and I spend enough time on planes that I am able to get work done despite being "disconnected".
The same reasons people want to get out of having a disk drive in their laptop are the reasons data centers are getting out of internal disk on their servers:
Disks crash, and typically are not protected by any RAID configuration on most laptops
Operating systems get infected with viruses and malware
Storage on one server is generally inaccessible to every other server
Booting from CD is especially clever. No more worrying about fixing your Windows registry, viruses, corrupted operating system files, or the cruft that accumulates on your C: drive and slows you down. The CD is the same every time, so it is like running your system with a freshly installed operating system every day.
The need for central repositories of data harkens back to the years of the IBM mainframe. Of course, what made sense back then continues to make sense now. The old 3270 terminals stored no data, and instead merely provided keyboard input and displayed text screen output from the vast amount of data stored on the central system. Today, the inputs are different, using your finger or mouse instead to point to what you want, sliding it across to make things happen, and the output may now include photos, audio and video, but the concept is still the same.
I carry my Ubuntu Linux LiveCD with me on every business trip. Combined with externally rewriteable media, such as a USB key, you can get work done even when you are in an airplane, and upload it when you are back on the net.
The IBM Storage and Storage Networking Symposium concludes today. As typical for many such conferences, it ended at noon, so that people can catch airline flights.
TS1120 Tape Encryption - Customer Experiences
Jonathan Barney has implemented many deployments of tape encryption, and shared his experiences at two customer locations.
The first company had decided to implement their EKM servers on dedicated 64-bit Windows servers. They had three sites, one each in Chicago, Alpharetta, and New York City, each with two EKM servers. Each site had a single TS3500 tape library, which pointed to four EKM servers, two local and two remote.
The clever trick was managing the keystore. They decided that EKM-1 was their trusted source, made all changes to that, and then copied it to the other five EKM servers. His team deployed one site at a time, which turned out to be OK, but he would not recommend it. Better to design your complete solution, and make sure that all libraries can access all EKM servers.
This company decided to have a single key-label/key-pair for all three locations, but change it every 6 months. You have to keep the old keys for as long as you have tapes encrypted with those keys, perhaps 10-20 years. The customer found the IBM encryption implementation "elegant", and it can be easily replicated to a fourth site if needed.
The second company had both z/OS and Sun Solaris. Initially they planned to have both a hardware-based keystore on System z and a software-based keystore on Sun, but they realized that the System z version was so much more secure and reliable that it made no sense to have anything on the Sun Solaris platform.
On System z, they had two EKM images, and used VIPA to ensure load balancing from the library. Tapes written from z/OS used DFSMS Data Class to determine which tapes are encrypted and which aren't. All tapes written from Sun Solaris were encrypted, written to a separate logical library partition of the TS3500, which in turn contacted the System z for the EKM to provide the keys to use for the encryption.
The "gotcha" for this case was that when they tested Disaster Recovery, they had torecover the two EKM servers first, before any other restores could take place, and thistook way too long. Instead, they developed a scaled-down 10-volume "rescue recovery" z/OS image that would contain the RACF database and all EKM related software to actas the keystore during a disaster recovery. Anytime they make updates, they only haveto dump 10 volumes to tape. Restore time is down to only 2 hours.
He gave this advice for deploying tape encryption:
Some third party z/OS security products, like Computer Associates Top Secret or ACF2, require some PTFs to work with the EKM. The latest IBM RACF is good to go.
Getting IP support from IOS to OMVS requires an IPL.
At one customer, an OMVS monitor software program killed the EKM because it wasn't in their list of "acceptable Java programs". They updated the list and EKM ran fine.
Do not update the EKM properties file while EKM is running. EKM keeps a lot of stuff in memory, and when it is recycled, it copies this back to the EKM properties file, reversing any changes you may have made. It is best to shut down EKM, update the properties file, then start EKM back up again. This is why you should always have at least two EKM servers for redundancy.
TSM for Linux on System z
Randy Larson from our Tivoli group presented this session. There is a lot of interest in deploying IBM Tivoli Storage Manager backup and archive software on Linux for System z. Many customers are already invested in a mainframe infrastructure, may have TSM for z/OS or z/VM, and want the newer features and functions that are available for TSM on Linux.
TSM has special support for Lotus Domino, Oracle, DB2 and WebSphere Application Servers. TSM clients can send backup data to a TSM server internally via HiperSockets, a virtual LAN feature on the System z platform that uses shared memory to emulate a TCP/IP stack.
One of the big questions is whether to run Linux as guests under z/VM, or natively in an LPAR. The general deployment is to carve out an LPAR and run Linux natively until your server and storage administration staff have taken z/VM training classes. Once trained, they can easily move native LPAR images to z/VM guests. Unlike VMware, which takes a hefty 40% overhead on x86 platforms to manage guests, z/VM only takes 5-10% overhead.
For the TSM database and disk storage pools, Randy recommends FC/SCSI disk with the ext3 file system, combined with LVM2 into logical volumes. ECKD disk and reiserfs work too. Avoid the use of z/VM minidisks. Under LVM2, consider 32KB stripes for the TSM database, and 256KB stripes for the disk storage pools. For multipathing, use the failover rather than the multibus method. Read IC45459 before you activate "directio".
TSM for Linux on z is very much like TSM on AIX or Windows, and not like TSM for z/OS. For tape, TSM for Linux on z does not support ESCON/FICON-attached tape; you need to use FC/SCSI-attached tape and tape libraries. TSM owns the library and the drives it uses, so give it a logical library partition separate from z/OS. For Sun/StorageTek customers, TSM works with or without the Gresham Enterprise DistribuTape (EDT) software. Use the IBM-provided drivers for IBM tape. For non-IBM tape, TSM provides some drivers that you can use instead.
That wraps up my week. This was a great conference! If you missed it, look for the one in Montpellier, France this October. Check out the list of IBM Technical Conferences to find others that might interest you.
The IBM Storage and Storage Networking Symposium continues ...
DS8300 Benchmark for Global Mirror
Phil Allison of Fidelity National Information Services presented his success switching from the competition over to IBM DS8300 disk systems for use with Global Mirror. They used Performance Associates' famous PAIO driver to help with the benchmark testing. They ran the benchmarks at 2x and 3x their current workloads to see how well the DS8000 performed, measuring IOPS, MB/sec, and millisecond response time (msec). They were very impressed with their results, staying below their target of 0.8 msec for most of their runs.
For Global Mirror, they did a performance "bake-off" between the Ciena CN2000 and the Cisco 9216i. These are implemented differently. Ciena uses a Layer-2 approach, encapsulating the Fibre Channel packets directly for transport as SDH/SONET or Gigabit Ethernet (GigE), which required dedicated circuits between Jacksonville, Florida and Little Rock, Arkansas. By contrast, Cisco uses a Layer-3 approach, encapsulating Fibre Channel packets within an IP packet, which can leverage the existing datacenter-to-datacenter backbone.
To add stress to the benchmarks, they used a "network impairment" emulator. These artificially inject errors, drop packets, and create other signal loss conditions. Running both Cisco and Ciena under these tests helped them decide which to purchase, but also reinforced the idea that they made the right choice in choosing IBM for their remote distance mirroring solution.
Comparison of Bare Machine Recovery Techniques
"Bare machine recovery" is the phrase used to restore a machine that has no operating system installed (or thewrong operating system). Dave Canan from IBM Advanced Technical Support did a great job reviewing the variousproducts and techniques available, and the pros and cons of each approach. The ones he covered were:
Tivoli Storage Manager - install a fresh Windows operating system and the TSM client, and then follow certain steps
Automated System Recovery (ASR) - a new feature of Windows XP and Windows 2003 that works with the TSM client
Symantec Ghost - formerly called PowerQuest Drive Image, there are now two versions: Ghost Home Edition and Ghost Corporate Solution Suite
Cristie Bare Machine Recovery (CBMR) - This is an IBM partner that provides both Linux and Windows PE versions. Cristie includes a license for Windows PE, so there is no need to use the alternative Bart PE method.
SAN Volume Controller - Customer Experience
Bill Giles of Catholic Medical Center, a hospital in New Hampshire, presented his experiences with IBM System Storage SAN Volume Controller. They have a mix of IBM System x, System p, and System i servers, as well as machines from HP, Sun, and Dell. For applications, they have a Picture Archiving and Communication System (PACS) for cardiology and radiology, an HL7 interface engine, a Clinical Information System, TSM for backup, and Microsoft Exchange for e-mail.
They deployed SVC with hosts running AIX, Solaris, Windows 2000 and 2003. They were delighted with the results:
Centralized Storage Provisioning
Consolidation of disparate storage into a universal platform
Non-disruptive data migration
Increased utilization of existing disk resources
Improved disaster recovery with FlashCopy and Metro Mirror
Birds of a Feather (BOF) sessions
We had two BOFs, one for storage attached to System z operating systems, and another for storage attached to Linux, UNIX and Windows systems. This distinction made sense when mainframes could only attach to CKD disks and ESCON/FICON tape, and distributed systems could only do FCP/SCSI, but these days, there are all kinds of convergence going on.
Linux on System z can now attach via FCP to LTO tape and SAN Volume Controller, allowing a wide range of storage options for that platform. z/OS, z/VM, z/VSE and Linux on System z can all access IBM System Storage N series via NFS.
The format was a traditional Q&A panel: we had experts at the front of the room handling the questions and discussion topics brought up by the audience. I'll spare you the individual questions and answers.
The IBM Storage and Storage Networking Symposium in Las Vegas continues ...
N series and VMware
Jeff Barnett presented how VMware manages disk image files in its VMFS repository, and how N series offers a better alternative. Virtual machines can access N series volumes directly.
Business Continuity with System i
Allison Pate presented the various Business Continuity options for System i. Many customers use internal storage for System i, but this hampers Business Continuity efforts. Instead, you can have IBM System Storage DS8000 or DS6000 series disk systems provide disk mirroring between clustered systems.
There was a lot of interest in DR550, one of our many compliance storage solutions. Ron Henkhaus presented an overview of our DR550 and DR550 Express offerings. Unlike competitive disk-only solutions, such as the EMC Centera, the DR550 allows you to attach an automated tape library, managing large amounts of fixed content data at a much lower cost point. It also has encryption, for both disk and tape data.
Open Systems Disk Management
Siebo Friesenborg presented the various steps needed to troubleshoot performance problems with open systems, including the use of "iostat" on AIX systems as an example, and the steps you can take to make formal Service Level Agreements (SLA) between the IT department and the various lines of business.
IBM Encryption - TS1120 and LTO-4 encryption comparison
Tony Abete presented TS1120 and LTO-4 encryption techniques. Deploying encryption is more than just choosing a tape drive. There are a variety of factors involved, such as whether to manage the keys from the application, the operating system, or the library manager. You need policies to decide when to encrypt tapes and when not to, and for generating your keys, storing them, and sharing them with the business partners, suppliers and service providers to which you send tapes.
I can tell that many people are feeling like they are "drinking from a firehose". IBM's success in storage reaches out to so many different aspects of information management, a variety of industries, and disciplines as varied as regulatory compliance and medical imaging.
Registration is now open for our next "Meet the Storage Experts" event in Second Life. All IBMers, clients and IBM Business Partners are welcome to attend. We will focus this time on DS3000 and N series disk systems, tape systems, and IBM storage networking gear.
The blog team is working on re-directs for those who don't see this in time. Depending on which RSS feed reader you use, you may need to unsubscribe/re-subscribe to re-activate. You can update the URL for the feed to one of these:
Continuing this week in Las Vegas, we had a great set of sessions today.
Fibre Channel Overview
I like the manner in which Jim Robinson presented this "basics" session on how Fibre Channel works, why it is spelled "Fibre" not "Fiber", and how all the different layers work in the protocol.
IBM Virtualization Engine TS7700 series
Jim Fisher from the IBM Tucson lab presented the TS7700 series, which replaces our Virtual Tape Server (VTS). He had performance numbers to show that it was faster in various measurements against the B20 model of the VTS. It is supported on the z/OS, z/VM, z/VSE, TPF and z/TPF operating systems.
IBM E-mail Archiving and Storage solution
Ron Henkhaus provided an overview of IBM's E-mail Archive and Storage appliance. The solution combines an IBM BladeCenter server blade, a DS4200 series with SATA disk, and pre-installed software: IBM Content Manager, IBM Records Manager, IBM CommonStore for Lotus Domino and Microsoft Exchange, and IBM System Storage Archive Manager. Services are included to get it connected to your e-mail environment.
Lee La Frese from our Tucson performance lab presented various performance features of the IBM System Storage DS8000 series, and how they compare to the competition.
First, some interesting statistics.
Back in 2002, the average high-end Enterprise Storage Server (ESS) model F20 was configured for only 4 Terabytes (TB). In 2004, the average ESS was up to 12 TB. Today, the average DS8100 is 17.4 TB and the average DS8300 is 41.5 TB.
51 percent of DS8000 series systems are configured for FCP only (Linux, UNIX, Windows, i5/OS), 35 percent for FICON only (System z mainframe), and 14 percent for a mix of both.
Average I/O density has stabilized at about 0.6 IOPS per GB. This means that for every TB of business data, you can expect most applications to issue 600 Input/Output requests per second.
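As a rough sanity check for sizing, that rule of thumb converts directly from data footprint to expected I/O load. Here is a tiny illustrative Python calculation; the 0.6 IOPS/GB figure and the average capacities come from the statistics above, and the rest is simple arithmetic, not a formal sizing tool.

IOPS_PER_GB = 0.6  # average I/O density cited above

def expected_iops(capacity_gb: float) -> float:
    # Estimate steady-state I/O requests per second for a given amount of business data
    return capacity_gb * IOPS_PER_GB

for tb in (1.0, 17.4, 41.5):  # 1 TB, average DS8100, average DS8300
    print(f"{tb:5.1f} TB -> ~{expected_iops(tb * 1000):,.0f} IOPS")
# 1 TB of business data works out to roughly 600 I/O requests per second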
While IBM SAN Volume Controller has the fastest SPC-1 and SPC-2 benchmarks, the DS8000 also has good results. Looking at just the monolithic "scale-up" systems, the DS8000 has the fastest SPC-1, and second place for SPC-2.
Compared against the EMC DMX-3, the IBM DS8000 series has superior performance. For example, comparing 2Gbps port performance on each, the DMX-3 is able to do 20 IOPS per port, compared to the DS8000 with 38 IOPS per port. Compared against the HDS USP, the response time at 60,000 IOPS for HDS averaged 10.5 milliseconds (msec), compared to less than 6.5 msec for the IBM DS8000.
There are some unique features of the DS8000 to optimize performance. Two are Adaptive Multi-stream Prefetching (AMP), which helps improve processing of database queries, and HyperPAV, which helps on mainframe workloads.
For FATA disks, performance of sequential reads and writes is only 20 percent less than 15K RPM FC disks, but a whopping 50 percent less for random access. Consider using FATA for audio/video streaming, surveillance data, seismic recordings, and medical imaging.
Comparing 146GB 10K versus 300GB 15K drives from a capacity perspective was interesting. 37TB of 300GB 15K drives had 20 percent better response time, but 25 percent less maximum throughput, than 37TB of 146GB drives. Depending on your workload, this can help decide which you choose.
Lee also covered RAID rebuild performance. When an individual HDD that is part of a RAID group fails, the DS8000 performs a rebuild onto a spare drive. A RAID-5 rebuild is processed at 52 MB/sec, compared to RAID-10 at 56 MB/sec. Rebuild processing is low priority, so any other workload will take higher priority to avoid impacting application performance. Compared to EMC, the IBM DS8000 can rebuild a RAID-5 73GB 15K RPM drive in only 24 minutes, while it takes 37 minutes to do this on a DMX-3. That is 13 minutes of additional exposure where a second drive failure might cause you to lose all your data in that RAID group!
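Those rebuild figures are easy to sanity-check from the drive capacity and rebuild rate. A minimal Python sketch of the arithmetic, assuming decimal gigabytes and a constant rebuild rate (both simplifications on my part):

def rebuild_minutes(drive_gb: float, rate_mb_per_s: float) -> float:
    # Time to reconstruct one failed drive onto a spare at a steady rebuild rate
    return (drive_gb * 1000) / rate_mb_per_s / 60

print(f"RAID-5 rebuild of a 73GB drive at 52 MB/sec:  {rebuild_minutes(73, 52):.1f} minutes")  # ~23.4
print(f"RAID-10 rebuild of a 73GB drive at 56 MB/sec: {rebuild_minutes(73, 56):.1f} minutes")  # ~21.7
print(f"Extra exposure versus a 37-minute rebuild: {37 - 24} minutes")

The first result lands close to the 24 minutes quoted above, which is a useful check that the rebuild rate and the elapsed time describe the same process.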
N series ILM and Business Continuity
James Goodwin from our Advanced Technical Support team presented IBM System Storage N series features that relate to ILM and Business Continuity. He covered features like SnapShot, SnapLock, SnapVault and LockVault.
I have arrived safely in Las Vegas for the IBM System Storage and Storage Networking Symposium. This event is held once every year. The gold sponsors were Brocade, Cisco, Finisar, Servergraph, and VMware. Our silver sponsor was QLogic.
I presented IBM's System Storage strategy and an overview of our product line. For those who missed it, our strategy is focused on helping customers in four key areas:
Optimize IT - to simplify and automate your IT operations and optimize performance and functionality, through server/storage synergies, storage virtualization, and integrated storage infrastructure management.
Leverage Information - to enable a single view of trusted business information through data sharing, and to get the most value from information through Information Lifecycle Management (ILM).
Mitigate Risk - to comply with security and regulatory requirements, and keep your business running with a complete set of business continuity solutions. IBM offers a range of non-erasable, non-rewriteable storage, encryption on disk and tape, and support for IT Infrastructure Library (ITIL) service management disciplines.
Enable Business Flexibility - to provide scalable solutions and protect your IT investment through the use of open industry standards like Storage Networking Industry Association (SNIA) Storage Management Initiative Specification (SMI-S). IBM offers scalability in three dimensions: Scale-up, Scale-out, and Scale-within.
IBM has a broad storage portfolio, in seven offering categories:
Disk Systems, including our SAN Volume Controller, DS family, and N series.
Tape Systems, including tape drives, libraries and virtualization.
Storage Networking, a complete set of switches, directors and routers
Infrastructure Management, featuring the IBM TotalStorage Productivity Center software
Business Continuity, advanced copy services and the software to manage them
Lifecycle and Retention, our non-erasable, non-rewriteable storage including DR550, N series with SnapLock, and WORM tape support, Grid Archive Manager and our Grid Medical Archive Solution (GMAS)
Storage Services, everything from consulting, design and deployment to outsourcing and hosting.
I could talk all day on this, but given that the room was packed, every seat taken and the rest of the audience standing along the walls, I had to keep it down to one hour.
SAN Volume Controller Overview
I presented an overview of the IBM System Storage SAN Volume Controller (SVC), IBM's flagship disk virtualization product. Rather than giving a long laundry list of features and benefits, I focused on the five that matter most:
Reduces the cost and complexity of managing storage, especially for mixed storage environments
Simplifies Business Continuity through non-disruptive data migration and advanced copy services
Improves storage utilization, getting more value from the storage hardware you already have
Enhances personnel productivity, empowering storage administrators to get their job done
Delivers high availability and performance
SAN Volume Controller - Customer Success Stories
A good part of this conference is presented by non-IBMers, including Business Partners and clients sharing their experiences. In this session, we had two speakers share their experiences with SVC.
David Snyder keeps over 80 web sites online and available. His digital media technologies team uses SVC to make their storage administration easier, and ensure high availability for web site content creation and publishing.
Mark Prybylski manages storage at his company, a financial bank. His storage management team uses SVC Global Mirror, which provides asynchronous disk mirroring between different types of disk, as part of their Business Continuity/Disaster Recovery plan.
The last session I attended was "Storage .. to Optimize your ECM Deployments" by Jerry Bower, now working for IBM as part of our recent acquisition of the Filenet company. ECM stands for Enterprise Content Management, and IBM is the market leader in this space. Jerry gave a great overview of the IBM Content Manager software suite, our newly acquired Filenet portfolio, and the storage supported.
After the sessions was a reception at the Solution Center with dozens of exhibitor booths. For example, Optica Technologies had their PRIZM products, which are able to connect FICON servers to ESCON storage devices.
I am back at "the Office" for a single day today. This happens often enough that I need a name for it. Air Force pilots who practice landings and take-offs call them "Touch and Go", but I think I need something better. If you can think of a better phrase, let me know.
This week, I was in Hartford, CT, Somers, NY, and our Corporate Headquarters in Armonk, in a variety of meetings, some with editors of magazines, others with IBMers I had only spoken to over the phone and finally got a chance to meet face to face.
I got back to Tucson last night, had meetings this morning in Second Life, then presented "Information Lifecycle Management" in Spanish to a group of customers from Mexico, Chile, and Brazil. We have a great Tucson Executive Briefing Center, and plenty of foreign-language speakers to draw from among our local employees here at the lab site.
Sunday, I leave for Las Vegas for our upcoming IBM Storage and Storage Networking Symposium. We will cover the latest in our disk, tape, storage networking and related software. Do you have your tickets? If you plan to attend, and want to meet up with me, let me know.
Last week, a writer for a magazine contacted us at IBM to confirm a quote that writing a Terabyte (TB) of data on disk saves 50,000 trees. I explained that this was cited from UC Berkeley's famous How Much Information? 2003 study.
To be fair, the USA Today article explains that AT&T also offers "summary billing" as well as "on-line billing", but apparently neither of these are the default choice. I can understand that phone companies send out bills on paper because not everyone who has a phone has internet access, but in the case of its iPhone customers, internet access is in the palm of your hands! Since all iPhone customers have internet access, and AT&T knows which customers are using an iPhone, it would make sense for either on-line billing or summary billing to be the default choice, and let only those that hate trees explicitly request the full billing option.
Sending a box of 300 pages of printed paper is expensive, both for the sender and the recipient. This information could have been shipped less expensively on computer media, a single floppy diskette or CD-ROM for example. For those who prefer getting this level of detail, a searchable digitized version might be more useful to the consumer.
Which brings me to the concept of Information Lifecycle Management (ILM). You can read my recent posts on ILM by clicking the Lifecycle tab on the right panel, or my now infamous post from last year about ILM for my iPod.
His recollection of the history and evolution of ILM fairly matches mine:
The phrase "Information Lifecycle Management" was originally coined by StorageTek in the early 1990s as a way to sell its tape systems into mainframe environments. Automated tape libraries eliminated most if not all of the concerns that disk-only vendors tout as the problem with manual tape. I began my IBM career on a product now called DFSMShsm, which specifically moved data from disk to tape when it no longer needed the service level of disk. IBM had been delivering ILM offerings since the 1970s, so while StorageTek can't claim to have invented the concept, we give them credit for giving it a catchy phrase.
EMC then started using the phrase four years ago in its marketing to sell its disk systems, including slower less-expensive SATA disk. The ILM concept helped EMC provide context for the many acquisitions of smaller companies that filled gaps in the EMC portfolio. Question: Why did EMC acquire company X? Answer: To be more like IBM and broaden its ILM solution portfolio.
Information Lifecycle Management is comprised of the policies, processes, practices, and tools used to align the business value of information with the most appropriate and cost effective IT infrastructure from the time information is conceived through its final disposition. Information is aligned with business requirements through management policies and service levels associated with applications, metadata, and data.
Whitepapers and other materials you might read from IBM, EMC, Sun/StorageTek, HP and others will all pretty much tell you what ILM is, consistent with this SNIA definition, why it is good for most companies, and how it is not just about buying disk and tape hardware. Software, services, and some discipline are needed to complete the implementation.
While the SNIA definition provides a vendor-independent platform to start the conversation, it can be intimidating to some, and is difficult to memorize word for word. When I am briefing clients, especially high-level executives, they often ask for ILM to be explained in simpler terms. My simplified version is:
Information starts its life captured or entered as an "asset" ...
This asset can sometimes provide competitive advantage, or is just something needed for daily operations. Digital assets vary in business value in much the same way that other physical assets for a company might. Some assets might be declared a "necessary evil", like laptops, but are tracked to the nth degree to ensure they are not lost, stolen or taken out of the building. Other assets are declared "strategically important" but are readily discarded, or at least allowed to walk out the door each evening.
... then transitions into becoming just an "expense" ...
After 30-60 days, many of the pieces of information are kept around for a variety of reasons. However, if it isn't needed for daily operations, you might save some money moving it to less expensive storage media, through less expensive SAN or LAN network gear, via less expensive host application servers. If you don't need instant access, then perhaps the 30 seconds or so to fetch it from much-less-expensive tape in an automated tape library could be a reasonable business trade-off.
... and ends up as a "liability".
Keeping data around too long can be a problem. In some cases it is incriminating, and in other cases, just having too much data clogs up your datacenter arteries. If not handled properly within privacy guidelines, data potentially exposes sensitive personal or financial information of your employees and clients. Most regulations require certain data to be kept, in a manner protected against unexpected loss, unethical tampering, and unauthorized access, for a specific amount of time, after which it can be destroyed, deleted or shredded.
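To make the three stages concrete, here is a toy ILM policy sketch. The 60-day threshold, the tier names, and the retention period are all invented for illustration; a real policy engine would be driven by your business rules, not hard-coded values:

```python
# Toy ILM policy: age-based tiering plus retention-driven disposal.
# All thresholds and tier names are invented for illustration.
from datetime import date, timedelta

def ilm_action(created: date, retention_years: int, today: date) -> str:
    age = today - created
    if age > timedelta(days=365 * retention_years):
        return "destroy"                  # past retention: now a liability
    if age > timedelta(days=60):
        return "migrate to tape tier"     # an expense: keep it, but on cheaper media
    return "keep on primary disk"         # an asset: needed for daily operations

print(ilm_action(date(2007, 1, 15), retention_years=7, today=date(2007, 8, 1)))
```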
So ILM is not just a good idea to save a company money; it can keep them out of the court room, as well as help save the environment and not kill so many trees. Now that 100 percent of iPhone customers have internet access, and a good number of non-iPhone customers have internet access at home, work, school or public library, it makes sense for companies to ask people to "opt-in" to getting their statements on paper, rather than forcing them to "opt-out".
Despite this, or perhaps because of this, over 30 percent of IBM's Linux server revenue is on non-x86 platforms, avoiding the XenSource vs. VMware decision altogether. Both System z (traditional mainframe servers) and System p (traditional UNIX servers) are able to run many Linux images in a fully virtualized manner, without VMware or XenSource.
Philip Rosedale, chief executive of Linden Labs, which produced the Second Life virtual reality environment, said Second Life and Facebook are popular because they give people a new environment to interact in that they are comfortable with.
Of course I have blogged for months now on my involvement in Second Life, and how IBM is investing in this platform for business purposes. Recently, IBM made news for publishing its Code of Conduct, a set of guidelines on how to run your avatar in virtual worlds, including Second Life. IBM recognizes the business potential of virtual worlds, and has formed the "3D Internet" group to explore the possibilities. Over 5000 IBM employees now use Second Life on a regular basis.
I was surprised to learn that there were over 23,000 IBMers already on Facebook. I used to be on LinkedIn, but found Facebook to have more IBMers and have made the switch. Recently, we were told that these 23,000 IBMers spend 19 minutes, on average, per day visiting Facebook pages. Nobody asked me how much time I spend every day on Facebook, but with over 350,000 employees in the company, I am sure some have ways to track the lives of others.
Both of these count as adding more "FUN" into the workplace, which everyone should strive for. It is also good to know that the skills you develop using Second Life or Facebook can carry over to your next job role or your next employer. The number-one question I get from new colleagues when I mention either of these exciting new ways to communicate and collaborate is: "But how is this related to business?"
Second Life is the obvious one: a new, innovative way to hold meetings with colleagues, Business Partners and clients is going to have business value. Meetings in Second Life help you focus on what is being discussed, versus a plain telephone call where your eyes may wander to other things in your view. Of course nothing beats the effectiveness of face-to-face meetings, but Second Life offers a more energy-efficient alternative to traveling to other cities or countries.
Stephen over at RupturedMonkey discusses the challenges of recruiting storage administrators:
There has been a Storage Admin job advertised for many months but no one wants it. Why? It's offering VERY good money but the word has got around the company has poor management practices and most people don't last for more than 6 months. So, with the shortage of good SAN people, good money and conditions, what can that company do to recruit someone? ...
This leads me to the thought that has anyone ever thought about the standards that storage administrators should follow? Can an employer look up a web site to find questions to ask prospective employees? More often than not, they are recruiting because the previous one left so how can companies know what they are getting.
There is actually a great standard called Information Technology Infrastructure Library (ITIL) that applies not just to storage administrators, but other IT personnel such as network administrators and server administrators. Here's a quick web-site about ITIL History:
ITIL History can be traced back to the late 1980’s when the British government determined that the level of IT service quality provided to them was not sufficient enough. The Central Computer and Telecommunications Agency (CCTA), now called the Office of Government Commerce (OGC), was tasked with developing a framework for efficient and financially responsible use of IT resources within the British government and the private sector.
The goal was to develop an approach that would be vendor-independent and applicable to organizations with differing technical and business needs. This resulted in the creation of the ITIL.
This standard spread from the UK to other governments in Europe, and is now being adopted worldwide by government agencies, non-profit organizations and commercial enterprises. IBM, of course, has been involved along the way, encouraging this set of best practices to take hold.
ITIL provides a common vocabulary that puts everyone in the IT industry on the same page, with the ultimate goal of helping companies run their IT organizations more efficiently.
ITIL provides recommendations, or best practices, for managing the way IT provides services to the rest of the organization, in the same way you would the rest of your business, with a defined set of processes.
While ITIL does a great job of describing what needs to be done, it doesn’t describe how to get it done. It doesn’t tell you how to take those best practices and implement them with real-life tools and technology. It’s not prescriptive.
The general process is now referred to as "IT Service Management", and the seven ITIL books are managed by the IT Service Management forum (ITSMf).
ITIL is vendor-independent. You can learn ITIL disciplines at one IT shop, and carry those skills with you when you go to another IT shop that has completely different gear. A common vocabulary would allow employers to post jobs in a consistent manner, and ask questions to those interviewing for the job. You can be ITIL-trained, and even ITIL-certified. IBM offers this training.
Of course, specific skills on how to use specific software to configure storage devices, request change control approvals, or define SAN zones, are useful, but often can be picked up on the job, reading the vendor manuals on the specifics. Of course, you can use IBM TotalStorage Productivity Center, which would allow someone to manage a variety of disk, tape and SAN fabric gear from one interface, greatly reducing the learning curve.
Some people find it surprising that it is often more cost-effective, and power-efficient, to run workloads on mainframe logical partitions (LPARs) than a stack of x86 servers running VMware.
Perhaps they won't be surprised any more. Here is an article in eWeek that explains how IBM is reducing energy costs 80% by consolidating 3,900 rack-optimized servers to 33 IBM System z mainframe servers, running Linux, in its own data centers. Since 1997, IBM has consolidated its 155 strategic worldwide data center locations down to just seven.
I am very pleased that IBM has invested heavily in Linux, with support across servers, storage, software and services. Linux is allowing IBM to deliver clever, innovative solutions that may not be possible with other operating systems. If you are in storage, you should consider becoming more knowledgeable in Linux.
The older systems won't just end up in a landfill somewhere. Instead, the details are spelled out in the IBM Press Release:
As part of the effort to protect the environment, IBM Global Asset Recovery Services, the refurbishment and recycling unit of IBM, will process and properly dispose of the 3,900 reclaimed systems. Newer units will be refurbished and resold through IBM's sales force and partner network, while older systems will be harvested for parts or sold for scrap. Prior to disposition, the machines will be scrubbed of all sensitive data. Any unusable e-waste will be properly disposed following environmentally compliant processes perfected over 20 years of leading environmental skill and experience in the area of IT asset disposition.
Whereas other vendors might think that some operational improvements will be enough, such as switching to higher-capacity SATA drives, or virtualizing x86 servers, IBM recognizes that sometimes more fundamental changes are required to effect real changes and real results.
I would like to welcome IBMer Barry Whyte to the blogosphere!
From his bio:
Barry Whyte is a 'Master Inventor' working in the Systems & Technology Group based in IBM Hursley, UK. Barry primarily works on the IBM SAN Volume Controller virtualization appliance. Barry graduated from The University of Glasgow in 1996 with a B.Sc (Hons) in Computing Science. In his 10 years at IBM he has worked on the successful Serial Storage Architecture (SSA) range of products and the follow-on Fibre Channel products used in the IBM DS8000 series. Barry joined the SVC development team soon after its inception and has held many positions before taking on his current role as SVC performance architect. Outside of work, Barry enjoys playing golf and all things to do with Rotary Engines.
To avoid confusion in future posts, I will refer to Barry Whyte as BarryW, and fellow EMC blogger Barry Burke (aka the Storage Anarchist) as BarryB.
I'm in Chicago this week, but it is actually HOTTER here than in my home town of Tucson, Arizona.
There are a lot of exciting conferences and events coming up soon.
SHARE will be in San Diego, August 12-17. Held twice a year, SHARE is a conference I attended for 10 years back when I was lead architect for DFSMS, and later the focal point for storage support on the Linux for System z platform. I won't be there this time around, but am glad to see that it is still thriving.
IBM Storage and Storage Networking Symposium
IBM Storage and Storage Networking Symposium will be in Las Vegas, August 19-24. This is a great conference that is focused entirely on the products and solutions I deal with the most. I have attended nearly every one since they started back in the 1990s, and am glad that I will be there this year, making several presentations. If you plan to attend this and want to meet up, drop me a note.
VMworld will be held in San Francisco, September 11-13. IBM is a top reseller of VMware software, and is proud to be a Platinum Sponsor for this event. Look for the panel discussion on "Storage Virtualization", which I am sure will include SAN Volume Controller.
Meet the Storage Experts
Based on our successful product launch in Second Life back in April, we are now holding meetings every quarter to discuss various IBM System Storage topics. The next one will be September 20 on one of the IBM islands in Second Life. For those without travel budgets to go anywhere, the advantage of our "Second Life" events is that no travel is required; they can be attended from the comfort of a work or home office.
I will post updates on how to register for this event as soon as I know them.
Virtual Worlds Fall 2007 will be held October 10-11, 2007 at the San Jose Convention Center. Sandy Kearney, IBM Global Director of IBM 3D Internet and Virtual Business, will be the keynote speaker. This will include discussion of Second Life.
I am sure there are others, but these are the ones where I am aware of IBM's involvement. I'll be in Chicago next week, meeting with Sales Reps and Business Partners.
The question is whether this is unique or specific to these particular models, or whether it affects all kinds of blade servers because of their very nature and architecture. Stephen indicates that they also have HP C class enclosures, but since those are still in test mode, he cannot comment on them.
I have no experience with any of HP's blade servers, but I have worked closely with our IBM BladeCenter team to help make sure that our storage, and our SAN equipment, work well together with the BladeCenter, and more importantly, that problems can be diagnosed effectively.
When I asked why people feel they need to know the inner workings of storage, the overwhelming response was to help diagnose problems. This could include problems in placing related data on a potentially single point of failure, problems with performance, and problems communicating with 1-800-IBM-SERV.
So, if you have encountered problems diagnosing SAN problems with BladeCenter, or find that setting up an IBM SAN with blade servers is difficult in general, I would be interested in hearing what IBM can do to make the situation better.
Miles per Gallon measures an efficiency ratio (amount of work done with a fixed amount of energy), not a speed ratio (distance traveled in a unit of time).
Given that IOPs and MB/s are the unit of "work" a storage array does, wouldn't the MPG equivalent for storage be more like IOPs per Watt or MB/s per Watt? Or maybe just simply Megabytes Stored per Watt (a typical "green" measurement)?
You appear to be intentionally avoiding the comparison of I/Os per Second and Megabytes per Second to Miles Per Hour?
May I ask why?
This is a fair question, Barry, so I will try to address it here.
It was not a typo; I did mean MPG (miles per gallon) and not MPH (miles per hour). It is always challenging to find an analogy that everyone can relate to when explaining concepts in Information Technology that might be harder to grasp. I chose MPG because it is closely related to IOPS and MB/s in four ways:
MPG applies to all instances of a particular make and model. Before Henry Ford and the assembly line, cars were made one at a time, by a small team of craftsmen, and so there could be variety from one instance to another. Today, vehicles and storage systems are mass-produced in a manner that provides consistent quality. You can test one vehicle, and safely assume that all similar instances of the same make and model will have similar mileage. The same is true for disk systems: test one disk system and you can assume that all others of the same make and model will have similar performance.
MPG has a standardized measurement benchmark that is publicly available. The US Environmental Protection Agency (EPA) is an easy analogy for the Storage Performance Council, providing the results of various offerings to choose from.
MPG has usage-specific benchmarks to reflect real-world conditions. The EPA offers City MPG for the type of driving you do to get to work, and Highway MPG to reflect the type of driving on a cross-country trip. These serve as a direct analogy to SPC having SPC-1 for Online Transaction Processing (OLTP) and SPC-2 for large file transfers, database queries and video streaming.
MPG can be used for cost/benefit analysis. For example, one could estimate the amount of business value (miles travelled) for the amount of dollar investment (cost to purchase gallons of gasoline, at an assumed gas price). The EPA does this as part of their analysis. This is similar to the way IOPS and MB/s can be divided by the cost of the storage system being tested on SPC benchmark results. The business value of IOPS or MB/s depends on the application, but could relate to the number of transactions processed per hour, the number of music downloads per hour, or number of customer queries handled per hour, all of which can be assigned a specific dollar amount for analysis.
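As a concrete illustration of that last point, the sketch below divides a purchase price by an SPC-1 result to get a price-performance figure. Every number here is a made-up placeholder and does not describe any real product:

```python
# Price-performance in the spirit of SPC full disclosures.
# All prices and IOPS figures are hypothetical placeholders.

systems = {
    "System A": {"price_usd": 500_000, "spc1_iops": 150_000},
    "System B": {"price_usd": 300_000, "spc1_iops": 80_000},
}

for name, s in systems.items():
    print(f"{name}: ${s['price_usd'] / s['spc1_iops']:.2f} per SPC-1 IOPS")
```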
It seemed that if I was going to explain why standardized benchmarks are relevant, I should find an analogy that has similar features to compare to. I thought about MPH, since it is based on time units like IOPS and MB/s, but decided against it based on an earlier comment you made, Barry, about NASCAR:
Let's imagine that a Dodge Charger wins the overwhelming majority of NASCAR races. Would that prove that a stock Charger is the best car for driving to work, or for a cross-country trip?
Your comparison, Barry, to car-racing brings up three reasons why I felt MPH is a bad metric to use for an analogy:
Increasing MPH, and driving anywhere near the maximum rated MPH for a vehicle, can be reckless and dangerous, risking loss of human life and property damage. Even professional race car drivers will agree there are dangers involved. By contrast, processing I/O requests at maximum speed poses no additional risk to the data, nor possible damage to any of the IT equipment involved.
While most vehicles have top speeds in excess of 100 miles per hour, most Federal, State and Local speed limits prevent anyone from taking advantage of those maximums. Race-car drivers in NASCAR may be able to take advantage of the maximum MPH of a vehicle, but the rest of us can't. The government limits the speed of vehicles precisely because of the dangers mentioned in the previous bullet. In contrast, processing I/O requests at faster speeds poses no such dangers, so the government imposes no limits.
Neither IOPS nor MB/s match MPH exactly. Earlier this week, I related IOPS to "Questions handled per hour" at the local public library, and MB/s to "Spoken words per minute" in those replies. If I tried to find a metric based on unit type to match the "per second" in IOPS and MB/s, then I would need to find a unit that equated to "I/O requests" or "MB transferred" rather than something related to "distance travelled".
In terms of time-based units, the closest I could come up with for IOPS was the acceleration rate of zero-to-sixty MPH in a certain number of seconds. Speeding up to 60MPH, then slamming the brakes, and then back up to 60MPH, start-stop, start-stop, and so on, would reflect what IOPS is doing on a request-by-request basis, but nobody drives like this (except maybe the taxi cab drivers here in Malaysia!)
Since vehicles are limited to speed limits in normal road conditions, the closest I could come up with for MB/s would be "passenger-miles per hour", such that high-occupancy vehicles like school buses could deliver more passengers than low-occupancy vehicles with only a few passengers.
Neither start-stops nor passenger-miles per hour have standardized benchmarks, so they don't work well for comparison between vehicles. If you or anyone can come up with a metric that will help explain the relevance of standardized benchmarks better than the MPG that I already used, I would be interested in it.
You also mention, Barry, the term "efficiency", but mileage is about "fuel economy". Wikipedia is quick to point out that while the fuel efficiency of petroleum engines has improved markedly in recent decades, this does not necessarily translate into the fuel economy of cars. The same can be said about storage: better internal bandwidth on the backplane between controllers, and faster HDD, does not necessarily translate to better external performance of the disk system as a whole. You correctly point this out in your blog about the DMX-4:
Complementing the 4Gb FC and FICON front-end support added to the DMX-3 at the end of 2006, the new 4Gb back-end allows the DMX-4 to support the latest in 4Gb FC disk drives.
You may have noticed that there weren't any specific performance claims attributed to the new 4Gb FC back-end. This wasn't an oversight, it is in fact intentional. The reality is that when it comes to massive-cache storage architectures, there really isn't that much of a difference between 2Gb/s transfer speeds and 4Gb/s.
Oh, and yes, it's true - the DMX-4 is not the first high-end storage array to ship a 4Gb/s FC back-end. The USP-V, announced way back in May, has that honor (but only if it meets the promised first shipments in July 2007). DMX-4 will be in August '07, so I guess that leaves the DS8000 a distant 3rd.
This also explains why the IBM DS8000, with its clever "Adaptive Replacement Cache" algorithm, has such high SPC-1 benchmarks despite the fact that it still uses 2Gbps drives inside. Given that it doesn't matter between 2Gbps and 4Gbps on the back-end, why would it matter which vendor came first, second or third, and why call it a "distant 3rd" for IBM? How soon would IBM need to announce similar back-end support for it to be a "close 3rd" in your mind?
I'll wrap up with your excellent comment that Watts per GB is a typical "green" metric. I strongly support the whole "green initiative", and I used "Watts per GB" last month to explain how tape is less energy-consumptive than paper. I see on your blog you have used it yourself here:
The DMX-3 requires less Watts/GB in an apples-to-apples comparison of capacity and ports against both the USP and the DS8000, using the same exact disk drives
It is not clear if "requires less" means "slightly less" or "substantially less" in this context, and I have no facts from my own folks within IBM to confirm or deny it. Given that tape is orders of magnitude less energy-consumptive than anything EMC manufactures today, the point is probably moot.
I find it refreshing, nonetheless, to have agreed-upon "energy consumption" metrics to make such apples-to-apples comparisons between products from different storage vendors. This is exactly what customers want to do with performance as well, without necessarily having to run their own benchmarks or work with specific storage vendors. Of course, Watts/GB consumption varies by workload, so to make such comparisons truly apples-to-apples, you would need to run the same workload against both systems. Why not use the SPC-1 or SPC-2 benchmarks to measure the Watts/GB consumption? That way, EMC can publish the DMX performance numbers at the same time as the energy consumption numbers, and then HDS can follow suit for its USP-V.
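To show what such an agreed-upon comparison could look like, here is a small sketch computing both Watts/GB and IOPS per Watt for two imaginary systems. Every number is invented; the point is only the arithmetic, not any actual product comparison:

```python
# Two "green" efficiency metrics, computed for imaginary systems.
# Watts/GB reflects storage capacity per unit of power; IOPS/Watt reflects
# useful work per unit of power under a given (e.g. SPC-1) workload.

def watts_per_gb(total_watts: float, usable_gb: float) -> float:
    return total_watts / usable_gb

def iops_per_watt(measured_iops: float, total_watts: float) -> float:
    return measured_iops / total_watts

# Invented figures -- not measurements of any real disk system.
for name, watts, gb, iops in [("System X", 6000, 50000, 120000),
                              ("System Y", 9000, 80000, 150000)]:
    print(f"{name}: {watts_per_gb(watts, gb):.3f} W/GB, "
          f"{iops_per_watt(iops, watts):.1f} IOPS/Watt")
```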
I'm on my way back to the USA soon, but wanted to post this now so I can relax on the plane.
Wrapping up this week's exploration on disk system performance, today I will cover the Storage Performance Council (SPC) benchmarks, and why I feel they are relevant to help customers make purchase decisions. This all started to address a comment from EMC blogger Chuck Hollis, who expressed his disappointment in IBM as follows:
You've made representations that SPC testing is somehow relevant to customers' environments, but offered nothing more than platitudes in support of that statement.
Apparently, while everyone else in the blogosphere merely states their opinions and moves on, IBM is held to a higher standard. Fair enough, we're used to that. Let's recap what we covered so far this week:
Monday, I explained how seemingly simple questions like "Which is the tallest building?" or "Which is the fastest disk system?" can be steeped in controversy.
Tuesday, I explored what constitutes a disk system. While there are special storage systems that include HDD that offer tape-emulation, file-oriented access, or non-erasable non-rewriteable protection, it is difficult to get apples-to-apples comparisons with storage systems that don't offer these special features. I focused on the majority of general-purpose disk systems, those that are block-oriented, direct-access.
Today, I will explore ways to apply these metrics to measure and compare storage performance.
Let's take, for example, an IBM System Storage DS8000 disk system. This has a controller that supports various RAID configurations, cache memory, and HDD inside one or more frames. Engineers who are testing individual components of this system might run specific types of I/O requests to test out the performance or validate certain processing:
100% read-hit, this means that all the I/O requests are to read data expected to be in the cache.
100% read-miss, this means that all the I/O requests are to read data expected NOT to be in the cache, and must go fetch the data from HDD.
100% write-hit, this means that all the I/O requests are to write data into cache.
100% write-miss, this means that all the I/O requests are to bypass the cache, and are immediately de-staged to HDD. Depending on the RAID configuration, this can result in actually reading or writing several blocks of data on HDD to satisfy this I/O request.
This is known affectionately in the industry as the "four corners" test, because you can show the four on a box, with writes on the left, reads on the right, hits on the top, and misses on the bottom. Engineers are proud of these results, but these workloads do not reflect any practical production workload. At best, since all I/O requests are one of these four types, the four corners provide an expectation range, from the worst performance (most often write-miss in the lower left corner) to the best performance (most often read-hit in the upper right corner) you might get with a real workload.
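To show how the four corners bound expectations, here is a minimal sketch that blends the four service times for a hypothetical workload mix. The per-corner millisecond figures and the 70/30 read/write, 60% cache-hit mix are invented for illustration, not measurements of any product:

```python
# "Four corners" give best- and worst-case service times; a real workload is a blend.
# All service times and the workload mix below are invented for illustration.

corner_ms = {
    "read_hit":   0.5,   # best case: satisfied from cache
    "read_miss":  8.0,   # must fetch from HDD
    "write_hit":  1.0,   # absorbed by write cache
    "write_miss": 12.0,  # worst case: goes straight to HDD, plus RAID overhead
}

# Hypothetical OLTP-like mix: 70% reads / 30% writes, 60% cache-hit ratio.
mix = {"read_hit": 0.42, "read_miss": 0.28, "write_hit": 0.18, "write_miss": 0.12}

blended = sum(corner_ms[c] * w for c, w in mix.items())
print(f"Expectation range: {min(corner_ms.values())} ms to {max(corner_ms.values())} ms")
print(f"Blended estimate for this mix: {blended:.2f} ms per I/O")
```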
To understand what is needed to design a test that is more reflective of real business conditions, let's go back to yesterday's discussion of fuel economy of vehicles, with mileage measured in miles per gallon. The How Stuff Works website offers the following description for the two measurements taken by the EPA:
The "city" program is designed to replicate an urban rush-hour driving experience in which the vehicle is started with the engine cold and is driven in stop-and-go traffic with frequent idling. The car or truck is driven for 11 miles and makes 23 stops over the course of 31 minutes, with an average speed of 20 mph and a top speed of 56 mph.
The "highway" program, on the other hand, is created to emulate rural and interstate freeway driving with a warmed-up engine, making no stops (both of which ensure maximum fuel economy). The vehicle is driven for 10 miles over a period of 12.5 minutes with an average speed of 48 mph and a top speed of 60 mph.
Why two different measurements? Not everyone drives in a city in stop-and-go traffic. Having only one measurement may not reflect the reality that you may travel long distances on the highway. Offering both city and highway measurements allows the consumers to decide which metric relates closer to their actual usage.
Should you expect your actual mileage to be the exact same as the standardized test? Of course not. Nobody drives exactly 11 miles in the city every morning with 23 stops along the way, or 10 miles on the highway at the exact speeds listed. The EPA's famous phrase "your mileage may vary" has been quickly adopted into popular culture's lexicon. All kinds of factors, like weather, distance, and driving style, can cause people to get better or worse mileage than the standardized tests would estimate.
Want more accurate results that reflect your driving pattern, in the specific conditions you are most likely to drive in? You could rent different vehicles for a week and drive them around yourself, keeping track of where you go, how fast you drove, and how many gallons of gas you purchased, so that you can then repeat the process with another rental, and so on, and then use your own findings to base your comparisons on. Perhaps you find that your results are always 20% worse than EPA estimates when you drive in the city, and 10% worse when you drive on the highway. Perhaps you have many mountains and hills where you drive, you drive too fast, you run the Air Conditioner too cold, or whatever.
If you did this with five or more vehicles, and ranked them best to worst from your own findings, and also ranked them best to worst based on the standardized results from the EPA, you will likely find the order to be the same. The vehicle with the best standardized result will likely also have the best result from your own experience with the rental cars. The vehicle with the worst standardized result will likely match the worst result from your rental cars.
(This will be one of my main points: standardized estimates don't have to be accurate to be useful in making comparisons. The comparisons and decisions you would make with estimates are the same as you would have made with actual results, or customized estimates based on current workloads. Because the rankings are in the same order, they are relevant and useful for making decisions based on those comparisons.)
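That claim, that a consistent bias leaves the ranking unchanged, is easy to sanity-check. The mileage figures below are invented; the point is only that scaling every estimate by the same factor preserves the best-to-worst order:

```python
# If your observed results are consistently some percentage off the standardized
# estimates, the best-to-worst ranking does not change.

epa_city_mpg = {"Car A": 32, "Car B": 27, "Car C": 21, "Car D": 18, "Car E": 35}

# Suppose you always see 20% worse mileage than the EPA city estimate.
observed = {car: mpg * 0.8 for car, mpg in epa_city_mpg.items()}

def ranking(results):
    return sorted(results, key=results.get, reverse=True)

assert ranking(epa_city_mpg) == ranking(observed)  # same order, best to worst
print(ranking(epa_city_mpg))
```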
Most people shopping around for a new vehicle do not have the time or patience to do this with rental cars. They can use the EPA-certified standardized results to make a "ball-park" estimate on how much they will spend on gasoline per year, decide only on cars that might go a certain distance between two cities on a single tank of gas, or merely to provide a ranking of the vehicles being considered. While mileage may not be the only metric used in making a purchase decision, it can certainly be used to help reduce your consideration set and factor in with other attributes, like number of cup-holders, or leather seats.
In this regard, the Storage Performance Council has developed two benchmarks that attempt to reflect normal business usage, similar to "City" and "Highway" driving measurements.
SPC-1 consists of a single workload designed to demonstrate the performance of a storage subsystem while performing the typical functions of business critical applications. Those applications are characterized by predominately random I/O operations and require both queries as well as update operations. Examples of those types of applications include OLTP, database operations, and mail server implementations.
SPC-2 consists of three distinct workloads designed to demonstrate the performance of a storage subsystem during the execution of business critical applications that require the large-scale, sequential movement of data. Those applications are characterized predominately by large I/Os organized into one or more concurrent sequential patterns. A description of each of the three SPC-2 workloads is listed below as well as examples of applications characterized by each workload.
Large File Processing: Applications in a wide range of fields, which require simple sequential process of one or more large files such as scientific computing and large-scale financial processing.
Large Database Queries: Applications that involve scans or joins of large relational tables, such as those performed for data mining or business intelligence.
Video on Demand: Applications that provide individualized video entertainment to a community of subscribers by drawing from a digital film library.
The SPC-2 benchmark was added when people suggested that not everyone runs OLTP and database transactional update workloads, just as the "Highway" measurement was added to address the fact that not everyone drives in the City.
If you are one of the customers out there willing to spend the time and resources to do your own performance benchmarking, either at your own data center, or with the assistance of a storage provider, I suspect most, if not all, of the major vendors (including IBM, EMC and others), and perhaps even some of the smaller start-ups, would be glad to work with you.
If you want to gather performance data of your actual workloads, and use this to estimate how your performance might be with a new or different storage configuration, IBM has tools to make these estimates, and I suspect (again) that most, if not all, of the other storage vendors have developed similar tools.
For the rest of you who are just looking to decide which storage vendors to invite on your next RFP, and which products you might like to investigate that match the level of performance you need for your next project or application deployment, the SPC benchmarks might help you with this decision. If performance is important to you, factor these benchmark comparisons in with the rest of the attributes you are looking for in a storage vendor and a storage system.
In my opinion, for some people, the SPC benchmarks provide real value in this decision making process. They are proportionally correct, in that even if your workload gets only a portion of the SPC estimate, storage systems with faster benchmarks will still provide you better performance than storage systems with lower benchmark results. That is why I feel they can be relevant in making valid comparisons for purchase decisions.
Hopefully, I have provided enough "food for thought" on this subject to support why IBM participates in the Storage Performance Council, why the performance of the SAN Volume Controller can be compared to the performance of other disk systems, and why we at IBM are proud of the benchmark results in our recent press release.
Continuing our exploration this week into the performance of disk systems, today I will cover the metrics to measure performance. Why do people have metrics?
Help provide guidance in decision making prior to purchase
Help manage your current environment
Help drive changes
Several bloggers suggested that perhaps an analogy to vehicles would be reasonable, given that cars and trucks are expensive pieces of engineering equipment, and people make purchase decisions between different makes and models.
In the United States, the Environmental Protection Agency (EPA) is the government entity responsible for measuring fuel economy of vehicles using the metric Miles Per Gallon (MPG). Specifically, these are U.S. miles (not nautical miles) and U.S. gallons, not imperial gallons. It is important when defining metrics that you are precise on the units involved.
Since nearly all vehicles are powered by gallons of gasoline, and travel miles of distance, this is a great metric to use for comparing all kinds of vehicles, including motorcycles, cars, trucks and airplanes. The EPA has a fuel economy website to help people make these comparisons. Manufacturers are required by law to post their vehicles' fuel-economy ratings, as certified by the EPA, on the window stickers of most every new vehicle sold in the U.S. -- vehicles that have gross-vehicle-weight ratings over 8,500 pounds are the exception.
What about storage performance? What could we use as the "MPG"-like metric that would allow you to compare different makes and models of storage?
The two most commonly used are I/O requests per second (IOPS) and Megabytes transferred per second (MB/s). To understand the difference in each one, let's go back to our analogy from yesterday's post.
(A woman calls the local public library. She picks up the phone, and dials the phone number of the one down the street. A man working at the library hears the phone ring, answers it with "Welcome to the Public Library! How can I help you?" She asks "What is the capital city of Ethiopia?" He replies "Addis Ababa" and hangs up. Satisfied with this response, she hangs up. In this example, the query for information was the I/O request, initiated by the lady, to the public library target)
In this example, it might have only taken 1 second to actually provide the answer, but it might have taken 10-30 seconds to pick up the phone, hear the request, respond, and then hang up the phone. If one person is able to do this in 10 seconds, on average, then he can handle 360 questions per hour. If another person takes 30 seconds, then only 120 questions per hour. Many business applications read or write less than 4KB of information per I/O request, and as such the dominant factor is not the amount of time to transfer the data, but how quickly the disk system can respond to each request. IOPS is very much like counting "Questions handled per hour" at the public library. To be more specific on units, we may specify the specific block size of the request, say 512 bytes or 4096 bytes, to make comparisons consistent.
Now suppose that instead of asking for something with a short answer, you ask the public library to read you an article from a magazine, identify all the movies and show times of a local theatre, or recite a work from Shakespeare. In this case, the time it took to pick up the phone and respond is very small compared to the time it takes to deliver the information, and could be measured instead in words per minute. Some employees of the library may be faster talkers, having perhaps worked in auction houses in a prior job, and can deliver more words per minute than other employees. MB/s is very much like counting "Spoken words per minute" at the public library. To be more specific on units, we may request a specific amount of information, say the words contained in "Romeo and Juliet", to make comparisons consistent.
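To make the two metrics concrete, the short sketch below computes IOPS from an average service time, and MB/s from the IOPS and a block size. The 5 ms service time and 4KB block size are illustrative numbers only:

```python
# IOPS is driven by per-request service time; MB/s also depends on how much
# data each request carries. Numbers below are illustrative only.

service_time_ms = 5.0              # average time to complete one small I/O
iops = 1000 / service_time_ms      # requests per second, one request at a time
print(f"{iops:.0f} IOPS at {service_time_ms} ms per request")

block_size_kb = 4                  # typical small-block transactional request
mb_per_sec = iops * block_size_kb / 1024
print(f"{mb_per_sec:.2f} MB/s if every request transfers {block_size_kb}KB")
```

This is also why small-block transactional workloads tend to be quoted in IOPS, while large sequential transfers are quoted in MB/s.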
Now that we understand the metrics involved, tomorrow we can discuss how to use these in the measurement process.
Yesterday, I started this week's topic discussing the various areas of exploration to help understand our recent press release of the IBM System Storage SAN Volume Controller and its impressive SPC-1 and SPC-2 benchmark results that rank it the fastest disk system in the industry.
Some have suggested that since the SVC has a unique design, it should be placed in its own category, and not compared to other disk systems. To address this, I would like to define what IBM means by "disk system" and how it is comparable to other disk systems.
When I say "disk system", I am going to focus specifically on block-oriented direct-access storage systems, which I will define as:
One or more IT components, connected together, that function as a whole, to serve as a target for read and write requests for specific blocks of data.
Clarification: One could argue, and several do in various comments below, that there are other types of storage systems that contain disks, some that emulate sequential access tape libraries, some that emulate file-systems through CIFS or NFS protocols, and some that support the storage of archive objects and other fixed content. At the risk of looking like I may be including or excluding such to fit my purposes, I wanted to avoid apples-to-oranges comparisons between very different access methods. I will limit this exploration to block-oriented, direct-access devices. We can explore these other types of storage systems in later posts.
People who have been working a long time in the storage industry might be satisfied by this definition, thinking of all the disk systems that would be included by it, and recognizing that other types of storage, like tape systems, are appropriately excluded.
Others might be scratching their heads, thinking to themselves "Huh?" So, I will provide some background, history, and additional explanation. Let's break up the definition into different phrases, and handle each separately.
read and write requests
Let's start with "read and write requests", which we often lump together generically as input/output request, or just I/O request. Typically an I/O request is initiated by a host, over a cable or network, to a target. The target responds with acknowledgment, data, or failure indication. A host can be a server, workstation, personal computer, laptop or other IT device that is capable of initiating such requests, and a target is a device or system designed to receive and respond to such requests.
(An analogy might help. A woman calls the local public library. She picks up the phone, and dials the phone number of the one down the street. A man working at the library hears the phone ring, answers it with "Welcome to the Public Library! How can I help you?" She asks "What is the capital city of Ethiopia?" He replies "Addis Ababa" and hangs up. Satisfied with this response, she hangs up. In this example, the query for information was the I/O request, initiated by the lady, to the public library target)
Today, there are three popular ways I/O requests are made:
CCW commands over OEMI, ESCON or FICON cables
SCSI commands over SCSI, Fibre Channel or SAS cables
SCSI commands over Ethernet cables, wireless or other IP communication methods
specific blocks of data
In 1956, IBM was the first to deliver a disk system. It was different from tape because it was a "direct access storage device" (the acronym DASD is still used today by some mainframe programmers). Tape was a sequential medium, so it could handle commands like "read the next block" or "write the next block", but it could not read a specific block directly without reading past the blocks before it, nor could it write over an existing block without risking overwriting the contents of blocks past it.
The nature of a "block" of data varies. It is represented by a sequence of bytes of specific length. The length is determined in a variety of ways.
CCW commands assume a Count-Key-Data (CKD) format for disk, meaning that tracks are fixed in size, but a track can consist of one or more blocks, which can be fixed or variable in length. Some blocks can span off the end of one track, and over to another track. Typical block sizes in this case are 8000 to 22000 bytes.
SCSI commands assume a Fixed-Block-Architecture (FBA) format for disk, where all blocks are the same size, almost always a power of two, such as 512 or 4096 bytes. A few operating systems, however, such as i5/OS on IBM System i machines, use a block size that doesn't follow this power-of-two rule.
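For FBA devices, the fixed block size is what makes "specific blocks of data" directly addressable: block number N always begins at byte offset N times the block size. A minimal sketch, illustrative only:

```python
# Fixed-Block Architecture: every block is the same size, so logical block
# address (LBA) N starts at byte offset N * block_size. Illustrative only.

BLOCK_SIZE = 512  # bytes; 4096 is also common

def byte_offset(lba: int, block_size: int = BLOCK_SIZE) -> int:
    """Byte offset on the device where logical block 'lba' begins."""
    return lba * block_size

print(byte_offset(0))     # 0
print(byte_offset(1000))  # 512000
```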
one or more IT components
You may find one or more of the following IT components in a disk system:
motorized platter(s) covered in magnetic coating with a read/write head to move over its surface. These are often referred to as Hard Disk Drives (HDD) or Disk Drive Modules (DDM), and are manufactured by companies like Seagate or Hitachi Global Storage Technologies.
A set of HDD can be accessed individually, affectionately known as JBOD for Just-a-bunch-of-disk, or collectively in a RAID configuration.
Memory can act as the high-speed cache in front of slower storage, or as the storage itself. For example, the solid state disk that IBM announced last week is entirely memory storage, using Flash technology.
Lately, there are two popular packaging methods for disk systems:
Monolithic -- all the components you need connected together inside a big refrigerator-sized unit, with options to attach additional frames. The IBM System Storage DS8000, EMC Symmetrix DMX-4 and HDS TagmaStore USP-V all fit this category.
Modular -- components that fit into standard 19-inch racks, often the size of the vegetable drawer inside a refrigerator, that can be connected externally with other components, if necessary, to make a complete disk system. The IBM System Storage DS6000, DS4000, and DS3000 series, as well as our SVC and N series, fall into this category.
Regardless of packaging, the general design is that a "controller" receives a request from its host attachment port, and uses its processors and cache storage to either satisfy the request, or pass the request to the appropriate HDD, and the results are sent back through the host attachment port.
In all of the monolithic systems, as well as some of the modular ones, the controller and HDD storage are contained in the same unit. On other modular systems, the controller is one system, and the HDD storage is in a separate system, and they are cabled together.
serve as a target
The last part is that a disk system must be able to satisfy some or all requests that come to it.
(Using the same analogy used above, when the lady asked her question, the guy at the public library knew the answer from memory, and replied immediately. However, for other questions, he might need to look up the answer in a book, do a search on the internet, or call another library on her behalf.)
Some disk systems are cache-only controllers. For these, either the I/O request is satisfied as a read-hit or write-hit in cache, or it is not, and has to go to the HDD. The IBM DS4800 and N series gateways are examples of this type of controller.
Other systems may have controller and disk, but support additional disk attachment. In this case, either the I/O request is handled by the cache or internal disk, or it has to go out to external HDD to satisfy the request. IBM DS3000 series, DS4100, DS4700, and our N series appliance models, all fall into this category.
So, the SAN Volume Controller is a disk system comprising one to four node-pairs. Each node is a piece of IT equipment that has processors and cache. These node-pairs are connected to a pair of UPS power supplies to protect the cache memory holding writes that have not yet been de-staged. The combination of node-pairs and UPS, acting as a whole, is able to serve as a target for SCSI commands sent over Fibre Channel cables on a Storage Area Network (SAN). To read some blocks of data, it uses its internal cache storage to satisfy the request, and for others, it goes out to external disk systems that contain the data required. All writes are satisfied immediately in cache on the SVC, and later de-staged to external disk when appropriate.
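A highly simplified model of that read/write flow is sketched below. This is not SVC source code or its actual algorithms, just an illustration of the write-back caching concept in front of external disk:

```python
# Simplified model of a write-back caching virtualization layer: writes are
# acknowledged from (UPS-protected) cache and de-staged later; reads are
# served from cache when possible. Not actual SVC code -- illustration only.

class CachingVirtualizer:
    def __init__(self, backend):
        self.cache = {}         # block number -> data held in cache
        self.dirty = set()      # blocks written but not yet de-staged
        self.backend = backend  # stand-in for the external disk systems

    def write(self, block, data):
        self.cache[block] = data
        self.dirty.add(block)   # acknowledge immediately, de-stage later
        return "ack"

    def read(self, block):
        if block in self.cache:          # read hit
            return self.cache[block]
        data = self.backend[block]       # read miss: go to external disk
        self.cache[block] = data
        return data

    def destage(self):
        for block in list(self.dirty):
            self.backend[block] = self.cache[block]
            self.dirty.discard(block)

disks = {7: b"old data"}
svc_like = CachingVirtualizer(disks)
svc_like.write(7, b"new data")     # acknowledged from cache
print(svc_like.read(7))            # b'new data' (read hit)
svc_like.destage()                 # later written out to external disk
print(disks[7])                    # b'new data'
```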
As of the end of 2Q07, having reached our four-year anniversary for this product, IBM has sold over 9000 SVC nodes, which are part of more than 3100 SVC disk systems. These things are flying off the shelves, clocking in at 100% year-to-year growth over the amount we sold twelve months ago. Congratulations go to the SVC development team for their impressive feat of engineering that is starting to catch the attention of many customers and return astounding results!
So, now that I have explained why the SVC is considered a disk system, tomorrow I'll discuss metrics to measure performance.
Continuing my business trip through Asia, I have left Chengdu, China, and am now in Kuala Lumpur, Malaysia.
On Sunday, a colleague and I went to the famous Petronas Twin Towers, which a few years ago were officially the tallest buildings in the world. If you get there early enough in the day, and wait in line for a few hours, you can get a ticket permitting you to go up to the "Skybridge" on the 41st floor that connects the two buildings. The views are stunning, and I am glad to have done this. (If you are afraid of heights, get cured by facing your fears with skydiving)
You would think that a question as simple as "Which is the tallest building in the world?" could easily be answered, given that buildings remain fixed in one place and do not drastically shrink or get taller over time or weather conditions, and the unit of height, the "meter", is an officially accepted standard in all countries, defined as the distance traveled by light in absolute vacuum in 1/299,792,458 of a second.
The controversy stems around two key areas of dispute:
What constitutes a building?
A building is a structure intended for continuous human occupancy, as opposed to the dozens of radio and television broadcasting towers which measure over 600 meters in height. The Petronas Twin Towers are occupied by a variety of business tenants and would qualify as buildings. Radio and television towers are not intended for occupation, and should not be considered.
Where do you start measuring, and where do you stop?
Since 1969, the height has generally been based on a building's height from the sidewalk level of the main entrance to the architectural top of the building. The "architectural top" includes towers, spires (but not antennas), masts or flagpoles. Or should the measurements be only to the top of the highest inhabitable floor?
What if the building has many more floors below ground level? What if the building exists in a body of water, should sidewalk level equate to water level, and at low tide or high tide? (Laugh now, but this might happen sooner than you think!)
To bring some sanity to these comparisons, the Council on Tall Buildings and Urban Habitat has tried to standardize the terms and definitions to make comparisons between buildings fair. Why does it matter whose building is tallest? It matters in two ways:
People and companies are willing to pay more to be a tenant in tall towers, affording a luxurious bird's-eye view to impress friends, partners and clients, and so the rankings can influence purchase or leasing prices of floorspace in these buildings.
Architects and engineers involved in building these structures want to list them on their resumes. These buildings are an impressive feat of engineering, and the teams involved collaborate in a global manner to accomplish them. If an architecture or engineering company can build the world's tallest building, you can trust them to build one for you. The rankings can help drive revenues by generating demand for services and offerings.
What does any of this have to do with storage? Two weeks ago, IBM and the Storage Performance Council answered the question "Which is the fastest disk system?" with a press release. Customers that care about performance of their most mission critical applications are often willing to pay a premium to run their applications on the fastest disk system, and the IBM System Storage SAN Volume Controller, built through a global collaboration of architects and engineers across several countries, is (in my opinion at least) an impressive feat of storage engineering.
For those in the US, a comedian named Carlos Mencia has a great TV show, Mind of Mencia, and one of my favorite segments is "Why the @#$% is this news!" where he goes about showing blatantly obvious things that were reported in various channels.
So, when I saw that IBM once again, for the third year in a row, has the fastest disk system, the IBM System Storage SAN Volume Controller (SVC), based on widely-accepted industry benchmarks representing typical business workloads, I thought, "Do I really want to blog about this, and sound like a broken record, repeating my various statements of the past about how great SVC is?" It's like reminding people that IBM has received more US patents than any other company, every year, for the past 14 years.
(Last year, I received comments from Woody Hutsell, VP of Texas Memory Systems, because I pointed out that their "World's Fastest Storage"® cache-only system was not as fast as IBM's SVC. You can read my opinions, and the various comments that ensued, here and here.)
That all changed when EMC uber-blogger Chuck Hollis forgot his own Lessons in Marketing when he posted his rant Does Anyone Take The SPC Seriously? That's like asking "Does anyone take book and movie reviews seriously?" Of course they do! In fact, if a movie doesn't make a big deal of its "Two thumbs up!" rating, you know it did not sit well with the reviewers. It's even more critical for books. I guess this latest news from the SPC really got under EMC's skin.
For medium and large size businesses, storage is expensive, and customers want to do as much research as possible ahead of time to make informed decisions. A lot of money is at stake, and often, once you choose a product, you are stuck with that vendor for many years to come, sometimes paying software renewals after only 90 days, and hardware maintenance renewals after only a year when the warranty runs out.
Customers shopping for storage like the idea of a standardized test that is representative, so they can compare one vendor's claims with another. The Storage Performance Council (SPC), much like the Transaction Processing Performance Council (TPC) for servers, requires full disclosure of the test environment so people can see what was measured and make their own judgment on whether or not it reflects their workloads. Chuck pours scorn on the SPC, but I would point to TPC-C as a great success story and ask why the same can't happen for storage. Server performance is also a complicated subject, but people compare TPC-C and TPC-H benchmarks all the time.
Note: This blog post has been updated. I am retracting comments that were unfair generalizations. The next two paragraphs are different than originally posted.
Chuck states that "Anyone is free, however, to download the SPC code, lash it up to their CLARiiON, and have at it." I encourage every customer to do this with whatever disk systems they already have installed. Judge for yourself how each benchmark compares to your experience with your application workload, and consider publishing the results for the benefit of others, or at least send me the results, so that I can better understand all of these "use cases" that Chuck talks about so often. I agree that real-world performance measurements using real applications and real data are always going to be more accurate and more relevant to that particular customer. Unfortunately, few or no such results are made public. They are noticeably absent. With thousands of customers running storage from all the major storage vendors, as well as storage from smaller start-up companies, I would expect more performance comparison data to be readily available.
In my opinion, customers would benefit from seeing the performance results obtained by others. SPC benchmarks help fill this void, providing guidance to customers who have not yet purchased the equipment on which vendors to work with and which products to put into their consideration set.
Truth is, benchmarks are just one of the many ways to evaluate storage vendors and their products. There are also customer references, industry awards, and corporate statements of a company's financial health, strategy and vision. Like anything, it is information to weigh against other factors when making expensive decisions. And I am sure the SPC would be glad to hear of any suggestions for a third SPC-3 benchmark, if the first two don't provide you enough guidance.
So, if you are not delighted with the performance you are getting from your storage now, or would benefit from even faster I/O, consider improving its performance by adding the SAN Volume Controller. SVC is like salt or soy sauce: it makes everything taste better. IBM would be glad to help you with a try-and-buy or proof-of-concept approach, and even help you compare the performance, before and after, with whatever gear you have now. You might just be surprised how much better life is with SVC. And if, for some reason, the performance boost for your unique workload is only 10-30% with SVC, you are free to tell the world about your disappointment.
Wrapping up my week's discussion on Business Continuity, I've had lots of interest in my opinion stated earlier this week that it is good to separate programs from data, that this simplifies the recovery process, and that the Windows operating system can fit in a partition as small as the 15.8GB solid state drive we just announced for BladeCenter. It worked for me, and I will use this post to show you how to get it done.
Disclaimer: This is based entirely on what I know and have experienced with my IBM Thinkpad T60 running Windows XP, and is meant as a guide. If you are running with different hardware or different operating system software, some steps may vary.
(Warning: Windows Vista apparently handles data, Dual Boot, and Partitions differently. These steps may not work for Vista.)
For this project, I have a DVD/CD burner in my Ultra-Bay, a stack of black CDs and DVDs, and a USB-attached 320GB external disk drive.
I like to back up the master boot record to one file, and then the rest of the C: drive to a series of 690MB compressed chunks. These can be directed to the USB-attached drive, and then later burned onto CDrom, or packed 6 files per DVD. Most USB-attached drives are formatted with the FAT32 file system, which caps individual files at 4GB, so splitting the backup into 690MB chunks keeps each piece well below that limit and small enough to fit on a CD.
You can learn more about these commands here and here.
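For illustration only, here is a minimal Python sketch of the chunking idea, assuming your backup tool has already produced a single raw image file of the C: drive; the file names, paths and helper function are hypothetical placeholders, not the actual Rescue and Recovery tooling.

# Illustrative sketch: split an existing disk-image file into 690MB pieces,
# each well under the FAT32 file-size limit and small enough to burn to CD.
CHUNK_SIZE = 690 * 1024 * 1024  # 690 MB per piece

def split_image(image_path, out_prefix):
    with open(image_path, "rb") as src:
        index = 0
        while True:
            data = src.read(CHUNK_SIZE)
            if not data:
                break
            with open("%s.%03d" % (out_prefix, index), "wb") as dst:
                dst.write(data)
            index += 1
    return index

# Hypothetical usage, writing the pieces to the USB-attached drive:
# split_image("F:\\backup\\c_drive.img", "F:\\backup\\c_drive.part")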
Step 1 - Defrag your C: drive
From Windows, right-click on your Recycle Bin and select "Empty Recycle Bin".
Click Start->Programs->Accessories->System Tools->Disk Defragmenter. Select the C: drive and push the Analyze button. You will see a bunch of red, blue and white vertical bars. If there are any green bars, we need to fix that. The following worked for me:
Right-click "My Computer" and select Properties. Select Advanced, then press the "Settings" button under Performance. Select the Advanced tab and press the "Change" button under Virtual Memory. Select "No Paging File" and press the "Set" button. Virtual memory lets you have many programs open, moving memory back and forth between your RAM and hard disk.
Click Start->Control Panel->Performance and Maintenance->Power Options. On the Hibernate tab, make sure the "Enable Hibernation" box is un-checked. I don't use Hibernate, as it seems like it takes just as long to come back from Hibernation as it does to just boot Windows normally.
Reboot your system to Windows.
If all went well, Windows will have deleted both pagefile.sys and hiberfil.sys, the two most common unmovable files, freeing up 2GB of space. You can run just fine without either of these features, but if you want them back, we will put them back in Step 6 below.
Go back to Disk Defragmenter, verify there are no green bars, and proceed by pressing the "Defragment" button. If there are still some green bars, you can proceed cautiously (you can always restore from your backup, right?), or seek professional help.
Step 2 - Resize your C: drive
When the defrag is done, we are ready to re-size your file system. This can be done with commercial software like Partition Magic. If you don't have this, you can use open source software: burn yourself the Gparted LiveCD. This is another Linux LiveCD, and is similar to Partition Magic.
Either way, re-size the C: drive smaller. In theory, you can shrink it down to 15GB if this is a fresh install of Windows and there is no data on it. If you have lots of data, and the drive was nearly full, only shrink the C: drive by 2GB. That is how much we freed up from the unmovable files, so that should be safe.
You could do steps 2 and 3 while you are here, but I don't recommend it. Just re-size C:, press the "Apply" button, reboot into Windows, and verify everything starts correctly before going to the next step.
Step 3 - Create Extended Partition and Logical D: drive
You can only have FOUR partitions on the disk, either Primary (which I use for programs) or Extended (for data). However, the Extended partition can act as a container for one or more logical partitions.
Get back into the Partition Magic or Gparted program, and in the unused space freed up from re-sizing in the last step, create a new extended/logical partition. For now, just have one logical partition inside the extended partition, but I have co-workers who have two logical partitions: D: for data, and E: for their e-mail from Lotus Notes. You can always add more logical partitions later.
I selected the "NTFS" type for the D: drive. In years past, people chose the older FAT32 type, which has some limitations but allowed read/write capability from DOS, OS/2, and Linux. Windows XP can only format FAT32 partitions up to 32GB, and no FAT32 file can be bigger than 4GB; I have files bigger than that. Linux can now read/write NTFS file systems directly, using the new NTFS-3G driver, so that is no longer an issue.
Step 4 - Format drive D: as NTFS
Even though you told your partitioning program that D: is the NTFS type, you still have to put a file system on it.
Click Start->Control Panel->Performance and Maintenance->Computer Management. Under Storage, select Disk Management. Right-click your D: drive and choose Format. Make sure the "Perform Quick Format" box is un-checked, so that it performs a full, slower format.
Step 5 - Move data from C: to D: drive
Create two directories, "D:\documents" and "D:\notes\data", either through Explorer, or in a command-line window with the "MKDIR documents notes\data" command.
Move files from c:\notes\data to d:\notes\data, and any folder in your "My Documents" over to d:\documents.
(If you have more data than the size of the D: drive, copy over what you can, run another defrag, resize your C: drive even smaller with Partition Magic or Gparted, reboot, verify Windows is still working, resize your D: bigger, and repeat the process until you have all of your data moved over.)
To inform Lotus Notes that all of your data is now on the D: drive, use NOTEPAD to edit notes.ini and change the Directory line to "Directory=D:\notes\data". If you have a special signature file, leave it in C:\notes directory.
Once all of your data is moved over to D:\documents, right-click on "My Documents" and select Properties. Change the target to "D:\documents" and press the "Move" button. Now, whenever you select "My Documents", you will be on your D: drive instead.
Step 6 - Take A Fresh Backup
If you use IBM Tivoli Storage Manager, now would be a good time to re-evaluate your "dsm.opt" file that lists what drives and sub-directories to back up. Take a backup, and verify your data is being backed up correctly.
With the USB-attached drive, back up both the C: and D: drives. I leave my USB drive back in Tucson. For a backup copy while traveling, go to IBM Rescue and Recovery and take a C:-only backup to DVD; make sure the D: drive box is un-checked. Now, if I ever need to reinstall Windows, because of file system corruption or a virus, I can do this from my one bootable CD plus 2 DVDs, which I can easily carry with me in my laptop bag, leaving all my data on the D: drive intact.
In the worst case, if I had to re-format the whole drive or get a replacement disk, I can restore C: and then restore the few individual data files I need from IBM Tivoli Storage Manager, or a small USB key/thumbdrive, delaying a full recovery until I return to Tucson.
Lastly, if you want, reactivate "Virtual Memory" and "Hibernation" features that we disabled in Step 1.
As with Business Continuity in the data center, planning in this manner can help you get back "up and running" quickly in the event of a disaster.
Continuing this week's theme on Business Continuity, I will use this post to discuss this week's IBM solid state disk announcement. This new offering provides a new way to separate programs from data, to help minimize downtime and outages normally associated with disk drive failures.
Until now, the method most people used to minimize the amount of data on internal storage was to use disk-less servers with Boot-over-SAN; however, not all operating systems, and not all disk systems, supported this.
Windows, however, is not supported on the 4GB flash drive, because of its small size and USB protocol limitations. For Windows, you would add a SAS drive, boot from that hard drive, and use the 4GB flash drive for data only.
So what's new this time? Here's a quick recap of the July 17 announcement. For the IBM BladeCenter HS21 XM blade servers, there are new models of internal "disk" storage:
Single drive model
A single 15.8GB solid-state disk drive, based on the SATA protocol. In addition to the Linux operating systems mentioned above, the capacity and SATA protocol allow you to boot 32-bit and 64-bit versions of Windows 2003 Server R2, with plans in place to support other platforms in the future, such as VMware. I am able to run my laptop Windows with only a 15GB C: drive, separating my data onto a separate D: partition, so this appears to be a reasonable size.
Dual drive model
The dual drive model fits in the space of a single 2.5-inch HDD drive bay. You can combine the two drives in either RAID 0 or RAID 1 mode.
RAID 0 gives you a total of 31.6GB, but is riskier: if you lose either drive, you lose all your data. Michael Horowitz of CNET covers the risks of RAID zero here and here. However, if you are just storing your operating system and applications, easily re-loadable from CD or DVD in the case of loss, then perhaps that is a reasonable risk/benefit trade-off.
RAID 1 keeps the capacity at 15.8GB, but provides added protection. If you lose either drive, the server keeps running on the surviving drive, allowing you to schedule repair actions when convenient and appropriate. This would be the configuration I would recommend for most applications.
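To make the risk trade-off concrete, here is a small, purely illustrative calculation; the 1% annual failure probability per drive is a made-up figure, and the sketch ignores repair windows:

# Illustrative only: annual chance of losing data on a two-drive stripe (RAID 0)
# versus a two-drive mirror (RAID 1), assuming independent drive failures.
p = 0.01  # assumed annual failure probability of one drive

raid0_loss = 1 - (1 - p) ** 2   # data is lost if EITHER drive fails
raid1_loss = p ** 2             # data is lost only if BOTH drives fail (ignoring repair time)

print("RAID 0 annual data-loss probability: %.2f%%" % (raid0_loss * 100))
print("RAID 1 annual data-loss probability: %.4f%%" % (raid1_loss * 100))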
Until recently, solid state storage was available at a price premium only. Flash prices have dropped 50% annually while capacities have doubled. This trend is expected to continue through 2009.
According to recent studies from Google and Carnegie Mellon, hard drives fail more often than expected. By one account, conventional hard disk drives internal to the server account for as much as 20-50% of component replacements. IBM analysis indicates that the replacement rate of a solid state drive in a typical blade server configuration is only about 1% per year, vs. 3% or more mentioned in these studies for traditional disk drives.
Flash drives use non-volatile memory instead of moving parts, so they are less likely to break down under high external environmental stress, like vibration and shock, or extreme temperature ranges (0°C to +70°C) that would make traditional hard disks prone to failure. This is especially important for our telecommunications clients, who are always looking for solutions that are NEBS Level 3 compliant.
As with any SATA drive, performance depends on workload. Solid state drives perform best as OS boot devices, taking only a few seconds longer to boot an OS than from a traditional 73GB SAS drive. Flash drives also excel in applications featuring random read workloads, such as web servers. For random and sequential write workloads, use SAS drives instead for higher levels of performance.
Part of IBM's Project Big Green, these flash drives are very energy efficient. Thanks to sophisticated power management software, the power requirement of the solid state drive can be 95 percent better than that of a traditional 73GB hard disk drive. These 15.8GB drives use only 2W per drive versus as much as 10W per 2.5” hard drive and 16W per 3.5” hard drive. The resulting power savings can be up to 1,512 watts per server rack, with 50% heat reduction.
So, even though this is not part of the System Storage product line, I am very excited for IBM. To find out if this will work in your environment, go to the IBM ServerProven website that lists compatibility with hardware, applications and middleware, or review the latest Configuration and Options Guide (COG).
Continuing this week's theme on Business Continuity, I thought I would explore more on the identification of scenarios to help drive appropriate planning. As I mentioned in my last post, this should be done first.
A recent post in Anecdote talks about the long list of cognitive biases which affect business decision making. This list is a good explanation of why so many people have a difficult time identifying appropriate recovery scenarios as the basis for Business Continuity planning. Their "cognitive biases" get in the way.
Again, using my IBM Thinkpad T60 laptop as an example, here are a variety of different scenarios:
Corrupted File System
Some file systems are more fragile than others. If your NTFS file system gets corrupted, you might be able to run
CHKDSK C: /F
but this just puts damaged blocks into dummy files; it doesn't really repair your files back to their pre-damage state. All kinds of things can damage the file system, including viruses, software defects, and user error.
I keep my programs and data in separate file systems. C: has my Windows operating system and applications, and D: holds my pure data. If one file system is corrupted, the other one might be intact, mitigating the risk.
Hard Disk Crash
Hopefully, you will get temporary read/write errors as a warning prior to a complete failure. In theory, if I kept a spare hard disk in my laptop bag, I could swap out the bad drive for the good one, but I don't carry one. The three times that I have had a disk failure all occurred while I was in Tucson.
Instead, I keep the few files I need for my trip on a separate USB key, and carry a bootable Live CD, which allows you to boot entirely from the CDrom drive, either to run applications or to perform rescue operations.
The latest one that I am trying out is Ubuntu Linux, which has OpenOffice 2.2 that can read/write PowerPoint, Word, and Excel files; the Firefox web browser; Gimp graphics software; and a variety of other applications, all in a 700MB CDrom image. I have even been able to get wireless (Wi-Fi) working with it, and the process to create your own customized Live CD with your own application packages is fairly straightforward. Combined with a writeable USB key, you can actually get work done this way. Special thanks to IBM blogger Bob Sutor for pointing me to this.
(If you have a DVD-RAM drive, there are bigger Live CDs from SUSE and RedHat Fedora that provide even more applications)
Laptop Shell Failure
This might catch some people by surprise. I have had the keyboard, LCD screen, or some essential port/plug fail on my laptop. The disk drive and CDrom drive work fine, but unless you have another "laptop" to stick them into, they don't help you recover. This can also happen if the motherboard fails, or the battery is unable to hold a charge.
IBM provides a 24-hour turn-around fix. Basically, IBM sends me a laptop shell, with no drive and no CDrom, with instructions to move the disk drive and CDrom drive from the broken shell to the new shell, then send the bad shell back in the same shipping box.
Here, again, I am thankful that I keep my key files on a USB key. Often I travel with other IBMers, and can borrow their laptop to make presentations, check my e-mail, or do other work, until I can get my replacement shell. If you are traveling outside the US, you might be able to move your disk drive into a colleague's laptop, access the data, and copy it to your USB key or burn a copy on CD or DVD.
In a data center, many outages are really "failures to access data", but the data is safe. For example, power outages, network outages, and so on, can prevent people from using their IT systems, but the data is safe when these are re-established.
At times, I have been temporarily separated from my laptop. Three examples:
A higher level executive had technical difficulties with his laptop, and usurped mine instead.
A colleague forgot his power supply for his laptop, and borrowed my laptop instead. (I wish there were a standard for laptop power plug connectors)
Customs agents confiscated my laptop, gave me a receipt, and eventually returned it.
In all cases, I was glad that no "recovery" was required, and that the few files I needed were on my USB key. A few times, I was able to get by on the machines available at the nearest Internet Cafe, in the meantime.
With some imagination, you can recognize that this scenario is similar to the previous one for laptop shell failure. This is a good example of how you can identify different scenarios, and then later discover they have similar properties in terms of recovery, so they can be treated as one.
Laptops are stolen every day. Luckily, I've only had this happen twice to me in my career at IBM, and I managed to get a replacement soon enough. The key lesson here is to keep your USB key and recovery media in separate luggage. I know it is more convenient to keep all computer-related stuff in one place, but a thief is going to take your whole laptop bag, to make sure that all cables and power supplies are included, and is not going to leave anything behind; that would just slow them down.
In each case, some brainstorming, or personal experience, can help identify scenarios, identify what makes them unique from a recovery perspective, and plan accordingly. If you are looking to create or upgrade your Business Continuity plan, give IBM a call; we can help!
This week and next I am touring Asia, meeting with IBM Business Partners and sales reps about our July 10 announcements.
Clark Hodge might want to figure out where I am, given the nuclear reactor shutdowns from an earthquake in Japan. His theory is that you can follow my whereabouts just by following the news of major power outages throughout the world.
So I thought this would be a good week to cover the topic of Business Continuity, which includes disaster recovery planning. When making Business Continuity plans, I find it best to work backwards. Think of the scenarios that would require such recovery actions to take place, then figure out what you need to have at hand to perform the recovery, and then work out the tasks and processes to make sure those things are created and available when and where needed.
I will use my IBM Thinkpad T60 as an example of how this works. Last week, I was among several speakers making presentations to an audience in Denver, and this involved carrying my laptop from the back of the room up to the front of the room, several times. When I got my new T60 laptop a year ago, the instructions specifically stated NOT to carry the laptop while the disk drive was spinning, to avoid vibrations and gyroscopic effects. They suggested always putting the laptop in standby, hibernate or shutdown mode prior to transportation, but I haven't yet gotten in the habit of doing this. After enough trips back and forth, I had somehow corrupted my C: drive. It wasn't a complete corruption: I could still use Microsoft PowerPoint to show my slides, but other things failed, sometimes with the fatal BSOD and other times less drastically. Perhaps the biggest annoyance was that I lost a few critical DLL files needed for my VPN software to connect to IBM networks, so I was unable to download or access e-mail or files inside IBM's firewall.
Fortunately, I had planned for this scenario, and was able to recover my laptop myself, which is important when you are on the road and your help desk is thousands of miles away. (In theory, I am now thousands of miles closer to our help desk folks in India and China, but perhaps further away from those in Brazil.) Not being able to respond to e-mail for two days was one thing, but no access for two weeks would have been a disaster! The good news: My system was up and running before leaving for the trip I am on now to Asia.
Following my three-step process, here's how this looks:
Step 1: Identify the scenario
In this case, my scenario is that the file system that runs my operating system is corrupted, but my drive does not have hardware problems. Running PC-Doctor confirmed the hardware was operating correctly. This can happen in a variety of ways, from errant application software upgrades, to malicious viruses, or, in my case, picking up your laptop and carrying it across the room while the disk drive is spinning.
Step 2: Figure out what you need at hand
All I needed to do was repair or reload my file system. "Easier said than done!" you are probably thinking. Many people use IBM Tivoli Storage Manager (TSM) to back up their application settings and data. Corporate include/exclude lists avoid backing up the same Windows files from everyone's machines. This is great for those who sit at the same desk, in the same building, and would be given a new machine with Windows pre-installed as the start of their recovery process. If, on the other hand, you are traveling and can't access your VPN to reach your TSM server, you have to do something else. This is often called "Bare Metal Restore" or "Bare Machine Recovery", BMR for short in both cases.
On business trips, I carry bootable rescue compact discs, DVDs containing a full system backup of my Windows operating system, and the most critical files needed for each specific trip on a separate USB key. So, while I am on the road, I can re-install Windows, recover my applications, and copy over just the files I need to continue my trip, and then do a more thorough recovery back in the office upon return.
Step 3: Determine the tasks and processes
In addition to backing up with IBM TSM, I also use IBM Thinkvantage Rescue and Recovery to make local backups. IBM Rescue and Recovery is provided with IBM Thinkpad systems, and allows me to backup my entire system to an external 320GB USB drive that I can leave behind in Tucson, as well as create bootable recovery CD and DVDs that I can carry with me while traveling.
The problem most people have with a full system backup is that their data changes so frequently, they would have to take backups too often, or recover "very old" data. Most Windows systems are pre-formatted as one huge C: drive that mixes programs and data together. However, I follow best practice and separate programs from data. My C: drive contains the Windows operating system, along with key applications and the essential settings needed to make them run. My D: drive contains all my data. This has the advantage that I only have to back up my C: drive, and this fits nicely on two DVDs. Since I don't change my operating system or programs that often, a monthly or quarterly backup is frequent enough.
In my situation in Denver, only my C: drive was corrupted, so all of my data on D: drive was safe and unaffected.
When it comes to Business Continuity, it is important to prioritize what will allow you to continue doing business, and what resources you need to make that happen. The above concepts apply from laptops to mainframes. If you need help creating or updating your Business Continuity plan, give IBM a call.
It's Tuesday, which means IBM makes its announcements. We had several for the IBM System Storage product line. Here's a quick recap.
The IBM System Storage DS3000 now offers DC power models. New DC-powered models of the DS3200, DS3400, and EXP3000 are well suited for Telco industry environments, as these are NEBS and ETSI compliant and are powered by an industry-standard 48 volt DC power source.
Also, the IBM System Storage N series now supports 750GB SATA drives, available for the EXN1000 drawer.
The IBM Virtualization Engine TS7740 now supports 3-cluster grids. Unlike 3-way replication on disk mirroring, such as IBM Metro/Global Mirror for the DS8000, which enforces a primary, secondary and tertiary copy, the grid implementation of TS7740 tape virtualization allows for any-to-any mirroring. Existing standalone TS7740 clusters can be converted to grid-enabled. A "Copy Export" feature allows virtual tapes to be exported onto physical tape. And in keeping with our theme of "enabling business flexibility", performance throughput can now be purchased in 100 MB/sec increments, up to 600 MB/sec, to match your workload bandwidth requirements.
The IBM System Storage TS1120 drives installed in the IBM System Storage TS3400 Tape Library can now be attached to System z platforms using the IBM System Storage TS1120 Tape Controller. Before this, the TS3400 could only be attached to UNIX, Windows and Linux systems.
The IBM System Storage TS2230 Express is offered as an external stand-alone or rack-mountable unit. This model incorporates the new IBM LTO Ultrium 3 Serial Attached SCSI (SAS) Half-High Tape Drive, and a 3 Gbps single-port SAS interface for connection to a wide spectrum of distributed system servers that support Microsoft Windows and Linux.
IBM has added the Cisco MDS 9124 for IBM System Storage entry-level fabric switch as an Express offering and part of the IBM Express Advantage Program. Express offerings are specifically created for mid-market companies and are well suited for workgroup storage applications like e-mail serving, collaborative databases and web serving. They bring enterprise-class performance, scalability and features to small and medium-sized companies, and are easy to use, highly scalable, and cost-effective. This will make it easier for IBM Business Partners to provide fabric switch connectivity for:
Storage consolidation solutions with IBM System Storage™ DS4000 Express disk arrays, especially the DS4700 Express.
Backup / restore solutions with IBM System Storage™ TS3000 Tape Libraries, such as the TS3200.
Archive and Retention
Ordering large configurations of the IBM System Storage Grid Access Manager just got a lot easier. New features enable configurations greater than 500 TB to be submitted as a single order. There is no change in the actual product, just an improvement in the ordering process.
For System p and System i servers, the IBM 3996 Optical library now supports Gen 2 60GB optical cartridges. These can be read/write or WORM cartridges.
I'm off to Denver, Colorado this week. I hope it is cooler there than it is down here in Tucson, Arizona.
Avi Bar-Zeev of RealityPrime has an interesting post about How Google Earth [really] Works. Normally, people who are very knowledgeable in a topic have a hard time describing concepts in basic terms. Avi was one of the co-founders of Keyhole, the company that built the predecessor to Google Earth, and also worked with Linden Lab on the 3D rendering in its virtual world, so he certainly knows what he is talking about. While he sometimes drops down into techno-talk about patents, the post overall is a good read.
It is perhaps human nature to be curious about how things are put together and how they function, leading to the popularity of web sites like www.howstuffworks.com that cover a wide range of topics.
Many things can be used without understanding their internal inner workings. You can put on a pair of blue jeans without knowing how the cotton was made into denim fabric; lace up your favorite pair of running shoes without understanding the chemical make-up of the plastic that cushions your feet; or drink a glass of beer after your five mile run without knowing how alcohol is processed by your liver.
For technology, however, some people insist they need to know how it works in order for them to get the most use of it. When shopping for a car, for example, a guy might look under the hood, and ask questions about how the engine works, while his wife sits inside the vehicle, counting cup holders and making sure the radio has all the right buttons.
Not all technology suffers from need-to-know-itis. For example, the Apple iPod music player and the Canon PowerShot digital camera, are both just disk systems that read and write data, with knobs and dials on one end, and ports for connectivity on the other. Everyone just asks how to use their controls, and might read the manual to understand how to connect the cables. Few people who use these devices ask how they work before they buy them.
Other disk systems, the kind designed for data centers for the medium and large enterprise, apparently aren't there yet. Storage admins who might happily own both an iPod player and a PowerShot camera, insist they need to know how the technologies inside various storage offerings work. Is this just curiosity talking? Or are there some tasks like configuration, tuning, and support that just can't be done without this knowledge? Does knowing the inner workings somehow make the job more enjoyable, easier, or performed with less stress?
I'm curious what you think, send me a comment on this.
Seth Godin has an interesting post titled Times a Million. He recounts how many people determine the fuel savings of higher-mileage cars to be only $300-$900 per year, and that this is not enough to motivate the purchase of a more efficient vehicle, such as a hybrid or electric car. Of course, if everyone drove more efficient vehicles, the savings "times a million" would benefit everyone and the world's ecology.
When I discuss storage-related concepts, many executives mistakenly relate them to the one area of information technologythey know best: their laptop. Let's take a look at some examples:
Information Lifecycle Management
Information Lifecycle Management (ILM) includes classifying data by business value, and then using this to determine placement, movement or deletion. If you think about the amount of time and effort to review the files on your individual laptop, and to manually select and move or delete data, versus the benefits for the individual laptop owner, you would dismiss the concept. Most administrative tasks are done manually on laptops, because automated software is either unavailable or too expensive to justify for a single owner.
In medium and large size enterprises, automated software to help classify, move and delete data makes a lot of sense. Executives who decide that ILM is not for their data center, based on their experiences with their laptop, are losing out on the "times a million" effect.
Energy Efficiency
Laptops have various controls to minimize the use of battery, and these controls are equally available when plugged in. Many users don't bother turning off the features and functions they don't need when plugged in, because they feel the cost savings would only amount to pennies per day.
Times a million, energy savings do add up, and options to reduce the amount used per server, per TB of data stored, not only save millions of dollars per year, but can also postpone the need to build a new data center, or upgrade the electrical systems in your existing data center.
Backup and Disaster Recovery planning
I am not surprised at how many laptops do not have adequate backup and disaster recovery plans. When executives think in terms of the time and effort to back up their data, often crudely copying key files to CDrom or USB key, and worrying about the management of those copies, which copies are the latest, and when those copies can be destroyed, they might reject deploying appropriate backup policies for others.
Times a million, the collected data stored on laptops could easily be half of your company's emails and intellectual property. Products like IBM Tivoli Storage Manager can manage a large number of clients with a few administrators, keeping track of how many copies to keep, and how long to keep them.
So, next time you are looking at technology or solutions for your data center, don't suffer from "Laptop Mentality". Focus instead on the data center as a whole.
Chris Evans over at Storage Architect posts about Hardware Replacement Lifecycle Update, on how storage virtualization can help with storage hardware replacement. He makes two points that I would like to comment on.
... indeed products such as USP, SVC and Invista can help in this regard. However at some stage even the virtualisation tools need replacing and the problem remains, although in a different place.
Knowing that replacement of technologies at all levels is inevitable, the IBM System Storage SAN Volume Controller is actually designed to allow non-disruptive cluster upgrade, which we announced in May 2006.
The process is quite elegant. The SVC consists of one or more node-pairs, and can be upgraded while the system is up and running by replacing nodes one at a time in a sequence of suspend and resume. All of the mapping tables are loaded onto the new nodes from the rest of the still-active nodes.
I was hoping as part of the USP-V announcement HDS would indicate how they intend to help customers migrate from an existing USP which is virtualising storage, but alas it didn't happen.
Unlike the SVC, one cannot just upgrade the USP in place and make it into a USP-V. While it might be possible to unplug external disk from the old USP and re-plug it into the new USP-V, what do you do about the internal disk data? I doubt you can just move drawers and trays of disk from the old to the new. The data has to be moved some other way.
Some have asked why not just put an SVC in front of both the old USP and the new USP-V and transfer the data that way. While SVC does support virtualizing the old USP device, IBM is still testing the new USP-V as a managed device, so this solution is not yet available, and it would only apply to the LUNs in the USP-V, not the volumes specifically formatted for System i or System z.
An alternative is to take advantage of IBM's Data Mobility Services, the result of our recent acquisition of Softek. IBM can help you move both mainframe and distributed systems data from any device to any device.
In a typical four-year lifecycle of storage arrays, it might take six months or so to fill up the box, and as much as a year at the end to move the data out to other equipment. SVC can greatly reduce both of these, so that you can take advantage of new equipment as soon as possible, keep using it for close to the full four years, and migrate off it weeks or days before your lease expires.
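Here is a rough sketch of that arithmetic, using the approximate figures above:

# Rough arithmetic for the four-year array lifecycle described above.
lease_months = 48
fill_months = 6     # ramping data onto the new box
drain_months = 12   # migrating data off before the lease expires

fully_used = lease_months - fill_months - drain_months
print("Months at full use without virtualization: %d of %d" % (fully_used, lease_months))
# With SVC shortening migrations to days or weeks, nearly all 48 months remain usable.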
I've blogged about some of these videos already, but since there are probably a few people out there buying the brand new Apple iPhone and looking for YouTube videos to play on it, these links might provide some example entertainment on your new handheld device.
Next week has the "Fourth of July" Independence Day holiday in the USA smack in the middle of the week, so I suspect the blogosphere will quiet down a bit. So whether you are working next week or not, in the USA or elsewhere, take some time to enjoy your friends and family.
Alan Lepofsky posts about The Value Of Social Networking, which points to this same presentation about Web 2.0 concepts and ideas. He also points to this article in the Wall Street Journal titled Playing Well With Others about IBM and its leadership in Web 2.0 technologies, such as those from our Lotus group.
Some quotes from the WSJ article I found interesting:
Some 26,000 IBM workers have registered blogs on the company's internal computer network where they opine on technology and their work.
Social networking is especially important for the 42% of IBM employees who regularly work from their homes or client locations rather than IBM facilities.
At most companies, public-relations managers and the human-resources department tightly control all electronic communications except for email and instant messaging. ... Not at IBM.
"Any employee can have a blog, a wiki or a podcast,..."
IBM owns more than 50 "islands" in Second Life and often uses them for lectures and group discussions.
Two years ago, IBM started Wiki Central to manage wikis for IBM groups. It now has more than 20,000 wikis online with more than 100,000 users.
Interested in learning more about Web 2.0? The last page of the deck above has a good set of links and resources; for example, here are 23 Things to know about Web 2.0 to get you started.
Use server virtualization, such as VMware, to consolidate x86-based servers
Use more efficient disk media, such as high-capacity SATA disk drives
Both are great recommendations, but why limit yourself to what EMC offers? Your x86-based machines are only a subset of your servers, and disk is only a subset of your storage. IBM takes a more holistic approach, looking at the entire data center.
VMware is a great product, and IBM is its top reseller. But in addition to VMware, there are other solutions for the x86-based servers, like Xen and Microsoft Virtual Server. IBM's System p, System i, and System z product lines all support logical partitioning.
To compare the energy effectiveness of server virtualization, consider a metric that can apply across platforms. For example, for an e-mail server, consider watts per mailbox. If you have, say, 15,000 users, you can calculate how many watts you are consuming to manage their mailboxes on your current environment, and compare that with running them on VMware, or logical partitions on other servers. Some people find it surprising that it is often more cost-effective, and power-efficient, to run workloads on mainframe logical partitions (LPARs) than a stack of x86 servers running VMware.
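As a rough, back-of-the-envelope illustration of such a comparison, here is a sketch in which all of the wattage figures are made-up placeholders; substitute your own measurements:

# Hypothetical watts-per-mailbox comparison for 15,000 users.
def watts_per_mailbox(total_watts, mailboxes):
    return float(total_watts) / mailboxes

mailboxes = 15000
print("Dedicated x86 servers:  %.2f W/mailbox" % watts_per_mailbox(12000, mailboxes))
print("Consolidated on VMware: %.2f W/mailbox" % watts_per_mailbox(5000, mailboxes))
print("Mainframe LPARs:        %.2f W/mailbox" % watts_per_mailbox(3500, mailboxes))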
More efficient Media
SATA and FATA disks support higher capacities, and run at slower RPM speeds, thus using fewer watts per terabyte. A terabyte stored on 73GB high-speed 15K RPM drives consumes more watts than the same terabyte stored on 500GB SATA drives. Chuck correctly identifies that tape is more power-efficient than disk, but then argues that paper is more power-efficient than tape. Paper, however, is not necessarily more efficient than tape.
ESG analyst Steve Duplessie divides data between Dynamic and Persistent. The best place to put dynamic data is on disk, and here is where the evaluation of FC/SAS versus SATA/FATA comes into play. Persistent data, on the other hand, can be stored on paper, microfiche, optical or tape media. All of these shelf-resident media consume no electricity, nor generate any heat that would require additional cooling.
A study by scientists at the Lawrence Berkeley National Laboratory titled High-Tech Means High-Efficiency: The Business Case for Energy Management in High-Tech Industries indicates that data centers consume 15 to 100 times more energy per square foot than traditional office space. Storing persistent data in traditional office space can therefore save a huge amount of energy. Steve Duplessie feels the ratio of dynamic to persistent data is 1:10 today, but is likely to grow to 1:100 in the near future, making energy-efficient storage of persistent data ever more important to our environment.
Data centers consume nearly 5,000 Megawatts in the USA alone, and 14,000 Megawatts worldwide. To put that in perspective, Hungary, where I was last week, can generate up to 8,000 Megawatts for the entire country (and was using 7,400 Megawatts last week as a result of its heat wave, causing grave concern).
Back in the 1990's, one of the insurance companies IBM worked with kept data on paper in manila folders, and armies of young adults on roller skates were dispatched throughout large warehouses of shelves to retrieve the appropriate folder in response to customer service inquiries. Digitizing this paper into electronic format greatly reduced the need for this warehouse space, as well as improved the time to retrieve the data.
A typical file storage box (12 inch x 12 inch x 18 inch) containing typed pages, single-spaced, double-sided, in 12 point font, could hold perhaps 100MB. The same box could hold a hundred or more LTO or 3592 tape cartridges, each storing hundreds of GB of information. That's roughly a million-to-one improvement in space-efficiency, and since shelf-resident cartridges need only standard office air conditioning and lighting, the watts-per-TB improvement is substantial as well.
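For a rough sanity check of that ratio, here is the arithmetic, with the cartridge count and capacity as assumptions (actual figures vary by cartridge generation and compression):

# Rough, assumption-laden arithmetic for the paper-versus-tape comparison above.
paper_box_mb = 100            # ~100 MB of typed pages per file box
cartridges_per_box = 100      # roughly 100 LTO/3592 cartridges in the same box
gb_per_cartridge = 500        # "hundreds of GB" each, assumed here as 500 GB
tape_box_mb = cartridges_per_box * gb_per_cartridge * 1024

print("Space-efficiency improvement: roughly %d to 1" % (tape_box_mb // paper_box_mb))
# With these assumptions, about 512,000 to 1 -- on the order of a million to one.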
To learn more about IBM's Project Big Green, watch this introductory video, which used Second Life for the animation.
Back in the late 1980's and early 1990's, I was one of the architects for DFSMS on z/OS, and customers always asked, "What is the clip level?", in other words, how big does a customer have to be to take advantage of DFSMS? We worked it out that if you had more than 100GB of disk data, DFSMS was worthwhile. DFSMS is now just standard by default, as everyone now easily has more than 100GB of data.
Later, in the late 1990's, I worked on Linux for System z. Again, customers asked how many Linux guest images would justify deploying applications on a mainframe. We worked it out to about 10 images: 10 Linux logical partitions, or Linux guests under z/VM, were enough to cost-justify the entire investment.
So what is the "clip level" for SANs? How many servers does an SMB need to justify deploying a SAN? IBM announced the new BladeCenter S designed specifically for mid-sized companies, 100 to 1000 employees, typically running 25 to 45 servers. However, I suspect companies with as few as 7-10 servers would probably benefit from deploying an FC or IP SAN.
What do you think? Send me a comment on how many servers should be the clip level.
A client complained that their tape drives were not compressing data as well as they used to. Investigating further reminded me of a scene from the 1970's television show "All in the Family", summarized well in American Scientist:
... in one episode of All in the Family, Archie Bunker's son-in-law, Mike, watches Archie put on his shoes and socks. Mike goes into a conniption when Archie puts the sock and shoe completely on one foot first, tying a bow to complete the action, while the other foot remains bare. To Mike, if I remember correctly, the right way to put on shoes and socks is first to put a sock on each foot and only then put the shoes on over them, and only in the same order as the socks. In an ironic development in his character, the politically liberal Mike shows himself to be intolerant of differences in how people do common little things, unaccepting of the fact that there is more than one way to skin a cat or put on one's shoes.
Both agreed that socks go first, then shoes, but the actual deployment was different.
In the case of this customer, a recent change was the use of "encryption" before the data reached the tape drive. With compression and encryption, you should always compress first, then encrypt. Compression algorithms rely on the frequency of data patterns; for example, the letter "E" appears more often in the English language than the letter "Z". However, once you encrypt data, those patterns are randomized, and any attempt to compress the data afterwards is wasted effort.
With IBM tape encryption on either the TS1120 or LTO4 tape drives, the data is compressed, then encrypted, when it arrives at the tape drive, so that the compression has some chance of achieving up to a 3:1 reduction. This compress-then-encrypt process can be done at the host as well, either by the application software or by a feature of the operating system.
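As a small illustration of why the order matters, the following Python sketch uses zlib for compression and random bytes as a stand-in for encrypted output; real tape drives use different compression algorithms, but the effect is the same:

import os, zlib

# Repetitive text: the kind of data that compresses well.
original = b"EEEEE The quick brown fox jumps over the lazy dog. " * 2000

compressed_first = zlib.compress(original)             # compress first, then encrypt: big savings
ciphertext_stand_in = os.urandom(len(original))         # stand-in for already-encrypted data
compressed_after = zlib.compress(ciphertext_stand_in)   # encrypt first, then compress: no savings

print("original size:          %d bytes" % len(original))
print("compress, then encrypt: %d bytes to encrypt" % len(compressed_first))
print("encrypt, then compress: %d bytes (no reduction)" % len(compressed_after))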
So, just as in the case of Archie Bunker and his son-in-law, there are many ways to deploy compression and encryption; just make sure you do them in the right order to get the most benefit.
This week I am off to Budapest, Hungary, for business meetings. It is the closest major city to IBM's manufacturing plant in a small town called Vac (rhymes with "knots"), where the IBM System Storage DS8000 series and SAN Volume Controller are assembled.