Wrapping up my week on successful uses of information, I thought I would discuss the visualization of data. Not just bar charts and pie charts, but how effective visual information can be when presented as multi-variable plots.
IBM's [Many Eyes] recognizes that 70 percent of the sensory input neurons in our brain are focused on visual inputs, and so we might recognize patterns if only data were presented in more interesting and visual representations.
In addition to the X/Y axes, variables can be represented by circle size and color. Here's an example plot of past US bailouts, with variables representing amount, year, company and industry. This plot does not include the 700 billion US dollar bailout currently under discussion.
This is part of IBM's Collaborative User Experience (CUE) research lab. The software is available Web 2.0-style at no charge: just upload your data set, and choose one of 16 different presentation styles.
These plots get even more interesting when you animate them over time. In 2006, Hans Rosling gathered data from the United Nations and other publicly funded sources and presented his findings at the TED conference. Here is the 20-minute video of that presentation (click on play at right), titled ["Debunking third-world myths with the best stats you've ever seen"], in which he debunks the myth that all countries fall into two distinct categories: Industrialized and Developing.
Amazingly, the data--as well as the software to analyze it--is available at the [GapMinder.org] Web site.
For more information on how you can deploy an information infrastructure that allows you to search, visualize and leverage the most value from your information, contact your local IBM representative or IBM Business Partner.
This post will focus on Information Compliance, the fourth and final part of the four-part series this week. I have received a few queries on my choice of sequence for this series: Availability, Security, Retention and Compliance.
Why not have them in alphabetical order? IBM avoids alphabetizing in one language, because then it may not be alphabetized when translated to other languages.
Why not have them in a sequence that spells out an easy-to-remember mnemonic, like "CARS"? Again, when translated to other languages, those mnemonics no longer work.
Instead, I worked with our marketing team for a more appropriate sequence, based on psychology and the cognitive bias of [primacy and recency effects].
Here's another short 2-minute video, on Information Compliance
Full disclosure: I am not a lawyer. The following will delve into areas related to government and industry regulations. Consult your risk officer or legal counsel to make sure any IT solution is appropriate for your country, your industry, or your specific situation.
IBM estimates there are over 20,000 regulations worldwide related to information storage and transmission.
For information availability, some industry regulations mandate a secondary copy a minimum distance away to protect against regional disasters like hurricanes or tsunamis. IBM offers Metro Mirror (up to 300km) and Global Mirror (unlimited distance) disk mirroring to support these requirements.
For information security, some regulations relate to privacy and prevention of unauthorized access. Two prominent ones in the United States are:
Health Insurance Portability and Accountability Act (HIPAA) of 1996
HIPAA regulates health care providers, health plans, and health care clearinghouses in how they handle the privacy of patients' medical records. These regulations apply whether the information is on film, on paper, or stored electronically. Obviously, electronic medical records are easier to keep private. Here is an excerpt from an article from [WebMD]:
"There are very good ways to protect data electronically. Although it sounds scary, it makes data more protected than current paper records. For example, think about someone looking at your medical chart in the hospital. It has a record of all that is happening -- lab results, doctor consultations, nursing notes, orders, prescriptions, etc. Anybody who opens it for whatever reason can see all of this information. But if the chart is an electronic record, it's easy to limit access to any of that. So a physical therapist writing physical therapy notes can only see information related to physical therapy. There is an opportunity with electronic records to limit information to those who really need to see it. It could in many ways allow more privacy than current paper records."
Gramm-Leach-Bliley Act (GLBA) of 1999
GLBA regulates the handling of sensitive customer information by banks, securities firms, insurance companies, and other financial service providers. Financial companies use tape encryption to comply with GLBA when sending tapes from one firm to another. IBM was the first to deliver tape drive encryption with the TS1120, and then later with the LTO-4 and TS1130 tape drives.
For information retention, there are a lot of regulations that deal with how information is stored, in some cases immutable to protect against unethical tampering, and when it can be discarded. Two prominent regulations in the United States are:
U.S. Securities and Exchange Commission (SEC) 17a-4 of 1997
In the past, the IT industry used the acronym "WORM," which stands for the "Write Once, Read Many" nature of certain media, like CDs, DVDs, optical and tape cartridges. Unfortunately, WORM does not apply to disk-based solutions, so IBM adopted the language from SEC 17a-4 that calls for storage that is "Non-Erasable, Non-Rewriteable," or NENR. This new umbrella term applies to disk-based solutions, as well as tape and optical WORM media.
SEC 17a-4 requires that broker/dealers and exchange members preserve all electronic communications relating to the business of their firm for a specific period of time. During this time, the information must not be erased or re-written.
Sarbanes-Oxley (SOX) Act of 2002
SOX was born in the wake of [Enron and other corporate scandals]. It governs the way that financial information is stored, maintained and presented to investors, and disciplines those who break its rules. It applies only to public companies, i.e. those that offer their securities (stock shares, bonds, liabilities) for sale to the public through a listing on a U.S. exchange, such as NASDAQ or NYSE.
SOX focuses on preventing CEOs and other executives from tampering with financial records. To meet compliance, companies are turning to the [IBM System Storage DR550], which provides Non-Erasable, Non-Rewriteable (NENR) storage for financial records. Unlike competitive products like EMC Centera, which function mostly as space-heaters on the data center floor once they fill up, the DR550 can be configured as a blended disk-and-tape storage system, so that the most recent, most likely to be accessed data remains on disk, while the older, least likely to be accessed data is moved automatically to less expensive, more environmentally friendly "green" tape media.
Did SOX hurt the United States' competitiveness? Critics feared that these new regulations would discourage new companies from going public. Ernst & Young found these fears did not come true, and published a study, [U.S. Record IPO Activity from 2006 Continues in 2007]. In fact, the improved confidence that SOX has given investors has given rise to similar legislation in other parts of the world: Euro-SOX (the European Union Investor Protection Act) and J-SOX (the Financial Instruments and Exchange Law) in Japan.
For those who only read the first and last paragraphs of each post, here is my recap: Information Compliance is ensuring that information is protected against regional disasters, unauthorized access, and unethical tampering, as required to meet industry and government regulations. Such regulations often apply whether the information is stored on traditional paper or film media, but compliance can often be handled more cost-effectively when the information is stored electronically. Appropriate IT governance can help maintain investor confidence.
In Monday's post, [IBM Information Infrastructure launches today], I explained how this strategic initiative fit into IBM's New Enterprise Data Center vision. The launch was presented at the IBM Storage and Storage Networking Symposium to over 400 attendees in Montpellier, France, with corresponding standing-room-only crowds in New York and Tokyo.
This post will focus on Information Retention, the third of the four-part series this week.
Here's another short 2-minute video, on Information Retention
Let's start with some interesting statistics. Fellow blogger Robin Harris on his StorageMojo blog has an interesting post, [Our changing file workloads], which discusses the findings of a study titled "Measurement and Analysis of Large-Scale Network File System Workloads" [14-page PDF]. This paper was a collaboration between researchers from the University of California Santa Cruz and our friends at NetApp. Here's an excerpt from the study:
Compared to Previous Studies:
Both of our workloads are more write-oriented. Read to write byte ratios have significantly decreased.
Read-write access patterns have increased 30-fold relative to read-only and write-only access patterns.
Most bytes are transferred in longer sequential runs. These runs are an order of magnitude larger.
Most bytes transferred are from larger files. File sizes are up to an order of magnitude larger.
Files live an order of magnitude longer. Fewer than 50 percent are deleted within a day of creation.
Files are rarely re-opened. Over 66 percent are re-opened once and 95 percent fewer than five times.
File re-opens are temporally related. Over 60 percent of re-opens occur within a minute of the first.
A small fraction of clients account for a large fraction of file activity. Fewer than 1 percent of clients account for 50 percent of file requests.
Files are infrequently shared by more than one client. Over 76 percent of files are never opened by more than one client.
File sharing is rarely concurrent and sharing is usually read-only. Only 5 percent of files opened by multiple clients are concurrent and 90 percent of sharing is read-only.
Most file types do not have a common access pattern.
Why are files being kept ten times longer than before? Because the information still has value:
Provide historical context
Gain insight to specific situations, market segment demographics, or trends in the greater marketplace
Help innovate new ideas for products and services
Make better, smarter decisions
National Public Radio (NPR) had an interesting piece the other day. By analyzing old photos, a researcher for Cold War Analysis was able to identify an interesting [pattern for Russian presidents]. (Be sure to listen to the 3-minute audio to hear a hilarious song about the results!)
Which brings me to my own collection of "old photos". I bought my first digital camera in the year 2000, and have taken over 15,000 pictures since then. Before that, I used a 35mm film camera, getting the negatives developed and prints made. Some of these date back to my years in high school and college. I have a mix of sizes, from 3x5, 4x6 and 5x7 inches, and sometimes I got double prints. Only a small portion are organized into scrapbooks. The rest are in envelopes, prints and negatives, in boxes taking up half of the linen closet in my house. Following the success of the [Library of Congress using flickr], I decided the best way to organize these was to have them digitized first. There are several ways to do this.
The first option, a flatbed scanner, is just too time consuming: lift the lid, place one or a few prints face down on the glass, close the lid, press the button, and repeat. I estimate 70 percent of my photos are in [landscape orientation], and 30 percent in [portrait mode]. I can either spend extra time orienting each photo correctly on the glass, or rotate the digital image later.
I was pleased to learn that my Fujitsu ScanSnap S510 sheet-feed scanner can take in a short stack (a dozen or so) of photos, and generate a JPEG file for each. I can select 150, 300 or 600 dpi, and five levels of JPEG compression. All the photos feed in portrait mode, which I can then rotate later on the computer once digitized. A command line tool called [ImageMagick] can help automate the rotations. While I highly recommend the ScanSnap scanner, this is still a time-consuming process for thousands of photos.
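To give a flavor of that automation, here is a minimal sketch that drives ImageMagick's mogrify command from Python. The folder name and the fixed 90-degree angle are my own illustrative assumptions; in practice each photo may need a different rotation.

```python
# A minimal sketch of batch-rotating scanned JPEGs with ImageMagick's
# "mogrify" command. The folder and the fixed 90-degree angle are
# illustrative assumptions -- adjust per photo as needed.
import subprocess
from pathlib import Path

SCAN_DIR = Path("scanned_photos")   # hypothetical folder of portrait-mode scans

for jpg in sorted(SCAN_DIR.glob("*.jpg")):
    # Rotate the image 90 degrees clockwise, in place.
    subprocess.run(["mogrify", "-rotate", "90", str(jpg)], check=True)
    print(f"rotated {jpg.name}")
```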
"The best way to save your valuable photos may be by eliminating the paper altogether. Consider making digital images of all your photos."
Here's how ScanMyPhotos.com works: you ship your prints (or slides, or negatives) to their facility in Irvine, California. They have a huge machine that scans them all at 300 dpi, no compression, and they send back your photos and a DVD containing digitized versions in JPEG format, all for only 50 US dollars plus shipping and handling per thousand photos. I don't think I could even hire someone locally to run my scanner for that!
The deal got better when I contacted them. For people like me with accounts on Facebook, flickr, MySpace or Blogger, they will [scan your first 1000 photos for free] (plus shipping and handling). I selected a thousand 4x6" photos from my vast collection, organized them into eight stacks with rubber bands, and sent them off in a shoe box. The photos get scanned in landscape mode, so I spent about four hours preparing what I sent them, making sure they were all face up, with the top of each picture oriented either to the top or left edge. For the envelopes that had double prints, I "deduplicated" them so that only one set got scanned.
The box weighed seven pounds, and cost about 10 US dollars to send from Tucson to Irvine via UPS on Tuesday. Everything came back the following Monday, all my photos plus the DVD, for 20 US dollars shipping and handling. Each digital image is about 1.5MB, roughly 1800x1200 pixels, so the whole batch easily fit on a single DVD. The quality is the same as if I had scanned them at 300 dpi on my own scanner, and comparable to a 2-megapixel camera on most cell phones. Certainly not the high-res photos I take with my Canon PowerShot, but suitable enough for email or Web sites. So, for about 30 US dollars, I got my first batch of 1000 photos scanned.
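If you want to sanity-check those numbers, here is a quick back-of-the-envelope calculation; the figures simply restate what is above.

```python
# Back-of-the-envelope check of the scan numbers above (approximate figures).
dpi = 300
width_px, height_px = 6 * dpi, 4 * dpi          # 4x6 inch print scanned at 300 dpi
megapixels = width_px * height_px / 1_000_000
avg_file_mb = 1.5
photos = 1000
total_gb = photos * avg_file_mb / 1000

print(f"{width_px} x {height_px} pixels, about {megapixels:.1f} megapixels per scan")
print(f"{photos} photos x {avg_file_mb} MB = {total_gb:.1f} GB, well under a 4.7 GB DVD")
```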
ScanMyPhotos.com offers a variety of extra-cost options, like rotating each file to the correct landscape or portrait orientation, color correction, exact sequence order, hosting the photos online for 30 days to share with friends and family, and extra copies of the DVD. All of these represent a trade-off between having them do it for me for an additional fee, or me spending time doing it myself--either beforehand in the preparation, or afterwards managing the digital files--so I can appreciate that.
Perhaps the weirdest option was having your original box returned for an extra $9.95. If you don't have a huge collection of empty shoe boxes in your garage, you can buy a similarly sized cardboard box for only $3.49 at the local office supply store, so I don't understand this one. The box they return all your photos in can easily be used for the next batch.
I opted not to get any of these extras. The one option I think they should add would be to just discard the prints, and send back only the DVD itself. Or better yet, discard the prints, and email me an ISO file of the DVD that I can burn myself on my own computer. Why pay extra shipping to send back the entire box of prints, just so that I can dump them in the trash myself? I will keep the negatives, in case I ever need to re-print at high resolution.
Overall, I am thoroughly delighted with the service, and will now pursue sending the rest of my photos in for processing, and reclaim my linen closet for more important things. Now that I know that a thousand 4x6 prints weigh seven pounds, I can estimate how many photos I have left to do, and decide which discount bulk option to choose.
With my photos digitized, I will be able to do all the things that IBM talks about with Information Retention:
Place them on an appropriate storage tier. I can keep them on disk, tape or optical media.
Easily move them from one storage tier to another. Copying digital files in bulk is straightforward, and as new technologies develop, I can refresh the bits onto new media, to avoid the "obsolescence of CDs and DVDs" discussed in this article in [PC World].
Share them with friends and family, either through email, on my TiVo (yes, my TiVo is networked to my Mac and PC and has the option to do this!), or by uploading them to a photo-oriented service like [Kodak Gallery or flickr].
Keep multiple copies in separate locations. I could easily burn another copy of the DVD myself and store it in my safe deposit box or my desk at work. With all of the regional disasters like hurricanes, an alternative is to back up all your files, including your digitized photos, with an online backup service like [IBM Information Protection Services], from last year's acquisition of Arsenal Digital.
If the prospect of preserving my high school and college memories for the next few decades seems extreme, consider that the [Long Now Foundation] is focused on retaining information for centuries. They are even suggesting that we start representing years with five digits, e.g. 02008, to handle the deca-millennium bug which will come into effect 8,000 years from now. IBM researchers are also working on [long-term preservation technologies and open standards] to help in this area.
For those who only read the first and last paragraphs of each post, here is my recap: Information Retention is about managing [information throughout its lifecycle], using policy-based automation to help with the placement, movement and expiration. An "active archive" of information serves to help gain insight, innovate, and make better decisions. Disk, tape, and blended disk-and-tape solutions can all play a part in a tiered information infrastructure for long-term retention of information.
In Monday's post, [IBM Information Infrastructure launches today], I explained how this strategic initiative fit into IBM's New Enterprise Data Center vision. For you podcast fans, IBM Vice Presidents Bob Cancilla (Disk Systems), Craig Smelser (Storage and Security Software), and Mike Riegel (Information Protection Services) highlight some of the new products and offerings in this 12-minute recording:
This post will focus on Information Security, the second of the four-part series this week.
Here's another short 2-minute video, on Information Security
Security protects information against both internal and external threats.
For internal threats, most focus on whether person A has a "need-to-know" about information B. Most of the time, this is fairly straightforward. However, sometimes production data is copied to support test and development efforts. Here is the typical scenario: the storage admin copies production data that contains sensitive or personal information to a new copy and authorizes software engineers or testers full read/write access to this data. In some cases, the engineers or testers may be employees; other times they might be hired contractors from an outside firm. In any case, they may not be authorized to read this sensitive information. To solve this, IBM announced the [IBM Optim Data Privacy Solution] for a variety of environments, including Siebel and SAP enterprise resource planning (ERP) applications.
I found this solution quite clever. The challenge is that production data is interrelated and typically lives inside [relational databases]. For example, one record in one database might have a name and serial number, and then that serial number is used to reference a corresponding record in another database. The IBM Optim Data Privacy Solution applies a range of "masks" to transform complex data elements such as credit card numbers, email addresses and national identifiers, while retaining their contextual meaning. The masked results are fictitious, but consistent and realistic, creating a "safe sandbox" for application testing. This method can mask data from multiple interrelated applications to create a "production-like" test environment that accurately reflects end-to-end business processes. The testers get data they can use to validate their changes, and the storage admins can rest assured they have not exposed anyone's sensitive information.
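To make the idea of consistent masking concrete, here is a tiny Python sketch of the general technique, deterministic pseudonymization, so that the same real value always maps to the same fictitious value across related tables. This is purely illustrative and is not the actual Optim implementation; the secret key and table layouts are hypothetical.

```python
# A sketch of the general idea behind consistent data masking -- not the
# actual IBM Optim implementation. The same input value always masks to the
# same fictitious value, so references between tables stay consistent.
import hashlib
import hmac

SECRET = b"masking-key-kept-by-the-admin"   # hypothetical secret, never shared with testers

def mask_serial(serial: str) -> str:
    """Deterministically map a real serial number to a fictitious one."""
    digest = hmac.new(SECRET, serial.encode(), hashlib.sha256).hexdigest()
    return "TEST-" + digest[:8].upper()

customers = [{"name": "Jane Doe", "serial": "123-45-6789"}]
orders = [{"serial": "123-45-6789", "amount": 250.00}]

masked_customers = [{**c, "name": "Customer " + mask_serial(c["serial"])[-4:],
                     "serial": mask_serial(c["serial"])} for c in customers]
masked_orders = [{**o, "serial": mask_serial(o["serial"])} for o in orders]

# The masked serial in "orders" still matches the masked serial in "customers",
# so the test environment keeps its referential integrity.
assert masked_customers[0]["serial"] == masked_orders[0]["serial"]
print(masked_customers, masked_orders)
```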
Beyond just who has the "need-to-know", we might also be concerned with who is "qualified-to-act". Most systems today have both authentication and authorization support. Authentication determines that you are who you say you are, through the knowledge of unique userid/password combinations, or other credentials. Fingerprints, eye retinal scans and other biometrics look great in spy movies, but they are not yet widely used. Instead, storage admins have to worry about dozens of different passwords on different systems. One of the many preview announcements made by Andy Monshaw in Monday's launch was that IBM is going to integrate the features of [Tivoli Access Manager for Enterprise Single Sign-On] into IBM's Productivity Center software, which will be renamed "IBM Tivoli Storage Productivity Center". You enter one userid/password, and you will not have to enter the individual userid/password of each managed storage device.
Once a storage admin is authenticated, they may or may not be authorized to read or act on certain information. Productivity Center offers role-based authorization, so that people can be identified by their roles (tape operator, storage administrator, DBA), which then determines what they are authorized to see, read, or act upon.
For external threats, you need to protect data both in-flight and at-rest. In-flight deals with data that travels over a wire, or wirelessly through the air, from source to destination. When companies have multiple buildings, the transmissions can be encrypted at the source and decrypted on arrival. The bigger threat is data at-rest, with hackers and cyber-thieves looking to download specific content, like personally identifiable information, financial information, and other sensitive data.
IBM was the first to deliver an encrypting tape drive, the TS1120. The encryption process is handled right at the drive itself, eliminating the burden of encryption from the host processing cycles, and eliminating the need for specialized hardware sitting between server and storage system. Since then, we have delivered encryption on the LTO-4 and TS1130 drives as well.
When disk drives break or are decommissioned, the data on them may still be accessible. Customers have a tough decision to make when a disk drive module (DDM) stops working:
Send it back to the vendor or manufacturer to have it replaced, repaired or investigated, potentially exposing sensitive information.
Keep the broken drive, forfeit any refund or free replacement, and then physically destroy the drive. There are dozens of videos on [YouTube.com] on different ways to do this!
The launch previewed the [IBM partnership with LSI and Seagate] to deliver encryption technology for disk drives, known as "Full Drive Encryption" or FDE. Having all data encrypted on all drives, without impacting performance, eliminates having to decide which data gets encrypted and which doesn't. With data safely encrypted, companies can now send in their broken drives for problem determination and replacement. Anytime you can apply a consistent solution across everything, without human intervention and case-by-case decision making, the lower the impact will be. This was the driving motivation behind both disk and tape drive encryption.
(Early in my IBM career, some lawyers decided we needed to add a standard 'paragraph' to the copyright text in the upper comment section of our software modules, so we had a team meeting on this. The lawyer who presented to us estimated that perhaps only 20 to 35 percent of the modules needed to be updated with this paragraph, and taught us what to look for to decide whether or not a module needed to be changed. My team argued how tedious this was going to be: it would take time to open up each module, evaluate it, and make the decision, and with thousands of modules involved the process could take weeks. The fact that this was going to take us weeks did not seem to concern our lawyer one bit; it was just the cost of doing business. Finally, I asked if it would be legal to just add the standard paragraph to ALL the modules without any analysis whatsoever. The lawyer was stunned. There was no harm adding this paragraph to all the modules, he said, but that would be 3-5x more work, so why would I even suggest it? Our team laughed, recognizing immediately that it was the fastest way to get it done. One quick program updated all modules that afternoon.)
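In the same spirit, here is a sketch of what that "one quick program" could look like today in Python; the directory, file extension and legal text are hypothetical placeholders, not the actual modules or wording involved.

```python
# A sketch of the "one quick program" approach: prepend a standard paragraph
# to every module in a source tree. The directory, file extension and legal
# text below are hypothetical placeholders.
from pathlib import Path

STANDARD_PARAGRAPH = "* Licensed Materials - Property of IBM. (standard legal text here)\n"
SOURCE_TREE = Path("modules")        # hypothetical root of the source tree

for module in SOURCE_TREE.rglob("*.c"):
    text = module.read_text()
    if STANDARD_PARAGRAPH in text:   # already updated, skip it
        continue
    # Simplified: just prepend the paragraph to the top of the file.
    module.write_text(STANDARD_PARAGRAPH + text)
    print(f"updated {module}")
```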
To manage these keys, IBM previewed the Tivoli Key Lifecycle Manager (TKLM). This software helps automate the management of encryption keys throughout their lifecycle, to help ensure that encrypted data on storage devices cannot be compromised if lost or stolen. It will apply to both disk and tape encryption, so that one system will manage all of the encryption keys in your data center.
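For readers who like to see the moving parts, here is a small sketch of the general key-wrapping pattern that a centralized key manager enables; it uses the third-party Python cryptography package and is purely illustrative, not TKLM's actual protocol.

```python
# A sketch of the general key-wrapping pattern a key manager enables -- not
# TKLM's actual protocol. A master key stays with the key manager; each tape
# or disk gets its own data key, stored only in wrapped (encrypted) form.
from cryptography.fernet import Fernet

master_key = Fernet.generate_key()          # held only by the key manager
key_manager = Fernet(master_key)

data_key = Fernet.generate_key()            # per-cartridge (or per-drive) key
wrapped_data_key = key_manager.encrypt(data_key)   # safe to store with the media

# The drive encrypts data with the data key...
ciphertext = Fernet(data_key).encrypt(b"quarterly financial records")

# ...and later, only someone with access to the key manager can unwrap the
# data key and read the cartridge.
recovered_key = key_manager.decrypt(wrapped_data_key)
print(Fernet(recovered_key).decrypt(ciphertext))
```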
For those who only read the first and last paragraphs of each post, here is my recap: Information Security is intended as an end-to-end capability to protect against both internal and external threats, restricting access only to those who have a "need-to-know" or are "qualified-to-act". Security approaches like "single sign-on" and encryption that applies to all tapes and all disks in the data center greatly simplify the deployment.
In yesterday's post, [IBM Information Infrastructure launches today], I explained how this strategic initiative fit into IBM's New EnterpriseData Center vision. For those who prefer audio podcasts, here is Marissa Benekos interviewing Andy Monshaw, IBM General Manager of IBM System Storage.
This post will focus on Information Availability, the first of the four-part series this week.
Here's another short 2-minute video, on Information Availability
I am not in the marketing department anymore, so I have no idea how much IBM spent to get these videos made, but I'd hate for the money to go to waste. I suspect the only way they will get viewed is if I include them in my blog. I hope you like them.
As with many IT terms, "availability" might conjure up different meanings for different people.
Some can focus on the pure mechanics of delivering information. An information infrastructure involves all of the software, servers, networks and storage needed to bring information to the application or end user, so all of the links in the chain must be highly available: software should not crash, servers should have "five nines" (99.999%) uptime, networks should be redundant, and storage should handle the I/O request with sufficient performance. For tape libraries, the tape cartridge must be available, robotics are needed to fetch the tape, and a drive must be available to read the cartridge. All of these factors represent the continuous operations and high availability features of business continuity.
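To put "five nines" in perspective, here is a quick calculation of how much downtime each level of availability allows in a year.

```python
# What the various "nines" of availability allow in downtime per year.
minutes_per_year = 365 * 24 * 60

for nines, availability in [("three nines", 0.999),
                            ("four nines", 0.9999),
                            ("five nines", 0.99999)]:
    downtime = minutes_per_year * (1 - availability)
    print(f"{nines} ({availability:.3%}): about {downtime:.1f} minutes of downtime per year")
```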
In addition to the IT equipment, you need to make sure the facilities that support that equipment, such as power and cooling, are also available. Independent IT analyst Mark Peters from Enterprise Strategy Group (ESG) summarizes his shock at the findings of a recent [survey commissioned by Emerson Network Power] in his post [Backing Up Your Back Up]. Here is an excerpt:
"The net take-away is that the majority of SMBs in the US do not have back-up power systems. As regional power supplies get more stretched in many areas, the possibility of power outages increases and obviously many SMBs would be vulnerable. Indeed, while the small business decision makers questioned for the survey ranked such power outages ahead of other threats (fires, government regulation, weather, theft and employee turnover) only 39% had a back-up power system. Yeah, you could say, but anything actually going wrong is unlikely; but apparently not, as 79% of those surveyed had experienced at least one power outage during 2007. Yeah, you might say, but maybe the effects were minor; again, apparently not, since 42% of those who'd had outages had to actually close their businesses during the longest outages. The DoE says power outages cost $80 billion a year and businesses bear 98% of those costs."
Others might be more concerned about outages resulting from planned and unplanned downtime. Storage virtualization can help reduce planned downtime, by allowing data to be migrated from one storage device to another without disrupting the application's ability to read and write data. The latest "Virtual Disk Mirroring" (VDM) feature of the IBM System Storage SAN Volume Controller takes it one step further, providing high availability even for entry-level and midrange disk systems managed by the SVC. For unplanned downtime, IBM offers a complete range of support, from highly available clusters to two-site and three-site disaster recovery support and application-aware data protection through IBM Tivoli Storage Manager.
Many outages are caused by human error, and in many cases it is the human factor that prevents quick resolution. Storage admins are unable to isolate the failing component, identify the configuration, or provide the appropriate problem determination data to the technical team ready to offer support and assistance. For this, IBM TotalStorage Productivity Center software, and its hardware version, the IBM System Storage Productivity Center, can help reduce outage time and increase information availability. It can also provide automation to predict or give early warning of impending conditions that could get worse if not taken care of.
But perhaps yet another take on information availability is the ability to find and communicate the right information to the right people at the right time. Recently, Google announced a historic milestone: their search engine now indexes over [one trillion Web pages]! Google and other search engines have changed the level of expectations for finding information. People ask why they can find information on the internet so quickly, yet it takes weeks for companies to respond to a judge for an e-discovery request.
Lastly, the team at IBM's [Eightbar blog] pointed me to Mozilla Labs' Ubiquity project for their popular Firefox browser. This project aims to help people communicate information in a more natural way, rather than through unfriendly URL links in an email. It is still beta, of course, but helps show what "information availability" might look like in the near future. Here is a 7-minute demonstration:
For those who only read the first and last paragraphs of each post, here is my recap: Information Availability includes Business Continuity and Data Protection to facilitate quick recovery, storage virtualization to maximize performance and minimize planned downtime, infrastructure management and automation to reduce human error, and the ability to find and communicate information to others.
Earlier this year, IBM launched its [New Enterprise Data Center vision]. The average data center was built 10-15 years ago, at a time when the World Wide Web was still in its infancy, some companies were deploying their first storage area network (SAN) and email system, and if you asked anyone what "Google" was, they might tell you it was ["a one followed by a hundred zeros"]!
Full disclosure: Google, the company, just celebrated its [10th anniversary] yesterday, and IBM has partnered with Google on a variety of exciting projects. I am employed by IBM, and own stock in both companies.
In just the last five years, we saw a rapid growth in information, fueled by Web 2.0 social media, email, mobile hand-held devices, and the convergence of digital technologies that blurs the lines between communications, entertainment and business information. This explosion in information is not just "more of the same", but rather a dramatic shift from predominantly databases for online transaction processing to mostly unstructured content. IT departments are no longer just the "back office" recording financial transactions for accountants, but now also take on a more active "front office" role. For a growing number of industries, information technology plays a pivotal role in generating revenue, making smarter business decisions, and providing better customer service.
IBM felt a new IT model was needed to address this changing landscape, so IBM's New Enterprise Data Center vision has these five key strategic initiatives:
Highly virtualized resources
Business-driven Service Management
Green, Efficient, Optimized facilities
In February, IBM announced new products and features to support the first two initiatives, including the highly virtualized capability of the IBM z10 EC mainframe, and related business resiliency features of the [IBM System Storage DS8000 Turbo] disk system.
In May, IBM launched its Service Management strategic initiative at the Pulse 2008 conference. I was there in Orlando, Florida at the Swan and Dolphin resort to present to clients. You can read my three posts:[Day 1; Day 2 Main Tent; Day 2 Breakout sessions].
In June, IBM launched its fourth strategic initiative "Green, Efficient and Optimized Facilities" with [Project Big Green 2.0], which included the Space-Efficient Volume (SEV) and Space-Efficient FlashCopy (SEFC) capabilities of the IBM System Storage SAN Volume Controller (SVC) 4.3 release. Fellow blogger and IBM Master Inventor Barry Whyte (BarryW) has three posts on his blog about this: [SVC 4.3.0 Overview; SEV and SEFC detail; Virtual Disk Mirroring and More]
Some have speculated that the IBM System Storage team seemed to be on vacation the past two months, with few press releases and little or no fanfare about our July and August announcements, and not responding directly to critics and FUD in the blogosphere. It was because we were holding them all for today's launch, taking our cue from a famous perfume commercial:
"If you want to capture someone's attention -- whisper."
My team and I were actually quite busy at the [IBM Tucson Executive Briefing Center]. In between doing our regular job talking to excited prospects and clients, we trained sales reps and IBM Business Partners, wrote certification exams, and updated marketing collateral. Fortunately, competitors stopped promoting their own products to discuss and demonstrate why they are so scared of what IBM is planning. The fear was well justified. Even a few journalists helped raise the word-of-mouth buzz and excitement level. A big kiss to Beth Pariseau for her article in [SearchStorage.com]!
(Last week we broke radio silence to promote our technology demonstration of 1 million IOPS using Solid State Disk, just to get the huge IBM marketing machine oiled up and ready for today.)
Today, IBM General Manager Andy Monshaw launched the fifth strategic initiative, [IBM Information Infrastructure], at the [IBM Storage and Storage Networking Symposium] in Montpellier, France. Montpellier is one of the six locations of our New Enterprise Data Center Leadership Centers launched today. The other five are Poughkeepsie, Gaithersburg, Dallas, Mainz and Boeblingen, with more planned for 2009.
Although IBM has been using the term "information infrastructure" for more than 30 years, it might be helpful to define it for you readers:
“An information infrastructure comprises the storage, networks, software, and servers integrated and optimized to securely deliver information to the business.”
In other words, it's all the "stuff" that delivers information from the magnetic surface recording of the disk or tape media to the eyes and ears of the end user. Everybody has an information infrastructure already; some are just more effective than others. For those of you not happy with yours, IBM has the products, services and expertise to help with your data center transformation.
IBM wants to help its clients deliver the right information to the right people at the right time, to get the most benefits of information, while controlling costs and mitigating risks. There might be more than a dozen ways to address the challenges involved, but IBM's Information Infrastructure strategic initiative focuses on four key solution areas:
Last, but not least, I would like to welcome to the blogosphere IBM's newest blogger, Moshe Yanai, formerly the father of the EMC Symmetrix and now leading the IBM XIV team. Already from his first post on his new [ThinkStorage blog], I can tell he is not going to pull any punches either.
"... firms don't have the detailed electricity consumption data they need to implement energy efficiency initiatives. What they have is an energy bill for a facility."
A common adage is that "you can't manage what you don't measure." IBM has beefed up the ability to measure and monitor electricity usage, not just IBM servers and storage, but also non-IBM IT equipment and facilities infrastructure like UPS, HVAC, lighting and security alarm systems.
Hitch Green IT to data centre refurbishment projects
"Energy savings alone don't constitute a business case to overhaul an existing data centre, undertake a refurbishment project or build a new Green Data Centre."
Either CIOs don't have the electricity measurements to perform an ROI or cost/benefit analysis, or the facilities folks who sense that improvements are possible may not see the big picture compared to other business investments. Instead, IBM seeks to incorporate IT energy efficiency best practices into existing business plans for data center improvements.
Tackle corporate energy efficiency and emissions
"... a strategy discussion and corporate carbon diagnostic are the start point to stimulate demand. Not a cold sell on Green IT."
Project Big Green is more than just an IT project. IBM's Global Business Services consultants have transformed it into a Carbon Management Strategy encompassing employees, information, property, the supply chain, customers and products. For companies that are looking at reducing their carbon footprint overall, this approach makes a lot of sense.
Differentiate offerings by industry and country
"The inability to get more power into urban data centres has driven demand for energy efficiency by banks, telcos and outsourcers."
Different countries, and different industries, have different priorities. Europe, and in particular the UK, focuses on carbon emissions as much as energy costs due to mandatory emissions caps. For data centers in the largest cities, an increase in electrical supply may not be available, or be too expensive, and the time it takes to build a new data center elsewhere, typically 12-18 months, may not be soon enough to handle current business growth rates. Energy efficiency projects can help buy them some time.
Plan for slow customer adoption
"IBM is developing the market for IT energy efficiency and carbon management services. And its very much an early stage market today."
IBM is frequently on the forefront of new technologies and emerging markets, so it is no surprise that we are used to dealing with slow customer adoption. The combination of high energy costs, tightening regulations and stakeholder pressure will drive the market. Larger companies and government organizations that have the means to make these necessary changes will probably lead the adoption curve.
Prepare for investment barriers to IT energy efficiency
"With the low hanging fruit picked, IBM has found that there is an unwillingness to spend money on planting a new orchard."
IBM has helped IT clients with quick fixes offering rapid payback, such as adjusting data center temperature and humidity to reduce energy consumption. But in the current economic environment, persuading firms to install variable speed fans with a 6-year payback is much tougher. Again, this is a matter of CIOs and other upper-level management balancing financial investment decisions with some foresight and vision for the future.
Project Big Green launched back in May 2007, and last month IBM renewed its commitment with Project Big Green 2.0, continuing to enhance product and service offerings in support of this much-needed area. And while the leaders at the G8 Summit will discuss a variety of topics, three top "green" issues on their agenda are rising energy costs, global climate change and controlling carbon emissions.
Based on this success, and perhaps because I am also fluent in Spanish, I was asked to help with Proyecto Ceibal, the team for OLPC Uruguay. Normally the XS school server resides at the school location itself, so that even if the internet connection is disrupted or limited, the school kids can continue to access each other and the web cache content until the internet connection is resumed. However, with a diverse development team with people in the United States, Uruguay, and India, we first looked to Linux hosting providers that would agree to provide free or low-cost monthly access. We spent (make that "wasted") the month of May investigating. Most that I talked to were not interested in having a customized Linux kernel on non-standard hardware on their shop floor, and wanted instead to offer their own standard Linux build on existing standard servers managed by their own system administrators, or were not interested in providing it for free. Since the XS-163 kernel is customized for the x86 architecture, it is one of those exceptions where we could not host it on an IBM POWER or mainframe as a virtual guest.
This got picked up as an [idea] for Google's [Summer of Code], and we are mentoring Tarun, a 19-year-old student, to act as lead software developer. However, summer was fast approaching, and we wanted this ready for the next semester. In June, our project leader, Greg, came up with a new plan: build a machine and have it connected at an internet service provider that would cover the cost of bandwidth and be willing to accept remote administration. We found a volunteer organization to cover this -- thank you, Glen and Vicki!
We found a location, so the request to me sounded simple enough: put together a PC from commodity parts that meets the requirements of the customized Linux kernel, the latest release being called [XS-163]. The server would have two disk drives, three Ethernet ports, and 2GB of memory, and be installed with the customized XS-163 software, SSHD for remote administration, the Apache web server, the PostgreSQL database and the PHP programming language. Of course, the team wanted this for as little cost as possible, and for me to document the process so that it could be repeated elsewhere. Some stretch goals included having a dual-boot with Debian 4.0 Etch Linux for development/test purposes, an alternative database such as MySQL for testing, a backup procedure, and a Recovery-DVD in case something goes wrong.
Some interesting things happened:
The XS-163 is shipped as an ISO file representing a LiveCD bootable Linux that will wipe your system clean and lay down the exact customized software for a one-drive, three-Ethernet-port server. Since it is based on Red Hat's Fedora 7 Linux base, I found it helpful to install that instead, and experiment with moving sections of code over. This is similar to geneticists extracting the DNA from the cell of a pit bull and putting it into the cell of a poodle. I would not recommend this for anyone not familiar with Linux.
I also experimented with modifying the pre-built XS-163 CD image by cracking open the squashfs, hacking the contents, and then putting it back together and burning a new CD. This provided some interesting insight, but in the end I was able to do it all from the standard XS-163 image.
Once I figured out the appropriate "scaffolding" required, I managed to proceed quickly, with running versions of XS-163, plain vanilla Fedora 7, and Debian 4 in a multi-boot configuration.
The BIOS "raid" capability was really more like BIOS-assisted RAID for Windows operating system drivers. This"fake raid" wasn't supported by Linux, so I used Linux's built-in "software raid" instead, which allowed somepartitions to be raid-mirrored, and other partitions to be un-mirrored. Why not mirror everything? With two160GB SATA drives, you have three choices:
No RAID, for a total space of 320GB
RAID everything, for a total space of 160GB
Tiered information infrastructure: use RAID for some partitions, but not all.
The last approach made sense, as a lot of the data is cached web page images, easily retrievable from the internet. This also allowed some "scratch space" for downloading large files and so on. For example, 90GB mirrored containing the OS images, settings and critical applications, plus 70GB on each drive for scratch and web cache, results in a total of 230GB of disk space, a 43 percent improvement over an all-RAID solution.
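Here is the quick arithmetic behind that figure, using the drive sizes stated above; it comes out to roughly the 43 percent improvement mentioned.

```python
# Checking the tiered-RAID arithmetic above (drive sizes as stated in the post).
drive_gb = 160
mirrored_gb = 90                      # RAID-1 protected: OS images, settings, critical apps
scratch_gb_per_drive = 70             # unprotected scratch / web cache on each drive

all_raid_total = drive_gb             # mirror everything: usable space of one drive
tiered_total = mirrored_gb + 2 * scratch_gb_per_drive

print(f"all-RAID usable space: {all_raid_total} GB")
print(f"tiered usable space:   {tiered_total} GB")
print(f"improvement: {100 * (tiered_total - all_raid_total) / all_raid_total:.0f} percent")
```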
While [Linux LVM2] provides software-based "storage virtualization" similar to the hardware-based IBM System Storage SAN Volume Controller (SVC), it was a bad idea putting the different "root" directories of my many OS images on it. Linux, like most operating systems, expects things to be in the same place where it last shut down, but in a multi-boot environment you might boot the first OS, move things around, and then when you try to boot the second OS, it doesn't work anymore, or corrupts what it does find, or hangs with a "kernel panic". In the end, I decided to use plain RAID (non-LVM) partitions for the root directories, and only use LVM2 for data that is not needed at boot time.
While they are both Linux, Debian and Fedora were different enough to cause me headaches. Settings were different, parameters were different, file directories were different. Not quite as religious as MacOS-versus-Windows, but you get the picture.
During this time, the facility was out getting a domain name, IP address, subnet mask and so on, so I tested with my internal 192.168.x.y and figured I would change this to whatever it should be the day I shipped the unit. (I'll find out next week if that was the right approach!)
Afraid that something might go wrong while I am in Tokyo, Japan next week (July 7-11), or Mumbai, India the following week (July 14-18), I added a Secure Shell [SSH] daemon that runs automatically at boot time. This involves putting the public key on the server, and each remote admin has their own private key on their own client machine. I know all about public/private key pairs, as IBM is a leader in encryption technology, and was the first to deliver built-in encryption with the IBM System Storage TS1120 tape drive.
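As an illustration of what day-to-day remote administration looks like with key pairs, here is a small Python sketch using the third-party paramiko library; the hostname, user and key path are hypothetical, and in practice a plain ssh session from a terminal does the same job.

```python
# A sketch of key-based remote administration from an admin's client machine,
# using the third-party "paramiko" library. Hostname, user and key path are
# hypothetical. On the server side, the admin's public key simply sits in
# ~/.ssh/authorized_keys and sshd starts at boot.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("schoolserver.example.org",                # hypothetical host
               username="admin",
               key_filename="/home/admin/.ssh/id_rsa")    # private key stays on the client

stdin, stdout, stderr = client.exec_command("uptime")
print(stdout.read().decode())
client.close()
```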
To give users access to all their files from any OS image, I could either (a) keep identical copies everywhere, or (b) have a shared partition. The latter turned out to be the best choice, with an LVM2 logical volume for the "/home" directory that is shared among all of the OS images. As we develop the application, we might find other directories that make sense to share as well.
For developing across platforms, I wanted the Ethernet devices (eth0, eth1, and so on) to match the actual ports they are supposed to be connected to in a static IP configuration. Most people use DHCP so it doesn't matter, but the XS software requires this, so here it did. For example, "eth0" is the 1 Gbps port to the WAN, and "eth1/eth2" are the two 10/100 Mbps PCI NIC cards to other servers. Binding the interface names to specific hardware ports is done differently on Fedora and Debian, but I got it working.
While it was a stretch goal to develop a backup method, one that could perform Bare Machine Recovery from media burned to DVD, it turned out I needed to do this anyway just to keep from losing my work in case things went wrong. I used an external USB drive to develop the process, and got everything (about 4GB) to fit onto a single DVD. Using IBM Tivoli Storage Manager (TSM) for this seemed overkill, and [Mondo Rescue] didn't handle LVM2+RAID as well as I wanted, so I chose [partimage] instead, which backs up each primary partition, mirrored partition, or LVM2 logical volume, keeping all the time stamps, ownerships, and symbolic links intact. It has the ability to chop up the output into fixed-size pieces, which is helpful if you are going to burn them onto 700MB CDs or 4.7GB DVDs. In my case, my FAT32-formatted external USB disk drive can't handle files bigger than 2GB, so this feature was helpful for that as well. I standardized on 660 MiB [about 692MB] per piece, since that met all criteria.
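partimage handles the splitting itself, but for readers curious what fixed-size splitting amounts to, here is a minimal Python sketch; the image file name is a placeholder.

```python
# Illustration of splitting a large backup image into fixed-size pieces so each
# fits on a CD and under the FAT32 2GB file-size limit. partimage does this
# itself; the file name and piece size here are illustrative.
from pathlib import Path

PIECE_SIZE = 660 * 1024 * 1024        # 660 MiB, about 692 MB per piece
backup = Path("backup.img")           # hypothetical partition image

with backup.open("rb") as src:
    index = 0
    while True:
        chunk = src.read(PIECE_SIZE)
        if not chunk:
            break
        piece = backup.parent / f"{backup.name}.{index:03d}"
        piece.write_bytes(chunk)
        print(f"wrote {piece.name}: {len(chunk)} bytes")
        index += 1
```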
The folks at [SysRescCD] saved the day. The standard SysRescCD image assigned eth0, eth1, and eth2 differently than the three base OS images, but the nice folks in France who write SysRescCD created a customized [kernel parameter that allowed the assignments to be fixed per MAC address] in support of this project. With this in place, I was able to make a live Boot-CD that brings up SSH, with all the users, passwords, and Ethernet devices matching the hardware. I installed this LiveCD as the "Rescue Image" on the hard disk itself, and also made a Recovery-DVD that boots up just like the Boot-CD, but contains the 4GB of backup files.
For testing, I used Linux's built-in Kernel-based Virtual Machine [KVM], which works like VMware, but is open source and included in the 2.6.20 kernels that I am using. IBM is the leading reseller of VMware and has been doing server virtualization for the past 40 years, so I am comfortable with the technology. The XS-163 platform with Apache and PostgreSQL serves as a platform for [Moodle], an open source class management system, and the combination is memory-intensive enough that I did not want to incur the overhead of running production this way, but it was great for testing!
With all this in place, the system is designed not to need a Linux system admin or XS-163/Moodle expert at the facility. Instead, all we need is someone to insert the Boot-CD or Recovery-DVD and reboot the system if needed.
Just before packing up the unit for shipment, I changed the IP addresses to the values they need at the destination facility, updated the [GRUB boot loader] default, and made a final backup, which I burned to a Recovery-DVD. Hopefully, it will work by just turning on the unit [headless], without any keyboard, monitor or configuration required. Fingers crossed!
So, thanks to the rest of my team: Greg, Glen, Vicki, Tarun, Marcel, Pablo and Said. I am very excited to be part of this, and look forward to seeing this become something remarkable!
Wrapping up this week's theme on why the System z10 EC mainframe can replace so many older, smaller, underutilized x86 boxes. This was all started to help fellow bloggers Jon Toigo of DrunkenData and Jeff Savit from Sun Microsystems understand the IBM press release that we put out last February on this machine, with my post [Yes, Jon, there is a mainframe that can help replace 1500 x86 servers] and my follow-up post [Virtualization, Carpools and Marathons]. The computations were based on running 1500 unique workloads as Linux guests under z/VM, not running them as z/OS applications.
My colleagues in IBM Poughkeepsie recommended these books to provide more insight and in-depth understanding. Looks like some interesting summer reading. I put in quotes the sections I excerpted from the synopsis I found for each.
"From Microsoft to IBM, Compaq to Sun to DEC, virtually every large computer company now uses clustering as a key strategy for high-availability, high-performance computing. This book tells you why-and how. It cuts through the marketing hype and techno-religious wars surrounding parallel processing, delivering the practical information you need to purchase, market, plan or design servers and other high-performance computing systems.
Microsoft Cluster Services ("Wolfpack")
IBM Parallel Sysplex and SP systems
DEC OpenVMS Cluster and Memory Channel
Tandem ServerNet and Himalaya
Intel Virtual Interface Architecture
Symmetric Multiprocessors (SMPs) and NUMA systems"
Fellow IBM author Gregory Pfister worked in IBM Austin as a Senior Technical Staff Member focused on parallel processing issues, but I never met him in person. He points out that workloads fall into regions he calls parallel hell, parallel nirvana, and parallel purgatory. Careful examination of machine designs and benchmark definitions shows that the "industry standard benchmarks" fall largely into parallel nirvana and parallel purgatory. Large UNIX machines tend to be designed for these benchmarks and so are particularly well suited to parallel purgatory. Clusters of distributed systems do very well in parallel nirvana. The mainframe resides in parallel hell, as do its primary workloads. The current confusion is over where virtualization takes workloads, since there are no good benchmarks for it.
"In these days of shortened fiscal horizons and contracted time-to-market schedules, traditional approaches to capacity planning are often seen by management as tending to inflate their production schedules. Rather than giving up in the face of this kind of relentless pressure to get things done faster, Guerrilla Capacity Planning facilitates rapid forecasting of capacity requirements based on the opportunistic use of whatever performance data and tools are available in such a way that management insight is expanded but their schedules are not."
Neil Gunther points out that vendor claims of near-linear scaling are not to be trusted, and shows a method to "derate" scaling claims. His suggested scaling values for database servers are closer to IBM's LSPR-like scaling model than to TPC-C or SPEC scaling. I had mentioned in my post that "While a 1-way z10 EC can handle 920 MIPS, the 64-way can only handle 30,657 MIPS," but still people felt I was using "linear scaling". Linear scaling would mean that if a 1GHz single-core AMD Opteron can do four (4) MIPS and a one-way z10 EC can do 920 MIPS, then one might assume a 1GHz dual-core AMD could do eight (8) MIPS, and the largest 64-way z10 EC could theoretically do 64 x 920 = 58,880 MIPS. The reality is closer to 6.866 and 30,657 MIPS, respectively.
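For readers who want to see how "derating" works in practice, here is a small sketch of Gunther's Universal Scalability Law from the book. The contention and coherency parameters are my own illustrative guesses, chosen only so the 64-way result lands near the 30,657 MIPS figure quoted above; they are not IBM's published numbers.

```python
# Gunther's Universal Scalability Law: C(N) = N / (1 + sigma*(N-1) + kappa*N*(N-1)).
# The sigma/kappa values below are hypothetical, just to show how quickly
# "linear" scaling derates -- they are not measured z10 or Opteron figures.
def usl_capacity(n, sigma, kappa):
    return n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

single_engine_mips = 920
sigma, kappa = 0.008, 0.0001          # contention and coherency penalties (assumed)

for n in (1, 8, 16, 32, 64):
    speedup = usl_capacity(n, sigma, kappa)
    print(f"{n:2d}-way: linear would give {n * single_engine_mips:6d} MIPS, "
          f"USL suggests about {speedup * single_engine_mips:8.0f} MIPS")
```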
This was never an IBM-vs-Sun debate. One could easily make the same argument that a large Sun or HP system could replace a bunch of small 2-way x86 servers from Dell. Both types of servers have their place and purpose, and IBM sells both to meet the different needs of our clients. The savings are in total cost of ownership, reducing power and cooling costs, floorspace, software licenses, administration costs, and outages.
I hope we covered enough information so that Jeff can go back to talking about Sun products, and I can go back to talking about IBM storage products.
Continuing this week's theme on the z10 EC mainframe being able to perform the workload of hundreds or thousands of small 2-way x86 servers, I offer a simple analogy.
One car, one driver
If you wonder why so many companies subscribe to the notion that you should only run a single application per server, blame Sun, who I think helped promote this idea. Not to be out-done, Microsoft, HP and Dell think that it is a great idea too. Imagine the convenience for operators of being able to switch off a single machine and impact only a single application. Imagine how much this simplifies new application development, knowing that you are the only workload on a set of dedicated resources.
This is analogous to a single car with a single driver, where the car gets the person from "point A" to "point B" and the single driver is the sole passenger of the vehicle. If this were a single driver on an energy-efficient motorcycle or scooter, that would be reasonable, but people often drive alone in much bigger vehicles, which is what Jeff Savit would call "over-provisioning". Chips have increased in processing power much faster than individual applications have increased their requirements, so as a result, you have over-provisioning.
Carpooling - one bus, one driver, and many other passengers riding along
This is how z/OS operates. Yes, you could have up to 60 LPARs that you could individually turn on and off, but where z/OS gets most of its advantages is that you can run many applications in a single OS instance, through the use of "Address Spaces" which act as application containers. Of course, it is more difficult to write for this environment, because you have to be a good "z/OS citizen", share resources nicely, and be WLM-compliant to allow your application to be swapped out for others.
While you get efficiencies with this approach, when you bring the OS down, all the apps on that OS image have to stop with it. For those who have "Parallel Sysplex" that is not an issue. For example, let's say you have three mainframes, each running several LPARs of z/OS, and your various z/OS images all are able to process incoming transactions for a common shared DB2 database. Thanks to DB2 sharing technology, you could take down an individual LPAR or z/OS image, and not disrupt transaction processing, because the IP spreader just sends them to the remaining LPARs. A "Coupling Facility" allows for smooth operations if any of the OS images are lost from an unexpected disaster or disruption.
Needless to say, IBM does not give each z/OS developer his or her own mainframe. Instead, we get to run z/OS guest images under z/VM. It was even possible to emulate the next-generation S/390 chipset, to allow us to test software on hardware that had not been built yet. With HiperSockets, we can have virtual TCP/IP LAN connections between images, have virtual coupling facilities, have virtual disk and virtual tape, and so on. It made development and test that much more efficient, which is why z/OS is recognized as one of the most rock-solid, bullet-proof operating systems in existence.
The negatives of carpooling or taking the bus apply here as well. I have been on buses that have stopped working, and 50 people were stranded. And you don't need more than two people to make the logistics of most carpools complicated. This feeds the fear that makes people want separate manageable units, one-car-one-driver, rather than putting all of their eggs into one basket, having to schedule outages together, and so on.
(Disclaimer: From 1986 to 2001 I helped with the development of z/OS and Linux on System z. Most of my 17 patents are from that part of my career!)
Bicycle races and Marathons
The third computing model is the Supercomputer. Here we take a lot of one-way and two-way machines, and lash them together to form an incredible machine able to perform mathematical computations faster than any mainframe. The supercomputer that IBM built for Los Alamos National Laboratory just clocked in at 1,000,000,000,000,000 floating point operations per second. This is not a single operating system, but rather each machine runs its own OS, is given its primary objective, and tries to get it done. NetworkWorld has a nice article on this titled: [IBM, Los Alamos smash petaflop barrier, triple supercomputer speed record]. If every person in the world was armed with a handheld calculator and performed one calculation per second, it would take us 46 years collectively to do everything this supercomputer can do in one day.
I originally thought of bicycle races as an analogy for this, but having listened to Lance Armstrong at the [IBM Pulse 2008] conference, I learned that biking is a team sport, and I wanted something that had the "every-man-for-himself" approach to computing. So, I changed this to marathons.
The marathon was named after a fabled Greek soldier who was sent as a messenger from the [Battle of Marathon to the City of Athens], a distance that is now standardized to 26 miles and 385 yards, or 42.195 kilometers for my readers outside the United States.
If you were given the task to get thousands of people from "point A" to "point B" 26-plus miles away, would you choose thousands of cars, each with a lone driver? Conferences with a lot of people in a few hotels use shuttle buses instead. A few drivers, a few buses, and you can get thousands of people from a few places to a few places. But the workloads that are sent to supercomputers have a single end point, so a dispatcher node gives a message to each "greek soldier" compute node and has it run on its own. Some make it, some don't, but for a supercomputer that is OK. When the message is delivered, the calculation for that little piece is done, and the compute node is given another message to process. All of the computations are assembled to come up with the final result. Applications must be coded very specially to be able to handle this approach, but for the ones that are, amazing things happen.
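For readers who like to see the pattern in code, here is a minimal sketch of that dispatcher/compute-node model in Python. The work function is a stand-in for illustration only, not an actual supercomputer kernel.

# Minimal sketch of the dispatch pattern described above: workers each take a
# "message" (a chunk of work), compute their piece independently, and the
# partial results are assembled at the end.
from multiprocessing import Pool

def compute_piece(message):
    # each compute node runs its piece of the calculation on its own
    start, count = message
    return sum(x * x for x in range(start, start + count))

if __name__ == "__main__":
    messages = [(i * 1000, 1000) for i in range(64)]   # 64 chunks of work to hand out
    with Pool(processes=8) as pool:                    # 8 local "compute nodes"
        partial_results = pool.map(compute_piece, messages)
    print("assembled result:", sum(partial_results))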
So, how does "server virtualization" come into play?
IBM has had Logical Partitions for quite some time. A logical partition, or LPAR, can run its own OS image, and can be turned on and off without impacting other LPARs. LPARs can have dedicated resources, or shared resources with other LPARs. The IBM z10 EC can have up to 60 LPARs. System p and System i, now merged into the new "POWER Systems" product line, also support LPARs in this manner. Depending on the size of your LPAR, this could be for a single OS and application, or a single OS with lots of applications.
Address Spaces/Application Containers
This is the bus approach. You have a single OS, and that is shared by a set of application containers. z/OS does this with address spaces, all running under a single z/OS image, and for x86 there are products like [Parallels Virtuozzo Containers] that can run hundreds of Windows instances under a single Windows OS image, or a hundred Linux images under a single Linux OS image. However, you cannot mix and match Windows with Linux, just as all the address spaces on z/OS have to be coded for the same z/OS level of the LPAR they run in.
The term "guests" were chosen to model this after the way hotels are organized. Each guest has a roomwith its own lockable entrance and privacy, but shared lobby, and in some countries, shared bathroomson every hall. This approach is used by z/VM, VMware and others. The z/VM operating system can handle any S/390-chip operating system guest, so you could have a mix ofz/OS, TPF, z/VSE, Linux and OpenSolaris, and even other z/VM levels running as guests. Many z/VM developers runin this "second level" mode to develop new versions of the z/VM operating system!
As part of the One Laptop Per Child [OLPC] development team (yes, I am a member of their open source community, and now have developer keys to provide contributions), I have been experimenting with Linux KVM. This was [folded into the base Linux 2.6.20 kernel] and is available to run Linux and Windows guest images. There is a nice write-up on [Wikipedia].
The key advantage of this approach is that you are back to the simple one-car-one-driver mode of thinking. Each guest can be turned on and off without impacting other applications. Each guest has its own OS image, so you can mix different operating systems on the same server hardware. You can have your own customized kernel modules, levels of Java, and so on. Externally, it looks like you are running dozens of applications on a single server, but internally, each application thinks it is the only one running on its own OS. This gives you a simpler coding model to base your test and development on.
Jeff is correct that running less than 10 percent average utilization across your servers is a crying shame, and that servers could be managed in a manner that raises their utilization so that fewer are needed. But just as people could carpool, or could take the bus to work, it just doesn't happen, and data centers are full of single-application servers.
VMware has an architectural limit of 128 guests per machine, and IBM is able to reach this with its beefiest System x3850 M2 servers, but most of the x86 machines from HP, Dell and Sun are less powerful, and only run a dozen or so guests. In all cases, fewer servers means simpler management, so more applications per server is always the goal.
VMware can soak up 30 to 40 percent of the cycles, meaning the most you can get from a VMware-based solution is 60 to 70 percent CPU utilization (which is still much better than the typical 5 to 10 percent average utilization we see today!) z/VM has been finely tuned to incur as little as 7 percent overhead, so IBM can achieve up to 93 percent utilization.
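A back-of-the-envelope way to look at those figures, in Python, treating the overhead percentages above as given:

# The hypervisor's own consumption caps what is left for guest workloads.
def guest_ceiling(overhead_fraction):
    return (1.0 - overhead_fraction) * 100

print(f"VMware at 30% overhead: ~{guest_ceiling(0.30):.0f}% of cycles left for guests")
print(f"VMware at 40% overhead: ~{guest_ceiling(0.40):.0f}% of cycles left for guests")
print(f"z/VM at 7% overhead   : ~{guest_ceiling(0.07):.0f}% of cycles left for guests")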
Jeff argues that since many of the z/OS technologies that allow customers to get over 90 percent utilization don't apply to Linux guests under z/VM, then all of the numbers are wrong. My point is that there are two ways to achieve 90 percent utilization on the mainframe: one is through z/OS running many applications on a single LPAR (the application container approach), and the other is through z/VM supporting many Linux OS images, each with one (or a few) applications (the virtual guest approach).
I am still gathering more research on this topic, so I will try to have it ready later this week.
I am saddened to learn that one of my favorite comedians, [George Carlin], passed away yesterday. He was famous for a skit about "seven words" you could not say on television. A few of those came to mind in the response I got from my post [Yes, Jon, There is a mainframe that can help replace 1500 x86 servers], which attempted to provide an answer to a simple question about the IBM System z10 Enterprise Class (EC) mainframe.
Jon: So, where is the 1500 number coming from? Tony: I’ll investigate and get back to you.
My post tried to explain how IBM estimated that number. However, my fellow blogger from Sun, Jeff Savit, posted on his blog [No, there isn't a Santa Claus] in response. (If Sun's shareholders are expecting anything other than a [lump of coal] under the tree this year, they should probably read Sun's press release about their latest [financial results].) A few others contacted me about this also, from a bunch of rather different angles, ranging from reverse-engineering emulation of other companies' chipsets to my use of internal codenames. (There are now MORE than seven words I can't type in this blog!) Jon is just trying to gather information, but his [head hurts] from all of this debate.
This week I will try to clarify some of the confusion.
Two weeks ago, I mentioned in my post [Pulse 2008 - Day 2 Breakout sessions] that Henk de Ruiter from ABN Amro bank presented his success story implementing Information Lifecycle Management (ILM) across his various data centers. I am no stranger to ABN Amro, having helped "ABN" and "Amro" banks merge their mainframe data in 1991. Henk has agreed to let me share with my readers more of this success story here on my blog:
Back in December 2005, Henk and his colleagues had come to visit the IBM Tucson Executive Briefing Center (EBC) to hear about IBM products and services. At the time, I was part of our "STG Lab Services" team that performed ILM assessments at client locations. I explained to ABN Amro that the ILM methodology does not require an all-IBM solution, and that ILM could even provide benefits with their current mix of storage, software and service providers. The ABN Amro team liked what I had to say, and my team was commissioned to perform ILM assessments at three of their data centers:
Sao Paulo (Brazil)
Chicago, IL (USA)
Each data center had its own management, its own decision making, and its own set of issues, so we structured each ILM assessment independently. When we presented our results, we showed what each data center could do better with their existing mixed bag of storage, software and service providers, and also showed how much better their life would be with IBM storage, software and services. They agreed to give IBM a chance to prove it, and so a new "Global Storage Study" was launched to take the recommendations from our three ILM studies, and flesh out the details to make a globally-integrated enterprise work for them. Once completed, it was renamed the "Global Storage Solution" (GSS).
Henk summarized the above with "I am glad to see Tony Pearson in the audience, who was instrumental to making this all happen." As with many client testimonials, he presented a few charts on who ABN Amro is today: the 12th largest bank worldwide and 8th largest in Europe. They operate in 53 countries and manage over a trillion euros in assets.
They have over 20 data centers, with about 7 PB of disk, and over 20 PB of tape, both growing at 50 to 70 percent CAGR. About 2/3 of their operations are now outsourced to IBM Global Services; the remaining 1/3 is non-IBM equipment managed by a different service provider.
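Just to put that growth rate in perspective, here is a purely illustrative Python projection; these are extrapolations from the figures above, not ABN Amro planning numbers.

# Illustrative only: what 50-70% compound annual growth does to a 7 PB disk estate.
def project(capacity_pb, cagr, years):
    return capacity_pb * (1 + cagr) ** years

for cagr in (0.50, 0.70):
    projections = [round(project(7, cagr, year), 1) for year in (1, 2, 3, 4)]
    print(f"CAGR {cagr:.0%}: {projections} PB over the next four years")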
ABN Amro deployed IBM TotalStorage Productivity Center, various IBM System Storage DS family disk systems, SAN Volume Controller (SVC), Tivoli Storage Manager (TSM), Tivoli Provisioning Manager (TPM), and several other products. Armed with these products, they performed the following:
Clean Up. IBM uses the term "rationalization" to refer to the assignment of business value, to avoid confusion with the term "classification", which many in IT relate to identifying ownership and read and write authorization levels. Often, in the initial phases of an ILM deployment, a portion of the data is determined to be eligible for clean up, either moved to a lower-cost tier or deleted immediately. ABN Amro and IBM set a goal to identify at least 20 percent of their data for clean up.
New tiers. Rather than traditional "storage tiers", which are often just Tier 1 for Fibre Channel disk and Tier 2 for SATA disk, ABN Amro and IBM came up with seven "information infrastructure tiers" that incorporate service levels, availability and protection status. They are:
High-performance, highly-available disk with remote replication
High-performance, highly-available disk (no remote replication)
Mid-performance, high-capacity disk with remote replication
Mid-performance, high-capacity disk (no remote replication)
Non-erasable, non-rewriteable (NENR) storage employing a blended disk and tape solution
Enterprise virtual tape library with remote replication and back-end physical tape
Mid-performance physical tape
These tiers are applied equally across their mainframe and distributed platforms. All of the tiers are priced per "primary GB", so any additional capacity required for replication or point-in-time copies, either local or remote, is folded into the base price. ABN Amro felt a mission-critical application on Windows or UNIX deserves the same Tier 1 service level as a mission-critical mainframe application. Exactly!
Deployed storage virtualization for disk and tape. This involved the SAN Volume Controller and the IBM TS7000 series library.
Implemented workflow automation. The key product here is IBM Tivoli Provisioning Manager.
Started an investigation of HSM on distributed platforms. This would be policy-based space management to migrate less frequently accessed Windows or UNIX data to the TSM pool.
While the deployment is not yet complete, ABN Amro feels they have already realized business value:
Reduced cost by identifying data that should be stored on lower tiers
Simplified management, consolidated across all operating systems (mainframe, UNIX, Windows)
Increased utilization of existing storage resources
Reduced manual effort through policy-based automation, which can lead to fewer human errors and faster adaptability to new business opportunities
Standardized backup and other operational procedures
Henk and the rest of ABN Amro are quite pleased with the progress so far, although recent developments, in terms of the takeover of ABN AMRO by a consortium of banks, mean that the model has so far been implemented only in Europe. Further rollout depends on the storage strategy of the new owners. Nonetheless, I am glad that I was able to work with Henk, Jason, Barbara, Steve, Tom, Dennis, Craig and others to be part of this from the beginning and to see it roll out successfully over the years.
IBM is hosting a webcast about storage for SAP environments. Learn how integrated IBM infrastructure solutions, specifically customized for your SAP environments, can help lower your business costs, increase productivity in SAP development and test tasks, and improve resource utilization. This will include discussion of archive solutions with WebDAV, ArchiveLink and DR550; the IBM Business Intelligence (BI) Accelerator; IBM support for SAP [Adaptive Computing]; and performance benchmark results. The session is intended for SAP and storage administrators, IT directors and managers.
Here are the details:
Date: Wednesday, June 18, 2008
Time: 11:00am EDT (8:00am for those of us in Arizona or California)
(I cannot take credit for coining the new term "bleg". I saw this term first used over on the [Freakonomics Blog]. If you have not yet read the book "Freakonomics", I highly recommend it! The authors' blog is excellent as well.)
For this comparison, it is important to figure out how much workload a mainframe can support, how much an x86 server can support, and then divide one by the other. Sounds simple enough, right? And what workload should you choose? IBM chose a business-oriented "data-intensive" workload using an Oracle database. (If you wanted instead a scientific "compute-intensive" workload, consider an [IBM supercomputer] instead, the most recent of which clocked in at over 1 quadrillion floating point operations per second, or one PetaFLOP.) IBM compares the following two systems:
Sun Fire X2100 M2, model 1220 server (2-way)
IBM did not pick a wimpy machine to compare against. The model 1220 is the fastest in the series, with a 2.8GHz x86-64 dual-core AMD Opteron processor, capable of running various levels of Solaris, Linux or Windows. In our case, we will use Oracle workloads running on Red Hat Enterprise Linux. All of the technical specifications are available at the [Sun Microsystems Sun Fire X2100] Web site. I am sure that there are comparable models from HP, Dell or even IBM that could have been used for this comparison.
IBM z10 Enterprise Class mainframe model E64 (64-way)
This machine can run a variety of operating systems also, including Red Hat Enterprise Linux (RHEL). The E64 has four "multiple processor modules", called "processor books", for a total of 77 processing units: 64 central processors, 11 system assist processors (SAP) and 2 spares. That's right, spare processors; in case any others go bad, IBM has got your back. You can designate a central processor in a variety of flavors. For running z/VM and Linux operating systems, the central processors can be put into "Integrated Facility for Linux" (IFL) mode. On IT Jungle, Timothy Prickett Morgan explains the z10 EC in his article [IBM Launches 64-Way z10 Enterprise Class Mainframe Behemoth]. For more information on the z10 EC, see the 110-page [Technical Introduction], or read the specifications on the [IBM z10 EC] Web site.
In a shop full of x86 servers, there are production servers, test and development servers, quality assurance servers, standby idle servers for high availability, and so on. On average, these are only 10 percent utilized. For example, consider the following mix of servers:
125 Production machines running 70 percent busy
125 Backup machines running idle, ready for active failover in case a production machine fails
1250 machines for test, development and quality assurance, running at 5 percent average utilization
While [some might question, dispute or challenge this ten percent] estimate, it matches the logic used to justify VMware, XEN, Virtual Iron or other virtualization technologies. Running 10 to 20 "virtual servers" on a single physical x86 machine assumes a similar 5-10 percent utilization rate.
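For anyone who wants to check the average implied by that mix of servers, here is the arithmetic as a small Python sketch, using only the counts and percentages listed above:

# Weighted average utilization for the 1500-server mix listed above.
server_mix = [
    (125, 70),   # production machines at 70 percent busy
    (125, 0),    # idle standby machines waiting for failover
    (1250, 5),   # test, development and QA machines at 5 percent
]
total_servers = sum(count for count, _ in server_mix)
average_util = sum(count * util for count, util in server_mix) / total_servers
print(f"{total_servers} servers, average utilization {average_util:.0f} percent")   # 10 percent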
Note: The following paragraphs have been revised per comments received.
Now the math. Jon, I want to make it clear I was not involved in writing the press release nor assisted with these math calculations. Please, don't shoot the messenger! Remember the cartoon where two scientists in white lab coats are writing math calculations on a chalkboard, and in the middle there is "and then a miracle happens..." to continue the rest of the calculations?
In this case, the miracle is the number that compares one server hardware platform to another. I am not going to bore people with details like the number of concurrent processor threads or the differences between L1 and L3 cache. IBM used sophisticated tools and third-party involvement that I am not allowed to talk about, and I have discussed this post with lawyers representing four (now five) different organizations already, so for the purposes of illustration and explanation only, I have reverse-engineered a new z10-to-Opteron conversion factor of 6.866 z10 EC MIPS per GHz of dual-core AMD Opteron for I/O-intensive workloads running at only 10 percent average CPU utilization. Business applications that perform a lot of I/O don't use their CPU as much as other workloads. For compute-intensive or memory-intensive workloads, the conversion factor may be quite different, like 200 MIPS per GHz, as Jeff Savit from Sun Microsystems points out in the comments below.
Keep in mind that each processor is different, and we now have Intel, AMD, SPARC, PA-RISC and POWER (and others); 32-bit versus 64-bit; dual-core and quad-core; and different co-processor chip sets to worry about. AMD Opteron processors come in different speeds, but we are comparing against the 2.8GHz model, so 1500 times 6.866 times 2.8 is 28,837 MIPS. Since these would be running as Linux guests under z/VM, we add an additional 7 percent overhead, or 2,019 MIPS. We then subtract 15 percent for "smoothing", or 4,326 MIPS, which is what happens when you consolidate workloads that have different peaks and valleys. The end result is that we need a machine that can do 26,530 MIPS. Thanks to advances in "Hypervisor" technological synergy between the z/VM operating system and the underlying z10 EC hardware, the mainframe can easily run 90 percent utilized when aggregating multiple workloads, so a 29,477 MIPS machine running at 90 percent utilization can handle these 26,530 MIPS.
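Here is that arithmetic laid out end to end as a Python sketch, using the figures above. Remember that the 6.866 conversion factor is the reverse-engineered, for-illustration-only value described earlier, not a published IBM metric.

# Walk through the sizing arithmetic described above.
servers            = 1500     # 2-way Sun Fire X2100 M2 machines to consolidate
ghz_per_server     = 2.8      # dual-core AMD Opteron clock speed
mips_per_ghz       = 6.866    # illustrative, reverse-engineered conversion factor
zvm_overhead       = 0.07     # assumed z/VM hypervisor overhead
smoothing          = 0.15     # credit for peaks and valleys that do not coincide
target_utilization = 0.90     # sustained utilization the mainframe can run at

raw_mips     = servers * ghz_per_server * mips_per_ghz                # about 28,837 MIPS
adjusted     = raw_mips * (1 + zvm_overhead) - raw_mips * smoothing   # about 26,530 MIPS
machine_mips = adjusted / target_utilization                          # about 29,477 MIPS

print(f"raw workload      : {raw_mips:,.0f} MIPS")
print(f"after adjustments : {adjusted:,.0f} MIPS")
print(f"machine required  : {machine_mips:,.0f} MIPS running {target_utilization:.0%} busy")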
N-way machines, from a little 2-way Sun Fire X2100 to the mighty 64-way z10 EC mainframe, are called "Symmetric Multiprocessors". All of the processors or cores are in play, but sometimes they have to take turns, waiting for exclusive access to a shared resource, such as cache or the bus. When your car is stopped at a red light, you are waiting for your turn to use the shared "intersection". As a result, you don't get linear improvement, but rather diminishing returns. This is known generically as the "SMP effect", and IBM documents this in its [Large System Performance Reference]. While a 1-way z10 EC can handle 920 MIPS, the 64-way can only handle 30,657 MIPS. The 29,477 MIPS needed for the Sun X2100 workload can be handled by a 61-way, giving you three extra processors to handle unexpected peaks in workload.
But are 1500 Linux guest images architecturally possible? A long time ago, David Boyes of [Sine Nomine Associates] ran 41,400 Linux guest images on a single mainframe using his [Test Plan Charlie], and IBM internally was able to get 98,000 images, and in both cases these were on machines less powerful than the z10 EC. Neither of these tests ran I/O-intensive workloads, but extreme limits are always worth testing. The 1500-to-1 reduction in IBM's press release is edge-of-the-envelope as well, so in production environments, several hundred guest images are probably more realistic, and still offer significant TCO savings.
The z10 EC can handle up to 60 LPARs, and each LPAR can run z/VM, which acts much like VMware in allowing multiple Linux guests per z/VM instance. For 1500 Linux guests, you could have 25 guests on each of 60 z/VM LPARs, or 250 guests on each of six z/VM LPARs, or 750 guests on each of two LPARs. With z/VM 5.3, each LPAR can support up to 256GB of memory and 32 processors, so you need at least two LPARs to use all 64 engines. Also, there are good reasons to have different guests under different z/VM LPARs, such as separating development/test from production workloads. If you had to re-IPL a specific z/VM LPAR, it could be done without impacting the workloads on other LPARs.
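A quick sketch of how those guest counts and LPAR limits fit together, using only the numbers quoted above:

# Distributing 1500 Linux guests across z/VM LPARs, using the limits quoted above.
total_guests   = 1500
lpar_cpu_limit = 32    # processors per z/VM 5.3 LPAR
total_engines  = 64    # engines on the z10 EC model E64

min_lpars_to_use_all_engines = -(-total_engines // lpar_cpu_limit)   # ceiling division -> 2

for lpars in (60, 6, 2):
    guests_per_lpar = total_guests // lpars
    print(f"{lpars:2d} z/VM LPARs x {guests_per_lpar:4d} guests each = {lpars * guests_per_lpar} guests")
print("Minimum LPARs needed to drive all 64 engines:", min_lpars_to_use_all_engines)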
To access storage, IBM offers N-Port ID Virtualization (NPIV). Without NPIV, two Linux guest images could not access the same LUN through the same FCP port, because this would confuse the Host Bus Adapter (HBA), which IBM packages as "FICON Express" cards. For example, Linux guest 1 asks to read LUN 587 block 32, and this is sent out a specific port, to a switch, to a disk system. Meanwhile, Linux guest 2 asks to read LUN 587 block 49. The response comes back to the z10 EC with the data, which is given to the correct z/VM LPAR, but then what? How does z/VM know which of the many Linux guests to give the data to? Both touched the same LUN, so it is unclear which made the request. To solve this, NPIV assigns a virtual "World Wide Port Name" (WWPN), up to 256 of them per physical port, so you can have up to 256 Linux guests sharing the same physical HBA port to access the same LUN. If you had 250 guests on each of six z/VM LPARs, and each LPAR had its own set of HBA ports, then all 1500 guests could access the same LUN.
Yes, the z10 EC machines support Sysplex. The concept is confusing, but "Sysplex" in IBM terminology just means that you can have LPARs, either on the same machine or on separate mainframes, all sharing the same time source, whether this be a "Sysplex Timer" or the "Server Time Protocol" (STP). The z10 EC can run STP over 6 Gbps InfiniBand over distance. If you wanted all 1500 Linux guests to time stamp data identically, all six z/VM LPARs would need access to the shared time source. This can help in a re-do or roll-back situation for Oracle databases to complete or back out "Units of Work" transactions. This time stamp is also used to form consistency groups in "z/OS Global Mirror", formerly called "XRC" for Extended Remote Copy. Currently, the "timestamp" on I/O applies only to z/OS and Linux and not other operating systems. (The time stamp is done through the CKD driver on Linux, and was contributed back to the open source community so that it is available from both Novell SUSE and Red Hat distributions.) To have XRC consistency between z/OS and Linux, the Linux guests would need to access native CKD volumes, rather than VM minidisks or FCP-oriented LUNs.
Note: this is different from "Parallel Sysplex", which refers to having up to 32 z/OS images sharing a common "Coupling Facility" that acts as shared memory for applications. z/VM and Linux do not participate in "Parallel Sysplex".
As for the price, mainframes list for as little as "six figures" to as much as several million dollars, but I have no idea how much this particular model would cost. And, of course, this is just the hardware cost. I could not find the math for the $667-per-server replacement you mentioned, so I don't have details on that. You would need to purchase z/VM licenses, and possibly support contracts for Linux on System z, to be fully comparable to all of the software license and support costs of the VMware, Solaris, Linux and/or Windows licenses you run on the x86 machines.
This is where a lot of the savings come from, as a lot of software is licensed "per processor" or "per core", and so software on 64 mainframe processors can be substantially less expensive than on 1500 processors or 3000 cores. IBM does "eat its own cooking" in this case. IBM is consolidating 3900 one-application-each rack-mounted servers onto 30 mainframes, for a ratio of 130-to-1, and getting amazingly reduced TCO. The savings are in the following areas:
Hardware infrastructure. It's not just servers, but racks, PDUs, etc. It turns out to be less expensive to incrementally add more CPU and storage to an existing mainframe than to add or replace older rack-em-and-stack-em with newer models of the same.
Cables. Virtual servers can talk to each other in the same machine virtually, such as HiperSockets, eliminating many cables. NPIV allows many guests to share expensive cables to external devices.
Networking ports. Both LAN and SAN networking gear can be greatly reduced because fewer ports are needed.
Administration. We have universities that can offer a guest image for every student without having a major impact on the sys-admins, as the students can do much of their administration remotely, without having physical access to the machinery. Companies using mainframes to host hundreds of virtual guests find reductions too!
Connectivity. Consolidating distributed servers in many locations to a mainframe in one location allows you to reduce connections to the outside world. Instead of sixteen OC3 lines for sixteen different data centers, you could have one big OC48 line to a single data center.
Software licenses. Licenses based on servers, cores or CPUs are reduced when you consolidate to the mainframe.
Floorspace. Generally, floorspace is not in short supply in the USA, but in other areas it can be an issue.
Power and Cooling. IBM has experienced significant reduction in power consumption and cooling requirements in its own consolidation efforts.
All of the components of DFSMS (including DFP, DFHSM, DFDSS and DFRMM) were merged into a single product, "DFSMS for z/OS", which is now an included element in the base z/OS operating system. As a result, customers typically have 80 to 90 percent utilization on their mainframe disk. For the 1500 Linux guests, however, most of the DFSMS features of z/OS do not apply. These functions were not "ported over" to z/VM or to Linux on any platform.
Instead, the DFSMS concepts have been re-implemented into a new product called "Scale-Out File Services" (SOFS), which provides NAS interfaces to a blended disk-and-tape environment. The SOFS disk can be kept at 90 percent utilization because policies can place data, move data and even expire files, just like DFSMS does for z/OS data sets. SOFS supports standard NAS protocols such as CIFS, NFS, FTP and HTTP, and these can be accessed from the 1500 Linux guests over an Ethernet Network Interface Card (NIC), which IBM calls "OSA Express" cards.
Lastly, the IBM z10 EC is not emulating x86 or x86-64 interfaces for any of these workloads. No doubt IBM and AMD could collaborate to come up with an AMD Opteron emulator for the S/390 chipset, and load Windows 2003 right on top of it, but that would just result in all kinds of emulation overhead. Instead, Linux on System z guests can run comparable workloads. There are many Linux applications that are functionally equivalent or identical to their Windows counterparts. If you run Oracle on Windows, you could run Oracle on Linux. If you run MS Exchange on Windows, you could run Bynari on Linux and let all of your Outlook Express users not even know their Exchange server had been moved! Linux guest images can be application servers, web servers, database servers, network infrastructure servers, file servers, firewalls, DNS servers, and so on. For nearly any business workload you can assign to an x86 server in a data center, there is likely an option for Linux on System z.
Hope this answers all of your questions, Jon. These were estimates based on basic assumptions. This is not to imply that IBM z10 EC and VMware are the only technologies that help in this area; you can certainly find virtualization on other systems and through other software. I have asked IBM to make public the "TCO framework" that sheds more light on this. As they say, "Your mileage may vary."
For more on this series, check out the following posts:
If in your travels, Jon, you run into someone interested in seeing how IBM could help consolidate rack-mounted servers over to a z10 EC mainframe, have them ask IBM for a "Scorpion study". That is the name of the assessment that evaluates a specific client situation, and can then recommend a more accurately estimated configuration.