Continuing this week's theme on products that were part of last week's IBM Information Infrastructure launch, today I'll cover the TS2900.
IBM System Storage TS2900 Tape Autoloader
This little baby is SWEET! At 1U high, it holds a single drive and up to 9 cartridges, up to a total of 14.4 TB at 2:1 compression. The drive can be a Half-Height (HH) LTO-3 or LTO-4 drive. (It is called an autoloader because there is only a single drive. Automated units with multiple drives are called libraries.)
This can be rack-mounted, or sit on your desktop. There is an I/O station for inserting or removing individual cartridges, as well as a removable tape magazine to populate or remove the tapes in a more efficient manner.
Both LTO-3 and LTO-4 support a mix of regular and "Write Once, Read Many" (WORM) media to help comply with regulations demanding "Non-erasable, Non-rewriteable" storage. The LTO-4 drive can also support on-drive encryption, managed by the IBM Encryption Key Manager (EKM).
To learn more, see the IBM System Storage [TS2900 page].
Before acquisition, Diligent offered only software. The task of putting this software on an appropriate x86 server with sufficient memory and processor capability was left as an exercise for the storage admin. With the TS7650G, IBM installs the ProtecTIER software on the fastest servers in the industry, the IBM System x3850 M2 and x3950 M2. This eliminates having storage admins pretend that they have hardware engineering degrees.
Before acquisition, the software worked only on a single system. IBM was able to offer multiple configurations of the TS7650G, including a single-controller model as well as a clustered dual-controller model. The clustered dual-controller model can ingest data at an impressive 900 MB/sec, which is up to nine times faster than some of the competitive deduplication offerings.
Before acquisition, ProtecTIER emulated DLT tape technology. This limited its viability, as the market share for DLT has dropped dramatically, and continues to dwindle. Most major backup software supports DLT as an option, but going forward this may not be true much longer for new tape applications. IBM was able to extend support by adding LTO emulation on the TS7650G gateway, future-proofing the offering for years to come.
At last week's launch, covering so many products with so few slides, this announcement was boiled down to a single line, "Store 25 TB of backups onto 1 TB of disk, in 8 hours," and perhaps a few people missed that this was actually covering two key features.
With deduplication, the TS7650G might get up to 25 times reduction on disk. If you back up a 1 TB database that changes only slightly from one day to the next, once a day for 25 days, it might take only 1 TB, or so, of disk to hold all the unique versions, as most of the blocks would be identical, rather than 25 TB on traditional disk or tape storage systems. The TS7650G can manage up to 1 PB of disk, which could represent in theory up to 25 PB of backup data.
With an ingest rate of 900 MB/sec, the TS7650G could ingest 25 TB of backups during a typical 8-hour backup window.
The 25 TB in the first claim is not necessarily the same 25 TB as in the second, but the wording was convenient for marketing purposes, and a comma was used to ensure no misunderstandings. Of course, depending on the type of application, the frequency of daily change, and the backup software employed, your mileage may vary.
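For those who like to check the math, here is a quick back-of-the-envelope sketch in Python (my own illustration, not an official sizing tool; the 25:1 ratio is an assumption that holds only for highly repetitive backup data):

```python
# Back-of-the-envelope check of "store 25 TB of backups onto 1 TB of disk,
# in 8 hours". The dedup ratio is an assumption for repetitive backup data.
INGEST_MB_PER_SEC = 900          # clustered dual-controller ingest rate
WINDOW_HOURS = 8                 # typical backup window
DEDUP_RATIO = 25                 # up to 25:1 on favorable workloads

ingested_tb = INGEST_MB_PER_SEC * WINDOW_HOURS * 3600 / 1_000_000
print(f"Ingested in the window: {ingested_tb:.1f} TB")               # about 25.9 TB
print(f"Disk consumed:          {ingested_tb / DEDUP_RATIO:.1f} TB")  # about 1.0 TB
```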
Continuing this week's theme about new products that were mentioned in last week's launch, today I will cover the new [S24 and S54 frames].
Before these new frames, customers had two choices for their tape cartridges: keep them in an automated tape library, or on an external shelf. Most of the critics of tape focus almost entirely on the problems related to the latter. When tapes are placed outside of automation, you need human intervention to find and fetch the tapes, tapes can be misplaced or misfiled, tapes can be dropped, tapes can get liquids spilled on them, and so on. These problems just don't happen when tapes are stored in automated tape libraries.
Until now, the number of cartridges was limited by the surface area of the wall accessible to the robotic picker. Whether the robot rotates in a circle picking from dodecagon walls, or moves back and forth along long rectangular walls, the problem was the same.
But what about tapes that may not need to be readily accessible, but should still be automated? With the new high-density frames, you can now stack tapes several cartridges deep, in spring-loaded shelves that push the cartridges up to the front one at a time. The high-density frame design might have been inspired by the famous [Pez] candy dispenser, but at 70.9 inches, it does not beat the [World's Tallest Pez Dispenser].
(Note: PEZ® is a registered trademark of Pez Candy, Inc.)
In a regular cartridge-only frame, like the D23, you have slots for 200 cartridges on the left, and 200 cartridges on the right, and the robotic picker can pull out and push back cartridges into any of these slot positions. In the new S24, there are still 200 slots on the left, now referred to as "tier 0", but up to 800 cartridges on the right. Each slot holds up to four 3592 cartridges: the position immediately reachable by the picker is referred to as "tier 1", and the ones tucked behind are "tier 2", "tier 3" and "tier 4".
[Diagram: S24 frame, with tapes B through E in tiers 1 through 4 of a deep slot]
We have fun slow-motion videos we show customers on how these work. For example, in the diagram above, let's suppose you want to fetch Tape E in the "tier 4" position. The following sequence happens:
Robotic picker pulls "tier 1" tape cartridge B, and pushes it into another shelf slot. Tapes C, D and E get pushed up to be tiers 1, 2 and 3 now.
Robotic picker pulls "tier 1" tape cartridge C, and puts it in another shelf slot. Tapes D and E get pushed up to be tiers 1 and 2 now.
Robotic picker pulls "tier 1" tape cartridge D, and puts it in another shelf slot. Tape E gets pushed up to be tier 1 now.
Robotic picker pulls "tier 1" tape cartridge E, the tape we wanted, and moves it to the drive.
The other three cartridges (B, C and D) are then pulled out of the temporary slot, and pushed back into their original order.
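For readers who think in code, here is a toy Python simulation of that shuffle (my own sketch; the actual library firmware is far more sophisticated):

```python
# Toy model of a spring-loaded deep slot in the S24 frame. The list is
# ordered front to back: index 0 is tier 1, the position the picker can
# reach; removing it lets the spring push everything else forward.

def fetch(deep_slot, wanted):
    """Retrieve `wanted` from the slot, restoring the rest afterwards."""
    if wanted not in deep_slot:
        raise LookupError(f"{wanted} is not in this slot")
    parked = []
    while deep_slot[0] != wanted:
        parked.append(deep_slot.pop(0))   # park the tier-1 cartridge elsewhere
    cartridge = deep_slot.pop(0)          # wanted is now at tier 1: pull it
    deep_slot[:0] = parked                # push the parked tapes back, in order
    return cartridge

slot = ["B", "C", "D", "E"]               # tiers 1 through 4
print(fetch(slot, "E"))                   # -> E
print(slot)                               # -> ['B', 'C', 'D']
```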
In this manner, the most recently referenced tape cartridges will be immediately accessible, and the least referenced will eventually migrate to the deeper tiers. The 3592 cartridges can be used with either TS1120 or TS1130 drives. Each cartridge can hold up to 3TB of data (1TB raw, at 3:1 compression), so the entire frame could hold 3PB in just 10 square feet of floor space. Five D23 frames could be consolidated down to two S24 frames. The S24 frame comes with "Capacity on Demand" pricing options. The base model of the S24 has just tiers 0, 1 and 2, for a total capacity of 600 cartridges. You can then license tiers 3 and 4 later when needed.
The S54 is similar in operation, but for LTO cartridges. It works with any mix of LTO-1, LTO-2, LTO-3 and LTO-4 cartridges. The left side holds tier 0 as before, but the right side holds cartridges up to five LTO cartridges deep. For Capacity on Demand pricing, the base model supports 660 cartridges (tiers 0, 1, 2), with options to upgrade for the additional 660 cartridges. The total 1320 cartridges could hold up to 2.1 PB of data (at 2:1 compression). One S54 frame could replace three traditional S53 frames that held only 440 LTO cartridges each.
If you have both TS1100 series and LTO drives in your TS3500 tape library, then you can have both S24 and S54 frames side by side.
Last week, I presented IBM's strategic initiative, the IBM Information Infrastructure, which is part of IBM's New Enterprise Data Center vision. This week, I will try to get around to talking about some of the products that support those solutions.
I was going to set the record straight on a variety of misunderstandings, rumors or speculations, but I think most have been taken care of already. IBM blogger BarryW covered the fact that SVC now supports XIV storage systems, in his post [SVC and XIV], and addressed some of the FUD already. Here was my list:
Now that IBM has an IBM-branded model of XIV, IBM will discontinue (insert another product here)
I had seen speculation that XIV meant the demise of the N series, the DS8000, or IBM's partnership with LSI. However, the launch reminded people that IBM announced a new release of DS8000 features, new models of the N series N6000, and the new DS5000 disk, so that squashes those rumors.
IBM XIV is a (insert tier level here) product
While there seems to be no industry standard or agreement on what a tier-1, tier-2 or tier-3 disk system is, there seemed to be a lot of argument over which pigeon-hole category to put IBM XIV in. No question many people want tier-1 performance and functionality at tier-2 prices, and perhaps IBM XIV is a good step at giving them this. In some circles, tier-1 means support for System z mainframes. The XIV does not have traditional z/OS CKD volume support, but Linux on System z partitions or guests can attach to XIV via SAN Volume Controller (SVC), or through the NFS protocol as part of the Scale-Out File Services (SoFS) implementation.
Whenever any radical game-changing technology comes along, competitors with last century's products and architectures want to frame the discussion as if it were just yet another storage system. IBM plans to update its Disk Magic and other planning/modeling tools to help people determine which workloads would be a good fit with XIV.
IBM XIV lacks (insert missing feature here) in the current release
I am glad to see that the accusations that XIV had unprotected, unmirrored cache were retracted. XIV mirrors all writes in the cache of two separate modules, with ECC protection. XIV allows concurrent code load for bug fixes to the software. XIV offers many of the features that people enjoy in other disk systems, such as thin provisioning, writeable snapshots, remote disk mirroring, and so on. IBM XIV can be part of a bigger solution, through SVC, SoFS or GMAS, providing the business value customers are looking for.
IBM XIV uses (insert block mirroring here) and is not as efficient for capacity utilization
It is interesting that this came from a competitor that still recommends RAID-1 or RAID-10 for its CLARiiON and DMX products. On the IBM XIV, each 1MB chunk is written on two different disks in different modules. When disks were expensive, how much usable space you got from a given set of HDDs was worthy of argument. Today, we sell you a big black box, with 79TB usable, for (insert dollar figure here). For those who feel 79TB is too big to swallow all at once, IBM offers "capacity on demand" pricing, where you can pay initially for as little as 22TB, but get all the performance, usability, functionality and advanced availability of the full box.
IBM XIV consumes (insert number of Watts here) of energy
For every disk system, a portion of the energy is consumed by the hard disk drives (HDD) and the remainder by UPS, power conversion, processors and cache memory. Again, the XIV is a big black box, and you can compare the 8.4 KW of this high-performance, low-cost, one-frame storage system with the wattage consumed by competitive two-frame (sometimes called two-bay) systems, if you are willing to take some trade-offs. To get comparable performance and hot-spot avoidance, competitors may need to over-provision or use faster, more energy-hungry FC drives, and offer additional software to monitor and re-balance workloads across RAID ranks. To get comparable availability, competitors may need to drop from RAID-5 down to either RAID-1 or RAID-6. To get comparable usability, competitors may need more storage infrastructure management software to hide the inherent complexity of their multi-RAID design.
Of course, if energy consumption is a major concern for you, XIV can be part of IBM's many blended disk-and-tape solutions. When it comes to being green, you can't get any greener storage than tape! Blended disk-and-tape solutions help get the best of both worlds.
Well, I am glad I could help set the record straight. Let me know what other products you would like me to focus on next.
This post will focus on Information Compliance, the fourth and final part of the four-part series this week. I have received a few queries on my choice of sequence for this series: Availability, Security, Retention and Compliance.
Why not have them in alphabetical order? IBM avoids alphabetizing in one language, because then it may not be alphabetized when translated to other languages.
Why not have them in a sequence that spells out an easy-to-remember mnemonic, like "CARS"? Again, when translated to other languages, those mnemonics no longer work.
Instead, I worked with our marketing team on a more appropriate sequence, based on psychology and the cognitive bias of [primacy and recency effects].
Here's another short 2-minute video, on Information Compliance
Full disclosure: I am not a lawyer. The following will delve into areas related to government and industry regulations. Consult your risk officer or legal counsel to make sure any IT solution is appropriate for your country, your industry, or your specific situation.
IBM estimates there are over 20,000 regulations worldwide related to information storage and transmission.
For information availability, some industry regulations mandate a secondary copy a minimum distance away to protect against regional disasters like hurricanes or tsunamis. IBM offers Metro Mirror (up to 300km) and Global Mirror (unlimited distance) disk mirroring to support these requirements.
For information security, some regulations relate to privacy and prevention of unauthorized access. Two prominent ones in the United States are:
Health Insurance Portability and Accountability Act (HIPAA) of 1996
HIPAA regulates health care providers, health plans, and health care clearinghouses in how they handle the privacy of patients' medical records. These regulations apply whether the information is on film, paper, or stored electronically. Obviously, electronic medical records are easier to keep private. Here is an excerpt from an article from [WebMD]:
"There are very good ways to protect data electronically. Although it sounds scary, it makes data more protected than current paper records. For example, think about someone looking at your medical chart in the hospital. It has a record of all that is happening -- lab results, doctor consultations, nursing notes, orders, prescriptions, etc. Anybody who opens it for whatever reason can see all of this information. But if the chart is an electronic record, it's easy to limit access to any of that. So a physical therapist writing physical therapy notes can only see information related to physical therapy. There is an opportunity with electronic records to limit information to those who really need to see it. It could in many ways allow more privacy than current paper records."
Gramm-Leach-Bliley Act (GLBA) of 1999
GLBA regulates the handling of sensitive customer information by banks, securities firms, insurance companies, and other financial service providers. Financial companies use tape encryption to comply with GLBA when sending tapes from one firm to another. IBM was the first to deliver tape drive encryption with the TS1120, and then later with LTO-4 and TS1130 tape drives.
For information retention, there are a lot of regulations that deal with how information is stored, in some cases immutable to protect against unethical tampering, and when it can be discarded. Two prominent regulations in the United States are:
U.S. Securities and Exchange Commission (SEC) 17a-4 of 1997
In the past, the IT industry used the acronym "WORM", which stands for the "Write Once, Read Many" nature of certain media, like CDs, DVDs, optical and tape cartridges. Unfortunately, WORM does not apply to disk-based solutions, so IBM adopted the language from SEC 17a-4 that calls for storage that is "Non-Erasable, Non-Rewriteable" or NENR. This new umbrella term applies to disk-based solutions, as well as tape and optical WORM media.
SEC 17a-4 indicates that broker/dealers and exchange members must preserve all electronic communications relating to the business of their firm for a specific period of time. During this time, the information must not be erased or re-written.
Sarbanes-Oxley (SOX) Act of 2002
SOX was born in the wake of [Enron and other corporate scandals]. It governs the way that financial information is stored, maintained and presented to investors, and disciplines those who break its rules. It applies only to public companies, i.e. those that offer their securities (stock shares, bonds, liabilities) for sale to the public through a listing on a U.S. exchange, such as NASDAQ or NYSE.
SOX focuses on preventing CEOs and other executives from tampering with financial records. To meet compliance, companies are turning to the [IBM System Storage DR550], which provides Non-erasable, Non-rewriteable (NENR) storage for financial records. Unlike competitive products like EMC Centera that function mostly as space heaters on the data center floor once they fill up, the DR550 can be configured as a blended disk-and-tape storage system, so that the most recent, most likely to be accessed data remains on disk, while the older, least likely to be accessed data is moved automatically to less expensive, more environment-friendly "green" tape media.
Did SOX hurt the United States' competitiveness? Critics feared that these new regulations would discourage new companies from going public. Ernst & Young found these fears did not come true, and published a study, [U.S. Record IPO Activity from 2006 Continues in 2007]. In fact, the improved confidence that SOX has given investors has given rise to similar legislation in other parts of the world: Euro-SOX, the European Union Investor Protection Act, and J-SOX, the Financial Instruments and Exchange Law for Japan.
For those who only read the first and last paragraphs of each post, here is my recap: Information Compliance is ensuring that information is protected against regional disasters, unauthorized access, and unethical tampering, as required to meet industry and government regulations. Such regulations often apply if the information is stored on traditional paper or film media, but can often be handled more cost-effectively when stored electronically. Appropriate IT governance can help maintain investor confidence.
In Monday's post, [IBM Information Infrastructure launches today], I explained how this strategic initiative fits into IBM's New Enterprise Data Center vision. The launch was presented at the IBM Storage and Storage Networking Symposium to over 400 attendees in Montpellier, France, with corresponding standing-room-only crowds in New York and Tokyo.
This post will focus on Information Retention, the third of the four-part series this week.
Here's another short 2-minute video, on Information Retention
Let's start with some interesting statistics. Fellow blogger Robin Harris, on his StorageMojo blog, has an interesting post, [Our changing file workloads], which discusses the findings of a study titled "Measurement and Analysis of Large-Scale Network File System Workloads" [14-page PDF]. This paper was a collaboration between researchers from the University of California, Santa Cruz and our friends at NetApp. Here's an excerpt from the study:
Compared to Previous Studies:
Both of our workloads are more write-oriented. Read to write byte ratios have significantly decreased.
Read-write access patterns have increased 30-fold relative to read-only and write-only access patterns.
Most bytes are transferred in longer sequential runs. These runs are an order of magnitude larger.
Most bytes transferred are from larger files. File sizes are up to an order of magnitude larger.
Files live an order of magnitude longer. Fewer than 50 percent are deleted within a day of creation.
Files are rarely re-opened. Over 66 percent are re-opened only once, and 95 percent fewer than five times.
File re-opens are temporally related. Over 60 percent of re-opens occur within a minute of the first.
A small fraction of clients account for a large fraction of file activity. Fewer than 1 percent of clients account for 50 percent of file requests.
Files are infrequently shared by more than one client. Over 76 percent of files are never opened by more than one client.
File sharing is rarely concurrent and sharing is usually read-only. Only 5 percent of files opened by multiple clients are concurrent and 90 percent of sharing is read-only.
Most file types do not have a common access pattern.
Why are files being kept ten times longer than before? Because the information still has value:
Provide historical context
Gain insight to specific situations, market segment demographics, or trends in the greater marketplace
Help innovate new ideas for products and services
Make better, smarter decisions
National Public Radio (NPR) had an interesting piece the other day. By analyzing old photos, a Cold War researcher was able to identify an interesting [pattern for Russian presidents]. (Be sure to listen to the 3-minute audio to hear a hilarious song about the results!)
Which brings me to my own collection of "old photos". I bought my first digital camera in the year 2000, and have taken over 15,000 pictures since then. Before that, I used a 35mm film camera, getting the negatives developed and prints made. Some of these date back to my years in high school and college. I have a mix of sizes, from 3x5 to 4x6 and 5x7 inches, and sometimes I got double prints. Only a small portion are organized into scrapbooks. The rest are in envelopes, prints and negatives, in boxes taking up half of the linen closet in my house. Following the success of the [Library of Congress using flickr], I decided the best way to organize these was to have them digitized first. There are several ways to do this.
Flatbed scanner
This method is just too time-consuming. Lift the lid, place one or a few prints face down on the glass, close the lid, press the button, and repeat. I estimate 70 percent of my photos are in [landscape orientation], and 30 percent in [portrait mode]. I can either spend extra time to orient each photo correctly on the glass, or rotate the digital image later.
Sheet-feed scanner
I was pleased to learn that my Fujitsu ScanSnap S510 sheet-feed scanner can take in a short stack (a dozen or so) of photos, and generate a JPEG file for each. I can select 150, 300 or 600dpi, and five levels of JPEG compression. All the photos feed in portrait mode, which I can then rotate later on the computer once digitized. A command-line tool called [ImageMagick] can help automate the rotations, as shown below. While I highly recommend the ScanSnap scanner, this is still a time-consuming process for thousands of photos.
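To give a flavor of that automation, here is a small Python wrapper around ImageMagick's mogrify command (my own sketch; the folder name is hypothetical, and it assumes ImageMagick is installed and the landscape scans are grouped in one folder):

```python
# Rotate every scanned JPEG in a folder 90 degrees clockwise, in place,
# using ImageMagick's mogrify command.
import subprocess
from pathlib import Path

scans = sorted(str(p) for p in Path("scans/landscape").glob("*.jpg"))
if scans:
    subprocess.run(["mogrify", "-rotate", "90"] + scans, check=True)
    print(f"Rotated {len(scans)} photos")
```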
"The best way to save your valuable photos may be by eliminating the paper altogether. Consider making digital images of all your photos."
Here's how it works: you ship your prints (or slides, or negatives) to ScanMyPhotos.com's facility in Irvine, California. They have a huge machine that scans them all at 300dpi, no compression, and they send back your photos and a DVD containing digitized versions in JPEG format, all for only 50 US dollars plus shipping and handling, per thousand photos. I don't think I could even hire someone locally to run my scanner for that!
The deal got better when I contacted them. For people like me with accounts on Facebook, flickr, MySpace or Blogger, they will [scan your first 1000 photos for free] (plus shipping and handling). I selected a thousand 4x6" photos from my vast collection, organized them into eight stacks with rubber bands, and sent them off in a shoe box. The photos get scanned in landscape mode, so I spent about four hours preparing what I sent them, making sure they were all face up, with the top of the picture oriented either to the top or left edge. For the envelopes that had double prints, I "deduplicated" them so that only one set got scanned.
The box weighed seven pounds, and cost about 10 US dollars to send from Tucson to Irvine via UPS on Tuesday. They came back the following Monday, all my photos plus the DVD, for 20 US dollars shipping and handling. Each digital image is about 1.5MB, roughly 1800x1200 pixels, so all thousand easily fit on a single DVD. The quality is the same as if I had scanned them at 300dpi on my own scanner, and comparable to a 2-megapixel camera on most cell phones. Certainly not the high-res photos I take with my Canon PowerShot, but suitable enough for email or Web sites. So, for about 30 US dollars, I got my first batch of 1000 photos scanned.
ScanMyPhotos.com offers a variety of extra-cost options, like rotating each file to the correct landscape or portrait orientation, color correction, exact sequence order, hosting the photos on their Web site for 30 days to share with friends and family, and extra copies of the DVD. All of these represent a trade-off between having them do it for me for an additional fee, or me spending time doing it myself--either beforehand in the preparation, or afterwards managing the digital files--so I can appreciate that.
Perhaps the weirdest option was to have your original box returned for an extra $9.95. If you don't have a huge collection of empty shoe boxes in your garage, you can buy a similarly sized cardboard box for only $3.49 at the local office supply store, so I don't understand this one. The box they return all your photos in can easily be used for the next batch.
I opted not to get any of these extras. The one option I think they should add would be to have them just discard the prints, and send back only the DVD itself. Or better yet, discard the prints, and email me an ISO file of the DVD that I can burn myself on my own computer. Why pay extra shipping to send back the entire box of prints, just so that I can dump them in the trash myself? I will keep the negatives, in case I ever need to re-print at high resolution.
Overall, I am thoroughly delighted with the service, and will now pursue sending the rest of my photos in for processing, and reclaim my linen closet for more important things. Now that I know that a thousand 4x6 prints weighs seven pounds, I can estimate how many photos I have left to do, and decide which bulk discount option to choose.
With my photos digitized, I will be able to do all the things that IBM talks about with Information Retention:
Place them on an appropriate storage tier. I can keep them on disk, tape or optical media.
Easily move them from one storage tier to another. Copying digital files in bulk is straightforward, and as new technologies develop, I can refresh the bits onto new media, to avoid the "obsolescence of CDs and DVDs" as discussed in this article in [PC World].
Share them with friends and family, either through email, on my TiVo (yes, my TiVo is networked to my Mac and PC and has the option to do this!), or by uploading them to a photo-oriented service like [Kodak Gallery or flickr].
Keep multiple copies in separate locations. I could easily burn another copy of the DVD myself and store it in my safe deposit box or my desk at work. With all of the regional disasters like hurricanes, an alternative might be to back up all your files, including your digitized photos, with an online backup service like [IBM Information Protection Services], from last year's acquisition of Arsenal Digital.
If the prospect of preserving my high school and college memories for the next few decades seems extreme, consider that the [Long Now Foundation] is focused on retaining information for centuries. They are even suggesting that we start representing years with five digits, e.g. 02008, to handle the deca-millennium bug which will come into effect roughly 8,000 years from now. IBM researchers are also working on [long-term preservation technologies and open standards] to help in this area.
For those who only read the first and last paragraphs of each post, here is my recap: Information Retention is about managing [information throughout its lifecycle], using policy-based automation to help with placement, movement and expiration. An "active archive" of information serves to help gain insight, innovate, and make better decisions. Disk, tape, and blended disk-and-tape solutions can all play a part in a tiered information infrastructure for long-term retention of information.
In Monday's post, [IBM Information Infrastructure launches today], I explained how this strategic initiative fits into IBM's New Enterprise Data Center vision. For you podcast fans, IBM Vice Presidents Bob Cancilla (Disk Systems), Craig Smelser (Storage and Security Software), and Mike Riegel (Information Protection Services) highlight some of the new products and offerings in this 12-minute recording:
This post will focus on Information Security, the second of the four-part series this week.
Here's another short 2-minute video, on Information Security
Security protects information against both internal and external threats.
For internal threats, most of the focus is on whether person A has a "need-to-know" about information B. Most of the time, this is fairly straightforward. However, sometimes production data is copied to support test and development efforts. Here is the typical scenario: the storage admin copies production data that contains sensitive or personal information to a new copy, and authorizes software engineers or testers full read/write access to this data. In some cases, the engineers or testers may be employees; other times they might be hired contractors from an outside firm. In any case, they may not be authorized to read this sensitive information. To solve this, IBM announced the [IBM Optim Data Privacy Solution] for a variety of environments, including Siebel and SAP enterprise resource planning (ERP) applications.
I found this solution quite clever. The challenge is that production data is interrelated and typically lives inside [relational databases]. For example, one record in one database might have a name and serial number, and then that serial number is used to reference a corresponding record in another database. The IBM Optim Data Privacy Solution applies a range of "masks" to transform complex data elements such as credit card numbers, email addresses and national identifiers, while retaining their contextual meaning. The masked results are fictitious, but consistent and realistic, creating a "safe sandbox" for application testing. This method can mask data from multiple interrelated applications to create a "production-like" test environment that accurately reflects end-to-end business processes. The testers get data they can use to validate their changes, and the storage admins can rest assured they have not exposed anyone's sensitive information.
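To give a feel for what "consistent and realistic" means, here is a toy Python sketch of deterministic masking (my own illustration, not the Optim algorithm; a real product would also preserve things like checksum digits):

```python
# A toy illustration of consistent data masking: the same input always
# yields the same fictitious value, so masked records stay consistent
# across interrelated tables.
import hashlib

def mask_digits(value: str, secret: str = "test-env-secret") -> str:
    """Replace each digit with one derived from a keyed hash, preserving
    length and punctuation so the masked value still looks realistic."""
    digest = hashlib.sha256((secret + value).encode()).hexdigest()
    stream = (int(c, 16) % 10 for c in digest)
    return "".join(str(next(stream)) if ch.isdigit() else ch for ch in value)

# The same card number masks to the same fake number in every table:
print(mask_digits("4111-1111-1111-1111"))
print(mask_digits("4111-1111-1111-1111"))  # identical output
```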
Beyond just who has the "need-to-know", we might also be concerned with who is "qualified-to-act". Most systems today have both authentication and authorization support. Authentication determines that you are who you say you are, through the knowledge of unique userid/password combinations, or other credentials. Fingerprints, retinal scans and other biometrics look great in spy movies, but they are not yet widely used. Instead, storage admins have to worry about dozens of different passwords on different systems. One of the many preview announcements made by Andy Monshaw at Monday's launch was that IBM is going to integrate the features of [Tivoli Access Manager for Enterprise Single Sign-On] into IBM's Productivity Center software, which will be renamed "IBM Tivoli Storage Productivity Center". You enter one userid/password, and you will not have to enter the individual userid/password of all the managed storage devices.
Once a storage admin is authenticated, they may or may not be authorized to read or act on certain information. Productivity Center offers role-based authorization, so that people can be identified by their roles (tape operator, storage administrator, DBA), and those roles then determine what they are authorized to see, read, or act upon.
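Conceptually, role-based authorization boils down to a mapping from roles to permitted actions. A minimal Python sketch (my own illustration, not Productivity Center's actual model; the role and action names are hypothetical):

```python
# Each role grants a set of actions; a user holds one or more roles.
ROLE_PERMISSIONS = {
    "tape_operator":         {"view_tapes", "mount_tape", "eject_tape"},
    "storage_administrator": {"view_all", "provision_volume", "delete_volume"},
    "dba":                   {"view_databases", "resize_db_volume"},
}

def authorized(user_roles, action):
    """True if any of the user's roles grants the requested action."""
    return any(action in ROLE_PERMISSIONS.get(role, set()) for role in user_roles)

print(authorized(["tape_operator"], "mount_tape"))        # True
print(authorized(["tape_operator"], "provision_volume"))  # False
```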
For external threats, you need to protect data both in-flight and at-rest. In-flight deals with data that travels over a wire, or wirelessly through the air, from source to destination. When companies have multiple buildings, the transmissions can be encrypted at the source, and decrypted on arrival. The bigger threat is data at-rest: hackers and cyber-thieves looking to download specific content, like personally identifiable information, financial information, and other sensitive data.
IBM was the first to deliver an encrypting tape drive, the TS1120. The encryption process is handled right at the drive itself, eliminating the burden of encryption from host processing cycles, and eliminating the need for specialized hardware sitting between server and storage system. Since then, we have delivered encryption on the LTO-4 and TS1130 drives as well.
When disk drives break or are decommissioned, the data on them may still be accessible. Customers have a tough decision to make when a disk drive module (DDM) stops working:
Send it back to the vendor or manufacturer to have it replaced, repaired or investigated, potentially exposing sensitive information.
Keep the broken drive, forfeit any refund or free replacement, and then physically destroy the drive. There are dozens of videos on [YouTube.com] on different ways to do this!
The launch previewed the [IBM partnership with LSI and Seagate] to deliver encryption technology for disk drives, known as "Full Drive Encryption" or FDE. Having all data encrypted on all drives, without impacting performance, eliminates having to decide which data gets encrypted and which doesn't. With data safely encrypted, companies can now send in their broken drives for problem determination and replacement. The more consistently a solution can be applied across everything, without human intervention and decision making, the less impact it has. This was the driving motivation in both disk and tape drive encryption.
(Early in my IBM career, some lawyers decided we needed to add a standard 'paragraph' to the copyright text in the upper comment section of our software modules, and so we had a team meeting on this. The lawyer who presented to us estimated that perhaps only 20 to 35 percent of the modules needed to be updated with this paragraph, and taught us what to look for to decide whether or not a module needed to be changed. My team argued how tedious this was going to be: it would take time to open up each module, evaluate it, and make the decision. With thousands of modules involved, the process could take weeks. The fact that this was going to take us weeks did not seem to concern our lawyer one bit; it was just the cost of doing business. Finally, I asked if it would be legal to just add the standard paragraph to ALL the modules without any analysis whatsoever. The lawyer was stunned. There was no harm adding this paragraph to all the modules, he said, but that would be 3-5x more work, so why would I even suggest it? Our team laughed, recognizing immediately that it was the fastest way to get it done. One quick program updated all modules that afternoon.)
To manage these keys, IBM previewed the Tivoli Key Lifecycle Manager (TKLM). This software helps automate the management of encryption keys throughout their lifecycle, to help ensure that encrypted data on storage devices cannot be compromised if lost or stolen. It will apply to both disk and tape encryption, so that one system will manage all of the encryption keys in your data center.
For those who only read the first and last paragraphs of each post, here is my recap: Information Security is intended as an end-to-end capability to protect against both internal and external threats, restricting access only to those who have a "need-to-know" or are "qualified-to-act". Security approaches like "single sign-on", and encryption that applies to all tapes and all disks in the data center, greatly simplify deployment.
Earlier this year, IBM launched its [New Enterprise Data Center vision]. The average data center was built 10-15 years ago, at a time when the World Wide Web was still in its infancy, some companies were deploying their first storage area network (SAN) and email system, and if you asked anyone what "Google" was, they might tell you it was ["a one followed by a hundred zeros"]!
Full disclosure: Google, the company, just celebrated its [10th anniversary] yesterday, and IBM has partnered with Google on a variety of exciting projects. I am employed by IBM, and own stock in both companies.
In just the last five years, we saw rapid growth in information, fueled by Web 2.0 social media, email, mobile hand-held devices, and the convergence of digital technologies that blurs the lines between communications, entertainment and business information. This explosion in information is not just "more of the same", but rather a dramatic shift from predominantly databases for online transaction processing to mostly unstructured content. IT departments are no longer just the "back office" recording financial transactions for accountants, but now also take on a more active "front office" role. For a growing number of industries, information technology plays a pivotal role in generating revenue, making smarter business decisions, and providing better customer service.
IBM felt a new IT model was needed to address this changing landscape, so IBM's New Enterprise Data Center vision has these five key strategic initiatives:
Highly virtualized resources
Business resiliency and security
Business-driven Service Management
Green, Efficient, Optimized facilities
Information Infrastructure
In February, IBM announced new products and features to support the first two initiatives, including the highly virtualized capability of the IBM z10 EC mainframe, and related business resiliency features of the [IBM System Storage DS8000 Turbo] disk system.
In May, IBM launched its Service Management strategic initiative at the Pulse 2008 conference. I was there in Orlando, Florida, at the Swan and Dolphin resort, to present to clients. You can read my three posts: [Day 1; Day 2 Main Tent; Day 2 Breakout sessions].
In June, IBM launched its fourth strategic initiative, "Green, Efficient and Optimized Facilities", with [Project Big Green 2.0], which included the Space-Efficient Volume (SEV) and Space-Efficient FlashCopy (SEFC) capabilities of the IBM System Storage SAN Volume Controller (SVC) 4.3 release. Fellow blogger and IBM Master Inventor Barry Whyte (BarryW) has three posts on his blog about this: [SVC 4.3.0 Overview; SEV and SEFC detail; Virtual Disk Mirroring and More]
Some have speculated that the IBM System Storage team seemed to be on vacation the past two months, with few press releases and little or no fanfare about our July and August announcements, and not responding directly to critics and FUD in the blogosphere. It was because we were holding them all for today's launch, taking our cue from a famous perfume commercial:
"If you want to capture someone's attention -- whisper."
My team and I were actually quite busy at the [IBM Tucson Executive Briefing Center]. In between doing our regular job of talking to excited prospects and clients, we trained sales reps and IBM Business Partners, wrote certification exams, and updated marketing collateral. Fortunately, competitors stopped promoting their own products to discuss and demonstrate why they are so scared of what IBM is planning. The fear was well justified. Even a few journalists helped raise the word-of-mouth buzz and excitement level. A big kiss to Beth Pariseau for her article in [SearchStorage.com]!
(Last week we broke radio silence to promote our technology demonstration of 1 million IOPS using Solid State Disk, just to get the huge IBM marketing machine oiled up and ready for today.)
Today, IBM General Manager Andy Monshaw launched the fifth strategic initiative, [IBM Information Infrastructure], at the [IBM Storage and Storage Networking Symposium] in Montpellier, France. Montpellier is one of the six locations of our New Enterprise Data Center Leadership Centers launched today. The other five are Poughkeepsie, Gaithersburg, Dallas, Mainz and Boeblingen, with more planned for 2009.
Although IBM has been using the term "information infrastructure" for more than 30 years, it might be helpful to define it for you readers:
“An information infrastructure comprises the storage, networks, software, and servers integrated and optimized to securely deliver information to the business.”
In other words, it's all the "stuff" that delivers information from the magnetic surface recording of the disk or tape media to the eyes and ears of the end user. Everybody has an information infrastructure already; some are just more effective than others. For those of you not happy with yours, IBM has the products, services and expertise to help with your data center transformation.
IBM wants to help its clients deliver the right information to the right people at the right time, to get the most benefit from information, while controlling costs and mitigating risks. There might be more than a dozen ways to address the challenges involved, but IBM's Information Infrastructure strategic initiative focuses on four key solution areas:
Information Availability
Information Security
Information Retention
Information Compliance
Last, but not least, I would like to welcome to the blogosphere IBM's newest blogger, Moshe Yanai, formerly the father of the EMC Symmetrix and now leading the IBM XIV team. Already from his first post on his new [ThinkStorage blog], I can tell he is not going to pull any punches either.
Next Monday, September 1, 2008, marks my two year "blogoversary" for this blog!
I won't be blogging on Monday, of course, because that is [Labor Day] holiday here in the United States.
(From a Canadian colleague: the US is not the only country that celebrates Labor Day on the first weekend in September. Canada also celebrates Labour Day on the first weekend in September. It's the only holiday (other than Christmas/New Years) where we are in sync with the US. Our Thanksgiving Days are different, as is your July 4 vs our July 1. But for Labour Day we are one with the Borg...)
(From an Australian colleague: each province of Australia has its own day to celebrate Labor Day, see [Australia Public Holidays])
Much of the rest of the world celebrates Labor Day on May 1, but the USA celebrates it on the first Monday of September, which this year lands on September 1. The day was originally intended as a "day off for working citizens"; IBM is kind enough to let managers and marketing personnel have the day off also. (Not that anyone is going to notice no press releases next Monday, right?)
I started this blog on September 1, 2006 as part of IBM's big ["50 Years of Disk Systems Innovation"] campaign. IBM introduced the first commercial disk system on September 13, 1956, and so the 50th anniversary was in 2006. Last year, IBM celebrated the 55th anniversary of tape systems.
Several readers have asked me why I haven't talked about recent current events, such as the Olympic Games in Beijing, or the U.S. national conventions in the race for U.S. President. I have to remind them of one of the key precepts of IBM's blogging guidelines:
8. Respect your audience. Don’t use ethnic slurs, personal insults, obscenity, or engage in any conduct that would not be acceptable in IBM’s workplace. You should also show proper consideration for others’ privacy and for topics that may be considered objectionable or inflammatory - such as politics and religion.
I made subtle references to my senator from Arizona, John McCain, in my post [ILM for my iPod], and to Barack Obama in my post [Searching for matching information]. I don't think anyone would mind that I send a "Happy Birthday!" wish to both of them. Senator McCain turns 72 years old today, and Senator Obama turned 47 years old earlier this month.
And lastly, Tucson itself [celebrates this entire month] its 233rd birthday. That's right, Tucson, the 32nd largest city in the USA, and headquarters for IBM System Storage, is older than the USA itself. While the Tucson area has been continuously inhabited by humans for over 3,500 years, it officially became Tucson on August 20, 1775.
Fellow blogger Justin Thorp has opined that [blogging is like jogging]. Some days, you are just too busy to do it; other days, you make time for it, because you know it is important. For the record, it is not my job to blog for IBM; that ended in September 2007. I continue to blog anyway because I have benefited from it, both personally and professionally. I want to thank all of you readers out there for making this blog a great success! Being named one of the top 10 blogs of the IT storage industry by Network World, two back-to-back Brand Impact awards from Liquid Agency, and recently earning a "31" Technorati ranking, have really helped keep me going.
So, I look forward to next month, and to beginning my third year on this blog. I am sure there will be lots of surprises and announcements in the coming weeks and months that I will have plenty to write about.
Well, it's Tuesday, and so it is "announcement day" again! Actually, for me it is Wednesday morning here in Mumbai, India, but since I was "press embargoed" until 4pm EDT from talking about these enhancements, I had to wait until Wednesday morning here to talk about them.
World's Fastest 1TB tape drive
IBM announced its new enterprise [TS1130 tape drive] and corresponding [TS3500 tape library support]. This one has a funny back-story. Last week, while we were preparing the press release, we debated whether we should compare the 1TB per cartridge capacity as double that of Sun's Enterprise T10000 (500GB), or against LTO-4 (800GB). The problem changed when Sun announced on Monday that they too had a 1TB tape drive, so instead of saying that we had the "World's First 1TB tape drive", we quickly changed this to the "World's Fastest 1TB tape drive" instead. At 160MB/sec top speed, IBM's TS1130 is 33 percent faster than Sun's latest announcement. Sun was rather vague about when they will actually ship their new units, so IBM may still end up being first to deliver as well.
While EMC and other disk-only vendors have stopped claiming that "tape is dead", these recent announcements from IBM and Sun indicate that tape is indeed alive and well. IBM is able to borrow technologies from disk, such as the Giant Magneto Resistive (GMR) head, for its tape offerings, which means much of the R&D for disk applies to tape, keeping both forms of storage well invested. Tape continues to be the "greenest" storage option, more energy efficient than disk, optical, film, microfiche and even paper.
On the LTO front, IBM enhanced the reporting capabilities of its[TS3310] midrange tape library. This includes identifying the resource utilization of the drives, reporting on media integrity, and improved diagnostics to support library-managed encryption.
IBM System Storage DR550
As a blended disk-and-tape solution, the [IBM System Storage DR550] easily replaces the EMC Centera to meet compliance storage requirements. IBM announced that we have greatly expanded its scalability: it can now support 1TB disk drives, as well as attach to either IBM or Sun 1TB tape drives.
Massive Array of Idle Disks (MAID)
IBM now offers a "Sleep Mode" in the firmware of the [IBM System Storage DCS9550], which is often called "Massive Array of Idle Disks" (MAID) or spin-down capability. This can reduce the amount of power consumed during idle times.
That's a lot of exciting stuff. I'm off to breakfast now.
Based on this success, and perhaps because I am also fluent in Spanish, I was asked to help with Proyecto Ceibal, the team for OLPC Uruguay. Normally, the XS school server resides at the school location itself, so that even if the internet connection is disrupted or limited, the school kids can continue to access each other and the web cache content until the internet connection is restored. However, with a diverse development team with people in the United States, Uruguay, and India, we first looked to Linux hosting providers that would agree to provide free or low-cost monthly access. We spent (make that "wasted") the month of May investigating. Most providers I talked to were not interested in having a customized Linux kernel on non-standard hardware on their shop floor, and wanted instead to offer their own standard Linux build on existing standard servers, managed by their own system administrators, or were not interested in providing it for free. Since the XS-163 kernel is customized for the x86 architecture, it is one of those exceptions where we could not host it on an IBM POWER or mainframe as a virtual guest.
This got picked up as an [idea] for Google's [Summer of Code], and we are mentoring Tarun, a 19-year-old student, to act as lead software developer. However, summer was fast approaching, and we wanted this ready for the next semester. In June, our project leader, Greg, came up with a new plan: build a machine and have it connected at an internet service provider that would cover the cost of bandwidth, and be willing to accept this with remote administration. We found a volunteer organization to cover this -- thank you Glen and Vicki!
We found a location, so the request to me sounded simple enough: put together a PC from commodity parts that meets the requirements of the customized Linux kernel, the latest release being called [XS-163]. The server would have two disk drives, three Ethernet ports, and 2GB of memory, and be installed with the customized XS-163 software, SSHD for remote administration, the Apache web server, the PostgreSQL database and the PHP programming language. Of course, the team wanted this for as little cost as possible, and for me to document the process, so that it could be repeated elsewhere. Some stretch goals included a dual-boot with Debian 4.0 Etch Linux for development/test purposes, an alternative database such as MySQL for testing, a backup procedure, and a Recover-DVD in case something goes wrong.
Some interesting things happened:
The XS-163 is shipped as an ISO file representing a LiveCD bootable Linux that will wipe your system clean and lay down the exact customized software for a one-drive, three-Ethernet-port server. Since it is based on Red Hat's Fedora 7 Linux, I found it helpful to install that instead, and experiment with moving sections of code over. This is similar to geneticists extracting the DNA from the cell of a pit bull and putting it into the cell of a poodle. I would not recommend this for anyone not familiar with Linux.
I also experimented with modifying the pre-built XS-163 CD image by cracking open the squashfs, hacking the contents, and then putting it back together and burning a new CD. This provided some interesting insight, but in the end I was able to do it all from the standard XS-163 image.
Once I figured out the appropriate "scaffolding" required, I managed to proceed quickly, with running versions of XS-163, plain vanilla Fedora 7, and Debian 4 in a multi-boot configuration.
The BIOS "raid" capability was really more like BIOS-assisted RAID for Windows operating system drivers. This "fake raid" wasn't supported by Linux, so I used Linux's built-in "software raid" instead, which allowed some partitions to be raid-mirrored, and other partitions to be un-mirrored. Why not mirror everything? With two 160GB SATA drives, you have three choices:
No RAID, for a total space of 320GB
RAID everything, for a total space of 160GB
Tiered information infrastructure: use RAID for some partitions, but not all.
The last approach made sense, as a lot of the data is cached web content, easily retrievable from the internet. This also allowed some "scratch space" for downloading large files and so on. For example, 90GB mirrored for the OS images, settings and critical applications, plus 70GB on each drive for scratch and web cache, results in a total of 230GB of usable disk space, a 43 percent improvement over an all-RAID solution.
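Spelling out the arithmetic (a trivial Python sketch of the three layouts):

```python
# Usable capacity of two 160GB drives under the three layouts above.
DRIVE_GB = 160

no_raid  = 2 * DRIVE_GB    # 320GB, nothing protected
all_raid = DRIVE_GB        # 160GB, everything mirrored
tiered   = 90 + 2 * 70     # 230GB: 90GB mirrored OS, 70GB scratch per drive

print(no_raid, all_raid, tiered)          # 320 160 230
print(f"{(tiered - all_raid) / all_raid:.1%} more than all-RAID")  # 43.8%
```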
While [Linux LVM2] provides software-based "storage virtualization" similar to the hardware-based IBM System Storage SAN Volume Controller (SVC), it was a bad idea to put the "root" directories of my many OS images on it. Linux, like most operating systems, expects things to be in the same place where it last shut down, but in a multi-boot environment, you might boot the first OS, move things around, and then when you try to boot the second OS, it doesn't work anymore, corrupts what it does find, or hangs with a "kernel panic". In the end, I decided to use RAID non-LVM partitions for the root directories, and only use LVM2 for data that is not needed at boot time.
While they are both Linux, Debian and Fedora were different enough to cause me headaches. Settings were different, parameters were different, file directories were different. Not quite as religious as MacOS-versus-Windows, but you get the picture.
During this time, the facility was out getting a domain name, IP address, subnet mask and so on, so I tested with my internal 192.168.x.y addresses and figured I would change this to whatever it should be the day I shipped the unit. (I'll find out next week if that was the right approach!)
Afraid that something might go wrong while I am in Tokyo, Japan next week (July 7-11), or Mumbai, India the following week (July 14-18), I added a Secure Shell [SSH] daemon that runs automatically at boot time. This involves putting each admin's public key on the server, while each remote admin keeps their own private key on their own client machine. I know all about public/private key pairs, as IBM is a leader in encryption technology, and was the first to deliver built-in encryption with the IBM System Storage TS1120 tape drive.
Giving users access to all their files from any OS image required that I either (a) keep identical copies everywhere, or (b) have a shared partition. The latter turned out to be the best choice, with an LVM2 logical volume for the "/home" directory shared among all of the OS images. As we develop the application, we might find other directories that make sense to share as well.
For developing across platforms, I wanted the Ethernet devices (eth0, eth1, and so on) to match the actual ports they are supposed to be connected to in a static IP configuration. Most people use DHCP, so it doesn't matter to them, but the XS software requires static assignments, so it mattered here. For example, "eth0" is the 1 Gbps port to the WAN, and "eth1"/"eth2" are the two 10/100 Mbps PCI NIC cards to other servers. Pinning the interface names to specific hardware ports was different on Fedora and Debian, but I got it working.
While it was a stretch goal to develop a backup method, one that could perform Bare Machine Recovery from burned DVD media, it turned out I needed to do this anyway, just to prevent losing my work in case things went wrong. I used an external USB drive to develop the process, and got everything to fit onto a single 4GB DVD. Using IBM Tivoli Storage Manager (TSM) for this seemed overkill, and [Mondo Rescue] didn't handle LVM2+RAID as well as I wanted, so I chose [partimage] instead, which backs up each primary partition, mirrored partition, or LVM2 logical volume, keeping all the time stamps, ownerships, and symbolic links intact. It has the ability to chop up the output into fixed-size pieces, which is helpful if you are going to burn them on 700MB CDs or 4.7GB DVDs. In my case, my FAT32-formatted external USB disk drive can't handle files bigger than 2GB, so this feature was helpful for that as well. I standardized on 660 MiB [about 692MB] per piece, since that met all the criteria.
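For illustration, here is roughly what that fixed-size splitting looks like in Python (my own sketch; partimage does this internally, and the file names are hypothetical):

```python
# Sketch of chopping a backup image into fixed-size pieces, similar in
# spirit to partimage's split option. 660 MiB fits on a 700MB CD and
# stays under the FAT32 2GB file-size limit.
from pathlib import Path

PIECE_BYTES = 660 * 1024 * 1024   # 660 MiB, about 692 million bytes

def split_image(image: Path) -> None:
    """Write piece files (e.g. backup.000, backup.001, ...) next to the source."""
    with image.open("rb") as src:
        index = 0
        while chunk := src.read(PIECE_BYTES):
            image.with_suffix(f".{index:03d}").write_bytes(chunk)
            index += 1

# split_image(Path("backup.img"))  # produces backup.000, backup.001, ...
# Reassembly is simple concatenation: cat backup.0* > backup.img
```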
The folks at [SysRescCD] saved the day. The standard SysRescCD image assigned eth0, eth1, and eth2 differently than the three base OS images, but the nice folks in France who write SysRescCD created a customized [kernel parameter that allowed the assignments to be fixed per MAC address] in support of this project. With this in place, I was able to make a live Boot-CD that brings up SSH, with all the users, passwords, and Ethernet devices matching the hardware. I installed this LiveCD as the "Rescue Image" on the hard disk itself, and also made a Recovery-DVD that boots up just like the Boot-CD, but contains the 4GB of backup files.
For testing, I used Linux's built-in Kernel-based Virtual Machine [KVM], which works like VMware, but is open source and included in the 2.6.20 kernels that I am using. IBM is the leading reseller of VMware and has been doing server virtualization for the past 40 years, so I am comfortable with the technology. The XS-163 runs Apache and PostgreSQL servers as a platform for [Moodle], an open source class management system, and the combination is memory-intensive enough that I did not want to incur the overhead of running production this way, but it was great for testing!
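As a hedged sketch of that kind of test loop, the following launches a throwaway KVM guest with QEMU to boot a recovery image; the file names and memory size are assumptions, not the project's actual settings:

```python
import subprocess

# Boot a scratch guest from a recovery ISO under KVM; adjust the disk
# and ISO paths to whatever you are testing (both names are made up).
subprocess.run([
    "qemu-system-x86_64",
    "-enable-kvm",               # use KVM hardware acceleration
    "-m", "512",                 # 512 MB of guest memory
    "-hda", "scratch-disk.img",  # empty disk image to restore onto
    "-cdrom", "recovery.iso",    # the Recovery-DVD image under test
    "-boot", "d",                # boot from the CD-ROM device first
], check=True)
```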
With all this in place, the design requires no Linux system admin or XS-163/Moodle expert at the facility. Instead, all we need is someone to insert the Boot-CD or Recovery-DVD and reboot the system if needed.
Just before packing up the unit for shipment, I changed the IP addresses to the values they need at the destination facility, updated the [GRUB boot loader] default, and made a final backup, burned to a Recovery-DVD. Hopefully, it works by just turning on the unit [headless], without any keyboard, monitor or configuration required. Fingers crossed!
So, thanks to the rest of my team: Greg, Glen, Vicki, Tarun, Marcel, Pablo and Said. I am very excited to be part of this, and look forward to seeing this become something remarkable!
Two weeks ago, I mentioned in my post [Pulse 2008 - Day 2 Breakout sessions] that Henk de Ruiter from ABN Amro bank presented his success story implementing Information Lifecycle Management (ILM) across his various data centers. I am no stranger to ABN Amro, having helped "ABN" and "Amro" banks merge their mainframe data in 1991. Henk has agreed to let me share with my readers more of this success story here on my blog:
Back in December 2005, Henk and his colleagues had come to visit the IBM Tucson Executive Briefing Center (EBC) to hear about IBM products and services. At the time, I was part of our "STG Lab Services" team that performed ILM assessments at client locations. I explained to ABN Amro that the ILM methodology does not require an all-IBM solution, and that ILM could even provide benefits with their current mix of storage, software and service providers. The ABN Amro team liked what I had to say, and my team was commissioned to perform ILM assessments at three of their data centers:
Sao Paulo (Brazil)
Chicago, IL (USA)
Each data center had its own management, its own decision making, and its own set of issues, so we structured each ILM assessment independently. When we presented our results, we showed what each data center could do better with their existing mixed bag of storage, software and service providers, and also showed how much better their life would be with IBM storage, software and services. They agreed to give IBM a chance to prove it, and so a new "Global Storage Study" was launched to take the recommendations from our three ILM studies, and flesh out the details to make a globally-integrated enterprise work for them. Once completed, it was renamed the "Global Storage Solution" (GSS).
Henk summarized the above with "I am glad to see Tony Pearson in the audience, who was instrumental to making this all happen." As with many client testimonials, he presented a few charts on who ABN Amro is today: the 12th largest bank worldwide, 8th largest in Europe. They operate in 53 countries and manage over a trillion euros in assets.
They have over 20 data centers, with about 7 PB of disk, and over 20 PB of tape, both growing at 50 to 70 percent CAGR. About 2/3 of their operations are now outsourced to IBM Global Services; the remaining 1/3 is non-IBM equipment managed by a different service provider.
ABN Amro deployed IBM TotalStorage Productivity Center, various IBM System Storage DS family disk systems, SAN Volume Controller (SVC), Tivoli Storage Manager (TSM), Tivoli Provisioning Manager (TPM), and several other products. Armed with these products, they performed the following:
Clean Up. IBM uses the term "rationalization" for the assignment of business value, to avoid confusion with the term "classification", which many in IT associate with identifying ownership and read/write authorization levels. Often, in the initial phases of an ILM deployment, a portion of the data is determined to be eligible for clean up, either moved to a lower-cost tier or deleted immediately. ABN Amro and IBM set a goal to identify at least 20 percent of their data for clean up.
New tiers. Rather than traditional "storage tiers", which are often just Tier 1 for Fibre Channel disk and Tier 2 for SATA disk, ABN Amro and IBM came up with seven "information infrastructure tiers" that incorporate service levels, availability and protection status. They are:
High-performance, Highly-available disk with Remote replication
High-performance, Highly-available disk (no remote replication)
Mid-performance, high-capacity disk with Remote replication
Mid-performance, high-capacity disk (no remote replication)
Non-erasable, Non-rewriteable (NENR) storage employing a blended disk and tape solution
Enterprise Virtual Tape Library with remote replicationand back-end physical tape
Mid-performance physical tape
These tiers are applied equally across their mainframe and distributed platforms. All of the tiers are priced per "primary GB", so any additional capacity required for replication or point-in-time copies, either local or remote, is folded into the base price. ABN Amro felt a mission-critical application on Windows or UNIX deserves the same Tier 1 service level as a mission-critical mainframe application. Exactly!
Deployed storage virtualization for disk and tape. This involved the SAN Volume Controller and the IBM TS7000 series virtual tape library.
Implemented workflow automation. The key product here is IBM Tivoli Provisioning Manager.
Started an investigation of HSM on distributed platforms. This would be policy-based space management to migrate less frequently accessed Windows or UNIX data to the TSM pool.
While the deployment is not yet complete, ABN Amro feels they have already recognized business value:
Reduced cost by identifying data that should be stored on lower tiers
Simplified management, consolidated across all operating systems (mainframe, UNIX, Windows)
Increased utilization of existing storage resources
Reduced manual effort through policy-based automation, which can lead to fewer human errors and faster adaptability to new business opportunities
Standardized backup and other operational procedures
Henk and the rest of ABN Amro are quite pleased with the progress so far, although recent developments, namely the takeover of ABN AMRO by a consortium of banks, mean that the model is so far implemented only in Europe. Further rollout depends on the storage strategy of the new owners. Nonetheless, I am glad that I was able to work with Henk, Jason, Barbara, Steve, Tom, Dennis, Craig and others to be part of this from the beginning and to see it roll out successfully over the years.
Continuing my catch-up on past posts: Jon Toigo, on his DrunkenData blog, posted a ["bleg"] for information about deduplication. The responses come from the "who's who" of the storage industry, so I will provide IBM's view. (Jon, as always, you have my permission to post this on your blog!)
Please provide the name of your company and the de-dupe product(s) you sell. Please summarize what you think are the key values and differentiators of your wares.
IBM offers two different forms of deduplication. The first is the IBM System Storage N series disk system with Advanced Single Instance Storage (A-SIS), and the second is IBM Diligent ProtecTIER software. Larry Freeman from NetApp already explains A-SIS in the [comments on Jon's post], so I will focus on the Diligent offering in this post. The key differentiators for Diligent are:
Data agnostic. Diligent does not require content awareness, format awareness, or identification of the backup software used to send the data. No special client or agent software is required on servers sending data to an IBM Diligent deployment.
Inline processing. Diligent does not require temporarily storing data on back-end disk to post-process later.
Scalability. Up to 1PB of back-end disk managed with an in-memory dictionary.
Data Integrity. All data is diff-compared for full 100 percent integrity. No data is accidentally discarded based on assumptions about the rarity of hash collisions.
InfoPro has said that de-dupe is the number one technology that companies are seeking today — well ahead of even server or storage virtualization. Is there any appeal beyond squeezing more undifferentiated data into the storage junk drawer?
Diligent is focused on backup workloads, which offer the best opportunity for deduplication benefits. The two main benefits are:
Keeping more backup data available online for fast recovery.
Mirroring the backup data to another remote location for added protection. With inline processing, only the deduplicated data is sent to the back-end disk, and this greatly reduces the amount of data sent over the wire to the remote location.
Every vendor seems to have its own secret sauce de-dupe algorithm and implementation. One, Diligent Technologies (just acquired by IBM), claims that theirs is best because it collapses two functions — de-dupe then ingest — into one inline function, achieving great throughput in the process. What should be the gating factors in selecting the right de-dupe technology?
As with any storage offering, the three gating factors are typically:
Will this meet my current business requirements?
Will this meet my future requirements for the next 3-5 years that I plan to use this solution?
What is the Total Cost of Ownership (TCO) for the next 3-5 years?
Assuming you already have backup software operational in your existing environment, it is possible to determine the necessary ingest rate: how many "Terabytes per Hour" (TB/h) must be received, processed and stored from the backup software during the backup window? IBM intends to document its performance test results of specific software/hardware combinations to provide guidance for clients' purchase and planning decisions.
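As a quick worked example (the workload numbers are invented for illustration), the arithmetic is just the nightly backup volume divided by the window:

```python
# Hypothetical sizing: 12 TB of nightly backups in an 8-hour window.
backup_tb = 12.0    # terabytes written per night (assumed)
window_hours = 8.0  # length of the backup window (assumed)

ingest_tb_per_hour = backup_tb / window_hours        # 1.5 TB/h
ingest_mb_per_sec = ingest_tb_per_hour * 1e6 / 3600  # ~417 MB/s

print(f"Required ingest rate: {ingest_tb_per_hour:.1f} TB/h "
      f"(~{ingest_mb_per_sec:.0f} MB/s)")
```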
For post-process deployments, such as the IBM N series A-SIS feature, the system only has to receive and store the data during the backup window, and the rest of the 24-hour period can be spent doing the post-processing to find duplicates. This might be fine now, but as your data grows, you might find your backup window growing, leaving less time for the post-processing to catch up. IBM Diligent does the processing inline, so it is unaffected by an expansion of the backup window.
IBM Diligent can scale up to 1PB of back-end data, and the ingest rate does not suffer as more data is managed.
As for TCO, post-process solutions must have additional back-end storage to temporarily hold the data until the duplicates can be found. With IBM Diligent's inline methodology, only deduplicated data is stored, so less disk space is required for the same workloads.
Despite the nuances, it seems that all block level de-dupe technology does the same thing: removes bit string patterns and substitutes a stub. Is this technically accurate or does your product do things differently?
IBM Diligent emulates a tape library, so the incoming data appears as files to be written sequentially to tape. A file is a string of bytes. Unlike block-level algorithms that divide files into fixed chunks, IBM Diligent performs diff-compares of incoming data with existing data, and identifies ranges of bytes that duplicate what is already stored on the back-end disk. The file is then represented as a sequence of pointers to "extents" of either unique or existing data. An extent can vary from 2KB to 16MB in size.
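To make the idea concrete, here is a deliberately tiny Python sketch of extent-based deduplication; it is not IBM Diligent's algorithm (the real product matches variable-length byte ranges far more cleverly), just an illustration of representing a file as pointers into previously stored bytes:

```python
# Toy extent-based deduplication: a file becomes a list of pointers
# to byte ranges, some of which were already stored by earlier files.
class ExtentStore:
    def __init__(self):
        self.blob = bytearray()  # every unique byte range ever stored

    def ingest(self, data: bytes, chunk: int = 4):
        """Return the file as a list of (offset, length) extents."""
        extents = []
        for i in range(0, len(data), chunk):
            piece = data[i:i + chunk]
            pos = self.blob.find(piece)  # compare against existing data
            if pos == -1:                # unique bytes: store a new extent
                pos = len(self.blob)
                self.blob.extend(piece)
            extents.append((pos, len(piece)))
        return extents

    def reconstitute(self, extents) -> bytes:
        """Rebuild a byte-perfect copy of the file from its extents."""
        return b"".join(bytes(self.blob[o:o + n]) for o, n in extents)

store = ExtentStore()
refs = store.ingest(b"ABCDABCDWXYZ")
assert store.reconstitute(refs) == b"ABCDABCDWXYZ"  # bit-perfect round trip
```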
De-dupe is changing data. To return data to its original state (pre-de-dupe) seems to require access to the original algorithm plus stubs/pointers to bit patterns that have been removed to deflate data. If I am correct in this assumption, please explain how data recovery is accomplished if there is a disaster. Do I need to backup your wares and store them off site, or do I need another copy of your appliance or software at a recovery center?
For IBM Diligent, all of the data needed to reconstitute the data is stored on back-end disks. Assuming that all of your back-end disks are available after the disaster, either the original or mirrored copy, then you only need the IBM Diligent software to make sense of the bytes written to reconstitute the data. If the data was written by backup software, you would also need compatible backup software to recover the original data.
De-dupe changes data. Is there any possibility that this will get me into trouble with the regulators or legal eagles when I respond to a subpoena or discovery request? Does de-dupe conflict with the non-repudiation requirements of certain laws?
I am not a lawyer, and certainly there are aspects of [non-repudiation] that may or may not apply to specific cases.
What I can say is that storage is expected to return a "bit-perfect" copy of the data that was written. There are laws against changing the format, for example, converting an original Microsoft Word document and saving it instead as an Adobe PDF file. In many conversions, it would be difficult to recreate the bit-perfect copy; certainly, it would be difficult to recreate the bit-perfect MS Word format from a PDF file. Laws in France and Germany specifically require that the original bit-perfect format be kept.
Based on that, IBM Diligent is able to return a bit-perfect copy of what was written, same as if it were written to regular disk or tape storage, because all data is diff-compared byte-for-byte with existing data.
In contrast, other solutions based on hash codes can have collisions that result in presenting a completely different set of data on retrieval. If the data you are trying to store happens to have the same hash code as completely different data already stored on the solution, it might just be discarded as a "duplicate". The chance of a collision might be rare, but it could be enough to put doubt in the minds of a jury. For this reason, IBM N series A-SIS, which does perform hash code calculations, also does a full byte-for-byte comparison to ensure that the data is indeed a duplicate of an existing stored block.
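A toy Python sketch of that "hash as a hint, byte-compare as the verdict" approach (illustrative only, not the A-SIS implementation):

```python
import hashlib
from collections import defaultdict

# The hash only narrows the search to candidate blocks; a full
# byte-for-byte comparison decides whether the block is a true duplicate.
stored = defaultdict(list)  # digest -> list of actual block contents

def write_block(block: bytes) -> bytes:
    digest = hashlib.sha256(block).hexdigest()
    for candidate in stored[digest]:
        if candidate == block:    # byte-for-byte verification
            return candidate      # verified duplicate: share the stored copy
    stored[digest].append(block)  # new data, or a hash collision: keep it
    return block

first = write_block(b"payroll record 1")
second = write_block(b"payroll record 1")
assert second is first  # the duplicate shares the stored copy
```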
Some say that de-dupe obviates the need for encryption. What do you think?
I disagree. I've been to enough [Black Hat] conferences to know that it would be possible to read the data off the back-end disk, using a variety of forensic tools, and piece together strings of personal information, such as names, social security numbers, or bank account codes.
Currently, IBM provides encryption on real tape (both TS1120 and LTO-4 generation drives), and is working with open industry standards bodies and disk drive module suppliers to bring similar technology to disk-based storage systems. Until then, clients concerned about encryption should consider OS-based or application-based encryption from the backup software. IBM Tivoli Storage Manager (TSM), for example, can encrypt the data before sending it to the IBM Diligent offering, but this might reduce the number of duplicates found if different encryption keys are used.
Some say that de-duped data is inappropriate for tape backup, that data should be re-inflated prior to write to tape. Yet, one vendor is planning to enable an “NDMP-like” tape backup around his de-dupe system at the request of his customers. Is this smart?
Re-constituting the data back to the original format on tape allows the original backup software to interpret the tape data directly to recover individual files. For example, IBM TSM software can write its primary backup copies to an IBM Diligent offering onsite, and have a "copy pool" on physical tape stored at a remote location. The physical tapes can be used for recovery without any IBM Diligent software in the event of a disaster. If the IBM Diligent back-end disk images are lost, corrupted, or destroyed, IBM TSM software can point to the "copy pool" and be fully operational. Individual files or servers could be restored from just a few of these tapes.
An NDMP-like tape backup of a deduplicated back-end disk would require that all the tapes be intact, available, and fully restored to new back-end disk before the deduplication software could do anything. If a single cartridge from this set were unreadable or misplaced, it might impact access to many TBs of data, or render the entire system unusable.
In the case of 1PB of back-end disk for IBM Diligent, you would have to recover over a thousand tapes back to disk before you could recover any individual data from your backup software. Even with dozens of tape drives working in parallel, the complete process could take several days. This represents a longer "Recovery Time Objective" (RTO) than most people are willing to accept.
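A rough back-of-envelope check (the drive count and per-drive throughput are assumptions, not measured figures):

```python
# Restoring 1 PB through 24 tape drives at 120 MB/s each (all assumed).
petabyte_mb = 1_000_000_000            # 1 PB expressed in MB
drives, mb_per_sec = 24, 120
seconds = petabyte_mb / (drives * mb_per_sec)
print(f"{seconds / 86_400:.1f} days")  # roughly 4 days of pure restore time
```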
Some vendors are claiming de-dupe is “green” — do you see it as such?
Certainly, "deduplicated disk" is greener than "non-deduplicated" disk, but I have argued in past posts, supportedby Analyst reports, that it is not as green as storing the same data on "non-deduplicated" physical tape.
De-dupe and VTL seem to be joined at the hip in a lot of vendor discussions: Use de-dupe to store a lot of archival data on line in less space for fast retrieval in the event of the accidental loss of files or data sets on primary storage. Are there other applications for de-duplication besides compressing data in a nearline storage repository?
Deduplication can be applied to primary data, as in the case of the IBM System Storage N series A-SIS. As Larry suggests, MS Exchange and SharePoint could be good use cases that represent the possible savings from squeezing out duplicates. On the mainframe, many master-in/master-out tape applications could also benefit from deduplication.
I do not believe that deduplication products will run efficiently with “update in place” applications, that is, high levels of random writes for non-appending updates. OLTP and database workloads would not benefit from deduplication.
Just suggested by a reader: What do you see as the advantages/disadvantages of software based deduplication vs. hardware (chip-based) deduplication? Will this be a differentiating feature in the future… especially now that Hifn is pushing their Compression/DeDupe card to OEMs?
In general, new technologies are introduced in software first, and then, as implementations mature, move into hardware to improve performance. The same was true for RAID, compression, encryption, etc. The Hifn card does "hash code" calculations that do not benefit the current IBM Diligent implementation. Currently, IBM Diligent performs LZH compression in software, but certainly IBM could provide hardware-based compression with an integrated hardware/software offering in the future. Since IBM Diligent's inline process is so efficient, the bottleneck in performance is often the speed of the back-end disk; IBM Diligent can get an improved "ingest rate" using FC instead of SATA disk.
Sorry, Jon, that it took so long to get back to you on this, but since IBM had just acquired Diligent when you posted, it took me a while to investigate and research all the answers.
Well, it's Tuesday again, and we had several announcements this month, so here is a quick recap. We had some things announced May 13, and then some more announcements today, but since I was busy with conferences, I will combine them into one post for the entire month of May 2008.
This time, I thought I would go "audio" with a recording from Charlie Andrews, IBM director of product marketing for IBM System Storage:
Continuing my summary of Pulse 2008, the premiere service management conference focusing on IBM Tivoli solutions, I attended and presented breakout sessions on Monday afternoon.
Tivoli Storage "State-of-the-Subgroup" update
Kelly Beavers, IBM director of Tivoli Storage, presented the first breakout for all of the Tivoli Storage subgroup. Tivoli has several subgroups, but Tivoli Storage leads the others in revenues and profits. Tivoli Storage also has the top-performing business partner channel of any subgroup in IBM's Software Group division. IBM is the world's #1 provider of storage (hardware, software and services), so this came as no surprise to most of the audience.
Looking at just the Storage Software segment, it is estimated that customers will spend $3.5 billion US dollars more in 2011 than they did in 2007. IBM is #2 or #3 in each of the four major categories: Data Protection, Replication, Infrastructure management, and Resource management. In each category, IBM is growing market share, often taking share away from the established leaders.
There was a lot of excitement over the FilesX acquisition. I am still trying to learn more about this, but what I have gathered so far is that it can:
Like turning a "knob", you can adjust the level of backupprotection from traditional discrete scheduled backups, to morefrequent snapshots, to continuous data protection (CDP). Inthe past, you often used separate products or features to dothese three.
Perform "instantaneous restore" by performing a virtualmount of the backup copy. This gives the appearance that therestore is complete.
This year marks the 15th anniversary of IBM Tivoli Storage Manager (TSM), with over 20,000 customers. Also, this year marks the 6th year for IBM SAN Volume Controller, having sold over 12,000 SVC engines to over 4,000 customers.
Data Protection Strategies
Greg Tevis, IBM software architect for Tivoli Technical Strategy, and I presented this overview of data protection. We covered three key areas:
Protecting against unethical tampering with Non-erasable, Non-rewriteable (NENR) storage solutions
Protecting against unauthorized access with encryption on disk and tape
Protecting against unexpected loss or corruption with the seven "Business Continuity" tiers
There was so much interest in the first two topics that we only had about 9 minutes left to cover the third! Fortunately, Business Continuity will be covered in more detail throughout the week.
Henk de Ruiter from ABN Amro bank presented his success story implementing Information Lifecycle Management (ILM) across his various data centers using IBM systems, software and services.
Making your Disk Systems more Efficient and Flexible
I did not come up with the titles of these presentations. The team that did specifically chose to focus on the "business value" rather than the "products and services" being presented. In this session, Dave Merbach, IBM software architect, and I presented how SAN Volume Controller (SVC), TotalStorage Productivity Center, System Storage Productivity Center, Tivoli Provisioning Manager and Tivoli Storage Process Manager work together to make your disk storage more efficient and flexible.