Tonight PBS plans to air Season 38, Episode 6 of NOVA, titled [Smartest Machine On Earth]. Here is an excerpt from the station listing:
"What's so special about human intelligence and will scientists ever build a computer that rivals the flexibility and power of a human brain? In "Artificial Intelligence," NOVA takes viewers inside an IBM lab where a crack team has been working for nearly three years to perfect a machine that can answer any question. The scientists hope their machine will be able to beat expert contestants in one of the USA's most challenging TV quiz shows -- Jeopardy, which has entertained viewers for over four decades. "Artificial Intelligence" presents the exclusive inside story of how the IBM team developed the world's smartest computer from scratch. Now they're racing to finish it for a special Jeopardy airdate in February 2011. They've built an exact replica of the studio at its research lab near New York and invited past champions to compete against the machine, a big black box code -- named Watson after IBM's founder, Thomas J. Watson. But will Watson be able to beat out its human competition?"
Like most supercomputers, Watson runs the Linux operating system. The system has 2,880 cores (90 IBM Power 750 servers, four sockets each, eight cores per socket) to achieve 80 [TeraFlops]. TeraFlops is the unit of measure for supercomputers, representing a trillion floating-point operations per second. By comparison, Hans Moravec, principal research scientist at the Robotics Institute of Carnegie Mellon University (CMU), estimates that the [human brain is about 100 TeraFlops]. So, in the three seconds that Watson gets to calculate its response, it would have processed 240 trillion operations.
Several readers of my blog have asked for details on the storage aspects of Watson. Basically, it is a modified version of the IBM Scale-Out NAS [SONAS] that IBM offers commercially, but running Linux on POWER instead of Linux on x86. System p expansion drawers of SAS 15K RPM 450GB drives, 12 drives each, are dual-connected to two storage nodes, for a total of 21.6TB of raw disk capacity. The storage nodes use IBM's General Parallel File System (GPFS) to provide clustered NFS access to the rest of the system. Each Power 750 has minimal internal storage, mostly to hold the Linux operating system and programs.
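A quick back-of-envelope check of those numbers (all figures come from the two paragraphs above; the script itself is just illustrative arithmetic, not anything from the Watson team):

```python
TERA = 1e12

# 90 Power 750 servers x 4 sockets x 8 cores = 2,880 cores
cores = 90 * 4 * 8

# 80 TeraFlops sustained over the 3 seconds Watson gets per response
ops = 80 * TERA * 3              # 2.4e14, i.e. 240 trillion operations

# 21.6TB raw at 450GB per drive implies 48 drives (four drawers of 12)
drives = 21.6e12 / 450e9         # 48.0

print(cores, ops, drives)
```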
When Watson is booted up, the 15TB of total RAM are loaded up, and thereafter the DeepQA processing is all done from memory. According to IBM Research, "The actual size of the data (analyzed and indexed text, knowledge bases, etc.) used for candidate answer generation and evidence evaluation is under 1TB." For performance reasons, various subsets of the data are replicated in RAM on different functional groups of cluster nodes. The entire system is self-contained; Watson is NOT going out to the Internet searching for answers.
Well, it's Tuesday again, and today we had our third big storage launch of 2009! A lot got announced today as part of IBM's big "Dynamic Infrastructure" marketing campaign. I will just focus on the
disk-related announcements today:
IBM System Storage DS8700
IBM adds a new model to its DS8000 series with the
[IBM System Storage DS8700]. Earlier this month, fellow blogger and arch-nemesis Barry Burke from EMC posted [R.I.P DS8300] on the mistaken assumption that the new DS8700 meant that the DS8300 was going away, or that anyone who bought a DS8300 recently would be out of luck. Obviously, I could not respond until today's announcement, as the last thing I want to do is lose my job by disclosing confidential information. BarryB is wrong on both counts:
IBM will continue to sell the DS8100 and DS8300, in addition to the new DS8700.
Clients can upgrade their existing DS8100 or DS8300 systems to DS8700.
BarryB's latest post [What's In a Name - DS8700] is fair game, given all the fun and ridicule everyone had at his expense over EMC's "V-Max" name.
So the DS8700 is new hardware with only 4 percent new software. On the hardware side, it uses faster POWER6 processors instead of POWER5+, has faster PCI-e buses instead of the RIO-G loops, and faster four-port device adapters (DAs) for added bandwidth between cache and drives. The DS8700 can be ordered as a single-frame dual 2-way that supports up to 128 drives and 128GB of cache, or as a dual 4-way, consisting of one primary frame, and up to four expansion frames, with up to 384GB of cache and 1024 drives.
Not mentioned explicitly in the announcements were the things the DS8700 does not support:
ESCON attachment - Now that FICON is well-established for the mainframe market, there is no need to support the slower, bulkier ESCON options. This greatly reduced testing effort. The 2-way DS8700 can support up to 16 four-port FICON/FCP host adapters, and the 4-way can support up to 32 host adapters, for a maximum of 128 ports. The FICON/FCP host adapter ports can auto-negotiate between 4Gbps, 2Gbps and 1Gbps as needed.
LPAR mode - When IBM and HDS introduced LPAR mode back in 2004, it sounded like a great idea the engineers came up with. Most other major vendors followed our lead to offer similar "partitioning". However, it turned out to be what we call in the storage biz a "selling apple" not a "buying apple". In other words, something the salesman can offer as a differentiating feature, but that few clients actually use. It turned out that supporting both LPAR and non-LPAR modes merely doubled the testing effort, so IBM got rid of it for the DS8700.
Update: I have been reminded that both IBM and HDS delivered LPAR mode within a month of each other back in 2004, so it was wrong for me to imply that HDS followed IBM's lead; development clearly happened concurrently at both companies prior to that. EMC was late to the "partition" party, but who's keeping track?
Initial performance tests show up to 50 percent improvement for random workloads, and up to 150 percent improvement for sequential workloads, and up to 60 percent improvement in background data movement for FlashCopy functions. The results varied slightly between Fixed Block (FB) LUNs and Count-Key-Data (CKD) volumes, and I hope to see some SPC-1 and SPC-2 benchmark numbers published soon.
For Metro Mirror, Global Mirror, and Metro/Global Mirror, the DS8700 is compatible with the rest of the DS8000 series, as well as the ESS model 750, ESS model 800 and DS6000 series.
New 600GB FC and FDE drives
IBM now offers [600GB drives] for the DS4700 and DS5020 disk systems, as well as the EXP520 and EXP810 expansion drawers. In each case, we are able to pack up to 16 drives into a 3U enclosure.
Personally, I think the DS5020 should have been given a DS4xxx designation, as it resembles the DS4700
more than the other models of the DS5000 series. Back in 2006-2007, I was the marketing strategist for the IBM System Storage product line, and part of my job involved all of the meetings to name or rename products. Mostly I gave reasons why products should NOT be renamed, and why it was important to name the products correctly at the beginning.
IBM System Storage SAN Volume Controller hardware and software
Fellow IBM Master Inventor Barry Whyte has been covering the latest on the [SVC 2145-CF8 hardware]. IBM put out a press release last week on this, and today is the formal announcement with prices and details. Barry's latest post
[SVC CF8 hardware and SSD in depth] covers just part of the entire announcement.
The other part of the announcement was the [SVC 5.1 software] which can be loaded
on earlier SVC models 8F2, 8F4, and 8G4 to gain better performance and functionality.
To avoid confusion on what is hardware machine type/model (2145-CF8 or 2145-8A4) and what is software program (5639-VC5 or 5639-VW2), IBM has introduced two new [Solution Offering Identifiers]:
5465-028 Standard SAN Volume Controller
5465-029 Entry Edition SAN Volume Controller
The latter is designed for smaller deployments; it supports only a single SVC node-pair managing up to
150 disk drives, and is available in Raven Black or Flamingo Pink.
EXN3000 and EXP5060 Expansion Drawers
IBM offers the [EXN3000 for the IBM N series]. These expansion drawers can pack 24 drives in a 4U enclosure. The drives can be either all-SAS or all-SATA, in 300GB, 450GB, 500GB and 1TB capacities.
The [EXP5060 for the IBM DS5000 series] is a high-density expansion drawer that can pack up to 60 drives into a 4U enclosure. A DS5100 or DS5300
can handle up to eight of these expansion drawers, for a total of 480 drives.
Pre-installed with Tivoli Storage Productivity Center Basic Edition. Basic Edition can be upgraded with license keys to the Data, Disk and Standard Editions, extending reporting and management to XIV, N series, and non-IBM disk systems.
Pre-installed with Tivoli Key Lifecycle Manager (TKLM). This can be used to manage the encryption-capable Full Disk Encryption (FDE) disk drives in the DS8000 and DS5000, as well as the LTO and TS1100 series tape drives.
IBM Tivoli Storage FlashCopy Manager v2.1
The [IBM Tivoli Storage FlashCopy Manager V2.1] replaces two products with one. IBM used
to offer IBM Tivoli Storage Manager for Copy Services (TSM for CS) that protected Windows application data, and IBM Tivoli Storage Manager for Advanced Copy Services (TSM for ACS) that protected AIX application data.
The new product has some excellent advantages. FlashCopy Manager offers application-aware backup of LUNs containing SAP, Oracle, DB2, SQL Server and Microsoft Exchange data. It can support IBM DS8000, SVC and XIV point-in-time copy functions, as well as the Volume Shadow Copy Services (VSS) interfaces of the IBM DS5000, DS4000 and DS3000 series disk systems. It is priced by the number of TB you copy, not by the speed or number of CPU processors inside the server.
Don't let the name fool you. IBM FlashCopy Manager does not require that you use Tivoli Storage Manager (TSM) as your backup product. You can run IBM FlashCopy Manager on its own, and it will manage your FlashCopy target versions on disk, and these can be backed up to tape or another disk using any backup product. However, if you are lucky enough to also be using TSM, then there is optional integration that allows TSM to manage the target copies, move them to tape, inventory them in its DB2 database, and provide complete reporting.
Yup, that's a lot to announce in one day. And this was just the disk-related portion of the launch!
Continuing my week in Chicago for the IBM Storage Symposium 2008, we had sessions that focused on individual products. IBM System Storage SAN Volume Controller (SVC) was a popular topic.
SVC - Everything you wanted to know, but were afraid to ask!
Bill Wiegand, IBM ATS, who has been working with SAN Volume Controller since it was first introduced in 2003, answered some frequently asked questions about IBM System Storage SAN Volume Controller.
Do you have to upgrade all of your HBAs, switches and disk arrays to the recommended firmware levels before upgrading SVC? No. These are recommended levels, but not required. If you do plan to update firmware levels, focus on the host end first, switches next, and disk arrays last.
How do we request special support for stuff not yet listed on the Interop Matrix?
Submit an RPQ/SCORE, same as for any other IBM hardware.
How do we sign up for SVC hints and tips? Go to the IBM
[SVC Support Site] and select the "My Notifications" under the "Stay Informed" box on the right panel.
When we call IBM for SVC support, do we select "Hardware" or "Software"?
While the SVC is a piece of hardware, there are very few mechanical parts involved. Unless there are sparks,
smoke, or front bezel buttons dangling from springs, select "Software". Most of the questions are
related to the software components of SVC.
When we have SVC virtualizing non-IBM disk arrays, who should we call first?
IBM has world-renowned service, with some of IT's smartest people working the queues. All of the major storage vendors play nice
as part of the [TSAnet Agreement] when a mutual customer is impacted.
When in doubt, call IBM first, and if necessary, IBM will contact other vendors on your behalf to resolve.
What is the difference between livedump and a Full System Dump?
Most problems can be resolved with a livedump. While not complete information, it is generally enough,
and is completely non-disruptive. Other times, the full state of the machine is required, so a Full System Dump
is requested. This involves rebooting one of the two nodes, so virtual disks may temporarily run slower on that node.
What does "svc_snap -c" do?The "svc_snap" command on the CLI generates a snap file, which includes the cluster error log and trace files from all nodes. The "-c" parameter includes the configuration and virtual-to-physical mapping that can be useful for
disaster recovery and problem determination.
I just sent IBM a check to upgrade my TB-based license on my SVC, how long should I wait for IBM to send me a software license key?
IBM trusts its clients. No software license key will be sent. Once the check clears, you are good to go.
During migration from old disk arrays to new disk arrays, I will temporarily have 79TB more disk under SVC management, do I need to get a temporary TB-based license upgrade during the brief migration period?
Nope. Again, we trust you. However, if you are concerned about this at all, contact IBM and they will print out
a nice "Conformance Letter" in case you need to show your boss.
How should I maintain my Windows-based SVC Master Console or SSPC server?
Treat this like any other Windows-based server in your shop, install Microsoft-recommended Windows updates,
run Anti-virus scans, and so on.
Where can I find useful "How To" information on SVC?
Specify "SAN Volume Controller" in the search field of the
[IBM Redbooks vast library of helpful books.
I just added more managed disks to my managed disk group (MDG), can I get help writing a script to redistribute the extents to improve wide-striping performance?
Yes, IBM has scripting tools available for download on
[AlphaWorks]. For example, svctools will take
the output of the "lsinfo" command, and generate the appropriate SVC CLI to re-migrate the disks around to optimize
performance. Of course, if you prefer, you can use IBM Tivoli Storage Productivity Center instead for a more
Any rules of thumb for sizing SVC deployments?
IBM's Disk Magic tool includes support for SVC deployments. Plan for 250 IOPS/TB for light workloads,
500 IOPS/TB for average workloads, and 750 IOPS/TB for heavy workloads.
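To make the rule of thumb concrete, here is a minimal sketch (the IOPS/TB figures are from the answer above; the helper function is hypothetical, not part of Disk Magic):

```python
# Rule-of-thumb IOPS per TB for SVC sizing, per the answer above.
IOPS_PER_TB = {"light": 250, "average": 500, "heavy": 750}

def estimate_iops(capacity_tb: float, workload: str = "average") -> float:
    """Estimate the IOPS an SVC deployment should be sized to deliver."""
    return capacity_tb * IOPS_PER_TB[workload]

print(estimate_iops(40, "heavy"))    # a 40TB heavy workload -> 30,000 IOPS
```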
Can I migrate virtual disks from one managed disk group (MDG) to another with a different extent size?
Yes, the new Vdisk Mirroring capability can be used to do this. Create the mirror for your Vdisk between the
two MDGs, wait for the copy to complete, and then split the mirror.
Can I add or replace SVC nodes non-disruptively? Absolutely, see the Technotes
[SVC Node Replacement] page.
Can I really order an SVC EE in Flamingo Pink? Yes. While my blog post that started all
this [Pink It and Shrink It] was initially just some Photoshop humor, the IBM product manager for SVC accepted this color choice as an RPQ option.
The default color remains Raven Black.
Well, we had another successful event in Second Life today.
Unlike our April 26 launch of our System Storage products for IBM Business Partners only, this time we decided to use a "Meet the Storage Experts" Q&A Panel format, and open up registration to everyone. The subject matter experts sat at the front of the room on four stools. We had six rows of chairs arranged semi-circularly.
Shown above, from left to right, are the avatars of our four experts:
IBM System Storage N series, focusing on recent N3000 disk system announcements
Harold Pike (holding the microphone while speaking)
IBM System Storage DS3000 and DS4000 series, focusing on recent DS3000 disk system announcements
IBM System Storage TS series, focusing on recent TS2230, TS3400 and TS7700 tape system announcements
IBM storage networking, focusing on recent IBM SAN256B director blade announcements
While Eric was a veteran Second Lifer, having presented at our April event, the other three were trained on how to raise their hand, speak into the microphone, sit on the stool, and so on. I want to thank all of our experts for putting in this effort!
The event was produced by Katrina H Smith. She did a great job, and made sure we were on top of all the issues and tasks required to get the job done. Running a Second Life event is every bit as hard as running a real face-to-face event. We had several meetings to discuss venue details, placement of chairs, placement of product demos, audio/video recording, wall decorations, tee-shirt and coffee mug design, logistics, and so on.
I acted as moderator/emcee for the event. That is my back in the picture above. The process was simple, modeled after the "Birds of a Feather" sessions at events like SHARE and the IBM Storage and Storage Networking Symposium. We threw out a list of topics the experts would cover, and people in the audience would "raise their left hand". I, as the moderator, would then walk over to each person, and hold out the microphone for them to ask the question. I would then repeat the question and ask the appropriate expert to provide an answer. We defined gestures on how to "raise hand" and "put hand down" that we gave to each registered participant.
We had four dedicated "camera-avatars" in world to capture both video and screenshots. Our video editors are now working to edit "highlight videos" that we can use at future events, for training materials, and for our internal "BlueTube" online video system.
The room was filled with examples of each of our products, made into 3D objects that were dimensionally correct, and "textured" with photographs of the actual products. If you click on an object, you get a "notecard" that provides more information. Special thanks to Scott Bissmeyer for making all of these objects for us.
We made posters of each expert and placed them in all four corners of the room. On the bottom of each coffee mug was a picture of one of the experts, and if you walked under each of the posters, you were "dispensed" a coffee mug matching the expert shown in the poster. Participants could "Collect all Four!" When you bring the coffee mug up to take a sip, the picture on the bottom of the mug is exposed for all to see. And as a final give-away to the audience, we made a variety of event tee-shirts and polo-shirts.
At the end of the session, we asked everyone to click on the "Survey" kiosk near the exit door. We asked six simple questions using SurveyMonkey.com that took only a few minutes to complete. We found asking questions immediately at the end of the event was the best way to capture this feedback.
From a "Green" perspective, we had people registered from the following countries: US, India, Mexico,Australia, United Kingdom, Brazil, Germany, Argentina, Chile, China, Canada, and Venezuela. Second Lifeallows all these people who probably could not travel, or could not afford the time and expense to travel,to participate in a simulated face-to-face meeting without energy consumption of traditional travel methods.
More importantly, we got several leads for business. People often ask, "Yes, but is there any business associated with this?" This time, there was: based on the answers to the survey questions, several avatars asked for a real sales call to follow up on the products and offerings that were discussed.
With such a great success, we have already scheduled our next Second Life event, November 8. Mark your calendars! I'll post more details on the registration process of the November event when available.
Every year, I teach hundreds of sellers how to sell IBM storage products. I have been doing this since the late 1990s, and it is one task that has carried forward from one job to another as I transitioned through various roles from development, to marketing, to consulting.
This week, I am in the city of [Taipei] to teach a Top Gun sales class, part of IBM's [Sales Training] curriculum. This is only my second time here on the island of Taiwan.
As you can see from this photo, Taipei is a large city with just row after row of buildings. The metropolitan area has about seven million people, and I saw lots of construction for more on my ride in from the airport.
The student body consists of IBM Business Partners and field sales reps eager to learn how to become better sellers. Typically, some of the students might have just been hired on, just finished IBM Sales School, a few might have transferred from selling other product lines, while others are established storage sellers looking for a refresher on the latest solutions and technologies.
I am part of a teaching team of seven instructors from different countries. Here is what the week entails for me:
Monday - I will present "Selling Scale-Out NAS Solutions" that covers the IBM SONAS appliance and gateway configurations, and be part of a panel discussion on Disk with several other experts.
Tuesday - I have two topics, "Selling Disk Virtualization Solutions" and "Selling Unified Storage Solutions", which cover the IBM SAN Volume Controller (SVC), Storwize V7000 and Storwize V7000 Unified products.
Wednesday - I will explain how to position and sell IBM products against the competition.
Thursday - I will present "Selling Infrastructure Management Solutions" and "Selling Unified Recovery Management Solutions", which focus on the IBM Tivoli Storage portfolio, including Tivoli Storage Productivity Center, Tivoli Storage Manager (TSM), and Tivoli Storage FlashCopy Manager (FCM). The day ends with the dreaded "Final Exam".
Friday - The students will present their "Team Value Workshop" presentations, and the class concludes with a formal graduation ceremony for the subset of students who pass. A few outstanding students will be honored with "Top Gun" status.
These are the solution areas I present most often as a consultant at the IBM Executive Briefing Center in Tucson, so I can provide real-life stories of different client situations to help illustrate my examples.
The weather here in Taipei calls for rain every day! I was able to take this photo on Sunday morning while it was still nice and clear, but later in the afternoon, we had quite the downpour. I am glad I brought my raincoat!
With all the announcements we had in June, it is easy for some of the more subtle enhancements to get overlooked. While I was in Orlando for the IBM Edge conference, I was able to blog about some of the key featured announcements. When I got back to Tucson, I blogged about [More IBM Storage Announcements]. For IBM's Scale-Out Network Attached Storage (SONAS), I had simply:
"SONAS v1.3.2 adds support for management by the newly announced IBM Tivoli Storage Productivity Center v5.1 release. Also, IBM now officially supports Gateway configurations that have the storage nodes connected to XIV or Storwize V7000 disk systems. These gateway configurations offer new flexible choices and options for our ever-expanding set of clients."
In my defense, IBM numbers its software releases with version.release.modification, so 1.3.2 is Version 1, Release 3, Modification 2. Generally, modification announcements don't get much attention. The big announcement for v1.3.0 of SONAS happened last October; see my blog post [October 2011 Announcements - Part I] or
the nice summary post [IBM Scale-out Network Attached Storage 1.3.0] from fellow blogger Roger Luethy.
Here is a diagram showing the three configurations of SONAS.
I have covered the SONAS Appliance model in depth in previous blogs, with options for fast and slow disk speeds, choice of RAID protection levels, a collection of enterprise-class software features provided at no additional charge, and interfaces to support a variety of third party backup and anti-virus checking software.
The basics haven't changed. The SONAS appliance consists of 2 to 32 interface nodes, 2 to 60 storage nodes, and up to 7,200 disk drives. The maximum configuration takes up 17 frames and holds 21.6PB of raw disk capacity, which is about 17PB of usable space when RAID6 is configured. An interface node has one or two hex-core processors with up to 144GB of RAM to offer up to 3.5GB/sec performance each. This makes IBM SONAS the fastest performing and most scalable disk system in IBM's System Storage product line.
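The capacity figures check out with simple arithmetic. A minimal sketch, assuming 3TB drives and 8+2P RAID6 arrays (the drive size follows from 21.6PB over 7,200 drives; the 8+2P layout is my assumption, not something stated above):

```python
drives = 7200
tb_per_drive = 3                          # 21.6PB / 7,200 drives
raw_pb = drives * tb_per_drive / 1000     # 21.6 PB raw

# Assuming 8+2P RAID6 arrays, 8 of every 10 drives hold data:
usable_pb = raw_pb * 8 / 10               # ~17.3 PB, "about 17PB usable"

print(raw_pb, usable_pb)
```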
I thought I would go a bit deeper on the gateway models. These models support up to ten storage nodes, organized in pairs. The key difference is that instead of internal disk controllers, the storage nodes connect to external disk systems. There is enough space in the base SONAS rack to hold up to six interface nodes, or you can add a second rack if you need more interface nodes for increased performance.
SONAS with XIV gateway
XIV offers a clever approach to storage that allows for incredibly fast access to data on relatively slow 7200 RPM drives. By scattering data across all drives and taking advantage of parallel processing, rebuild times for a failed 3TB drive are less than 75 minutes. Compare that to typical rebuild times for 3TB drives that could take as much as 9-10 hours under active I/O loads!
In this configuration, each pair of storage nodes connects to external SAN Fabric switches that then connect to one or two XIV storage systems. How simple is that? These can be the original XIV systems that support 1TB and 2TB drives, or the new XIV Gen3 systems that support 400GB Solid-state drives (SSD) and 3TB spinning disk drives. In both cases, you can acquire additional storage capacity in increments as small as 12 drives at a time (one XIV module holds 12 drives).
The maximum configuration of ten XIV boxes could hold 1,800 drives. At 3TB per drive, that works out to about 2.4PB of usable capacity.
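Working that out (the module count per XIV rack and the per-system usable figure are my recollection of the XIV Gen3 spec, not something from the announcement):

```python
systems = 10
drives_per_xiv = 180                  # 15 modules x 12 drives per module
drives = systems * drives_per_xiv     # 1,800 drives
raw_pb = drives * 3 / 1000            # 5.4 PB raw at 3TB per drive

# XIV mirrors every chunk of data and reserves spare space, so usable
# capacity is a bit under half of raw (~243TB per 3TB-drive system):
usable_pb = systems * 243 / 1000      # ~2.4 PB usable

print(drives, raw_pb, usable_pb)
```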
The SONAS with XIV gateway does not require the XIV devices to be dedicated for SONAS purposes. Rather, you can assign some XIV storage space for the SONAS, and the rest is available for other servers. In this manner, SONAS just looks like another set of Linux-based servers to the XIV storage system. This in effect gives you "Unified Storage", with a full complement of NAS protocols from the SONAS side (NFS, CIFS, FTP, HTTPS, SCP) as well as block-based protocols directly from the XIV (FCP, iSCSI).
SONAS with Storwize V7000 gateway
The other gateway offering is the SONAS with Storwize V7000. Like the SONAS with XIV gateway model, you connect a pair of SONAS storage nodes to 1 or 2 Storwize V7000 disk systems. However, you do not need a SAN Fabric switch in between. You can instead connect the SONAS storage nodes directly to the Storwize V7000 control enclosures.
To acquire additional storage capacity, you can purchase a single drive at a time. That's right. Not 12 drives, or 60 drives, at a time, but one at a time. The Storwize V7000 supports a wide range of SSD, SAS and NL-SAS drives at different sizes, speeds and capacities. The drives can be configured into various RAID protection levels: RAID 0, 1, 3, 5, 6 and 10.
Each Storwize V7000 control enclosure can have up to nine expansion drawers. If you choose the 2.5-inch 24-bay models, you can have up to 480 drives per storage node pair, for a total of 2,400 drives. If you choose the 3.5-inch 12-bay models, you can have up to 240 drives per node pair, 1,200 drives total. At 3TB per drive, this could be 3.6PB of raw capacity. The usable PB would depend on which RAID level you selected. Of course, you don't have to limit yourself all to one size or the other. Feel free to mix 2.5-inch and 3.5-inch drawers to provide different storage pool capabilities.
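Here is the drive-count arithmetic behind those maximums (all numbers come from the paragraph above):

```python
enclosures_per_v7000 = 1 + 9            # control enclosure + 9 expansions
v7000s_per_node_pair = 2
node_pairs = 5                          # ten storage nodes, in pairs

per_pair_24bay = enclosures_per_v7000 * 24 * v7000s_per_node_pair   # 480
total_24bay = per_pair_24bay * node_pairs                           # 2,400

per_pair_12bay = enclosures_per_v7000 * 12 * v7000s_per_node_pair   # 240
total_12bay = per_pair_12bay * node_pairs                           # 1,200
raw_pb_12bay = total_12bay * 3 / 1000                               # 3.6 PB

print(total_24bay, total_12bay, raw_pb_12bay)
```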
All three SONAS configurations support Active Cloud Engine. This is a collection of features that differentiates SONAS from the other scale-out NAS wannabes in the marketplace:
Policy-driven Data Placement -- Different files can be directed to different storage pools. You no longer have to associate certain file systems to certain storage technologies.
High-speed Scan Engine -- SONAS can scan 10 million files per minute, per node. These scans can be used to drive data migration, backups, expirations, or replications, for example. It is over 100 times faster than traditional walk-the-directory-tree approaches employed by other NAS solutions.
Policy-driven Migration -- You can migrate files from one storage pool to another, based on age, days since last reference, size, and other criteria. The files can be moved from disk to disk, or moved out of SONAS and stored on external media, such as tape or a virtual tape library. A lot of data stored on NAS systems is dormant, with little or no likelihood of being looked at again. Why waste money keeping that kind of data on expensive disk? With SONAS, moving those files to tape can save lots of money. The files are stubbed in the SONAS file system, so that an access request to a file will automatically trigger a recall to fetch the data from tape back to the SONAS system.
Policy-driven Expiration -- SONAS can help you keep your system clean, by helping you decide which files should be deleted. This is especially useful for things like logs and traces that tend to just hang around until someone deletes them manually.
WAN Caching -- This allows one SONAS to act as a "Cloud Storage Gateway" for another SONAS at a remote location connected by Wide Area Network (WAN). Let's say your main data center has a large SONAS repository of files, and a small branch office has a smaller SONAS. This allows all locations to have a "Global" view of all the interconnected SONAS systems, with a high-speed user experience for local LAN-based access to the most recent and frequently used files.
If you want to learn more, see the [IBM SONAS landing page]. Next week, I will be across the Pacific Ocean in [Taipei], to teach IBM Top Gun class to sales reps and IBM Business Partners. "Selling SONAS" will be one of the topics I will be covering!
The weather has warmed up here in Tucson so I started my Spring Cleaning early this year and unearthed from my garage a [Bankers Box] full of floppy diskettes.
IBM invented the floppy disk back in 1971, and continued to make improvements and enhancements through the 1980s and 1990s. It will be one of the many inventions celebrated as part of IBM's Centennial (100-year) anniversary. Here is an example [T-shirt].
IBM needed a way to send out small updates and patches for the microcode of devices out in client locations. IBM had drives that could write information, and sent out "read-only" drives to the customer locations to receive these updates. These were flexible plastic circles with a magnetic coating, placed inside a square paper sleeve. Imagine a floppy disk the size of a piece of standard paper. The 8-inch floppy fit conveniently in a manila envelope, sendable by standard mail, and could hold nearly 80KB of data.
I've been using floppies for the past thirty years. Here are some of my fondest memories:
While still in high school, my friend Franz Kurath and I formed "Pearson Kurath Systems", a software development firm. We wrote computer programs to run on UNIX and Personal Computers for small businesses here in Tucson. Whenever we developed a clever piece of code, a subroutine or procedure, we would save it on a floppy disk and re-use it for our next project. We wrote in the BASIC language, and our databases were simple Comma-Separated Value (CSV) flat files.
The 5.25-inch floppies we used could hold 360KB, and were flexible like the 8-inch models. Later versions of these 5.25-inch floppies would be able to hold as much as 1.2MB of data. We would convert single-sided floppies into double-sided ones by cutting out a notch in the outer sleeve. Covering up the notches would mark them as read-only.
The 3.5-inch floppies were introduced with a hard plastic shell, with the selling point that you could slap on a mailing label and postage and send one "as is" without the need for a separate envelope. These new 3.5-inch floppies started out holding 720KB double density, and later "HD" high density versions could hold 1.44MB of data. The term "diskette" was used to associate these new floppies with [hard-shelled tape cassettes]. Sliding a plastic tab would allow floppies to be marked "read-only". IBM has the patent on this clever invention.
Continuing our computer programming business in college, Franz and I took out a bank loan to buy our first Personal Computer, for over $5,000 USD. Until then, we had to use equipment belonging to each client. The banks we went to didn't understand why we needed a computer, and suggested we just track our expenses on traditional green-and-white ledger paper. Back then, personal computers were for balancing your checkbook, playing games and organizing your collection of cooking recipes. But for us, it was a production machine. A computer with both 5.25-inch and 3.5-inch drives could copy files from one format to another as needed. The boost in productivity paid for itself within months.
Apple launched its Macintosh computer in 1984, with a built-in 3.5-inch disk drive as standard equipment. Here is a YouTube video of an [astronaut ejecting a floppy disk] from an Apple computer in space.
In my senior year at the University of Arizona, my roommate Dave had borrowed my backpack to hold his lunch for a bike ride. He thought he had taken everything out, but forgot to remove my 3.5-inch floppy diskette containing files for my senior project. By the time he got back, the diskette was covered in banana pulp. I was able to rescue my data by cracking open the plastic outer shell, cleaning the flexible magnetic media in soapy water, placing it into the plastic shell of a second diskette, and then copying the data off to a third diskette.
After graduating from college, Franz and I went our separate ways. I went to work for IBM, and Franz went to work for [Chiat/Day], the advertising agency famous for the 1984 Macintosh commercial. We still keep in touch through Facebook.
At IBM, I was given a 3270 terminal to do my job, and would not be assigned a personal computer until years later. Once I had a personal computer at home and at work, the floppy diskette became my "briefcase". I could download a file or document at work, take it home, work on it until the wee hours of the morning, and then come back the next morning with the updated effort.
To help prepare me for client visits and public speaking at conferences, IBM loaned me out to local schools to teach. This included teaching Computer Science 101 at Pima Community College. When asked by a student whether to use "disc" or "disk", I wrote a big letter "C" on the left side of the chalkboard, and a big letter "K" on the right side. If it is round, I told the students while pointing at the letter "C", like a CD-ROM or DVD, use "disc". If it has corners, pointing to corners of the letter "K", like a floppy diskette or hard disk drive, use "disk".
On one of my business trips to visit a client, we discovered the client had experienced a problem that we had just recently fixed. Normally, this would have meant cutting a Program Temporary Fix (PTF) to a 3480 tape cartridge at an IBM facility, and sending it to the client by mail. Unwilling to wait, I offered to download the PTF onto a floppy diskette on my laptop, upload it from a PC connected to their systems, and apply it there. This involved a bit of REXX programming to deal with the differences between ASCII and EBCDIC character sets, but it worked, and a few hours later they were able to confirm the fix worked.
In 1998, Apple would signal the beginning of the end of the floppy disk era, announcing that its latest "iMac" would not come with an internal built-in floppy drive. David Adams has a great article on this titled [The iMac and the Floppy Drive: A Conspiracy Theory]. You can get external floppy drives that connect via USB, so not having an internal drive is no longer a big deal.
While teaching a Top Gun class to a mix of software and hardware sales reps, one of the students asked what a "U" was. He had noticed "2U" and "3U" next to various products and wondered what that was referring to. The "U" represents the [standard unit of measure for height of IT equipment in standard racks]. To help them visualize, I explained that a 5.25-inch floppy disk was "3U" in size, and a 3.5-inch floppy diskette was "2U". Thus, a "U" is 1.75 inches, the thinnest dimension on a two-by-four piece of lumber. Servers that were only 1U tall would be referred to as "pizza boxes" for having similar dimensions.
Every year, right around November or so, my friends and family bring me their old computers for me to wipe clean. Either I would re-load them with the latest Ubuntu Linux so that their kids could use it for homework, or I would donate it to charity. Last November, I got a computer that could not boot from a CD-ROM, forcing me to build a bootable floppy. This gave me a chance to check out the various 1-disk and 2-disk versions of Linux and other rescue disks. I also have a 3-disk set of floppies for booting OS/2 in command line mode.
So while this unexpected box of nostalgia derailed my efforts to clean out my garage this weekend, it did inspire me to try to get some of the old files off them and onto my PC hard drive. I have already retrieved some low-res photographs, some emails I sent out, and trip reports I wrote. While floppy diskettes were notorious for being unreliable, and this box of floppies has been in the heat and cold for many Arizonan summers and winters, I am amazed that I was able to read the data off most of them so far, all the way back to data written in 1989. While the data is readable, in most cases I can't render it into useful information. This brings up a few valuable lessons:
Backups are not Archives
Some of the files are in proprietary formats, such as my backups from TurboTax software. I would need a PC running the correct level of the Windows operating system, and that particular software, just to restore the data. TurboTax shipped new software every year, and I don't know how forward- or backward-compatible each new release was.
Another set of floppies are labeled as being in "FDBACK" format. I have no idea what these are. Each floppy has just two files, "backup.001" and "control.001", for example.
Backups are intended solely to protect against unexpected loss from broken hardware or corrupted data. If you plan to keep data as archives for long-term retention, use archive formats that will last a long time, so that you can make sense of them later.
Operating System Compatibility
Windows 7 and all of my favorite flavors of Linux are able to recognize the standard "FAT" file system that nearly all of my floppies are written in. Sadly, I have some files that were compressed under the OS/2 operating system using software called "Stacker". I may have to stand up an OS/2 machine just to check out what is actually on those floppies.
You can't judge a book by its cover
Floppies were a convenient form of data interchange. Sometimes, I reused commercially-labeled floppies to hold personal files. So, just because a floppy says "America On-Line (AOL) version 2.5 Installation", I can't just toss it away. It might actually contain something else entirely. This means I need to mount each floppy to check on its actual contents.
So what will I do with the floppies I can't read, can't write, and can't format? I think I will convert them into a [retro set of coasters], to protect my new living room furniture from hot and cold beverages.
Continuing my week in Washington DC for the annual [2010 System Storage Technical University], I presented a session on Storage for the Green Data Center, and attended a System x session on Greening the Data Center. Since they were related, I thought I would cover both in this post.
Storage for the Green Data Center
I presented this topic in four general categories:
Drivers and Metrics - I explained the three key drivers for consuming less energy, and the two key metrics: Power Usage Effectiveness (PUE) and Data Center Infrastructure Efficiency (DCiE); see the sketch after this list.
Storage Technologies - I compared the four key storage media types: Solid State Drives (SSD), high-speed (15K RPM) FC and SAS hard disk, slower (7200 RPM) SATA disk, and tape. I had comparison slides that showed how IBM disk was more energy efficient than competition, for example DS8700 consumes less energy than EMC Symmetrix when compared with the exact same number and type of physical drives. Likewise, IBM LTO-5 and TS1130 tape drives consume less energy than comparable HP or Oracle/Sun tape drives.
Integrated Systems - IBM combines multiple storage tiers in a set of integrated systems managed by smart software. For example, the IBM DS8700 offers [Easy Tier] for smart data placement and movement across Solid-State Drives and spinning disk. I also covered several blended disk-and-tape solutions, such as the Information Archive and SONAS.
Actions and Next Steps - I wrapped up the talk with actions that data center managers can take to help them be more energy efficient, from deploying the IBM Rear Door Heat Exchanger to improving the management of their data.
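For readers new to the two metrics, here is a minimal sketch of how they relate (these are the standard definitions; the sample numbers are made up for illustration):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """DCiE: the reciprocal of PUE, expressed as a percentage."""
    return 100 * it_equipment_kw / total_facility_kw

# A facility drawing 1,300 kW to power 1,000 kW of IT gear:
print(pue(1300, 1000))     # 1.3, the same rating as the Boulder site below
print(dcie(1300, 1000))    # ~77 percent
```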
Greening of the Data Center
Janet Beaver, IBM Senior Manager of Americas Group Infrastructure and Facilities, presented on IBM's success in becoming more energy efficient. The price of electricity has gone up 10 percent per year, and in some locations, 30 percent. For every 1 Watt used by IT equipment, there is an additional 27 Watts for power, cooling and other uses to keep the IT equipment comfortable. At IBM, data centers represent only 6 percent of total floor space, but 45 percent of all energy consumption. Janet covered two specific data centers, Boulder and Raleigh.
At Boulder, IBM keeps a 48-hour reserve of gasoline (to generate electricity in case of an outage from the power company) and 48 hours of chilled water. Many power outages last less than 10 minutes, which can easily be handled by the UPS systems. At least 25 percent of the Computer Room Air Conditioners (CRAC) are also on UPS, so that there is some cooling during those minutes, within the ASHRAE guidelines of 72-80 degrees Fahrenheit. Since gasoline gets stale, IBM runs the generators once a month, which serves as a monthly test of the system, and clears out the lines to make room for fresh fuel.
The IBM Boulder data center is the largest in the company: 300,000 square feet (the equivalent of five football fields)! Because of its location in Colorado, IBM enjoys "free cooling" using outside air 63 percent of the year, resulting in a PUE rating of 1.3. Electricity is only 4.5 US cents per kWh. The center also uses 1 million kWh per year of wind energy.
The Raleigh data center is only 100,000 square feet, with a PUE rating of 1.4. The Raleigh area enjoys 44 percent "free cooling", and electricity costs 5.7 US cents per kWh. The Leadership in Energy and Environmental Design [LEED] certification has been updated to cover data centers. The IBM Boulder data center has achieved LEED Silver certification, and the IBM Raleigh data center has LEED Gold certification.
Free cooling, electricity costs, and disaster susceptibility are just three of the 25 criteria IBM uses to locate its data centers. In addition to the 7 data centers it manages for its own operations, and 5 data centers for web hosting, IBM manages over 400 data centers for other clients.
It seems that Green IT initiatives are more important to the storage-oriented attendees than the x86-oriented folks. I suspect that is because many System x servers are deployed in small and medium businesses that do not have data centers, per se.
The technology industry is full of trade-offs. Take, for example, solar cells that convert sunlight to electricity. Every hour, more energy hits the Earth in the form of sunlight than the entire planet consumes in a year. The general trade-off is between energy conversion efficiency and abundance of materials:
Get 9-11 percent efficiency using rare materials like indium (In), gallium (Ga) or cadmium (Cd).
Get only 6.7 percent efficiency using abundant materials like copper (Cu), tin (Sn), zinc (Zn), sulfur (S), and selenium (Se).
A second trade-off is exemplified by EMC's recent GeoProtect announcement. This appears similar to the geographic dispersal method introduced by a company called [CleverSafe]. The trade-off is between the amount of space to store one or more copies of data and the protection of data in the event of disaster. Here's an excerpt from fellow blogger Chuck Hollis (EMC) titled ["Cloud Storage Evolves"]:
"Imagine a average-sized Atmos network of 9 nodes, all in different time zones around the world. And imagine that we were using, say, a 6+3 protection scheme.
The implication is clear: any 3 nodes could be completely lost: failed, destroyed, seized by the government, etc.
-- and the information could be completely recovered from the surviving nodes."
For organizations worried about their information falling into the wrong hands (whether criminal or government sponsored!), any subset of the nodes would yield nothing of value -- not only would the information be presumably encrypted, but only a few slices of a far bigger picture would be exposed.
Seized by the government? Falling into the wrong hands? Is EMC positioning Atmos as "Storage for Terrorists"? I can certainly appreciate the value of being able to protect 6PB of data with only 9PB of storage capacity, instead of keeping two copies of 6PB each. The trade-off, though, is that you will be accessing the majority of your data across your intranet, which could impact performance. But, if you are in an illicit or illegal business that could have a third of your facilities "seized by the government", then perhaps you shouldn't house your data centers there in the first place. Having two copies of 6PB each, in two "friendly nations", might make more sense.
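Putting numbers on that trade-off (a minimal sketch; the 6+3 scheme and the 6PB figure come from Chuck's example):

```python
data_pb = 6
data_slices, parity_slices = 6, 3     # "6+3": any 3 of 9 nodes can be lost

dispersal_pb = data_pb * (data_slices + parity_slices) / data_slices  # 9 PB
two_copies_pb = data_pb * 2                                           # 12 PB

print(dispersal_pb, two_copies_pb)    # 1.5x vs. 2.0x the original data
```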
(In reality, companies often keep way more than just two copies of data. It is not unheard of for companies to keep three to five copies scattered across two or three locations. Facebook keeps SIX copies of photographs you upload to their website.)
ChuckH argues that the governments that seize the three nodes won't have a complete copy of the data. However, merely having pieces of data is enough for governments to capture terrorists. Even if the striping is done at the smallest 512-byte block level, those 512 bytes of data might contain names, phone numbers, email addresses, credit cards or social security numbers. Hackers and computer forensics professionals take advantage of this.
You might ask yourself, "Why not just encrypt the data instead?" That brings me to the third trade-off: protection versus application performance. Over the past 30 years, companies had a choice: they could encrypt and decrypt the data as needed, using server CPU cycles, but this would slow down application processing. Every time you wanted to read or update a database record, more cycles would be consumed. This forced companies to be very selective about what data they encrypted: which columns or fields within a database, which email attachments, and which other documents or spreadsheets.
An initial attempt to address this was to introduce an outboard appliance between the server and the storage device. For example, the server would write to the appliance with data in the clear, and the appliance would encrypt the data and pass it along to the tape drive. When retrieving data, the appliance would read the encrypted data from tape, decrypt it, and pass the data in the clear back to the server. However, this had the unintended consequence of using 2x to 3x more tape cartridges. Why? Because encrypted data does not compress well, so tape drives with built-in compression capabilities would not be able to shrink the data down onto fewer tapes.
(I covered the importance of compressing data before encryption in my previous blog post
[Sock Sock Shoe Shoe].)
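The effect is easy to demonstrate: well-encrypted data is statistically indistinguishable from random bytes, and random bytes do not compress. A minimal sketch, using os.urandom as a stand-in for ciphertext:

```python
import os
import zlib

text = b"Inside System Storage, by Tony Pearson. " * 1000   # compressible

print(len(text), len(zlib.compress(text)))   # shrinks to a small fraction

ciphertext = os.urandom(len(text))           # stand-in for encrypted data
print(len(zlib.compress(ciphertext)))        # no smaller -- slightly larger
```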
As with the trade-off between energy efficiency and abundant materials, IBM eliminated this trade-off by offering compression and encryption on the tape drive itself, in the right order: compress first, then encrypt. This is standard 256-bit AES encryption implemented on a chip, able to process the data as it arrives at near line speed. So now, instead of having to choose between protecting your data and running your applications with acceptable performance, you can do both: encrypt all of your data without having to be selective. This approach has been extended over to disk drives, so that disk systems like the IBM System Storage DS8000 and DS5000 can support full-disk-encryption [FDE] drives.
Well, it's Tuesday again, and we have more IBM announcements.
XIV asynchronous mirror
For those not using XIV behind SAN Volume Controller, [XIV now offers native asynchronous mirroring] support to another XIV far, far away. Unlike other disk systems that are limited to two or three sites, an XIV can mirror to up to 15 other sites. The mirroring can be done at the individual volume level, or for a consistency group of multiple volumes. Each mirror pair can have its own recovery point objective (RPO). For example, a consistency group of mission-critical application data might be given an RPO of 30 seconds, while less important data might be given an RPO of 20 minutes. This allows the XIV to prioritize the packets it sends across the network.
As with XIV synchronous mirror, this new asynchronous mirror feature can send the data over either its
Fibre Channel ports (via FCIP) or its Ethernet ports.
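One way to reason about choosing those RPO values: everything still waiting to be replicated must be drainable across the link within the RPO window. A rough feasibility sketch (hypothetical numbers, not an XIV sizing tool):

```python
def max_backlog_mb(rpo_seconds: float, link_mb_per_sec: float) -> float:
    """The most un-replicated data an async mirror can tolerate while
    still honoring its recovery point objective."""
    return rpo_seconds * link_mb_per_sec

# On a shared 100 MB/s link, a 30-second RPO tolerates ~3GB of pending
# writes, while a 20-minute RPO tolerates ~120GB -- which is why XIV can
# give the mission-critical consistency group's packets priority.
print(max_backlog_mb(30, 100))        # 3,000 MB
print(max_backlog_mb(20 * 60, 100))   # 120,000 MB
```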
The IBM System Storage SAN384B and SAN768B directors now offer [two new blades!]
A 24-port FCoCEE blade where each port can handle 10Gb Convergence Enhanced Ethernet (CEE). CEE can be used to transmit Fibre Channel, TCP/IP, iSCSI and other Ethernet protocols. This connects directly to servers' converged network adapter (CNA) cards.
A 24-port mixed blade, with 12 FC ports (1Gbps, 2Gbps, 4Gbps, 8Gbps), 10 Ethernet ports (1GbE) and 2 Ethernet ports (10GbE). This would connect to traditional server NIC, TOE and HBA cards as well as traditional NAS, iSCSI and FC based storage devices.
IBM also announced the IBM System Storage [SAN06B-R Fibre Channel router]. This has 16 FC ports (1Gbps up to 8Gbps) and six Ethernet ports (1GbE), with support for both FC routing as well as FCIP extended distance support.
With the holiday season coming up at the end of the year, now is a great time to ask Santa for a new shiny pair of XIV systems, and some extra networking gear to connect them.
I am proud to announce we have yet another IBM blogger for the storosphere. Rich Swain, from IBM's Research Triangle Park in Raleigh, North Carolina, will blog about
[News and Information on IBM’s N series].
Rich is a Field Technical Sales Specialist with deep-dive knowledge and experience.
He's already posted a dozen or so entries, to give you a feel for the level of technical detail he will provide.
Please welcome Rich by following his blog and posting comments on his posts.
Well, it's Wednesday! Normally, IBM makes its announcements on Tuesdays, but this week that landed on the 13th, and some people are superstitious, so we pushed it back to today. Fellow IBM Master Inventor Barry Whyte starts the first in a series of posts with: New SVC v5 CF8 node with native SSD support.
There are really two separate items being announced for the IBM System Storage SAN Volume Controller (SVC):
SVC v5 software
The software moves from a 32-bit kernel to a 64-bit kernel. Fortunately, IBM had the foresight back in 2005 to know that this would happen, so models 8F2, 8F4 and 8G4 can be upgraded to this new software level and gain new functionality. This is because these models have 64-bit capable processors. Those with six-year-old 4F2 nodes will continue to run on SVC 4.3.1, but should consider that it's about time to upgrade.
New 2145-CF8 model
The CF8 is based on the IBM System x 3550 M2. Each node can have up to 4 Solid-State Drives (SSD) that can be treated as SVC Managed Disk Groups. Virtual disks can be easily migrated from hard disk drives (HDD) over to SSD, processed, and then moved back to HDD. By treating the SSD as managed disks, rather than as an extension of the cache, we are able to support all of the features and functions in a seamless manner.
As Barry says, IBM has been working on this for quite a while, and based on initial responses, it looks to be quite successful in the market!
Well, it's Tuesday, and that means IBM announcements! Today's batch is bigger than usual, as there are a lot of Dynamic Infrastructure announcements throughout the company with a common theme: cloud computing and smart business systems that support the new way of doing things. Today, IBM announced its new "IBM Smart Archive" strategy, which integrates software, storage, servers and services into solutions that help meet the challenges of today and tomorrow. IBM has been spending the past few years working across its various divisions and acquisitions to ensure that our clients have complete end-to-end solutions.
IBM is introducing new "Smart Business Systems" that can be used on-premises for private-cloud configurations, as well as by cloud-computing companies to offer IT as a service.
IBM [Information Archive] is the first to be unveiled, a disk-only or blended disk-and-tape Information Infrastructure solution that offers a "unified storage" approach with amazing flexibility for dealing with various archive requirements:
For those with applications using the IBM Tivoli Storage Manager (TSM) or IBM System Storage Archive Manager (SSAM) API of the IBM System Storage DR550 data retention solution, the Information Archive will provide a direct migration, supporting this API for existing applications.
For those with IBM N series using SnapLock or the File System Gateway of the DR550, the Information Archive will support various NAS protocols, deployed in stages, including NFS, CIFS, HTTP and FTP access, with Non-Erasable, Non-Rewriteable (NENR) enforcement that is compatible with current IBM N series SnapLock usage.
For those using NAS devices with PACS applications to store X-rays and other medical images, the Information Archive will provide similar NAS protocol interfaces. Information Archive will support both read-only data such as X-rays, as well as read/write data such as Electronic Medical Records.
Information Archive is not just for compliance data that was previously sent to WORM optical media. Instead, it can handle all kinds of data, rewriteable data, read-only data, and data that needs to be locked down for tamper protection. It can handle structured databases, emails, videos and unstructured files, as well as objects stored through the SSAM API.
The Information Archive has all the server, storage and software integrated together into a single machine type/model number. It is based on IBM's General Parallel File System (GPFS) to provide incredible scalability, the same clustered file system used by many of the top 500 supercomputers. Initially, Information Archive will support up to 304TB raw capacity of disk and Petabytes of tape. You can read the [Spec Sheet] for other technical details.
For those who prefer a more "customized" approach, similar to IBM Scale-Out File Services (SoFS), IBM has [Smart Business Storage Cloud]. IBM Global Services can customize a solution that is best for you, using many of the same technologies. In fact, IBM Global Services announced a variety of new cloud-computing services to help enterprises determine the best approach.
In a related announcement, IBM announced [LotusLive iNotes], which you can think of as a "business-ready" version of Google's GoogleApps, Gmail and GoogleCalendar. IBM is focused on security and reliability but leaves out the advertising and data mining that people have been forced to tolerate from consumer-oriented Web 2.0-based solutions. IBM's clients that are already familiar with the on-premises version of Lotus Notes will have no trouble using LotusLive iNotes.
There was actually a lot more announced today, which I will try to get to in later posts.
Well, it's Tuesday again, and that means IBM announcements!
We've got a variety of storage-related items today, so here's my quick recap:
DS5020 and EXP520 disk systems
[IBM System Storage DS5020] provides the functional replacement for the DS4700 disk systems. It combines controllers and 16 drives in a compact 3U package. The EXP520 expansion drawer provides an additional 16 drives per 3U drawer. A DS5020 can support up to six additional EXP520 drawers, for a total of 112 drives per system. The DS5020 supports both 8 Gbps FC and 1GbE iSCSI host attachment.
New Remote Support Manager (DS-RSM model RS2)
The [IBM System Storage DS-RSM Model RS2] supports up to 50 disk systems, in any mix of DS3000, DS4000 and DS5000 series. It includes "call home" support, which is really "email home", sending error alerts to IBM if there are any problems. The RSM also allows IBM to dial in to perform diagnostics before arrival, reducing the time needed to resolve a problem. The model RS2 is a beefier model with more processing power than the prior generation RS1.
New Ethernet Switches
With the increased interest in the iSCSI protocol, and the upcoming Fibre Channel over Convergence Enhanced Ethernet (FCoCEE), IBM's re-entrance into the Ethernet switch market has drawn a lot of interest.
The [IBM Ethernet Switch r-series] offers 4-slot, 8-slot, 16-slot, and 32-slot models. Each slot can handle either 16 10GbE ports or 48 1GbE ports, which means up to 1,536 ports in the largest chassis.
The [c-series] now offers a 24-port model, with either 24 copper and 4 fiber optic ports, or 24 fiber optic ports. The "hybrid fiber" SFP fiber optic ports can handle either single-mode or multi-mode, eliminating the need to commit to one or the other and providing greater data center flexibility.
The [IBM Ethernet Switch B24X] offers 24 fiber optic ports (that can handle 10GbE or 1GbE) and 4 copper ports (10/100/1000 Mbps RJ45).
Storage Optimization and Integration Services
[IBM Storage Optimization and Integration Services] are available. IBM service consultants use IBM's own Storage Enterprise Resource Planner (SERP) software to evaluate your environment and provide recommendations on how to improve your information infrastructure. This can be especially helpful if you are looking at deploying server virtualization like VMware or Hyper-V. As people look towards deploying a dynamic infrastructure, these new offerings can be a good place to start.
Continuing my week in Chicago, for the IBM Storage Symposium 2008, I attended two presentations on XIV.
XIV Storage - Best Practices
Izhar Sharon, IBM Technical Sales Specialist for XIV, presented best practices for using XIV in various environments. He started out explaining the innovative XIV architecture: a SATA-based disk system from IBM can outperform FC-based disk systems from other vendors using massive parallelism. He used a sports analogy:
"The men's world record for running 800 meters was set in 1997 by Wilson Kipketer of Denmark in a time of 1:41.11.
However, if you have eight men running, 100 meters each, they will all cross the finish line in about 10 seconds."
Since XIV is already self-tuning, what kind of best practices are left to present? Izhar presented best practices for software, hosts, switches and storage virtualization products that attach to the XIV. Here are some quick points:
Use as many paths as possible.
IBM does not require you to purchase and install multipathing software as other competitors might. Instead, the XIV relies on the multipathing capabilities inherent to each operating system. For the multipathing preference, choose round-robin, which is now available on AIX and VMware vSphere 4.0, for example. Otherwise, fixed-path is preferred over most-recently-used (MRU).
Encourage parallel I/O requests.
XIV architecture does not subscribe to the outdated notion of a "global cache". Instead, the cache is distributed across the modules, to reduce performance bottlenecks. Each HBA on the XIV can handle about 1400 requests. If you have fewer than 1400 hosts attached to the XIV, you can further increase parallel I/O requests by specifying a large queue depth in the host bus adapter (HBA). An HBA queue depth of 64 is a good start. Additional settings might be required in the BIOS, operating system or application for multiple threads and processes.
For sequential workloads, select a host stripe size less than 1MB; for random workloads, select a host stripe size larger than 1MB. Set rr_min_io between ten (10) and the queue depth (typically 64); setting it to half of the queue depth is a good starting point (a back-of-the-envelope sketch follows below).
If you have long-running batch jobs, consider breaking them up into smaller steps and running them in parallel.
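Here is a minimal back-of-the-envelope sketch of these settings, assuming the roughly 1400 outstanding requests per XIV HBA mentioned above; treat it as an illustration of the rules of thumb, not an official sizing formula.

```python
# Rough sizing sketch based on the rules of thumb above (illustrative only).

XIV_PORT_QUEUE = 1400   # approximate outstanding requests one XIV HBA can handle

def suggested_queue_depth(hosts_per_port, ceiling=64):
    """Largest per-host queue depth that keeps a port under its budget."""
    return max(1, min(ceiling, XIV_PORT_QUEUE // hosts_per_port))

def suggested_rr_min_io(queue_depth):
    """Start at half the queue depth, staying within the 10..queue_depth band."""
    return min(queue_depth, max(10, queue_depth // 2))

qd = suggested_queue_depth(hosts_per_port=20)   # -> 64 (20 hosts leave headroom)
print(qd, suggested_rr_min_io(qd))              # -> 64 32
```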
Define fewer, larger LUNs
Generally, you no longer need to define many small LUNs, a practice that was often required on traditional disk systems. This means that you can now define just 1 or 2 LUNs per application, and greatly simplify management. If your application must have multiple LUNs in order to do multiple threads or concurrent I/O requests, then, by all means, define multiple LUNs.
Modern Database Management Systems (DBMS) like DB2 and Oracle already parallelize their I/O requests, so there is no need for host-based striping across many logical volumes. XIV already stripes the data for you. If you use Oracle Automatic Storage Management (ASM), use 8MB to 16MB extent sizes for optimal performance.
For those virtualizing XIV with SAN Volume Controller (SVC), define managed disks as 1632GB LUNs, in multiples of six LUNs per managed disk group (MDG), to balance across the six interface modules. Set the SVC extent size to 1GB.
XIV is ideal for VMware. Create big LUNs for your VMFS that you can access via FCP or iSCSI.
Organize data to simplify Snapshots.
You no longer need to separate logs from databases for performance reasons. However, for some backup products like IBM Tivoli Storage Manager (TSM) for Advanced Copy Services (ACS), you might want to keep them separate for snapshot reasons. Generally, putting all data for an application on one big LUN greatly simplifies administration and snapshot processing, without losing performance. If you define multiple LUNs for an application, simply put them into the same "consistency group" so that they are all snapshot together.
OS boot image disks can be snapshot before applying any patches, updates or application software, so that if there are any problems, you can reboot to the previous image.
Employ sizing tools to plan for capacity and performance.
The SAP Quick Sizer tool can be used for new SAP deployments, employing either the user-based or throughput-based sizing model approach. The result is in a mythical unit called "SAPS", which represents 0.4 IOPS for ERP/OLTP workloads, and 0.6 IOPS for BI/BW and OLAP workloads.
If you already have SAP or other applications running, use actual I/O measurements. IBM Business Partners and field technical sales specialists have an updated version of Disk Magic that can help size XIV configurations from PERFMON and iostat figures.
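As a toy illustration of the SAPS arithmetic above, here is a minimal sketch assuming the 0.4 and 0.6 IOPS-per-SAPS factors quoted in the session; real sizing should of course use Quick Sizer or Disk Magic.

```python
# Convert a SAPS rating into a rough IOPS estimate (illustrative only).
IOPS_PER_SAPS = {"oltp": 0.4, "olap": 0.6}   # factors quoted in the session

def estimate_iops(saps, workload):
    return saps * IOPS_PER_SAPS[workload]

print(estimate_iops(50_000, "oltp"))   # 20000.0 IOPS for an ERP/OLTP workload
print(estimate_iops(50_000, "olap"))   # 30000.0 IOPS for a BI/BW workload
```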
Lee La Frese, IBM STSM for Enterprise Storage Performance Engineering, presented internal lab test results for the XIV under various workloads, based on the latest hardware/software levels [announced two weeks ago]. Three workloads were tested:
Web 2.0 (80/20/40) - 80 percent READ, 20 percent WRITE, 40 percent cache hits for READ. YouTube, FlickR, and the growing list at [GoWeb20] are applications with heavy read activity, but because of [long-tail effects], may not be as cache friendly.
Social Networking (50/50/50) - 50 percent READ, 50 percent WRITE, 50 percent cache hits for READ. Lotus Connections, Microsoft Sharepoint, and many other [social networking] applications are more write-intensive.
Database (70/30/50) - 70 percent READ, 30 percent WRITE, 50 percent cache hits for READ. These are the traditional workload characteristics for most business applications, especially databases like DB2 and Oracle on Linux, UNIX and Windows servers.
The results were quite impressive. There was more than enough performance for tier 2 application workloads, and most tier 1 applications. The performance was nearly linear from the smallest 6-module to the largest 15-module configuration. Some key points:
A full 15-module XIV overwhelms a single SVC 8F4 node pair. For a full XIV, consider 4 to 8 nodes of the 8F4 model, or 2 to 4 nodes of the 8G4. For read-intensive cache-friendly workloads, an SVC in front of XIV was able to deliver over 300,000 IOPS.
A single-node TS7650G ProtecTIER can handle 6 to 9 XIV modules. Two nodes of TS7650G were needed to drive a full 15-module XIV. A single-node TS7650 in front of XIV was able to ingest 680 MB/sec on the seventh day of a 17 percent per-day change rate test workload using 64 virtual drives. Reading the data back got over 950 MB/sec.
For SAP environments where response times of 20-30 msec are acceptable, the 15-module XIV delivered over 60,000 IOPS. Reducing the load to 25,000-30,000 IOPS cut the response time to a faster 10-15 msec.
These were all done as internal lab tests. Your mileage may vary.
Not surprisingly, XIV was quite the popular topic here this week at the Storage Symposium. There were many more sessions, but these were the only two that I attended.
Continuing my week in Chicago, for the IBM Storage Symposium 2008, I attended several sessions intended to answer the questions of the audience.
In an effort to be cute, the System x team has a "Meet the xPerts" session at their System x and BladeCenter Technical Conference, so the storage side decided to do the same. Traditionally, these have been called "Birds of a Feather", "Q&A Panel", or "Free-for-All". They allow anyone to throw out a question, and have the experts in the room, whether IBM, Business Partner or another client, answer the question from their experience.
Meet the Experts - Storage for z/OS environments
Here were some of the questions answered:
I've seen terms like "z/OS", "zSeries" and "System z" used interchangeably, can you help clarify what this particular session is about?
IBM's current mainframe servers are all named "System z", such as our System z9 or System z10. These replace the older zSeries models of hardware. z/OS is one of the six operating systems that run on this hardware platform. The other five are z/VM, z/VSE, z/TPF, Linux and OpenSolaris. The focus of this session will be storage attached and used for z/OS specifically, including discussions of Omegamon and DFSMS software products.
What can we do to reduce our MIPS-based software licensing costs from our third party vendors?
Consider using the IBM System z Integrated Information Processor (zIIP); eligible work dispatched to this specialty engine does not count toward MIPS-based software charges.
What about 8 Gbps FICON?
IBM has already announced [FICON Express8] host bus adapter (HBA) cards, which will auto-negotiate down to 4Gbps and 2Gbps speeds. If you don't need the full 8Gbps speed now, you can still get the Express8 cards, but put in 4/2/1 Gbps SFP ports instead. Currently, LongWave (LW) is only supported to 4km at 8Gbps speed.
I want to use Global Mirror from my DS8100 to my remote DS8100, but also make test copies of my production data to an older ESS 800 I have locally. Any suggestions? Yes, consider using FlashCopy to simplify this process.
I have Global Mirror (GM) running now successfully with DSCLI, and now want to deploy IBM Tivoli Storage Productivity Center for Replication. Is that possible? Yes, Productivity Center for Replication will detect existing GM relationships, and start managing them.
I have already deployed HyperPAV and zHPF, is there any value in getting Solid-State Drives as well?
HyperPAV and zHPF impact CONN time, but SSD impacts DISC time, so they are complementary.
How should I size my FlashCopy SE pool? SE refers to "Space Efficient", which stores only the changes between the source and destination copies of each LUN or CKD volume involved. The general recommendation is to start with 20 percent and adjust accordingly.
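A minimal sketch of that 20 percent starting guideline, assuming you know the total source capacity covered by FlashCopy SE; the figure is a starting point to be adjusted, not a guarantee.

```python
# Size a FlashCopy SE repository from the 20 percent starting guideline.
def se_pool_size_gb(source_capacity_gb, change_ratio=0.20):
    """Starting estimate; monitor actual change rates and adjust accordingly."""
    return source_capacity_gb * change_ratio

print(se_pool_size_gb(10_000))   # 2000.0 GB to cover 10 TB of source volumes
```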
How many RAID ranks should I configure per DS8000 extent pool? IBM recommends 4 to 8 ranks per pool.
Meet the Experts: Storage for Linux, UNIX and Windows distributed systems
This session was focused on storage systems attached to distributed servers, as well as products from Tivoli used to manage them. Here were some of the questions answered:
When we migrated from Tivoli Storage Manager v5 to v6, we lost our favorite "Operational Reporting" tool. How can we get TOR back? You now get the new Tivoli Common Reporting tool.
How can we identify appropriate port distribution for multiple SVC node pairs for load balancing?
IBM Tivoli Storage Productivity Center v4.1 has hot-spot analysis with recommendations for Vdisk migrations.
We tried TotalStorage Productivity Center way back when, but the frequent upgrades were killing us. How has it been lately? It has been much more stable since v3.3, and was completely renamed to Tivoli Storage Productivity Center to avoid association with versions 1 and 2 of the predecessor product. The new "lightweight agents" feature of v4.1 resolves many of the problems you were experiencing.
We have over 1600 SVC virtual disks, how do we handle this in IBM Tivoli Storage Productivity Center? Use the Filter capability in combination with clever naming conventions for your virtual disks.
How can we be clever when we are limited to only 15 characters? Ok. We understand.
We are currently using an SSPC with Windows 2003 and 2GB memory, but we are only using the Productivity Center for Replication feature of it. Can we move the DB2 database over to a Windows 2008 server with 4GB of memory? Consider using the IBM Tivoli Storage Productivity Center for Replication software instead of SSPC for special circumstances like this.
We love the XIV GUI, how soon will all other IBM storage products have it also? As with every acquisition, IBM evaluates whether there are technologies from new products that can be carried back to existing products.
We are currently using 12 ports on our existing XIV, and love it so much we plan to buy a second frame, but are concerned about consuming another 12 ports on our SAN switch. Any suggestions? Yes, use only six ports per frame. Just because you have more ports, doesn't mean you are required to use them.
We have heard there are concerns from the legal community about using deduplication technology, any ideas how to address that?
Nobody here in the room is a lawyer, and you should consult legal counsel for any particular situation.
None of the IBM offerings intended for non-erasable, non-rewriteable (NENR) data retention records (DR550, WORM tape, N series SnapLock) supports dedupe today, and none of IBM's deduplication offerings (TS7650, N series A-SIS, TSM) makes any fit-for-purpose claims for regulatory compliance storage. However, be assured that all of IBM's dedupe technology involves byte-for-byte comparisons, so that you never lose any data due to false hash collisions. For all IBM compliance storage, what you write will be read back in the correct sequence of ones and zeros.
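To make the byte-for-byte point concrete, here is a minimal sketch of hash-based deduplication with full verification. The chunk store layout is hypothetical and not how any particular IBM product is implemented; the point is simply that a hash match alone is never trusted.

```python
import hashlib

# Hypothetical chunk store: hash -> list of distinct chunks sharing that hash.
store = {}

def dedupe_write(chunk):
    """Return (hash, index) identifying the single stored copy of this chunk."""
    key = hashlib.sha1(chunk).hexdigest()
    bucket = store.setdefault(key, [])
    for i, existing in enumerate(bucket):
        if existing == chunk:        # byte-for-byte check, not just the hash
            return key, i            # true duplicate: nothing new is stored
    bucket.append(chunk)             # first sighting, or a rare hash collision
    return key, len(bucket) - 1

ref1 = dedupe_write(b"block A")
ref2 = dedupe_write(b"block A")      # deduplicated against the first write
assert ref1 == ref2 and len(store) == 1
```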
Continuing my week in Chicago for the IBM Storage and Storage Networking Symposium and System x and BladeCenter Technical Conference, I presented a variety of topics.
Hybrid Storage for a Green Data Center
The cost of power and cooling has risen to become a top concern among data centers. I presented the following hybrid storage solutions that combine disk with tape. These provide the best of both worlds: the high-performance access time of disk with the lower costs and reduced energy consumption of tape.
IBM [System Storage DR550] - IBM's Non-erasable, Non-rewriteable (NENR) storage for archive and compliance data retention
IBM Grid Medical Archive Solution [GMAS] - IBM's multi-site grid storage for PACS applications and electronic medical records [EMR]
IBM Scale-out File Services [SoFS] - IBM's scalable NAS solution that combines a global name space with a clustered GPFS file system, serving as the ideal basis for IBM's own [Cloud Computing and Storage] offerings
Not only do these help reduce energy costs, they provide an overall lower total cost of ownership (TCO) than traditional WORM optical or disk-only storage configurations.
The Convergence of Networks - Understanding SAN, NAS and iSCSI in the Data Center Network
This turned out to be my most popular session. Many companies are at a crossroads in choosing data and storage networking solutions in light of recent announcements from IBM and others. In the span of 75 minutes, I covered:
Block storage concepts, storage virtualization and RAID levels
File system concepts, how file systems map files to block storage
Network Attach Storage, the history of the NFS and CIFS protocols, Pros and Cons of using NAS
Storage Area Networks, the history of SAN protocols including ESCON, FICON and FCP, Pros and Cons of using SAN
IP SAN technologies, iSCSI and Fibre Channel over Ethernet (FCoE), Pros and Cons of using this approach
Network Convergence with Infiniband and Fibre Channel over Convergence Enhanced Ethernet (FCoCEE), why Infiniband was not adopted historically in the marketplace as a storage protocol, and the features and enhancements of Convergence Enhanced Ethernet (CEE) needed to merge NAS, SAN and iSCSI traffic onto a single converged data center network [DCN]
Yes, it was a lot of information to cover, but I managed to get it done on time.
IBM Tivoli Storage Productivity Center version 4.1 Overview and Update
In conferences like these, there are two types of product-level presentations. An "Overview" explains how a product works today to those who are not familiar with it. An "Update" explains what's new in this version of the product for those who are already familiar with previous releases. I decided to combine these into one session for IBM's new version of [Tivoli Storage Productivity Center]. I was one of the original lead architects of this product many years ago, and was able to share many personal experiences about its evolution in development and in the field at client facilities. Analysts have repeatedly rated IBM Productivity Center as one of the top Storage Resource Management (SRM) tools available in the marketplace.
Information Lifecycle Management (ILM) Overview
Can you believe I have been doing ILM since 1986? I was the lead architect for DFSMS, which provides ILM support for z/OS mainframes. In 2003-2005, I spent 18 months in the field performing ILM assessments for clients, and now there are dozens of IBM practitioners in Global Technology Services and STG Lab Services that do this full time. This is a topic I cover frequently at the IBM Executive Briefing Center [EBC], because it addresses several top business challenges:
Reducing costs and simplifying management
Improving efficiency of personnel and application workloads
Managing risks and regulatory compliance
IBM has a solution based on several "entry points". The advantage of this approach is that it allows our consultants to craft the right solution to meet the specific requirements of each client situation. These entry points are:
Tiered Information Infrastructure - we don't limit ourselves to just "Tiered Storage", as storage is only part of a complete [information infrastructure] of servers, networks and storage
Storage Optimization and Virtualization - including virtual disk, virtual tape and virtual file solutions
Process Enhancement and Automation - an important part of ILM are the policies and procedures, such as IT Infrastructure Library [ITIL] best practices
Archive and Retention - space management and data retention solutions for email, database and file systems
I did not get as many attendees as I had hoped for this last one, as I was competing head-to-head in the same time slot as Lee La Frese covering IBM's DS8000 performance with Solid State Disk (SSD) drives, John Sing covering Cloud Computing and Storage with SoFS, and Eric Kern covering IBM Cloudburst.
I am glad that I was able to make all of my presentations at the beginning of the week, so that I can then sit back and enjoy the rest of the sessions as a pure attendee.
Last week, I was in Austin, and had dinner at [Rudy's Country Store and BBQ]. They offer their self-proclaimed "Worst BBQ in Austin!" with brisket, sausage and other meats by weight. I got a beer, some potato salad, and creamed corn, all at additional cost, of course. When I went to the cashier to pay, I was offered all the white bread I wanted at no additional charge. Are you kidding me? You are going to charge me for beer, but give me 8 to 12 complimentary slices of white bread (practically half a loaf)? Honestly, I consider bread and beer to be basically the same functional food item, differing only in solid versus liquid form. I chose to have only four slices. The food was awesome!
I am reminded of that from my latest exchange with EMC. It didn't take long after IBM's announcement yesterday of IBM's continued investment in its strategic product set, the IBM System Storage DS8000 series, for competitors to respond. In particular, fellow blogger BarryB from EMC has a post [DS8000 Finally Gets Thin Provisioning] that pokes fun at the new Thin Provisioning feature.
Interestingly, the attack is not on the technical implementation, which is straightforward and rock-solid, but rather that the feature is charged at a flat rate of $69,000 US dollars (list price) per disk array. BarryB claims that EMC Corporate recently decided to reduce the price of their own thin provisioning, called Symmetrix Virtual Provisioning (VP), on a select subset of models of their storage portfolio, although I have not found an EMC press release to confirm this. In other words, EMC will bury the cost of thin provisioning into the total cost for new sales, and stop shafting, er.. over-charging their existing Symmetrix customers that are interested in licensing this feature.
BarryB claims this was a lucky coincidence that his blog post happened just days before IBM's announcement.
(Update: While the timing appears suspicious, I am not accusing Mr. Burke of any wrongdoing involving insider information of IBM's plans, nor am I aware of any investigations on this matter from the SEC or any other government agency, and I apologize if my previous attempt at humor suggested otherwise. BarryB claims that the reduction in price was motivated to counter HDS's publicly announced "Switch It On" program, and that it is not a secret that EMC reduced VP pricing weeks ago, effective beginning 3Q09, just not widely advertised in any formal EMC press releases. Perhaps this new VP pricing was only disclosed to EMC's existing Symmetrix customers, Business Partners, and employees. Perhaps EMC's decision not to announce this in a press release was to avoid upsetting all the EMC CLARiiON customers that continue to pay for Thin Provisioning, or to avoid a long line of existing VP customers asking for refunds. In any case, people are innocent until proven otherwise, and BarryB rightfully deserves the presumption of innocence in this regard. I'm sorry, BarryB, for any trouble my previous comments may have caused you.)
Instead, let's explore some events over the past year that have led up to this.
Let's start with what EMC previously charged for this feature. Software features like this often follow a common pricing method: priced per TB, so larger configurations pay more, but tiered in a manner that larger configurations pay less per TB, combined with a yearly maintenance cost.
(Updated: EMC has asked me nicely not to post their actual list prices, so I will provide rough estimates instead. According to BarryB, these are no longer the current prices, so I present them as historical figures for comparison purposes only.)
(The original post showed a table here with the estimated initial list price, the software maintenance (SWMA) percentage, the resulting software maintenance per year, the number of years, and the total software license cost over 4 years.)
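For readers who want to reproduce the arithmetic behind such a table, here is a minimal sketch of the four-year cost model described above, using hypothetical numbers rather than any vendor's actual prices.

```python
# Four-year software cost: initial license plus yearly maintenance (SWMA).
def four_year_cost(list_price, swma_pct, years=4):
    """Total cost over `years`: license plus maintenance at swma_pct of list."""
    return list_price + years * (list_price * swma_pct)

# Hypothetical numbers, NOT actual EMC or IBM prices:
print(four_year_cost(list_price=100_000, swma_pct=0.20))   # 180000.0
```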
Holy cow! How did EMC get away with charging so much for this? To be fair, these prices are often deeply discounted, a practice common across the industry. However, it was easy for IBMers to show EMC customers that putting SVC or N series gateways in front of their existing EMC disks was more cost effective. Both SVC and N series, as well as IBM's XIV, provide thin provisioning at no additional charge.
HDS offers their own thin provisioning, called Hitachi Dynamic Provisioning. Hitachi also offers an SVC-like capability to virtualize storage behind the USP-V. However, I suspect that fewer than 10 percent of their install base actually licensed this capability because it cost so much. Under the cost pressure from IBM's thin provisioning capabilities in SVC, XIV and N series, Hitachi launched its ["Switch It On"] marketing campaign to activate virtualization and provide some features at no additional charge, including the first 10TB of Hitachi Dynamic Provisioning.
Last week, Martin Glassborow, on his StorageBod blog, argued that EMC and HDS should [Set the Wide Stripes Free]. Here is an excerpt:
HDS and EMC are both extremely guilty in this regard, both Virtual Provisioning and Dynamic Provisioning cost me extra as an end-user to license. But this is the technology upon which all future block-based storage arrays will be built. If you guys want to improve the TCO and show that you are serious about reducing the complexity to manage your arrays, you will license for free. You will encourage the end-user to break free from the shackles of complexity and you will improve the image of Tier-1 storage in the enterprise.
Martin is using the term "free" in two contexts above. In the Linux community, we are careful to clarify "free, as in free speech" or "free, as in free beer". Technically, EMC's virtual provisioning is neither, as one has to purchase the hardware to get the feature, so the term "at no additional charge" is more legally correct.
However, the discussion of "free beer" brings me back to my first paragraph about Rudy's BBQ. Nearly everyone eats bread, with the exception of those with [Celiac Disease], which causes an intolerance for the gluten protein in wheat, so burying the cost of white bread in the base cost of the BBQ meat is reasonable. In contrast, not everyone drinks beer, and there are probably several people who would complain if the cost of beer was included in the cost of the BBQ meat, so charging separately for beer makes business sense.
The same applies in the storage industry. When all (or most) customers of a product can benefit from a feature, it makes sense to include it at no additional charge. When a significant subset might not want to pay a higher base price because they won't use or benefit from a feature, it makes sense to make it optionally priced.
For the IBM SVC, XIV and N series, all customers can benefit from thin provisioning, so it is included at no additional charge.
For the IBM System Storage DS8000, perhaps some 30 to 40 percent of our clients have only System z and/or System i servers attached, and therefore would not benefit from this new thin provisioning, so it would seem unfair to raise the base price on everybody. The $69,000 flat rate was competitively priced against what EMC, HDS and 3PAR were charging for similar capability, and lower than the cost to add a new SVC cluster in front of the DS8000. IBM also charges an annual maintenance fee, but one far lower than what others charged as well.
(Note: These list prices are approximate, and vary slightly based on whether you are on legacy, ESA, Servicesuite or ServiceElect software and subscription (S&S) service plans, and the machine type/model. The tables were too complicated to include here in this post, so these numbers are rounded for comparison purposes only.)
(A second table followed, showing the IBM flat rate, the approximate software maintenance per year, the number of years, and the total software license cost over 4 years.)
Pricing is more art than science. Getting the right pricing structure that appears fair to everyone involved can be a complicated process.
Well, it's Tuesday, and you know what that means? IBM announcements!
Today we had several for the IBM System Storage product line. Here are some of them:
DS8000 gets thinner, leaner and faster
The 4.3 level of microcode for the IBM System Storage DS8000 series disk systems [announced enhancements] for both fixed block architecture (FBA) LUNs and count key data (CKD) volumes.
For FBA LUNs that attach to Linux, UNIX and Windows distributed systems, IBM announced native DS8000 Thin Provisioning support. Of course, many people already had this by putting IBM System Storage SAN Volume Controller (SVC) in front, but now DS8000 clients without SVC can also achieve the benefits of thin provisioning. This support also makes quick initialization a whopping 2.6 times faster.
For CKD volumes attached to z/OS on System z mainframes, IBM announced zHPF multitrack support for z/OS 1.9 and above. zHPF provides High Performance FICON, and can now handle multitrack I/O transfers for even better performance with zFS, HFS, PDSE, and extended striped data sets.
XIV gets better connected
A lot of the XIV [announced enhancements] and preview announcements centered around better connectivity. Here's a run-down:
Better host attachment connectivity by beefing up the interface modules that hold the FCP and iSCSI interface cards. XIV disk arrays have 3 to 6 of these in different configurations, and since these modules manage their own disks as well as receive host I/O requests for other disks, they are basically doing double duty. These interface modules can now be ordered as [Dual-CPU] modules.
Better infrastructure management by connecting XIV with the industry standard SMI-S interface to IBM Tivoli Storage Productivity Center. Now, XIV can be part of the single pane of glass console that manages all of your other disk arrays, tape libraries and SAN fabrics.
Better copy services for backups by connecting XIV with IBM Tivoli Storage Manager Advanced Copy Services. TSM for Advanced Copy Services is application aware and can coordinate XIV Snapshots similar to its current support for SVC and DS8000 FlashCopy capabilities.
Better connectivity to security systems by supporting LDAP credentials. Before, you had individual userids and passwords for each XIV, and these were probably different than all the other userid/password combinations you have for every other box on your data center floor. IBM is working on getting all products to support the Lightweight Directory Access Protocol, or [LDAP], so that we can reach the nirvana of "single sign-on": one userid/password per administrator for all IT devices in the company. (A small sketch of an LDAP login follows after this run-down.)
Better support with flexible warranty periods and non-disruptive code load options.
Better remote copy support by connecting to sites far, far away. IBM previewed that it will provide asynchronous disk mirroring from one XIV to another XIV natively. Before this, XIV's synchronous mirroring was limited to 300km distances. Many of our clients do long-distance global mirroring of their XIV today behind an SVC, but again, for those that don't yet have an SVC, this can be a reasonable alternative.
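As a small illustration of what LDAP-based authentication looks like from the client side, here is a minimal sketch using the open-source Python ldap3 package; the directory host, DN and password are placeholders, not values from any IBM product.

```python
# Minimal LDAP bind using the open-source `ldap3` package (pip install ldap3).
# The directory host, DN and password below are placeholders.
from ldap3 import Server, Connection

server = Server("ldaps://directory.example.com")
conn = Connection(server,
                  user="uid=storage-admin,ou=people,dc=example,dc=com",
                  password="secret")
if conn.bind():                      # one central credential, many devices
    print("authenticated via the central directory")
conn.unbind()
```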
TS7650 ProtecTIER data deduplication appliance now offers "no dedupe" option
In what some might consider a surprising move, IBM announced a "no dedupe" licensing option on its premiere deduplication solution, which somewhat reminds me of IBM's NOCOPY option on DS8000 FlashCopy. At first I thought, "Are you kidding me?!?!" However, this new license option allows the TS7650 appliance to compete on an even playing field with other virtual tape libraries (VTL) that do not offer deduplication capability. It also allows the TS7650 to be used for data that doesn't dedupe very well, such as seismic recordings, satellite images, or what have you. There are also clients who do not yet feel comfortable deduplicating their financial records for compliance reasons. This option now allows IBM to withdraw the TS7530 non-dedupe library from marketing. Having one technology that does both dedupe and no-dedupe is better than offering two separate libraries based on different technologies.
The ProtecTIER series also announced [IP remote distance replication]. This can be used to replicate virtual tape cartridges in one ProtecTIER over to another ProtecTIER at a remote location. You can decide to replicate all or just a subset of your virtual tapes, and this feature can be used to migrate, merge or split ProtecTIER configurations as your needs grow. Before this support, our TS7650G clients replicated the disk repository using native disk array replication technology, such as Global Mirror on the DS8000, but that meant that all data was replicated over to the secondary site. Now, with this new IP replication feature, you can be selective, and replicate only those virtual tapes that are mission critical.
The appliance now supports up to 36TB of disk capacity, and the new "IBM i" operating system on System i servers, formerly known as i5/OS.
GPFS does Windows
IBM's General Parallel File System (GPFS) has the lion's share of the file system market in the [Top 500 Supercomputers]. For a while, it was limited to just the Linux and AIX operating systems, but version 3.3 [extends this to Windows 2008 on 64-bit architectures]. GPFS is the file system used in IBM's Scale-Out File Services, the underlying technology of IBM's Cloud Computing and Storage offerings.
Well, I am back from Las Vegas, and had a pleasant [US Memorial Day] holiday yesterday.
Today is Tuesday, and that means more IBM announcements! IBM announced that the DCS9900 now supports an intermix of SAS and SATA drives. The DCS9900 is purpose-built specifically for the High-Performance-Computing (HPC) and Video Broadcasting industries.
The system is a combination of 4U controllers and 3U expansion drawers. The controllers handle either FC or InfiniBand attachment to host servers. The expansion drawers hold up to 60 drives each. With the new intermix feature, the following drives are supported:
7200 RPM SATA drives in 500, 750 and 1000 GB capacities
15K RPM SAS drives in 146, 300 and 450 GB capacities
The DCS9900 groups the drives into sets of 10, in RAID-6 ranks of 8+2P. IBM supports either 5, 10 or 20 expansion drawers to make a complete system. The maximum configuration would be 1,200 of the 1000GB SATA drives, for a total of 1.2 PB in two frames. Each rank must be all the same type and capacity of drive, but you can mix different types within the entire system.
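Here is a minimal sketch of that capacity arithmetic for the 8+2P rank layout; capacities are decimal (1GB = 10^9 bytes) and the figures are illustrative.

```python
# DCS9900 capacity arithmetic for RAID-6 ranks of 8 data + 2 parity drives.
DRIVES_PER_RANK = 10
DATA_DRIVES_PER_RANK = 8

def capacity_tb(drives, drive_gb):
    """Return (raw, usable) capacity in TB for a system of full 8+2P ranks."""
    ranks = drives // DRIVES_PER_RANK
    raw_tb = drives * drive_gb / 1000
    usable_tb = ranks * DATA_DRIVES_PER_RANK * drive_gb / 1000
    return raw_tb, usable_tb

# 20 drawers x 60 drives of 1000GB SATA:
print(capacity_tb(1200, 1000))   # (1200.0, 960.0): 1.2 PB raw, 960 TB usable
```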
The DCS9900 supports "Sleep Mode", an implementation of Massive Array of Idle Disks [MAID] technology, whereby each RAID rank can be either awake and spinning, or in energy-efficient standby mode. This makes for a more "green" storage system for data that is not accessed frequently.
Continuing my ongoing discussion on Solid State Disk (SSD), fellow blogger BarryB (EMC) points out in his [latest post]:
Oh – and for the record TonyP, I don't think I ever said EMC was using a newer or different EFDs than IBM. I just asserted that EMC knows more than IBM about these EFDs and how they actually work a storage array under real-world workloads.
(Here "EFD" is refers to "Enterprise Flash Drive", EMC's marketing term for Single Layer Cell (SLC) NAND Flash non-volatile solid-state storage devices. Both IBM and EMC have been selling solid-state storage for quite some time now, but EMC felt that a new term was required to distinguish the SLC NAND Flash devices sold in their disk systems from solid-state devices sold in laptops or blade servers. The rest of the industry, including IBM, continues to use the term SSD to refer to these same SLC NAND Flash devices that EMC is referring to.)
Although STEC asserts that IBM is using the latest ZeusIOPS drives, IBM is only offering the 73GB and 146GB STEC drives (EMC is shipping the latest ZeusIOPS drives in 200GB and 400GB capacities for DMX4 and V-Max, affording customers a lower $/GB, higher density and lower power/footprint per usable GB.)
Here is where I enjoy the subtleties between marketing and engineering. Does the above seem like he is saying EMC is using newer or different drives? What are typical readers expected to infer from the statement above?
That there are four different drives from STEC, in four different capacities. In the HDD world, drives of different capacities are often different, and larger capacities are often newer than those of smaller capacities.
That the 200GB and 400GB are the latest drives, and that 73GB and 146GB drives are not the latest.
That STEC press release is making false or misleading claims.
Uncontested, some readers might infer the above and come to the wrong conclusions. I made an effort to set the record straight. I'll summarize with a simple table:
(The table compared, for each STEC drive, the raw capacity against the usable capacity under the conservative format and under the aggressive format; for example, the same 256GB raw drive yields 146GB usable when formatted conservatively or 200GB usable when formatted aggressively.)
So, we all agree now that the 256GB drives that are formatted as 146GB or 200GB are in fact the same drives, that IBM and EMC both sell the latest drives offered by STEC, and that the STEC press release was in fact correct in its claims.
I also wanted to emphasize that IBM chose the more conservative format on purpose. BarryB [did the math himself] and proved my key points:
Under some write-intensive workloads, an aggressive format may not last the full five years. (But don't worry, BarryB assures us that EMC monitors these drives and replaces them when they fail within the five years under their warranty program.)
Conservative formats with double the spare capacity happen to have roughly double the life expectancy.
I agree with BarryB that an aggressive format can offer a lower $/GB than the conservative format. Cost-conscious consumers often look for less-expensive alternatives, and are often willing to accept lower reliability or a shorter life expectancy as a trade-off. However, "cost-conscious" does not describe the typical EMC customer, who often pays a premium for the EMC label. To compensate, EMC offers RAID-6 and RAID-10 configurations to provide added protection. With a conservative format, RAID-5 provides sufficient protection.
(Just so BarryB won't accuse me of not doing my own math: a 7+P RAID-5 array using conservative-format 146GB drives provides 1022GB of capacity, versus only 800GB total for a 4+4 RAID-10 configuration using aggressive-format 200GB drives.)
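The arithmetic is easy to check. Here is a minimal sketch of the usable-capacity calculation for the two eight-drive layouts being compared.

```python
# Usable capacity for the two eight-drive layouts compared above.
def raid5_usable(drives, drive_gb):
    return (drives - 1) * drive_gb      # one drive's worth of parity

def raid10_usable(drives, drive_gb):
    return (drives // 2) * drive_gb     # half the drives hold mirror copies

print(raid5_usable(8, 146))    # 1022 GB (7+P, conservative 146GB format)
print(raid10_usable(8, 200))   # 800 GB  (4+4, aggressive 200GB format)
```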
In an ideal world, you the consumer would know exactly how many IOPS your application will generate over the next five years, exactly how much capacity you will require, be offered all three drives in either format to choose from, and make a smart business decision. Nothing, however, is ever this simple in IT.
My post last week, [Solid State Disk on DS8000 Disk Systems], kicked up some dust in the comment section. Fellow blogger BarryB (a member of the elite [Anti-Social Media gang from EMC]) tried to imply that 200GB solid state disk (SSD) drives were different from or better than the 146GB drives used in IBM System Storage DS8000 disk systems. I pointed out that they are actually the same physical drive, just formatted differently.
To explain the difference, I will first have to go back to regular spinning Hard Disk Drives (HDD). There are variances in manufacturing, so how do you make sure that a spinning disk has AT LEAST the amount of space you are selling it as? The solution is to include extra. This is the same way that rice, flour, and a variety of other commodities are sold. Legally, if it says you are buying a pound or kilo of flour, then it must be AT LEAST that much to be legal labeling. Including some extra is a safe way to comply with the law. In the case of disk capacity, having some spare capacity and the means to use it follows the same general concept.
(Disk capacity is measured in multiples of 1000, in this case a Gigabyte (GB) = 1,000,000,000 bytes, not to be confused with [Gibibyte (GiB)] = 1,073,741,824 bytes, based on multiples of 1024.)
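For anyone who wants to see the difference in numbers, a quick sketch:

```python
GB = 1000**3    # decimal gigabyte, how disk vendors rate capacity
GiB = 1024**3   # binary gibibyte, how operating systems often report it

print(f"{146 * GB / GiB:.2f}")   # a "146GB" drive is about 135.97 GiB
```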
Let's say a manufacturer plans to sell a 146GB HDD. We know that in some cases there might be bad sectors on the disk that won't accept written data on day 1, and there are other marginally-bad sectors that might fail to accept written data a few years later, after wear and tear. A manufacturer might design a 156GB drive with 10GB of spare capacity, and format this with a defective-sector table that redirects reads/writes of known bad sectors to good ones. When a bad sector is discovered, it is added to the table, and a new sector is assigned out of the spare capacity. Over time, the amount of space that a drive can store diminishes year after year, and once it drops below its rated capacity, it fails to meet its legal requirements. Based on averages of manufacturing runs and material variances, these could then be sold as 146GB drives, with a life expectancy of 3-5 years.
With Solid State Disk, the technology requires a lot of tricks and techniques to stay above the rated capacity. For example, you can format a 256GB drive as a conservative 146GB usable, with an additional 110GB (75 percent of usable) spare capacity to handle all of the wear-leveling. You could lose up to 22GB of cells per year, and still have the rated capacity for the full five-year life expectancy.
Alternatively, you could take a more aggressive format, say 200GB usable, with only 56GB (28 percent of usable) spare capacity. If you lost 22GB of cells per year, then sometime during the third year, hopefully under warranty, your vendor could replace the drive with a fresh new one, and the replacement should last the rest of the five-year time frame. The failed drive, having 190GB or so of usable capacity, could then legally be re-issued as a refurbished 146GB drive to someone else.
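Here is a minimal sketch of that wear budget, assuming the simplified linear 22GB-per-year cell loss used in the example above; real wear-leveling behavior is far more complicated.

```python
# Years until spare capacity is exhausted, using the simple linear model above.
RAW_GB = 256   # raw capacity of the STEC drive in this example

def years_of_spares(usable_gb, loss_gb_per_year=22):
    spare_gb = RAW_GB - usable_gb
    return spare_gb / loss_gb_per_year

print(years_of_spares(146))   # 5.0  -> lasts the full five-year life
print(years_of_spares(200))   # ~2.5 -> replaced sometime in the third year
```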
The wear and tear on SSD happens mostly during erase-write cycles, so for read-intensive workloads, such as boot disks for operating system images, the aggressive 200GB format might be fine, and might last the full five years. For traditional business applications (70 percent read, 30 percent write) or more write-intensive workloads, IBM feels the more conservative 146GB format is a safer bet.
This should be of no surprise to anyone. When it comes to the safety, security and integrity of our clients' data, IBM has always emphasized the conservative approach.
(Note: IBM [Guidelines] prevent me from picking blog fights, so this post is only to set the record straight on some misunderstandings, point to some positive press about IBM's leadership in this area, and provide a different point of view.)
First, let's set the record straight on a few things. The [RedPaper is still in draft form] under review, and so some information has not yet been updated to reflect the current situation.
You can have 16 or 32 SSD per DA pair. However, you can only have a maximum of 128 SSD drives total in any DS8100 or DS8300. In the case of the IBM DS8300 with 8 DA pairs, it makes more sense to spread the SSD out across all 8 pairs, and perhaps this is what confused BarryB.
Yes, you can order an all-SSD model of the IBM DS8000 disk system. I don't see anywhere in the RedPaper that suggests otherwise, and I have confirmed with our offering manager that this is the case.
The 73GB and 146GB drives are freshly manufactured by STEC. The 146GB and 200GB drives are actually the same drive, just formatted differently. The 200GB format does not offer as much spare capacity for wear-leveling, and is therefore intended only for read-intensive workloads. (Perhaps EMC wants you to find this out the hard way so that you replace them more often???) These reduced-spare-capacity formats may not be appropriate for some write-intensive workloads. Don't let anyone from EMC try to misrepresent the 73GB or 146GB drives from STEC as older, obsolete, collecting dust in a warehouse, or otherwise no longer manufactured by STEC.
You can relocate data from HDD to SSD using "Data Set FlashCopy", a feature that does not involve host-based copy services, does not consume any MIPS on your System z mainframe, and is performed inside the DS8000 disk system. You can also use host-based copy services as well, but it is not the only way.
You can use any supported level of z/OS with SSD in the IBM DS8000. There is ENHANCED support mentioned in the RedPaper that you get only with z/OS 1.8 and above, allowing you to create automation policies that place data sets onto SSD or non-SSD storage pools. This synergy makes SSD with IBM DS8000 superior to the initial offerings that EMC had offered without this OS support.
I find it amusing that BarryB's basic argument is that IBM's initial release of SSD on the DS8000 is less than what the architecture could potentially be extended to support. Actually, if you look at EMC's November release of Atmos, as well as their most recent announcement of V-Max, they basically say the same thing: "Stay tuned, this is just our initial release, with various restrictions and limitations, but more will follow." Architecturally, the IBM DS8000 could support a mix of SSD and non-SSD on the same DA pairs, could support RAID-6 and RAID-10 as well, and could support larger capacity drives or higher-capacity read-intensive formats. These could all be done via RPQ if needed, or in a follow-on release.
BarryB's second argument is that IBM is somehow "throwing cold water" on SSD technology, that somehow IBM is trying to discourage people from using SSD by offering disk systems with this technology. IBM offered SSD storage on BladeCenter servers LONG BEFORE any EMC disk system offering, and IBM continues to innovate in ways that deliver the best business value from this new technology. Take for example this 24-page IBM Technical Brief: [IBM System z® and System Storage DS8000: Accelerating the SAP® Deposits Management Workload With Solid State Drives]. It is full of example configurations that show that SSD on the IBM DS8000 can help in practical business applications. IBM takes a solution view, and worked with DB2, DFSMS, z/OS, High Performance FICON (zHPF), and down the stack to optimize performance and provide real business value innovation. Thanks to this synergy, IBM can provide 90 percent of the performance improvement with only 10 percent of the SSD capacity required by EMC offerings. Now that's innovative!
The price and performance differences between FC and SATA (what EMC is mostly used to) are only 30-50 percent. But the price and performance differences between SSD and HDD are more than an order of magnitude, in some cases 10-30x, similar to the differences between HDD and tape. Of course, if you want hybrid solutions that take best advantage of SSD+HDD, it makes more sense to go to IBM, the leading storage vendor that has been doing HDD+tape hybrid solutions for the past 30 years. IBM understands this better, and has more experience dealing with these orders of magnitude, than EMC.
But don't just take my word for it. Here is an excerpt from Jim Handy, from [Objective Analysis] market research firm, in a recent Weekly Review from [Pund-IT] (Volume 5, Issue 23--May 6, 2009):
"What about IBM? One thing that we are finding is that IBM really “Gets It” in the area offlash in the data center. Readers of the Pund-IT Review will not only recall that IBM Researchpushed its SSD-based “Quicksilver” storage system to one million IOPS using Fusion-ioflash-based storage, but they also may have noticed that the recent MySQL and mem-cachedappliances recently introduced by Schooner Information Technology are both flash-enableddevices introduced in partnership with IBM. Ironically, while other OEMs are takingthe cautious approach of introducing a standard SSD option to their systems first, IBM appearsto have been working on several approaches simultaneously to bring flash to thedata center not only in SSDs, but in innovative ways as well."
As for why STEC put out a press release on their own this week without a corresponding IBM press release, I can only say that IBM already announced all of this support back in February, and I blogged about it in my post [Dynamic Infrastructure - Disk Announcements 1Q09]. This is not the first time one of IBM's suppliers has tried to drum up business in this manner. Intel often funds promotions for IBM System x servers (the leading Intel-based servers in the industry) to help drive more business for their Xeon processor.
So, BarryB, perhaps it's time for you to take out your green pen and work up another one of your all-too-common retractions and corrections.
It's Tuesday, which means IBM announcements, and today IBM made some major announcements that support a [Dynamic Infrastructure]! I hinted at this yesterday, choosing the week's theme to be all about Cloud Computing and Alternative Sourcing. I will briefly highlight today's storage-related announcements here, and try to go into more detail over the next few weeks.
Ethernet switches and routers
In support of Cloud Computing and Cloud Storage, IBM is now back in the Ethernet networking business. This is part of storage, as protocols like iSCSI, CIFS and NFS are gaining prominence. Extending IBM's existing OEM relationship with Brocade, there are four series:
[c-series] - "c" for Compact, these are 1U high fixed port switches
[g-series] - "g" for Pay-as-you-Grow using IronStack stacking technology to allow up to 8 switches to be glommed, glued, er.. "gathered" together as a single virtual chassis.
[m-series] - "m" for Multiprotocol Label Switching [MPLS] which supports routing between LAN and WAN networks over OC12 and OC48 lines.
[s-series] - "s" for slots, the B08S has eight slots, and the B16S has sixteen slots, supporting up to 384 ports. These models support Power-over-Ethernet [PoE] that simplifies attaching Voice-over-IP (VoIP) telephones and IP-based surveillance cameras.
IBM announced it will strengthen its partnership with Juniper Networks, and continues to consider Cisco a strategic partner as well. To help customers position themselves for Cloud Computing and Cloud Storage, IBM also launched some new services.
DS5000 enhancements
The IBM [DS5000] now supports self-encrypting disk drives, also known as "full-disk encryption" or FDE, for added security, and 8Gbps Fibre Channel (FC) ports for added performance. The DS5300 model in particular now supports up to 448 disk drives for added scalability.
Comprehensive Data Protection Solution
IBM's [Data Protection Solution] shows off IBM's awesome synergy between servers, storage and software, combining System x servers, Tivoli Storage Manager FastBack software, and DS5000, DS4000 or DS3000 series disk systems. The solution is designed to protect Windows-based servers and their applications, offering bare-metal restores and application-level protection for Oracle, SQL, Exchange and SAP.
Tivoli Storage Productivity Center
Last February, IBM previewed the renaming of TotalStorage Productivity Center to its new name, Tivoli Storage Productivity Center. Today, IBM announces [Tivoli Storage Productivity Center v4.1]. Some key changes include:
Productivity Center for Fabric has been merged into Productivity Center for Disk
Productivity Center for Replication is now integrated, but remains separately licensed
Productivity Center can now feed input to IBM's Novus Storage Enterprise Resource Planner [SERP]
TS7650 ProtecTIER Data Deduplication IP-based replication
IBM previews IP-based replication, which allows the TS7650 appliance or TS7650G gateway to send virtual tape data over to a remote location, instead of having the underlying disk systems perform the replication on their behalf. Having the TS7650 do the replication is preferred, as it can maintain virtual cartridge integrity: when a virtual tape is unmounted, the replication can begin at that point.
Earlier this week, EMC announced its Symmetrix V-Max, following two trends in the industry:
Using Roman numerals. The "V" here is for FIVE, as this is the successor to the DMX-3 and DMX-4. EMC might have gotten the idea from IBM's success with the XIV (which does refer to the number 14, specifically the 14th class of a Talpiot program in Israel that the founders of XIV graduated from).
Adding "-Max", "-Monkey" or "2.0" at the end of things to make them sound more cool and to appeal to a younger, hipper audience. EMC might have gotten this idea from Pepsi-Max (... a taste of storage for the next generation?)
I took a cue from President Obama and waited a few days to collect my thoughts and do my homework before responding. Special thanks to fellow blogger ChuckH for giving me a [handy list of reactions] to pick and choose from. It appears that the EMC marketing machine feels it is acceptable for their own folks to claim that EMC is doing something first, or that others are catching up to EMC, but when other vendors do likewise, then that is just pathetic or incoherent. A few reactions had already come in from fellow bloggers.
This was a major announcement for EMC, addressing many of the problems, flaws and weaknesses of the earlier DMX-3 and DMX-4 deliverables. Here's my read on this:
More ports
Now you can have as many FCP ports (128) as an IBM System Storage DS8300, although the maximum number of FICON ports is still short, and there is no mention of ESCON support. The Ethernet ports appear to be 1GbE, not the new 10GbE you might expect.
Support for System z mainframe
V-Max adds some new support to catch up with the DS8000, like Extended Address Volumes (EAV). EMC is still not quite there yet. IBM DS8000 continues to be the best, most feature-rich storage option if you have System z mainframe servers.
Performance
Both the IBM DS8000 and HDS USP-V beat the DMX-4 in performance, and in some cases the DMX-4 even lost to the IBM XIV, so EMC had to do something about it. EMC chooses not to participate in industry-standard performance benchmarks like those from the [Storage Performance Council], which limits them to vague comparisons against older EMC gear. I'll give EMC engineers the benefit of the doubt and say that V-Max is now "comparably as fast as HDS and IBM offerings".
Getting "V" in the name
The "V" appears to be for the roman number five, not to be confused with external heterogeneous storage virtualization that HDS USP-V and IBM SVC provide. There is no mention of synergy with EMC's failed "Invista" product, and I see no support for attaching other vendors disk to the back of this thing.
Switch to Intel processor
Apple switched its computers from PowerPC to Intel-based processors, and now EMC follows the same path. There are still some custom ASICs in V-Max, so it is not as pure as IBM's offerings.
Modular, XIV-like Scale-out Architecture
Actually, the packaging appears to follow the familiar system bays and storage bays of the DMX-4 and DMX-4 950 models, but architecturally it offers XIV-like attachment across a common switch network between "engines", EMC's term for interface modules.
Non-disruptive data migration
IBM's SoFS, DR550 and GMAS have this already, as does anything connected behind an IBM SAN Volume Controller.
Fully Automated Storage Tiering (FAST)
A long time ago, IBM used to have midrange disk storage systems called "FAStT", which stood for Fibre Array Storage Technology, so this might have given EMC the idea for their "Fully Automated Storage Tiering" acronym. The concept appears similar to what IBM introduced back in 2007 for Scale-Out File Services [SoFS], which not only provides policy-based placement, movement and expiration on different disk tiers, but includes tape tiers as well for a complete solution. I don't see anything in the V-Max announcement that it will support tape anytime soon.
And whatever happened to EMC's Atmos? Wasn't that supposed to be EMC's new direction in storage?
Zero-data loss Three-site replication
IBM already calls this Metro/Global Mirror for its IBM DS8000 series, but EMC chose to call it SRDF/EDP for Extended Distance Protection.
Ease of Use
The most significant part of the announcement is that EMC is finally focusing on ease of use. In addition to reducing the requirement for "Bin File" modifications, this box has a redesigned user interface to address usability issues. For past DMX models, EMC customers had to either hire EMC to do tasks for them that were just too difficult otherwise, or buy expensive software like EMC Control Center to manage them. EMC will continue to sell DMX-4 boxes for a while, as they are probably supply-constrained on the V-Max side, but I doubt they will retrofit these new features back to the DMX-3 and DMX-4.
When IBM announced its acquisition of XIV over a year ago now, customers were knocking down our doors to get one. This caught two particular groups looking like a [deer in headlights]:
EMC Symmetrix sales force: Some of the smarter ones left EMC to go sell IBM XIV, leaving EMC short-handed and having to announce they [were hiring during their layoffs]. Obviously, a few of the smart ones stayed behind, to convince their management to build something like the V-Max.
IBM DS8000 sales force: If clients are not happy with their existing EMC Symmetrix, why don't they just buy an IBM DS8000 instead? What does XIV have that DS8000 doesn't?
Let me contrast this with the situation Microsoft Windows is currently facing.
I am often asked by friends to help them pick out laptops and personal computers. I use Linux, Windows and MacOS, so have personal experience with all three operating systems.
Linux is cheaper and offers the power-user the most options for supporting older, less-powerful equipment, but I wouldn't have my Mom use it. While distributions like Ubuntu are making great strides, it is just too difficult for some people.
MacOS is nice; I like it. It works out of the box with little or no customization and an intuitive interface. However, some of my friends don't make IBM-level salaries, and have to watch their budget.
In their "I'm a PC" campaign, Microsoft is fighting both fronts. Let's examine two commercials:
In the first commercial, an eight-year-old puts together a video from pictures of toy animals and some background music. The message: "Windows is easier to use than Linux!" If they really wanted to send this message, they should have shown senior citizens instead.
In the second commercial, a young college student is asked to find a laptop with a 17-inch screen, and a variety of other qualifications, for under $1000 US dollars. The only model at the Apple store below this price had a 13-inch screen, but she finds a Windows-based system that has the right screen size and meets all the other qualifications. The message: "Windows-based hardware from a variety of competitors is less expensive than hardware from Apple!"
Both Microsoft and Apple charge a premium for ease-of-use. In the storage world, things are completely opposite. Vendors don't charge a premium for ease-of-use. In fact, some of the easiest systems to use are also the least expensive.
If you just have Windows and Linux, you can get an entry-level system like the IBM DS3000 series, which has only a few features and can be set up in six simple steps.
Next, if you have a more interesting mix of operating systems, say Linux, Windows and some flavors of UNIX like IBM AIX, HP-UX or Sun Solaris, then you might want the features and functions of pricier midrange offerings. More options mean that configuration and deployment are more difficult, however.
Finally, if you are a serious Fortune 500 company, running your mission-critical applications on System z or System i centralized systems in a big data center, then you might be willing to pay top dollar for the most feature-rich offerings of an Enterprise-class machine. Thankfully you have an army of highly-trained staff to handle the highest levels of complexity.
IBM's DS8000, HDS USP-V and EMC's Symmetrix are the key players in the Enterprise-class space. They tried to be ["all things to all people"], er.. perhaps all things to all platforms. All of the features and functions came at a price, not just in dollars, but in complexity and difficulty. You needed highly skilled storage admins using expensive storage management software, or had to hire the storage vendor's premium services to get the job done.
IBM recognized this trend early. IBM's SVC, N series and now XIV all offer ease-of-use with enterprise-class features and functions, at lower total cost of ownership than traditional enterprise-class systems. IBM is not the only one, of course, as smaller storage start-ups like 3PAR, Pillar Data Systems, Compellent, and to some extent Dell's EqualLogic all recognized this and developed clever offerings as well.
While IBM's XIV may not have been the first to introduce a modular, scale-out architecture using commodity parts managed by sophisticated ease-of-use interfaces, its success might have been the kick-in-the-butt EMC needed to follow the rest of the industry in this direction.
Continuing this week's series on Pulse 2009 video, we have a double header. Bob Dalton discusses our entry-level IBM System Storage [DS3000] and midrange IBM System Storage [DS4000] disk systems, followed by Dan Thompson discussing [IBM Tivoli Storage Manager FastBack] software.
IBM Tivoli Storage Manager FastBack is the result of IBM's [acquisition of FilesX], a company in Israel that developed software to backup servers at remote branch offices running Microsoft Windows operating system.
Well, it's Tuesday, which means IBM makes its announcements!
This week, IBM announces that it now supports 50GB Solid State Disk (SSD) drives in its [IBM System Storage EXP3000] disk systems. IBM has already made announcements about SSD enablement in the DS8000 and SAN Volume Controller (SVC), but now the EXP3000 brings SSD technology down to smaller System x server deployments.
Adoption of this exciting new technology is still in the early stages, despite the fact that IBM and other vendors have been touting it for a while. (For a quick blast to the past, here was my first post on the subject back from December 20, 2006: [Hybrid, Solid State and the future of RAID].) Recently, fellow blogger BarryB admitted that EMC has only sold SSD to [hundreds of their customers], and to be fair, I suspect IBM's sales of SSD in its BladeCenter servers [available since July 2007] have been in similar single-digit percentage territory as well.
The advantage of today's announcement is that you can mix and match SSD drives with SAS and SATA drives in the EXP3000. You don't have to buy an entire drawer of SSD; you can start with just a few drives, depending on your business needs. At the other extreme, you can have up to two drawers, with 12 SSD drives each, for a total of 24 drives directly attached to System x servers via the ServeRAID MR10M SAS/SATA controller adapter.
People are confused over various orders of magnitude. News of the economic meltdown often blurs the distinction between millions (10^6), billions (10^9), and trillions (10^12). To show how different these three numbers are, consider the following:
A million seconds ago - you might have received your last paycheck (12 days)
A billion seconds ago - you were born or just hired on your current job (31 years)
A trillion seconds ago - cavemen were walking around in Asia (31,000 years)
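These figures are easy to check for yourself; here is a quick Python sketch of the arithmetic (the year counts match the list above):

```python
SECONDS_PER_DAY = 60 * 60 * 24              # 86,400
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365.25

for label, seconds in [("million", 1e6), ("billion", 1e9), ("trillion", 1e12)]:
    print(f"a {label} seconds = {seconds / SECONDS_PER_DAY:,.0f} days"
          f" = {seconds / SECONDS_PER_YEAR:,.1f} years")

# a million seconds  =         12 days =      0.0 years
# a billion seconds  =     11,574 days =     31.7 years
# a trillion seconds = 11,574,074 days = 31,688.7 years
```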
That these numbers confuse the average person is no surprise, but that they confuse marketing people in the storage industry is even more hilarious. I am often correcting people who misunderstand MB (million bytes), GB (billion bytes) and TB (trillion bytes) of information. Take this graph as an example from a recent presentation.
At first, it looks reasonable: back in 2004, black-and-white 2D X-Ray images were only 1MB in size when digitized, but by 2010 there will be fancy 4D images that take 1TB, which the chart presents as a 1000x increase. What? Going from 1MB to 1TB is a 1,000,000x increase. When I pointed out this discrepancy, the person who put the chart together didn't know what to fix: were 4D images only 1GB in size, or was it really a 1000000x increase?
If a 2D image was 1000 by 1000 pixels, and each pixel was a byte of information, then a 3D image might be either 1000 by 1000 by 1000 [voxels], or 1000 by 1000 at 1000 frames per second (fps). The first is 3D volumetric space; the latter is called 2D+time in the medical field, though the rest of us just say "video". 4D images are 3D+time, volumetric scans over time, so conceivably these could be quite large in size.
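A quick back-of-the-envelope sketch, assuming one byte per pixel or voxel as in the example above:

```python
# Rough image sizes at one byte per pixel/voxel
size_2d = 1000 * 1000       # 1 MB: a 1000x1000 2D X-ray
size_3d = size_2d * 1000    # 1 GB: 1000^3 voxels, or 1,000 frames of 2D
size_4d = size_3d * 1000    # 1 TB: 1,000 volumetric scans over time

print(f"2D: {size_2d:>16,} bytes (~1 MB)")
print(f"3D: {size_3d:>16,} bytes (~1 GB)")
print(f"4D: {size_4d:>16,} bytes (~1 TB)")
print(f"1 MB to 1 TB is a {size_4d // size_2d:,}x increase")  # 1,000,000x, not 1,000x
```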
The key point is that advances in medical equipment result in capturing more data, which can help provide better healthcare. This would be the place I normally plug an IBM product, like the Grid Medical Archive Solution [GMAS], a blended disk and tape storage solution designed specifically for this purpose.
So, as government agencies look to spend billions of dollars to provide millions of people with proper healthcare, choosing to spend some of this money on a smarter infrastructure can result in creating thousands of jobs and save everyone a lot of money, but more importantly, save lives.
A short 2-minute [video] argues the case for Smarter Healthcare.
For more on this, check out Adam Christensen's blog post on [Smarter Planet], which points to a podcast by Dr. Russ Robertson, chairman of the Council of Medical Education at Northwestern University's Feinberg School of Medicine, and Dan Pelino, general manager of IBM's Healthcare and Life Sciences Industry.
An avid reader of this blog pointed me to a blog post, [A Small Tec DIGG on IBM XIV], by Gowri Ananthan, a System Engineer in Singapore. Basically, she covers past battles, er.. discussions between me and fellow blogger BarryB from EMC, and [blegs] for answers to three questions.
Gowri, here are your answers:
Q1. Does IBM offer a Pay-as-you-Go [PAYGO] upgrade path for its IBM XIV disk storage system?
The concern was expressed as:
PAYGO also requires the customer to purchase the remaining capacity within 12 months of installation. So it is More of a 12-month installment plan than pay-as-you-grow.
A1. Actually, IBM offers several methods for your convenience:
With IBM's Capacity on Demand (CoD) plan, you get the full frame with 15 modules installed on your data center floor, but only pay for the first four modules (21 TB), then pay in 5.3TB module increments as you need them over the next 12 months. This is ideal for companies that don't know how fast they will grow, but do not want to wait for new modules to be delivered and installed when needed.
With IBM's Partial Rack offering, you can get a system with as little as six modules (27TB), and then add more modules over time as you need them. This does not have to be done within 12 months; you can stay at six modules for as long as you like, and you can take as long as you want to add more modules. When you are ready for more capacity, the drawer or drawers can be delivered and installed non-disruptively.
Neither of these are "payment installment plans", but certainly if you want to spread your costs into regularly-scheduled monthly payments across multiple years, IBM Global Financing can probably work something out.
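As a sketch of the Capacity on Demand arithmetic, using only the figures quoted above (21 TB for the first four modules, 5.3TB per additional module; the Partial Rack figures differ slightly, since usable capacity is not strictly linear across module counts):

```python
BASE_MODULES, BASE_TB = 4, 21.0   # pay for four modules (21 TB) up front
TB_PER_EXTRA_MODULE = 5.3         # each additional module adds ~5.3 TB

def usable_tb(modules_paid):
    """Approximate usable capacity once 'modules_paid' modules are activated."""
    return BASE_TB + TB_PER_EXTRA_MODULE * (modules_paid - BASE_MODULES)

for m in (4, 8, 12, 15):
    print(f"{m:2d} modules: ~{usable_tb(m):.1f} TB usable")
# 15 modules works out to ~79 TB, the full-rack figure
```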
Q2. Does IBM consider the XIV as green storage?
The concern was expressed as:
You are powering (8.4KW) and cooling all 180 drives for the whole duration, whether you're using the capacity or not. is it what you called Greener power usage..?
A2. Yes. IBM considers the IBM XIV as green storage. The 8.4KW per frame is less than the 10-plus KW that a comparable 2-frame EMC DMX-950 system would consume. The energy savings in IBM XIV comes from delivering FC-like speeds using SATA disks that rotate more slowly, and therefore take less energy to spin.
In the fully-populated or Capacity on Demand configuration, you would spin all 180 disks. However, in the partial rack configuration, the 6-module system has only 40 percent of the disks, and therefore consumes only 40 percent of the energy. If you don't plan to store at least 20-30 TB, you might consider the DS3000, DS4000, DS5000, or DS8000 disk systems instead.
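Assuming power scales roughly with the number of spinning disks, as the answer above suggests, the partial-rack savings look like this:

```python
FULL_RACK_KW = 8.4        # fully populated frame: 15 modules, 180 drives
DRIVES_PER_MODULE = 12    # 180 drives / 15 modules

for modules in (6, 9, 15):
    drives = modules * DRIVES_PER_MODULE
    fraction = drives / 180.0
    print(f"{modules:2d} modules = {drives:3d} drives "
          f"= ~{fraction:.0%} of full power (~{FULL_RACK_KW * fraction:.1f} kW)")
# 6 modules -> 72 drives -> ~40% -> ~3.4 kW
```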
Q3. How do you connect more than 24 host ports to an IBM XIV?
The concern was expressed as:
And finally do not forget my question on 24-FC Ports… Up to 24 Fiber Channel ports offering 4 Gbps, 2Gbps or 1 Gbps multi-mode and single-mode support.Stop.. stop.. how you gonna squeeze existing bunch of FC cables in 24 ports?
A3. Best practices suggest that if you have ten or more physical servers, each with two separate FC ports, then you should use a SAN switch or director in between. If you require four ports per server, then you would need a SAN switch beyond six servers to connect to the IBM XIV. If you consider that 24 FC ports at 4Gbps represent nearly 10 GB/sec of bandwidth, you will recognize that this is not a performance bottleneck for the system.
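The "nearly 10 GB/sec" figure falls out of simple arithmetic, assuming roughly 400 MB/sec of effective data rate per 4Gbps FC port (after 8b/10b encoding):

```python
PORTS = 24
MB_PER_SEC_PER_PORT = 400   # effective data rate of one 4 Gbps FC port

aggregate = PORTS * MB_PER_SEC_PER_PORT
print(f"{PORTS} ports x {MB_PER_SEC_PER_PORT} MB/sec = {aggregate / 1000:.1f} GB/sec")
# 24 x 400 MB/sec = 9.6 GB/sec -- nearly 10 GB/sec
```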
Wrapping up this week's theme on IBM's Dynamic Infrastructure® strategic initiative, we have a few more goodies in the goody bag.
First item: Dave Bricker shows off the XIV cloud-optimized storage at Pulse 2009
Second item: Rodney Dukes discusses the latest features of the DS8000 disk system at Pulse 2009
Third item: IBM launches the [Dynamic Infrastructure Journal]. You can read the February 2009 edition online, and if you find it useful and interesting, subscribe to learn from IBM's transformation experts how to reduce cost, manage risk and improve service.
Whether or not you attended the IBM Pulse 2009 conference, you might enjoy looking at the rest of the series of videos on [YouTube] and photographs on [Flickr].
Well, it's Tuesday again, and that means more IBM announcements!
Today, IBM announced the enhanced IBM System Storage DS3200 disk system. It is part of our DS3000 series: the DS3200 is SAS-attach, the DS3300 is iSCSI-attach, and the DS3400 is FC-attach. All of them support up to 48 drives, which can be a mix of SAS and SATA drives.
The DS3200 supports the following operating environments (see IBM's [Interop Matrix] for details):
Linux (both Linux-x86 and Linux on POWER)
With today's announcements, the DS3200 can be used to boot from, as well as to hold data. This is ideal in combination with IBM BladeCenter: you can have 14 blades, with either x86 or POWER processors, attached to a DS3200 via SAS switch modules in the back of the chassis.
Let's take an example of how this can be used for a Scale Out File Services [SoFS] deployment.
First, we start with servers. We could use three [IBM System x3650] servers, but this would use up all six of the direct-attach ports. Instead, we'll choose the [BladeCenter H chassis] with three HS21 blades for SoFS, which leaves us with eleven empty blade slots where we could put a management node, or other blades to run applications.
SAS connectivity modules
The IBM BladeCenter [SAS Connectivity Module] allows the blade servers to connect to a DS3200. Two of them fit right in the back of the BladeCenter chassis, providing full redundancy without consuming additional rack space.
DS3200 and EXP3000 expansion drawers
We'll have one DS3200 controller with twelve internal drives, and three EXP3000 expansion drawers with twelve drives each, for a total of 48 drives. Using 1TB SATA drives, this would be 48TB raw capacity.
The end result? You get a 48TB NAS scalable storage solution, supporting up to 7500 concurrent CIFS and NFS users, with up to 700 MB/sec on large block transfers. By using BladeCenter, you can expand performance by adding more blades to the chassis, or give blades running SAP or Oracle RAC direct read/write access to the SoFS data.
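The raw-capacity arithmetic behind that configuration is straightforward; a quick sketch:

```python
# Drive counts for the example configuration above
ds3200_internal_drives = 12
exp3000_drawers = 3
drives_per_drawer = 12
tb_per_drive = 1            # 1TB SATA

total_drives = ds3200_internal_drives + exp3000_drawers * drives_per_drawer
print(f"{total_drives} drives x {tb_per_drive} TB = {total_drives * tb_per_drive} TB raw")
# 48 drives x 1 TB = 48 TB raw capacity
```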
Just another example on how IBM can bring together all the components of a solution to provide customer value!
Fellow blogger Chuck Hollis from EMC has a post titled [Whither Frankenstorage] causing quite a stir in the [Stor-o-Sphere]. He is not the first EMC blogger to use this phrase; I credit [BarryB] for coining the term back in September 2008. Frankenstein serves as the ideal icon for EMC's FUD machine. In the novel, Dr. Frankenstein was attempting to do something nobody else had ever attempted, to create human life from various dead body parts, a process full of uncertainty and doubt, with frightful results.
Perhaps it was a coincidence that I discussed IBM's storage strategy in my post [Foundations and Flavorings] on January 28, shortly followed by NetApp's announcement of V-series gateway [support of Texas Memory Systems' RamSan-500] on February 3. These two events might have been the trigger that pushed ChuckH over the edge to put pen to paper, er.. finger to keyboard.
Flinging FUD in all directions was ChuckH's not-so-subtle way to remind the world that EMC is the only major storage vendor not to offer a successful storage virtualization product. Without first-hand experience with well-designed storage virtualization, ChuckH conjectures that a configuration matching intelligent front-ends to reliable back-ends might be more expensive, might be more difficult to manage, or might be harder to support.
(Note: Rest assured, IBM can demonstrate that a modular approach, combining intelligent front-ends with reliable back-ends, can help reduce costs, be easier to manage, and be fully supported. Contact your local IBM Business Partner or storage sales rep for details.)
My favorite was from Nigel Poulton's post on [Ruptured Monkey]. Here's an excerpt:
In fact, I'm fairly certain that EMC don't back away from customers who run HP or IBM servers and say "sorry we cant help you here, an end to end HP or IBM solution would be much better for you when it comes to troubleshooting……. putting our storage in would only add extra layers of complexity and make things messy….."
On most other days, ChuckH has well-written, insightful blog posts that show that EMC brings some value to the industry. I could have made a snarky reference to [Dr Jekyll and Mr Hyde], or suggested this post proves that nobody at EMC is editing or reviewing Chuck's thoughts before they get posted. But it's too late, Chuck already got the message, and added the following to bring the discussion back to civility:
When considering the broad range of storage media service levels available today (flash, FC, SATA, spin-down, etc.) what's the best way to offer these media choices in an array? Is the answer (a) combine smaller arrays from different vendors together behind a virtualization head, or (b) invest the time and effort to build arrays that can directly support all of these media types?
Would anyone like to try a cogent response to the question posed, please?
To address ChuckH's question, Nigel's post gave me the idea to use today's celebration of [Charles Darwin]'s 200th birthday.
Charles Darwin argued that over millions of years, evolution results in changes in the inherited traits of a population of organisms from one generation to the next. A key component of this is a biological process called [mitosis] that allows a single cell to split and become two cells. In some cases, these individual daughter cells can then specialize to specific functions, such as nerve cells, muscle cells or bone cells. Over time, adaptations that work well carry forward, and those that don't get left behind.
I find it interesting that before [On the Origin of Species] was published in 1859, works of fiction like Mary Shelley's [Frankenstein] had monsters being "created", and afterward, monsters were the result of mutation or selective adaptation.
Nigel compares EMC's monolithic approach to a "one-man band, where one guy is trying to play all the instruments himself", and the pairing of an intelligent front-end with a reliable back-end to the "Philharmonic Orchestra". I would take it one step further, comparing single-cell organisms to multi-cell life forms.
Innovative companies like Google and Amazon can't wait for a completely integrated solution from a major IT vendor to meet their needs. Why should they? There are open standards, and ways to interconnect the best intelligence into a [dynamic infrastructure®]. You don't need to wait another million years to see which way the IT marketplace considers the better approach. Just look at the last 60 years. Back then, computer systems were fully integrated: server, storage, and the wires that connected them were all inside one huge container. Then, mitosis happened, and IBM created external tape storage in 1952, and external disk storage in 1956. Open standards for interfaces allowed third-party manufacturers like HDS, StorageTek and EMC to offer plug-compatible storage devices.
On the server side, it didn't take long for functionality in mainframes to split off. Mitosis happened again, with front-end UNIX systems processing incoming data, and mainframes handling the back-end databases and printing. The client-server era replaced dumb terminals with more intelligent desktops and workstations, which could handle the front-end processing to display information, with the back-end storage and number-crunching handled by the UNIX and mainframe systems they connected to. Connections between desktops and servers, and from servers to storage, have also evolved, from thousands of direct-attach cables to networks of switches and directors.
Charles Darwin was particularly interested in cases where evolution happened faster or slower than usual. While IBM and Microsoft encouraged third-party innovations on the PC side, Apple resisted mitosis, trying to keep its machines pure single-cell, integrated solutions. For the same reasons that you can't fight the laws of nature, Apple ended up having to support I/O ports to external devices. Thanks to open standards like USB and Firewire, you can connect third-party storage to Apple computers. My little Mac Mini at home has more devices hanging off it than any of my Windows or Linux boxes! And Apple's iPod is successful because its iTunes software runs on both Windows and Mac OS operating systems.
Every time mitosis happens in the IT industry, it opens up opportunities to specialize, to innovate, to adapt to a dynamically changing world. When mitosis is suppressed, you get limited products and frustrated engineers leaving to form their own start-up companies. But when mitosis is encouraged, you get successful products, solutions and partnerships positioned for a smarter planet.
Now that IBM XIV has proven that 1TB SATA drives are safe for high-end, tier-1, enterprise-class use, we have extended the DS8000 to support SATA drives as well. The DS8000 supports RAID-6 and RAID-10 for these drives.
Intelligent Write Caching
IBM Research conducts extensive investigations into improved algorithms for cache management. Intelligent Write Caching exploits both temporal and spatial locality to boost write performance.
Remote Pair FlashCopy®
This allows you to FlashCopy volume A to volume B, with volume B remotely mirrored via Metro Mirror to volume C at a secondary location. This gives you a consistent copy of your data at both locations.
IBM was the first in the industry to deliver tape-drive encryption, so it makes sense that IBM is also the first in the industry to deliver disk-drive encryption. These are 15K RPM drives in standard 146GB, 300GB and 450GB capacities. As with tape, encrypting at the disk device eliminates the huge overhead of server-based encryption methods.
Solid State Drive (SSD)
You can also have Solid State Disk drives in your DS8000, in 73GB and 146GB capacities, protected by RAID-5. If you are wondering what data to put on these much-faster drives, IBM has taken the work and worry out by putting intelligence in DB2 to optimize what gets placed on SSD for the most performance improvement.
IBM System Storage XIV
Continuing the incredible marketplace excitement over its Cloud-Optimized Storage [XIV series], IBM has now announced [new capacity options]. The IBM XIV R2 that we announced last August 2008 was a fixed 15-module configuration. In the new configurations, you can start with as little as six modules, representing a 40% partial rack of the original full model. Here is a table that shows the details:
(Table omitted: usable capacity in TB, Fibre Channel ports, and cache memory in GB for each module configuration.)
IBM System Storage N series
And last, but not least, we have two new models in IBM's [N6000 series]. The [N6060] has model A12 (single controller) and model A22 (dual controller). These are disk-less controllers that you can configure in either appliance mode or gateway mode. In appliance mode, you attach disk drawers such as the EXN1000, EXN2000 or EXN4000. In gateway mode, you attach external disk systems, such as the IBM DS8000 or XIV above.
It's ruggedized to handle earthquakes. IBM brings to the N series a feature that we've had for a while on other disk systems: a collection of bolts and anchors to secure the rack against physical tremors.
It's instrumented for IBM Active Energy Manager, a component of IBM Systems Director. New intelligent PDUs (iPDUs) are designed to help measure and monitor energy consumption. As companies get more concerned about the fate of the planet, monitoring energy consumption can help reduce their carbon footprint.
I'll cover the rest of the announcements tomorrow!
We've been quite busy here at the Tucson Executive Briefing Center. I am often asked to explain the relationship between IBM's various storage products. While automakers don't have to explain why they sell sports coupes, pickup trucks and minivans, this analogy does not adequately cover IT storage products. So, I have come up with a new analogy that seems to be a better fit: foundations and flavorings.
All over the world, meals are often comprised of a foundation, perhaps rice, potatoes or pasta, covered with some form of flavoring: sauces, pieces of meat or fish, grated cheese and spices. In Puerto Rico, I had dishes where the foundation was mashed bananas called [plantains]. Sandwich shops often let you pick your choice of bread, the foundation, and then your meats and cheeses, the flavorings. At our local steakhouse, [McMahon's], the menu lists a set of steaks as the foundation, such as Rib Eye, Filet Mignon, Prime Rib or New York Strip, and various flavorings, such as sauces and rubs to cover the steak. Last night, I had the Delmonico steak with the Cristiani sauce, consisting of Portobello mushrooms, garlic and aged Romano cheese.
This serves as a useful analogy for IBM's storage strategy. Allowing the foundations and flavorings to be separately orderable greatly simplifies the selection menu and provides a nearly any-to-any approach to meeting a variety of client needs. Let's take a look at both.
IBM's foundation products are the DS family [DS3000, DS4000, DS5000, DS6000 and DS8000 series], [DS9900 series], and [XIV] for disk, and the TS family [TS1000, TS2000, TS3000] series for tape drives and libraries. In much the same way you might prefer brown rice instead of white rice, or linguine instead of penne pasta, you might find the attributes of one storage foundation more attractive based on its performance, scalability and availability features for your particular application workloads.
Fellow IBM blogger Barry Whyte discusses SVC at great length on his [Storage Virtualization] blog. Flavoring foundation disk storage with the SAN Volume Controller can provide additional features and functions, and help improve scalability, performance or availability characteristics. For example, if you have DS4000, DS8000 and XIV, you might use SVC to provide a consistent methodology for asynchronous replication, a form of consistent "flavoring" if you will.
N series Gateways
The [N series gateways] offer flavoring to disk foundations, including unified NAS, iSCSI and FCP protocol host attachment, and application-aware capabilities. (As for our IBM N series appliances or "filers", these could be foundational storage behind an SVC, but that's perhaps a topic for another post.)
SoFS provides a global namespace with clustered NAS access to files. This is a blended disk-and-tape solution with built-in backup and Information Lifecycle Management [ILM]. Policies can be used to place different files onto different tiers of storage, automate the movement from tier to tier, including migration to tape, and even expire data when it is no longer needed.
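To make the idea concrete, here is a toy sketch of what such tiering rules might look like; the thresholds and tier names are invented for illustration, and this is not actual SoFS policy syntax:

```python
# Hypothetical ILM-style rules -- invented for illustration, not SoFS syntax.
# Rules are evaluated top to bottom; the first match wins.
POLICIES = [
    (lambda days: days <= 30,   "keep on fast FC disk tier"),
    (lambda days: days <= 180,  "migrate to SATA disk tier"),
    (lambda days: days <= 3650, "migrate to tape tier"),
    (lambda days: True,         "expire (delete after ~10 years)"),
]

def placement(days_since_last_access):
    for condition, action in POLICIES:
        if condition(days_since_last_access):
            return action

print(placement(400))   # migrate to tape tier
```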
The [IBM System Storage DR550] provides Non-erasable, Non-rewriteable (NENR) flavoring to storage. While the DR550 comes with internal disk storage, it can front-end a tape library filled with WORM cartridges. The DR550 has been paired up with small libraries (TS3200 or TS3310) as well as larger libraries like the TS3500.
The IBM Grid Medical Archive Solution [GMAS] provides a variety of capabilities for storing and accessing medical images, using a blended disk-and-tape approach. This allows hospital and clinic networks to provide access for doctors and radiologists from multiple locations.
Many of the flavorings are called "gateways". The IBM TS7650G flavors disk with a virtual tape library [VTL] that has inline data deduplication capability. Recent performance tests pairing the TS7650G flavoring with XIV foundation storage found this combination to be an excellent match.
Let me know what you think. Does this help you understand IBM's storage strategy and acquisitions? Enter your comments below.
"If you've spent any time in the storage biz, you probably realize that the server vendors sell more storage than they have any right to."
This is the old [Supermarkets-vs-Specialty Shops] debate I discussed over a year ago. The debate goes along the lines that some people prefer to buy their entire information infrastructure (servers, storage, software and services) from a single vendor, one-stop shopping, while others might prefer to buy the pieces as components from different vendors that specialize in each technology. Because of this, Specialty shops tend to focus on other Specialty shops as their primary competitors (EMC vs. NetApp), while Supermarkets tend to focus on other Supermarkets (IBM vs. HP).
The apparent contradiction is that Chuck feels the Supermarkets (IBM, HP, Sun and Dell) should not have any right to sell storage, in the same manner that butchers, bakers and candlestick makers do not believe that Supermarkets should have any right to sell meat, bread or candles. If servers and storage are so different, how can self-proclaimed storage-only specialist EMC have the right to sell their non-storage offerings, from server virtualization (VMware) to cloud-computing services? With EMC's latest announcement of DW/BI centers, I think we can safely take EMC off the list of storage-only specialists. We will need to come up with a third category for those caught in limbo between being one-stop shopping Supermarkets like IBM and being pure storage-only Specialists like NetApp. Perhaps EMC has become the IT equivalent of Wal-Mart's [Neighborhood Market]. (No offense intended to my friends at Wal-Mart!)
Then Chuck continues with these statements:
"It is rarely is it the case that a server vendor can offer you a better storage product, or better service, or better functionality than what a storage specialist can do.
...Interestingly enough, Dell appears to do a sizable amount of storage business "off base" with EMC products -- outside the context of a specific server transaction."
This second contradiction relates to products that are manufactured by specialty shops, but sold through supermarket channels. Chuck would like to imply that the only storage products anyone should consider are those made by specialty shops, whether you get them directly or through Supermarkets with appropriate OEM agreements. Storage made by Supermarkets, either organically developed or through acquisitions, should not be considered? What happens when a Supermarket acquires a specialty shop? We've already seen how negative EMC has been against IBM's acquisitions of XIV and Diligent, which allowed a Supermarket like IBM to provide better products in both cases than what is available from any specialty shop. That pokes a big hole in the argument!
But Dell also acquired EqualLogic, which Chuck admits might have a "fit in the marketplace". As it turns out, companies would rather buy EMC equipment from Dell sales people than from EMC directly, and perhaps this is because Dell, like IBM, sees the big picture. Dell, IBM and the rest of the IT Supermarkets understand the entire information infrastructure, not just the storage components of a data center. With HP and Sun selling HDS gear, and IBM selling NetApp gear, it becomes obvious that EMC needs Dell more than Dell needs EMC.
Chuck then pokes fun at NetApp in comparing the EMC NX4 to NetApp's FAS2020, comparable to IBM System Storage N series N3300. Here's an excerpt:
Like other Celerras, it does the full unified storage thing: iSCSI, NAS and "real deal" FC that isn't emulated.
The irony, of course, is that the NX4 does not actually use "real" Fibre Channel drives, but rather SAS and SATA drives. I guess Chuck's concern is that the NetApp, which does use "real" Fibre Channel drives, provides FC-attached LUNs to the host through its WAFL mapping, rather than through EMC's traditional RAID-rank mapping approach. How Chuck can imply that anything in the IT industry that is "emulated" is somehow seriously worse than "real", but then spend 40 percent of his posts on the benefits of VMware, which offers "emulated" virtual machines, seems to be yet another contradiction.
"Cloud computing" has been ill-defined and over-hyped, yet storage vendors have been quick to trot out their own "cloud storage" offerings and end users are wondering whether there's significant cost savings in these services for them, particularly in tough economic times.
"Cloud-speak" can be downright confusing....
"Surprisingly, Gartner considers the amorphous nature of the term to be good news: 'The very confusion and contradiction that surrounds the term 'cloud computing' signifies its potential to change the status quo in the IT market,' the IT research firm said earlier this year."
Consistent with Scott Adams's original prediction, the barriers to entry have lowered for storage vendors as well. Rather than competing on function and price through valued relationships and trusted expertise, some vendors would rather confuse. EMC tries to paint the NX4 as "just as good as" a NetApp or IBM N series for unified storage, and EMC tries to create new categories, like Cloud-Oriented Storage (COS), to give their me-too products the impression they are in a league of their own. All of this discourages customers from making their own comparisons and doing their own research.
IBM doesn't play that way. If you want straight talk about IBM's products, contact your local IBM Business Partner or sales rep.
This wraps up my week in Las Vegas for the 27th Annual [Data Center Conference]. This conference follows the common approach of ending at noon on Friday, so that attendees can get home to their families for the weekend, or start their weekend in Las Vegas early to watch the 50th annual Wrangler National Finals Rodeo.
I attended the last few sessions. Here is my recap:
Where, When and Why do I need a Solid-State Drive?
The Internet transports digital data between devices; all other uses have evolved from this aim. Increasing the data storage at any node on the Web therefore increases the possibilities at every other point, and we are just now beginning to recognize the implications of this. The two speakers co-presented this session to cover how Solid State Disk (SSD) may participate.
Electronic surveys of the audience provided some insight. Only 12 percent are deploying SSD now, while 59 percent are evaluating the technology. A whopping 89 percent did not understand SSD technology, or how it would apply to their data center. Here is the expected timeline for SSD adoption:
17 percent - within 1 year
60 percent - around 3 years from now
21 percent - 5 years or later
The main reasons cited for adopting SSD were increasing IOPS, reducing power and floorspace requirements, and expanding global networks. Here's a side-by-side comparison between HDD and SSD:
Disk array with 120 HDD (73GB drives): 100 MB/sec per drive, 300 IOPS per drive, 12 Watts per drive
Disk array with 120 SSD (32GB drives): 250 MB/sec read and 170 MB/sec write per drive, 35,000 IOPS per drive, 2.4 Watts per drive
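Scaling those per-drive numbers up to the full 120-drive arrays makes the contrast starker; a quick sketch:

```python
DRIVES = 120
hdd = {"iops": 300,   "watts": 12.0}
ssd = {"iops": 35000, "watts": 2.4}

for name, d in (("HDD", hdd), ("SSD", ssd)):
    print(f"{name} array: {DRIVES * d['iops']:>9,} IOPS, "
          f"{DRIVES * d['watts']:>5,.0f} Watts, "
          f"{d['iops'] / d['watts']:>8,.0f} IOPS per Watt")
# HDD array:    36,000 IOPS, 1,440 Watts,       25 IOPS per Watt
# SSD array: 4,200,000 IOPS,   288 Watts,   14,583 IOPS per Watt
```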
However, the cost-per-GB for SSD is still 25x that of traditional spinning disk, and the analysts expected SSD to remain 10-20x more expensive for a while. For now, they estimate that SSD will mostly be found in blade servers, enterprise-class disk systems, and high-end network directors.
The speakers gave examples such as Sun's ZFS Hybrid, and other products from NetApp, Compellent, Rackable, Violin, and Verari Systems.
Taking fear out of IT Disaster Recovery Exercises
The analyst presented best practices for disaster recovery testing with a "Pay Now or Pay Later" pre-emptive approach. Here were some of the suggestions:
Schedule adequate time for DR exercises
Build DR considerations into change control procedures and project lifecycle planning
Document interdependencies between applications and business processes
Bring in the "crisis team" on even the smallest incidents to keep skill sharp
Present the "State of Disaster Recovery" to Senior Management annually
The speaker gave examples of different "tiers" for recovery, with appropriate RPO and RTO levels, and how often each should be tested per year. A survey of the audience found that 70 percent already have a tiered recovery approach.
In addition to IT staff, you might want to consider inviting others to the DR exercise as reviewers for oversight, including: Line of Business folks, Facilities/Operations, Human Resources, Legal/Compliance officers, even members of government agencies.
DR exercises can be performed with a variety of scopes and objectives:
Tabletop Test - IBM calls these "walk-throughs", where people merely sit around the table and discuss what actions they would take in the event of a hypothetical scenario. This is a good way to explore all kinds of scenarios from power outages, denial of service attacks, or pandemic diseases.
Checklist Review - Here a physical inventory is taken of all the equipment needed at the DR site.
Stand-alone Test - Sometimes called a "component test" or "unit test", a single application is recovered and tested.
End-to-End simulation - All applications for a business process are recovered for a full simulation.
Full Rehearsal - Business is suspended to perform this over a weekend.
Production Cut-Over - If you are moving data center locations, this is a good time to consider testing some procedures. Other times, production is cut over to the DR site for a week and then returned to the primary site.
Mock Disaster - Management calls this unexpectedly; certain IT staff are told to participate, and others are told not to. This helps identify critical resources, how well procedures are documented, and whether members of the team are adequately cross-trained.
For each exercise, set the appropriate scope and objectives, score the results, and then identify action plans to address the gaps uncovered. Scoring can be as simple as "Not addressed", "Needs Improvement" and "Met Criteria".
Full Speed Ahead for iSCSI
The analyst presented this final session of the conference. He recognized IBM's early leadership in this area back in 1999, with the IP200i disk system. Today, there are many storage vendors that provide iSCSI solutions, the top three being:
23 percent - Dell/EqualLogic
15 percent - EMC
14 percent - HP/LeftHand Networks
This protocol has mostly been adopted for Windows, Linux and VMware, but has been largely ignored by the UNIX community. The primary value proposition is to offer SAN-like functionality at lower cost. When using the existing NICs that come built-in on most servers, iSCSI can be 30-50 percent less expensive than FC-based SANs. Even if you install TCP-Offload-Engine (TOE) cards into the servers, iSCSI can still represent a 16-19 percent cost savings. Many IBM servers now have TOE functionality built-in.
Since lower costs are the primary motivator, most iSCSI deployments are on 1GbE. The new 10Gbps Ethernet is still too expensive for most iSCSI configurations. For servers running a single application, two 1GbE NICs are sufficient. Servers running virtualization with multiple workloads might need four or five 1GbE NICs, or two 10GbE NICs if 10Gbps is available.
The iSCSI protocol has been most successful for small and medium sized businesses (SMB) looking for one-stop shopping. Buying iSCSI storage from the same vendor as your servers makes a lot of sense: EqualLogic with Dell servers, LeftHand software with HP servers, and IBM's DS3300 or N series with IBM System x servers. The average iSCSI unit was 10TB for about $24,000 US dollars.
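Putting those figures together (hedging that these are conference averages, not vendor pricing):

```python
avg_price_usd = 24_000
avg_capacity_tb = 10
print(f"Average iSCSI unit: ${avg_price_usd / avg_capacity_tb:,.0f} per TB")  # $2,400/TB

# Implied FC SAN price at the quoted savings range (30-50% cheaper with built-in NICs)
for savings in (0.30, 0.50):
    fc_price = avg_price_usd / (1 - savings)
    print(f"At {savings:.0%} savings, a comparable FC SAN would run ~${fc_price:,.0f}")
# At 30% savings -> ~$34,286; at 50% savings -> ~$48,000
```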
Security and management software for iSCSI is not as fully developed as for FC-based SANs. For this reason, most network vendors suggest keeping IP SANs isolated from your regular LAN. If that is not possible, consider VPN or encryption for added security. These security and management issues imply that iSCSI won't dominate the large enterprise data center. Instead, many are closely watching the adoption of Fibre Channel over Ethernet (FCoE), based on revised standards for 10Gbps Ethernet. FCoE standards probably won't be finalized until mid-2009, with products from major vendors by 2010, perhaps taking as much as 10 percent market share by 2011.
I hope you have enjoyed this series of posts. In addition to the sessions I attended, the conference has provided me with 67 presentations to review. Those who attended could purchase all the audio recordings and proceedings of every session for $295 US dollars, and those who missed the event can purchase them for $595 US dollars. These are reasonable prices when you realize that the average Las Vegas visitor spends 13.9 hours gambling, losing an average of $626 US dollars per visit. The audio recordings and proceedings can provide more than 13.9 hours of excitement for less money!
The title of this post is inspired by Baxter Black's [latest book]. Rather than a recap of the break-out sessions, I thought I would comment on a few sentences, phrases or comments I heard in the afternoon and evening.
Stop buying storage from EMC or NetApp
The lunch was sponsored by Symantec. Rod Soderbery presented "Taking the cost out of cost savings", explaining some ideas to reduce IT costs immediately.
First, he suggested to "stop buying storage" from EMC or NetApp that charge a premiumfor tier-one products. Instead, Rod suggested that people should "think like a Web company"and buy only storage products based on commodity hardware to save money, and to use SRM software to identify areas of poor storage utilization. IBM's TotalStorage Productivity Center softwareis often used to help with this analysis.
His other suggestions were to adopt thin provisioning, data deduplication, and virtualization. The discussion at my table started with someone asking, "How do we adopt those functions without buying new storage capacity with those features already built-in?" I explained that IBM's SAN Volume Controller (SVC), N series gateways, and TS7650G ProtecTIER virtual tape gateway can all provide one or more of these features for your existing disk storage capacity.
IBM and HP are leaders in blade servers
In the session "Future of Server and OS: Disappearing Boundaries", the audience confirmedby electronic survey that IBM and HP are the leaders in blade servers, although blades representonly 8-10 percent of the overall server market.
Interestingly, 22 percent of the audience has deployed both x86 and non-x86 (POWER, SPARC, etc.) blade servers, which the presenters considered an insightful finding.
Another survey of the audience found that 3 percent considered Sun/STK their primary storage vendor. One of the presenters was delighted that Sun is still hanging in there.
IBM Business Partners deliver the best of IBM and mask the worst
Elaine Lennox, IBM VP, and Mark Wyllie, CEO of Flagship Solutions Group, Inc., presented back-to-back IBM-sponsored sessions. Elaine presented IBM's vision, the New Enterprise Data Center, and the challenges that demand a smarter planet.
Mark focused on his company's experience working with IBM through Innovation Workshops. These are assessments that can help identify where you are now, where you want to be, and the action plans to address the gaps.
Cats and Dogs, Oil and Water, Microsoft Windows and Mission-critical applications, what do all of these have in common?
NEC Corporation of America sponsored sessions on some x86-based solutions they have to offer. The first part, titled "Rats Nests, Snow Drifts and Trailers", focused on unified storage, and the second part, presented by Michael Nixon, focused on how to bring Microsoft Windows servers into the data center for mission-critical applications.
The Economy might be slowing, but storage is still growing
Two analysts co-presented "The Enterprise Storage Scenario". Unlike computing capacity, thereis no on/off switch for storage, not from applications nor from end-users. The cost ofpower for storage is expected to be 3x by 2013. Virtual servers, includingVMware and Microsoft's Hyper-V will drive the need for shared external disk storage.A survey of the audience found 20 percent were expecting to purchase additional storagecapacity 4Q08.
When someone reaches age 52, they expect to coast the rest of their career
At dinner with analysts, the discussion of financial meltdown and bailouts is unavoidable, including everyone's views about the proposed bailout of the Big 3 automakers. I can't defend Ford, GM and Chrysler paying their people $70 US dollars per hour, when their US counterparts at Toyota or Honda are only paid $45 to $50 dollars per hour.
However, I have a close friend who retired after 20 years working for the fire department, and a cousin who retired after 20 years serving in the Navy (the US Navy, not the Bolivian Navy), and both are still in their forties. A long time ago, IT professionals retired after 30 years, in some cases with 50 to 60 percent of their base pay as their pension for the rest of their lives. A 52-year-old who has worked 30 years might expect to enjoy the rest of his old age playing golf and pursuing other hobbies. This is not "coasting"; it is called "retirement". The few of my colleagues that I have seen work 35 to 40 years did so because they enjoyed the challenge of work at IBM. They enjoyed solving tough engineering problems and helping customers. As long as they were having fun on the job, IBM was glad to keep their wealth of experience on board and actively engaged.
Unfortunately, many people rely on their own investments in the stock market for retirement, rather than company pensions. With the current financial crisis, I suspect many people my age are reconsidering their previous retirement plans.
We're going to need more trains!
I took the monorail back to my hotel. The ride includes funny announcements and statistics,including this gem:
"Since 1940, Las Vegas has doubled in population every ten years, which means thatby the year 2230, we will have over 1 trillion people calling Las Vegas home. We're goingto need more trains!"
That wraps up Tuesday, Day 2 of my attendance here! Now for some sleep.
This week is Thanksgiving holiday in the USA, so I thought a good theme would be things I am thankful for.
I'll start with saying that I am thankful EMC finally announced Atmos last week. This was the "Maui" part of the Hulk/Maui rumors we heard over a year ago. To quickly recap, Atmos is EMC's latest storage offering for global-scale storage intended for Web 2.0 and Digital Archive workloads. Atmos can be sold as just software, or combined with Infiniflex, EMC's bulk, high-density commodity disk storage system. Atmos supports traditional NFS/CIFS file-level access, as well as SOAP/REST object protocols.
I'm thankful for various reasons, here's a quick list:
It's hard to compete against "vaporware"
Back in the 1990s, IBM was trying to sell its actual disk systems against StorageTek's rumored "Iceberg" project. It took StorageTek some four years to get this project out, but in the meantime, we were comparing actuality against possibility. The main feature is what we now call "Thin Provisioning". Ironically, StorageTek's offering was not commercially successful until IBM agreed to resell it as the IBM RAMAC Virtual Array (RVA).
Until last week, nobody knew the full extent of what EMC was going to deliver on the many Hulk/Maui theories. Several hinted at what it could have been, and I am glad to see that Atmos falls short of those rumored possibilities. This is not to say that Atmos can't reach its potential, and certainly some of the design is clever, such as offering native SOAP/REST access.
Instead, IBM can now compare Atmos/Infiniflex directly to the features and capabilities of IBM's Scale Out File Services [SoFS], which offers a global-scale, multi-site namespace with policy-based data movement; IBM System Storage Multilevel Grid Access Manager [GAM], which manages geographically distributed information; and the IBM [XIV Storage System], which offers high-density bulk storage.
Web 2.0 and Digital Archive workloads justify new storage architectures
When I presented SoFS and XIV earlier this year, I mentioned they were designed for the fast-growing Web 2.0 and Digital Archive workloads, which were unique enough to justify their own storage architectures. One criticism was that SoFS appeared to duplicate what could be achieved with dozens of IBM N series NAS boxes connected with Virtual File Manager (VFM). Why invent a new offering with a new architecture?
With the Atmos announcement, EMC now agrees with IBM that the Web 2.0 and Digital Archive workloads represent a unique enough "use case" to justify a new approach.
New offerings for new workloads will not impact existing offerings for existing workloads
I find it amusing that EMC is quickly defending that Atmos will not eat into its DMX business, which is exactly the FUD they threw out about IBM XIV versus DS8000 earlier this year. In reality, neither the DS8000 nor the DMX were used much for Web 2.0 and Digital Archive workloads in the past. Companies like Google, Amazon and others had to either build their own from piece parts, or use low-cost midrange disk systems.
Rather, the DS8000 and DMX can now focus on the workloads they were designed for,such as database applications on mainframe servers.
Cloud-Oriented Storage (COS)
Just when you thought we had enough terminology already, EMC introduces yet another three-letter acronym [TLA]. Kudos to EMC for coining phrases to help move new concepts forward.
Now, when an RFP asks for Cloud-oriented storage, I am thankful this phrase will help serve as a trigger for IBM to lead with SoFS and XIV storage offerings.
Digital archives are different than Compliance Archives
EMC was also quick to point out that object-storage Atmos is different from their object-storage EMC Centera; the former is for "digital archives" and the latter for "compliance archives". Different workloads, different use cases, different offerings.
Ever since IBM introduced its [IBM System Storage DR550] several years ago, EMC Centera has been playing catch-up to match IBM's many features and capabilities. I am thankful the Centera team was probably too busy to incorporate Atmos capabilities, so it was easier to make Atmos a separate offering altogether. This allows the IBM DR550 to continue to compete against Centera's existing feature set.
Micro-RAID arrays, logical file and object-level replication
I am thankful that one of the Atmos policy-based features is replicating individual objects, rather than LUN-based replication and protection. SoFS supports this for logical files regardless of their LUN placement, GAM supports replication of files and medical images across geographical sites in the grid, and the XIV supports this for 1MB chunks regardless of their hard disk drive placement. The 1MB chunk size was based on the average object size of established Web 2.0 and Digital Archive workloads.
I tried to explain the RAID-X capability of the XIV back in January, under much criticism that replication should only be done at the LUN level. I am thankful that Marc Farley on StorageRap coined the phrase [Micro-RAID array] to help move this new concept further. Now, file-level, object-level and chunk-level replication can be considered mainstream.
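For readers unfamiliar with the concept, here is a toy sketch of chunk-level mirroring in the XIV style: a LUN is cut into 1MB chunks, and each chunk and its mirror copy land on two different modules chosen pseudo-randomly. This illustrates the idea only; it is not IBM's actual placement algorithm.

```python
import random

MODULES = range(15)   # a full rack has 15 modules

def place_chunks(lun_size_mb):
    """Assign each 1MB chunk a primary and mirror module (never the same one)."""
    placement = {}
    for chunk in range(lun_size_mb):
        primary, mirror = random.sample(MODULES, 2)
        placement[chunk] = (primary, mirror)
    return placement

print(place_chunks(4))   # e.g. {0: (3, 11), 1: (7, 0), 2: (14, 2), 3: (5, 9)}
```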
Much larger minimum capacity increments
The original XIV in January was 51TB of capacity per rack, and this went up to 79TB per rack for the most recent IBM XIV Release 2 model. Several complained that nobody would purchase disk systems in such increments. Certainly, small and medium size businesses may not consider XIV for that reason.
I am thankful Atmos offers 120TB, 240TB and 360TB sizes. The companies that purchase disk for Web 2.0 and Digital Archive workloads do purchase disk capacity in these large sizes. Service providers add capacity to the "Cloud" to support many of their end-clients, so purchasing disk capacity to rent back out represents a revenue-generating opportunity.
Renewed attention on SOAP and REST protocols
IBM and Microsoft have been pushing SOA and Web Services for quite some time now. REST, which stands for [Representational State Transfer], allows static and dynamic content to be passed over standard HTTP. SOAP, which originally stood for [Simple Object Access Protocol] and was later renamed "Service Oriented Architecture Protocol", takes this one step further, allowing different applications to send "envelopes" containing messages and data between applications using HTTP, RPC, SMTP and a variety of other underlying protocols. Typically, these messages are simple text surrounded by XML tags, easily stored as files or rows in databases, and served up by SOAP nodes as needed.
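To make the REST idea concrete, here is a minimal sketch using Python's requests library; the endpoint URL is invented for illustration, not any particular vendor's object API:

```python
import requests

BASE = "https://storage.example.com/objects"   # hypothetical object-storage endpoint

# PUT stores an object at a URL; GET retrieves it. The state is carried
# in the representation itself, over plain HTTP verbs.
resp = requests.put(f"{BASE}/reports/q3.xml",
                    data="<report><quarter>Q3</quarter></report>",
                    headers={"Content-Type": "application/xml"})
resp.raise_for_status()

print(requests.get(f"{BASE}/reports/q3.xml").text)
```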
It's hard to show leadership until there are followers
IBM's leadership sometimes goes unnoticed until followers create "me, too!" offerings or establish similar business strategies. IBM's leadership in Cloud and Grid computing is no exception. Atmos is the latest me-too product offering in this space, trying to address pretty much the same challenges that SoFS and XIV were designed for.
So, perhaps EMC is thankful that IBM has already paved the way, breaking through the ice on their behalf. I am thankful that perhaps I won't have to deal with as much FUD about SoFS, GAM and XIV anymore.
Well it's Tuesday, and ["election day"] here in the USA, and again IBM has more announcements.
IBM announced [IBM Tivoli Key Lifecycle Manager v1.0] (TKLM) to manage encryption keys. This provides a graphical interface to manage encryption keys, including retention criteria when sharing keys with other companies.
TKLM is supported on AIX, Solaris, Windows, Red Hat and SUSE Linux; IBM plans to offer TKLM for z/OS in 2009. TKLM can be used with the Firefox or Internet Explorer web browsers. It incorporates the Encryption Key Manager (EKM) that IBM initially offered to support encryption keys for the TS1120, TS1130, and LTO-4 drives.
While this is needed today for tape, IBM positions this software to also manage the encryption keys for "Full Drive Encryption" (FDE) disk drive modules (DDM) in IBM disk systems in 2009.
There's some good discussion in the comments section over at Robin Harris' StorageMojo blog for his post [Building a 1.8 Exabyte Data Center]. To summarize, a student working on a research archive asked Robin Harris for his opinion. The archive will consist of 20-40 million files averaging 90 GB in size each, for a total of 1800 PB, or 1.8 EB. By comparison, an IBM DS8300 with five frames tops out at 512TB, so it would take nearly 3600 of these to hold 1.8 EB. While this might seem like a ridiculous amount of data, I think the discussion is valid, as our world is certainly headed in that direction.
IBM works with a lot of research firms, and the solution is to put most of this data on tape, with just enough disk for specific analysis. Robin mentions a configuration with Sun Fire 4540 disk systems (aka Thumper). Despite Sun Microsystems' recent [$1.7 Billion dollar quarterly loss], I think even the experts at Sun would recommend a blended disk-and-tape solution for this situation.
Take for example IBM's Scale Out File Services [SoFS], which today handles 2-3 billion files in a single global file system, so 20-40 million would present no problem. SoFS supports a mix of disk and tape, with built-in movement, so that files are automatically moved to disk when referenced, and moved back to tape when no longer required, based on policies set by the administrator. Depending on the analysis, you may only need 1 PB or less of disk to perform the work, which can easily be accomplished with a handful of disk systems, such as the IBM DS8300 or IBM XIV, for example.
The rest would be on tape. Let's consider using the IBM TS3500 with [S24 High Density] frames. A single TS3500 tape library with fifteen of these HD frames could hold 45PB of data, assuming 3:1 compression on 1TB-size 3592 cartridges. You would need 40 (forty) of these libraries to get to the full 1800 PB required, and these could hold even more as higher capacity cartridges are developed. IBM has customers with over 40 tape libraries today (not all with these HD frames, of course), so the dimensions and scale required here are within IBM's capability.
(For LTO fans, fifteen S54 frames would hold 32PB of data, assuming 2:1 compression on 800GB-size LTO-4 cartridges, so you would need 57 libraries instead of 40 in the above example.)
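The library counts above fall out of simple division; re-deriving them from the figures in the post:

```python
import math

ARCHIVE_PB = 1800    # 20-40 million files at ~90 GB each

# TS3500 with fifteen S24 HD frames: 45 PB per library
# (3:1 compression on 1TB 3592 cartridges)
print(math.ceil(ARCHIVE_PB / 45))            # 40 libraries

# LTO alternative with fifteen S54 frames: 32 PB per library
# (2:1 compression on 800GB LTO-4 cartridges)
print(math.ceil(ARCHIVE_PB / 32))            # 57 libraries

# All-disk comparison: a five-frame DS8300 tops out at 512 TB
print(math.ceil(ARCHIVE_PB * 1000 / 512))    # 3,516 -- "nearly 3600" DS8300s
```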
This blended disk-and-tape approach would drastically reduce the floorspace and electricity requirements when compared against all-disk configurations discussed in the post.
People are rediscovering tape in a whole new light. ComputerWorld recently came out with an 11-page Technology Brief titled [The Business Value of Tape Storage], sponsored by Dell. (Note: While Dell is a competitor to IBM for some aspects of their business, they OEM their tape storage systems from IBM, so in that respect, I can refer to them as a technology partner.) Here are some excerpts from the ComputerWorld brief:
For IT managers, the question is not whether to use tape, but where and how to best use tape as part of a comprehensive, tiered storage architecture. In the modern storage architecture, tape plays a role not only in data backup, but also in long-term archiving and compliance.
“Long-term archiving is the primary reason any company should use tape these days,” says Mike Karp, senior analyst at Enterprise Management Associates in Boulder, Colo. Companies are increasingly likely to use disk in conjunction with tape for backup, but for long-term archiving needs, tape remains unbeatable.
After factoring in acquisition costs of equipment and media, as well as electricity and data center floor space, Clipper Group found that the total cost of archiving solutions based on SATA disk, the least expensive disk, was up to 23 times more expensive than archiving solutions involving tape. Calculating energy costs for the competing approaches, the costs for disk jumped to 290 times that of tape.
“Tape is always the winner anywhere cost trumps anything else,” says Karp. No matter how the cost is figured, tape is less expensive.
Beyond IT familiarity with tape, analysts point to other reasons why organizations will likely keep tape in their IT storage infrastructures. Energy savings, for example, is the most recent reason to stick with tape. “The economics of tape are pretty compelling, especially when you figure in the cost of power,” Schulz says.
So, whether you are planning for an Exabyte-scale data center, or merely questioning the logic of a disk-for-everything storage approach, you might want to consider tape. It's "green" for the environment, and less expensive on your budget.
Perhaps the recent financial meltdown is making storage vendors nervous. Both IBM and EMC gained market share in 3Q08, but EMC is reacting strangely to IBM's latest series of plays and announcements. Almost contradictory!
Benchmarks bad, rely on your own in-house evaluations instead
Let's start with fellow blogger Barry Burke from EMC, who offers his latest post [Benchmarketing Badly] with commentary about Enterprise Strategy Group's [DS5300 Lab Validation Report]. The IBM System Storage DS5300 is one of IBM's latest midrange disk systems, recently announced. Take for example this excerpt from BarryB's blog post:
"I was pleasantly surprised to learn that both IBM and ESG agree with me about the relevance and importance of the Storage Performance Council benchmarks.
That is, SPC's are a meaningless tool by which to measure or compare enterprise storage arrays."
Nowhere does the ESG report say this, nor have I found any public statement from either IBM or ESG making this claim. Instead, the ESG report explains that traditional benchmarks from the Storage Performance Council [SPC] focus on a single, specific workload, and ESG has chosen to complement this with a variety of other benchmarks to perform their product validation, including VMware's "VMmark", Oracle's Orion Utility, and Microsoft's JetStress.
Benchmarks provide prospective clients additional information to make purchase decisions. IBM understands this, ESG understands this, and other well-respected companies like VMware, Oracle and Microsoft understand this. EMC is afraid that benchmarks might encourage a client to "mistakenly" purchase a faster IBM product instead of a slower EMC product. Sunshine makes a great disinfectant, but EMC (and vampires) prefer their respective "prospects" remain in the dark.
Perhaps stranger still is BarryB's postscript. Here's an excerpt:
"... a customer here asked me if EMC would be willing to participate in an initiative to get multiple storage vendors to collaborate on truly representative real-world "enterprise-class" benchmarks, and I reassured him that I would personally sponsor active and objective participation in such an effort - IF he could get the others to join in with similar intent."
As I understand it, EMC was once part of the Storage Performance Council a long time ago, then chose to drop out. Why re-invent the wheel by creating yet another storage industry benchmark group? EMC is welcome to come back to the SPC anytime! In addition to the SPC-1 and SPC-2 workloads, there is work underway for an SPC-3 benchmark. Each SPC workload provides additional insight for product comparisons to help with purchase decisions. If EMC can suggest an SPC-4 benchmark that it feels is more representative of real-world conditions, they are welcome to join the SPC party and make that a reality.
The old adage applies: ["It's better to light a candle than curse the darkness"]. EMC has been cursing the lack of what it considers to be acceptable benchmarks, but has yet to offer anything more realistic or representative than SPC. What does EMC suggest you do instead? Get an evaluation box and run your own workloads and see for yourself! EMC has in the past offered evaluation units specifically for this purpose.
In-house evaluations bad, it's a trap!
Certainly, if you have the time and staff to run your own evaluation, with your own applications in your own environment, then I agree with EMC that this can provide better insight for your particular situation than standardized benchmarks.
In fact, that is exactly what IBM is doing for IBM XIV storage units, which are designed for Web 2.0 and Digital Archive workloads that current SPC benchmarks don't focus on. Fellow blogger Chuck Hollis from EMC opines in his post [Get yer free XIV!]. Here's an excerpt:
"Now that I think about it, this could get ugly. Imagine a customer who puts one on the floor to evaluate it, and -- in a moment of desperation or inattention -- puts production data on the device.
Nobody was paying attention, and there you are. Now IBM comes calling for their box back, and you've got a choice as to whether to go ahead and sign the P.O., or migrate all your data off the thing. Maybe they'll sell you an SVC to do this?
Yuck. I bet that happens more than once. And I can't believe that IBM (or the folks at XIV) aren't aware of this potentially happening."
Perhaps Chuck is speaking from experience here, as this may have happened with EMC evaluation boxes, and he is afraid this could happen with IBM XIV. I don't see anything unique to IBM XIV in the above concern. A typical evaluation involves copying test data onto the box, testing it out with some particular application or workload, and then deleting the data when no longer required. Repeat as needed. Moving data off an IBM XIV is as easy as moving data off an EMC DMX, EMC CLARiiON or EMC Celerra, and I am sure IBM would gladly demonstrate this on any EMC gear you now have.
Thanks to its clever RAID-X implementation, losing data on an IBM XIV is less likely than losing data on any RAID-5 based disk array from any storage vendor. Of course, there will always be skeptics about new technology who will want to try the box out for themselves.
If EMC thought the IBM XIV had nothing unique to offer, that its performance was just "OK", and that it is not as easy to manage as IBM says it is, then you would think EMC would gladly encourage such evaluations and comparisons, right?
No, I think EMC is afraid that companies will discover what EMC already knows: that IBM has quality products that would stand a fair chance in side-by-side comparisons with their own offerings. We have enough fear, uncertainty and doubt from the current meltdown of the global financial markets; don't let EMC add any more.
Have a safe and fun Halloween! If you need to add some light to your otherwise dark surroundings, consider some of these ideas for [Jack-O-Lanterns]!
Well, it's Tuesday again, and that means more IBM announcements!
Storage Area Network (SAN)
IBM and Cisco announced [three new blades] for the Cisco MDS 9500 series directors: 24-port 8 Gbps, 48-port 8 Gbps, and 4/44 blended. The 4/44 blended blade has 4 of the faster 8 Gbps ports, and 44 of the 4 Gbps ports that can auto-negotiate down to 1 Gbps for your older gear, so you can still take advantage of the faster 8 Gbps speeds during the transition.
On the Brocade side, IBM announced the new IBM System Storage Data Center Fabric Manager [DCFM] V10 software. This replaces the products formerly known as Brocade Fabric Manager and McDATA Enterprise Fabric Connectivity Manager (EFCM). This software can support up to 24 distinct fabrics and up to 9000 ports, including a mix of FCP, FICON, FCIP and iSCSI protocols.
(On a related note, I heard that Microsoft is planning to rename "Windows Vista" to "Windows 7" next year! Like we say here in Tucson, if it ends in "-ista" it is going to fail in the marketplace! Perhaps EMC should rename their storage virtualization product to "In-7"?)
IBM System Storage DR550
IBM announced today that it now supports [RAID 6 on the DR550], its compliance and retention storage system.
There are a few RAID-5 based EMC Centera customers out there who have not yet switched over to the IBM DR550, and this might be just the little nudge they need. For long-term retention of regulatory compliance data, RAID-5 doesn't cut it; you need an advanced RAID scheme, such as RAID-6, RAID-DP or RAID-X.
The DR550 provides non-erasable, non-rewriteable (NENR) storage support to keep retention-managed data on disk and tape media. It supports 1TB SATA disk drives and 1TB tape cartridges to provide high capacity at low cost and "green" low energy consumption.
IBM System Storage N series
Several of our disk systems have been improved and enhanced. Let's start with the IBM System Storage N series [hardware and software] enhancements. IBM now offers high-speed 450GB 15K RPM drives: Fibre Channel (FC) drives for the EXN4000 expansion drawers, and Serial Attached SCSI (SAS) drives for the entry-level N3300 and N3600 models.
The "gateway" models now support a variety of functions that were formerlyonly available on the appliance models. This includes Advanced Single Instance Storage (A-SIS), Disk Sanitization, and FlexScale.
A-SIS is IBM's "other" deduplication function, which I talked about in my post [A-SIS Storage Savings Estimator Tool]. Disk Sanitization physically writes ones and zeros over existing data to eliminate it, what IBM sometimes calls "Data Shredding".
The last feature, FlexScale, might be new for many. It is software to enable the use of the "Performance Accelerator Module" (PAM). The PAM is a PCI-Express card with 16GB of on-board RAM that acts as a secondary cache behind the main memory of the N series controller. Depending on the model, you can fit one to five of these cards into the controller itself, boosting random read performance, metadata access, and write block destage.
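Conceptually, the PAM behaves like a second cache tier behind controller memory. The actual FlexScale caching policy is IBM's own; purely as a mental model, here is a toy two-tier LRU cache in Python, where blocks evicted from the "RAM" tier are demoted to the "PAM" tier rather than discarded:

    from collections import OrderedDict

    class TwoTierCache:
        """Toy model: small fast tier (controller RAM) backed by a larger tier (PAM)."""
        def __init__(self, ram_blocks, pam_blocks):
            self.ram = OrderedDict()     # tier 1: main memory
            self.pam = OrderedDict()     # tier 2: PCI-Express card
            self.ram_cap, self.pam_cap = ram_blocks, pam_blocks

        def read(self, block, fetch_from_disk):
            if block in self.ram:                 # tier-1 hit
                self.ram.move_to_end(block)
                return self.ram[block]
            if block in self.pam:                 # tier-2 hit: promote back to RAM
                data = self.pam.pop(block)
            else:                                 # miss: go to disk
                data = fetch_from_disk(block)
            self._insert_ram(block, data)
            return data

        def _insert_ram(self, block, data):
            self.ram[block] = data
            if len(self.ram) > self.ram_cap:      # RAM eviction demotes to PAM
                old_block, old_data = self.ram.popitem(last=False)
                self.pam[old_block] = old_data
                if len(self.pam) > self.pam_cap:  # PAM eviction discards
                    self.pam.popitem(last=False)

    cache = TwoTierCache(ram_blocks=2, pam_blocks=4)
    print(cache.read(7, fetch_from_disk=lambda b: "block-%d" % b))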
IBM System Storage DS5000
IBM's latest entry into the DS family has been hugely successful. In addition to Linux, Windows and AIX, the DS5000 now supports the [Novell NetWare and Sun Solaris] operating systems.
For infrastructure management, the Remote Support Manager [RSM] that supports the DS3000 and DS4000 has been extended to support the DS5000 as well. This software can monitor up to 50 disk systems, e-mails alerts to IBM when something goes wrong, and allows IBM to dial in via modem to gather diagnostic information to improve service to the client. Also, the IBM System Storage Productivity Center [SSPC], which already supports the DS8000 and SAN Volume Controller (SVC), has been extended to also support the DS5000.
IBM XIV Storage System
In addition to 1-year and 3-year maintenance agreements, IBM now offers [2-year, 4-year and 5-year] software maintenance agreements.
RFID labels for IBM tape media
The IBM 3589 (20-pack of LTO cartridges) and IBM 3599 (20-pack of 3592 cartridges for the TS1100 series) now offer [RFID labels]. These labels match the volume serial (VOLSER) with a 216-bit unique identifier and 256 bits of user-defined content. This can help with tape inventory, and to prevent people from walking out of the building with a tape cartridge stuffed in their jacket.
32GB memory stick
While not technically part of the IBM System Storage matrix of offerings, Lenovo announced their new [Essential Memory Key], which holds 32GB of memory and works with both USB 1.1 and USB 2.0 protocols.
I wish I could say this is it for the IBM announcements for October, given that this is the last Tuesday of the month, but there are three days left, so there might be just a few more!
Last month, HP and Oracle jointly announced their new "Exadata Storage Server". This solution pairs HP servers and storage with Oracle software, designed for Data Warehouse and Business Intelligence (DW/BI) workloads.
I immediately recognized the Exadata Storage Server as a "me too" product, copying the idea from IBM's [InfoSphere Balanced Warehouse], which combines IBM servers, IBM storage and IBM's DB2 database software to accomplish this, but from a single vendor, rather than a collaboration of two vendors. The Balanced Warehouse has been around for a while. I even blogged about it last year, in my post [IBM Combo trounces HP and Sun], when IBM announced its latest E7100 model. IBM offers three different sizes: C-class for smaller SMB workloads, D-class for moderate size workloads, and E-class for large enterprise workloads.
One would think that since IBM and Oracle are the top two database software vendors, and IBM and HP are the top two storage hardware vendors, IBM would be upset or nervous about this announcement. We're not. I would gladly recommend comparing IBM offerings with anything HP and Oracle have to offer. And with IBM's acquisition of Cognos, IBM has made a bold statement that it is serious about competing in the DW/BI market space.
But apparently, it struck a nerve over at EMC.
Fellow blogger Chuck Hollis from EMC went on the attack, and Oracle blogger Kevin Closson went on the defensive. For those readers who do not follow either, here is the latest chain of events:
When it comes to blog fights like these, there are no clear winners or losers, but if done respectfully, they can benefit everyone involved, giving readers insight into the products as well as the company cultures that produce them. Let's see how each side fared:
Chuck implies that HP doesn't understand databases and Oracle doesn't understand server and storage hardware, so cobbling together a solution based on this two-vendor collaboration doesn't make sense to him. The few people I know who work at HP and Oracle are smart, so I suspect this is more a claim about each company's "core strengths". Few would associate HP with database knowledge, or Oracle with hardware expertise, so I give Chuck a point on this one.
Of course, Chuck doesn't have deep, inside knowledge of this new offering, nor do I for that matter, and Kevin is patient enough to correct all of Chuck's mistaken assumptions and assertions. Kevin understands that EMC's "core strengths" aren't in servers or databases, so he explains things in simple enough terms that EMC employees can understand. I give Kevin a point on this one.
If two vendors are bad, then three are worse! How much bubble gum and baling wire do you need in your data center? The better option is to go to the one company that offers it all and brings it together into a single solution: IBM InfoSphere Balanced Warehouse.
Well, it's Tuesday again, and that means more announcements from IBM!
In conjunction with IBM's new [System z10 Business Class (BC)] mainframe designed for Small and Medium-sized Businesses (SMB), IBM also announced related storage product enhancements.
Yes, it's alive! Contrary to the FUD you might have read from our competitors, IBM continues to sell thousands and thousands of IBM System Storage DS6800 disk systems, and now enhances them with the option for 450GB 15K RPM drives. What is nice about these 450GB drives is that they are as fast or faster* than the 300GB drives, so the typical trade-off between performance and capacity does not apply.
(* I compared the Seagate 15.6K (450GB) and 15.5K (300GB) models on average and full seek times, for both reads and writes, and the 450GB model was as fast or faster on each.
This may or may not result in application performance improvements, depending on workload pattern. Your mileage may vary.)
Our clients report back that these are incredibly stable systems that they don't have to worry about. This enhancement applies to both the [511/EX1 models] and [522/EX2 models].
Understanding that clients want complete solutions from single vendors, IBM offers synergy between System z and the IBM System Storage DS8000 disk systems. The latest R4.1 microcode upgrade offers two key features on the various models [2107,
zHPF - High Performance FICON for System z. IBM was able to increase the throughput on 4 Gbps links. For OLTP workloads randomly accessing 4KB blocks, IBM internal tests showed zHPF doubled performance from 13,000 IOPS to 26,000 IOPS per channel. For sequential workloads, such as batch processing, zHPF increased performance 50 percent, from 350 MB/sec to 525 MB/sec.
In February, IBM previewed [Incremental Resync] for z/OS Metro Global Mirror. However, some concepts are better explained with pictures.
One way to set up 3-site disaster recovery protection is to have your production data synchronously mirrored to a second site nearby, and at the same time asynchronously mirrored to a remote location. On the System z, you can have site "A" using synchronous IBM System Storage Metro Mirror over to nearby site "B", and also have site "A" sending data over to site "C" asynchronously using z/OS Global Mirror. This is called "z/OS Metro Global Mirror".
In the past, if the disk system in site A failed, you would switch over to site B, which would have to resend all the data to site C to be resynchronized. This is because site B was not tracking what the System Data Mover (SDM) reader had or had not yet processed.
With DS8000 R4.1, the "Incremental Resync" function, used along with IBM HyperSwap, requires site B to send and resync only the data that was in-flight when the outage occurred. When you compare this limited amount of in-flight data with the traditional complete volume of data, you can see how "Incremental Resync" can resynchronize the data 95 percent faster, and also greatly decrease your bandwidth requirements. This reduces the risk in case a subsequent outage occurs.
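The underlying idea is change recording: while mirroring is interrupted, keep track of which chunks were modified, then resend only those instead of the whole volume. As a minimal sketch of the concept (hypothetical names, not the DS8000 implementation):

    class ChangeRecorder:
        """Toy change-recording for an incremental resync."""
        def __init__(self):
            self.dirty = set()          # chunk numbers modified since the suspend

        def record_write(self, chunk):
            self.dirty.add(chunk)

        def resync(self, copy_chunk):
            for chunk in sorted(self.dirty):
                copy_chunk(chunk)       # send only the changed/in-flight chunks
            self.dirty.clear()

    recorder = ChangeRecorder()
    recorder.record_write(17)
    recorder.record_write(42)
    recorder.resync(lambda c: print("resending chunk %d to site C" % c))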
Introduced originally in 1997 as the IBM Virtual Tape Server (VTS), the [IBM System Storage TS7700] series supports Grid capability to replicate tape image data across locations. Here's a quick recap of today's announcement:
Existing TS7740 systems can be upgraded to as much as 9TB of disk cache. New models can have up to 13TB of disk cache.
A new "tape-less" TS7720 that has up to 70TB of disk cache.
Integrated Library Management support. I discussed [Integrated Removable Media Manager (IRMM)] before, and this is basically IRMM inside. For those with TS3500 tape libraries, this support eliminates the need for a separate IBM 3953 L05 Library Manager.
TS1130 back-end tape drive support. These are the fastest 1TB drives in the industry, with built-in encryption support, and they can now be used as the physical tape back-end for the virtual tape TS7740 repository.
While our competitors might be boarding up their windows in preparation for the economic downturn in the USA, IBM continues to generate solid results. The San Jose Mercury News has an article that discusses this, titled [IBM's 3Q profit strong on global sales]. There has never been a better time to buy from, or invest in, IBM!
IBM hired independent analyst firm Enterprise Strategy Group [ESG] to validate the box and run workload-specific benchmarks. I agree with Chris: the results are impressive! The report includes results from the Microsoft Exchange JetStress tool to provide insight into email performance, and another benchmark to simulate Web server IOPS.
Also, the published SPC-1 benchmark for the DS5300 puts it at about a 29 percent improvement over the DS4800. Chris argues the DS5300 is similar in class to the NetApp FAS3170, which IBM sells as the IBM System Storage N6070.
If you are interested in either the DS5300 or N6070, contact your local IBM Business Partner or sales rep.
Lakota Industries made news with the introduction of its [Sarah-Cuda Hunting Bow], named after moose-hunting U.S. Vice Presidential nominee and Governor of Alaska [Sarah Palin]. It has all the same features as their other high-end hunting bows, but is lighter, smaller and available in Pink Camo. This "pink-it-and-shrink-it" move was designed to broaden the market for hunting bows by reaching out to the needs of women hunters.
Not to be outdone, today, at the Storage Networking World Conference, IBM announced the new IBM System Storage SAN Volume Controller Entry Edition [SVC EE].
The new SVC Entry Edition, available in Flamingo Pink* or traditional Raven Black.
* RPQ required. Default color is Raven Black.
You might be thinking: "Wait! IBM SVC is already the leading storage virtualization product among SMB clients today, why introduce a less expensive model?" With the global economy in the tank, IBM thought it would be nice to help out our smaller SMB clients with this new option.
This new offering is actually a combination of new software (SVC 4.3.1) and new hardware (2145-8A4). Here are the key differences:
SVC Classic:
Licensed by usable capacity managed, up to 8 PB
Runs on 2145-4F2, 8F2, 8F4, 8G4 or 8A4 hardware
1, 2, 3 or 4 node-pairs, depending on performance requirements
FlashCopy, Metro Mirror and Global Mirror, licensed by subset of capacity used

SVC Entry Edition:
Licensed by number of disk drives, up to 60 drives
Runs on the new 2145-8A4 hardware
Only one node-pair needed
FlashCopy, Metro Mirror and Global Mirror, but with simplified licensing
The SVC EE is not a "dumbed-down" version of the SVC Classic. It has all the features and functions of the SVC Classic, including thin provisioning with "Space-efficient volumes", Quality of Service (QoS) performance prioritization for more important applications, point-in-time FlashCopy, and both synchronous and asynchronous disk mirroring (Metro and Global Mirror).
While IBM has not yet published SPC-1 benchmarks for it, IBM is positioning the SVC EE at roughly 60 percent of the performance, and 60 percent of the list price, of a comparable SVC Classic 2145-8G4 configuration. The SVC Classic is already one of the fastest disk systems in the industry; by comparison, the SVC EE is twice as fast as the original SVC 2145-4F2 introduced five years ago. If you outgrow the SVC EE, no problem! The 2145-8A4 can be used in traditional SVC Classic mode, and the SVC EE software license can be converted into an SVC Classic software license for upgrade purposes, protecting your original investment!
For those considering an HP EVA 4400 or EMC CX-4 disk system, you might want to look at combining an SVC EE with [IBM System Storage DS3400] disk. The combination offers more features and capabilities, and helps reduce your IT costs at the same time.
And if you are worried you can't afford it right now, IBM Global Financing is offering a ["Why Wait?" world-wide deferral of interest and payments] for 90 days, so you don't have to make your first payment until 2009, applicable to all IBM System Storage products, including the SVC EE, SVC Classic and DS3400 disk systems.
Well, it's Tuesday, and more IBM announcements were made today. Many of my colleagues are in Dallas, Texas for the [Storage Networking World conference], and hopefully I will get some feedback from them before the week is over.
Today, IBM made announcements for Storage Area Networking (SAN) gear and disk systems.
8 Gbps Longwave transceivers
IBM now offers 8 Gbps Longwave SFP transceivers on the [IBM System Storage SAN256B and SAN768B] directors, as well as the IBM System Storage SAN24B-4 Express, SAN40B-4, and SAN80B-4 switches (orderable as [machine type models] or [part numbers]). These transceivers support single mode fiber up to 10km in distance, compared to the 50-75 meters supported by the Shortwave SFP transceivers.
Like the Shortwave SFP transceivers we already have available, these Longwave transceivers have "N-2" support, which means they can support two generations back: auto-negotiating down to 4 Gbps and 2 Gbps speeds. If you still have 1 Gbps equipment, now is a good time to consider upgrading it, or keep a few 4 Gbps ports available that can auto-negotiate down to 1 Gbps speed.
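The "N-2" rule is easy to express in code. Here is a small hypothetical helper, just to make the compatibility matrix concrete; the function name and structure are my own:

    def negotiated_speed(port_max_gbps, peer_gbps):
        """N-2 rule: a port supports its own speed and two generations back."""
        supported = {port_max_gbps, port_max_gbps // 2, port_max_gbps // 4}
        link = min(port_max_gbps, peer_gbps)
        return link if link in supported else None   # None = no link possible

    print(negotiated_speed(8, 4))   # -> 4: an 8 Gbps port reaches back to 4 Gbps
    print(negotiated_speed(8, 1))   # -> None: 1 Gbps gear is three generations back
    print(negotiated_speed(4, 1))   # -> 1: a 4 Gbps port still reaches 1 Gbps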
Mainframe clients that send data to a remote Business Continuity/Disaster Recovery (BC/DR) location have often used "channel extenders", special boxes that minimize performance delays when transmitting FICON across long distances. This is especially helpful for z/OS Global Mirror (what we used to call XRC) as well as electronic vaulting to tape.
Now, this functionality can be part of the directors and routers, eliminating the need for separate equipment. This is available for the SAN768B and SAN256B directors, as well as the SAN18B-R and SAN04B-R routers.
Before the merger between Brocade and McDATA, IBM offered the SAN18B-R router from Brocade, and the SAN04M-R router from McDATA. The former had 16 Fibre Channel (FC) ports and two Ethernet ports; the latter was less expensive with just four ports. Brocade came up with a clever replacement for both. The [IBM System Storage SAN04B-R] router comes by default with two active FC ports and two Ethernet ports, but also with 14 additional FC ports inactive. A "High Performance Extension" feature activates these additional ports, bringing the SAN04B-R up to the SAN18B-R level, and allows it to support the FICON Accelerator feature above.
So, instead of having specialized channel extenders at both primary and secondary sites, you can have a director with FICON Accelerator at the primary site, sending FICON over Ethernet to a 1U-high router (also running the FICON Accelerator) at the secondary site, which can greatly reduce costs. The FICON Accelerator can in some cases double the amount of data transfer throughput, but of course, your mileage may vary.
On the disk side, the [IBM System Storage DS3000 series] disk systems have been enhanced, with support for 450GB high-speed 15K RPM SAS drives, RAID-6 double-drive protection, more FlashCopy point-in-time copies, and more partitions. On the DS3000, "storage partitions" are what the rest of the industry calls "LUN masking". A storage partition allows you to isolate a set of LUNs so they are seen only by a single host server, or a host cluster that shares the same set of LUNs. Some clients felt that the default of four partitions was too low, so now up to 32 partitions can be configured. (This is not to be confused with the "Logical Partitions" that isolate processor and cache resources on the IBM System Storage DS8000 and other high-end disk systems.)
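If "storage partition" is a new term for you, here is a toy illustration of the LUN-masking idea in Python; the partition names and LUN labels are mine, purely to show the concept:

    # Each partition exposes a set of LUNs to exactly one host or host cluster.
    partitions = {
        "prod-db-cluster": {"lun0", "lun1", "lun2"},
        "mail-server":     {"lun3"},
        "backup-host":     {"lun4", "lun5"},
    }

    def visible_luns(host):
        """A host sees only the LUNs in its own partition; all others stay hidden."""
        return partitions.get(host, set())

    print(visible_luns("mail-server"))   # -> {'lun3'}; lun0-lun2 remain invisible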
IBM also extended the operating system support. The DS3000 series now supports Solaris, on either x86 or SPARC-based servers. The DS3300's iSCSI support now extends to Linux on POWER. The DS3400 supports IBM i (the new name for i5/OS V6R1) through the VIOS feature.
The [IBM System Storage DCS9900] is a bigger, faster version of the DCS9550. Like the DCS9550, the DCS9900 is designed for high performance computing (HPC) workloads. The DCS9550 supported up to 960TB in two frames, with 2.8 GB/sec throughput, and an optional disk spin-down capability. The new DCS9900 can support up to 1.2 PB in two frames, with 5.6 GB/sec throughput, but no spin-down capability.
So whether your data center is filled with System z mainframes, or other open systems, IBM has a solution for you.
Well, it's Tuesday again, which means IBM announcement day. With the [big launches] we have had this year, there might be some confusion about the IBM terminology for how announcements are handled. Basically, there are three levels:
Technology demonstrations show IBM's leadership, innovation and investment direction, without having to detail a specific product offering. Last month's [Project Quicksilver], for example, demonstrated the ability to handle over 1 million IOPS with Solid State Disk. IBM is committed to developing solid state storage to create real-world uses across a broad range of applications, middleware, and systems offerings.
A preview announcement does entail a specific product offering, but may not necessarily include pricing, packagingor specific availability dates.
An announcement also entails a specific product offering, and does include pricing, packaging and specific availability dates.
With our September 8 launch of the IBM Information Infrastructure strategic initiative, there was a mix of all three of these. Many of the preview announcements will be followed up with full announcements later this year. Today, IBM Tivoli Advanced Backup and Recovery for z/OS v2.1 was announced.
Note: If you don't use z/OS on a System z mainframe, you can stop reading now.
As many of my loyal readers know, I was lead architect for DFSMS until 2001, so functions related to DFSMS and z/OS are very near and dear to my heart. For Business Continuity, IBM created Aggregate Backup and Recovery Support (ABARS) as part of the DFSMShsm component. This feature creates a self-contained backup image from data that can be either on disk or tape, including migrated data. In the event of a disaster, an ABARS backup image can be used to bring back just the exact programs and data needed for a specific application, speeding up the recovery process, and allowing BC/DR plans to prioritize what is most important.
To help manage ABARS, IBM has partnered with [Mainstar Software Corporation] to offer a product that helps before, during and after the ABARS processing.
ABARS requires the storage admin to have a "selection list" of data sets to process as an aggregate. IBM Tivoli Advanced Backup and Recovery for z/OS includes Mainstar® ASAP™ to help identify the appropriate data sets for specific applications, using information from job schedulers, JCL, and SMF records.
ABARS has two simple commands: ABACKUP to produce the backup image, and ARECOVER to recover it. However, if you have hundreds of aggregates, and each aggregate has several backups, you may need some help identifying which image to recover from. IBM Tivoli Advanced Backup and Recovery for z/OS includes Mainstar® ABARS Manager™ to present a list of this information, making it easy to choose from. To help prep the ICF Catalogs, there is a CATSCRUB feature for either "empty" or "full" catalog recovery at the recovery site.
The fact that storage admins may not be intimately familiar with the applications they are backing up is a common source of human error. IBM Tivoli Advanced Backup and Recovery for z/OS includes Mainstar® All/Star™ to help validate that the data sets processed by ABACKUP are complete, to support any regulatory audit or application team verification. This critical data tracking and inventory reporting not only identifies what isn't backed up, so you can ensure you are not missing critical data, but can also identify which data sets are being backed up multiple times by more than one utility, so you can reduce redundant backups.
With v2.1 of Tivoli Advanced Backup and Recovery for z/OS, IBM has integrated Tivoli Enterprise Portal (TEP) support. This allows you to access these functions through the IBM Tivoli Monitoring v6 GUI on a Linux, UNIX or Windows workstation. IBM Tivoli Monitoring has full support to integrate Web 2.0, multi-media and frames. This means that any other product that can be rendered in a browser can be embedded and supported with launch-in-context capability.
(If you have not separately purchased a license for IBM Tivoli Monitoring V6.2, don't worry; you can obtain the TEP-based function by acquiring a no-charge, limited use license for IBM Tivoli Monitoring Services on z/OS, V6.2.)
In addition to supporting IBM's many DFSMS backup methods, from ABARS to IDCAMS to IEBGENER, IBM Tivoli Advanced Backup and Recovery v2.1 can also support third-party products from Innovation Data Processing and Computer Associates.
As many people re-discover the mainframe as the cost-effective platform it has always been, migrating applications back to the mainframe to reduce costs, they need solutions that work across both mainframe and distributed systems during the transition. IBM Tivoli Advanced Backup and Recovery for z/OS can help.
"IBM announced that Northwest Radiology Network has gone live with a new virtualized enterprise of IBM servers and storage to support its growing medical imaging needs, giving its four locations an enterprise-class infrastructure which enables its doctors to recover medical image reports faster for analysis and enables remote 24x7 access to its medical image report system.
Founded in 1967, Northwest Radiology (NWR) is ranked as one of the largest physician groups in the Indianapolis, Indiana area. With 180 employees who offer the Central Indiana community comprehensive inpatient and outpatient imaging services such as mammography, ultrasonography, CT scans, PET-CT scans, bone density scans and MRIs – the Network had a dramatic need to develop a centralized infrastructure where large amounts of data could be stored and shared. A new data center would benefit the company’s clientele; which includes area hospitals and doctor’s offices serving thousands of patients each year.
Storing more than ten thousand medical imaging reports and radiographic images each month for doctors to analyze, the Network realized it had single points of failure and at one point a critical report server failed. Northwest Radiology turned to IBM and IBM Business Partner Software Information Systems (SIS) for a more efficient solution to prevent any possible downtime in the future.
SIS recommended and installed a virtualized infrastructure with IBM servers and storage as the heart of Northwest Radiology’s Indianapolis data center. By April 2007, Northwest Radiology replaced eight servers and direct attached storage with just two IBM System x3650 servers connected to an IBM System Storage DS3400. Today, the new servers run 15 virtual servers to ensure the availability of their services 24x7. When the business needs it, a new server can be provisioned in just minutes. With Fibre Channel SAN disk, the DS3400 not only increased performance but also met NWR’s requirement to not have one single point of failure. With three TB of storage capacity, they can meet the demands of increased business well into the future. The systems are also now easily managed from a remote site."
“Uptime is paramount in our business. We selected IBM based on the reliability and flexibility of IBM System x servers and the IBM System Storage DS3400,” said Marty Buening, IT Director, Northwest Radiology Network. “The virtualized infrastructure and the SAN storage array that SIS and IBM brought to the table is improving our service and giving our doctors and staff peace of mind knowing each patient’s medical imaging reports are always available.”
Second, we have [Iowa Health System], a large enterprise with over 19,000 employees, managing four million patients and hundreds of TBs of data.
Here is a 4-minute video on IBM TV from the good folks at Iowa Health System discussing the IBM Grid Medical Archive Solution (GMAS) as part of their information infrastructure for their Picture Archiving and Communication Systems (PACS) application.
In both cases, IBM technology was able to provide remote access to medical information, making images and patient records available to more doctors, specialists and radiologists. Last January, in my post [Five in Five], IBM had predicted that remote access to healthcare would have an impact over the next five years.
Whether you are a small company or a large one, IBM probably has the right solution for you.
Continuing this week's theme of customer references for IBM solutions, today I will discuss the success at Kantana Animation Studios.
Here is a 3-minute video from the good folks at Kantana Animation Studios, part of the [Kantana Group]. They produced the animated movie [Khan Kluay] using IBM Scale-out File Services (SoFS), a product IBM announced in November 2007.
As a film-maker myself (see this sample [Highlights clip]) and active member of the Tucson Film Society, I am pleased to see IBM so deeply involved in the film industry. I've had the pleasure of visiting some of these animation studios myself and meeting with other film-makers at various conferences.
For more details on Kantana's implementation, see the [Case Study].
Continuing my quest to "set the record straight" about the [IBM XIV Storage System] and IBM's other products, I find myself amused at some of the FUD out there. Some of it is almost as absurd as the following analogy:
Humans share over 50 percent of DNA with bananas. [source]
If you peel a banana, and put the slippery skin down on the sidewalk outside your office building, it could pose a risk to your employees.
If you peel a human, the human skin placed on the sidewalk in a similar manner might also pose similar risks.
Mr. Jones, who applied for the opening in your storage administration team, is a human being.
You wouldn't hire a banana to manage your storage, would you? This might be too risky!
The conclusion we are led to believe is that hiring Mr. Jones, a human being, is as risky as putting a banana peel down on the sidewalk. Some bloggers argue that they are merely making a series of factual observations, and letting their readers form their own conclusions. For example, the IBM XIV storage system has ECC-protected mirrored cache writes. Some false claims about this were [properly retracted] using strike-out font to show the correction made; other times the same statement appears in another post from the same blogger that [has not yet been retracted] (Update: it has now been corrected). Other bloggers borrow the false statement [for their own blog], perhaps not realizing the retractions were made elsewhere. Newspapers are unable to fix a previous edition, so are forced to publish retractions in future papers. With blogs, you can edit the original and post the changed version, annotated accordingly, so mistakes can be corrected quickly.
While it is possible to compare bananas and humans on a variety of metrics--weight, height, and dare I say it, caloric value--it misses the finer differences that set them apart. Humans might share 98 percent of their DNA with chimpanzees, but having an opposable thumb allows humans to do things that other animals cannot.
Full Disclosure: I am neither vegetarian nor cannibal, and harbor no ill will toward bananas or chimpanzees. No bananas or chimpanzees were harmed in the writing of this blog post. Any similarity between the fictitious Mr. Jones in the above analogy and actual persons, living or dead, is purely coincidental.
So let's take a look at some of IBM XIV Storage System's "opposable thumbs".
The IBM XIV system comes pre-formatted and ready to use. You don't have to spend weeks in meetings deciding between different RAID levels and then formatting different RAID ranks to match those decisions. Instead, you can start using the storage on the IBM XIV Storage System right away.
The IBM XIV offers consistent performance, balancing I/O evenly across all disk drive modules, even when performing SnapShot processing or recovering from a component failure. You don't have to separate data to prevent one workload from stealing bandwidth from another. You don't have to purchase extra software to determine where the "hot spots" are on the disk. You don't have to buy other software to help re-locate and re-separate the data to re-balance the I/Os. Instead, you just enjoy consistent performance.
The IBM XIV offers thin provisioning, allowing LUNs to grow as needed to accommodate business needs. You don't have to estimate or over-allocate space for planned future projects. You don't have to monitor whether a LUN is reaching 80 or 90 percent full. You don't have to carve larger and larger LUNs and schedule time on the weekends to move the data over to these new bigger spaces. Instead, you just write to the disk, monitoring the box as a whole, rather than individual LUNs.
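For those new to thin provisioning, the core idea fits in a few lines: the LUN advertises a large virtual size, but physical extents are allocated only when a block is first written. Here is a minimal sketch of that idea, not XIV's actual implementation:

    class ThinLun:
        """Toy thin-provisioned LUN: allocate physical extents on first write."""
        def __init__(self, virtual_blocks, extent_blocks=256):
            self.virtual_blocks = virtual_blocks
            self.extent_blocks = extent_blocks
            self.allocated = set()           # extents that physically exist

        def write(self, block, data):
            extent = block // self.extent_blocks
            self.allocated.add(extent)       # allocate on first touch
            # ... store data in the extent ...

        def physical_blocks_used(self):
            return len(self.allocated) * self.extent_blocks

    lun = ThinLun(virtual_blocks=1_000_000)  # host sees ~1M blocks
    lun.write(0, b"x")
    print(lun.physical_blocks_used())        # -> 256 blocks, not 1,000,000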
The IBM XIV Storage System's innovative RAID-X design allows drives to be replaced with drives of any larger or smaller capacity. You don't have to find the exact same 73GB 10K RPM drive to match the existing 73GB 10K RPM drive that failed. Some RAID systems allow "larger than original" substitutions, for example a 146GB drive to replace a 73GB drive, but the added capacity is wasted, because of the way most RAID levels work. The problem is that many failures happen 3-5 years out, and disk manufacturers move on to bigger capacities and different form factors, making it sometimes difficult to find an exact replacement, or forcing customers to keep their own stock of spare drives. Instead, with the IBM XIV architecture, you sleep well at night, knowing it allows future drive capacities to act as replacements, getting the full value and usage of that capacity.
In the case of the IBM XIV Storage System, it is not clear whether:
"Vendors" are those from IBM and IBM Business Partners, including bloggers like me employed by IBM,and "everybody else" includes IBM's immediate competitors, including bloggers employed by them.
-- or --
"Vendors" includes IBM and its competitors including any bloggers, so that "everybody else" refers instead to anyone not selling storage systems, but opinionated enough to not qualify as "objective third-party sources".
-- or --
"Vendors" includes official statements from IBM and its competitors, and "everybody else" refers to bloggerspresenting their own personal or professional opinions, that may or may not correspond to their employers.
That said, feel free to comment below on which of these you think the last two points of Steinhardt's rule are trying to capture. Certainly, I can't argue with the top two: a customer's own experience and the experiences of other customers, which I mentioned previously in my post [Deceptively Delicious].
In that light, here is a 5-minute video on IBM TV with a customer testimonial from the good folks at [NaviSite], one of our many customer references for the IBM XIV Storage System.
Last week, I presented IBM's strategic initiative, the IBM Information Infrastructure, which is part of IBM's New Enterprise Data Center vision. This week, I will try to get around to talking about some of the products that support those solutions.
I was going to set the record straight on a variety of misunderstandings, rumors and speculations, but I think most have been taken care of already. IBM blogger BarryW covered the fact that SVC now supports XIV storage systems in his post [SVC and XIV], and addressed some of the FUD already. Here was my list:
Now that IBM has an IBM-branded model of XIV, IBM will discontinue (insert another product here)
I had seen speculation that XIV meant the demise of the N series, the DS8000, or IBM's partnership with LSI. However, the launch reminded people that IBM announced a new release of DS8000 features, new N series N6000 models, and the new DS5000 disk, so that squashes those rumors.
IBM XIV is a (insert tier level here) product
While there seems to be no industry standard or agreement on what a tier-1, tier-2 or tier-3 disk system is, there seemed to be a lot of argument over what pigeon-hole category to put the IBM XIV in. No question many people want tier-1 performance and functionality at tier-2 prices, and perhaps IBM XIV is a good step at giving them this. In some circles, tier-1 means support for System z mainframes. The XIV does not have traditional z/OS CKD volume support, but Linux on System z partitions or guests can attach to XIV via the SAN Volume Controller (SVC), or through the NFS protocol as part of a Scale-Out File Services (SoFS) implementation.
Whenever any radical, game-changing technology comes along, competitors with last century's products and architectures want to frame the discussion as if it were just yet another storage system. IBM plans to update its Disk Magic and other planning/modeling tools to help people determine which workloads would be a good fit for XIV.
IBM XIV lacks (insert missing feature here) in the current release
I am glad to see that the accusations that XIV had unprotected, unmirrored cache were retracted. XIV mirrors all writes in the cache of two separate modules, with ECC protection. XIV allows concurrent code load for bug fixes to the software. XIV offers many of the features that people enjoy in other disk systems, such as thin provisioning, writeable snapshots, remote disk mirroring, and so on. IBM XIV can be part of a bigger solution, either through SVC, SoFS or GMAS, providing the business value customers are looking for.
IBM XIV uses (insert block mirroring here) and is not as efficient for capacity utilization
It is interesting that this came from a competitor that still recommends RAID-1 or RAID-10 for its CLARiiON and DMX products. On the IBM XIV, each 1MB chunk is written to two different disks in different modules. When disks were expensive, how much usable space you got for a given set of HDDs was worthy of argument. Today, we sell you a big black box, with 79TB usable, for (insert dollar figure here). For those who feel 79TB is too big to swallow all at once, IBM offers "capacity on demand" pricing, where you can pay initially for as little as 22TB, but get all the performance, usability, functionality and advanced availability of the full box.
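The distribution scheme is the interesting part: every 1MB chunk is placed twice, on disks in two different modules, with placement spread pseudo-randomly across the whole box so every spindle shares the load. Here is a toy sketch of that idea; the placement logic and module counts are my own illustration, not XIV's algorithm:

    import random

    MODULES = 15
    DISKS_PER_MODULE = 12

    def place_chunk(chunk_id):
        """Place two copies of a chunk on disks in two distinct modules."""
        rng = random.Random(chunk_id)            # deterministic per chunk
        m1, m2 = rng.sample(range(MODULES), 2)   # two different modules, always
        return ((m1, rng.randrange(DISKS_PER_MODULE)),
                (m2, rng.randrange(DISKS_PER_MODULE)))

    primary, mirror = place_chunk(12345)
    print(primary, mirror)   # the two copies never share a module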
IBM XIV consumes (insert number of Watts here) of energy
For every disk system, a portion of the energy is consumed by the hard disk drives (HDD), and the remainder by UPS, power conversion, processors and cache memory. Again, the XIV is a big black box, and you can compare the 8.4 KW of this high-performance, low-cost, one-frame storage system with the wattage consumed by competitive two-frame (sometimes called two-bay) systems, if you are willing to make some trade-offs. To get comparable performance and hot-spot avoidance, competitors may need to over-provision or use faster, energy-consuming FC drives, and offer additional software to monitor and re-balance workloads across RAID ranks. To get comparable availability, competitors may need to drop from RAID-5 down to either RAID-1 or RAID-6. To get comparable usability, competitors may need more storage infrastructure management software to hide the inherent complexity of their multi-RAID design.
Of course, if energy consumption is a major concern for you, XIV can be part of IBM's many blended disk-and-tape solutions. When it comes to being green, you can't get any greener storage than tape! Blended disk-and-tape solutions help get the best of both worlds.
Well, I am glad I could help set the record straight. Let me know which other products you would like me to focus on next.
This post will focus on Information Compliance, the fourth and final part of this week's four-part series. I have received a few queries on my choice of sequence for this series: Availability, Security, Retention and Compliance.
Why not have them in alphabetical order? IBM avoids alphabetizing in any one language, because the list may no longer be alphabetized when translated into other languages.
Why not have them in a sequence that spells out an easy-to-remember mnemonic, like "CARS"? Again, when translated to other languages, those mnemonics no longer work.
Instead, I worked with our marketing team for a more appropriate sequence, based on psychology and the cognitive bias of [primacy and recency effects].
Here's another short 2-minute video, on Information Compliance:
Full disclosure: I am not a lawyer. The following will delve into areas related to government and industry regulations. Consult your risk officer or legal counsel to make sure any IT solution is appropriate for your country, your industry, or your specific situation.
IBM estimates there are over 20,000 regulations worldwide related to information storage and transmission.
For information availability, some industry regulations mandate a secondary copy a minimum distance away to protect against regional disasters like hurricanes or tsunamis. IBM offers Metro Mirror (up to 300km) and Global Mirror (unlimited distance) disk mirroring to support these requirements.
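A quick aside on why synchronous mirroring has a distance limit at all: a synchronous write is not complete until the remote site acknowledges it, and light in fiber covers roughly 5 microseconds per kilometer each way. A rule-of-thumb calculation (an approximation of physics, not an IBM specification):

    def round_trip_ms(distance_km, us_per_km=5.0):
        """Added latency per synchronous write: there and back again."""
        return 2 * distance_km * us_per_km / 1000.0

    print(round_trip_ms(300))    # -> 3.0 ms added to every write at 300 km
    print(round_trip_ms(2000))   # -> 20.0 ms -- why long distances go asynchronous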
For information security, some regulations relate to privacy and the prevention of unauthorized access. Two prominent ones in the United States are:
Health Insurance Portability and Accountability Act (HIPAA) of 1996
HIPAA regulates health care providers, health plans, and health care clearinghouses in how they handle the privacy of patients' medical records. These regulations apply whether the information is on film, paper, or stored electronically. Obviously, electronic medical records are easier to keep private. Here is an excerpt from an article from [WebMD]:
"There are very good ways to protect data electronically. Although it sounds scary, it makes data more protected than current paper records. For example, think about someone looking at your medical chart in the hospital. It has a record of all that is happening -- lab results, doctor consultations, nursing notes, orders, prescriptions, etc. Anybody who opens it for whatever reason can see all of this information. But if the chart is an electronic record, it's easy to limit access to any of that. So a physical therapist writing physical therapy notes can only see information related to physical therapy. There is an opportunity with electronic records to limit information to those who really need to see it. It could in many ways allow more privacy than current paper records."
Gramm-Leach-Bliley Act (GLBA) of 1999
GLBA regulates the handling of sensitive customer information by banks, securities firms, insurance companies, and other financial service providers. Financial companies use tape encryption to comply with GLBA when sending tapes from one firm to another. IBM was the first to deliver tape drive encryption, with the TS1120, and then later with the LTO-4 and TS1130 tape drives.
For information retention, there are a lot of regulations that deal with how information is stored, in some cases requiring immutability to protect against unethical tampering, and when it can be discarded. Two prominent regulations in the United States are:
U.S. Securities and Exchange Commission (SEC) 17a-4 of 1997
In the past, the IT industry used the acronym "WORM", which stands for the "Write Once, Read Many" nature of certain media, like CDs, DVDs, optical and tape cartridges. Unfortunately, WORM does not apply to disk-based solutions, so IBM adopted the language from SEC 17a-4, which calls for storage that is "Non-Erasable, Non-Rewriteable", or NENR. This umbrella term applies to disk-based solutions, as well as tape and optical WORM media.
SEC 17a-4 indicates that broker/dealers and exchange members must preserve all electronic communications relating to the business of their firm for a specific period of time. During this time, the information must not be erased or re-written.
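Conceptually, NENR storage enforces a simple policy: reads always succeed, while deletes and rewrites are refused until the retention date passes. Here is a toy sketch of that policy in Python, illustrative only and not how any IBM product implements it:

    from datetime import datetime

    class NenrObject:
        """Toy non-erasable, non-rewriteable record."""
        def __init__(self, data, retain_until):
            self._data = data
            self.retain_until = retain_until    # e.g. the SEC 17a-4 period

        def read(self):
            return self._data                   # reads are always allowed

        def delete(self, now):
            if now < self.retain_until:
                raise PermissionError("retention period not yet expired")
            self._data = None

    record = NenrObject(b"trade confirmation", retain_until=datetime(2015, 1, 1))
    try:
        record.delete(now=datetime(2008, 11, 4))   # too early: refused
    except PermissionError as err:
        print("delete refused:", err)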
Sarbanes-Oxley (SOX) Act of 2002
SOX was born in the wake of [Enron and other corporate scandals]. It governs the way that financial information is stored, maintained and presented to investors, and disciplines those who break its rules. It applies only to public companies, i.e. those that offer their securities (stock shares, bonds, liabilities) to be sold to the public through a listing on a U.S. exchange, such as NASDAQ or NYSE.
SOX focuses on preventing CEOs and other executives from tampering with the financial records. To meet compliance, companies are turning to the [IBM System Storage DR550], which provides Non-erasable, Non-rewriteable (NENR) storage for financial records. Unlike competitive products like EMC Centera, which function mostly as space-heaters on the data center floor once they fill up, the DR550 can be configured as a blended disk-and-tape storage system, so that the most recent, most likely to be accessed data remains on disk, while the older, least likely to be accessed data is moved automatically to less expensive, more environment-friendly "green" tape media.
Did SOX hurt the United States' competitiveness? Critics feared that these new regulations would discourage new companies from going public. Ernst & Young found these fears did not come true, and published a study, [U.S. Record IPO Activity from 2006 Continues in 2007]. In fact, the improved confidence that SOX has given investors has given rise to similar legislation in other parts of the world: Euro-SOX for the European Union Investor Protection Act, and J-SOX, the Financial Instruments and Exchange Law, for Japan.
For those who only read the first and last paragraphs of each post, here is my recap: Information Compliance is ensuring that information is protected against regional disasters, unauthorized access, and unethical tampering, as required to meet industry and government regulations. Such regulations often apply even if the information is stored on traditional paper or film media, but compliance can often be handled more cost-effectively when the information is stored electronically. Appropriate IT governance can help maintain investor confidence.
In Monday's post, [IBM Information Infrastructure launches today], I explained how this strategic initiative fits into IBM's New Enterprise Data Center vision. The launch was presented at the IBM Storage and Storage Networking Symposium to over 400 attendees in Montpellier, France, with corresponding standing-room-only crowds in New York and Tokyo.
This post will focus on Information Retention, the third part of this week's four-part series.
Here's another short 2-minute video, on Information Retention:
Let's start with some interesting statistics. Fellow blogger Robin Harris on his StorageMojo blog has an interesting post, [Our changing file workloads], which discusses the findings of a study titled "Measurement and Analysis of Large-Scale Network File System Workloads" [14-page PDF]. This paper was a collaboration between researchers from the University of California Santa Cruz and our friends at NetApp. Here's an excerpt from the study:
Compared to Previous Studies:
Both of our workloads are more write-oriented. Read-to-write byte ratios have significantly decreased.
Read-write access patterns have increased 30-fold relative to read-only and write-only access patterns.
Most bytes are transferred in longer sequential runs. These runs are an order of magnitude larger.
Most bytes transferred are from larger files. File sizes are up to an order of magnitude larger.
Files live an order of magnitude longer. Fewer than 50 percent are deleted within a day of creation.
Files are rarely re-opened. Over 66 percent are re-opened only once, and 95 percent fewer than five times.
File re-opens are temporally related. Over 60 percent of re-opens occur within a minute of the first open.
A small fraction of clients account for a large fraction of file activity. Fewer than 1 percent of clients account for 50 percent of file requests.
Files are infrequently shared by more than one client. Over 76 percent of files are never opened by more than one client.
File sharing is rarely concurrent, and sharing is usually read-only. Only 5 percent of files opened by multiple clients are opened concurrently, and 90 percent of sharing is read-only.
Most file types do not have a common access pattern.
Why are files being kept ten times longer than before? Because the information still has value:
Provide historical context
Gain insight to specific situations, market segment demographics, or trends in the greater marketplace
Help innovate new ideas for products and services
Make better, smarter decisions
National Public Radio (NPR) had an interesting piece the other day. By analyzing old photos, a Cold War analysis researcher was able to identify an interesting [pattern for Russian presidents]. (Be sure to listen to the 3-minute audio to hear a hilarious song about the results!)
Which brings me to my own collection of "old photos". I bought my first digital camera in the year 2000, and have taken over 15,000 pictures since then. Before that, I used a 35mm film camera, getting the negatives developed and prints made. Some of these date back to my years in high school and college. I have a mix of sizes, from 3x5 to 4x6 and 5x7 inches, and sometimes I got double prints. Only a small portion are organized into scrapbooks. The rest are in envelopes, prints and negatives, in boxes taking up half of the linen closet in my house. Following the success of the [Library of Congress using flickr], I decided the best way to organize these was to have them digitized first. There are several ways to do this.
The first method, my flatbed scanner, is just too time consuming. Lift the lid, place one or a few prints face down on the glass, close the lid, press the button, and then repeat. I estimate 70 percent of my photos are in [landscape orientation], and 30 percent in [portrait mode]. I can either spend extra time to orient each photo correctly on the glass, or rotate the digital image later.
I was pleased to learn that my Fujitsu ScanSnap S510 sheet-feed scanner can take in a short stack (a dozen or so) of photos, and generate a JPEG file for each. I can select 150, 300 or 600 dpi, and five levels of JPEG compression. All the photos feed in portrait mode, which I can then rotate later on the computer once digitized. A command line tool called [ImageMagick] can help automate the rotations. While I highly recommend the ScanSnap scanner, this is still a time-consuming process for thousands of photos.
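To give a flavor of what that automation might look like, here is a minimal sketch, assuming a hypothetical folder of scans and a fixed 90-degree rotation, that shells out to ImageMagick's convert tool for each JPEG:

```python
import subprocess
from pathlib import Path

# Hypothetical folder of freshly scanned photos, all fed through the
# sheet-feeder in portrait mode; folder name and angle are my assumptions.
SCAN_DIR = Path("scanned_photos")

for jpeg in sorted(SCAN_DIR.glob("*.jpg")):
    rotated = jpeg.with_name(jpeg.stem + "_rotated.jpg")
    # ImageMagick's convert rotates clockwise by the given number of degrees.
    subprocess.run(["convert", str(jpeg), "-rotate", "90", str(rotated)],
                   check=True)
    print(f"rotated {jpeg.name} -> {rotated.name}")
```

In practice you would still need to eyeball which photos need 90, 180 or 270 degrees, but batching the common case saves most of the clicking.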
"The best way to save your valuable photos may be by eliminating the paper altogether. Consider making digital images of all your photos."
Here's how the ScanMyPhotos.com service works: You ship your prints (or slides, or negatives) to their facility in Irvine, California. They have a huge machine that scans them all at 300 dpi, no compression, and they send back your photos and a DVD containing digitized versions in JPEG format, all for only 50 US dollars, plus shipping and handling, per thousand photos. I don't think I could even hire someone locally to run my scanner for that!
The deal got better when I contacted them. For people like me with accounts on Facebook, flickr, MySpace or Blogger, they will [scan your first 1000 photos for free] (plus shipping and handling). I selected a thousand 4x6" photos from my vast collection, organized them into eight stacks with rubber bands, and sent them off in a shoe box. The photos get scanned in landscape mode, so I spent about four hours preparing the shipment, making sure the prints were all face up, with the top of each picture oriented either to the top or left edge. For the envelopes that had double prints, I "deduplicated" them so that only one set got scanned.
The box weighed seven pounds, and cost about 10 US dollars to send from Tucson to Irvine via UPS on Tuesday. They came back the following Monday, all my photos plus the DVD, for 20 US dollars shipping and handling. Each digital image is about 1.5MB in size, roughly 1800x1200 pixels, so the whole batch easily fit on a single DVD. The quality is the same as if I had scanned them at 300 dpi on my own scanner, and comparable to a 2-megapixel camera on most cell phones. Certainly not the high-res photos I take with my Canon PowerShot, but suitable enough for email or Web sites. So, for about 30 US dollars, I got my first batch of 1000 photos scanned.
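For those who like to check the math, a quick back-of-the-envelope calculation confirms why a single DVD is plenty:

```python
# Rough capacity check: 1,000 scans at ~1.5MB each versus one DVD.
photos = 1000
mb_per_photo = 1.5            # ~1800x1200 pixels, 300dpi JPEG
dvd_capacity_mb = 4700        # single-layer DVD, about 4.7GB

total_mb = photos * mb_per_photo
print(f"{total_mb:.0f} MB used of {dvd_capacity_mb} MB "
      f"({100 * total_mb / dvd_capacity_mb:.0f}% of one DVD)")
# 1500 MB used of 4700 MB (32% of one DVD)
```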
ScanMyPhotos.com offers a variety of extra-cost options, like rotating each file to the correct landscape or portrait orientation, color correction, exact sequence order, hosting the images on their Web site for 30 days to share with friends and family, and extra copies of the DVD. All of these represent a trade-off between having them do it for me for an additional fee, or me spending time doing it myself--either beforehand in the preparation, or afterwards managing the digital files--so I can appreciate that.
Perhaps the weirdest option was having your original box returned for an extra $9.95. If you don't have a huge collection of empty shoe boxes in your garage, you can buy a similarly sized cardboard box for only $3.49 at the local office supply store, so I don't understand this one. The box they return all your photos in can easily be used for the next batch.
I opted not to get any of these extras. The one option I think they should add would be to just discard the prints, and send back only the DVD itself. Or better yet, discard the prints, and email me an ISO file of the DVD that I can burn myself on my own computer. Why pay extra shipping to send back the entire box of prints, just so that I can dump them in the trash myself? I will keep the negatives, in case I ever need to re-print at high resolution.
Overall, I am thoroughly delighted with the service, and will now pursue sending the rest of my photos in for processing, and reclaim my linen closet for more important things. Now that I know that a thousand 4x6 prints weigh seven pounds, I can estimate how many photos I have left to do, and decide which discount bulk option to choose.
With my photos digitized, I will be able to do all the things that IBM talks about with Information Retention:
Place them on an appropriate storage tier. I can keep them on disk, tape or optical media.
Easily move them from one storage tier to another. Copying digital files in bulk is straightforward, and as new technologies develop, I can refresh the bits onto new media, to avoid the "obsolescence of CDs and DVDs" as discussed in this article in [PC World].
Share them with friends and family, either through email, on my Tivo (yes, my Tivo is networked to my Mac and PC and has the option to do this!), or upload them to a photo-oriented service like [Kodak Gallery or flickr].
Keep multiple copies in separate locations. I could easily burn another copy of the DVD myself and store it in my safe deposit box or my desk at work. With all of the regional disasters like hurricanes, an alternative might be to back up all your files, including your digitized photos, with an online backup service like [IBM Information Protection Services] from last year's acquisition of Arsenal Digital.
If the prospect of preserving my high school and college memories for the next few decades seems extreme, consider that the [Long Now Foundation] is focused on retaining information for centuries. They are even suggesting that we start representing years with five digits, e.g., 02008, to handle the deca-millennium bug which will come into effect 8,000 years from now. IBM researchers are also working on [long-term preservation technologies and open standards] to help in this area.
For those who only read the first and last paragraphs of each post, here is my recap: Information Retention is about managing [information throughout its lifecycle], using policy-based automation to help with placement, movement and expiration. An "active archive" of information helps gain insight, innovate, and make better decisions. Disk, tape, and blended disk-and-tape solutions can all play a part in a tiered information infrastructure for long-term retention of information.
In Monday's post, [IBM Information Infrastructure launches today], I explained how this strategic initiative fit into IBM's New Enterprise Data Center vision. For you podcast fans, IBM Vice Presidents Bob Cancilla (Disk Systems), Craig Smelser (Storage and Security Software), and Mike Riegel (Information Protection Services) highlight some of the new products and offerings in this 12-minute recording:
This post will focus on Information Security, the second topic in this week's four-part series.
Here's another short 2-minute video, on Information Security.
Security protects information against both internal and external threats.
For internal threats, most of the focus is on whether person A has a "need-to-know" about information B. Most of the time, this is fairly straightforward. However, sometimes production data is copied to support test and development efforts. Here is the typical scenario: the storage admin copies production data that contains sensitive or personal information to a new copy, and authorizes software engineers or testers full read/write access to this data. In some cases, the engineers or testers may be employees; other times they might be hired contractors from an outside firm. In any case, they may not be authorized to read this sensitive information. To solve this, IBM announced the [IBM Optim Data Privacy Solution] for a variety of environments, including Siebel and SAP enterprise resource planning (ERP) applications.
I found this solution quite clever. The challenge is that production data is interrelated and typically lives inside [relational databases]. For example, one record in one database might have a name and serial number, and that serial number is used to reference a corresponding record in another database. The IBM Optim Data Privacy Solution applies a range of "masks" to transform complex data elements such as credit card numbers, email addresses and national identifiers, while retaining their contextual meaning. The masked results are fictitious, but consistent and realistic, creating a "safe sandbox" for application testing. This method can mask data from multiple interrelated applications to create a "production-like" test environment that accurately reflects end-to-end business processes. The testers get data they can use to validate their changes, and the storage admins can rest assured they have not exposed anyone's sensitive information.
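Optim's actual masking algorithms are IBM's own, but the core idea of consistent masking is easy to illustrate: derive the fictitious value deterministically from the real one, so that interrelated records masked separately still line up. Here is a minimal sketch; the secret key, field names and mask format are all hypothetical:

```python
import hashlib
import hmac

SECRET = b"test-environment-only-key"  # hypothetical masking key

def mask_serial(serial: str) -> str:
    """Map a real serial number to a fictitious but stable identifier."""
    digest = hmac.new(SECRET, serial.encode(), hashlib.sha256).hexdigest()
    return "X" + digest[:8].upper()

# Two interrelated "databases" masked independently still reference
# each other correctly, because the mapping is deterministic.
employee = {"serial": mask_serial("123456"), "name": "J. Doe"}
payroll  = {"serial": mask_serial("123456"), "amount": 4200}
assert employee["serial"] == payroll["serial"]
```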
Beyond just who has the "need-to-know", we might also be concerned with who is "qualified-to-act". Most systems today have both authentication and authorization support. Authentication determines that you are who you say you are, through knowledge of unique userid/password combinations, or other credentials. Fingerprints, eye retinal scans and other biometrics look great in spy movies, but they are not yet widely used. Instead, storage admins have to worry about dozens of different passwords on different systems. One of the many preview announcements made by Andy Monshaw at Monday's launch was that IBM is going to integrate the features of [Tivoli Access Manager for Enterprise Single Sign-On] into IBM's Productivity Center software, which will be renamed "IBM Tivoli Storage Productivity Center". You enter one userid/password, and you will not have to enter the individual userid/password of each of the managed storage devices.
Once a storage admin is authenticated, they may or may not be authorized to read or act on certain information. Productivity Center offers role-based authorization, so that people can be identified by their roles (tape operator, storage administrator, DBA), which then determines what they are authorized to see, read, or act upon.
For external threats, you need to protect data both in-flight and at-rest. In-flight deals with data that travels over a wire, or wirelessly through the air, from source to destination. When companies have multiple buildings, the transmissions can be encrypted at the source, and decrypted on arrival. The bigger threat is data at-rest: hackers and cyber-thieves looking to download specific content, like personally identifiable information, financial information, and other sensitive data.
IBM was the first to deliver an encrypting tape drive, the TS1120. The encryption process is handled right at the drive itself, eliminating the burden of encryption on host processing cycles, and eliminating the need for specialized hardware sitting between server and storage system. Since then, we have delivered encryption on the LTO-4 and TS1130 drives as well.
When disk drives break or are decommissioned, the data on them may still be accessible. Customers have a tough decision to make when a disk drive module (DDM) stops working:
Send it back to the vendor or manufacturer to have it replaced, repaired or investigated, potentially exposing sensitive information.
Keep the broken drive, forfeit any refund or free replacement, and then physically destroy the drive. There are dozens of videos on [YouTube.com] showing different ways to do this!
The launch previewed the [IBM partnership with LSI and Seagate] to deliver encryption technology for disk drives, known as "Full Drive Encryption" or FDE. Having all data encrypted on all drives, without impacting performance, eliminates having to decide which data gets encrypted and which doesn't. With data safely encrypted, companies can now send in their broken drives for problem determination and replacement. Applying a consistent solution across everything, without human intervention or case-by-case decision making, reduces the overall impact. This was the driving motivation for both disk and tape drive encryption.
(Early in my IBM career, some lawyers decided we needed to add a standard 'paragraph' to the copyright text in the upper comment section of our software modules, so we had a team meeting on this. The lawyer presenting to us estimated that perhaps only 20 to 35 percent of the modules needed to be updated with this paragraph, and taught us what to look for to decide whether or not a module needed to be changed. My team argued how tedious this was going to be: opening up each module, evaluating it, and making the decision would take time, and with thousands of modules involved the process could take weeks. The fact that this was going to take us weeks did not seem to concern our lawyer one bit; it was just the cost of doing business. Finally, I asked if it would be legal to just add the standard paragraph to ALL the modules without any analysis whatsoever. The lawyer was stunned. There was no harm in adding this paragraph to all the modules, he said, but that would be 3-5x more work, so why would I even suggest it? Our team laughed, recognizing immediately that it was the fastest way to get it done. One quick program updated all the modules that afternoon.)
To manage these keys, IBM previewed the Tivoli Key Lifecycle Manager (TKLM). This software helps automate the management of encryption keys throughout their lifecycle, to help ensure that encrypted data on storage devices cannot be compromised if lost or stolen. It will apply to both disk and tape encryption, so that one system will manage all of the encryption keys in your data center.
For those who only read the first and last paragraphs of each post, here is my recap: Information Security is intended as an end-to-end capability to protect against both internal and external threats, restricting access only to those who have a "need-to-know" or are "qualified-to-act". Security approaches like "single sign-on" and encryption that applies to all tapes and all disks in the data center greatly simplify deployment.
In yesterday's post, [IBM Information Infrastructure launches today], I explained how this strategic initiative fit into IBM's New Enterprise Data Center vision. For those who prefer audio podcasts, here is Marissa Benekos interviewing Andy Monshaw, IBM General Manager of IBM System Storage.
This post will focus on Information Availability, the first topic in this week's four-part series.
Here's another short 2-minute video, on Information Availability.
I am not in the marketing department anymore, so I have no idea how much IBM spent to get these videos made, but I would hate for the money to go to waste. I suspect the only way they will get viewed is if I include them in my blog. I hope you like them.
As with many IT terms, "availability" might conjure up different meanings for different people.
Some focus on the pure mechanics of delivering information. An information infrastructure involves all of the software, servers, networks and storage that bring information to the application or end user, so all of the links in the chain must be highly available: software should not crash, servers should have "five nines" (99.999%) uptime, networks should be redundant, and storage should handle the I/O requests with sufficient performance. For tape libraries, the tape cartridge must be available, robotics are needed to fetch the tape, and a drive must be available to read the cartridge. All of these factors represent the continuous operations and high availability aspects of business continuity.
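To put "five nines" in perspective, here is a quick calculation of how much downtime each availability level actually permits per year:

```python
# Downtime allowed per year at each availability level.
minutes_per_year = 365.25 * 24 * 60

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime = minutes_per_year * (1 - availability)
    print(f"{availability:.3%} uptime allows {downtime:7.1f} minutes/year")

# 99.000% uptime allows  5259.6 minutes/year (about 3.7 days)
# 99.900% uptime allows   526.0 minutes/year (about 8.8 hours)
# 99.990% uptime allows    52.6 minutes/year
# 99.999% uptime allows     5.3 minutes/year
```

Five nines means barely five minutes of outage in a whole year, which is why it takes redundancy at every link in the chain to achieve.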
In addition to the IT equipment, you need to make sure the facilities that support that equipment, such as power and cooling, are also available. Independent IT analyst Mark Peters from Enterprise Strategy Group (ESG) summarizes his shock at the findings of a recent [survey commissioned by Emerson Network Power] in his post [Backing Up Your Back Up]. Here is an excerpt:
"The net take-away is that the majority of SMBs in the US do not have back-up power systems. As regional power supplies get more stretched in many areas, the possibility of power outages increases and obviously many SMBs would be vulnerable. Indeed, while the small business decision makers questioned for the survey ranked such power outages ahead of other threats (fires, government regulation, weather, theft and employee turnover) only 39% had a back-up power system. Yeah, you could say, but anything actually going wrong is unlikely; but apparently not, as 79% of those surveyed had experienced at least one power outage during 2007. Yeah, you might say, but maybe the effects were minor; again, apparently not, since 42% of those who'd had outages had to actually close their businesses during the longest outages. The DoE says power outages cost $80 billion a year and businesses bear 98% of those costs."
Others might be more concerned about outages resulting from planned and unplanned downtime. Storage virtualization can help reduce planned downtime, by allowing data to be migrated from one storage device to another without disrupting the application's ability to read and write data. The latest "Virtual Disk Mirroring" (VDM) feature of the IBM System Storage SAN Volume Controller takes it one step further, providing high availability even for entry-level and midrange disk systems managed by the SVC. For unplanned downtime, IBM offers a complete range of support, from highly available clusters, to two-site and three-site disaster recovery support, to application-aware data protection through IBM Tivoli Storage Manager.
Many outages are caused by human error, and in many cases it is the human factor that prevents quick resolution. Storage admins are unable to isolate the failing component, identify the configuration, or provide the appropriate problem determination data to the technical team ready to offer support and assistance. For this, IBM TotalStorage Productivity Center software, and its hardware version, the IBM System Storage Productivity Center, can help reduce outage time and increase information availability. They can also provide automation to predict or provide early warning of impending conditions that could get worse if not addressed.
But perhaps yet another take on information availability is the ability to find and communicate the right information to the right people at the right time. Recently, Google announced a historic milestone: their search engine now indexes over [one trillion Web pages]! Google and other search engines have changed the level of expectations for finding information. People ask why they can find information on the internet so quickly, yet it takes weeks for companies to respond to a judge for an e-discovery request.
Lastly, the team at IBM's [Eightbar blog] pointed me to Mozilla Labs' Ubiquity project for their popular Firefox browser. This project aims to help people communicate information in a more natural way, rather than with unfriendly URL links in an email. It is still beta, of course, but it helps show what "information availability" might look like in the near future. Here is a 7-minute demonstration:
For those who only read the first and last paragraphs of each post, here is my recap: Information Availability includes Business Continuity and Data Protection to facilitate quick recovery, storage virtualization to maximize performance and minimize planned downtime, infrastructure management and automation to reduce human error, and the ability to find and communicate information to others.
Earlier this year, IBM launched its [New Enterprise Data Center vision]. The average data center was built 10-15 years ago, at a time when the World Wide Web was still in its infancy, some companies were deploying their first storage area network (SAN) and email system, and if you asked anyone what "Google" was, they might tell you it was ["a one followed by a hundred zeros"]!
Full disclosure: Google, the company, just celebrated its [10th anniversary] yesterday, and IBM has partnered with Google on a variety of exciting projects. I am employed by IBM, and own stock in both companies.
In just the last five years, we saw a rapid growth in information, fueled by Web 2.0 social media, email, mobile hand-held devices, and the convergence of digital technologies that blurs the lines between communications, entertainment and business information. This explosion in information is not just "more of the same", but rather a dramatic shift from predominantly databases for online transaction processing to mostly unstructured content. IT departments are no longer just the "back office" recording financial transactions for accountants, but now also take on a more active "front office" role. For a growing number of industries, information technology plays a pivotal role in generating revenue, making smarter business decisions, and providing better customer service.
IBM felt a new IT model was needed to address this changing landscape, so IBM's New Enterprise Data Center vision has these five key strategic initiatives:
Highly virtualized resources
Business resiliency
Business-driven Service Management
Green, Efficient, Optimized facilities
Information Infrastructure
In February, IBM announced new products and features to support the first two initiatives, including the highly virtualized capability of the IBM z10 EC mainframe, and related business resiliency features of the [IBM System Storage DS8000 Turbo] disk system.
In May, IBM launched its Service Management strategic initiative at the Pulse 2008 conference. I was there in Orlando, Florida at the Swan and Dolphin resort to present to clients. You can read my three posts:[Day 1; Day 2 Main Tent; Day 2 Breakout sessions].
In June, IBM launched its fourth strategic initiative, "Green, Efficient and Optimized Facilities", with [Project Big Green 2.0], which included the Space-Efficient Volume (SEV) and Space-Efficient FlashCopy (SEFC) capabilities of the IBM System Storage SAN Volume Controller (SVC) 4.3 release. Fellow blogger and IBM master inventor Barry Whyte (BarryW) has three posts on his blog about this: [SVC 4.3.0 Overview; SEV and SEFC detail; Virtual Disk Mirroring and More]
Some have speculated that the IBM System Storage team seemed to be on vacation the past two months, with few press releases and little or no fanfare about our July and August announcements, and not responding directly to critics and FUD in the blogosphere. It was because we were holding them all for today's launch, taking our cue from a famous perfume commercial:
"If you want to capture someone's attention -- whisper."
My team and I were actually quite busy at the [IBM Tucson Executive Briefing Center]. In between doing our regular job talking to excited prospects and clients, we trained sales reps and IBM Business Partners, wrote certification exams, and updated marketing collateral. Fortunately, competitors stopped promoting their own products to discuss and demonstrate why they are so scared of what IBM is planning. The fear was well justified. Even a few journalists helped raise the word-of-mouth buzz and excitement level. A big kiss to Beth Pariseau for her article on [SearchStorage.com]!
(Last week we broke radio silence to promote our technology demonstration of 1 million IOPS using Solid State Disk, just to get the huge IBM marketing machine oiled up and ready for today.)
Today, IBM General Manager Andy Monshaw launched the fifth strategic initiative, [IBM Information Infrastructure], at the [IBM Storage and Storage Networking Symposium] in Montpellier, France. Montpellier is one of the six locations of our New Enterprise Data Center Leadership Centers launched today. The other five are Poughkeepsie, Gaithersburg, Dallas, Mainz and Boeblingen, with more planned for 2009.
Although IBM has been using the term "information infrastructure" for more than 30 years, it might be helpful to define it for you readers:
“An information infrastructure comprises the storage, networks, software, and servers integrated and optimized to securely deliver information to the business.”
In other words, it's all the "stuff" that delivers information from the magnetic surface recording of the disk or tape media to the eyes and ears of the end user. Everybody has an information infrastructure already; some are just more effective than others. For those of you not happy with yours, IBM has the products, services and expertise to help with your data center transformation.
IBM wants to help its clients deliver the right information to the right people at the right time, to get the most benefit from information while controlling costs and mitigating risks. There might be more than a dozen ways to address the challenges involved, but IBM's Information Infrastructure strategic initiative focuses on four key solution areas:
Information Availability
Information Security
Information Retention
Information Compliance
Last, but not least, I would like to welcome to the blogosphere IBM's newest blogger, Moshe Yanai, formerly the father of the EMC Symmetrix and now leading the IBM XIV team. Already from his first post on his new [ThinkStorage blog], I can tell he is not going to pull any punches either.
Next Monday, September 1, 2008, marks my two year "blogoversary" for this blog!
I won't be blogging on Monday, of course, because that is the [Labor Day] holiday here in the United States.
(From a Canadian colleague: the US is not the only country that celebrates Labor Day on the first weekend in September. Canada also celebrates Labour Day on the first weekend in September. It's the only holiday (other than Christmas/New Years) where we are in sync with the US. Our Thanksgiving Days are different, as is your July 4 vs our July 1. But for Labour Day we are one with the Borg...)
(From an Australian colleague: each state and territory of Australia has its own day to celebrate Labour Day, see [Australia Public Holidays])
Much of the rest of the world celebrates Labor Day on May 1, but the USA celebrates it on the first Monday of September, which this year lands on September 1. The day was originally intended as a "day off for working citizens"; IBM is kind enough to let managers and marketing personnel have the day off also. (Not that anyone is going to notice no press releases next Monday, right?)
I started this blog on September 1, 2006 as part of IBM's big ["50 Years of Disk Systems Innovation"] campaign. IBM introduced the first commercial disk system on September 13, 1956, so the 50th anniversary was in 2006. Last year, IBM celebrated the 55th anniversary of tape systems.
Several readers have asked me why I haven't talked about recent current events, such as the Olympic Games in Beijing, or the U.S. national conventions in the race for U.S. President. I have to remind them of one of the key precepts of IBM's blogging guidelines:
8. Respect your audience. Don’t use ethnic slurs, personal insults, obscenity, or engage in any conduct that would not be acceptable in IBM’s workplace. You should also show proper consideration for others’ privacy and for topics that may be considered objectionable or inflammatory - such as politics and religion.
I made subtle references to my senator from Arizona, John McCain, in my post [ILM for my iPod], and to Barack Obama in my post [Searching for matching information]. I don't think anyone would mind that I send a "Happy Birthday!" wish to both of them. Senator McCain turns 72 years old today, and Senator Obama turned 47 years old earlier this month.
And lastly, Tucson itself [celebrates this entire month] its 233rd birthday. That's right: Tucson, the 32nd largest city in the USA, and headquarters for IBM System Storage, is older than the USA itself. While the Tucson area has been continuously inhabited by humans for over 3,500 years, it officially became Tucson on August 20, 1775.
Fellow blogger Justin Thorp has opined that [blogging is like jogging]. Some days, you are just too busy to do it, and other days, you make time for it, because you know it is important. For the record, it is not my job to blog for IBM; that ended in September 2007. I continue to blog anyway because I have benefited from it, both personally and professionally. I want to thank all of you readers out there for making this blog a great success! Being named one of the top 10 blogs of the IT storage industry by Network World, two back-to-back Brand Impact awards from Liquid Agency, and recently earning a "31" Technorati ranking, has really helped keep me going.
So, I look forward to next month, and beginning my third year on this blog. I am sure there will be lots of surprises and announcements in the coming weeks and months that I will have plenty to write about.
(Note: The following paragraphs have been updated to clarify the performance tests involved.)
This time, IBM breaks the 1 million IOPS barrier, achieved by running a test workload consisting of a 70/30 mix of random 4K requests; that is, 70 percent reads, 30 percent writes, with 4KB blocks. The throughput achieved was 3.5 times that obtained by running the identical workload on the fastest IBM storage system today (IBM System Storage SAN Volume Controller 4.3), and an estimated EIGHT* times the performance of the EMC DMX. With an average response time under 1 millisecond, this solution would be ideal for online transaction processing (OLTP) such as financial transactions or airline reservations.
(*) Note: EMC has not yet published ANY benchmarks of their EMC DMX box with SSD enterprise flash drives (EFD). However, I believe that the performance bottleneck is in their controller and not the back-end SSD or FC HDD media, so I have given EMC the benefit of the doubt and estimated that their latest EMC DMX4 is as fast as an [IBM DS8300 Turbo] with Fibre Channel drives. If or when EMC publishes benchmarks, the marketplace can make more accurate comparisons. Your mileage may vary.
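Setting the vendor comparison aside, a quick conversion shows the raw bandwidth implied by the headline number (this is just arithmetic, not any vendor's specification):

```python
# Bandwidth implied by 1 million IOPS of 4KB requests, 70% read / 30% write.
iops = 1_000_000
block_kb = 4

total_mb_s = iops * block_kb / 1024    # about 3906 MB/s aggregate
read_mb_s = 0.70 * total_mb_s
write_mb_s = 0.30 * total_mb_s

print(f"{total_mb_s:,.0f} MB/s total "
      f"({read_mb_s:,.0f} MB/s read, {write_mb_s:,.0f} MB/s write)")
# 3,906 MB/s total (2,734 MB/s read, 1,172 MB/s write)
```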
IBM used 4 TB of Solid State Disk (SSD) behind its IBM SAN Volume Controller (SVC) technology to achieve this amazing result. Not only does this represent a significantly smaller footprint, but it uses only 55 percent of the power and cooling.
The SSD drives are made by [Fusion IO] and are different from those used by EMC, which are made by STEC.
The SVC addresses the one key problem clients face today with competitive disk systems that support SSD enterprise flash drives: deciding which data to park on those expensive drives. How do you decide which LUNs, which databases, or which files should be permanently resident on SSD? With SVC's industry-leading storage virtualization capability, you are not forced to decide. You can move data onto SSD and back out again non-disruptively, as needed to meet performance requirements. This could be handy for quarter-end or year-end processing, for example.
Once again it's Tuesday, which means IBM announcement day!
Today IBM announced [two new DS3400 SAN Express Models]. These two new models replace the IBM System Storage DS3400 SAN Express Kit models 41U and 42U, which are withdrawn from marketing today. The DS3000 series of scalable, flexible, and affordable storage solutions supports IBM System x, System p, and BladeCenter servers.
The two new IBM System Storage DS3400 SAN Express Kits, models 41S and 42S, provide the parts needed to set up and configure a SAN, with the exception of a SAN switch, which can be ordered separately. The kits contain Emulex EZPilot software that enables automated installation and configuration of the SAN components. The new kits and the EZPilot software work in conjunction with the IBM TotalStorage SAN16B-2 Express Model Switch, which comes with eight ports and eight 4 Gbps SFPs. The EZPilot software can support configurations with either one or two SAN16B-2 switches.
The 41S is a single-controller model DS3400 with two HBA cards and four cables. The 42S is the dual-controller model with two HBA cards and eight cables.
"The murals in restaurants are on par with the food in museums." --- Peter De Vries
The quote above applies to blogs as well. Posts about competitive products with which the blogger has little to no hands-on experience tend to be terribly misleading or technically inaccurate. We saw this last month as Sun Microsystems' Jeff Savit tried to discuss the IBM System z10 EC mainframe.
This time, it comes from EMC bloggers discussing NetApp equipment, and by association, IBM System Storage N series gear. I was going to comment on the ridiculous posts by fellow bloggers from EMC about the SnapLock compliance feature on the NetApp, but my buddies at NetApp had already done this for me, saving me the trouble.
The hysterical nature of the writing from EMC, and the calm responses from NetApp, speak volumes about the cultures of both companies.
The key point is that none of the "Non-erasable, Non-rewriteable" (NENR) storage out there is certified as compliant by any government agency on the planet. Governments just aren't in the business of certifying such things. The best you can get is a third-party consultant, such as [Cohasset Associates], to help make decisions that are best for each particular situation.
In addition to SnapLock on N series, IBM offers the [IBM System Storage DR550], WORM tape and optical systems, all of which have been deemed compliant with the U.S. Securities and Exchange Commission [SEC 17a-4] federal regulations by Cohasset Associates. For medical patient records and images like X-rays, IBM offers the Grid Medical Archive Solution [GMAS], designed to meet the requirements of the U.S. Health Insurance Portability and Accountability Act [HIPAA]. For other government or industry regulations, consult with your legal counsel.
Well, it's Tuesday, and so it is "announcement day" again! Actually, for me it is Wednesday morning here in Mumbai, India, but since I was "press embargoed" until 4pm EDT from talking about these enhancements, I had to wait until Wednesday morning here to discuss them.
World's Fastest 1TB tape drive
IBM announced its new enterprise [TS1130 tape drive] and corresponding [TS3500 tape library support]. This one has a funny back-story. Last week, while we were preparing the press release, we debated whether we should compare the 1TB per cartridge capacity as double that of Sun's Enterprise T10000 (500GB), or against LTO-4 (800GB). The problem changed when Sun announced on Monday that they too had a 1TB tape drive, so instead of saying that we had the "World's First 1TB tape drive", we quickly changed this to the "World's Fastest 1TB tape drive" instead. At a 160MB/sec top speed, IBM's TS1130 is 33 percent faster than Sun's latest announcement. Sun was rather vague about when they will actually ship their new units, so IBM may still end up being first to deliver as well.
While EMC and other disk-only vendors have stopped claiming that "tape is dead", these recent announcements from IBM and Sun indicate that indeed tape is alive and well. IBM is able to borrow technologies from disk, such as the Giant Magneto-Resistive (GMR) head, for its tape offerings, which means much of the R&D for disk applies to tape, keeping both forms of storage well invested. Tape continues to be the "greenest" storage option, more energy efficient than disk, optical, film, microfiche and even paper.
On the LTO front, IBM enhanced the reporting capabilities of its [TS3310] midrange tape library. This includes identifying the resource utilization of the drives, reporting on media integrity, and improved diagnostics to support library-managed encryption.
IBM System Storage DR550
As a blended disk-and-tape solution, the [IBM System Storage DR550] easily replaces the EMC Centera to meet compliance storage requirements. IBM announced that we have greatly expanded its scalability: it can now support 1TB disk drives, as well as attach to either IBM or Sun's 1TB tape drives.
Massive Array of Idle Disks (MAID)
IBM now offers a "Sleep Mode" in the firmware of the [IBM System Storage DCS9550], a capability often called "Massive Array of Idle Disks" (MAID) or spin-down. This can reduce the amount of power consumed during idle times.
That's a lot of exciting stuff. I'm off to breakfast now.
Continuing my week in Tokyo, Japan: I was going to title this post "Chunks, Extents and Grains", but decided instead to use the fairy tale above.
Fellow blogger BarryB from EMC, on his The Storage Anarchist blog, once again shows off his [Photoshop talents] in his post [the laurel and hardy of thin provisioning]. This time, BarryB depicts fellow blogger and IBM master inventor Barry Whyte as Stan Laurel, and fellow blogger Hu Yoshida from HDS as Oliver Hardy.
At stake is the comparison of various implementations of thin provisioning among the major storage vendors. On the "thick end", Hu presents his case for 42MB chunks in his post [When is Thin Provisioning Too Thin]. On the "thin end", IBMer BarryW presents the "fine-grained" details of Space-Efficient Volumes (SEV), made available with the IBM System Storage SAN Volume Controller (SVC) v4.3, in his series of posts:
BarryB paints both implementations as "extremes" in inefficiency. Some excerpts from his post:
"... Hitachi's "chubby" provisioning is probably more performance efficient with external storage than is the SVC's "thin" approach. But it is still horribly inefficient in context of capacity utilization.
... the "thin extent" size used by Symmetrix Virtual Provisioning is both larger than the largest that SVC uses, and (significantly) smaller than what Hitachi uses."
"free" may be the most expensive solution you can buy...
Before you rush off to put a bunch of SVCs running (free) SEV in front of your storage arrays, you might want to consider the performance implications of that choice. Likewise, for Hitachi's DP, you probably want to understand the impact on capacity utilization that DP will have. DP isn't free, and it isn't very space efficient, either."
BarryB would like you to think that since EMC has chosen an "extent" size somewhere between 257KB and 41MB, it must therefore be the optimal setting: not too hot, and not too cold. As I mentioned last January in my post [Does Size Really Matter for Performance?], EMC engineers had not yet decided what that extent size should be, and BarryB is noticeably vague on the current value. According to this [VMware whitepaper], the thin extent size is currently 768KB. Future versions of the EMC Enginuity operating environment may change the thin extent size. (I am sure the EMC engineers are smarter and more decisive than BarryB would lead us to believe!)
BarryB is correct that any thin provisioning implementation is not "free", even though IBM's implementation is offered at no additional charge. Some writes may be slowed down waiting for additional storage to be allocated to satisfy the request, and some amount of storage must be set aside to hold the metadata directory that points to all these chunks, extents or grains. For the convenience of not having to manually expand LUNs as more space is needed, you will pay both a performance and a capacity "price".
However, as they say, the [proof of the pudding is in the eating], or perhaps I should say the porridge in this case. Given that the DMX4 is slower than both the HDS USP-V and the IBM SVC, you won't see EMC publishing industry-standard [SPC benchmarks] using their "thin extent" implementation anytime soon. IBM allows a choice of grain size, from 32KB to 256KB, in an elegant design that keeps the metadata directory overhead under 0.1 to 0.5 percent. I would be surprised if EMC can make a case to be more efficient than that! The performance tests are still being run, but from what I have seen so far, people will be very pleased with the minimal impact of IBM SEV, an acceptable trade-off for improved utilization and reduced out-of-space conditions.
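The directory-overhead trade-off is easy to see with a small sketch. The per-grain metadata cost below is purely my assumption for illustration, not the actual SVC figure; the point is simply that the overhead scales inversely with grain size:

```python
# Illustrative metadata overhead for a thin-provisioned volume.
META_BYTES_PER_GRAIN = 160   # hypothetical cost per allocated grain

for grain_kb in (32, 64, 128, 256):
    overhead = META_BYTES_PER_GRAIN / (grain_kb * 1024)
    print(f"{grain_kb:3d}KB grain -> {overhead:.3%} directory overhead")

#  32KB grain -> 0.488% directory overhead
#  64KB grain -> 0.244% directory overhead
# 128KB grain -> 0.122% directory overhead
# 256KB grain -> 0.061% directory overhead
```

Smaller grains waste less space on sparsely written volumes but cost more directory, which is exactly why a choice of grain size is useful.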
So if you are a client waiting for your EMC equipment to be fully depreciated so you can replace it with faster equipment from IBM or HDS, you can at least improve its performance and capacity utilization today by virtualizing it with the IBM SAN Volume Controller.
Based on this success, and perhaps because I am also fluent in Spanish, I was asked to help with Proyecto Ceibal, the team for OLPC Uruguay. Normally the XS school server resides at the school location itself, so that even if the internet connection is disrupted or limited, the school kids can continue to access each other and the cached web content until the internet connection is resumed. However, with a diverse development team with people in the United States, Uruguay, and India, we first looked to Linux hosting providers that would agree to provide free or low-cost monthly access. We spent (make that "wasted") the month of May investigating. Most that I talked to were not interested in having a customized Linux kernel on non-standard hardware on their shop floor; they wanted instead to offer their own standard Linux build on existing standard servers, managed by their own system administrators, or were not interested in providing it for free. Since the XS-163 kernel is customized for the x86 architecture, it is one of those exceptions where we could not host it on an IBM POWER server or mainframe as a virtual guest.
This got picked up as an [idea] for Google's [Summer of Code], and we are mentoring Tarun, a 19-year-old student, to act as lead software developer. However, summer was fast approaching, and we wanted this ready for the next semester. In June, our project leader, Greg, came up with a new plan: build a machine and have it connected at an internet service provider that would cover the cost of bandwidth, and be willing to accept this with remote administration. We found a volunteer organization to cover this -- thank you Glen and Vicki!
We found a location, so the request to me sounded simple enough: put together a PC from commodity parts that meets the requirements of the customized Linux kernel, the latest release being called [XS-163]. The server would have two disk drives, three Ethernet ports, and 2GB of memory; and be installed with the customized XS-163 software, SSHD for remote administration, the Apache web server, the PostgreSQL database and the PHP programming language. Of course, the team wanted this for as little cost as possible, and for me to document the process, so that it could be repeated elsewhere. Some stretch goals included having a dual-boot with Debian 4.0 Etch Linux for development/test purposes, an alternative database such as MySQL for testing, a backup procedure, and a Recover-DVD in case something goes wrong.
Some interesting things happened:
The XS-163 is shipped as an ISO file representing a LiveCD bootable Linux that will wipe your system clean and lay down the exact customized software for a one-drive, three-Ethernet-port server. Since it is based on Red Hat's Fedora 7 Linux base, I found it helpful to install that instead, and experiment with moving sections of code over. This is similar to geneticists extracting the DNA from the cell of a pit bull and putting it into the cell of a poodle. I would not recommend this for anyone not familiar with Linux.
I also experimented with modifying the pre-built XS-163 CD image by cracking open the squashfs, hacking the contents, and then putting it back together and burning a new CD. This provided some interesting insight, but in the end I was able to do it all from the standard XS-163 image.
Once I figured out the appropriate "scaffolding" required, I managed to proceed quickly, with running versions of XS-163, plain vanilla Fedora 7, and Debian 4 in a multi-boot configuration.
The BIOS "raid" capability was really more like BIOS-assisted RAID for Windows operating system drivers. This "fake raid" wasn't supported by Linux, so I used Linux's built-in "software raid" instead, which allowed some partitions to be RAID-mirrored, and other partitions to be un-mirrored. Why not mirror everything? With two 160GB SATA drives, you have three choices:
No RAID, for a total space of 320GB
RAID everything, for a total space of 160GB
Tiered information infrastructure: use RAID for some partitions, but not all.
The last approach made sense, as a lot of the data is cached web page content, easily retrievable from the internet. This also allowed me to have some "scratch space" for downloading large files and so on. For example, 90GB mirrored to contain the OS images, settings and critical applications, plus 70GB on each drive for scratch and web cache, results in a total of 230GB of usable disk space, roughly a 43 percent improvement over an all-RAID solution, as the quick calculation below shows.
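The arithmetic behind that choice, for anyone who wants to verify it:

```python
# Usable capacity of the three layouts on two 160GB SATA drives.
drive_gb = 160

no_raid  = 2 * drive_gb        # 320GB, but no protection at all
all_raid = drive_gb            # 160GB, everything mirrored
tiered   = 90 + 2 * 70         # 90GB mirrored + 70GB unmirrored per drive

print(f"no RAID {no_raid}GB, all RAID {all_raid}GB, tiered {tiered}GB")
print(f"tiered vs all-RAID: +{100 * (tiered - all_raid) / all_raid:.1f}%")
# no RAID 320GB, all RAID 160GB, tiered 230GB
# tiered vs all-RAID: +43.8%
```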
While [Linux LVM2] provides software-based "storage virtualization" similar to the hardware-based IBM System Storage SAN Volume Controller (SVC), it was a bad idea putting the different "root" directories of my many OS images on there. Linux, like most operating systems, expects things to be in the same place where it last shut down, but in a multi-boot environment you might boot the first OS, move things around, and then when you try to boot the second OS, it doesn't work anymore, corrupts what it does find, or hangs with a "kernel panic". In the end, I decided to use RAID-mirrored, non-LVM partitions for the root directories, and only use LVM2 for data that is not needed at boot time.
While they are both Linux, Debian and Fedora were different enough to cause me headaches. Settings were different, parameters were different, file directories were different. Not quite as religious as MacOS-versus-Windows, but you get the picture.
During this time, the facility was out getting a domain name, IP address, subnet mask and so on, so I tested with my internal 192.168.x.y addresses and figured I would change these to whatever they should be the day I shipped the unit. (I'll find out next week if that was the right approach!)
Afraid that something might go wrong while I am in Tokyo, Japan next week (July 7-11), or Mumbai, India the following week (July 14-18), I added a Secure Shell [SSH] daemon that runs automatically at boot time. This involves putting the public key on the server, while each remote admin keeps their own private key on their own client machine. I know all about public/private key pairs, as IBM is a leader in encryption technology, and was the first to deliver built-in encryption with the IBM System Storage TS1120 tape drive.
Giving users access to all their files from any OS image required that I either (a) keep identical copies everywhere, or (b) have a shared partition. The latter turned out to be the best choice, with an LVM2 logical volume for the "/home" directory that is shared among all of the OS images. As we develop the application, we might find other directories that make sense to share as well.
For developing across platforms, I wanted the Ethernet devices (eth0, eth1, and so on) to match the actual ports they are supposed to be connected to in a static IP configuration. Most people use DHCP, so it doesn't matter, but the XS software requires static assignments, so here it did. For example, "eth0" is the 1 Gbps port to the WAN, and "eth1/eth2" are the two 10/100 Mbps PCI NIC cards to other servers. Binding the interface names to specific hardware ports was different on Fedora and Debian, but I got it working.
While it was a stretch goal to develop a backup method, one that could perform Bare Machine Recovery from media burned to DVD, it turned out I needed to do this anyway, just to prevent losing my work in case things went wrong. I used an external USB drive to develop the process, and got everything to fit onto a single 4GB DVD. Using IBM Tivoli Storage Manager (TSM) for this seemed overkill, and [Mondo Rescue] didn't handle LVM2+RAID as well as I wanted, so I chose [partimage] instead, which backs up each primary partition, mirrored partition, or LVM2 logical volume, keeping all the time stamps, ownerships, and symbolic links intact. It has the ability to chop up the output into fixed-size pieces, which is helpful if you are going to burn them onto 700MB CDs or 4.7GB DVDs. In my case, my FAT32-formatted external USB disk drive can't handle files bigger than 2GB, so this feature was helpful for that as well. I standardized on 660 MiB [about 692MB] per piece, since that met all the criteria.
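partimage handles the splitting natively, but the idea generalizes; here is a rough Python equivalent of chopping one large backup image into CD-sized pieces (the file names are hypothetical):

```python
from pathlib import Path

PIECE_BYTES = 660 * 1024 * 1024    # 660 MiB, about 692MB per piece

def split_file(source: Path) -> None:
    """Chop a large file into fixed-size pieces: backup.000, backup.001..."""
    with source.open("rb") as src:
        index = 0
        while chunk := src.read(PIECE_BYTES):
            piece = source.with_suffix(f".{index:03d}")
            piece.write_bytes(chunk)
            print(f"wrote {piece.name} ({len(chunk):,} bytes)")
            index += 1

split_file(Path("backup.img"))
# Reassemble later with: cat backup.000 backup.001 ... > backup.img
```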
The folks at [SysRescCD] saved the day. The standard SysRescCD assigned eth0, eth1, and eth2 differently than the three base OS images, but the nice folks in France who write SysRescCD created a customized [kernel parameter that allowed the assignments to be fixed per MAC address] in support of this project. With this in place, I was able to make a live Boot-CD that brings up SSH, with all the users, passwords, and Ethernet devices matching the hardware. I installed this LiveCD as the "Rescue Image" on the hard disk itself, and also made a Recover-DVD that boots up just like the Boot-CD, but contains the 4GB of backup files.
For testing, I used Linux's built-in Kernel-based Virtual Machine [KVM], which works like VMware, but is open source and included in the 2.6.20 kernels that I am using. IBM is the leading reseller of VMware and has been doing server virtualization for the past 40 years, so I am comfortable with the technology. The XS-163 platform runs Apache and PostgreSQL servers as a platform for [Moodle], an open source class management system, and the combination is memory-intensive enough that I did not want to incur the overhead of running production this way, but it was great for testing!
With all this in place, it is designed to not need a Linux system admin or XS-163/Moodle expert at the facility. Instead, all we need is someone to insert the Boot-CD or Recover-DVD and reboot the system if needed.
Just before packing up the unit for shipment, I changed the IP addresses to the values they need at the destination facility, updated the [GRUB boot loader] default, and made a final backup, which I burned to the Recover-DVD. Hopefully, it works by just turning on the unit [headless], without any keyboard, monitor or configuration required. Fingers crossed!
So, thanks to the rest of my team: Greg, Glen, Vicki, Tarun, Marcel, Pablo and Said. I am very excited to be part of this, and look forward to seeing this become something remarkable!
Two weeks ago, I mentioned in my post [Pulse 2008 - Day 2 Breakout sessions] that Henk de Ruiter from ABN Amro bank presented his success story implementing Information Lifecycle Management (ILM) across his various data centers. I am no stranger to ABN Amro, having helped the "ABN" and "Amro" banks merge their mainframe data in 1991. Henk has agreed to let me share more of this success story here on my blog:
Back in December 2005, Henk and his colleagues came to visit the IBM Tucson Executive Briefing Center (EBC) to hear about IBM products and services. At the time, I was part of our "STG Lab Services" team that performed ILM assessments at client locations. I explained to ABN Amro that the ILM methodology does not require an all-IBM solution, and that ILM could even provide benefits with their current mix of storage, software and service providers. The ABN Amro team liked what I had to say, and my team was commissioned to perform ILM assessments at three of their data centers:
Sao Paulo (Brazil)
Chicago, IL (USA)
Each data center had its own management, its own decision making, and its own set of issues, so we structured each ILM assessment independently. When we presented our results, we showed what each data center could do better with its existing mixed bag of storage, software and service providers, and also showed how much better their life would be with IBM storage, software and services. They agreed to give IBM a chance to prove it, and so a new "Global Storage Study" was launched to take the recommendations from our three ILM studies and flesh out the details to make a globally-integrated enterprise work for them. Once completed, it was renamed the "Global Storage Solution" (GSS).
Henk summarized the above with "I am glad to see Tony Pearson in the audience, who was instrumental in making this all happen." As with many client testimonials, he presented a few charts on who ABN Amro is today: the 12th largest bank worldwide, 8th largest in Europe. They operate in 53 countries and manage over a trillion euros in assets.
They have over 20 data centers, with about 7 PB of disk and over 20 PB of tape, both growing at 50 to 70 percent CAGR. About two-thirds of their operations are now outsourced to IBM Global Services; the remaining one-third is non-IBM equipment managed by a different service provider.
ABN Amro deployed IBM TotalStorage Productivity Center, various IBM System Storage DS family disk systems, SAN Volume Controller (SVC), Tivoli Storage Manager (TSM), Tivoli Provisioning Manager (TPM), and several other products. Armed with these products, they performed the following:
Clean up. IBM uses the term "rationalization" for the assignment of business value, to avoid confusion with the term "classification", which many in IT relate to identifying ownership and read/write authorization levels. Often, in the initial phases of an ILM deployment, a portion of the data is determined to be eligible for clean up, either to be moved to a lower-cost tier or deleted immediately. ABN Amro and IBM set a goal to identify at least 20 percent of their data for clean up.
New tiers. Rather than traditional "storage tiers", which are often just Tier 1 for Fibre Channel disk and Tier 2 for SATA disk, ABN Amro and IBM came up with seven "information infrastructure tiers" that incorporate service levels, availability and protection status. They are:
High-performance, highly-available disk with remote replication
High-performance, highly-available disk (no remote replication)
Mid-performance, high-capacity disk with remote replication
Mid-performance, high-capacity disk (no remote replication)
Non-erasable, non-rewriteable (NENR) storage employing a blended disk and tape solution
Enterprise virtual tape library with remote replication and back-end physical tape
Mid-performance physical tape
These tiers are applied equally across their mainframe and distributed platforms. All of the tiers are priced per "primary GB", so any additional capacity required for replication or point-in-time copies, either local or remote, is folded into the base price. ABN Amro felt a mission-critical application on Windows or UNIX deserves the same Tier 1 service level as a mission-critical mainframe application. Exactly!
Deployed storage virtualization for disk and tape. This involved the SAN Volume Controller and the IBM TS7000 series virtual tape library.
Implemented workflow automation. The key product here is IBM Tivoli Provisioning Manager.
Started an investigation into HSM for the distributed environment. This would be policy-based space management to migrate less frequently accessed Windows or UNIX data to the TSM pool.
While the deployment is not yet complete, ABN Amro feels they have already realized business value:
Reduced cost by identifying data that should be stored on lower tiers
Simplified management, consolidated across all operating systems (mainframe, UNIX, Windows)
Increased utilization of existing storage resources
Reduced manual effort through policy-based automation, which can lead to fewer human errors and faster adaptability to new business opportunities
Standardized backup and other operational procedures
Henk and the rest of ABN Amro are quite pleased with the progress so far, although recent developments, namely the takeover of ABN AMRO by a consortium of banks, mean that the model is so far only implemented in Europe. Further rollout depends on the storage strategy of the new owners. Nonetheless, I am glad that I was able to work with Henk, Jason, Barbara, Steve, Tom, Dennis, Craig and others, to be part of this from the beginning and see it roll out successfully over the years.
IBM is hosting a webcast about storage for SAP environments. Learn how integrated IBM infrastructure solutions, specifically customized for your SAP environments, can help lower your business costs, increase productivity in SAP development and test tasks, and improve resource utilization. This will include discussion of archive solutions with WebDAV, ArchiveLink and the DR550; the IBM Business Intelligence (BI) Accelerator; IBM support for SAP [Adaptive Computing]; and performance benchmark results. The session is intended for SAP and storage administrators, IT directors and managers.
Here are the details:
Date: Wednesday, June 18, 2008
Time: 11:00am EDT (8:00am for those of us in Arizona or California)
Continuing my catch-up on past posts: Jon Toigo, on his DrunkenData blog, posted a ["bleg"] for information about deduplication. The responses come from the "who's who" of the storage industry, so I will provide IBM's view. (Jon, as always, you have my permission to post this on your blog!)
Please provide the name of your company and the de-dupe product(s) you sell. Please summarize what you think are the key values and differentiators of your wares.
IBM offers two different forms of deduplication. The first is IBM System Storage N series disk system with Advanced Single Instance Storage (A-SIS), and the second is IBM Diligent ProtecTier software. Larry Freeman from NetApp already explains A-SIS in the [comments on Jon's post], so I will focus on the Diligent offering in this post. The key differentiators for Diligent are:
Data agnostic. Diligent does not require content awareness, format awareness, or identification of the backup software used to send the data. No special client or agent software is required on servers sending data to an IBM Diligent deployment.
Inline processing. Diligent does not require temporarily storing data on back-end disk to post-process later.
Scalability. Up to 1PB of back-end disk managed with an in-memory dictionary.
Data Integrity. All data is diff-compared for full 100 percent integrity. No data is accidentally discarded based on assumptions about the rarity of hash collisions.
InfoPro has said that de-dupe is the number one technology that companies are seeking today — well ahead of even server or storage virtualization. Is there any appeal beyond squeezing more undifferentiated data into the storage junk drawer?
Diligent is focused on backup workloads, which offer the best opportunity for deduplication benefits. The two main benefits are:
Keeping more backup data available online for fast recovery.
Mirroring the backup data to another remote location for added protection. With inline processing, only the deduplicated data is sent to the back-end disk, and this greatly reduces the amount of data sent over the wire to the remote location.
Every vendor seems to have its own secret sauce de-dupe algorithm and implementation. One, Diligent Technologies (just acquired by IBM), claims that their’s is best because it collapses two functions — de-dupe then ingest — into one inline function, achieving great throughput in the process. What should be the gating factors in selecting the right de-dupe technology?
As with any storage offering, the three gating factors are typically:
Will this meet my current business requirements?
Will this meet my future requirements for the next 3-5 years that I plan to use this solution?
What is the Total Cost of Ownership (TCO) for the next 3-5 years?
Assuming you already have backup software operational in your existing environment, it is possible to determine the necessary ingest rate: how many "Terabytes per Hour" (TB/h) must be received, processed and stored from the backup software during the backup window? IBM intends to document its performance test results of specific software/hardware combinations to provide guidance for clients' purchase and planning decisions.
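To make that concrete, here is a back-of-the-envelope calculation in Python, using purely hypothetical numbers (the 20TB nightly backup and 8-hour window are my own assumptions for illustration):

    # Rough ingest-rate sizing for a dedupe appliance (hypothetical numbers).
    backup_size_tb = 20.0  # assumed total data sent by the backup software each night
    window_hours = 8.0     # assumed backup window

    required_tb_per_hour = backup_size_tb / window_hours
    required_mb_per_sec = required_tb_per_hour * 1e6 / 3600

    print("Required ingest rate: %.1f TB/h" % required_tb_per_hour)   # 2.5 TB/h
    print("Sustained throughput: %.0f MB/sec" % required_mb_per_sec)  # ~694 MB/sec

If the appliance cannot sustain that rate, either the backup window stretches or the backups do not finish.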
For post-process deployments, such as the IBM N series A-SIS feature, the "ingest rate" during the backup only has to receive and store the data, and the rest of the 24-hour period can be spent doing the post-processing to find duplicates. This might be fine now, but as your data grows, you might find your backup window growing, and that leaves less time for the post-processing to catch up. IBM Diligent does the processing inline, so it is unaffected by an expansion of the backup window.
IBM Diligent can scale up to 1PB of back-end data, and the ingest rate does not suffer as more data is managed.
As for TCO, post-process solutions must have additional back-end storage to temporarily hold the data until the duplicates can be found. With IBM Diligent's inline methodology, only deduplicated data is stored, so less disk space is required for the same workloads.
Despite the nuances, it seems that all block level de-dupe technology does the same thing: removes bit string patterns and substitutes a stub. Is this technically accurate or does your product do things differently?
IBM Diligent emulates a tape library, so the incoming data appears as files to be written sequentially to tape. A file is a string of bytes. Unlike block-level algorithms that divide files up into fixed chunks, IBM Diligent performs diff-compares of incoming data with existing data, and identifies ranges of bytes that duplicate what is already stored on the back-end disk. The file then becomes a sequence of "extents" representing either unique data or existing data, and is stored as a sequence of pointers to these extents. An extent can vary from 2KB to 16MB in size.
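As a mental model only, and not Diligent's actual code, a deduplicated file can be pictured as an ordered list of extent pointers, some referencing newly stored bytes and some referencing byte ranges already on disk:

    # Toy model of an extent-mapped file; all names and sizes are illustrative.
    class Extent:
        def __init__(self, offset, length, is_unique):
            self.offset = offset        # byte offset into the back-end disk pool
            self.length = length        # 2KB to 16MB in the real system
            self.is_unique = is_unique  # True = new bytes, False = existing data

    # A "file" is just the ordered sequence of extents needed to rebuild it.
    virtual_tape_file = [
        Extent(offset=0,     length=4096,    is_unique=True),   # new bytes
        Extent(offset=81920, length=1048576, is_unique=False),  # duplicate range
        Extent(offset=4096,  length=16384,   is_unique=True),   # more new bytes
    ]

    def reconstitute(extents, read_bytes):
        # Reading the extents back in order yields a bit-perfect copy.
        return b"".join(read_bytes(e.offset, e.length) for e in extents)

Here read_bytes is a stand-in for whatever routine fetches a byte range from the back-end disk.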
De-dupe is changing data. To return data to its original state (pre-de-dupe) seems to require access to the original algorithm plus stubs/pointers to bit patterns that have been removed to deflate data. If I am correct in this assumption, please explain how data recovery is accomplished if there is a disaster. Do I need to backup your wares and store them off site, or do I need another copy of your appliance or software at a recovery center?
For IBM Diligent, all of the data needed to reconstitute the data is stored on back-end disks. Assuming that all of your back-end disks are available after the disaster, either the original or mirrored copy, then you only need the IBM Diligent software to make sense of the bytes written to reconstitute the data. If the data was written by backup software, you would also need compatible backup software to recover the original data.
De-dupe changes data. Is there any possibility that this will get me into trouble with the regulators or legal eagles when I respond to a subpoena or discovery request? Does de-dupe conflict with the non-repudiation requirements of certain laws?
I am not a lawyer, and certainly there are aspects of [non-repudiation] that may or may not apply to specific cases.
What I can say is that storage is expected to return back a "bit-perfect" copy of the data that was written. There are laws against changing the format. For example, suppose an original document in Microsoft Word format is converted and saved instead as an Adobe PDF file. In many conversions, it would be difficult to recreate the bit-perfect copy; certainly, it would be difficult to recreate the bit-perfect MS Word format from a PDF file. Laws in France and Germany specifically require that the original bit-perfect format be kept.
Based on that, IBM Diligent is able to return a bit-perfect copy of what was written, same as if it were written to regular disk or tape storage, because all data is diff-compared byte-for-byte with existing data.
In contrast, other solutions based on hash codes have collisions that result in presenting a completely different set of data on retrieval. If the data you are trying to store happens to have the same hash code calculation as completely different data already stored on a solution, then it might just discard the new data as a "duplicate". The chance of a collision might be rare, but could be enough to put doubt in the minds of a jury. For this reason, IBM N series A-SIS, which does perform hash code calculations, will do a full byte-for-byte comparison to ensure that data is indeed a duplicate of an existing block stored.
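A minimal sketch of this "verify before discarding" idea, assuming a simple in-memory chunk store keyed by hash (my illustration, not the A-SIS implementation):

    import hashlib

    store = {}  # hash -> chunk bytes already on disk

    def ingest(chunk):
        key = hashlib.sha1(chunk).hexdigest()
        if key in store:
            # The hash matched, but do a full byte-for-byte compare before
            # declaring a duplicate -- never trust the hash alone.
            if store[key] == chunk:
                return key  # true duplicate, store nothing new
            raise RuntimeError("hash collision: same hash, different bytes")
        store[key] = chunk  # unique data, keep it
        return key

Without that inner comparison, a rare collision would silently discard good data.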
Some say that de-dupe obviates the need for encryption. What do you think?
I disagree. I've been to enough [Black Hat] conferences to know that it would be possible to read the data off the back-end disk, using a variety of forensic tools, and piece together strings of personal information, such as names, social security numbers, or bank account codes.
Currently, IBM provides encryption on real tape (both TS1120 and LTO-4 generation drives), and is working with open industry standards bodies and disk drive module suppliers to bring similar technology to disk-based storage systems. Until then, clients concerned about encryption should consider OS-based or application-based encryption from the backup software. IBM Tivoli Storage Manager (TSM), for example, can encrypt the data before sending it to the IBM Diligent offering, but this might reduce the number of duplicates found if different encryption keys are used.
Some say that de-duped data is inappropriate for tape backup, that data should be re-inflated prior to write to tape. Yet, one vendor is planning to enable an “NDMP-like” tape backup around his de-dupe system at the request of his customers. Is this smart?
Re-constituting the data back to the original format on tape allows the original backup software to interpret the tape data directly to recover individual files. For example, IBM TSM software can write its primary backup copies to an IBM Diligent offering onsite, and have a "copy pool" on physical tape stored at a remote location. The physical tapes can be used for recovery without any IBM Diligent software in the event of a disaster. If the IBM Diligent back-end disk images are lost, corrupted, or destroyed, IBM TSM software can point to the "copy pool" and be fully operational. Individual files or servers could be restored from just a few of these tapes.
An NDMP-like tape backup of a deduplicated back-end disk would require that all the tapes are intact, available, and fully restored to new back-end disk before the deduplication software could do anything. If a single cartridge from this set were unreadable or misplaced, it might impact the access to many TBs of data, or render the entire system unusable.
In the case of 1PB of back-end disk for IBM Diligent, you would have to recover over a thousand tapes back to disk before you could recover any individual data from your backup software. Even with dozens of tape drives in parallel, the complete process could take several days. This represents a longer "Recovery Time Objective" (RTO) than most people are willing to accept.
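The arithmetic is sobering. With assumed round numbers for the tape generation and drive count:

    # Hypothetical restore-time estimate for re-inflating 1PB from tape.
    back_end_pb = 1.0
    tape_capacity_gb = 800.0  # e.g., LTO-4 native capacity (my assumption)
    drive_speed_mb_s = 120.0  # per-drive sustained rate (my assumption)
    drives = 24               # "dozens of tape drives in parallel"

    cartridges = back_end_pb * 1e6 / tape_capacity_gb            # 1250 tapes
    seconds = back_end_pb * 1e9 / (drive_speed_mb_s * drives)
    print("Cartridges to read back: %d" % cartridges)
    print("Best-case restore time: %.1f days" % (seconds / 86400))  # ~4 days

And that is before the backup software even begins restoring individual files.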
Some vendors are claiming de-dupe is “green” — do you see it as such?
Certainly, "deduplicated disk" is greener than "non-deduplicated" disk, but I have argued in past posts, supported by analyst reports, that it is not as green as storing the same data on "non-deduplicated" physical tape.
De-dupe and VTL seem to be joined at the hip in a lot of vendor discussions: Use de-dupe to store a lot of archival data on line in less space for fast retrieval in the event of the accidental loss of files or data sets on primary storage. Are there other applications for de-duplication besides compressing data in a nearline storage repository?
Deduplication can be applied to primary data, as in the case of the IBM System Storage N series A-SIS. As Larry suggests, MS Exchange and SharePoint could be good use cases that represent the possible savings from squeezing out duplicates. On the mainframe, many master-in/master-out tape applications could also benefit from deduplication.
I do not believe that deduplication products will run efficiently with "update in place" applications, that is, workloads with high levels of random writes for non-appending updates. OLTP and database workloads would not benefit from deduplication.
Just suggested by a reader: What do you see as the advantages/disadvantages of software based deduplication vs. hardware (chip-based) deduplication? Will this be a differentiating feature in the future… especially now that Hifn is pushing their Compression/DeDupe card to OEMs?
In general, new technologies are introduced in software first, and then, as implementations mature, move into hardware to improve performance. The same was true for RAID, compression, encryption, etc. The Hifn card does "hash code" calculations that do not benefit the current IBM Diligent implementation. Currently, IBM Diligent performs LZH compression through software, but certainly IBM could provide hardware-based compression with an integrated hardware/software offering in the future. Since IBM Diligent's inline process is so efficient, the bottleneck in performance is often the speed of the back-end disk. IBM Diligent can get an improved "ingest rate" using FC instead of SATA disk.
Sorry, Jon, that it took so long to get back to you on this, but since IBM had just acquired Diligent when you posted, it took me a while to investigate and research all the answers.
I'm glad to be back home in Tucson for a few weeks. All of these conferences kept me from catching up with what was going on in the blogosphere.
A few of us at IBM found it odd that EMC would announce their new Geographically Dispersed Disaster Restart (GDDR) the week BEFORE their "EMC World" conference. Why not announce all of the stuff at once at the conference? Were they worried that the admission that "Maui" software is still many months away would carry that much of a negative stigma? The decision probably went something like this:
EMCer #1: GDDR is finally ready. Should we announce now, or wait ONE week to make it part of the things we announce at EMC World?
EMCer #2: We are not announcing much at EMC World, and what people really want us to talk about, Maui, we aren't delivering for a while. Why can't people understand we are a company of hardware engineers, not software programmers! So, better not to be associated with that quagmire at all.
EMCer #1: Yes, boss, I see your point. We'll announce this week then.
My fellow blogger and intellectual sparring partner, Barry Burke, on his Storage Anarchist blog, posted [are you wasting money on your mainframe dr solution?] to bring up the GDDR announcement. The key difference is that IBM GDPS works with IBM, EMC and HDS equipment, being the fair-and-balanced folks that IBM clients have come to expect, but it appears EMC GDDR works only with EMC equipment. Because GDDR does less, it also costs less. I can accept that. You get what you pay for. Of course, IBM does have a variety of protection levels; one probably will meet your budget and your business continuity needs.
To correct Barry's misperception, companies that buy IBM mainframe servers do have a choice. They can purchase their operating system from IBM, get their Linux or OpenSolaris from someone else like Red Hat or Novell, or build their own OS distribution from readily available open source. And unlike other servers that might require at least one OS partition from the vendor, IBM mainframes can run 100 percent Linux. GDPS supports a mix of OS data; z/OS and Linux data can all be managed by GDPS. Companies that own mainframes know this. I can forgive the misperception from Barry, as EMC is focused on distributed servers instead, and many in their company may not have much exposure to mainframe technology, or may never have spoken to mainframe customers.
But what almost had me fall out of my chair was this little nugget from his post:
"If you're an IBM mainframe customer, you are - by definition - IBM's profit stream."
Honestly, is there anyone out there who does not realize that IBM is a for-profit corporation? In contrast, Barry would like his readers to believe that EMC is selling GDDR at cost, and that EMC is a non-profit organization. While IBM has been delivering actual solutions that our clients want, EMC continues to rumor that someday they might get around to offering something worthwhile. In the last six months, the shareholders have interpreted both strategies for what they really are, and the stock prices reflect that:
(Disclosure: I own IBM stock. I do not own EMC stock. Stock price comparisons by Yahoo were based on publicly reported information. The colors blue and red, to represent IBM and EMC respectively, were selected by Yahoo's graph-making facility. The color red does not necessarily imply EMC is losing money or having financial troubles.)
Of course, I for one would love to help Barry's dream of EMC non-profitability come true. If anyone has any suggestions how we can help EMC approach this goal, please post a comment below.
Well, it's Tuesday again, and we had several announcements this month, so here is a quick recap. We had some things announced May 13, and then some more announcements today, but since I was busy with conferences, I will combine them into one post for the entire month of May 2008.
This time, I thought I would go "audio" with a recording from Charlie Andrews, IBM director of product marketing for IBM System Storage:
Continuing my summary of Pulse 2008, the premier service management conference focusing on IBM Tivoli solutions, I attended and presented breakout sessions on Monday afternoon.
Tivoli Storage "State-of-the-Subgroup" update
Kelly Beavers, IBM director of Tivoli Storage, presented the first breakout for all of the Tivoli Storage subgroup. Tivoli has several subgroups, but Tivoli Storage leads all the others in revenue and profit. Tivoli Storage has the top-performing business partner channel of any subgroup in IBM's Software Group division. IBM is the world's #1 storage vendor (hardware, software and services), so this came as no surprise to most of the audience.
Looking at just the Storage Software segment, it is estimated that customers will spend $3.5 billion US dollars more in the year 2011 than they did in 2007. IBM is #2 or #3 in each of the four major categories: Data Protection, Replication, Infrastructure management, and Resource management. In each category, IBM is growing market share, often taking away share from the established leaders.
There was a lot of excitement over the FilesX acquisition. I am still trying to learn more about this, but what I have gathered so far is that it can:
Like turning a "knob", adjust the level of backup protection from traditional discrete scheduled backups, to more frequent snapshots, to continuous data protection (CDP). In the past, you often used separate products or features to do these three.
Perform "instantaneous restore" by performing a virtual mount of the backup copy. This gives the appearance that the restore is complete.
This year marks the 15th anniversary of IBM Tivoli Storage Manager (TSM), with over 20,000 customers. Also, this year marks the 6th year for IBM SAN Volume Controller, having sold over 12,000 SVC engines to over 4,000 customers.
Data Protection Strategies
Greg Tevis, IBM software architect for Tivoli Technical Strategy, and I presented this overview of data protection. We covered three key areas:
Protecting against unethical tampering with Non-erasable, Non-rewriteable (NENR) storage solutions
Protecting against unauthorized access with encryption on disk and tape
Protecting against unexpected loss or corruption with the seven "Business Continuity" tiers
There was so much interest in the first two topics that we only had about 9 minutes left to cover the third! Fortunately, Business Continuity will be covered in more detail throughout the week.
Henk de Ruiter from ABN Amro bank presented his success story implementing Information Lifecycle Management (ILM) across his various data centers using IBM systems, software and services.
Making your Disk Systems more Efficient and Flexible
I did not come up with the titles of these presentations. The team that did specifically chose to focus on the "business value" rather than the "products and services" being presented. In this session, Dave Merbach, IBM software architect, and I presented how SAN Volume Controller (SVC), TotalStorage Productivity Center, System Storage Productivity Center, Tivoli Provisioning Manager and Tivoli Storage Process Manager work to make your disk storage more efficient and flexible.
I attended the main tent sessions on Day 2 (Monday). The focus was on Visibility, Control and Automation.
Steve is IBM senior VP and Group Executive of the IBM Software Group, and presented some insightful statistics from the IBM Global Technology Outlook study, some recent IBM wins, and other nuggets of IT trivia:
In 2001, there were about 60 million transistors per human being. By 2010, this is estimated to increase to one billion per human being
In 2005, there were about 1.3 billion RFID tags; by 2010 this is estimated to grow to over 30 billion
IBM helped the City of Stockholm, Sweden, reduce traffic congestion 20-25% using computer technology
Only about 25% of data is original; the remaining 75% is replicated
In 2007, there were approximately 281 Exabytes (EB) of data, expected to increase to 1800 EB by the year 2011
70 percent of unstructured data is user-created content, but 85 percent of this will be managed by enterprises
Only 20% of data is subject to compliance rules and standards, and about 30% subject to security applications
Human error is the primary reason for breaches, with 34% of organizations experiencing a major breach in 2006
10% of IT budget is energy costs (power and cooling), and this could rise to 50% in the next decade
30 to 60 percent of energy is wasted. During the next 5 years, people will spend as much on energy as they will on new hardware purchases.
Al Zollar is the General Manager of IBM Tivoli. He discussed the 20-some recent software acquisitions, including Encentuate and FilesX earlier this year.
"The time has come to fully industrialize operations" -- Al Zollar
What did Al mean by "industrialize"? This is the closed-loop approach of continuous improvement, including design, delivery and management.
Al used several examples from other industries:
Henry Ford used standardized parts and process automation. Assembly of an automobile went from 12 hours by master craftsmen to a new Model T coming off the assembly line every 23 seconds.
Power generation was developed by Thomas Edison. A satellite picture showed the extent of the [Blackout of 2003 in Northeast US and Canada]. The time for the "smart grid" has arrived, making sensors and meters more intelligent. This allows non-essential IP-enabled appliances in our home or office to be turned off to reduce energy consumption.
[McCarran International Airport] integrated the management of 13,000 assets with IBM Tivoli Maximo Enterprise Asset Management (EAM) software, and was able to increase revenues through more accurate charge-back. Unlike traditional Enterprise Resource Planning (ERP) applications, EAM offers deep management of four areas: production equipment, facilities, transportation, and IT.
When compared to these other industries, management of IT is in its infancy. The expansion of [Web 2.0] and Service-Oriented Architecture [SOA] is driving this need. What people need is a "new enterprise data center" that IBM Tivoli software can help you manage across operational boundaries. IBM can integrate through open standards with management software from Cisco, Sun, Oracle, Microsoft, CA, HP, BMC Software, Alcatel Lucent, and SAP. Together with our ecosystem of technology partners, IBM is meeting these challenges.
IBM clients have achieved return on investment from getting better control of their environment. This week there are client experience presentations from Sandia National Labs, Spirit AeroSystems, Bank of America, and BT Converged Communication Services.
Chris O'Connor used some of his staff as "actors" to show an incredible live demo of various Tivoli and Maximo products for the mythical launch of "Project Vitalize", the new online web store for a new "Aero Z bike" from the mythical VCA Bike and Motorcycle company.
Shoel Perelman played the role of "CIO". The CIO locked down all spending, and asked the IT staff to make the shift from bricks-and-mortar to web sales of this new product in 15 months. While the company and situation were mythical, all the products that were part of the live demo are readily available. The CIO had three goals:
What do we have? Where is it? What's connected to what? Traditionally, these would be answered from lists in spreadsheets. The CIO had a goal to deploy IBM Tivoli Application Dependency Discovery Manager (TADDM), which discovered all hardware and software, with an easy-to-understand view of how each piece serves the business applications.
Each of the teams has processes, and needed them consistent, repeatable, and tightly linked together. Time is often wasted on the phone coordinating IT changes. For this, the CIO had a goal to deploy Tivoli Change and Configuration Management Database (CCMDB) for "strict change control". The process dashboard is accessible to all teams, to see how all projects are progressing. There is also a Compliance dashboard, which identifies all changes by role, clearly spelling out who can do what.
There is a lot of computerized machinery, manufacturing assets and robotics. The CIO set a goal to "do more with existing people", and needed to automate key processes. A sales rep wanted to add a new distributor to the key web portal; this was all done through their "service catalog". When they needed to deploy a new application, they were able to find servers with available capacity and adjust using automatic provisioning. Thanks to IBM, the IT staff no longer get paged at 3am, and fewer days are spent in the "war room". They now have confidence that the launch will be successful.
Ritika Gunnar played the role of "Operations manager". She highlighted five areas:
"Service viewer" dashboard with green/yellow/red indicators for all of their edge, application and database servers. This allows her to get data 4-5 times faster and more accurately.
Tivoli Enterprise Portal eliminates bouncing around various products.
Tivoli Common Reporting for CPU utilization of all systems helps find excess capacity using IBM Tivoli Monitoring
On average, 85 percent of problems are caused by IT changes to the environment. IBM can help find dependencies, so that changes in one area do not impact other areas unexpectedly
Process Automation shows changes that have been completed, in progress, or overdue. She can see all steps in a task or change request. A "workflow" automates all the key steps that need to be taken.
Laura Knapp played the role of "Facilities manager". She wanted to see all processes that apply to her work using a role-based process dashboard. The advantage of using IBM is that it changes work habits, reduces overtime by 42 percent, and improves morale. The IT staff now works as a team, collaborates more, and jobs get done faster with fewer mistakes. Employees are online, accessing, monitoring and managing data quicker, in days not weeks.
IBM Tivoli Enterprise Portal (TEP) served as a common vehicle. She was able to pull up the floor plan online, displaying all of the managed assets and mapped features. With the temperature overlay from Maximo Spatial, she was able to review hot spots on the data center floor. Heat can cause servers to fail or shut down.
A power utilization chart at peak loads lets her anticipate, predict and watch power consumption, and justified replacement with newer, more energy-efficient equipment.
The CIO got back on stage, and explained the great success of the launch. They use web store usage tracking, security tools tracking all new registrations, and tracking of server and storage load. It now only takes hours, not weeks, to add new business partners and distributors. Tivoli Service Quality Assurance tools track all orders placed, processed, and shipped. Faster responsiveness is a competitive advantage. Their IT department is no longer seen as a stodgy group, but as a world-class organization.
The live demo showed how IBM can help clients with rapid decision making, speed and accuracy of change processes, and automation to take actions quickly. The result is a strong return on investment (ROI).
Liz Smith, IBM General Manager of Infrastructure Services, presented the results of an IBM survey of CEOs and CIOs asking questions like: What is the next big impact? Where are you investing? What will the new data center look like?
The five key traits they found for companies of the future:
They were hungry for change
Innovative beyond customer imagination
Globally integrated
Disruptive by nature
Genuine, not just generous
The IT infrastructure must be secure, reliable, and flexible. Taking care of the environment is a corporate responsibility, not just a way to reduce costs.
The five entry points for IBM Service Management: Integrate, Industrialize, Discover, Monitor and Protect. IBM Service Management and compliance are critical for the Globally Integrated Enterprise, with repeatable, scalable and consistent processes that enable change through an automated workflow. This reduces errors, risks and costs, and improves productivity. IBM has the talent, assets and experience to help any client get there.
Lance Armstrong lives in Austin, TX, where IBM Tivoli is headquartered, so this made him a good choice as a keynote speaker. He is best known for winning seven "Tour de France" bicycle races in a row, but he instead gave an inspirational talk about how he survived cancer.
In 1996, Lance was diagnosed with cancer. Surprisingly, he said it was the greatest thing that happened to him, giving him a new perspective on his life, family and the sport of bicycling. Back then, there wasn't a WebMD, Google or other Web 2.0 social networking sites for Lance to better understand what he was going through, learn more about treatment options, or find others going through the same ordeal.
After his treatment, he was considered "damaged goods" by many of the leading European bicycle teams. So, he joined the US Postal Service team, not known for their wins, but often invited to sell TV rights to American audiences. Collaborating with his coaches and other members of his team, he revolutionized the sport of bicycling, analyzed everything about the race, and built up morale. He won the first "yellow jersey" in 1999, and did so each year for a total of seven wins.
Lance formed the [Livestrong foundation] to help other cancer survivors. Nike came to him and proposed donating 5 million "rubber bracelets", colored yellow to match his yellow jerseys, with the name "Livestrong" embossed on them, that his foundation could then sell for one dollar apiece to raise funds. What some thought was a silly idea at first has started a movement. At the 2004 Olympics, many athletes from all nations and religious backgrounds wore these yellow bracelets to show solidarity with this cause. To date, the foundation has sold over 72 million yellow bracelets, and these have served to provide a symbol, a brand, a color identity, to his cause.
He explained that doctors have a standard speech for cancer patients. As a patient, you can go out this doorway and never tell anyone, keeping the situation private. Or you can go out this other doorway and tell everybody your story. Lance chose the latter, and he felt it was the best decision he ever made. He wrote a book titled [It's Not About the Bike: My Journey Back to Life].
His call to action for the audience: find out what you can do to make a difference. A million non-governmental organizations [NGOs] have started in the past 10 years. Don't just give cash; also give your time and passion.
Continuing this week in Los Angeles, I went to some interesting sessions today at the Systems Technical Conference (STC08).
System Storage Productivity Center (SSPC) - Install and Configuration
Dominic Pruitt, an IBM IT specialist in our Advanced Technical Support team, presented SSPC and how to install and configure it. For those confused about the difference between TotalStorage Productivity Center and System Storage Productivity Center, the former is pure software that you install on a Windows or Linux server, and the latter is an IBM server, pre-installed with Windows 2003, TotalStorage Productivity Center software, TPCTOOL command line interface, DB2 Universal Database, the DS8000 Element Manager, SVC GUI and CIMOM, and [PuTTY] rLogin/SSH/Telnet terminal application software.
Of course, the problem with having a server pre-installed with a lot of software is that there is always someone who wants to customize it further. For those who just want to manage their DS8000 disk systems, for example, it is possible to uninstall the SVC GUI, CIMOM and PuTTY, and re-install them later when you change your mind. As a general rule, it is not wise to mix CIMOMs on the same machine, as it might cause conflicts with TCP ports or Java level requirements, so if you want a different CIMOM than SVC, uninstall the SVC CIMOM first. For those who have SVC, the SSPC replaces the SVC Master Console, so you can safely turn off the SVC CIMOM on your existing SVC Master Consoles.
The base level is TotalStorage Productivity Center "Basic Edition", but you can upgrade to the Productivity Center for Disk, Data and Fabric components with license keys. You can also run Productivity Center for Replication, but IBM recommends adding processor and memory to do this (IBM offers this as an orderable option). Whether you have the TotalStorage software or SSPC hardware, Productivity Center has a cool role-to-groups mapping feature. You can create user groups, either on the Windows server, in Active Directory, or in another LDAP, and then map which roles should be assigned to users in each group.
Since Productivity Center manages a variety of different disk systems, it has made an attempt to standardize some terminology. The term "storage pool" refers to an extent pool on the DS8000, or a managed disk group on the SAN Volume Controller. Since the DS8000 can support both mainframe CKD volumes and LUNs for distributed systems, the term "volume" refers to a CKD volume or LUN, and "disk" refers to the hard disk drive (HDD).
To help people learn Productivity Center, IBM offers single-day "remote workshops" that use Windows Remote Desktop to allow participants to install, customize and use the software with no travel required.
IBM Integrated Approach to Archiving
Dan Marshall, IBM global program manager for storage and data services on our Global Technology Services team, presented IBM's corporate-wide integration to support archive across systems, software and services. One attendee asked me why I was there, given that "archive" is one of my areas of subject matter expertise that I present often at the Tucson Executive Briefing Center. I find it useful to watch others present the material, even material that I helped to develop, to see a different slant or spin on each talking point.
Archive is one area that brings all parts of IBM together: systems, software and services. Dan provided a look at archive from the services angle, providing an objective, unbiased view of the different software and systems available to solve specific challenges.
Encryption Key Manager (EKM) Design and Implementation
Jeff Ziehm, IBM tape technical sales specialist, presented IBM's EKM software, how it works in a tape environment, and how to deploy it in various environments. Since IBM is all about being open and non-proprietary, the EKM software runs on Java on a variety of IBM and non-IBM operating systems. IBM offers the "keytool" command line interface (CLI) for the LTO4 and TS1120 tape systems, and the "iKeyMan" graphical user interface (GUI) for the TS1120. Since it runs on Java, IBM Business Partners and technical support personnel often just [download and install EKM] onto their own laptops to learn how to use it.
Virtual Tape Update
We had three presenters at this one. First, Jeff Mulliken, formerly from Diligent and now a full IBM employee, presented the current ProtecTier software with the HyperFactor technology; then Abbe Woodcock, IBM tape systems, compared Diligent with IBM's TS7520 and just-announced TS7530 virtual tape libraries; and finally Randy Fleenor, IBM tape sales leader, presented IBM's strategy going forward in tape virtualization.
Let's start with Diligent. The ProtecTier software runs on any x86-64 server with at least four cores and the correct Emulex host bus adapter (HBA) cards. Using Red Hat Enterprise Linux (RHEL) as a base, the ProtecTier software performs its deduplication entirely inline at an "ingest rate" of 400-450 MB/sec. This is all possible using a 4GB memory-resident "dictionary table" that can map up to 1 PB of back-end physical storage, which could represent as much as 25PB of "nominal" storage. The server is then point-to-point or SAN-attached to Fibre Channel disk systems.
As we learned yesterday from Toby Marek's session, there are four ways to perform deduplication (a rough sketch contrasting two of them follows the list):
full-file comparisons. Store only one copy of identical files.
fixed-chunk comparisons. Files are carved up into fixed-size chunks, and each chunk is compared or hashed to existing chunks to eliminate duplicates.
variable-chunk comparisons. Variable-length chunks are hashed or diffed to eliminate duplicate data.
content-aware comparisons. If you knew data was in PowerPoint format, for example, you could compare text, photos or charts against other existing PowerPoint files to eliminate duplicates.
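To contrast the two middle approaches, here is a toy sketch of my own (not any product's algorithm): fixed-chunk slices at regular offsets, so inserting a single byte shifts every later boundary, while variable-chunk lets the content itself decide where chunks end:

    def fixed_chunks(data, size=4096):
        # Slice at fixed offsets; one inserted byte shifts all later chunks.
        return [data[i:i + size] for i in range(0, len(data), size)]

    def variable_chunks(data, mask=0x0FFF, min_size=2048):
        # Content-defined chunking with a crude rolling fingerprint;
        # a boundary is declared wherever the fingerprint hits a magic value.
        chunks, start, rolling = [], 0, 0
        for i, byte in enumerate(data):
            rolling = (rolling * 31 + byte) & 0xFFFFFFFF
            if i - start >= min_size and (rolling & mask) == mask:
                chunks.append(data[start:i + 1])
                start = i + 1
        if start < len(data):
            chunks.append(data[start:])
        return chunks

Because boundaries in the second function depend only on nearby bytes, identical regions tend to produce identical chunks even after insertions earlier in the file.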
IBM System Storage N series Advanced Single Instance Storage (A-SIS) uses the fixed-chunk method, and Diligent uses variable-chunk comparisons. Diligent does this using "data profiling". For example, let's say most of my photographs are pictures of people, buildings, landscapes, flowers and IT equipment. When I back these up, the Diligent server "profiles" each, and determines if any existing data has a similar profile that might have at least 50 percent similar content. Diligent then reads in the data that is most likely similar, does a byte-for-byte ["diff" comparison], and creates variable-length chunks that are either identical or unique to sections of the existing data. The unique data is compressed with LZH and written to disk, and the sequential series of pointer segments representing the ingested file is written in a separate section on disk.
That Diligent can represent profiles for 1PB of data in as little as a 4GB memory-resident dictionary is incredible. By comparison, 10TB of data would require 10 million entries on a content-aware solution, and 1.25 billion entries for one based on hash codes.
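Working those figures backwards (my arithmetic, derived from the numbers above), the entry counts imply very different granularities:

    data_tb = 10.0
    content_aware_entries = 10e6   # 10 million entries
    hash_based_entries = 1.25e9    # 1.25 billion entries

    print("Content-aware granularity: ~%.0f KB per entry"
          % (data_tb * 1e9 / content_aware_entries))  # ~1000 KB, about 1MB
    print("Hash-based granularity:    ~%.0f KB per entry"
          % (data_tb * 1e9 / hash_based_entries))     # ~8 KB

A coarser granularity means far fewer entries to keep in memory, which is how a 4GB dictionary can cover 1PB.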
Abbe Woodcock presented the TS7530 tape system that IBM announced on Tuesday. It has some advantages over the current Diligent offering:
Hardware-based compression (TS7520 and Diligent use software-based compression)
1200 MB/sec (faster ingest rate than Diligent)
1.7PB of SATA disk (more disk capacity than Diligent)
Support for i5/OS (Diligent's emulation of ATL P3000 with DLT7000 tapes not supported on IBM's POWER systems running i5/OS)
Ability to attach a real tape library
NDMP backup to tape
tape "shredding" (virtual equivalent of degaussing a physical tape to erase all previously stored data)
Randy Fleenor wrapped up the session telling us IBM's strategy going forward with all of the virtual tape systems technologies. Until then, IBM is working on "recipes" or "bundles", putting Diligent software with specific models of IBM System x servers and IBM System Storage DS4000 disk systems to avoid the "do-it-yourself" problems of its current software-only packaging.
Understanding Web 2.0 and Digital Archive Workloads
I got to present this in the last time slot of the day, just before everyone headed off to the [Westin Bonaventure hotel] for our big fancy barbecue dinner. Like my previous session on IBM Strategy, this session was more oriented toward a sales audience, but both garnered a huge turn-out and were well-received by the technical attendees.
This session was requested because these new applications and workloads are what is driving IBM to acquire small start-ups like XIV, deploy Scale-Out File Services (SOFS), and develop the innovative iDataPlex server rack.
The session was fun because it was a mix of explanation of the characteristics of Web 2.0 services; my own experience as a blogger and user of Google Docs, Flickr, Second Life and TiVo; and an exploration of how database and digital archives will impact the growth in computing and storage requirements.
I'll expand on some of these topics in later blog posts.
My session was the first in the morning, at 8:30am, but managed to pack the room full of people. A few looked like they had just rolled in from Brocade's special get-together in Casey's Irish Pub the night before. I presented how IBM's storage strategy for the information infrastructure fits into the greater corporate-wide themes. To liven things up, I gave out copies of my book [Inside System Storage: Volume I] to those who asked or answered the toughest questions.
Data Deduplication and IBM Tivoli Storage Manager (TSM)
IBM's Toby Marek compared and contrasted the various data deduplication technologies and products available, and how to deploy them as the repository for TSM workloads. She is a software engineer for our TSM software product, and gave a fair comparison between IBM System Storage N series Advanced Single Instance Storage (A-SIS), IBM Diligent, and other solutions out in the marketplace. If you are going to combine technologies, then it is best to dedupe first, then compress, and finally encrypt the data. She also explained the many clever ways that TSM does data reduction at the client side, which greatly reduces the bandwidth traffic over the LAN, as well as reducing disk and tape resources for storage. This includes progressive "incremental forever" backup for file selection, incremental backups for databases, and adaptive sub-file backup. Because of these data reduction techniques, you may not get as much benefit as deduplication vendors claim.
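The ordering matters because well-encrypted output looks like random data, so duplicates vanish, and already-compressed data dedupes poorly for the same reason. A sketch of the recommended pipeline, with stand-in names of my own choosing for the store and cipher:

    import zlib

    def protect(chunks, is_duplicate, encrypt):
        # 1. Dedupe first, while identical chunks still look identical.
        unique = [c for c in chunks if not is_duplicate(c)]
        # 2. Then compress the surviving unique data.
        compressed = [zlib.compress(c) for c in unique]
        # 3. Encrypt last; doing it first would randomize every chunk
        #    and defeat both deduplication and compression.
        return [encrypt(c) for c in compressed]

Reverse the order, encrypting first, and two identical chunks under different keys will never match.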
The Business Value of Energy Efficiency Data Centers
Scott Barielle did a great job presenting the issues related to the green IT data center. He is part of IBM's "STG Lab Services" team that does energy efficiency studies for customers. It is not unusual for his team to find potential savings of up to 80 percent of the watts consumed in a client's data center.
IBM has done a lot to make its products more energy efficient. For example, in the United States, most data centers are supplied three-phase 480V AC current, but this is often stepped down to 208V or 110V with power distribution units (PDUs). IBM's equipment allows for direct connection to this 480V, eliminating the step-down loss. This is available for the IBM System z mainframe, the IBM System Storage DS8000 disk system, and larger full-frame models of our POWER-based servers, and will probably be rolled out to some of our other offerings later this year. The end result saves 8 to 14 percent in energy costs.
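A rough illustration of what 8 to 14 percent can mean, with assumed figures for a modest data center (the load and electricity rate are my hypothetical inputs):

    # Hypothetical annual savings from removing the 480V-to-208V step-down loss.
    it_load_kw = 500.0       # assumed average data center load
    rate_per_kwh = 0.10      # assumed US electricity rate, in dollars
    savings_fraction = 0.11  # middle of the 8-14 percent range quoted

    annual_kwh = it_load_kw * 24 * 365  # 4,380,000 kWh per year
    savings = annual_kwh * rate_per_kwh * savings_fraction
    print("Annual savings: ${:,.0f}".format(savings))  # about $48,000

Not a fortune on its own, but free money once the equipment supports the higher voltage.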
Scott had some interesting statistics. Typical US data centers spend only about 9 percent of their IT budget on power and cooling costs. The majority of clients that engage IBM for an energy efficiency study are not trying to reduce their operational expenditures (OPEX), but have run out, or are close to running out, of the total kW rating of their current facility, and have been turned down by their upper management to spend the average $20 million USD needed to build a new one. The cost of electricity in the USA has risen very slowly over the past 35 years, and is tied more to fluctuations in natural gas than to oil prices. (A recent article in the Dallas News confirmed this: ["As electricity rates go up, natural gas' high prices, deregulation blamed"])
Cognos v8 - Delivering Operational Business Intelligence (BI) on Mainframe
Mike Biere, author of the book [Business Intelligence for the Enterprise], presented Cognos v8 and how it is being deployed for the IBM System z mainframe. Typically, customers do their BI processing on distributed systems, but 70 percent of the world's business data is on mainframes, so it makes sense to do your BI there as well. Cognos v8 runs on Linux for System z, connecting to z/OS via [Hypersockets].
There are a variety of other BI applications on the mainframe already, including DataQuant, AlphaBlox, IBI WebFocus and SAS Enterprise Business Intelligence. In addition to accessing traditional online transaction processing (OLTP) repositories like DB2, IMS and VSAM using the [IBM WebSphere Classic Federation Server], Cognos v8 can also read Lotus databases.
Business Intelligence is traditionally query, reporting and online analytical processing (OLAP) for the top 10 to 15 percent of the company, mostly executives and analysts, for activities like business planning, budgeting and forecasting. Cognos PowerPlay stores numerical data in an [OLAP cube] for faster processing. OLAP cubes are typically constructed with a batch cycle, using either "Extract, Transform, Load" [ETL] or "Change Data Capture" [CDC], which plays to the strength of IBM System z mainframe batch processing capabilities. If you are not familiar with OLAP, Nigel Pendse has an article [What is OLAP?] for background information.
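For readers unfamiliar with the cube idea, here is a tiny illustration of my own (not Cognos code): a cube is essentially the set of pre-computed aggregates across every combination of dimensions, built in batch so that queries become simple lookups.

    from itertools import product
    from collections import defaultdict

    # Fact rows extracted from the OLTP system during the batch ETL cycle.
    facts = [
        {"region": "East", "product": "Widget", "quarter": "Q1", "sales": 120},
        {"region": "East", "product": "Gadget", "quarter": "Q1", "sales": 75},
        {"region": "West", "product": "Widget", "quarter": "Q2", "sales": 200},
    ]
    dims = ("region", "product", "quarter")

    # Roll up every combination of dimensions ("*" = that dimension collapsed).
    cube = defaultdict(int)
    for row in facts:
        for mask in product((True, False), repeat=len(dims)):
            key = tuple(row[d] if keep else "*" for d, keep in zip(dims, mask))
            cube[key] += row["sales"]

    print(cube[("East", "*", "Q1")])  # 195: all Q1 sales in the East

Real OLAP engines are vastly more sophisticated, but the batch-build-then-lookup pattern is the same, which is why mainframe batch strengths fit so well.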
Over the past five years, BI is being deployed more and more for the rest of the company, knowledge workers tasked with doing day-to-day operations. This phenomenon is being called "Operational" Business Intelligence.
IBM's Glen Corneau, who is on the Advanced Technical Support team for AIX and System p, presented the IBM General Parallel File System (GPFS), which is available for AIX, Linux-x86 and Linux on POWER. Unfortunately, many of the questions were related to Scale Out File Services (SOFS), which my colleague Glenn Hechler was presenting in another room during this same time slot.
GPFS is now in its 11th release since its introduction in 1997. All of the IBM supercomputers on the [Top 500 list] use GPFS. The largest deployment of GPFS is 2241 nodes. A GPFS environment can support up to 256 file systems, and each file system can have up to 2 billion files across 2 PB of storage. GPFS supports "Direct I/O", making it a great candidate for Oracle RAC deployments. Oracle 10g automatically detects if it is using GPFS, and sets the appropriate DIO bits in the stream to take advantage of GPFS features.
Glen also covered the many new features of GPFS, such as the ability to place data on different tiers of storage, with policies to move data to lower tiers of storage, or delete it after a certain time period, all concepts we call Information Lifecycle Management. GPFS also supports access across multiple locations and offers a variety of choices for disaster recovery (DR) data replication.
Perhaps the only problem with conferences like this is that it can be an overwhelming ["fire hose"] of information!
My theme this week was to focus on "Do-it-Yourself" solutions, such as the "open storage" concept presented by Sun Microsystems, but it has morphed into a discussion on vendor lock-in. Both deserve a bit of further exploration.
There were several reasons offered on why someone might pursue a "Do-it-Yourself" course of action.
Building up skills
In my post [Simply Dinners and Open Storage], I suggested that building a server-as-storage solution based on Sun's OpenSolaris operating system could serve to learn more about [OpenSolaris], and by extension, the Solaris operating system. Like Linux, OpenSolaris is open source and has distributions that run on a variety of chipsets, from Sun's own SPARC, to commodity x86 and x86-64 hardware. And as I mentioned in my post [Getting off the island], a version of OpenSolaris was even shown to run successfully on the IBM System z mainframe.
"Learning by Doing" is a strong part of the [Constructivism] movement in education. The One Laptop Per Child [OLPC] uses this approach. IBM volunteers in Tucson and 40 other sites [help young students build robots] constructed from [Lego Mindstorms] building blocks. Edward De Bono uses the term [operacy] to refer to the "skills of doing", preferred over just "knowing" facts and figures.
However, I feel OpenSolaris is late to the game. Linux, Windows and MacOS are all well-established x86-based operating systems that most home office/small office users would be familiar with, and OpenSolaris is positioning itself as "the fourth choice".
In my post [Washington Gets e-Discovery Wakeup Call], I suggested that the primary motivation for the White House to switch from Lotus Notes over to Microsoft Outlook was familiarity with Microsoft's offerings. Unfortunately, that also meant abandoning a fully-operational automated email archive system for a manual do-it-yourself approach of copying PST files from journal folders.
Familiarity also explains why other government employees might print out their emails and archive them on paper in filing cabinets. They are familiar with this process; it allows them to treat email in the same manner as they have treated paper documents in the past.
Cost, Control and Unique Requirements
The last category of reasons can often result if what you want is smaller or bigger than what is available commercially. There are minimum entry-points for many vendors. If you want something so small that it is not profitable, you may end up doing it yourself. On the other end of the scale, both Yahoo and Google ended up building their data centers with a do-it-yourself approach, because no commercial solutions were available at the time. (IBM now offers [iDataPlex], so that has changed!)
While you could hire a vendor to build a customized solution to meet your unique requirements, it might turn out to be less costly to do it yourself. This might also provide some added control over the technologies and components employed. However, as EMC blogger Chuck Hollis correctly pointed out for [Do-it-yourself storage], your solution may not be less costly than existing off-the-shelf solutions from existing storage vendors, when you factor in scalability and support costs.
Of course, this all assumes that the storage admins building the do-it-yourself storage have enough spare time to do so. When was the last time your storage admins had spare time of any kind? Will your storage admins provide the 24x7 support you could get from established storage vendors? Will they be able to fix the problem fast enough to keep your business running?
From this, I would gather that if you have storage admins more familiar with Solaris than Linux, Windows or MacOS, and select commodity x86 servers from IBM, Sun, HP, or Dell, they could build a solution that has less vendor lock-in than something off-the-shelf from Sun. Let's explore the fears of vendor lock-in further.
The storage vendor goes out of business
Sun has not been doing so well, so perhaps "open storage" was a way to warn existing Sun storage customers that building your own may be the next alternative. The title of the New York Times article says it all: ["Sun Microsystems Posts Loss and Plans to Reduce Jobs"]. Sun is a big company, so I don't expect them to close their doors entirely this year, but certainly the fear of being locked in to any storage vendor's solution gets worse if you fear the vendor might go out of business.
The storage vendor will get acquired by a vendor you don't like
We've seen this before. You don't like vendor A, so you buy kit from vendor B, only to have vendor A acquire vendor B after your purchase. Surprise!
The storage vendor will not support new applications, operating systems, or other new equipment
Here the fear is that the decisions you make today might prevent you from choices you want to make in the future. You might want to upgrade to the latest level of your operating system, but your storage vendor doesn't support it yet. Or maybe you want to upgrade your SAN to a faster bandwidth speed, like 8 Gbps, but your storage vendor doesn't support it yet. Or perhaps that change would require re-writing lots of scripts using the existing command line interfaces (CLI). Or perhaps your admins would require new training for the new configuration.
The storage vendor will raise prices or charge you more than you expect on follow-on upgrades
For most monolithic storage arrays, adding additional disk capacity means buying it from the same vendor as the controller. I heard of one company recently that tried to order an entry-level disk expansion drawer, at a lower price, solely to move the individual disk drives into a higher-end disk system. Guess what? It didn't work. Most storage vendors will not support such mixed configurations.
If you are going to purchase additional storage capacity for an existing disk system, it should cost no more than the capacity price rate of your original purchase. IBM offers upgrades at the going market rate, but not all competitors are this nice. Some take advantage of the vendor lock-in, charging more for upgrades and pocketing the difference as profit.
Vendor lock-in represents the obstacles to switching vendors in the event the vendor goes out of business, fails to support new software or hardware in the data center, or charges more than you are comfortable with. These obstacles can make it difficult to switch storage vendors, upgrade your applications, or meet other business obligations. IBM SAN Volume Controller and TotalStorage Productivity Center can help reduce or eliminate many of these concerns. IBM Global Services can help you, as much or as little as you want, in this transformation. Here are the four levels of the do-it-yourself continuum:
Let me figure it out myself
Tell me what to do
Help me do it
Do it for me
This is the self-service approach. Go to our website, download an [IBM Redbook], figure out whatyou need, and order the parts to do-it-yourself.
IBM Global Business Services can help understand your business requirements and tell you what you need to meet them.
IBM Global Technology Services can help design, assemble and deploy a solution, working with your staff to ensure skill and knowledge transfer.
IBM Managed Storage Services can manage your storage, on-site at your location, or at an IBM facility. IBM provides a variety of cloud computing and managed hosting services.
So, if you are currently a Sun server or storage customer concerned about these latest Sun announcements, give IBM a call, we'll help you switch over!
Fellow blogger BarryB feels I was unfair to accuse EMC of "proprietary interfaces" without spelling out what I was referring to. Here are just two, along with the whines we hear from customers that relate to them.
EMC Powerpath multipathing driver
Typical whine: "I just paid a gazillion dollars to renew my annual EMC Powerpath license, so you will have to come back in 12 months with your SVC proposal. I just can't see explaining to my boss that an SVC eliminates the need for EMC Powerpath, throwing away all the good money we just spent on it, or to explain that EMC chooses not to support SVC as one of Powerpath's many supported devices."
EMC SRDF command line interface
Typical whine: "My storage admins have written tons of scripts that all invoke EMC SRDF command line interfaces to manage my disk mirroring environment, and I would hate for them to re-write this to use IBM's (also proprietary) command line interfaces instead."
Certainly BarryB is correct that IBM still has a few remaining "proprietary" items of its own. IBM has been in business over 80 years, but it was only in the last 10-15 years that IBM made a strategic shift away from proprietary and over to open standards and interfaces. The transformation to "openness" is not yet complete, but we have made great progress. Take these examples:
The System z mainframe - IBM had opened the interfaces so that both Amdahl and Fujitsu made compatible machines. Unlike Apple, which forbids cloning of this nature, IBM is now the single source for mainframes because the other two competitors could not keep up with IBM's progress and advancements in technology.
Update: Due to legal reasons, the statements referring to Hercules and other S/390 emulators havebeen removed.
The z/OS operating system - While it is possible to run Linux on the mainframe, most people associate the z/OS operating system with the mainframe. This was opened up with UNIX System Services to satisfy requests from various governments. It is now a full-fledged UNIX operating system, recognized by the [Open Group] that certifies it as such.
As BarryB alludes, the unique interface for disk attachment to System z, known as Count-Key-Data (CKD), was published so that both EMC and HDS can offer disk systems to compete with IBM's high-end disk offerings. Linux on System z supports standard Fibre Channel, allowing you to attach an IBM SVC and anyone's storage. Both z/OS and Linux on System z support NAS storage, so IBM N series, NetApp, even EMC Celerra could be used in that case.
The System i itself is still proprietary, but recently IBM announced that it will now support standard block size (512 bytes) instead of the awkward 528-byte blocks that only IBM and EMC support today. That means that any storage vendor will be able to sell disk to the System i environment.
Advanced copy services, like FlashCopy and Metro Mirror, are as proprietary as the similar offerings from EMC and HDS, with the exception that IBM has licensed them to both EMC and HDS. Thanks to cross-licensing, you can do [FlashCopy on EMC] equipment. Getting all the storage vendors to agree to open standards for these copy services is still work in progress under [SNIA], but at least people who have coded z/OS JCL batch jobs that invoke FlashCopy utilities can work the same between IBM and EMC equipment.
So for those out there who thought that my comment about EMC's proprietary interfaces in any way implied that IBM did not have any of its own, the proverbial ["pot calling the kettle black"] so to speak, I apologize.
BarryB shows off his [PhotoShop skills] with the graphic below. I take it as a compliment to be compared to an All-American icon of business success.
TonyP and Monopoly's Mr. Pennybags Separated at Birth?
However, BarryB meant it as a reference back to a long time ago when IBM was a monopoly of the IT industry, which, according to [IBM's History], ended in 1973. In other words, IBM stopped being a monopoly before EMC ever existed as a company, and long before I started working for IBM myself.
The anti-trust lawsuit that BarryB mentions happened in 1969, which forced IBM to separate some of the software from its hardware offerings, and prevented IBM from making various acquisitions for years to follow, forcing IBM instead into technology partnerships. I'm glad that's all behind us now!
I had a great weekend, participating in this year's ["World Laughter Day"] yesterday. Preparing for tonight's festivities found me pulling out the various packages from "Simply Dinners" in my freezer.
A Tucson-based company, [Simply Dinners] offers an alternative to restaurant eating. My sister went there, assembled a set of freezer-proof plastic bags containing all the right ingredients based on specific recipes, and gave them to me for my birthday. They have been sitting in my freezer ever since... until last weekend.
My sister was careful to choose items that fit the [Paleolithic Diet] my nutritionist has me on. However, I was skeptical that any plastic bag full of frozen groceries would be any better than anything I could assemble on my own. I did, after all, attend "chef school" and know how to cook well. Each package was intended to be a "dinner for two," but since I am single, each was two meals for me.
So, I decided to try them out, which would also give me more room in my freezer for incoming items, and they came out very well. On the outside of each plastic bag was a label that explained all the steps required to heat the food. Partially-cooked vegetables were wrapped in foil, and went in for the last 10 minutes of cooking the meat. The process was straightforward, and the meals were delicious, but nothing I could not have done on my own with a recipe and a trip to the grocery store.
The question is whether someone with little or no skills could achieve similar or acceptable results. I have friends who are limited to assembling sandwiches from luncheon meats and cheese slices, as anything involving heat other than simply boiling water is beyond their skills.
The key difference between "cooking for yourself" and "building your own storage" is that you aren't building storage for just yourself. Unless you are a one-person SMB company, you are building storage that all of your employees and managers count on to do their jobs, and by extension your customers and stockholders count on.
Of course I had to read responses from others before jumping in with my thoughts. Dave Raffo from Storage Soup writes [Sun going down in storage], feeling this is yet another indication that Sun has lost their mind, recounting previous events that support that theory. EMC blogger Mark Twomey, in his StorageZilla post [When Open Isn't], felt a little bit guilty kicking a competitor when down. EMC blogger Chuck Hollis questions the reasons people might be tempted to even try this in his post [Do-it-Yourself Storage]. Here's an excerpt:
"I really, really struggle with this concept, I do. Here's why:
Anything I use and get comfortable with -- well, I'm "locked in" to a certain degree. If I use a lot of storage software X; well, I'm sorta locked in, aren't I? Or, if I put my servers-as-storage on a three-year lease, I'm kind of locked in, aren't I?"
(For EMC, vendor lock-in is great when customers are using and comfortable with EMC products, and awful when they use and are comfortable with storage from someone else. But nobody who is "comfortable" with what they have ever complains about "vendor lock-in," do they? It's the ones who are growing uncomfortable and feel trapped who complain about changing. How involved a company's use of EMC's proprietary interfaces is can greatly determine the obstacles in switching to a different vendor. Of course, if you count yourself as someone growing uncomfortable with your existing storage vendor, IBM can help you fix that problem, but that is a subject for another post.)
Worried about "vendor lock-in"? Try "admin lock-in," where you must keep a storage admin around because he or she was the one that put your storage together. I've seen several companies held hostage by their system admins for home-grown scripts that serve as "duct tape for the enterprise." The other issue is whether you have storage admins with the necessary hardware and software engineering skills to put suitable storage together. There are some very smart storage admins I know who could, and others that would have a difficult time with this.
No doubt this is promising for the home office. I myself have taken several PCs that were running older versions of Windows, but not powerful enough to upgrade to Windows Vista, wiped them clean, loaded Linux, and configured them as everything from simple browser workstations to full LAMP application servers. While this might sound easy, I am a professional hardware and software engineer with Linux skills. I have no doubt that someone with sufficient engineering and Solaris skills could put together a storage system for home use.
One area where Sun definitely benefits from this "Open Storage" approach is in developing Solaris skills. I have no personal experience with OpenSolaris, but assume that if you learn it, you would be able to switch over to full Solaris quite easily. Today, most people come into the workforce with Windows, Linux and/or MacOS skills, and this could be Sun's way of getting fresh new faces who understand Solaris commands to replace retiring "baby boomers." The lack of Solaris-knowledgeable admins is perhaps one reason why companies are switching to IBM AIX, Linux or Windows in their data centers.
Certainly, IBM's strategic choice to support Linux has been a great success. People learn Linux on their home systems and at school, and are able to carry those skills to Linux running on everything from the smallest IBM blade server to IBM's biggest mainframe.
The videos on Sun's site with "recipes" for putting together various "storage configurations in ten minutes" appear simpler than last summer's "How to hack an Apple iPhone to switch away from AT&T" procedures.
"Our survey data shows that over the past 12 months, more firms have bought their storage from a single vendor. While this might not be for everyone, it's worth serious consideration for your environment. Maybe you won't get the best price per gigabyte every time, but you'll probably save money in the long run because of simpler management, increased staff specialization, increased capacity utilization, and better customer service."
A Forrester survey of 170 companies ranging from SMBs to large enterprises in North America and Europe found that more than 80 percent bought their primary storage from one vendor over the last year. That includes 64 percent of the companies with more than 500 TB of raw storage.
The report, written by analyst Andrew Reichman, says using more than one primary storage vendor can make it more complex to manage, provision and support the storage environment. And while using multiple vendors can often bring better pricing, buying from one vendor can result in volume discounts.
“You may have tried to contain costs by forcing multiple incumbent vendors to continuously compete against each other, with price as the primary differentiator,” Reichman writes. “This strategy can reduce prices and limit vendor lock-in, but it can also lead to management complexity and poor capacity utilization.”
The report recommends keeping things simple by using fewer vendors when possible. However, that advice comes with several caveats: buying all storage from one vendor means taking the bad with the good, and some vendors' product families differ so much "they may as well come from different vendors."
As if by coincidence, fellow EMC blogger Chuck Hollis gives his reflections on this same topic. Here's an excerpt:
When it comes to buying storage (or any infrastructure technology, for that matter), there seem to be two camps:
Best-of-breed (i.e., multivendor): buy what's best, get the best price, keep all the vendors on their toes, etc. etc.
Single vendor: primarily use one vendor's offerings, and hold them accountable for the outcome.
If Chuck had said "multivendor" versus "single vendor," then that would have been a true dichotomy, but interestingly he equates best-of-breed with a multivendor approach. Let's consider two examples:
Disk from one vendor, Tape from another
Here is a multivendor strategy, and if you have a clear idea of what best-of-breed means to you, then you could pick the best disk on the market, and the best tape on the market. However, I don't think this keeps either vendor "on their toes," or helps you negotiate lower prices by threatening to switch to the other vendor. In shops like this, the staffing usually matches, so there are separate disk administration and tape operations teams, with little or no overlap, and little or no interest in retraining to use a new set of gear. It is true that disk-based VTL could be used where real tape libraries are used, but this may not be enough to threaten your existing vendors that you will switch all your disk to tape, or all your tape to disk.
One could argue that the vendor that sells the best tape could be the exact same vendor that sells the best disk. In this case, your multivendor strategy would actually work against you, forcing you away from one of your best-of-breed choices.
Disk and Tape from one vendor for some workloads, Disk and Tape from another vendor for other workloads
Here is a different multivendor strategy. Having disk and tape from the same vendor allows you to take advantage of possible synergies. The IT staff knows how to use the products from both vendors. This strategy does let you keep your vendors "on their toes." You can legitimately threaten to shift your budget from one vendor to another. However, whatever your definition of best-of-breed is, chances are the product from one vendor meets it and the product from the other does not. Both meet some lowest common denominator, some minimum set of requirements, which allows you to swap out one for the other.
I guess I look at it differently. The equipment in your data center should be thought of as a team. Do your servers, storage and software work well together?
While Americans like to celebrate the accomplishments of individual musicians, athletes or executives, it is actually bands that compete against other bands, sports teams that compete against other sports teams, and companies that compete against other companies. Teamwork in the data center is not just for the people who work there, but also for the IT equipment. Just as a new incoming athlete may not get along well with teammates, shiny new equipment may not get along with your existing gear. Conversely, your existing infrastructure may not let the talents or features of your new equipment shine through.
While putting together the best parts from different teams might serve as a great diversion for those who enjoy ["fantasy football"], it may not be the best approach for the data center. Instead, focus on managing your data center as a team, perhaps with the use of IBM TotalStorage Productivity Center to minimize the heterogeneity of your different equipment. Pick an IT vendor that sells "team players" for your servers, storage and software, with broad support for interoperability and compatibility.
The "Storage Symposium Mexico - 2008" conference was a great success this week!
Day 1 - The plan was for me to arrive for the Wednesday night reception. Each attendee was given a copy of my latest book, [Inside System Storage: Volume I], and I was planning to sign them. I thought perhaps we should have a "book signing" table like all of the other published authors have.
Things didn't go according to plan. Thunderstorms at the Mexico City airport forced our pilot to find an alternate airport. Nearby Acapulco airport was the logical choice, but was full from all the other flights, so the plane ended up in a tiny town called McAllen, Texas. I did not arrive until the morning of Day 2, so I ended up signing the books throughout Thursday and Friday, during breaks and meals, wherever they could find me!
Special thanks to fellow IBMer Ian Henderson, who picked me up from the airport at such an awkward hour and drove me all the way to Cuernavaca!
All of us, IBMers, Business Partners and clients alike, donned black tee-shirts with a white eightbar logo for a group photo with one of those "wide lens" cameras. While we were being assembled onto the bleachers, I took this quick snapshot of myself and some of the guys behind me.
I was originally scheduled to speak first, but with my flight delays, was moved to a time slot after lunch. After a big Mexican lunch, the conference coordinators were afraid the attendees might fall asleep, a Mexican tradition called [siesta], so I was instructed to WAKE THEM UP! Fortunately, my topic was Information Lifecycle Management, a topic I am very passionate about since my days working on DFSMS on the mainframe. With a 30 percent reduction in hardware capital expenditures, a 30 percent reduction in operational costs, and typical payback periods between 15 and 24 months, the presentation got everyone's attention.
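(For readers who want to check the arithmetic behind payback figures like those, here is a minimal sketch in Python. Every input number below is a hypothetical placeholder for illustration, not a figure from an actual client engagement.)

    # Simple payback-period estimate for an ILM project.
    # All input figures are hypothetical placeholders.
    ilm_investment = 500_000           # one-time cost of ILM software and services (USD)
    annual_hardware_spend = 1_000_000  # current yearly capital spend on storage hardware
    annual_ops_spend = 800_000         # current yearly operational cost

    hardware_savings = 0.30 * annual_hardware_spend  # 30 percent capex reduction
    ops_savings = 0.30 * annual_ops_spend            # 30 percent opex reduction
    monthly_savings = (hardware_savings + ops_savings) / 12

    payback_months = ilm_investment / monthly_savings
    print(f"Estimated payback: {payback_months:.0f} months")  # about 11 months with these inputs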
Of course, a lot happens outside of the formal meetings. We had a Japanese theme dinner, where we wore Japanese Hachimaki [headbands] with the eightbar logo. For those not familiar with Japanese culture, hachimaki are worn today not so much for the practical purpose of catching perspiration, but rather for mental stimulation, to express one's determination. Some students wear hachimaki when they study to put themselves in the right spirit and frame of mind.
Shown here are presenters Mike Griese (Infrastructure Management with IBM TotalStorage Productivity Center),Dave Larimer (Backup and Storage Management with IBM Tivoli Storage Manager), myself, and John Hamano(Unified Storage with IBM System Storage N series).
Day 3 - Wrapping up the week, I presented two more times.
First, I covered IBM Disk Virtualization with IBM SAN Volume Controller. One interesting question was whether the SAN Volume Controller could be made to look like a Virtual Tape Library. I explained that this was never part of the original design, but that if you want to combine SVC with a VTL into a combined disk-and-tape blended solution, consider using the IBM product called Scale-Out File Services [SoFS], which I covered in my post [More details about IBM clustered scalable NAS].
During one of the breaks, I took a picture of the behind-the-scenes staff that put this event together. They had created these huge blocks representing puzzle pieces, emphasizing how IBM is one of the few IT vendors that can bring all the pieces together for a complete solution.
Shown here are Mike Griese (presenter), Cyntia Martinez, Claudia Aviles, Cesar Campos (IBM Business Unit Executive for System Storage in Mexico), and Claudia Lopez. Each day the staff wore matching shirts so that it was easy to find them.
Later, I covered Archive and Compliance Solutions to highlight our complete end-to-end set of solutions. When asked to compare and contrast the architectures of the IBM System Storage DR550 with EMC Centera, I explained that the DR550 optimizes the use of online disk access for the most recent data. For example, if you are going to keep data for 10 years, maybe you keep the most recent 12 months on disk, and the rest is moved, using policy-based automation, to a tape library for the remaining nine years. This means that the disk inside the DR550 is always being used to read and write the most recent data, the data you are most likely to retrieve from an archive system. Data older than a year is still accessible, but might take a minute or two for the tape library robot to fetch. The EMC Centera, on the other hand, is a disk-only solution. It offers no option to move older data to tape, nor the option to spin down the drives to conserve power. It fills up after the same 12 months or so, and then you get to watch it for the remaining nine years, consuming electricity and heating your data center.
I don't know about you, but I have never seen anyone purposely put "space heaters" into their data center, but certainly a full EMC Centera does little else. Both devices use SATA drives and support disk mirroring between locations, but the IBM DR550 offers dual-parity RAID-6, and supports encryption of the data on both the disk and the tape in the DR550. EMC Centera still uses only RAID-5, and has not yet, as far as I know, offered any level of encryption. The IBM System Storage DR550 was clocked at about three times faster than Centera at ingesting new archive objects over a 1GbE Ethernet connection.
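(To make the policy-based tiering described above concrete, here is a minimal sketch of an age-based placement rule in Python. The function name, thresholds, and tier labels are hypothetical illustrations of the concept, not the actual DR550 policy engine.)

    from datetime import datetime, timedelta

    # Hypothetical age-based migration policy: recent archive data stays
    # on disk, older data moves to tape for the rest of its retention.
    DISK_TIER_WINDOW = timedelta(days=365)   # keep the newest 12 months on disk
    RETENTION_PERIOD = timedelta(days=3650)  # 10-year retention

    def placement_for(archive_date, now=None):
        """Return which tier an archived object should live on (toy rule)."""
        now = now or datetime.now()
        age = now - archive_date
        if age > RETENTION_PERIOD:
            return "expired"   # past retention, eligible for deletion
        if age <= DISK_TIER_WINDOW:
            return "disk"      # fast retrieval for the most recent data
        return "tape"          # robot fetch, a minute or two to first byte

    print(placement_for(datetime.now() - timedelta(days=30)))    # -> disk
    print(placement_for(datetime.now() - timedelta(days=1500)))  # -> tape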
This last photo is me and fellow IBMer Adriana Mondragón. She was one of my students in the [System Storage Portfolio Top Gun class] last February in Guadalajara, Mexico. She graduated in the top 10 percent of her group, earning her the prestigious title of "Top Gun" storage sales specialist.
The conference wrapped up with a Mexican lunch with a traditional Mariachi band. I took pictures, but figured you all already know what [Mariachi players] look like, and I didn't want to detract from the otherwise serious tone of this blog post! This was the first System Storage Symposium in Mexico, but based on its success, we might continue these annually.
Normally, IBM only makes announcements on Tuesdays, but today, Friday, IBM announced that it has acquired Diligent Technologies. What? I got a lot of questions about this, so I thought I would start with this...
When I posted in January that [IBM Acquires XIV], fellow EMC blogger Mark Twomey, of StorageZilla fame, sent me a comment:
"Ah now Tony I wasn't poking fun. Indeed I find it fascinating that Moshe who's been sitting out on the fringes for years having been banished for being an obstructionist to EMC entering the mid-market is now back.
Which reminds me what happens with Diligent? There his as well aren't they or has he packed his stake in that in?"
As you might have guessed, I am privy to a lot of stuff going on behind the scenes at IBM that I can't talk about in this blog, and all the rumors in the blogosphere about an IBM acquisition of Diligent were a topic I couldn't officially recognize, defend or deny until official IBM announcements were made.
In his latest post, Mark wonders about [the last Tape and Mainframe sales person on earth]. He recounts my interaction with fellow HDS blogger Hu Yoshida about the energy benefits of Virtual Tape Libraries. Knowing that we were going to announce IBM's acquisition of Diligent soon, I thought this would be a worthy exchange, driving up the sales of Diligent boxes (whether you buy them from IBM or HDS). Diligent already had reselling arrangements with HDS, and IBM plans to continue those arrangements going forward. As I have explained before in my post [Supermarkets and Specialty Shops], IBM and HDS cater to different customers, so a customer who wants the best technology from a specialty shop can buy IBM Diligent products from HDS, but one who wants one-stop shopping can buy IBM Diligent directly from IBM or its other IBM Business Partners.
(Perhaps a trickier situation is that Diligent also had an arrangement with Sun Microsystems, which competes directly against IBM as another IT supermarket vendor, but I have not heard how IBM has decided to handle this going forward.)
For more on this intricate mess of interconnected companies, alliances and partnerships, read Dave Raffo's article [Data dedupe dance card filling up] over at Storage Soup.
So, let's tackle the first question:
Q1. What will happen to IBM's real tape library business?
Come on! IBM is number one in tape; we've had virtual tape libraries since 1997 (the first in the industry) and continue to do well in both virtual and real tape libraries. Both provide value to the customer, and both have their place as part of the overall "information infrastructure". This acquisition provides yet another choice for clients on our "supermarket" shelf.
(For those following the ["which is greener"] discussion, the robot of the IBM TS3500 real tape library consumes 185W per frame (when moving) and each tape drive consumes 50W (when actively working on a tape). Compared to 13W per SATA disk drive, each 6-drive frame of a TS3500 consumes as much electricity as 37 SATA disk drives. If you are not running backups 24x7, the total kWh per day for your tape library is actually much lower, but as several people have pointed out, there are customers that do run backups 80-90 percent of the time. LTO-4 tapes can hold 800GB uncompressed, and SATA disks are now available in 1TB (1000 GB) sizes, so you can have fun with your own comparisons.)
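(If you want to run your own numbers, here is a minimal sketch using the wattage figures above. The duty cycle is an assumption; substitute your own backup window.)

    # Power comparison from the figures above: TS3500 frame vs. SATA drives.
    FRAME_ROBOT_W = 185   # robot, per frame, while moving
    DRIVE_W = 50          # per tape drive, while actively reading/writing
    SATA_W = 13           # per SATA disk drive, spinning

    frame_active_w = FRAME_ROBOT_W + 6 * DRIVE_W   # 6-drive frame = 485 W
    print(frame_active_w / SATA_W)                 # ~37 SATA drives' worth

    # Tape only draws this while working; disks spin 24x7 unless spun down.
    duty_cycle = 0.25     # assumption: backups run 6 hours a day
    tape_kwh_day = frame_active_w * duty_cycle * 24 / 1000
    sata_kwh_day = 37 * SATA_W * 24 / 1000
    print(tape_kwh_day, sata_kwh_day)              # ~2.9 vs ~11.5 kWh/day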
Meanwhile, Scott Waterhouse, one of the few people at EMC who understand tape workloads like backup and archive, takes me to task in his Backup Blog with his post [I want a Red Ferrari]. For those who are surprised that anyone at EMC might understand backup workloads, EMC did acquire a company called Legato, and perhaps Scott came from that acquisition. I've never met Scott in person, but based solely on his writings, he seems to know his stuff and makes strong arguments for using IBM Tivoli Storage Manager (TSM) with deduplication and virtual tape libraries.
While TSM does a good job of "deduplicating" at the client first, backing up only changed data, Scott feels database and email repositories must be backed up entirely each time, which is what happens in many other backup software products. Some clients might have 80 percent database/email and only 20 percent files, while others might have less than 20 percent database/email and 80 percent files, so this mix can influence whether deduplication brings a small or big benefit. If TSM has to back up the entire database, even though little has changed since the last backup, that is where deduplication on a virtual tape library can come in handy. For IBM DB2 and Oracle databases, IBM TSM's application-aware Tivoli Data Protection module backs up only changed data, not the entire file. Thanks to IBM's FilesX acquisition (also coincidentally from Israel), IBM can now extend this support to SQL Server databases as well. However, to be fair, Scott is partly correct: TSM does back up some database and email repositories in their entirety, which is why it is a good idea to have BOTH an IBM virtual tape library with deduplication and Tivoli Storage Manager to handle all cases.
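(Here is a back-of-the-envelope way to see how that workload mix changes the picture. The dedupe ratio and daily change rate below are illustrative assumptions, not measured figures from TSM or Diligent.)

    # Rough estimate of how workload mix affects dedupe benefit.
    # All ratios are illustrative assumptions, not measured figures.
    def estimated_backup_size(total_tb, db_email_fraction,
                              db_dedupe_ratio=10.0, file_changed_fraction=0.05):
        """Nightly backup footprint with progressive (changed-only) file backup
        plus dedupe on the full database/email backups."""
        db_tb = total_tb * db_email_fraction
        file_tb = total_tb * (1 - db_email_fraction)
        db_after_dedupe = db_tb / db_dedupe_ratio   # full backups, deduplicated on the VTL
        files_sent = file_tb * file_changed_fraction  # progressive backup sends only changes
        return db_after_dedupe + files_sent

    print(estimated_backup_size(100, 0.80))   # database-heavy shop: ~9 TB nightly
    print(estimated_backup_size(100, 0.20))   # file-heavy shop: ~6 TB nightly

This brings us to the next question: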
Q2. What will happen to IBM's patented "progressive backup" technology?
IBM will continue to use TSM's progressive backup technology. TSM already works great with Diligent virtual tape libraries. One example is LAN-free backup. In this configuration, the TSM client writes its backups directly to a virtual or real tape library over the SAN, and then sends the list of files backed up to the TSM server over the LAN to record in its database. This can greatly reduce IP traffic on your LAN during peak backup periods. For more about this, see the IBM Redbook titled ["Get More Out of Your SAN with IBM Tivoli Storage Manager"].
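(A quick sketch of why the LAN savings are so dramatic: the bulk file data travels the SAN, and only the catalog entries travel the LAN. The file counts, sizes, and per-file metadata size below are pure assumptions for illustration.)

    # LAN-free backup: bulk data goes over the SAN; only the file list
    # travels the LAN. All numbers are illustrative assumptions.
    files_backed_up = 1_000_000
    avg_file_bytes = 256 * 1024        # assume a 256 KB average file
    metadata_bytes_per_file = 512      # assumed size of one catalog entry

    lan_backup_gb = files_backed_up * avg_file_bytes / 1e9          # ~262 GB over the LAN
    lan_free_gb = files_backed_up * metadata_bytes_per_file / 1e9   # ~0.5 GB over the LAN

    print(f"LAN traffic, traditional backup: {lan_backup_gb:,.0f} GB")
    print(f"LAN traffic, LAN-free backup:    {lan_free_gb:,.1f} GB")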
Jon Toigo from DrunkenData asks [Did IBM Do Due Diligence Before Making Diligent Acquisition a Done Deal?], which is probably always a valid question. Unlike XIV, I wasn't part of the Diligent acquisition team, so I can't provide a first-hand account of the process. I am told that the IBM team did all the right things to make sure everything is going to turn out right. Sadly, many companies that make acquisitions in the IT industry fail to make them work. Fortunately, IBM is one of the few companies with a great success record, with over 60 acquisitions in the past six years. In the Xconomy forum, Wade Rousch writes [IBM and the Art of Acquisitions] and gives some insight into why IBM is different. Jon did not understand why Cindy Grossman, IBM VP of tape and archive solutions, ran the analyst conference call for this announcement, which brings me to the next question:
Q3. What is Diligent virtual tape library going to be categorized as, a disk system or a tape system?
IBM organizes its storage systems based on the host application workloads. Products to address disk workloads (SVC, DS8000 series, DS6000 series, DS4000 series, DS3000 series, N series, XIV Nextra) are in our disk systems group. Storage that appears to host applications like a tape system, to address workloads like backup and archive (tape drives, libraries and tape virtualization), is in our tape and archive group. IBM Diligent has two products, one for big workloads and one for medium workloads. Both look like tape systems, so our tape and archive team, who understand tape workloads like backup and archive the best, are obviously the best choice to support IBM Diligent in the mix.
IBM will offer both N series and Diligent deduplication capabilities. For disk workloads, IBM N series offers a post-process deduplication feature at no additional charge. For tape workloads, IBM will now offer an in-line deduplication feature with Diligent Technologies. Different workloads, different offerings.
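(For readers new to the term, here is a toy sketch of content-hash deduplication, which also shows where the "inline" distinction comes in: inline dedupe hashes each chunk before it is ever written, while post-process writes everything first and reclaims duplicates later. This is a teaching illustration only, not how either product is actually implemented.)

    import hashlib

    # Toy chunk-level dedupe store, for illustration only.
    store = {}          # chunk hash -> chunk data
    CHUNK = 64 * 1024   # fixed 64 KB chunks (real products are smarter)

    def write_inline(data):
        """Inline dedupe: only previously unseen chunks ever hit the disk."""
        written = 0
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in store:
                store[digest] = chunk   # write once; duplicates cost nothing
                written += len(chunk)
        return written

    payload = b"same backup data" * 100_000
    print(write_inline(payload))   # first write stores the unique chunks
    print(write_inline(payload))   # second identical write stores 0 new bytes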
As with any acquisition, there will be some changes. The 100 folks from Diligent will get to learn the IBM way of doing things. This brings me to our fifth and final question:
Q5. What is the correct spelling: deduplication or de-duplication?
It appears that Diligent has a corporate-wide standard to hyphenate this term (de-duplication), but the "word police" at IBM who control and standardize all "proper spellings, trademarks, and capitalization" sent me corporate instructions a few days ago that IBM does not hyphenate this term (deduplication). So, going forward, it will be "deduplication", or "dedupe" for short. I suspect one of the first tasks for our new IBMers from Diligent will be removing all those hyphens from the [Diligent Technologies website]!
That's all for now, I'm off to Chicago, Illinois tomorrow!