This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems for IBM Systems Technical University] events. With over 30 years with IBM Systems, Tony is frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles at IBM during his 19 plus years at IBM. Lloyd most recently has been leading efforts across the Communication/CSI Market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years Lloyd supported the industry accounts as a Storage Solution architect and prior to that as a Storage Software Solutions specialist during his time in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role in the Washington Systems Center team. His current focus is with IBM Cloud Private and he will be delivering and supporting sessions at Think2019, and Storage Technical University on the Value of IBM storage in this high value IBM solution a part of the IBM Cloud strategy. Lloyd maintains a Subject Matter Expert status across the IBM Spectrum Storage Software solutions. You can follow Lloyd on Twitter @ldean0558 and LinkedIn Lloyd Dean.
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is a an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
The developerWorks Connections platform will be sunset on January 2, 2020. This blog will no longer be available unless an extension has been requested. More details available on our FAQ.
"If you've spent any time in the storage biz, you probably realize that the server vendors sell more storage than they have any right to."
This is the old [Supermarkets-vs-Specialty Shops] debate I discussed over a year ago. The debate goes along the lines that some peopleprefer to buy their entire information infrastructure (servers, storage, software and services)from a single vendor, one-stop shopping, while others might prefer to buy their pieces ascomponents from different vendors that specialize in each technology. Because of this, Specialty shops tend to focus on other Specialty shops as their primary competitors (EMC vs. NetApp), whileSupermarkets tend to focus on other Supermarkets (IBM vs. HP).
The apparent contradiction is that Chuck feels the Supermarkets (IBM, HP, Sun and Dell) should not have any right to sell storage, in the same manner that butchers, bakers and candlestick makersdo not believe that Supermarkets should have any right to sell meat, bread or candles?If servers and storage are so different, how can self-proclaimed storage-only specialist EMC have the right to sell their non-storage offerings, from server virtualization (VMware) to cloud-computing services? With EMC's latest announcement of DW/BI centers, I think we can safely take EMC off the list of storage-only specialists. We will needto come up with a third category for those caught in limbo between being one-stop shopping Supermarkets like IBM and being a pure storage-only Specialists like NetApp. Perhaps EMC has become the IT equivalent of Wal-Mart's[Neighborhood Market].(No offense intended to my friends at Wal-Mart!)
Then Chuck continues with these statements:
"It is rarely is it the case that a server vendor can offer you a better storage product, or better service, or better functionality than what a storage specialist can do.
...Interestingly enough, Dell appears to do a sizable amount of storage business "off base" with EMC products -- outside the context of a specific server transaction."
This second contradiction relates to products that are manufactured by specialty shops, butsold through supermarket channels. Chuck would like to imply that the only storage products anyone should consider is gear made by specialty shops, whether you get it directly from them, or through Supermarket's with appropriate OEM agreements. Storage made by Supermarkets, either organicallydeveloped or through acquisitions, should not be considered? What happens when a Supermarket acquires a specialty shop? We've already seen how negative EMC has been against IBM's acquisitions of XIV and Diligent, which allowed a Supermarket like IBM to provide better products in both cases than what is available from any specialty shop. Kind of pokes a big hole in that argument!
But Dell also acquired EqualLogic, which Chuck admits might have a "fit in the marketplace".As it turns out, companies would rather buy EMCequipment from Dell sales people, than from EMC directly, and perhaps this is becauseDell, like IBM, sees the big picture. Dell, IBM and the rest of the IT Supermarkets understand theentire information infrastructure, not just the storage components of a data center. With HP and Sun selling HDS gear, and IBM selling NetApp gear, it becomes obvious that EMC needs Dell more than Dell needs EMC.
Chuck then pokes fun at NetApp in comparing the EMC NX4 to NetApp's FAS2020, comparable to IBM System Storage N series N3300. Here's an excerpt:
Like other Celerras, it does the full unified storage thing: iSCSI, NAS and "real deal" FC that isn't emulated.
The irony, of course, is that the NX4 does not actually use "real" Fibre Channel drives,but rather SAS and SATA drives. I guess Chuck's concern is that the NetApp, which doesuse "real" Fibre Channel drives, provides FC-attached LUNs to the host through its WAFL mapping,rather than through EMC's traditional RAID-rank mapping approach.How Chuck can imply that anything in the IT industry that is "emulated" is somehow seriouslyworse than "real", but then spend 40 percent of his posts devoted to the benefits of VMware,which offers "emulated" virtual machines, seems to be yet another contradiction.
"Cloud computing" has been ill-defined and over-hyped, yet storage vendors have been quick to trot out their own "cloud storage" offerings and end users are wondering whether there's significant cost savings in these services for them, particularly in tough economic times.
"Cloud-speak" can be downright confusing....
"Surprisingly, Gartner considers the amorphous nature of the term to be good news: 'The very confusion and contradiction that surrounds the term 'cloud computing' signifies its potential to change the status quo in the IT market,' the IT research firm said earlier this year."
Consistent with Scott Adams's original prediction, the barriers of entry have lowered for storage vendors as well.Rather than competing on function and price through valued relationships and trusted expertise, some vendors would rather confuse instead. EMC tries to paint the NX4 as being "just as good as" anNetApp or IBM N series for unified storage, and EMC tries to create new categories, like Cloud-Oriented Storage (COS), to give their me-too products the impression they are in a league of their own.All of this to discourage customers from making their own comparisons and doing their own research.
IBM doesn't play that way. If you want straight talk aboutIBM's products, contact your local IBM Business Partner or sales rep.
Continuing this week's theme, my team here at theTucson Executive Briefing Center (TEBC) have made these two videos for me, usingcloud-computing facilities from OfficeMax and the folks at JibJab.Only five people were allowed per video, so we had to make two to get everyone in.
If you have been to the Tucson Executive Briefing Center, perhaps you can recognizesome of our faces!
This wraps up my week in Las Vegas for the 27th Annual [Data Center Conference]. This conference follows the common approach of ending at noon on Friday, so that attendees can get home to their families for the weekend, or start their weekend in Las Vegas early to watch the 50th annual Wrangler National Finals Rodeo.
I attended the last few sessions. Here is my recap:
Where, When and Why do I need a Solid-State Drive?
The internet provides transport of digital data between any devices. All other uses have evolved from this aim. Increasing data storage on any node on the Web therefore increases the possibilities at every other point. We are just now beginning to recognize the implications of this. The two speakers co-presented this session to cover how Solid State Disk (SSD) may participate.
Some electronic surveys of the audience provided some insight. Only 12 percent are deploying SSD now. 59 percent are evaluating the technology. A whopping 89 percent did not understand SSD technology, or how it would apply to their data center. Here is the expected time linefor SSD adoption:
17 percent - within 1 year
60 percent - around 3 years from now
21 percent - 5 years or later
The main reasons cited for adopting SSD were increasing IOPS, reducing power and floorspace requirements, and expanding global networks. Here's a side-by-side comparison between HDD and SSD:
Disk array with 120 HDD, 73GB drives
Disk array with 120 SSD, 32GB drives
Per 73GB drive
Per 32GB drive
100MB/sec per drive
Read 250 MB/sec per drive Write 170 MB/sec per drive
300 IOPS per drive
35,000 IOPS per drive
12 Watts per drive
2.4 Watts per drive
However, the cost-per-GB for SSD is still 25x over traditional spinning disk, andthe analysts expected SSD to continue to be 10-20x for a while. For now, they estimatethat SSD will be mostly found in blade servers, enterprise-class disk systems, andhigh-end network directors.
The speakers gave examples such as Sun's ZFS Hybrid, and other products from NetApp,Compellent, Rackable, Violin, and Verari Systems.
Taking fear out of IT Disaster Recovery Exercises
The analyst presented best practices for disaster recovery testing with a "Pay Now or Pay Later"pre-emptive approach. Here were some of the suggestions:
Schedule adequate time for DR exercises
Build DR considerations into change control procedures and project lifecycle planning
Document interdependencies between applications and business processes
Bring in the "crisis team" on even the smallest incidents to keep skill sharp
Present the "State of Disaster Recovery" to Senior Management annually
The speaker gave examples of different "tiers" for recovery, with appropriate RPO and RTOlevels, and how often these should be tested per year. A survey of the audience found that70 percent already have a tiered recovery approach.
In addition to IT staff, you might want to consider inviting others to the DR exerciseas reviewers for oversight, including: Line of Business folks, Facilities/Operations, Human Resources, Legal/Compliance officers, even members of government agencies.
DR exercises can be performed at a variety of scope and objectives:
Tabletop Test - IBM calls these "walk-throughs", where people merely sit around the table and discuss what actions they would take in the event of a hypothetical scenario. This is a good way to explore all kinds of scenarios from power outages, denial of service attacks, or pandemic diseases.
Checklist Review - Here a physical inventory is taken of all the equipment needed at the DR site.
Stand-alone Test - Sometimes called a "component test" or "unit test", a single application is recovered and tested.
End-to-End simulation - All applications for a business process are recovered for a full simulation.
Full Rehearsal - Business is suspended to perform this over a weekend.
Production Cut-Over - If you are moving data center locations, this is a good time to consider testing some procedures. Other times, production is cut-over for a week over to the DR site and then returned back to the primary site.
Mock Disaster - Management calls this unexpectedly to the IT staff, certain IT staff are told to participate, and others are told not to. This helps to identify critical resources, how well procedures are documented, and members of the team are adequately cross-trained.
For exercise, set the appropriate scope and objectives, score the results, and then identifyaction plans to address the gaps uncovered. Scoring can be as simple as "Not addressed","Needs Improvement" and "Met Criteria".
Full Speed Ahead for iSCSI
The analyst presented this final session of the conference. He recognized IBM's early leadership in this area back in 1999, with the IP200i disk system. Today, there are many storage vendors that provide iSCSI solutions, the top three being:
23 percent - Dell/EqualLogic
15 percent - EMC
14 percent - HP/LeftHand Networks
This protocol has been mostly adopted for Windows, Linux and VMware, but has been largelyignored by the UNIX community. The primary value proposition is to offer SAN-like functionality at lower cost. When using the existing NICs that come built-in on most servers, iSCSI canbe 30-50 percent less expensive than FC-based SANs. Even if you install TCP-Offload-Engine (TOE) cards into the servers, iSCSI can still represent a 16-19 percent cost savings. ManyIBM servers now have TOE functionality built-in.
Since lower costs are the primary motivator, most iSCSI deployments are on 1GbE. The new10Gbps Ethernet is still too expensive for most iSCSI configurations. For servers runninga single application, 2 1GbE NICs is sufficient. For servers running virtualization with multiple workloads might need 4 or 5 NICs (1GbE), or consider 2 10GbE NICs if 10Gbps is available.
The iSCSI protocol has been most successful for small and medium sized businesses (SMB) lookingfor one-stop shopping. Buying iSCSI storage from the same vendor as your servers makes a lot of sense: EqualLogic with Dell servers, LeftHand software with HP servers, and IBM's DS3300 or N series with IBM System x servers.The average iSCSI unit was 10TB for about $24,000 US dollars.
Security and Management software for iSCSI is not as fully developed as for FC-based SANs.For this reason, most network vendors suggest having IP SANs isolated from your regular LAN.If that is not possible, consider VPN or encryption to provide added security.Issues of security and management imply that iSCSI won't dominate the large enteprise data center. Instead, many arewatching closely the adoption of Fibre Channel over Ethernet (FCoE), based on revised standardsfor 10Gbps Ethernet. FCoE standards probably won't be finalized till mid-2009, with productsfrom major vendors by 2010, and perhaps taking as much as 10 percent marketshare by 2011.
I hope you have enjoyed this series of posts. In addition to the sessions I attended, theconference has provided me with 67 presentations for me to review. Those who attended couldpurchase all the audio recordings and proceedings of every session for $295 US dollars, and those who missed the event can purchase these for $595 US dollars. These are reasonable prices, when you realize that the average Las Vegas visitor spends 13.9 hours gambling, losing an average of $626 US dollars per visit. The audio recordings and proceedings can provide more than 13.9 hours of excitement for less money!
The booths at a typical week-long tradeshow only go from day 2 to day 4, so that day 1 and day 5 can be used for unpacking and repacking all of the demo equipment and displays. This was the case here at the27th annual [Data Center Conference] here in Las Vegas.
The solution showcase ended Thursday afternoon.
From left to right:George Lane, Ron Houston, Cris Espinosa, Patty Congdon, David Bricker, Paula Koziol, Steve Sams, Tony Pearson,Gary Fierko, Diane Hill, David Share, Nick Sardino, Carla Fleming, Bruce Otte.
Gary Fierko and I discuss the IBM's vision and strategy, the TS7650G ProtecTIER gateway, and the differences between LTO-4 and IBM Enterprise tape, with an attendees at the booth.
Behind the scenes were folks from the [George P. Johnson company] that run events.Deniese Dunavin here helped us be successful at this conference!
Here are just a portion of all the sponsors that made this event possible, printed on bags given to each attendee.
After the booths closed down, we were invited to several different hospitality suites, sponsoredby different vendors.
The Cisco hospitality suite had an Elvis impersonator and a beautiful bride. Her name was Trixie.
The bouncers at the Computer Associates (CA) hospitality suite wore the same shade of green and blue colors from their logo.
The APC hospitality suite went with an Island/Pirate theme.
The Brocade hospitality suite rocked the Casbah! Yes, that is a REAL snake she is holding.
Michael Nixon, a presenter from NEC Corporation of America.
By the time we got to the Data Domain hospitality suite, they were out of "dedupe-tinis", most ofthe attendees had left, but they were giving out these bumper stickers. For those considering Data Domain,you might want to look at the IBM TS7650G Virtual Tape gateway, which also provides inline datadeduplication, but about six times faster ingest rate.
Lagasse, Inc. sells janitorial supplies, such as mops, cleaning chemicals, waste receptacles, and garbage can liners. Of the 1000 employees of Lagasse nationwide, about 200 associates were located in New Orleans at their main Headquarters, primary customer care center, and primary IT computing center.
Amazingly, Lagasse did not have a formally documented BCP (Business Continuity Plan) but more of aBCI (Business Continuity Idea). They chose to take a ["donut tire"] approach, putting older previous-generation equipment at their DR site. They knew that in the event of a disaster,they would not be processing as many transactions per second. That was a business trade-offthey could accept.
Evaluating all the different threat scenarios for impact and likelihood, and focused on hurricanes and floods.They had experienced previous hurricanes, learning from each,with the most recent being 2004 Hurricane Ivan and 2005 Hurricane Dennis. From this, they wereable to categorize three levels of DR recovery:
Tier 1 - The most mission-critical, which for them related to picking, packing and shipping products.
Tier 2 - The next most important, focused on maintaining good customer service
Tier 3 - Everything else, including reporting and administrative functions
The time-line of events went as follows:
The US Government issues warning that a hurricane may hit New Orleans
August 27 - 7pm
Lagasse declares a disaster, starts recovery procedures to an existing IT facility in Chicago, owned by their parent company. A temporary "Southeast" Headquarters were set up in Atlanta.Remote call centers were identified in Dallas, Atlanta, San Antonio, and Miami.
August 28 - just after midnight
In just five hours, they recovered their "Tier 1" applications.
August 28 - 7:30pm
In just over 24 hours, they recovered their "Tier 2" applications.
August 29 - 6am
The Hurricane hits land. With 73 levees breached, the city of New Orleans was flooded.
The following week
Lagasse was fully operational, and recorded their second and third best sales days ever.
I was quite impressed with their company's policy for how they treat their employees during a disaster. For many companies, people during a disaster prioritize on their families, not their jobs.If any associate was asked to work during a disaster, the company would take care of:
The safety of their family
The safety of their pets. (In the weeks following this hurricane, I sponsored people in Tucson to go to New Orleans to attend to lost and stray dogs and cats, many of which were left behind when rescuers picked up people from their rooftops.)
Any emergency repairs to secure the home they leave behind
Marshall felt that if you don't know the names of the spouse and kids of your key employees, you are not emotionally-invested enough to be successful during a disaster.
For communications, cell phones were useless. They could call out on them, but anyone with acell phone with 504 area code had difficulty receiving calls, as the calls had to be processedthrough New Orleans. Instead, they used Voice over IP (VoIP) to redirect calls to whichever remote call center each associate went to. Laptops, Citrix, VPN and email were considered powerful tools during this process. They did not have Instant Messaging (IM) at the time.
While the disk and tapes needed to recover Tiers 1 and 2 were already in Chicago, the tapes for Tier 3 were stored locally by a third-party provider. When Lagasse asked for thier DR tapes back, the third-party refused, based on their [force majeure] clause. Force majeure is a common clause in many business contracts to free parties from liabilityduring major disasters.Marshall advised everyone to strike out any "force majeure" clauses out of any future third-party DR protection contracts.
Hurricane Katrina hit the US hard, killing over 1400 people, and America still has not fully recovered. The recovery of thecity of New Orleans has been slow. Massive relocations has caused a deficit of talent inthe area, not just IT talent, but also in the areas of medicine, education and other professions. The result has been degraded social services, encouraging others to relocate as well. Some have called it the "liberation effect", a major event that causespeople to move to a new location or take on a new career in a different field.
On a personal note, I was in New Orleans for a conference the week prior to landfall, and helped clients with their recoveries the weeks after. For more on how IBM Business Continuity Recovery Services (BCRS) helped clients during Hurricane Katrina, see the following [media coverage].
Continuing my coverage of the 27th annual[Data Center Conference], the weather here in Las Vegas has been partly cloudy,which leads me to discuss some of the "Cloud Computing" sessions thatI attended on Wednesday.
The x86 Server Virtualization Storm 2008-2012
Along with IBM, Microsoft is recognized as one of the "Big 5" of Cloud Computing. With theirrecent announcements of Hyper-V and Azure, the speaker presented pros-and-cons between thesenew technologies versus established offerings from VMware. For example, Microsoft's Hyper-Vis about three times cheaper than VMware and offers better management tools. That could beenough to justify some pilot projects. By contrast, VMware is more lightweight, only 32MB,versus Microsoft Hyper-V that takes up to 1.5GB. VMware has a 2-3 year lead ahead of Microsoft, and offers some features that Microsoft does not yet offer.
Electronic surveys of the audience offered some insight. Today, 69 percent were using VMware only, 8 percent had VMware plus other, including Xen-based offerings from Citrix,Virtual Iron and others. However, by 2010, the audience estimated that 39 percent would be VMware+Microsoft and another 23 percent VMware plus Xen, showing a shift away from VMware'scurrent dominance. Today, there are 11 VMware implementations to Microsoft Hyper-V, and thisis expected to drop to 3-to-1 by 2010.
Of the Xen-based offerings, Citrix was the most popular supplier. Others included Novell/PlateSpin,Red Hat, Oracle, Sun and Virtual Iron. Red Hat is also experimenting with kernel-based KVM.However, the analyst estimated that Xen-based virtualization schemes would never get past8 percent marketshare. The analyst felt that VMware and Microsoft would be the two dominant players with the bulk of the marketshare.
For cloud computing deployments, the speaker suggested separating "static" VMs from "dynamic" ones. Centralize your external storage first, and implement data deduplicationfor the OS load images. Which x86 workloads are best for server virtualization? The speaker offered this guidance:
The "good" are CPU-bound workloads, small/peaky in nature.
The "bad" are IO-intensive, those that exploit the features of native hardware
The "ugly" refers to workloads based on software with restrictive licenses and those not fully supported on VMs. If you have problems, the software vendor may not help resolve them.
Moving to the Cloud: Transforming the Traditional Data Center
IBM VP Willie Chiu presented the various levels of cloud computing.
Software-as-a-Service (SaaS) provides the software application, operating system and hardware infrastructure, such as SalesForce.com or Google Apps. Either the software meets your needs or it doesn't, but has the advantage that the SaaS provider takes care of all the maintenance.
Platform-as-a-Service (PaaS) provides operating system, perhaps some middleware like database or web application server, and the hardware infrastructure to run it on. The PaaS provider maintains the operating system patches, but you as the client must maintain your own applications. IBM has cloud computing centers deployed in nine different countries across the globe offering PaaS today.
Infrastructure-as-a-Service (IaaS) provides the hardware infrastructure only. The client must maintain and patch the operating system, middleware and software applications. This can be very useful if you have unique requirements.
In one case study, Willie indicated that moving a workload from a traditional data center to the cloud lowered the costs from $3.9 million to $0.6 million, an 84 percent savings!
We've Got a New World in Our View
Robert Rosier, CEO of iTricity, presented their "IaaS" offering. "iTricity" was coined from the concept of "IT as electricity". iTricity is the largest Cloud Computing company in continental Europe, hosting 2500 servers with 500TB of disk storage across three locations in the Netherlands and Germany.
Those attendees I talked to that were at this conference before commented that this year's focus on virtualization and cloud computing is noticeably more than in previous years. For more on this, read this 12-page whitepaper:[IBM Perspective on Cloud Computing]
Continuing this week's coverage of the 27th annual [Data Center Conference] I attended some break-out sessions on the "storage" track.
Effectively Deploying Disruptive Storage Architectures and Technologies
Two analysts co-presented this session. In this case, the speakers are using the term "disruptive" in the [positive sense] of the word, as originally used by Clayton Christensen in hisbook[The Innovator's Dilemma], andnot in the negative sense of IT system outages. By a show of hands,they asked if anyone had more storage than they needed. No hands went up.
The session focused on the benefits versus risks of new storage architectures, and which vendors they felt would succeed in this new marketplace around the years 2012-2013.
By electronic survey, here were the number of storage vendors deployed by members of the audience:
14 percent - one vendor
33 percent - two vendors, often called a "dual vendor" strategy
24 percent - three vendors
29 percent - four or more storage vendors
For those who have deployed a storage area network (SAN), 84 percent also have NAS, 61 percent also have some form or archive storage such as IBM System Storage DR550, and 18 percent also have a virtual tape library (VTL).
The speaker credited IBM's leadership in the now popular "storage server" movement to the IBM Versatile Storage Server [VSS] from the 1990s, the predecessor to IBM's popular Enterprise Storage Server (ESS). A "storage server" is merely a disk or tape system built using off-the-shelf server technology, rather than customized [ASIC] chips, lowering thebarriers of entry to a slew of small start-up firms entering the IT storage market, and leading to newinnovation.
How can a system designed for now single point of failure (SPOF) actually then fail? The speaker convenientlyignored the two most obvious answers (multiple failures, microcode error) and focused instead on mis-configuration. She felt part of the blame falls on IT staff not having adequate skills to deal with the complexities of today's storage devices, and the other part of the blame falls on storage vendors for making such complicated devices in the first place.
Scale-out architectures, such as IBM XIV and EMC Atmos, represent a departure from traditional "Scale-up" monolithic equipment. Whereas scale-up machines are traditionally limited in scalability from their packaging, scale-out are limited only by the software architecture and back-end interconnect.
To go with cloud computing, the analyst categorized storage into four groups: Outsourced, Hosted, Cloud, and Sky Drive. The difference depended on where servers, storage and support personnel were located.
How long are you willing to wait for your preferred storage vendor to provide a new feature before switching to another vendor? A shocking 51 percent said at most 12 months! 34 percent would be willing to wait up to 24 months, and only 7 percent were unwilling to change vendors. The results indicate more confidence in being able to change vendors, rather than pressures from upper management to meet budget or functional requirements.
Beyond the seven major storage vendors, there are now dozens of smaller emerging or privately-held start-ups now offering new storage devices. How willing were the members of the audience to do business with these? 21 percent already have devices installed from them, 16 percent plan to in the next 12-24 months, and 63 percent have no plans at all.
The key value proposition from the new storage architectures were ease-of-use and lower total cost of ownership.The speaker recommended developing a strategy or "road map" for deploying new storage architectures, with focus on quantifying the benefits and savings. Ask the new vendor for references, local support, and an acceptance test or "proof-of-concept" to try out the new system. Also, consider the impact to existing Disaster Recovery or other IT processes that this new storage architecture may impact.
Tame the Information Explosion with IBM Information Infrastructure
Susan Blocher, IBM VP of marketing for System Storage, presented this vendor-sponsored session, covering theIBM Information Infrastructure part of IBM's New Enterprise Data Center vision. This was followed by BradHeaton, Senior Systems Admin from ProQuest, who gave his "User Experience" of the IBM TS7650G ProtecTIER virtual tape library and its state-of-the-art inline data deduplication capability.
Best Practices for Managing Data Growth and Reducing Storage Costs
The analyst explained why everyone should be looking at deploying a formal "data archiving" scheme. Not just for "mandatory preservation" resulting from government or industry regulations, but also the benefits of "optional preservation" to help corporations and individual employees be more productive and effective.
Before there were only two tiers of storage, expensive disk and inexpensive tape. Now, with the advent of slower less-expensive SATA disks, including storage systems that emulate virtual tape libraries, and others that offer Non-Erasable, Non-Rewriteable (NENR) protection, IT administrators now have a middle ground to keep their archive data.
New software innovation supports better data management. The speaker recalled when "storage management" was equated to "backup" only, and now includes all aspects of management, including HSM migration, compliance archive, and long term data preservation. I had a smile on my face--IBM has used "storage management" to refer to these other aspects of storage since the 1980s!
The analyst felt the best tool to control growth is the "Delete" the data no longer needed, but felt that nobody uses Storage Resource Management (SRM) tools needed to make this viable. Until then, people willchose instead to archive emails and user files to less expensive media.The speaker also recommended looking into highly-scalable NAS offerings--such as IBM's Scale-Out File Services (SoFS), Exanet, Permabit, IBRIX, Isilon, and others--when fast access to files is worth the premium price over tape media.The speaker also made the distinction between "stub-based" archiving--such as IBM TSM Space Manager, Sun's SAM-FS, and EMC DiskXtender--from "stub-less" archive accomplished through file virtualization that employes a global namespace--such as IBM Virtual File Manager (VFM), EMC RAINfinity or F5's ARX.
She made the distinction between archives and backups. If you are keeping backups longer than four weeks, they are not really backups, are they? These are really archives, but not as effective. Recent legal precedent no longer considers long-term backup tapes as valid archive tapes.
To deploy a new archive strategy, create a formal position of "e-archivist", chose the applications that will be archived and focus on requirements first, rather than going out and buying compliance storage devices. Try to get users to pool their project data into one location, to make archiving easier. Try to have the storage admins offer a "menu" of options to Line-of-Business/Legal/Compliance teams that may not be familiar with subtle differences in storage technologies.
While I am familiar with many of these best practices already, I found it useful to see which competitiveproducts line up with those we have already within IBM, and which new storage architectures others find mostpromising.
Well, it's Wednesday, day three at the [Data Center Conference] here in Las Vegas, Nevada. Unlike other conferencesthat concentrate all of their keynote sessions at the front of the agenda,this conference spread them out over several days. They had three on Tuesday, two more Wednesday, and the last one on Thursday. Here are my thoughts on the two keynote sessions on Wednesday.
Top 10 Disruptive Technologies affecting the Data Center
The analyst presented his "top ten" technologies to watch:
Storage Virtualization - I was glad this made top of the list!
Cloud Computing - IBM was recognized for its leadership in this space. Cloud computing brings together new models of acquisition, billing, access, and deployment of new technology.
Servers: Beyond Blades - Currently, distributed servers have fixed CPU, memory and I/O capability, as manufactured at the factory, but what if you can re-assign these resources dynamically? New technologies mightmake this possible.
Virtualization for desktops - not just hosted virtual desktops, the speaker proposed having"portable personalities" that an employee might carry around on a CDrom or USB memory stick, andthen use whatever computer equipment was nearby.
Enterprise Mashups - You know analysts have too much time on their hands when they come up withtheir own eight-layer reference architecture for enterprise adoption of Web 2.0 technologies.
Specialized Systems - These are sometimes called heterogeneous systems, hybrids, or application-specific appliances. Unlike general purposes servers, these are more difficult to re-purpose as your needs change. However, if done right, can provide better performance for specific workloads.
Social Software and Social Networking - A survey of the audience found 18 percent were alreadyusing Mashups in the enterprise, but 65 percent haven't looked at this at all. Because traditionalhierarchically-organized companies can't re-structure their employees fast enough, the use ofsocial software to develop "virtual teams" and "communities of interest" can be an effective wayto get the "wisdom of crowds" from your employees. Rather than just installing this kind of software, the speaker felt it was better to just "plant seeds" and let social networks grow withinthe enterprise.
Unified Communications - Do you use different providers or software for cell phone, land line, wi-fi, internet, Instant Messaging (IM), audio conferencing, video conferencing, and email? The promise of Unified Communications is to bring this all together.
Zones and Pods - In the 1990s, traditional design for data centers tried to anticipate growthover the next 15-20 years, and build accordingly. These did not foresee all the changes in IT.The new best practice is a "pod approach" where you only build what you need for the next 5 to 7years, with the architecture to expand as needed. A traditional 9000-square-foot data center thatsupports 150 "watts-per-square-foot" would cost over $20 million to build, and over $1 million inelectricity every year. A pod alternative might cost less than $12 million to build, and nearlycut electricity costs in half.
Green IT - rapid "green" improvements are being demanded on IT operations, not just forpolitical correctness, but also for cost savings. A survey of the audience found 7 percentwilling to pay a premium price for green solutions, and another 26 percent willing to pay aslightly higher price for green features and attributes.
Don McMillan, Computer Engineer turned Stand Up Comic
Don gave a hilarious look at the IT industry. While most comics that are often hired to entertainthe audience have only a layman's knowledge of what we do, Don has a masters degree in ElectricalEngineering from Stanford and worked at a variety of IT companies, including AT&T Bell Labs andVLSI Technology. You can see more of his bio on his[Technically Funny] Web site.
Here's Don in a [four-minute video] demonstrating the kind of observational humor he performs.
It's good to see a bit of humor at IT conferences. With the pressures of IT staff and managementto manage explosive growth with shrinking budgets, the attendees appreciated the mix of serious with the not-so-serious.
The title of this post is inspired by Baxter Black's [latest book]. Rathera recap of the break-out sessions, I thought I would comment on a fewsentences, phrases or comments I heard in the afternoon and evening.
Stop buying storage from EMC or NetApp
The lunch was sponsored by Symantec. Rod Soderbery presented "Taking the cost out ofcost savings", explaining some ideas to reduce IT costs immediately.
First, he suggested to "stop buying storage" from EMC or NetApp that charge a premiumfor tier-one products. Instead, Rod suggested that people should "think like a Web company"and buy only storage products based on commodity hardware to save money, and to use SRM software to identify areas of poor storage utilization. IBM's TotalStorage Productivity Center softwareis often used to help with this analysis.
His other suggestions were to adopt thin provisioning, data deduplication, and virtualization.The discussion at my table started with someone asking, "How do we adopt those functions without buying new storage capacity with those features already built-in?" I explained that IBM's SAN Volume Controller (SVC),N series gateways, and TS7650G ProtecTIER virtual tape gateway can all provide one or moreof these features to your existing disk storage capacity.
IBM and HP are leaders in blade servers
In the session "Future of Server and OS: Disappearing Boundaries", the audience confirmedby electronic survey that IBM and HP are the leaders in blade servers, although blades representonly 8-10 percent of the overall server market.
Interestingly, 22 percent of the audience has deployed both x86 and non-x86 (POWER, SPARC, etc.) blade servers.The presenters considered this an interesting insight.
Another survey of the audience found that 3 percent considered Sun/STK as their primary storagevendor. One of the presenters was delighted that Sun is still hanging in there.
IBM Business Partners deliver the best of IBM and mask the worst
Elaine Lennox, IBM VP, and Mark Wyllie, CEO of Flagship Solutions Group, Inc. presentedIBM-sponsored back to back sessions. Elaine presented IBM's vision, the New Enterprise Data Center, and the challenges that demand a smarter planet.
Mark focused on his company's experience working with IBM through Innovation Workshops. Theseare assessments that can help someone identify where you are now, where you want to be, andthen action plans to address the gaps.
Cats and Dogs, Oil and Water, Microsoft Windows and Mission-critical applications, what do all of these have in common?
NEC Corporation of America sponsored some sessions on some x86-based solutions they have to offer.The first part, titled "Rats Nests, Snow Drifts and Trailers" focused unified storage, andthe second part, presented by Michael Nixon, focused on how to bring Microsoft Windows servers into the data center for mission-critical applications.
The Economy might be slowing, but storage is still growing
Two analysts co-presented "The Enterprise Storage Scenario". Unlike computing capacity, thereis no on/off switch for storage, not from applications nor from end-users. The cost ofpower for storage is expected to be 3x by 2013. Virtual servers, includingVMware and Microsoft's Hyper-V will drive the need for shared external disk storage.A survey of the audience found 20 percent were expecting to purchase additional storagecapacity 4Q08.
When someone reaches age 52, they expect to coast the rest of their career
At dinner with analysts, the discussion of financial meltdown and bailouts is unavoidable,including everyone's views about the proposed bailout of the Big 3 automakers. I can'tdefend Ford, GM and Chrysler paying their people $70 US dollars per hour, when their UScounterparts at Toyota or Honda are only paid $45 to $50 dollars per hour.
However, I have a close friend who retired after 20 years working for the fire department,and a cousin who retired after 20 years serving in the Navy (the US Navy, not the BolivianNavy), and both are still in their forties in age. A long time ago, IT professionalsretired after 30 years, in some cases with 50 to 60 percent of their base pay as theirpension for the rest of their lives. A 52-year-old that has worked 30 years might expect to enjoy the rest of his old age playing golf and pursuing other hobbies. This is not "coasting", it is called "retirement". The few of my colleagues that I have seen who worked 35 to 40 years did so becausethey enjoyed the challenge of work at IBM. They enjoyed solving tough engineering problems and helping customers.As long as they were having fun on the job,IBM was glad to keep their wealth of experience on board and actively engaged.
Unfortunately, many people rely on their own investments in the stock market for retirement, ratherthan company pensions. With the current financial crisis, I suspect many people my age arereconsidering their previous retirement plans.
We're going to need more trains!
I took the monorail back to my hotel. The ride includes funny announcements and statistics,including this gem:
"Since 1940, Las Vegas has doubled in population every ten years, which means thatby the year 2230, we will have over 1 trillion people calling Las Vegas home. We're goingto need more trains!"
That wraps up Tuesday, Day 2 of my attendance here! Now for some sleep.
I did not register soon enough to get into the MGM Grand itself, so I am staying at a Hiltonat the other end of the Las Vegas strip, but am able to hop on the "Monorail" to get to the MGM,just in time for the breakfast and first welcome session.
This conference has a familiar set up: six keynote sessions, 62 break-out sessions, and fourtown hall meetings. Thanks to electronic survey devices on the seats, speakers were able to gatherreal-time demographics. A large portion of attendees, including myself, are attending this conference for theirfirst time. Here's my recap of the first three keynote sessions:
The Future of Infrastructure and Operations: The Engine of Cloud Computing
How much do companies spend just to keep current? As much as 70 percent! The speaker noted thatthe best companies can get this down to 10 to 30 percent, leaving the rest of the IT budget to facilitate transformation. He predicts that companies are transforming their data centers fromsprawled servers to virtualization, towards a fully automated, service-oriented, real-time infrastructure.
Whereas the original motivation for IT virtualization was to reduce costs, companies now recognizethat they greatly improve agility, the ability to rapidly provision resources for new workloads, and that this will then lead to opportunites for alternative sourcing, such as cloud computing.
The operating system is becoming commoditized, focusing attention instead to a new concept: the"Meta OS". VMware's Virtual Data Center and Microsoft's Azure Fabric Controller are just two examples.Currently, analysts estimate only about 12 percent of x86 workloads are running virtualized, but thatthis could be over 50 percent by 2012.In this same time frame, year 2012, storage Terabytes is expected to increase 6.5x fold, and WAN bandwidthgrowing 35 percent per year.
Virtualization is not just for business applications. There are opportunities to eliminate the mostcostly part of any business: the Personal Computer, poster child of the skyrocketing costs of the client/server movement. Remote hosting of applications, streaming of applications,software as a service (SaaS) and virtual machines for the desktop can greatly reduce costs of customizedPC images and help desk support.
Cloud computing not only reduces per costs per use, but provides a lower barrier of entry and somemuch needed elasticity.Draw a line anywhere along the application-to-hardware software/hardware stack, and you can define acloud computing platform/service. About 65 percent of the attendees surveyed indicated that they were already doing something with CloudComputing, or were planning to in the next four years.
To help get there, the speaker felt that Value-added Resellers (VAR) and System Integrators (SI) wouldevolve into "service brokers", providing Small and Medium sized Businesses (SMB) "one throat to choke" in mixedmultisourced operations. The term "multisource" caught me a bit off-guard, referring to having someworkloads run internally (insourced) while other workloads run out on the Cloud (outsourced). Largerenterprises might have a "Dynamic Sourcing Team", a set of key employees serving as decision makers, employing both business and IT skills to determine the best sourcing for each application workload.
What are the biggest obstacles to getting there? The speaker felt it was the IT staff. People and cultureare the most difficult to change. The second are lack of appropriate metrics. Here were the survey resultsof the attendees:
41 percent had metrics for infrastructure economic attributes
49 percent had metrics for qualities of service (QoS)
12 percent had metrics to measure agility, speed of resource provisioning
The Data Center Scenario: Planning for the Future
This second keynote had two analyst "co-presenters". The focus was on the importance of having a documented Data Center strategy and architecture. Unfortunately, most Data Centers "happen on their own", with a majoroverhaul every 5 to 10 years. The speakers presented some "best practices" for driving this effort.
The first issue was to identify tiers of criticality, similar to those by the[Uptime Institute]. In their example, the most criticalworkloads would have perhaps recovery point objectives (RPO) of zero, and recover time objectives of lessthan 15 minutes. This is achievable using synchronous mirroring with fully automation to handle the failover.
The second issue was to recognize that many applications were designed for local area networks (LAN), butmany companies have distributed processing over a wide area network (WAN). Latency over these longer distancescan kill distributed performance of these applications.
The third issue was that different countries offer different levels of security, privacy and law enforcement.Canada and Ireland, for example, had the lowest risk, countries like India had medium risk, and countries likeChina and Russia had the highest risk, based on these factors.
The speakers suggested the following best practices:
Get a better understanding of the costs involved in providing IT services
Centralize applications that are not affected by latency, but regionalize those that are affected toremote locations to minimize distance delays.
Work towards a "lights out" data center facility, with operations personnel physically separated fromdata center facilities.
For the unfortunate few that are trying to stretch out more life from their existing aging data centers,the speakers offered this advice:
Build only what you need
Decommission orphaned servers and storage, which can be 1 to 12 percent of your operations
Target for replacement any hardware over five years old, not just to reduce maintenance costs, butalso to get more energy-efficient equipment.
Consider moving test workloads, and as much as half of your web servers, off UPS and onto the nativeelectricity grid. In the event of an outage, this reduces UPS consumption.
Implement power-capping and load-shedding, especially during peak times.
Enacting these changes can significantly improve the bottom line. Archaic data centers, those typically over 10 years old with power usage effectiveness (PUE) over 3.0 can cost over twice as much as a moreefficient data center. To learn more about PUE as a metric, see the Green Grid's whitepaper[Data Center power efficiency metrics:PUE and DCiE].
While virtualization can help with these issues, it also introduces new problems, such as VM sprawl anddealing with antiquated licensing schemes of software companies.
The Four Traits of the World's Best-Performing Business Leaders
Best-selling author Jason Jennings presented his findings in researching his various books:
It's Not the Big That Eat the Small... It's the Fast That Eat the Slow : How to Use Speed as a Competitive Tool in Business
Less Is More : How Great Companies Use Productivity As a Competitive Tool in Business
Think Big, Act Small
Hit the Ground Running : A Manual for New Leaders
Jason identified the best companies and interviewed their leaders, including such companies as Koch Industries, Nucor Steel, and IKEA furniture. The leaders he interviewed felt a calling to serveas stewards of their companies, not just write mission and vision statements, and be willingto let go of projects or people that aren't working out.
Jasonindicated a 2007 Gallup poll on the American workplace indicates that 70 percent of employees do notfeel engaged in their jobs.The focus of these leaders isto hire people with the right attitudes, rather than the right aptitudes, and give those people with the knowledge and the right to make business decisions. If done well,employees will think and act as owners, and hold themselves accountable for their economic results. Jason found cases where 25-year-olds were givenresponsibility to make billion-dollar decisions!
I found his talk inspiring! The audience felt motivated to do their jobs better, and be more engagedin the success of their companies.
These keynote sessions set the mood for the rest of the week. I can tell already that the speakers willtoss out a large salad of buzzwords and IT industry acronyms. I saw several people in the audience confusedon some of the terminology, and hopefully they will come over to IBM booth 20 at the Solutions Expofor straight talk and explanation.
I helped set up the IBM booth at the Solutions Center, third floor, where we will have variousproducts on display, as well as subject matter experts to handle all the questions.
I also went ahead and got my conference badge. While most of my cohorts have purple badges, limiting them to the Solution Centers area, I have a red badge, so that I can attend the variouskeynote and break-out sessions this week.
In keeping with our "green" theme, we have all been given matching light green shirts, and these are 70 percent Bamboo cloth, and 30 percent cotton. They are very comfortable,and sustainable! If you see me, come up and just feel my shirt, go ahead, I won't mind!
Tomorrow, the fun begins with the keynote speakers!
During the Republican primaries, Mitt Romney promised Michigan he wouldbring back all those jobs back to the Auto Industry, while his opponent,John McCain, told the audience that those jobs are gone forever, time tostart learning new skills. Mitt won the state, but lost the nomination,and perhaps this snapped him back to reality. Mitt now has a new prescription for what ails the US Auto industry--straight talk that he should have been saying during his campaign,telling people what they should hear, rather than what they wanted to hear.
Gaurav takes this argument one step further, referring to IBM's amazingturn-around back in 1993. Whereas the US Auto Industry has pushed backagainst inevitable globalization, IBM has embraced it, re-inventing itself into aGlobally Integrated Enterprise [GIE] and helping our clients do the same.I've been working for IBM since 1986, so I remember the pre-1993 IBM and how different it is now in the post-1993 era.
The marketplace has responded positively. Since 2004, more than 5,000 companies worldwide have replaced their HP, Sun, and EMC products with energy-efficient IBM Systems: Servers and Storage. Companies have invested in IBM's servers and storage to tackle their most challenging business objectives and to help reduce sprawling data center costs for labor, energy and real estate.This announcement was part of IBM's[Press Release]for its Migration Factory offering. The Migration Factory includes competitive server assessments, migration services, and other resources to help customers achieve energy and space savings and lower their cost of ownership.
Earlier this month, IBM's Chairman and CEO Sam Palmisano recently outlined the possibilities of a smarter planet to the Council on Foreign Relations.Steve Lohr of the New York Times weighs in with his article [I.B.M. Has Tech Answer for Woes of Economy], and Dr. Fern Halper of Hurwitz & Associates gives her take over at [IT-Director.com].
Transcontinental flights and the[Travel Channel] have made the world smaller.Thomas Friedman argued the world has also become "flatter",thanks to advances in computers and global communication, in his 2005 book[The World is Flat].Now, IBM recognizes that InformationTechnology (I.T.) can help us solve the financial meltdown, global warming, and other major problems the world is now faced with.
How? First, our world is becoming instrumented. Sensors, RFID tags and other equipmentare now inexpensive and readily available to be placed wherever they are needed. Second, our world is becoming more interconnected. We are closely approaching two billion internet users andfour billion mobile subscribers, andthese can connect to the trillions of RFID tags, sensors and other instrumentation. Third,our world needs to get more intelligent. Not just US auto workers learning new skills,but all these instruments providing information that can be acted on with intelligentalgorithms. Algorithms can help with automobile traffic in large cities, enhance energyexploration, or improve healthcare.
Well, I'm back from my vacation from Bali and Singapore, and am glad to seethat my fellow blogger BarryB [aka Storage Anarchist] also had a chance to take a break to exotic locations.
Next Thursday, in the USA, is [Thanksgiving holiday], so this will give me a chance to catch up on my email and read everyone's blog posts and product announcements.
The following week, December 2-5, I'll be attending the 27th annual [Data Center Conference] at the MGM Grand hotel and casino in Las Vegas, Nevada. IBM is a Premier and Platinum sponsor for this event.Look for me in one of the many break-out sessions, one-on-oneexecutive meetings, or IBM's "booth 20" at the solution center. Our team will be showingoff IBM's XIV, SVC and TotalStorage Productivity Center offerings, aswell as explaining IBM Information Infrastructure and the rest of theNew Enterprise Data Center strategy.
Well it's Tuesday, and ["election day"] here in the USA, and again IBM has more announcements.
IBM announced [IBM Tivoli Key Lifecycle Manager v1.0] (TKLM) to manage encryption keys. This provides a graphical interface to manage encryption keys, including retention criteria when sharing keys with other companies.
TKLM is supported on AIX, Solaris, Windows, Red Hat and SUSE Linux. IBM plans to offer TKLM forz/OS in 2009. TKLM can be used with Firefox or Internet Explorer web browser. This will include the Encryption Key Manager (EKM) that IBM offered initially to support encryption keys for the TS1120, TS1130, and LTO-4 drives.
While this is needed today for tape, IBM positions this software to also manage the encryption keys for "Full Drive Encryption" (FDE) disk drive modules (DDM) in IBM disk systems in 2009.
There's some good discussion in the comments section over at Robin Harris' StorageMojo blog for hispost [Building a 1.8 Exabyte Data Center].To summarize, a student is working on a research archive and asked Robin Harris for his opinion. The archive will consist of 20-40 million files averaging 90 GB in size each, for a total of 1800 PB or 1.8 EB. By comparison, anIBM DS8300 with five frames tops out at 512TB, so it would take nearly 3600 of these to hold 1.8 EB. While this might seem like a ridiculous amount of data, I think the discussion is valid as our world is certainly headed in that direction.
IBM works with a lot of research firms, and the solution is to put most of this data on tape, with just enough disk for specific analysis. Robin mentions a configurion with Sun Fire 4540 disk systems (aka Thumper). Despite Sun Microsystems' recent [$1.7 Billion dollar quarterly loss], I think even the experts at Sun would recommend a blended disk-and-tape solution for this situation.
Take for example IBM's Scale Out File Services [SoFS] which today handles 2-3 billion files in a single global file system, so 20-40 million would present no problem. SoFS supports a mix of disk and tape, with built-in movement, so that files that were referenced would automatically be moved to disk when needed, and moved back to tape when no longer required, based on policies set by the administrator. Depending on the analysis, you may only need 1 PB or less of disk to perform the work, which can easily be accomplished with a handful of disk systems, such as IBM DS8300 or IBM XIV, for example.
The rest would be on tape. Let's consider using the IBM TS3500 with [S24 High Density] frames. A singleTS3500 tape library with fifteen of these HD frames could hold 45PB of data, assuming 3:1 compression on 1TB-size 3592 cartridges. You wouldneed 40 (forty) of these libraries to get to the full 1800 PB required, and these could hold even more as higher capacity cartridges are developed. IBM has customers with over 40 tape libraries today (not all with these HD frames, of course), but the dimensions and scale that IBM is capable lies within this scope.
(For LTO fans, fifteen S54 frames would hold 32PB of data, assuming 2:1 compression on 800GB-size LTO-4 cartridges.so you would need 57 libraries instead of 40 in the above example.)
This blended disk-and-tape approach would drastically reduce the floorspace and electricity requirements when compared against all-disk configurations discussed in the post.
People are rediscovering tape in a whole new light. ComputerWorld recently came out with an 11-page Technology Brief titled [The Business Value of Tape Storage],sponsored by Dell. (Note: While Dell is a competitor to IBM for some aspects of their business, they OEM their tape storage systems from IBM, so in that respect, I can refer to them as a technology partner.) Here are some excerpts from the ComputerWorld brief:
For IT managers, the question isnot whether to use tape, but whereand how to best use tape as part of acomprehensive, tiered storage architecture.In the modern storage architecture,tape plays a role not onlyin data backup, but also in long-termarchiving and compliance.
“Long-term archiving is the primaryreason any company shoulduse tape these days,” says MikeKarp, senior analyst at EnterpriseManagement Associates in Boulder,Colo. Companies are increasinglylikely to use disk in conjunctionwith tape for backup, but for long-termarchiving needs, tape remainsunbeatable.
After factoring inacquisition costs of equipment andmedia, as well as electricity and datacenter floor space, Clipper Groupfound that the total cost of archivingsolutions based on SATA disk, theleast expensive disk, was up to 23times more expensive than archivingsolutions involving tape. Calculatingenergy costs for the competing approaches,the costs for disk jumpedto 290 times that of tape.
“Tape isalways the winner anywhere costtrumps anything else,” says Karp.No matter how the cost is figured,tape is less expensive.
Beyond IT familiarity with tape,analysts point to other reasons whyorganizations will likely keep tapein their IT storage infrastructures.Energy savings, for example, is themost recent reason to stick withtape. “The economics of tape arepretty compelling, especially whenyou figure in the cost of power,”Schulz says.
So, whether you are planning for an Exabyte-scale data center, or merely questioning the logic of a disk-for-everything storage approach, you might want to consider tape. It's "green" for the environment, and less expensive on your budget.
This is page 34 of Sequoia Capital's[56-slide presentation] about the current financial meltdown. In the past, IT spending tracked closely to the rest of the economy, but the latest downturn has not yet reflected in IT spend.
The rest of the deck is worth going through, with interesting stats presented in a clear manner.
Well, it's Tuesday again, and that means more IBM announcements!
Storage Area Network (SAN)
IBM and Cisco announced [three new blades] for the Cisco MDS 9500 seriesdirectors: 24-port 8 Gbps, 48-port 8 Gbps, and 4/44 blended. The 4/44blended has 4 of the faster 8 Gbps ports, and 44 of the 4 Gpbs ports,so that you can auto-negotiate down to 1 Gbps for your older gear, andstill take advantage of the faster 8 Gbps speeds during the transition.
On the Brocade side, IBM announced the newIBM System Storage Data Center Fabric Manager [DCFM] V10 software. This replaces the products formerly known as BrocadeFabric Manager and McData Enterprise Fabric Connection Manager (EFCM).This software can support up to 24 distinct fabrics, up to 9000 ports,including a mix of FCP, FICON, FCIP and iSCSI protocols.
(On a related note, I heard that Microsoft is planning to rename "Windows Vista" to "Windows 7" next year! Like we say here in Tucson,if it ends in "-ista" it is going to fail in the marketplace! Perhaps EMC should rename their storage virtualization product to "In-7"?).
IBM System Storage DR550
IBM announced today that it now supports [RAID 6 onthe DR550] compliance and retention storage system.
There are a few RAID-5 based EMC Centera customers out there who have notyet switched over to the IBM DR550, and now this might be just the littlenudge they need. For long-term retention of regulatory compliance data,RAID-5 doesn't cut it, you need an advanced RAID scheme, such as RAID-6, RAID-DP or RAID-X.
The DR550 provides non-erasable, non-rewriteable (NENR) storage supportto keep retention-managed data on disk and tape media. It supports 1 TBSATA disk drives and 1TB tape cartridges to provide high capacity at lowcost and "green" low energy consumption.
IBM System Storage N series
Several of our disk systems got improved and enhanced. Let's start withthe IBM System Storage N series[hardware and software] enhancements. IBM now offers high-speed 450GB 15K RPM drives. These are Fibre Channel (FC) drives for the EXN4000 expansion drawers, and Serial Attached SCSI (SAS) drives for the entry-levelN3300 and N3600 models.
The "gateway" models now support a variety of functions that were formerly only available on the appliance models. This includes Advanced Single Instance Storage (A-SIS), Disk Sanitization, and FlexScale.
A-SIS is IBM's "other" deduplication function, which I discussed in my post [A-SIS Storage Savings Estimator Tool]. Disk Sanitization physically writes ones and zeros over existing data to eliminate it, a process IBM sometimes calls "Data Shredding".
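The idea behind deduplication like A-SIS is straightforward: fingerprint fixed-size blocks and store each unique block only once. Below is a minimal sketch of that general technique, not NetApp's actual implementation; the block size and the choice of SHA-256 fingerprints are my own assumptions for illustration:

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative fixed block size

def dedup(data: bytes):
    """Split data into fixed-size blocks, store each unique block once,
    and keep an ordered list of fingerprints for reconstruction."""
    store = {}  # fingerprint -> block contents (each unique block once)
    refs = []   # ordered fingerprints describing the original stream
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)  # duplicate blocks are not stored again
        refs.append(fp)
    return store, refs

def restore(store, refs):
    """Rebuild the original stream from the reference list."""
    return b"".join(store[fp] for fp in refs)

data = (b"A" * BLOCK_SIZE) * 10 + (b"B" * BLOCK_SIZE) * 5
store, refs = dedup(data)
print(f"{len(refs)} logical blocks, {len(store)} unique")  # 15 logical blocks, 2 unique
assert restore(store, refs) == data
```

Fifteen logical blocks collapse to two physical ones here, which is why workloads with lots of repeated content (backups, VM images) see the biggest savings.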
The last feature, FlexScale, might be new for many. It is software that enables use of the "Performance Accelerator Module" (PAM). The PAM is a PCI-Express card with 16GB of on-board RAM that acts as a secondary cache behind the main memory of the N series controller. Depending on the model, you can fit one to five of these cards into the controller itself, boosting random read performance, metadata access, and write block destage.
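Conceptually, a secondary cache like this acts as a victim cache: blocks evicted from controller RAM land in the card instead of being discarded, so a later read can be served from the card rather than from disk. Here is a toy two-tier LRU model of that idea; it is my own simplification, not the actual Data ONTAP caching logic:

```python
from collections import OrderedDict

class TwoTierCache:
    """Toy model of a primary RAM cache backed by a larger secondary
    cache (PAM-like): primary evictions demote into the secondary tier."""

    def __init__(self, primary_size, secondary_size):
        self.primary = OrderedDict()
        self.secondary = OrderedDict()
        self.primary_size = primary_size
        self.secondary_size = secondary_size
        self.hits = self.secondary_hits = self.misses = 0

    def read(self, block):
        if block in self.primary:
            self.primary.move_to_end(block)     # refresh LRU position
            self.hits += 1
        elif block in self.secondary:           # secondary (PAM) hit
            self.secondary.pop(block)
            self.secondary_hits += 1
            self._insert_primary(block)         # promote back to RAM
        else:                                   # miss: fetch from disk
            self.misses += 1
            self._insert_primary(block)

    def _insert_primary(self, block):
        self.primary[block] = True
        if len(self.primary) > self.primary_size:
            victim, _ = self.primary.popitem(last=False)
            self.secondary[victim] = True       # demote instead of discard
            if len(self.secondary) > self.secondary_size:
                self.secondary.popitem(last=False)

cache = TwoTierCache(primary_size=2, secondary_size=4)
for block in [1, 2, 3, 1]:
    cache.read(block)
# block 1 was evicted from RAM into the secondary tier, then re-read from it
print(cache.misses, cache.secondary_hits)  # 3 1
```

The win is in that last read: without the secondary tier, block 1 would have gone back to disk.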
IBM System Storage DS5000
IBM's latest entry into the DS family has been hugely successful. In addition to Linux, Windows and AIX, the DS5000 now supports the [Novell NetWare and Sun Solaris] operating systems.
For infrastructure management, the Remote Support Manager [RSM] that supports the DS3000 and DS4000 has been extended to support the DS5000 as well. This software can monitor up to 50 disk systems, e-mail alerts to IBM when something goes wrong, and allow IBM to dial in via modem to gather more diagnostic information and improve service to the client. Also, the IBM System Storage Productivity Center [SSPC], which supports the DS8000 and SAN Volume Controller (SVC), has been extended to also support the DS5000.
IBM XIV Storage System
In addition to 1-year and 3-year maintenance agreements, IBM now offers[2-year, 4-year and 5-year] software maintenance agreements.
RFID labels for IBM tape media
IBM 3589 (20-pack of LTO cartridges) and IBM 3599 (20-pack of 3592 cartridges for TS1100 series) now offer [RFID labels]. These labels match the volume serial (VOLSER) with a 216-bit unique identifier and 256 bits of user-defined content. This can help with tape inventory, and to prevent people from walking out of the building with a tape cartridge stuffed in their jacket.
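As a mental model, each label pairs the human-readable VOLSER with the tag's bit fields. The record layout below is hypothetical, based only on the field sizes quoted in the announcement:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RfidTapeLabel:
    """Hypothetical label record: field sizes from the announcement,
    layout and field names are my own illustration."""
    volser: str        # volume serial printed on the label, e.g. "A00001L4"
    unique_id: int     # 216-bit factory-assigned identifier
    user_data: bytes   # 256 bits (32 bytes) of user-defined content

    def __post_init__(self):
        assert self.unique_id.bit_length() <= 216, "unique ID is 216 bits"
        assert len(self.user_data) == 32, "user content is 256 bits"

# A reader at the door could match scanned tag IDs against the inventory
inventory = {
    label.unique_id: label.volser
    for label in [RfidTapeLabel("A00001L4", 0x1234, b"\x00" * 32)]
}
print(inventory[0x1234])  # A00001L4
```

Keying inventory by the unique identifier rather than the VOLSER means a relabeled or mislabeled cartridge still resolves to the right physical tape.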
32GB memory stick
While not technically part of the IBM System Storage matrix of offerings, Lenovo announced their new [Essential Memory Key], which holds 32GB of memory and works with both USB 1.1 and USB 2.0 protocols.
I wish I could say this is it for the IBM announcements for October, given that this is the last Tuesday of the month, but there are three days left, so there might be just a few more!