I helped set up the IBM booth at the Solutions Center, third floor, where we will have various products on display, as well as subject matter experts to handle all the questions.
I also went ahead and got my conference badge. While most of my cohorts have purple badges, limiting them to the Solution Centers area, I have a red badge, so I can attend the various keynote and break-out sessions this week.
In keeping with our "green" theme, we have all been given matching light green shirts, made of 70 percent bamboo cloth and 30 percent cotton. They are very comfortable, and sustainable! If you see me, come up and just feel my shirt, go ahead, I won't mind!
Tomorrow, the fun begins with the keynote speakers!
I did not register soon enough to get into the MGM Grand itself, so I am staying at a Hilton at the other end of the Las Vegas strip, but am able to hop on the "Monorail" to get to the MGM, just in time for the breakfast and first welcome session.
This conference has a familiar setup: six keynote sessions, 62 break-out sessions, and four town hall meetings. Thanks to electronic survey devices on the seats, speakers were able to gather real-time demographics. A large portion of attendees, myself included, are attending this conference for the first time. Here's my recap of the first three keynote sessions:
The Future of Infrastructure and Operations: The Engine of Cloud Computing
How much do companies spend just to keep current? As much as 70 percent! The speaker noted that the best companies can get this down to 10 to 30 percent, leaving the rest of the IT budget to facilitate transformation. He predicts that companies are transforming their data centers from sprawled servers to virtualization, towards a fully automated, service-oriented, real-time infrastructure.
Whereas the original motivation for IT virtualization was to reduce costs, companies now recognize that it greatly improves agility, the ability to rapidly provision resources for new workloads, and that this will then lead to opportunities for alternative sourcing, such as cloud computing.
The operating system is becoming commoditized, shifting attention instead to a new concept: the "Meta OS". VMware's Virtual Data Center and Microsoft's Azure Fabric Controller are just two examples. Currently, analysts estimate only about 12 percent of x86 workloads are running virtualized, but this could be over 50 percent by 2012. In that same time frame, storage terabytes are expected to increase 6.5-fold, with WAN bandwidth growing 35 percent per year.
Virtualization is not just for business applications. There are opportunities to eliminate the most costly part of any business: the Personal Computer, poster child of the skyrocketing costs of the client/server movement. Remote hosting of applications, streaming of applications, software as a service (SaaS) and virtual machines for the desktop can greatly reduce the costs of customized PC images and help desk support.
Cloud computing not only reduces costs per use, but provides a lower barrier to entry and some much-needed elasticity. Draw a line anywhere along the application-to-hardware stack, and you can define a cloud computing platform or service. About 65 percent of the attendees surveyed indicated that they were already doing something with Cloud Computing, or were planning to in the next four years.
To help get there, the speaker felt that Value-added Resellers (VAR) and System Integrators (SI) would evolve into "service brokers", providing Small and Medium sized Businesses (SMB) "one throat to choke" in mixed multisourced operations. The term "multisource" caught me a bit off-guard, referring to having some workloads run internally (insourced) while other workloads run out on the Cloud (outsourced). Larger enterprises might have a "Dynamic Sourcing Team", a set of key employees serving as decision makers, employing both business and IT skills to determine the best sourcing for each application workload.
What are the biggest obstacles to getting there? The speaker felt it was the IT staff. People and culture are the most difficult to change. The second is a lack of appropriate metrics. Here were the survey results from the attendees:
41 percent had metrics for infrastructure economic attributes
49 percent had metrics for qualities of service (QoS)
12 percent had metrics to measure agility, speed of resource provisioning
The Data Center Scenario: Planning for the Future
This second keynote had two analyst "co-presenters". The focus was on the importance of having a documented Data Center strategy and architecture. Unfortunately, most Data Centers "happen on their own", with a major overhaul every 5 to 10 years. The speakers presented some "best practices" for driving this effort.
The first issue was to identify tiers of criticality, similar to those defined by the [Uptime Institute]. In their example, the most critical workloads would have recovery point objectives (RPO) of zero, and recovery time objectives (RTO) of less than 15 minutes. This is achievable using synchronous mirroring with full automation to handle the failover.
The second issue was to recognize that many applications were designed for local area networks (LAN), but many companies have distributed processing over a wide area network (WAN). Latency over these longer distances can kill the performance of these distributed applications.
The third issue was that different countries offer different levels of security, privacy and law enforcement. Canada and Ireland, for example, had the lowest risk, countries like India had medium risk, and countries like China and Russia had the highest risk, based on these factors.
The speakers suggested the following best practices:
Get a better understanding of the costs involved in providing IT services
Centralize applications that are not affected by latency, but regionalize those that are, to minimize distance delays.
Work towards a "lights out" data center facility, with operations personnel physically separated from data center facilities.
For the unfortunate few who are trying to stretch more life out of their existing aging data centers, the speakers offered this advice:
Build only what you need
Decommission orphaned servers and storage, which can be 1 to 12 percent of your operations
Target for replacement any hardware over five years old, not just to reduce maintenance costs, but also to get more energy-efficient equipment.
Consider moving test workloads, and as much as half of your web servers, off UPS and onto the native electricity grid. In the event of an outage, this reduces UPS consumption.
Implement power-capping and load-shedding, especially during peak times.
Enacting these changes can significantly improve the bottom line. Archaic data centers, those typically over 10 years old with power usage effectiveness (PUE) over 3.0, can cost over twice as much as a more efficient data center. To learn more about PUE as a metric, see the Green Grid's whitepaper [Data Center power efficiency metrics: PUE and DCiE].
While virtualization can help with these issues, it also introduces new problems, such as VM sprawl and dealing with antiquated licensing schemes of software companies.
The Four Traits of the World's Best-Performing Business Leaders
Best-selling author Jason Jennings presented his findings in researching his various books:
It's Not the Big That Eat the Small... It's the Fast That Eat the Slow : How to Use Speed as a Competitive Tool in Business
Less Is More : How Great Companies Use Productivity As a Competitive Tool in Business
Think Big, Act Small
Hit the Ground Running : A Manual for New Leaders
Jason identified the best companies and interviewed their leaders, including such companies as Koch Industries, Nucor Steel, and IKEA furniture. The leaders he interviewed felt a calling to serve as stewards of their companies, not just write mission and vision statements, and to be willing to let go of projects or people that aren't working out.
Jason cited a 2007 Gallup poll on the American workplace indicating that 70 percent of employees do not feel engaged in their jobs. The focus of these leaders is to hire people with the right attitudes, rather than the right aptitudes, and to give those people the knowledge and the authority to make business decisions. If done well, employees will think and act as owners, and hold themselves accountable for their economic results. Jason found cases where 25-year-olds were given responsibility to make billion-dollar decisions!
I found his talk inspiring! The audience felt motivated to do their jobs better, and be more engagedin the success of their companies.
These keynote sessions set the mood for the rest of the week. I can tell already that the speakers will toss out a large salad of buzzwords and IT industry acronyms. I saw several people in the audience confused by some of the terminology, and hopefully they will come over to IBM booth 20 at the Solutions Expo for straight talk and explanation.
The title of this post is inspired by Baxter Black's [latest book]. Rather than a recap of the break-out sessions, I thought I would comment on a few sentences, phrases or comments I heard in the afternoon and evening.
Stop buying storage from EMC or NetApp
The lunch was sponsored by Symantec. Rod Soderbery presented "Taking the cost out of cost savings", explaining some ideas to reduce IT costs immediately.
First, he suggested to "stop buying storage" from EMC or NetApp, who charge a premium for tier-one products. Instead, Rod suggested that people should "think like a Web company" and buy only storage products based on commodity hardware to save money, and use SRM software to identify areas of poor storage utilization. IBM's TotalStorage Productivity Center software is often used to help with this analysis.
His other suggestions were to adopt thin provisioning, data deduplication, and virtualization. The discussion at my table started with someone asking, "How do we adopt those functions without buying new storage capacity with those features already built-in?" I explained that IBM's SAN Volume Controller (SVC), N series gateways, and TS7650G ProtecTIER virtual tape gateway can all provide one or more of these features to your existing disk storage capacity.
IBM and HP are leaders in blade servers
In the session "Future of Server and OS: Disappearing Boundaries", the audience confirmed by electronic survey that IBM and HP are the leaders in blade servers, although blades represent only 8-10 percent of the overall server market.
Interestingly, 22 percent of the audience has deployed both x86 and non-x86 (POWER, SPARC, etc.) blade servers, which the presenters considered an insightful finding.
Another survey of the audience found that 3 percent considered Sun/STK as their primary storage vendor. One of the presenters was delighted that Sun is still hanging in there.
IBM Business Partners deliver the best of IBM and mask the worst
Elaine Lennox, IBM VP, and Mark Wyllie, CEO of Flagship Solutions Group, Inc., presented IBM-sponsored back-to-back sessions. Elaine presented IBM's vision, the New Enterprise Data Center, and the challenges that demand a smarter planet.
Mark focused on his company's experience working with IBM through Innovation Workshops. These are assessments that can help you identify where you are now and where you want to be, and then develop action plans to address the gaps.
Cats and Dogs, Oil and Water, Microsoft Windows and Mission-critical applications, what do all of these have in common?
NEC Corporation of America sponsored some sessions on x86-based solutions they have to offer. The first part, titled "Rats Nests, Snow Drifts and Trailers", focused on unified storage, and the second part, presented by Michael Nixon, focused on how to bring Microsoft Windows servers into the data center for mission-critical applications.
The Economy might be slowing, but storage is still growing
Two analysts co-presented "The Enterprise Storage Scenario". Unlike computing capacity, thereis no on/off switch for storage, not from applications nor from end-users. The cost ofpower for storage is expected to be 3x by 2013. Virtual servers, includingVMware and Microsoft's Hyper-V will drive the need for shared external disk storage.A survey of the audience found 20 percent were expecting to purchase additional storagecapacity 4Q08.
When someone reaches age 52, they expect to coast the rest of their career
At dinner with analysts, the discussion of financial meltdown and bailouts is unavoidable, including everyone's views about the proposed bailout of the Big 3 automakers. I can't defend Ford, GM and Chrysler paying their people $70 US dollars per hour, when their US counterparts at Toyota or Honda are only paid $45 to $50 dollars per hour.
However, I have a close friend who retired after 20 years working for the fire department, and a cousin who retired after 20 years serving in the Navy (the US Navy, not the Bolivian Navy), and both are still in their forties. A long time ago, IT professionals retired after 30 years, in some cases with 50 to 60 percent of their base pay as their pension for the rest of their lives. A 52-year-old who has worked 30 years might expect to enjoy the rest of his old age playing golf and pursuing other hobbies. This is not "coasting", it is called "retirement". The few of my colleagues that I have seen work 35 to 40 years did so because they enjoyed the challenge of work at IBM. They enjoyed solving tough engineering problems and helping customers. As long as they were having fun on the job, IBM was glad to keep their wealth of experience on board and actively engaged.
Unfortunately, many people rely on their own investments in the stock market for retirement, rather than company pensions. With the current financial crisis, I suspect many people my age are reconsidering their previous retirement plans.
We're going to need more trains!
I took the monorail back to my hotel. The ride includes funny announcements and statistics, including this gem:
"Since 1940, Las Vegas has doubled in population every ten years, which means thatby the year 2230, we will have over 1 trillion people calling Las Vegas home. We're goingto need more trains!"
That wraps up Tuesday, Day 2 of my attendance here! Now for some sleep.
Well, it's Wednesday, day three at the [Data Center Conference] in Las Vegas, Nevada. Unlike other conferences that concentrate all of their keynote sessions at the front of the agenda, this conference spread them out over several days. They had three on Tuesday, two more Wednesday, and the last one on Thursday. Here are my thoughts on the two keynote sessions on Wednesday.
Top 10 Disruptive Technologies affecting the Data Center
The analyst presented his "top ten" technologies to watch:
Storage Virtualization - I was glad this made top of the list!
Cloud Computing - IBM was recognized for its leadership in this space. Cloud computing brings together new models of acquisition, billing, access, and deployment of new technology.
Servers: Beyond Blades - Currently, distributed servers have fixed CPU, memory and I/O capability, as manufactured at the factory, but what if you could re-assign these resources dynamically? New technologies might make this possible.
Virtualization for desktops - not just hosted virtual desktops; the speaker proposed having "portable personalities" that an employee might carry around on a CD-ROM or USB memory stick, and then use whatever computer equipment was nearby.
Enterprise Mashups - You know analysts have too much time on their hands when they come up with their own eight-layer reference architecture for enterprise adoption of Web 2.0 technologies.
Specialized Systems - These are sometimes called heterogeneous systems, hybrids, or application-specific appliances. Unlike general-purpose servers, these are more difficult to re-purpose as your needs change. However, if done right, they can provide better performance for specific workloads.
Social Software and Social Networking - A survey of the audience found 18 percent were already using Mashups in the enterprise, but 65 percent haven't looked at this at all. Because traditional hierarchically-organized companies can't re-structure their employees fast enough, the use of social software to develop "virtual teams" and "communities of interest" can be an effective way to get the "wisdom of crowds" from your employees. Rather than just installing this kind of software, the speaker felt it was better to just "plant seeds" and let social networks grow within the enterprise.
Unified Communications - Do you use different providers or software for cell phone, land line, wi-fi, internet, Instant Messaging (IM), audio conferencing, video conferencing, and email? The promise of Unified Communications is to bring this all together.
Zones and Pods - In the 1990s, traditional design for data centers tried to anticipate growth over the next 15-20 years, and build accordingly. These designs did not foresee all the changes in IT. The new best practice is a "pod approach" where you only build what you need for the next 5 to 7 years, with the architecture to expand as needed. A traditional 9000-square-foot data center that supports 150 "watts-per-square-foot" would cost over $20 million to build, and over $1 million in electricity every year. A pod alternative might cost less than $12 million to build, and nearly cut electricity costs in half. (The electricity figure is easy to sanity-check; see the sketch after this list.)
Green IT - rapid "green" improvements are being demanded of IT operations, not just for political correctness, but also for cost savings. A survey of the audience found 7 percent willing to pay a premium price for green solutions, and another 26 percent willing to pay a slightly higher price for green features and attributes.
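As promised, here is a quick sanity check of the "over $1 million in electricity" figure for the traditional data center above. This is a minimal sketch in Python; the $0.09-per-kWh utility rate is my own assumption, not a number from the talk:

    floor_space_sq_ft = 9000
    watts_per_sq_ft = 150
    load_kw = floor_space_sq_ft * watts_per_sq_ft / 1000.0   # 1,350 kW

    hours_per_year = 24 * 365
    assumed_rate = 0.09   # assumed $/kWh; not from the presentation
    annual_cost = load_kw * hours_per_year * assumed_rate
    print(f"${annual_cost:,.0f} per year")   # ~$1,064,340 -- "over $1 million"

At that rate, nearly cutting electricity costs in half with a pod design would save roughly half a million dollars a year.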
Don McMillan, Computer Engineer turned Stand Up Comic
Don gave a hilarious look at the IT industry. While most comics hired to entertain the audience have only a layman's knowledge of what we do, Don has a master's degree in Electrical Engineering from Stanford and worked at a variety of IT companies, including AT&T Bell Labs and VLSI Technology. You can see more of his bio on his [Technically Funny] Web site.
Here's Don in a [four-minute video] demonstrating the kind of observational humor he performs.
It's good to see a bit of humor at IT conferences. With the pressures on IT staff and management to manage explosive growth with shrinking budgets, the attendees appreciated the mix of the serious with the not-so-serious.
Continuing this week's coverage of the 27th annual [Data Center Conference], I attended some break-out sessions on the "storage" track.
Effectively Deploying Disruptive Storage Architectures and Technologies
Two analysts co-presented this session. In this case, the speakers are using the term "disruptive" in the [positive sense] of the word, as originally used by Clayton Christensen in his book [The Innovator's Dilemma], and not in the negative sense of IT system outages. By a show of hands, they asked if anyone had more storage than they needed. No hands went up.
The session focused on the benefits versus risks of new storage architectures, and which vendors they felt would succeed in this new marketplace around the years 2012-2013.
By electronic survey, here were the number of storage vendors deployed by members of the audience:
14 percent - one vendor
33 percent - two vendors, often called a "dual vendor" strategy
24 percent - three vendors
29 percent - four or more storage vendors
For those who have deployed a storage area network (SAN), 84 percent also have NAS, 61 percent also have some form of archive storage such as IBM System Storage DR550, and 18 percent also have a virtual tape library (VTL).
The speaker credited IBM's leadership in the now popular "storage server" movement to the IBM Versatile Storage Server [VSS] from the 1990s, the predecessor to IBM's popular Enterprise Storage Server (ESS). A "storage server" is merely a disk or tape system built using off-the-shelf server technology, rather than customized [ASIC] chips, lowering the barriers of entry to a slew of small start-up firms entering the IT storage market, and leading to new innovation.
How can a system designed with no single point of failure (SPOF) actually fail? The speaker conveniently ignored the two most obvious answers (multiple failures, microcode error) and focused instead on mis-configuration. She felt part of the blame falls on IT staff not having adequate skills to deal with the complexities of today's storage devices, and the other part of the blame falls on storage vendors for making such complicated devices in the first place.
Scale-out architectures, such as IBM XIV and EMC Atmos, represent a departure from traditional "scale-up" monolithic equipment. Whereas scale-up machines are traditionally limited in scalability by their packaging, scale-out systems are limited only by the software architecture and back-end interconnect.
To go with cloud computing, the analyst categorized storage into four groups: Outsourced, Hosted, Cloud, and Sky Drive. The difference depended on where servers, storage and support personnel were located.
How long are you willing to wait for your preferred storage vendor to provide a new feature before switching to another vendor? A shocking 51 percent said at most 12 months! 34 percent would be willing to wait up to 24 months, and only 7 percent were unwilling to change vendors. The results indicate more confidence in being able to change vendors, rather than pressures from upper management to meet budget or functional requirements.
Beyond the seven major storage vendors, there are dozens of smaller emerging or privately-held start-ups now offering new storage devices. How willing were the members of the audience to do business with these? 21 percent already have devices installed from them, 16 percent plan to in the next 12-24 months, and 63 percent have no plans at all.
The key value propositions of the new storage architectures were ease-of-use and lower total cost of ownership. The speaker recommended developing a strategy or "road map" for deploying new storage architectures, with focus on quantifying the benefits and savings. Ask the new vendor for references, local support, and an acceptance test or "proof-of-concept" to try out the new system. Also, consider the impact on Disaster Recovery and other existing IT processes that this new storage architecture may affect.
Tame the Information Explosion with IBM Information Infrastructure
Susan Blocher, IBM VP of marketing for System Storage, presented this vendor-sponsored session, covering the IBM Information Infrastructure part of IBM's New Enterprise Data Center vision. This was followed by Brad Heaton, Senior Systems Admin from ProQuest, who gave his "User Experience" of the IBM TS7650G ProtecTIER virtual tape library and its state-of-the-art inline data deduplication capability.
Best Practices for Managing Data Growth and Reducing Storage Costs
The analyst explained why everyone should be looking at deploying a formal "data archiving" scheme, not just for the "mandatory preservation" resulting from government or industry regulations, but also for the benefits of "optional preservation" to help corporations and individual employees be more productive and effective.
Before, there were only two tiers of storage: expensive disk and inexpensive tape. Now, with the advent of slower, less-expensive SATA disks, including storage systems that emulate virtual tape libraries, and others that offer Non-Erasable, Non-Rewriteable (NENR) protection, IT administrators have a middle ground to keep their archive data.
New software innovation supports better data management. The speaker recalled when "storage management" was equated with "backup" only; now it includes all aspects of management, including HSM migration, compliance archive, and long-term data preservation. I had a smile on my face--IBM has used "storage management" to refer to these other aspects of storage since the 1980s!
The analyst felt the best tool to control growth is to delete the data no longer needed, but that nobody uses the Storage Resource Management (SRM) tools needed to make this viable. Until then, people will choose instead to archive emails and user files to less expensive media. The speaker also recommended looking into highly-scalable NAS offerings--such as IBM's Scale-Out File Services (SoFS), Exanet, Permabit, IBRIX, Isilon, and others--when fast access to files is worth the premium price over tape media. The speaker also made the distinction between "stub-based" archiving--such as IBM TSM Space Manager, Sun's SAM-FS, and EMC DiskXtender--and "stub-less" archiving accomplished through file virtualization that employs a global namespace--such as IBM Virtual File Manager (VFM), EMC RAINfinity or F5's ARX.
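To make the "stub-based" idea concrete, here is a minimal sketch of the data movement such products perform, with hypothetical paths and helper names (this is not the actual TSM Space Manager, SAM-FS, or DiskXtender implementation):

    import shutil
    from pathlib import Path

    def archive_with_stub(original: Path, archive_dir: Path) -> None:
        # Move the full file to cheaper archive storage...
        target = archive_dir / original.name
        shutil.move(str(original), str(target))
        # ...and leave a tiny stub behind. A real HSM product intercepts
        # reads of the stub and transparently recalls the full file.
        original.write_text(f"STUB -> {target}\n")

    # Hypothetical example usage:
    archive_with_stub(Path("/home/user/q3-report.doc"), Path("/archive/tier2"))

The stub-less approach avoids leaving anything behind in the file system; instead, the global namespace layer redirects clients to wherever the file currently lives.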
She made the distinction between archives and backups. If you are keeping backups longer than four weeks, they are not really backups, are they? These are really archives, but not as effective. Recent legal precedent no longer considers long-term backup tapes as valid archive tapes.
To deploy a new archive strategy, create a formal position of "e-archivist", choose the applications that will be archived, and focus on requirements first, rather than going out and buying compliance storage devices. Try to get users to pool their project data into one location, to make archiving easier. Have the storage admins offer a "menu" of options to Line-of-Business/Legal/Compliance teams that may not be familiar with subtle differences in storage technologies.
While I am familiar with many of these best practices already, I found it useful to see which competitive products line up with those we already have within IBM, and which new storage architectures others find most promising.
Continuing my coverage of the 27th annual [Data Center Conference], the weather here in Las Vegas has been partly cloudy, which leads me to discuss some of the "Cloud Computing" sessions that I attended on Wednesday.
The x86 Server Virtualization Storm 2008-2012
Along with IBM, Microsoft is recognized as one of the "Big 5" of Cloud Computing. With their recent announcements of Hyper-V and Azure, the speaker presented pros and cons of these new technologies versus established offerings from VMware. For example, Microsoft's Hyper-V is about three times cheaper than VMware and offers better management tools. That could be enough to justify some pilot projects. By contrast, VMware is more lightweight, only 32MB, versus Microsoft Hyper-V which takes up to 1.5GB. VMware has a 2-3 year lead over Microsoft, and offers some features that Microsoft does not yet offer.
Electronic surveys of the audience offered some insight. Today, 69 percent were using VMware only, and 8 percent had VMware plus others, including Xen-based offerings from Citrix, Virtual Iron and more. However, by 2010, the audience estimated that 39 percent would be VMware plus Microsoft and another 23 percent VMware plus Xen, showing a shift away from VMware's current dominance. Today, there are 11 VMware implementations for every Microsoft Hyper-V implementation, and this ratio is expected to drop to 3-to-1 by 2010.
Of the Xen-based offerings, Citrix was the most popular supplier. Others included Novell/PlateSpin, Red Hat, Oracle, Sun and Virtual Iron. Red Hat is also experimenting with kernel-based KVM. However, the analyst estimated that Xen-based virtualization schemes would never get past 8 percent marketshare. The analyst felt that VMware and Microsoft would be the two dominant players with the bulk of the marketshare.
For cloud computing deployments, the speaker suggested separating "static" VMs from "dynamic" ones. Centralize your external storage first, and implement data deduplication for the OS load images. Which x86 workloads are best for server virtualization? The speaker offered this guidance:
The "good" are CPU-bound workloads, small/peaky in nature.
The "bad" are IO-intensive, those that exploit the features of native hardware
The "ugly" refers to workloads based on software with restrictive licenses and those not fully supported on VMs. If you have problems, the software vendor may not help resolve them.
Moving to the Cloud: Transforming the Traditional Data Center
IBM VP Willie Chiu presented the various levels of cloud computing.
Software-as-a-Service (SaaS) provides the software application, operating system and hardware infrastructure, such as SalesForce.com or Google Apps. Either the software meets your needs or it doesn't, but it has the advantage that the SaaS provider takes care of all the maintenance.
Platform-as-a-Service (PaaS) provides operating system, perhaps some middleware like database or web application server, and the hardware infrastructure to run it on. The PaaS provider maintains the operating system patches, but you as the client must maintain your own applications. IBM has cloud computing centers deployed in nine different countries across the globe offering PaaS today.
Infrastructure-as-a-Service (IaaS) provides the hardware infrastructure only. The client must maintain and patch the operating system, middleware and software applications. This can be very useful if you have unique requirements.
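One way to keep the three levels straight is as a responsibility matrix. Here is a minimal sketch in Python summarizing the definitions above (my own summary, not an official IBM chart):

    # Who maintains each layer under each cloud model, per the definitions above
    responsibility = {
        #          application  middleware   OS           hardware
        "SaaS": ("provider",  "provider",  "provider",  "provider"),
        "PaaS": ("client",    "provider",  "provider",  "provider"),
        "IaaS": ("client",    "client",    "client",    "provider"),
    }
    layers = ("application", "middleware", "OS", "hardware")
    for model, owners in responsibility.items():
        print(model + ":", ", ".join(f"{l}={o}" for l, o in zip(layers, owners)))

The further down the stack you go, the more you maintain yourself, and the more freedom you have to meet unique requirements.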
In one case study, Willie indicated that moving a workload from a traditional data center to the cloud lowered the costs from $3.9 million to $0.6 million, an 84 percent savings!
We've Got a New World in Our View
Robert Rosier, CEO of iTricity, presented their "IaaS" offering. "iTricity" was coined from the concept of "IT as electricity". iTricity is the largest Cloud Computing company in continental Europe, hosting 2500 servers with 500TB of disk storage across three locations in the Netherlands and Germany.
Those attendees I talked to who had been to this conference before commented that this year's focus on virtualization and cloud computing was noticeably greater than in previous years. For more on this, read this 12-page whitepaper: [IBM Perspective on Cloud Computing]
It's Thursday here at the [Data Center Conference] in Las Vegas. Trying to keep up with all the sessions and activities has been quite challenging. As is often the case, there are more sessions that I want to attend than I am physically able to, so I have to pick and choose.
Making the Green Data Center a Reality
The sixth and final keynote was an expert panel session, with Mark Bramfitt from Pacific Gas and Electric [PG&E], and Mark Thiele from VMware.
Mark explained PG&E's incentive program to help data centers be more energyefficient. They have spent $7 million US dollars so far on this, and he has requested another$50 million US dollars over the next three years. One idea was to put "shells" aroundeach pod of 28 or so cabinets to funnel the hot air up to the ceiling, rather than havingthe hot air warm up the rest of the cold air supply.
The fundamental disconnect for a "green" data center is that the Facilities team pays for the electricity, but it is the IT department that makes the decisions that impact its use. The PG&E rebates reward IT departments for making better decisions. The best metric available is "Power Usage Effectiveness" or [PUE], calculated by dividing the total energy consumed by the data center by the energy consumed by the IT equipment itself. Typical PUE runs around 3.0, which means for every Watt used for servers, storage or network switches, another 2 Watts are used for power, cooling, and facilities. Companies are trying to reduce their PUE down to 1.6 or so. The lower the better, and 1.0 is the ideal. The problem is that changing the data center infrastructure is as difficult as replacing the phone system or your primary ERP application.
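For those who like to see the arithmetic, here is a minimal sketch of the PUE calculation in Python, using the round numbers from the talk:

    def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
        # Power Usage Effectiveness: total facility power divided by IT power
        return total_facility_kw / it_equipment_kw

    # 1 kW of IT gear plus 2 kW of power/cooling overhead -> the "typical" 3.0
    print(pue(3.0, 1.0))   # 3.0
    # The target the speakers mentioned: only 0.6 kW of overhead per IT kW
    print(pue(1.6, 1.0))   # 1.6

Strictly speaking, PUE is defined over energy consumed rather than an instantaneous power reading, but the ratio works the same either way.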
While California has [Title 24], stating energy efficiency standards for both residential and commercial buildings, it does not apply to data centers. PG&E is working to add data center standards into this legislation.
The two speakers also covered Data Center [bogeymen], unsubstantiated myths that prevent IT departments from doing the right thing. Here are a few examples:
Power cycles - some people believe that x86 servers can typically only handle up to 3000 shutdowns, so equipment is often left running 24 hours a day to minimize these. Most equipment is kept less than 5 years (1826 days), so turning off non-essential equipment at night, and powering it back on the next morning, stays well below this 3000 limit and can greatly reduce kWh (see the quick arithmetic after these examples).
Dust - many are so concerned about dust that they run extra air-filters, which impacts the efficiency of the cooling system's air flow. New IT equipment tolerates dust much better than older equipment.
Humidity - Mark had a great story on this one. He said their "de-humidifier" broke, and they never got around to fixing it; after going years without it, they realized they didn't need to de-humidify.
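Back to the power-cycle bogeyman above: the arithmetic is simple enough to script. A minimal sketch (the 300-Watt server and 12-hour nightly shutdown are my illustrative assumptions; the 3000-cycle and 5-year figures are from the talk):

    CYCLE_LIMIT = 3000              # claimed lifetime shutdown limit
    days_in_service = 5 * 365       # ~5-year equipment lifetime (1825 days)

    # One shutdown per night stays comfortably under the claimed limit:
    print(days_in_service <= CYCLE_LIMIT)        # True: 1825 cycles < 3000

    # Energy saved by powering a 300 W server off 12 hours every night:
    kwh_saved = 0.300 * 12 * days_in_service
    print(f"{kwh_saved:,.0f} kWh saved over 5 years")   # 6,570 kWh

Even under the myth's own assumptions, nightly shutdowns never hit the limit.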
The session wrapped up with some "low hanging fruit", items that can provide immediate benefit with little effort:
Cold-aisle containment--Why are so few data centers doing this?
Colocation providers need to meter individual clients' energy usage -- IBM offers the instrumentation and software to make this possible
Air flow management--Simply organizing cables under the floor tiles could help this.
Virtualization and Consolidation.
High-efficiency power supplies
Managing IT from a Business Service Perspective
The "other" future of the data center is to manage it as a set of integrated IT services,rather than a collection of servers, storage and switches.IT Infrastructure Library (ITIL) is widely-accepted as a set of best practices to accomplish this "service management" approach. The presenter from ASG Software Solutions presented their Configuration Management Data Base (CMDB) and application dependency dashboard. Theyhave some customers with as many as 200,000 configuration items (CIs) in their CMDB.
The solution looked similar to the IBM Tivoli software stack presented earlier this yearat the [Pulse conference].Both ASG and IBM "eat their own dog food", or perhaps more accurately "drink their own champagne", using these software products to run their own internal IT operations.
For many, the future of a "green" data center managed as a set of integrated service are years away, but the technologies and products are available today, and there is no reasonto postpone these projects any longer than necessary. For more about IBM's approach togreen data center, see [Energy EfficiencySolutions]. You can also take IBM's[IT Service Management self-assessment] to help determine whichIBM tools you need for your situation.
Lagasse, Inc. sells janitorial supplies, such as mops, cleaning chemicals, waste receptacles, and garbage can liners. Of the 1000 employees of Lagasse nationwide, about 200 associates were located in New Orleans at their main Headquarters, primary customer care center, and primary IT computing center.
Amazingly, Lagasse did not have a formally documented BCP (Business Continuity Plan), but more of a BCI (Business Continuity Idea). They chose to take a ["donut tire"] approach, putting older previous-generation equipment at their DR site. They knew that in the event of a disaster, they would not be processing as many transactions per second. That was a business trade-off they could accept.
They evaluated all the different threat scenarios for impact and likelihood, and focused on hurricanes and floods. They had experienced previous hurricanes, learning from each, the most recent being Hurricane Ivan in 2004 and Hurricane Dennis in 2005. From this, they were able to categorize three levels of DR recovery:
Tier 1 - The most mission-critical, which for them related to picking, packing and shipping products.
Tier 2 - The next most important, focused on maintaining good customer service
Tier 3 - Everything else, including reporting and administrative functions
The time-line of events went as follows:
The US Government issues warning that a hurricane may hit New Orleans
August 27 - 7pm
Lagasse declares a disaster, starts recovery procedures to an existing IT facility in Chicago, owned by their parent company. A temporary "Southeast" Headquarters was set up in Atlanta. Remote call centers were identified in Dallas, Atlanta, San Antonio, and Miami.
August 28 - just after midnight
In just five hours, they recovered their "Tier 1" applications.
August 28 - 7:30pm
In just over 24 hours, they recovered their "Tier 2" applications.
August 29 - 6am
The Hurricane hits land. With 73 levees breached, the city of New Orleans was flooded.
The following week
Lagasse was fully operational, and recorded their second and third best sales days ever.
I was quite impressed with their company's policy for how they treat their employees during a disaster. For many companies, people during a disaster prioritize their families, not their jobs. If any associate was asked to work during a disaster, the company would take care of:
The safety of their family
The safety of their pets. (In the weeks following this hurricane, I sponsored people in Tucson to go to New Orleans to attend to lost and stray dogs and cats, many of which were left behind when rescuers picked up people from their rooftops.)
Any emergency repairs to secure the home they leave behind
Marshall felt that if you don't know the names of the spouse and kids of your key employees, you are not emotionally-invested enough to be successful during a disaster.
For communications, cell phones were useless. They could call out on them, but anyone with a cell phone with a 504 area code had difficulty receiving calls, as the calls had to be processed through New Orleans. Instead, they used Voice over IP (VoIP) to redirect calls to whichever remote call center each associate went to. Laptops, Citrix, VPN and email were considered powerful tools during this process. They did not have Instant Messaging (IM) at the time.
While the disk and tapes needed to recover Tiers 1 and 2 were already in Chicago, the tapes for Tier 3 were stored locally by a third-party provider. When Lagasse asked for their DR tapes back, the third party refused, based on their [force majeure] clause. Force majeure is a common clause in many business contracts to free parties from liability during major disasters. Marshall advised everyone to strike any "force majeure" clauses out of any future third-party DR protection contracts.
Hurricane Katrina hit the US hard, killing over 1400 people, and America still has not fully recovered. The recovery of the city of New Orleans has been slow. Massive relocations have caused a deficit of talent in the area, not just IT talent, but also in the areas of medicine, education and other professions. The result has been degraded social services, encouraging others to relocate as well. Some have called it the "liberation effect", a major event that causes people to move to a new location or take on a new career in a different field.
On a personal note, I was in New Orleans for a conference the week prior to landfall, and helped clients with their recoveries the weeks after. For more on how IBM Business Continuity Recovery Services (BCRS) helped clients during Hurricane Katrina, see the following [media coverage].
The booths at a typical week-long tradeshow only run from day 2 to day 4, so that day 1 and day 5 can be used for unpacking and repacking all of the demo equipment and displays. This was the case here at the 27th annual [Data Center Conference] in Las Vegas.
The solution showcase ended Thursday afternoon.
From left to right: George Lane, Ron Houston, Cris Espinosa, Patty Congdon, David Bricker, Paula Koziol, Steve Sams, Tony Pearson, Gary Fierko, Diane Hill, David Share, Nick Sardino, Carla Fleming, Bruce Otte.
Gary Fierko and I discussed IBM's vision and strategy, the TS7650G ProtecTIER gateway, and the differences between LTO-4 and IBM Enterprise tape, with attendees at the booth.
Behind the scenes were folks from the [George P. Johnson company] that runs events. Deniese Dunavin here helped us be successful at this conference!
Here is just a portion of the sponsors that made this event possible, printed on bags given to each attendee.
After the booths closed down, we were invited to several different hospitality suites, sponsored by different vendors.
The Cisco hospitality suite had an Elvis impersonator and a beautiful bride. Her name was Trixie.
The bouncers at the Computer Associates (CA) hospitality suite wore the same shades of green and blue as the company's logo.
The APC hospitality suite went with an Island/Pirate theme.
The Brocade hospitality suite rocked the Casbah! Yes, that is a REAL snake she is holding.
Michael Nixon, a presenter from NEC Corporation of America.
By the time we got to the Data Domain hospitality suite, they were out of "dedupe-tinis" and most of the attendees had left, but they were giving out these bumper stickers. For those considering Data Domain, you might want to look at the IBM TS7650G Virtual Tape gateway, which also provides inline data deduplication, but with about a six times faster ingest rate.
This wraps up my week in Las Vegas for the 27th Annual [Data Center Conference]. This conference follows the common approach of ending at noon on Friday, so that attendees can get home to their families for the weekend, or start their weekend in Las Vegas early to watch the 50th annual Wrangler National Finals Rodeo.
I attended the last few sessions. Here is my recap:
Where, When and Why do I need a Solid-State Drive?
The internet transports digital data between devices; all of its other uses have evolved from that basic aim. Increasing data storage at any node on the Web therefore increases the possibilities at every other point, and we are just now beginning to recognize the implications. The two speakers co-presented this session to cover where Solid State Disk (SSD) fits in.
Some electronic surveys of the audience provided some insight. Only 12 percent are deploying SSD now, and 59 percent are evaluating the technology. A whopping 89 percent did not understand SSD technology, or how it would apply to their data center. Here is the expected time line for SSD adoption:
17 percent - within 1 year
60 percent - around 3 years from now
21 percent - 5 years or later
The main reasons cited for adopting SSD were increasing IOPS, reducing power and floor space requirements, and expanding global networks. Here's a side-by-side comparison between HDD and SSD:
Disk array with 120 HDD, 73GB drives versus disk array with 120 SSD, 32GB drives:
Throughput per drive: 100 MB/sec (HDD) versus 250 MB/sec read, 170 MB/sec write (SSD)
IOPS per drive: 300 (HDD) versus 35,000 (SSD)
Power per drive: 12 Watts (HDD) versus 2.4 Watts (SSD)
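Scaling those per-drive figures up to the full 120-drive arrays makes the trade-off concrete. A minimal sketch in Python, using only the numbers quoted above:

    drives = 120
    hdd = {"gb": 73, "iops": 300, "watts": 12.0}
    ssd = {"gb": 32, "iops": 35000, "watts": 2.4}

    for name, d in (("HDD", hdd), ("SSD", ssd)):
        print(name,
              f"{drives * d['gb'] / 1000:.1f} TB,",
              f"{drives * d['iops']:,} IOPS,",
              f"{drives * d['watts'] / 1000:.2f} kW,",
              f"{d['iops'] / d['watts']:,.0f} IOPS/Watt")
    # HDD: 8.8 TB,    36,000 IOPS, 1.44 kW,     25 IOPS/Watt
    # SSD: 3.8 TB, 4,200,000 IOPS, 0.29 kW, 14,583 IOPS/Watt

The SSD array delivers over a hundred times the IOPS at a fifth of the power, but with less than half the capacity, which is why the analysts expect SSD to show up first where IOPS-per-Watt matters most.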
However, the cost-per-GB for SSD is still 25x that of traditional spinning disk, and the analysts expect SSD to remain 10-20x for a while. For now, they estimate that SSD will be mostly found in blade servers, enterprise-class disk systems, and high-end network directors.
The speakers gave examples such as Sun's ZFS Hybrid, and other products from NetApp, Compellent, Rackable, Violin, and Verari Systems.
Taking fear out of IT Disaster Recovery Exercises
The analyst presented best practices for disaster recovery testing with a "Pay Now or Pay Later" pre-emptive approach. Here were some of the suggestions:
Schedule adequate time for DR exercises
Build DR considerations into change control procedures and project lifecycle planning
Document interdependencies between applications and business processes
Bring in the "crisis team" on even the smallest incidents to keep skill sharp
Present the "State of Disaster Recovery" to Senior Management annually
The speaker gave examples of different "tiers" for recovery, with appropriate RPO and RTO levels, and how often these should be tested per year. A survey of the audience found that 70 percent already have a tiered recovery approach.
In addition to IT staff, you might want to consider inviting others to the DR exercise as reviewers for oversight, including: Line of Business folks, Facilities/Operations, Human Resources, Legal/Compliance officers, even members of government agencies.
DR exercises can be performed with a variety of scopes and objectives:
Tabletop Test - IBM calls these "walk-throughs", where people merely sit around the table and discuss what actions they would take in the event of a hypothetical scenario. This is a good way to explore all kinds of scenarios, from power outages to denial-of-service attacks to pandemic diseases.
Checklist Review - Here a physical inventory is taken of all the equipment needed at the DR site.
Stand-alone Test - Sometimes called a "component test" or "unit test", a single application is recovered and tested.
End-to-End simulation - All applications for a business process are recovered for a full simulation.
Full Rehearsal - Business is suspended to perform this over a weekend.
Production Cut-Over - If you are moving data center locations, this is a good time to consider testing some procedures. Other times, production is cut over to the DR site for a week and then returned to the primary site.
Mock Disaster - Management calls this unexpectedly on the IT staff; certain IT staff are told to participate, and others are told not to. This helps to identify critical resources, how well procedures are documented, and whether members of the team are adequately cross-trained.
For each exercise, set the appropriate scope and objectives, score the results, and then identify action plans to address the gaps uncovered. Scoring can be as simple as "Not addressed", "Needs Improvement" and "Met Criteria".
Full Speed Ahead for iSCSI
The analyst presented this final session of the conference. He recognized IBM's early leadership in this area back in 1999, with the IP200i disk system. Today, there are many storage vendors that provide iSCSI solutions, the top three being:
23 percent - Dell/EqualLogic
15 percent - EMC
14 percent - HP/LeftHand Networks
This protocol has been mostly adopted for Windows, Linux and VMware, but has been largely ignored by the UNIX community. The primary value proposition is to offer SAN-like functionality at lower cost. When using the existing NICs that come built-in on most servers, iSCSI can be 30-50 percent less expensive than FC-based SANs. Even if you install TCP-Offload-Engine (TOE) cards into the servers, iSCSI can still represent a 16-19 percent cost savings. Many IBM servers now have TOE functionality built-in.
Since lower costs are the primary motivator, most iSCSI deployments are on 1GbE. The new 10Gbps Ethernet is still too expensive for most iSCSI configurations. For servers running a single application, two 1GbE NICs are sufficient. Servers running virtualization with multiple workloads might need 4 or 5 1GbE NICs, or two 10GbE NICs if 10Gbps is available.
The iSCSI protocol has been most successful for small and medium sized businesses (SMB) looking for one-stop shopping. Buying iSCSI storage from the same vendor as your servers makes a lot of sense: EqualLogic with Dell servers, LeftHand software with HP servers, and IBM's DS3300 or N series with IBM System x servers. The average iSCSI unit was 10TB for about $24,000 US dollars.
Security and management software for iSCSI is not as fully developed as for FC-based SANs. For this reason, most network vendors suggest keeping IP SANs isolated from your regular LAN. If that is not possible, consider VPN or encryption to provide added security. These issues of security and management imply that iSCSI won't dominate the large enterprise data center. Instead, many are watching closely the adoption of Fibre Channel over Ethernet (FCoE), based on revised standards for 10Gbps Ethernet. FCoE standards probably won't be finalized until mid-2009, with products from major vendors by 2010, perhaps taking as much as 10 percent marketshare by 2011.
I hope you have enjoyed this series of posts. In addition to the sessions I attended, the conference has provided me with 67 presentations to review. Those who attended could purchase the audio recordings and proceedings of every session for $295 US dollars, and those who missed the event can purchase these for $595 US dollars. These are reasonable prices when you realize that the average Las Vegas visitor spends 13.9 hours gambling, losing an average of $626 US dollars per visit. The audio recordings and proceedings can provide more than 13.9 hours of excitement for less money!
Continuing this week's theme, my team here at the Tucson Executive Briefing Center (TEBC) have made these two videos for me, using cloud-computing facilities from OfficeMax and the folks at JibJab. Only five people were allowed per video, so we had to make two to get everyone in.
If you have been to the Tucson Executive Briefing Center, perhaps you can recognize some of our faces!
"If you've spent any time in the storage biz, you probably realize that the server vendors sell more storage than they have any right to."
This is the old [Supermarkets-vs-Specialty Shops] debate I discussed over a year ago. The debate goes along the lines that some people prefer to buy their entire information infrastructure (servers, storage, software and services) from a single vendor, one-stop shopping, while others might prefer to buy the pieces as components from different vendors that specialize in each technology. Because of this, Specialty shops tend to focus on other Specialty shops as their primary competitors (EMC vs. NetApp), while Supermarkets tend to focus on other Supermarkets (IBM vs. HP).
The apparent contradiction is that Chuck feels the Supermarkets (IBM, HP, Sun and Dell) should not have any right to sell storage, in the same manner that butchers, bakers and candlestick makers do not believe that Supermarkets should have any right to sell meat, bread or candles. If servers and storage are so different, how can self-proclaimed storage-only specialist EMC have the right to sell their non-storage offerings, from server virtualization (VMware) to cloud-computing services? With EMC's latest announcement of DW/BI centers, I think we can safely take EMC off the list of storage-only specialists. We will need to come up with a third category for those caught in limbo between being a one-stop shopping Supermarket like IBM and a pure storage-only Specialist like NetApp. Perhaps EMC has become the IT equivalent of Wal-Mart's [Neighborhood Market]. (No offense intended to my friends at Wal-Mart!)
Then Chuck continues with these statements:
"It is rarely is it the case that a server vendor can offer you a better storage product, or better service, or better functionality than what a storage specialist can do.
...Interestingly enough, Dell appears to do a sizable amount of storage business "off base" with EMC products -- outside the context of a specific server transaction."
This second contradiction relates to products that are manufactured by specialty shops, but sold through supermarket channels. Chuck would like to imply that the only storage products anyone should consider are those made by specialty shops, whether you get them directly or through a Supermarket with an appropriate OEM agreement. Storage made by Supermarkets, either organically developed or through acquisitions, should not be considered? What happens when a Supermarket acquires a specialty shop? We've already seen how negative EMC has been about IBM's acquisitions of XIV and Diligent, which allowed a Supermarket like IBM to provide better products in both cases than what is available from any specialty shop. Kind of pokes a big hole in that argument!
But Dell also acquired EqualLogic, which Chuck admits might have a "fit in the marketplace". As it turns out, companies would rather buy EMC equipment from Dell sales people than from EMC directly, and perhaps this is because Dell, like IBM, sees the big picture. Dell, IBM and the rest of the IT Supermarkets understand the entire information infrastructure, not just the storage components of a data center. With HP and Sun selling HDS gear, and IBM selling NetApp gear, it becomes obvious that EMC needs Dell more than Dell needs EMC.
Chuck then pokes fun at NetApp in comparing the EMC NX4 to NetApp's FAS2020, comparable to IBM System Storage N series N3300. Here's an excerpt:
Like other Celerras, it does the full unified storage thing: iSCSI, NAS and "real deal" FC that isn't emulated.
The irony, of course, is that the NX4 does not actually use "real" Fibre Channel drives, but rather SAS and SATA drives. I guess Chuck's concern is that the NetApp, which does use "real" Fibre Channel drives, provides FC-attached LUNs to the host through its WAFL mapping, rather than through EMC's traditional RAID-rank mapping approach. How Chuck can imply that anything in the IT industry that is "emulated" is somehow seriously worse than "real", but then spend 40 percent of his posts extolling the benefits of VMware, which offers "emulated" virtual machines, seems to be yet another contradiction.
"Cloud computing" has been ill-defined and over-hyped, yet storage vendors have been quick to trot out their own "cloud storage" offerings and end users are wondering whether there's significant cost savings in these services for them, particularly in tough economic times.
"Cloud-speak" can be downright confusing....
"Surprisingly, Gartner considers the amorphous nature of the term to be good news: 'The very confusion and contradiction that surrounds the term 'cloud computing' signifies its potential to change the status quo in the IT market,' the IT research firm said earlier this year."
Consistent with Scott Adams's original prediction, the barriers of entry have lowered for storage vendors as well. Rather than competing on function and price through valued relationships and trusted expertise, some vendors would rather confuse. EMC tries to paint the NX4 as being "just as good as" a NetApp or IBM N series for unified storage, and EMC tries to create new categories, like Cloud-Oriented Storage (COS), to give their me-too products the impression they are in a league of their own. All of this discourages customers from making their own comparisons and doing their own research.
IBM doesn't play that way. If you want straight talk aboutIBM's products, contact your local IBM Business Partner or sales rep.
Wrapping up this week's theme on ways to make the planet smarter, and less confusing, I present IBM's third annual [five in five]. These are five IBM innovations to watch over the next five years, all of which have implications on information storage. Here is a quick [3-minute video] that provides the highlights: