Wrapping up my post-week coverage of the [Data Center 2010 conference], I stuck through to the end to get my money's worth. As the morning went on, it became obvious that many people had booked flights or started their weekends before the official 3:15pm ending of the last day.
Strategies for Data Life Cycle Management
I prefer the term "Information Lifecycle Management", but the two analysts presenting decided to use DLM instead. Let's start with the biggest challenge faced by the audience.
The problem is not meeting Service Level Agreements (SLA) but Service Level Expectations. When looking at the real business value of IT, you should link IT strategy to business outcomes and directives, align with your CIO's pet initiatives, and position storage as a technology supporting IT Directors' goals. Here were the top goals:
Curtailing Storage Sprawl
Compliance and e-Discovery
Improving Service Levels for Data Availability and Protection
Moving to Cloud Computing
The analysts reviewed both a "Tops Down" and "Bottoms Up" approach. They recommend what they call an "Enterprise Information Archive" (what IBM calls Smart Archive, by the way) that provides a better understanding of all data.
No greater lie has been told than "Storage is Cheap". Currently, only 10 percent of companies have a formal "deletion policy", but the analysts predict this will rise to 50 percent by 2013.
The "Bottoms Up" approach is focused on modernizing the data center at the storage technology level. There has been a resurgence in interest in ILM solutions, implementing storage tiers, and storage efficiency features like thin provisioning, data deduplication and real-time compression. Cloud Computing can help off-load this effort to someone else.
ILM provides real business value, such as reducing costs, improving quality of service, and mitigating risks. The analysts felt that if you are not partnering with a storage vendor that offers five essential technologies, you should probably change vendors. What are those five essential technologies? I am glad you asked. Watch this [YouTube video] to find out.
Getting the Most From Your Storage Vendor Relationships
The analyst mentioned there are two kinds of storage vendors: Suppliers, who sell you solutions, and Partners, who work with you to develop unique functionality. He offered some advice:
Allow vendors to analyze and profile your workloads, such as IOPS, MB/sec bandwidth, average blocksize, and so on.
Review your Service level agreements (SLAs), procedures and asset management strategies
Identify upgrade risks, conversion costs, and unintended consequences
Take advantage of vendor engineers and technical staff for skills transfer, best practices, industry trends, and competitive comparisons
Explore different solutions and approaches
Avoid big pitfalls by negotiating and locking in upgrade and maintenance costs, scheduling conversions, and getting any guarantees in writing.
An interactive poll asked the audience how they currently interact with their storage vendors:
The analyst's "Do's and Don'ts" were good advice for nearly any kind of business negotiation:
Keep language simple and enforceable
Limit diagnostic time
Be reasonble with rolling time-lines
Design remedies that keep you whole and are implementable in your environment
Make remedies punitive
Use qualitative measures
Rely on vendor's metrics only
Set terms that expire during life of system
Let the vendor provide best practices after installation, set reasonable expectations, schedule regular reviews, and insist on cross-vendor cooperation, with zero tolerance for finger-pointing between vendors. Depreciate storage equipment quickly.
This was the last session of the conference, a workshop to deal with irrational behavior during unexpected events that could disrupt or impact business operations. In the exercise, each table was a fictitious company, and the 7-8 people sitting at each table represented different department heads who had to make recommendations to upper management on how to deal with each disastrous situation presented to us. Decisions had to be made with limited and incomplete information. Each table had to come to a consensus on each action, and a single spokesperson from each table would present the recommendations. Winners of each round got prizes.
Plenty of coffee, not enough juice. Power and Cooling were top of mind. The rooms were cold, designed for people wearing suits I imagine. I enjoyed plenty of hot coffee throughout the event. Everyone complained that their smartphones and iPads were running out of electricity. The conference had "recharge" stations with plugs for all kinds of different phones, but the Micro-USB plugs that I needed for my Samsung Vibrant, and the Apple connectors needed by everyone else's iPhones and iPads, were always taken. I remember when you could charge your cell phone once a week, because you hardly used it to make calls; now that phones are used to follow Twitter feeds, surf websites, and do other things between sessions, power runs out quickly.
Information Overload. I was one of those following tweets on the HootSuite app on my Android-based smartphone. I was able to meet some of the people with whom I have exchanged blog comments and tweets. One told me that tweeting was his way of taking notes, so that his trip report would be done by the time he got back to the office. I used to write trip reports also, before blogging and tweeting.
The mood was positive. Overall, the rival vendors got along well. I had friendly chats with people from Oracle, HP, Cisco, EMC, VCE, and others. People are optimistic that the IT industry is set for economic growth in 2011.
The only people who look forward to change are babies in soiled diapers. My impression is that people who were threatened by Cloud Computing now have a better understanding on what they need to do going forward. Yes, this means learning new skills, re-evaluating your backup/recovery procedures, reviewing your BC/DR contingency plans, and a variety of other changes. Those who don't like frequent change should consider getting out of the IT industry. Just sayin'
I suspect this will be my last post of 2010. I will be taking a much-needed break, celebrating the Winter Solstice. To all my readers, I wish you good times over the next few weeks, and a Happy New Year!
Continuing my post-week coverage of the [Data Center 2010 conference], Thursday morning had some interesting sessions for those who did not leave town the night before.
Interactive Session Results
In addition to the [Profile of Data Center 2010] that identifies the demographics of this year's registrants, the morning started with highlights of the interactive polls during the week.
External or Heterogeneous Storage Virtualization
The analyst presented his views on the overall External/Heterogeneous Storage Virtualization marketplace. He started with the key selling points.
Avoid vendor lock-in. Unlike the IBM SAN Volume Controller, many of the other storage virtualization products result in vendor lock-in.
Leverage existing back-end capacity. Limited to what back-end storage devices are supported.
Simplify and unify management of storage. Yes, mostly.
Lower storage costs. Unlike with the IBM SAN Volume Controller, many customers using other storage virtualization products discover an increase in total storage costs.
Migration tools. Yes, as advertised.
Consolidation/Transition. Yes, over time.
Better functionality. Potentially.
Shortly after several vendors started selling external/heterogeneous storage virtualization solutions, either as software or as pre-installed appliances, the major storage vendors that were caught with their pants down immediately started labeling everything internal as "storage virtualization" too, to buy some time and increase confusion.
While the analyst agreed that storage virtualization simplifies the view of storage from the host server side, it can complicate the management of storage on the storage end. This often comes up at the Tucson Briefing Center. I explain this as the difference between manual and automatic transmission cars. My father was a car mechanic; as the sole driver and sole mechanic of his own cars, he preferred manual transmissions, which are easier to work on. However, rental car companies, such as Hertz or Avis, prefer automatic transmission cars. These might require more skills from their mechanics, but greatly simplify the experience for those driving.
The analyst offered his views on specific use cases:
Data Migration. The analyst feels that external virtualization serves as one of the best tools for data migration. But what about tech refresh of the storage virtualization devices themselves? Unlike IBM SAN Volume Controller, which allows non-disruptive upgrades of the nodes themselves, some of the other solutions might make such upgrades difficult.
Consolidation/Transition. External virtualization can also be helpful, depending on how aggressively the consolidation/transition schedule is executed.
Improved Functionality/Usability. IBM SAN Volume Controller is a good example; improved functionality can be an unexpected benefit. Features like thin provisioning, automated storage tiering, and so on, can be added to existing storage equipment.
The analyst mentioned that there were different types of solutions. The first category includes those that support both internal storage and external storage virtualization, like the HDS USP-V or IBM Storwize V7000. He indicated that roughly 40 percent of HDS USP-V units are licensed for virtualization. The second category includes those that support external virtualization only, such as IBM SAN Volume Controller, HP LeftHand and SVSP, and so on. The third category is software-only Virtual Guest images that can provide storage virtualization capabilities.
The analyst mentioned EMC's failed product Invista, which sold fewer than 500 units over the past five years. The low penetration for external virtualization, estimated at 2-5 percent, could be explained by the bad taste Invista left with everyone considering their options. However, the analyst predicts that by 2015, external virtualization will reach double-digit marketshare.
Having a feel for the demographics of the registrants, and specific interactive polling in each meeting, provides a great view on who is interested in what topic, and some insight into their fears and motivations.
Continuing my post-week coverage of the [Data Center 2010 conference], Wednesday evening we had six hospitality suites. These are fun informal get-togethers sponsored by various companies. I present them in the order that I attended them.
Intel - The Silver Lining
Intel called their suite "The Silver Lining". Magician Joel Bauer wowed the crowds with amazing tricks.
Intel handed out branded "Snuggies". I had to explain to this guy that he was wearing his backwards.
i/o - Wrestling with your Data Center?
New-comer "i/o" named their suite "Wrestling with your Data Center?" They invited attendees frustrated with their data centers to don inflated Sumo Wrestling suits.
APC by Schneider Electric - Margaritaville
This will be the last year for Margaritaville, a theme that APC has used now for several years at this conference.
Cisco - Fire and Ice
Cisco had "Fire and Ice" with half the room decorated in Red for fire, and White for ice.
This is Ivana, welcoming people to the "Ice" side.
This is Peter, on the "Fire" side. Cisco tried to have opposites on both sides, savory food on one side, sweets on the other.
CA Technologies - Can you Change the Game?
CA Technologies offered various "sports games", with a DJ named "Coach".
Compellent - Get "Refreshed" at the Fluid Data Hospitality Suite
Compellent chose a low-key format, "lights out" approach with a live guitarist. They had hourly raffles for prizes, but it was too dark to read the raffle ticket numbers.
Of the six, my favorite was Intel. The food was awesome, the Snuggies were hilarious, and the magician was incredibly good. I would like to thank Intel for providing me super-secret inside access to their Cloud Computing training resources and for the Snuggie!
Continuing my post-week coverage of the [Data Center 2010 conference], Wednesday afternoon included a mix of sessions that covered storage and servers.
Enabling 5x Storage Efficiency
Steve Kenniston, who joined IBM through the recent acquisition of Storwize Inc., presented IBM's new Real-Time Compression appliance. There are two appliances: one handles 1 GbE networks, and the other supports mixed 1 GbE/10 GbE connectivity. Files are compressed in real time with no impact to performance, and in some cases performance can improve because less data is written to the back-end NAS devices. The appliance is not limited to IBM's N series and NetApp, but is vendor-agnostic; IBM is qualifying the solution with other NAS devices in the market. Compression can reduce data by up to 80 percent, which works out to 5x storage efficiency.
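To see where the "5x" figure comes from, here is a quick back-of-the-envelope sketch (my own arithmetic, using only the 80 percent figure from the session):

```python
# If compression removes 80 percent of the data, only 20 percent of it
# actually lands on the back-end NAS, so each TB of physical disk
# effectively holds 5 TB of user data.
compression_savings = 0.80            # fraction of data removed (from the talk)
physical_fraction = 1 - compression_savings
efficiency = 1 / physical_fraction    # 1 / 0.20 = 5.0
print(f"Storage efficiency: {efficiency:.0f}x")
```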
Townhall - Storage
The townhall was a Q&A session to ask the analysts their thoughts on Storage. Here I will present the answer from the analyst, and then my own commentary.
Are there any gotchas deploying Automated Storage Tiering?
Analyst: you need to fully understand your workload before investing any money into expensive Solid-State Drives (SSD).
Commentary: IBM offers Easy Tier for the IBM DS8000, SAN Volume Controller, and Storwize V7000 disk systems. Before buying any SSD, these systems will measure the workload activity and IBM offers the Storage Tier Advisory Tool (STAT) that can help identify how much SSD will benefit each workload. If you don't have these specific storage devices, IBM Tivoli Storage Productivity Center for Disk can help identify disk performance to determine if SSD is cost-justified.
Wouldn't it be simpler to just have separate storage arrays for different performance levels?
Analyst: No, because that would complicate BC/DR planning, as many storage devices do not coordinate consistency group processing from one array to another.
Commentary: IBM DS8000, SAN Volume Controller and Storwize V7000 disk systems support consistency groups across storage arrays, for those customers that want to take advantage of lower cost disk tiers on separate lower cost storage devices.
Can storage virtualization play a role in private cloud deployments?
Analyst: Yes, by definition, but today's storage virtualization products don't work with public cloud storage providers. None of the major public cloud providers use storage virtualization.
Commentary: IBM uses storage virtualization for its public cloud offerings, but the question was about private cloud deployments. IBM CloudBurst integrated private cloud stack supports the IBM SAN Volume Controller which makes it easy for storage to be provisioned in the self-service catalog.
Can you suggest one thing we can do Monday when we get back to the office?
Analyst: Create a team to develop a storage strategy and plan, based on input from your end-users.
Commentary: Put IBM on your short list for your next disk, tape or storage software purchase decision. Visit [ibm.com/storage] to re-discover all of IBM's storage offerings.
What is the future of Fibre Channel?
Analyst 1: Fibre Channel is still growing and will go from 8 Gbps to 16 Gbps; the transition to Ethernet is slow, so FC will remain the dominant protocol through 2014.
Analyst 2: Fibre Channel will still be around, but NAS, iSCSI and FCoE are all growing at a faster pace. Fibre Channel will only be dominant in the largest of data centers.
Commentary: Ask a vague question, get a vague answer. Fibre Channel will still be around for the next five years.
However, SAN administrators might want to investigate Ethernet-based approaches like NAS, iSCSI and FCoE where appropriate, and start beefing up their Ethernet skills.
Will Linux become the Next UNIX?
Linux in your datacenter is inevitable. In the past, Linux was limited to x86 architectures, and UNIX operating systems ran on specialized CPU architectures: IBM AIX on POWER7, Solaris on SPARC, HP-UX on PA-RISC and Itanium, and IBM z/OS on System z architecture, to name a few. Today, Linux runs on many of these other CPU chipsets as well.
Two common workloads, Web/App serving and DBMS, are shifting from UNIX to Linux. Linux Reliability, Availability and Serviceability (RAS) is approaching the levels of UNIX. Linux has been a mixed blessing for UNIX vendors, with x86 server margins shrinking, but the high-margin UNIX market has shrunk 25 percent in the past three years.
UNIX vendors must make the "mainframe argument" that their flavor of UNIX is more resilient than any OS that runs on Intel or AMD x86 chipsets. In 2008, Sun Solaris was the #1 UNIX, but today it is IBM AIX, with 40 percent marketshare. Meanwhile, HP has focused on extending its Windows/x86 lead through a partnership with Microsoft.
The analyst asks "Are the three UNIX vendors in it for the long haul, or are they planning graceful exits?" The four options for each vendor are:
Milk it as it declines
Accelerate the decline by focusing elsewhere
Impede the market to protect margins
Re-energize UNIX base through added value
Here is the analyst's view on each UNIX vendor.
IBM AIX now owns 40 percent marketshare of the UNIX market. While the POWER7 chipset supports multiple operating systems, IBM has not been able to get an ecosystem to adopt Linux-on-POWER. The "Other" includes z/OS, IBM i, and other x86-based OS.
HP has multi-OS Itanium from Intel, but is moving to Multi-OS blades instead. Their "x86 plus HP-UX" strategy is a two-pronged attack against IBM AIX and z/OS. Intel Nehalem chipset is approaching the RAS of Itanium, making the "mainframe argument" more difficult for HP-UX.
Before Oracle acquired Sun Microsystems, Oracle was focused on Linux as a UNIX replacement. After the acquisition, they now claim to support Linux and Solaris equally. They are now focused on trying to protect their rapidly declining install base by keeping IBM and HP out. They will work hard to differentiate Solaris as having "secret sauce" that is not in Linux. They will continue to compete head-on against Red Hat Linux.
An interactive poll of the audience indicated that the most strategic Linux/UNIX platform over the next five years was Red Hat Linux. This beat out AIX, Solaris and HP-UX, as well as all of the other distributions of Linux.
The rooms emptied quickly after the last session, as everyone wanted to get to the "Hospitality Suites".
Continuing my post-week coverage of the [Data Center 2010 conference], Wednesday morning started with another keynote session, followed by some break-out sessions.
Realities of IT Investment
Tighter budgets mean more business decisions. Future investments will come from cost savings. The analysts report that 77 percent of IT decisions are made by CFOs. Most organizations are spending less now than back in 2008 before the recession.
How we innovate through IT is changing. In bad times, risk trumps return, but only 21 percent of the audience have a formal "risk calculation" as part of their purchase plans.
Divestment matters as much as investment. Reductions in complexity have the greatest long-term cost savings. Try to retire at least 20 percent of your applications next year. With the advent of Cloud Computing, companies might just retire an application and go entirely with public cloud offerings. Note that the years in this graph differ from the ones above, grouped in half-decade increments.
It is important to identify functional dependencies and link your IT risks to business outcomes. Focus on making costs visible, and re-think how you communicate IT performance measurements and their impact to business. Try to change the culture and mind-set so that projects are not referred to as "IT projects" focused on technology, but rather they are "business projects" focused on business results.
Moving to the Cloud
Richard Whitehead from Novell presented challenges in moving to Cloud Computing. There are risks and challenges in managing multiple OS environments. Users should have full access to all IT resources they need to do their jobs. Computing should be secure, compliant, and portable. He showed the shift he sees from physical servers to virtual and cloud deployments over the years 2010 to 2015.
Richard considers a "workload" to be the combination of the operating system, middleware, and application. He then defines a "Business Service" as an appropriate combination of these workloads. For example, a business service that produces a particular report might involve a front-end application workload talking to a business-logic workload, which in turn talks to a back-end database workload.
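To make these definitions concrete, here is a minimal sketch of how I picture the hierarchy; the class names and the middleware choices are my own invention, not Novell's WorkloadIQ API:

```python
from dataclasses import dataclass, field

@dataclass
class Workload:
    """A workload = operating system + middleware + application."""
    os: str
    middleware: str
    application: str

@dataclass
class BusinessService:
    """A business service = an appropriate combination of workloads."""
    name: str
    workloads: list[Workload] = field(default_factory=list)

# The reporting example from the talk, with hypothetical middleware:
reporting = BusinessService("monthly-report", [
    Workload("SUSE Linux", "web server", "report front-end"),
    Workload("SUSE Linux", "app server", "business logic"),
    Workload("SUSE Linux", "none", "back-end database"),
])
```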
To address this challenge, Novell introduced "Intelligent Workload Management", called WorkloadIQ. This manages the lifecycle to build, secure, deploy, manage and measure each workload. Their motto was to take the mix of physical, virtual and cloud workloads and "make it work as one". IBM is a business partner with Novell, and I am a big fan of Novell's open-source solutions, including SUSE Linux.
A Funny Thing Happened on the Way to the Cloud....
Bud Albers, CTO of Disney, shared their success in deploying a hybrid cloud infrastructure. Everyone recognizes the Disney brand for movies and theme parks, but may not be aware that they also own ABC News and ESPN television, run travel cruises, virtual worlds and mobile sites, and deploy applications like Fantasy Football and Fantasy Fishing.
Two years ago, each Line of Business (LOB) owned its own servers; they were continually out of space, and power and HVAC issues forced tactical build-outs of their datacenters. In 2008, the answer to all questions was Cloud Computing; it slices and dices like something invented by [Ron Popeil], with no investment or IT staff required. However, continuing to ask the CFO for CAPEX to purchase assets that were only 1/7th used was not working out either. That's right: over 75 percent of their servers were running at less than 15 percent CPU utilization.
The compromise was named "D*Cloud". Internal IT infrastructure would be positioned for Cloud Computing, by adopting server virtualization, implementing REST/SOAP interfaces, and replicating the success across their various Content Distribution Networks (CDN). Disney is no stranger to Open Source software, using Linux and PHP. Their [Open Source] web page shows tools available from Disney Animation studios.
At the half-way point, they had half their applications running virtualized on just 4 percent of their servers. Today, they run over 20 VMs per host and have 65 percent of their apps virtualized. Their target is 80 percent of their apps virtualized by 2014.
Bud used the analogy that public clouds will be the "gas stations" of the IT industry. People will choose the cheapest gas among nearby gas stations. By focusing on "Application management" rather than "VM instance management", Disney is able to seamlessly move applications as needed from private to public cloud platforms.
Their results? Disney is now averaging 40 percent CPU utilization across all servers. Bud feels they have achieved better scalability, better quality of service, and increased speed, all while saving money. Disney is spending less on IT now than in 2008.
UPMC Maximizes Storage Efficiency with IBM
Kevin Muha, UPMC Enterprise Architect & Technology Manager for Storage and Data Protection Services, was unable to present this in person, so Norm Protsman (IBM) presented Kevin's charts on the success at the University of Pittsburgh Medical Center [UPMC]. UPMC is Western Pennsylvania's largest employer, with roughly 50,000 employees across 20 hospitals, 400 doctors' offices and outpatient sites. They have frequently been rated one of the best hospitals in the US.
Their challenge was storage growth. Their storage environment had grown 328 percent over the past three years, to 1.6PB of disk and nearly 7 PB of physical tape. To address this, UPMC deployed four IBM TS7650G ProtecTIER gateways (2 clusters) and three XIV storage systems for their existing IBM Tivoli Storage Manager (TSM) environment. Since they were already using TSM over a Fibre Channel SAN, the implementation took only three days.
UPMC was backing up nearly 60 TB per day in a 15-hour backup window. Their primary data is roughly 60 percent Oracle, with the rest being a mix of Microsoft Exchange, SQL Server, and unstructured data such as files and images.
Their results? TSM reclamation is 30 percent faster. Hardware footprint reduced from 9 tiles to 5. Over 50 percent reduction in recovery time for Oracle DB, and 20 percent reduction in recovery of SQL Server, Microsoft Exchange, and Epic Caché. They average 24:1 deduplication overall (see the quick arithmetic sketch after this list), which can be broken down by data category as follows:
29:1 Cerner Oracle
18:1 Epic Caché
10:1 Microsoft SQL Server
8:1 Unstructured files
6:1 Microsoft Exchange
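As a quick arithmetic sketch of what those ratios mean: physical capacity is simply logical backup data divided by the deduplication ratio. The ratios below come from the presentation; the logical capacities are made-up illustrations, not UPMC's actual numbers.

```python
# Physical capacity = logical backup data / deduplication ratio.
# Ratios are from the talk; the logical TB figures are hypothetical.
workloads = {
    "Cerner Oracle":        (100.0, 29),
    "Epic Caché":           ( 50.0, 18),
    "Microsoft SQL Server": ( 40.0, 10),
    "Unstructured files":   ( 80.0,  8),
    "Microsoft Exchange":   ( 30.0,  6),
}
for name, (logical_tb, ratio) in workloads.items():
    print(f"{name}: {logical_tb:.0f} TB logical -> {logical_tb / ratio:.1f} TB physical")
```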
UPMC still has lots of LTO-4 tapes onsite and offsite from before the change-over, so the next phase planned is to implement "IP-based remote replication" between ProtecTIER gateways to a third data center at extended distance. The plan is to only replicate the backups of production data, and not replicate the backups of test/dev data.
Continuing my post-week coverage of the [Data Center 2010 conference], we had receptions on the show floor. These started at the Monday evening reception and went on through a dessert reception Wednesday after lunch. I worked the IBM booth, and also walked around to make friends at other booths.
Here are my colleagues at the IBM booth. David Ayd, on the left, focuses on servers, everything from IBM System z mainframes, to POWER Systems that run IBM's AIX version of UNIX, and of course the System x servers for the x86 crowd. Greg Hintermeister, on the right, focuses on software, including IBM Systems Director and IBM Tivoli software. I covered all things storage, from disk to tape. For attendees that stopped by the booth expressing interest in IBM offerings, we gave out Starbucks gift cards for coffee, laptop bags, 4GB USB memory sticks and copies of my latest book: "Inside System Storage: Volume II".
Across the aisle were our cohorts from IBM Facilities and Data Center services. They had the big blue Portable Modular Data Center (PMDC). Last year, three vendors offered these: IBM, SGI, and HP. Apparently IBM won the smack-down: IBM returned victorious, while SGI brought only the cooling portion of its "Ice Cube" and HP had no container whatsoever.
IBM's PMDC is fully insulated so that you can use it anywhere from cold climates below -50 degrees F, like Alaska, to hot climates up to 150 degrees F, like Iraq or Afghanistan, and everything in between. The containers come in three lengths, 20, 40 and 53 feet, and can be combined and stacked as needed into bigger configurations. The systems include their own power generators, cooling, water chillers, fans, closed-circuit surveillance, and fire suppression. Unlike the HP approach, IBM allows all the equipment to be serviced from the comfort of the inside.
This is Mary, one of the 200 employees seconded to the new VCE. Michael Capellas, the CEO of VCE, offered to give a hundred dollars to the [Boys and Girls Club of America], a charity we both support, if I agreed to take this picture. The Boys and Girls Club inspires and enables young people to realize their full potential as productive, responsible, and caring citizens, so it was for a good cause.
The show floor offers attendees a chance to see not just the major players in each space, but also all the new up-and-coming start-ups.
Mastering the art of stretching a week-long event into two weeks' worth of blog posts, I continue my coverage of the [Data Center 2010 conference]. Tuesday afternoon I attended several sessions that focused on technologies for Cloud Computing.
(Note: It appears I need to repeat this. The analyst company that runs this event has kindly asked me not to mention their name on this blog, display any of their logos, mention the names of any of their employees, include photos of any of their analysts, include slides from their presentations, or quote verbatim any of their speech at this conference. This is all done to protect and respect their intellectual property that their members pay for. The pie charts included on this series of posts were rendered by Google Charting tool.)
Converging Storage and Network Fabrics
The analysts presented a set of alternative approaches to consolidating your SAN and LAN fabrics. Here were the choices discussed:
Fibre Channel over Ethernet (FCoE) - This requires 10GbE with Data Center Bridging (DCB) standards, what IBM refers to as Converged Enhanced Ethernet (CEE). Converged Network Adapters (CNAs) support FC, iSCSI, NFS and CIFS protocols on a single wire.
Internet SCSI (iSCSI) - This works on any flavor of Ethernet, is fully routable, and was developed in the 1990s by IBM and Cisco. Most 1 GbE and all 10 GbE Network Interface Cards (NIC) support TCP Offload Engine (TOE) and "boot from SAN" capability. Native support for iSCSI is widely available in most hypervisors and operating systems, including VMware and Windows. DCB Ethernet is not required for iSCSI, but can be helpful. Many customers keep their iSCSI traffic in a separate network (often referred to as an IP SAN) from the rest of their traditional LAN traffic.
Network Attached Storage (NAS) - NFS and CIFS have been around for a long time and work with any flavor of Ethernet. Like iSCSI, DCB is not required but can be helpful. NAS went from being for files only, to being used for email and databases, and now is viewed as the easiest deployment for VMware. VMotion is able to move VM guests from one host to another within the same LAN subnet.
Infiniband or PCI extenders - This approach allows many servers to share a smaller number of NICs and HBAs. While Infiniband was once limited in distance by its copper cables, recent advances now allow fiber optic cables spanning 150 meters.
Interactive poll of the audience offered some insight on plans to switch from FC/FICON to Ethernet-based storage:
Interactive poll of the audience offered some insight on what portion of storage is FCP/FICON attached:
Interactive poll of the audience offered some insight on what portion of storage is Ethernet-attached:
Interactive poll of the audience offered some insight on what portion of servers are already using some Ethernet-attached storage:
Each vendor has its own style. HP provides homogeneous solutions, having acquired 3COM and broken off relations with Cisco. Cisco offers tight alliances over closed proprietary solutions, publicly partnering with both EMC and NetApp for storage. IBM offers loose alliances, with IBM-branded solutions from Brocade and BNT, as well as reselling arrangements with Cisco and Juniper. Oracle has focused on Infiniband instead for its appliances.
The analysts predict that IBM will be the first to deliver 40 GbE, from their BNT acquisition. They predict by 2014 that Ethernet approaches (NAS, iSCSI, FCoE) will be the core technology for all but the largest SANs, and that iSCSI and NAS will be more widespread than FCoE. As for cabling, the analysts recommend copper within the rack, but fiber optic between racks. Consider SAN management software, such as IBM Tivoli Storage Productivity Center.
The analysts felt that the biggest inhibitor to merging SANs and LANs will be organizational issues. SAN administrators consider LAN administrators "Cowboys", undisciplined and unwilling to focus on 24x7 operational availability, redundancy or business continuity. LAN administrators consider SAN administrators "Luddites", afraid or unwilling to accept FCoE, iSCSI or NAS approaches.
Driving Innovation through Innovation
Mr. Shannon Poulin from Intel presented their advancements in Cloud Computing. Let's start with some facts and predictions:
There are over 2.5 billion photos on Facebook, which runs on 30,000 servers
30 billion videos viewed every month
Nearly all Internet-connected devices are either computers or phones
An additional billion people on the Internet
Cars, televisions, and households will also be connected to the Internet
The world will need 8x more network bandwidth, 12x more storage, and 20x more compute power
To avoid confusion between on-premise and off-premise deployments, Intel defines "private cloud" as "single tenant" and "public cloud" as "multi-tenant". Clouds should be automated, efficient, simple, secure, and interoperable enough to allow federation of resources across providers. He also felt that Clouds should be "client-aware", so that the cloud knows what device it is talking to and optimizes the results accordingly. For example, if someone is watching video on a small 320x240 smartphone screen, it makes no sense for the Cloud server to push out 1080p. All devices are going through a connected/disconnected dichotomy: they can do some things while disconnected, but other things only while connected to the Internet or Cloud provider.
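A toy sketch of the "client-aware" idea (my own illustration, not Intel's implementation): the service inspects the client's screen and picks a matching video rendition rather than always pushing 1080p.

```python
# Hypothetical "client-aware" logic: choose the largest video rendition
# that fits the requesting device, instead of always sending 1080p.
RENDITIONS = [(320, 240), (640, 480), (1280, 720), (1920, 1080)]  # (width, height)

def pick_rendition(client_width: int, client_height: int) -> tuple[int, int]:
    """Return the largest rendition that does not exceed the client screen."""
    fitting = [r for r in RENDITIONS
               if r[0] <= client_width and r[1] <= client_height]
    return max(fitting) if fitting else RENDITIONS[0]

print(pick_rendition(320, 240))    # smartphone -> (320, 240)
print(pick_rendition(1920, 1200))  # desktop    -> (1920, 1080)
```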
An internal Intel task force investigated what it would take to beat MIPS and IBM POWER processors, and found that their own Intel chips lacked key functionality. Intel plans to address some of these shortcomings with a new chip called "Sandy Bridge" sometime next year. They also plan a series of specialized chips that support graphics processing (GPU), network processing (NPU) and so on. He also mentioned that Intel released "Tukwila" earlier this year, the latest version of the Itanium chip. HP is the last major company still using Itanium for its servers.
Shannon wrapped up the talk with a discussion of two Cloud Computing initiatives. The first is [Intel® Cloud Builders], a cross-industry effort to build Cloud infrastructures based on the Intel Xeon chipset. The second is the [Open Data Center Alliance], comprised of leading global IT managers who are working together to define and promote data center requirements for the cloud and beyond.
The analysts feel that we need to switch from thinking about "boxes" (servers, storage, networks) to "resources". To this end, they envision a future datacenter where resources are connected to an any-to-any fabric that connects compute, memory, storage, and networking resources as commodities. They feel the current trend towards integrated system stacks is just a marketing ploy by vendors to fatten their wallets. (Ouch!)
A new concept, "disaggregation", caught my attention. When you make cookies, you disaggregate a cup of sugar from the sugar bag, a teaspoon of baking soda from the box, and so on. When you carve a LUN from a disk array, you are disaggregating the storage resources you need for a project. The analysts feel we should be able to do this with server and network resources as well, so that when you want to deploy a new workload you just disaggregate the bits and pieces in the amounts you actually plan to use and combine them accordingly. IBM calls these combinations "ensembles" of Cloud computing.
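Here is how I picture disaggregation in code, a sketch with invented names (this is not an IBM or analyst API): each pool is the sugar bag, and the ensemble is the batch of dough you actually mix.

```python
class ResourcePool:
    """A fungible pool of one resource type (cores, GB of RAM, TB of disk)."""
    def __init__(self, resource: str, capacity: float):
        self.resource, self.capacity = resource, capacity

    def disaggregate(self, amount: float) -> float:
        """Carve off just the amount needed, like a cup of sugar from the bag."""
        if amount > self.capacity:
            raise ValueError(f"not enough {self.resource} in the pool")
        self.capacity -= amount
        return amount

# Compose an "ensemble" for a new workload from shared any-to-any pools.
compute = ResourcePool("cores", 512)
memory  = ResourcePool("RAM GB", 4096)
storage = ResourcePool("disk TB", 200)

ensemble = {
    "cores":   compute.disaggregate(8),
    "RAM GB":  memory.disaggregate(64),
    "disk TB": storage.disaggregate(2),
}
print(ensemble)   # {'cores': 8, 'RAM GB': 64, 'disk TB': 2}
```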
Very few workloads require "best-of-breed" technologies. Rather, this new fabric-based infrastructure recognizes the reality that most workloads do not. One thing that IT Data Center operations can learn from Cloud Service Providers is their focus on "good enough" deployment.
This means, however, that IT professionals will need new skill sets. IT administrators will need to learn a bit of application development, systems integration, and runbook automation. Network admins need to enter 12-step programs to stop using Command Line Interfaces (CLI). Server admins need to put down their screwdrivers and focus instead on policy templates.
Whether you deploy private, public or hybrid cloud computing, the benefits are real and worth the changes needed in skill sets and organizational structure.
Continuing my coverage of the [Data Center 2010 conference], Tuesday afternoon I presented "Choosing the Right Storage for your Server Virtualization". In 2008 and 2009, I attended this conference as a blogger only, but this time I was also a presenter.
The conference asked vendors to condense their presentations down to 20 minutes. I am sure this was inspired by the popular 18-minute lectures of the [TED conference], or perhaps the [Pecha Kucha] night gatherings in Japan where each presenter speaks while showing 20 slides for 20 seconds each. This forces presenters to focus on their key points rather than fill the time slot with unnecessary marketing fluff. It also allows more vendors a chance to pitch their point of view.
Continuing my coverage of the Data Center 2010 conference, Tuesday morning I attended several sessions. The first was a serious IT discussion with Mazen Rawashdeh, Technology Executive from eBay; the second was a lighthearted review of the benefits of Cloud Computing from humorist Dave Barry; and the third focused on re-architecting backup strategies.
eBay – How One Fast Growing Company is Solving its Infrastructure and Data Center Challenges
"It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is the most adaptable to change." -- Charles Darwin
So far, this has been the best session I have attended. eBay operates in 32 countries in seven languages, helping 90 million users to buy or sell 245 million items in 50,000 categories. Let's start with some statistics of the volume of traffic that eBay handles:
$2,000 traded every second
A cell phone sold every six seconds
A pair of shoes sold every nine seconds
A major appliance sold every minute
93 billion database actions every day
50 TB of data ingested daily
Code changes rolled into the eBay application every day
In 2007, eBay discovered a disturbing trend: infrastructure costs were growing linearly with business listing volume, an unsustainable model. Mazen Rawashdeh, of eBay Marketplace Technology Operations, presented their strategy to break free from this problem. They want to double the number of listings without doubling their costs. They are two years into their four-year plan:
Switched from expensive 12U-high servers consuming 3 kilowatts each to open source software on commodity 1-2U server hardware. Mazen owns all the costs from the cement floor up to the web server.
Replaced team-optimized key performance indicators (KPI) with a common KPI. The server team had focused on transactions per minute, the storage team on utilization, and the network team on MB/sec bandwidth; the problem is that changes to optimize one might have a negative impact on the other teams. The new KPI was "Watts per listing", which allowed all teams to focus on a common goal (see the sketch after this list).
Focused on changing the corporate culture for communicating clear measurable goals so that everyone understands the why and how of this new KPI. You have to spend money to save money in the long run. Consider costs at least 36 months out.
Changed from purchasing servers and depreciating them over 3 years to a lease model with server replacement tech refresh every 18 months. It is a bad idea to keep IT equipment after full depreciation, as energy savings alone on new equipment easily justifies 18-month replacement.
Adopted storage tiers. Storage is purchased not leased because it is more difficult to swap out disk arrays. They have 10-40 PB of disk. They do not use traditional backup, but rather use disk replication across distant locations. They are quick to delete or archive data that does not belong on their production systems.
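The beauty of a single shared KPI like "Watts per listing" is that it is trivial to compute and every team can move it. A sketch with invented figures (eBay did not share its actual numbers) that happens to work out to the 70 percent reduction reported below:

```python
def watts_per_listing(total_facility_watts: float, active_listings: float) -> float:
    """The shared KPI: total power drawn, divided by business output."""
    return total_facility_watts / active_listings

# Hypothetical before/after figures, consistent with a 70 percent reduction.
before = watts_per_listing(6_000_000, 120_000_000)   # 0.050 W per listing
after  = watts_per_listing(3_600_000, 240_000_000)   # 0.015 W per listing
print(f"Reduction: {(1 - after / before) * 100:.0f}%")  # 70%
```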
Their results so far? They have reduced the Watts per listing by 70 percent over the past two years. They were able to double their volume with a relatively flat IT budget.
The Wit and Wisdom of Dave Barry, Humorist and Author
Dave Barry is a humor columnist. For 25 years he was a syndicated columnist whose work appeared in more than 500 newspapers in the United States and abroad, including the [Funny Times], to which I subscribe. In 1988 he won the Pulitzer Prize for Commentary for his writing about the election and politics in general. Dave has written a total of 30 books, two of which were used as the basis for the CBS TV sitcom "Dave's World," in which Harry Anderson played a much taller version of Dave.
I first met Dave about ten years ago at a SHARE conference in Minneapolis, MN. It was good to see him again.
Backup and Beyond
The analyst covered the "Three C's" of backup: cost, capability and complexity. There are many ways to implement backup, and he predicts that 30 percent of all companies will re-evaluate and re-architect their backup strategy, or at least change their backup software, by 2014 to address these three issues. Another survey indicates that 43 percent of companies consider backup the primary reason they are investigating public cloud service providers.
The top three primary backup software vendors for the audience were Symantec, IBM, and Commvault. An interactive poll of the audience offered some insight:
There appears to be a shift away from using disk to emulate tape (Virtual Tape Library) toward direct disk interfaces.
Some of the recommended actions were:
Exploit backup software features. On average, people keep 11 versions of backup; try cutting this down to four versions (see the sketch after this list for the capacity impact). IBM Tivoli Storage Manager allows this to be done via management class policies.
Implement a separate archive. Once data is archived and backed up, it reduces the backup load on production systems. Any chance to back up semi-static data less frequently will help.
Switch to capacity-based pricing which will allow more flexibility on server options to run backup software.
Implement data deduplication and compression, such as with IBM ProtecTIER data deduplication solution.
Consider a tiered recovery approach, where less critical applications have less backup protection. Many keep 1-2 years of backups, but 90 percent of all recoveries are for backups from the most recent 27 days. Reduce backup retention to 90 days.
Consider adopting a "Unified Recovery Management" strategy that protects laptops and desktops, remote offices and branch offices, and mission-critical applications, and provides for business continuity and disaster recovery.
Regularly test your recovery to validate your procedures and assumptions about your recoverability.
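To get a feel for the savings behind these recommendations, here is a rough sketch. The version counts come from the session; the data size and the per-version cost are my own hypothetical inputs, so treat the output as illustrative only.

```python
# Rough model: backup capacity grows with the number of versions retained.
primary_tb   = 100    # protected primary data, TB (hypothetical)
version_cost = 0.25   # each extra version costs ~25% of a full copy (hypothetical)

def backup_tb(versions: int) -> float:
    """One full copy plus incremental-sized extra versions."""
    return primary_tb * (1 + (versions - 1) * version_cost)

print(f"11 versions: {backup_tb(11):.0f} TB")  # 350 TB
print(f" 4 versions: {backup_tb(4):.0f} TB")   # 175 TB
# Under this simple model, cutting 11 versions to 4 halves backup capacity,
# before even counting the shorter 90-day retention.
```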
While the conference is divided into seven major tracks, it quickly becomes obvious that many of these IT datacenter issues overlap, and that approaches and decisions in one area can easily impact other areas.
Continuing my coverage of the Data Center 2010 conference, Monday afternoon included presentations from IBM executives.
Blueprint for a Smart Data Center
Steve Sams, IBM Vice President, Global Site and Facilities Services, is well known at this conference. In charge of designing and building data center facilities for IBM and its clients, he has lots of experience in various datacenter configurations.
The presentation was an update to last year's [Data Center Cost Saving Actions Your CFO Will Love]. Seventy cents of every IT dollar is spent just keeping existing systems running, leaving only 30 cents to handle growth and business transformation. Over 70 percent of datacenters are more than seven years old, and may not be designed to handle the density of today's IT equipment.
Many companies wanting to virtualize are stalled. IBM's Server Virtualization Analytics services can help cut this transformation time in half, with an ROI of only 6-18 months for complex Wintel environments. This is just one of the 17 end-to-end datacenter analytics tools IBM offers. The results have been 220 percent more VM instances per admin FTE than traditional deployments. IBM drinks its own champagne, having saved over $4 Billion USD in its own datacenter consolidation and virtualization projects.
Want to Cut the Cost of Storage in Half? Here’s How
The speaker of this session started out with a startling prediction: the amount of storage purchased in the five years 2010-2014 will be 25x what was purchased in 2009, on a PB basis. Most attempts to stem this capacity growth have failed, so the focus for cutting storage costs needs to be elsewhere.
The first concern is poor utilization. Utilization on DAS averages 10 percent; on SANs, 40-50 percent. Thin provisioning can raise this to 60-75 percent. Thin provisioning was first introduced for mainframe storage in the 1990s by StorageTek, which IBM resold as the IBM RAMAC Virtual Array (RVA), but many credit 3PAR with porting it over to distributed operating systems in 2002. Other options include data deduplication and compression to reduce the cost of storing data on disk.
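Those utilization numbers translate directly into how much raw capacity you must purchase. A quick sketch using the percentages from the talk (the 100 TB of data is an arbitrary example):

```python
def purchased_tb(data_tb: float, utilization: float) -> float:
    """Raw capacity to buy in order to hold data_tb at a given utilization."""
    return data_tb / utilization

data = 100  # TB of actual data (hypothetical)
for label, util in [("DAS @ 10%", 0.10),
                    ("SAN @ 40%", 0.40),
                    ("Thin provisioned @ 70%", 0.70)]:
    print(f"{label}: buy {purchased_tb(data, util):.0f} TB")
# DAS @ 10%: 1000 TB; SAN @ 40%: 250 TB; thin provisioned @ 70%: 143 TB
```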
The second approach is the use of storage tiering. In this case, the speaker felt SATA was 3x cheaper ($/GB) but could also be 3x lower in performance. Moving data from faster FC/SAS 10K and 15K RPM drives to slower 7200 RPM drives can offer some cost reductions.
Implementing "quotas" in email, file systems or other applications is one of the worst financial decisions an IT department can make, as it merely shifts the storage management from experts (IT staff) to non-experts (end users).
The speaker recommended using archive instead. Keeping backup tapes long-term is not archiving; backups should not be more than eight weeks old.
Interactive polls of the audience gave some interesting insight:
When asked for their expected storage capacity "compound annual growth rate" (CAGR) for the next few years, 26 percent estimated 35-50 percent CAGR, 30 percent estimated 50-75 percent, and 15 percent estimated greater than 75 percent (the sketch after this list shows what those rates compound to).
For thin provisioning, 43 percent of the audience already are using it, and 33 percent plan to next year.
Similarly, 41 percent of the audience are using data deduplication for their primary data, and 30 percent plan to next year.
For automated tiering that moves portions of data automatically between fast and slow tiers of storage to optimize performance, like IBM's Easy Tier, 20 percent are already using it, and 44 percent plan to next year.
41 percent already have some archiving for file systems, 17 percent plan to next year.
Only 6 percent have an all-disk backup/replication environment, but 20 percent plan to adopt this next year.
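To put those estimates in perspective, compound growth does its damage quickly. A quick sketch (the 100 TB starting point is arbitrary):

```python
def capacity_after(start_tb: float, cagr: float, years: int) -> float:
    """Compound annual growth: capacity = start * (1 + CAGR) ** years."""
    return start_tb * (1 + cagr) ** years

start = 100  # TB today (hypothetical)
for cagr in (0.35, 0.50, 0.75):
    print(f"{cagr:.0%} CAGR -> {capacity_after(start, cagr, 3):.0f} TB in 3 years")
# 35% -> 246 TB, 50% -> 338 TB, 75% -> 536 TB
```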
The downside of trying to squeeze out costs with these approaches and technologies is that there can be a negative impact on performance. The speaker suggested a balanced approach of adding lower-cost storage to existing fast storage to meet both capacity and performance requirements.
Smarter Infrastructures Deliver Better Economics
Elaine Lennox, IBM Vice President and Business Line Executive for System Software, presented the "3 D's" of a Smarter Infrastructure: design, data and delivery.
Design: new technologies and approaches are forcing people to reconsider the design of their applications, their infrastructure and their facilities.
Data: on average, companies store 17 copies of the same piece of production data. Data needs to be managed better in the future.
Delivery: new types of cloud computing are changing the way IT services can be delivered, and how they are consumed by end users.
Roadmap to Enterprise Cloud Computing
This was a combo vendor/customer presentation. Rex Wang from Oracle presented an overview of Oracle's service and product offerings, and then Jonathan Levine, COO of LinkShare, presented his experiences deploying Oracle Exadata.
Rex presented Oracle's "Cloud maturity model" that has its customers go through the following steps:
Silo: each application on its own stack of software, server and storage.
Grid: virtualization for shared infrastructure and platforms (internal IaaS and PaaS).
Private cloud: self-service, policy-based management, metered chargeback and capacity planning.
Hybrid Cloud: workloads portable between private and public clouds, offering federation, cloud bursting, and interoperability.
Rex felt the standard "Buy vs Rent" argument in the business world applies to IT as well, and that there could be break-even points over long-term TCO analysis that favors one over the other. He cited internal research that showed 28 percent of Oracle customers have internal or private cloud, and 14 percent use public cloud. 25 percent use Application PaaS, 21 percent database PaaS, 5 percent Identity management PaaS, 10 percent Compute IaaS, 18 percent storage IaaS, and 15 percent Test/Dev IaaS.
Rex felt that in all the hype around taking a single host and dividing it into multiple VMs, people have forgotten that the opposite approach, combining multiple machines into clusters, is also important. He also felt you have to look at the entire "Application Lifecycle", which runs as follows:
IT sets up the equipment as an internal PaaS or IaaS
Developers write the application
End users are trained and use the application
Application owners manage and monitor the application
IT meters the usage and does chargeback to each application owner
Oracle's Exadata and Exalogic compete directly against IBM's Smart Analytics System, IBM CloudBurst, and IBM Smart Business Storage Cloud.
Next up was Jonathan Levine, COO of [LinkShare], a subsidiary of Rakuten in Japan. This is an [Affiliated Marketing] company: instead of pay-per-view or pay-per-click web advertising, it only gets paid when the end user actually buys something after clicking on a web advertisement.
The business runs on an 8 TB data warehouse and a 1 TB OLTP database, ingesting 50 GB daily and handling 400 million transactions per day at 8.5 GB/sec throughput.
They discovered that the Oracle Exadata did not work right out of the box. In fact, it took them about a year to get it working for them, roughly the same amount of time as their last Oracle 10 to Oracle 11 conversion.
Part of their business allows advertisers and web content publishers to generate reports on activity. Jonathan indicated that if the response takes longer than 5 seconds, it might as well take an hour. He called this the "Excel" rule: results need to be as fast as local PC Microsoft Excel pivot-table processing.
With the new Exadata, they met this requirement. Over 84 percent of their transactions happen under 2 seconds, 9 percent take 2-4 seconds, and another 4 percent in the 4-8 second range. They hope that as they approach the winter holiday season that they can handle 2-3x more traffic without negatively impacting this response time.