Tony Pearson is a Master Inventor and Senior IT Architect for the IBM Storage product line at the
IBM Systems Client Experience Center in Tucson, Arizona, and a featured contributor
to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is
author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
My books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
This week I am in beautiful Las Vegas for the Data Center 2010 Conference. While the conference officially starts Monday, I arrived on Sunday to help set up the IBM Booth (Booth "Z").
(Note: This is my third year attending this conference. IBM is a platinum sponsor for this event. The analyst company that runs this event has kindly asked me not to mention their name on this blog, display any of their logos, mention the names of any of their employees, include photos of any of their analysts, include slides from their presentations, or quote verbatim any of their speech at this conference. This is all done to protect and respect their intellectual property that their members pay for. This is all documented in a lengthy document in case I forget. So, if the picture of the conference backpack appears lopped off at the top, this was done intentionally to comply with their request. The list of sponsors at this event represents a "who's who" of the IT industry.)
The pre-conference orientation is for people who are first-timers, or for those who have not attended this conference in a while. The conference includes 7 keynote presentations and 68 sessions organized into seven "tracks" plus one "virtual track" which crosses the other seven:
Servers and Operating Systems
Cost Optimization "Virtual Track"
Each session is further classified as foundational versus advanced, business versus technical, and practical versus strategic.
The speaker also presented some unique methodologies that will be used this week, including "Magic Quadrant", "MarketScope", "Hype Cycle" and "IT Market Clock" which provide graphical representation to help attendees better understand the conference materials.
The Welcome Reception was sponsored by VCE, formerly known as Acadia, the coalition comprising VMware, Intel, Cisco and EMC. I joked that this should be "VICE" so that Intel does not feel left out.
While we enjoyed drinks and snacks, we listened to live music from the all-violin band [Phat Strad].
The CEO of VCE, Michael Capellas, recognized me from across the room and came over to ask me how IBM was doing. We had a nice friendly chat about the IT industry and the economy.
Continuing my coverage of the Data Center 2010 conference, Monday I attended four keynote sessions.
The first keynote speaker started out with an [English proverb]: Turbulent waters make for skillful mariners.
He covered the state of the global economy and how CIOs should address the challenge. We are on the flat end of an "L-shaped" recovery in the United States. GDP growth is expected to be only 4.7 percent in Latin America, 2.3 percent in North America, and 1.5 percent in Europe. Top growth areas include India at 8.0 percent and China at 8.6 percent, with an average of 4.7 percent growth for the entire Asia Pacific region.
On the technical side, the top technologies that CIOs are pursuing for 2011 are Cloud Computing, Virtualization, Mobility, and Business Intelligence/Analytics. He asked the audience if the "Stack Wars" for integrated systems are hurting or helping innovation in these areas.
Move over "conflict diamonds", companies now need to worry about [conflict minerals].
He proposed an alternative approach called Fabric-Based Infrastructure. In this new model, a shared pool of servers is connected to a shared pool of storage over an any-to-any network. In this approach, IT staff spend their time stocking the "vending machine" of resources, allowing end users to serve themselves with the resources they need.
Crucial Trends You Need to Watch
The second speaker covered ten trends to watch, but these were not limited to just technology trends.
Virtualization is just beginning - even though IBM has had server virtualization since 1967 and storage virtualization since 1974, the speaker felt that adoption of virtualization is still in its infancy. Ten years ago, average CPU utilization for x86 servers was only 5-7 percent. Thanks to server virtualization like VMware and Hyper-V, companies have increased this to 25 percent, but many virtualization projects have stalled.
Big Data is the elephant in the room - storage capacity is expected to grow 800 percent over the next 5 years.
Green IT - Datacenters consume 40 to 100 times more energy than the offices they support. Six months ago, Energy Star announced [standards for datacenters] and energy efficiency initiatives.
Unified Communications - Voice over IP (VoIP) technologies, collaboration with email and instant messages, and focus on Mobile smartphones and other devices combines many overlapping areas of communication.
Staff retention and retraining - According to US Labor statistics, the average worker will have 10 to 14 different jobs by the time they reach 38 years of age. People need to broaden their scope and not be so vertically focused on specific areas.
Social Networks and Web 2.0 - the keynote speaker feels this is happening, and companies that try to restrict usage at work are fighting an uphill battle. Better to get ready for it and adopt appropriate policies.
Legacy Migrations - companies are stuck on old technology like Microsoft Windows XP, Internet Explorer 6, and older levels of Office applications. Time is running out, but migration to later releases or alternatives like Red Hat Linux with Firefox browser are not trivial tasks.
Compute Density - Moore's Law, which says compute capability will double every 18 months, is still going strong. We are now getting more cores per socket, forcing applications to be re-written for parallel processing or to use virtualization technologies.
Cloud Computing - every session this week will mention Cloud Computing.
Converged Fabrics - some new approaches are taking shape for datacenter design. Fabric-based infrastructure would benefit from converging SAN and LAN fabrics to allow pools of servers to communicate freely to pools of storage.
He sprinkled fun factoids about our world to keep things entertaining.
50 percent of today's 21-year-olds have produced content for the web. 70 percent of four-year-olds have used a computer. The average teenager writes 2,282 text messages on their cell phone per month.
This year, Google averaged 31 billion searches per month, compared to 2.6 billion searches per month in 2007.
More video has been uploaded to YouTube in the last two months than the three major US networks (ABC, NBC, CBS) have aired since 1948.
Wikipedia averages 4300 new articles per day, and now has over 13 million articles.
This year, Facebook reached 500 million users. If it were a country, it would be ranked third. Twitter would be ranked 7th, with 69% of their growth being from people 32-50 years old.
In 1997, a GB of flash memory cost nearly $8000 to manufacture; today it costs only $1.25.
The computer in today's cell phone is a million times cheaper, and a thousand times more powerful, than a single computer installed at MIT back in 1965. In 25 years, the compute capacity of today's cell phones could fit inside a blood cell.
See [interview of Ray Kurzweil] on the Singularity for more details.
The Virtualization Scenario: 2010 to 2015
The third keynote covered virtualization. While server virtualization has helped reduce server costs, as well as power and cooling energy consumption, it has had a negative effect on other areas. Companies that have adopted server virtualization have discovered increased costs for storage, software and test/development efforts.
The result is a gap between expectations and reality. Many virtualization projects have stalled because there is a lack of long-term planning. The analysts recommend deploying virtualization in stages: tackle the first third, the so-called "low-hanging fruit"; then proceed with the next third; and then wait and evaluate results before completing the last third, the most difficult applications.
Virtualization of storage and desktop clients are completely different projects than server virtualization and should be handled accordingly.
Cloud Computing: Riding the Storm Out
The fourth keynote focused on the pros and cons of Cloud Computing. It started by defining the five key attributes of Cloud: self-service, scalable elasticity, a shared pool of resources, metered and paid per use, delivered over open standard networking technologies.
In addition to IaaS, PaaS and SaaS classifications, the keynote speaker mentioned a fourth one: Business Process as a Service (BPaaS), such as processing Payroll or printing invoices.
While the debate rages over the benefits of private versus public cloud approaches, the keynote speaker brought up the opportunities for hybrid and community clouds. In fact, he felt there is a business model for a "cloud broker" that acts as a go-between for companies and cloud service providers.
A poll of the audience found the top concerns inhibiting cloud adoption were security, privacy, regulatory compliance and immaturity. Some 66 percent indicated they plan to spend more on private cloud in 2011, and 20 percent plan to spend more on public cloud options. He suggested six focus areas:
Test and Development
Prototyping / Proof-of-Concept efforts
Web Application serving
SaaS like email and business analytics
Select workloads that lend themselves to parallelization
The session wrapped up with some stunning results reported by companies. Server provisioning accomplished in 3-5 minutes instead of 7-12 weeks. Reduced cost of email by 70 percent. Four-hour batch jobs now completed in 20 minutes. A 50 percent increase in compute capacity with a flat IT budget. With these kinds of results, the speaker suggested that CIOs should at least start experimenting with cloud technologies and start profiling their workloads and IT services to develop a strategy.
That was just Monday morning; this is going to be an interesting week!
Continuing my coverage of the Data Center 2010 conference, Monday afternoon included presentations from IBM executives.
Blueprint for a Smart data center
Steve Sams, IBM Vice President, Global Site and Facilities Services, is well known at this conference. In charge of designing and building data center facilities for IBM and its clients, he has lots of experience in various datacenter configurations.
The presentation was an update from last year's [Data Center Cost Saving Actions Your CFO Will Love]. 70 cents of every IT dollar is spent on just keeping the existing systems running, leaving only 30 percent to handle growth and business transformation. Over 70 percent of datacenters are more than seven years old, and may not be designed to handle today's density in IT equipment.
Many companies wanting to virtualize are stalled. IBM's Server Virtualization Analytics services can help cut this transformation time in half, with an ROI of only 6-18 months for complex Wintel environments. This is just one of the 17 end-to-end datacenter analytics tools IBM offers. The results have been 220 percent more VM instances per admin FTE than traditional deployments. IBM drinks its own champagne, having saved over $4 Billion USD in its own datacenter consolidation and virtualization projects.
Want to Cut the Cost of Storage in Half? Here’s How
The speaker of this session started out with a startling prediction: the amount of storage purchased in the five years 2010-2014 will be 25x what was purchased in 2009, on a PB basis. Most attempts to stem this capacity growth have failed. Therefore, the focus for cutting storage costs needs to be elsewhere.
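As a back-of-the-envelope check (my own arithmetic, not the speaker's), if annual purchases grow at a constant rate r, then purchases in 2010-2014 total (1+r) + (1+r)^2 + ... + (1+r)^5 times the 2009 baseline. Solving for a 25x total gives roughly 60 percent annual growth:

```python
# Find the constant annual growth rate r such that purchases in
# 2010-2014 total 25x the 2009 baseline (sum of (1+r)^k for k=1..5).
def five_year_multiple(r):
    return sum((1 + r) ** k for k in range(1, 6))

# Simple bisection on r between 0 and 200 percent growth.
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if five_year_multiple(mid) < 25:
        lo = mid
    else:
        hi = mid

print(f"Implied annual growth: {lo * 100:.1f} percent")
```

The bisection converges on just under 60 percent per year, which is in the same neighborhood as the higher CAGR estimates from the audience poll below.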
The first concern is poor utilization. Utilization on DAS averages 10 percent, and on SANs 40-50 percent. Thin provisioning can raise this to 60-75 percent. Thin provisioning was first introduced for mainframe storage in the 1990s by StorageTek, which IBM resold as the IBM RAMAC Virtual Array (RVA), but many credit 3PAR with porting this over to distributed operating systems in 2002. Other options include data deduplication and compression to reduce the cost of storing data on disk.
The second approach is the use of storage tiering. In this case, the speaker felt SATA was 3x cheaper ($/GB) but can also offer 3x lower performance. Moving data from faster FC/SAS 10K and 15K RPM drives to slower 7200 RPM drives can offer some cost reductions.
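To make the tiering math concrete, here is a minimal sketch of the blended-cost calculation, using the speaker's 3x price ratio. The per-GB dollar figures are hypothetical illustrations of mine, not quoted vendor pricing:

```python
# Rough blended-cost sketch using the speaker's 3x figure.
# Prices are hypothetical illustrations, not quoted vendor pricing.
fc_cost_per_gb = 3.00    # fast FC/SAS 15K RPM tier (assumed)
sata_cost_per_gb = 1.00  # slower 7200 RPM SATA tier (3x cheaper)

def blended_cost(total_gb, fraction_on_sata):
    """Cost of a mix where a fraction of capacity sits on SATA."""
    sata_gb = total_gb * fraction_on_sata
    fc_gb = total_gb - sata_gb
    return fc_gb * fc_cost_per_gb + sata_gb * sata_cost_per_gb

all_fast = blended_cost(100_000, 0.0)     # everything on FC/SAS
mostly_slow = blended_cost(100_000, 0.8)  # 80 percent moved to SATA
print(f"All fast: ${all_fast:,.0f}")
print(f"80% SATA: ${mostly_slow:,.0f}")
print(f"Savings:  {100 * (1 - mostly_slow / all_fast):.0f} percent")
```

Under these assumed prices, moving 80 percent of the capacity to the cheaper tier cuts the storage bill roughly in half, which is the kind of result the session title promises.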
Implementing "quotas" in email, file systems or other applications is one of the worst financial decisions an IT department can make, as it merely shifts the storage management from experts (IT staff) to non-experts (end users).
The speaker recommended using archive instead. Keeping backup tapes long-term is not archiving; backups should be no more than eight weeks old.
Interactive polls of the audience gave some interesting insight:
When asked their expected storage capacity "compound annual growth rate" (CAGR) for the next few years, 26 percent estimated 35-50 percent CAGR, 30 percent estimated 50-75 percent, and 15 percent estimated greater than 75 percent.
For thin provisioning, 43 percent of the audience already are using it, and 33 percent plan to next year.
Similarly, 41 percent of the audience is using data deduplication for their primary data, and 30 percent plan to next year.
For automated tiering that moves portions of data automatically between fast and slow tiers of storage to optimize performance, like IBM's Easy Tier, 20 percent are already using it, and 44 percent plan to next year.
41 percent already have some archiving for file systems, 17 percent plan to next year.
Only 6 percent have an all-disk backup/replication environment, but 20 percent plan to adopt this next year.
The downside of trying to squeeze out costs with these approaches and technologies is that there can be a negative impact on performance. The speaker suggested a balanced approach of adding lower cost storage to existing fast storage to meet both capacity and performance requirements.
Smarter Infrastructures Deliver Better Economics
Elaine Lennox, IBM Vice President and Business Line Executive for System Software, presented the "3 D's" of a Smarter Infrastructure: design, data and delivery.
Design: new technologies and approaches are forcing people to reconsider the design of their applications, their infrastructure and their facilities.
Data: on average, companies store 17 copies of the same piece of production data. Data needs to be managed better in the future.
Delivery: new types of cloud computing are changing the way IT services can be delivered, and how they are consumed by end users.
Roadmap to Enterprise Cloud Computing
This was a combo vendor/customer presentation. Rex Wang from Oracle presented an overview of Oracle's service and product offerings, and then Jonathan Levine, COO of LinkShare, presented his experiences deploying Oracle ExaData.
Rex presented Oracle's "Cloud maturity model" that has its customers go through the following steps:
Silo: each application on its own stack of software, server and storage.
Grid: virtualization for shared infrastructure and platforms (internal IaaS and PaaS).
Private cloud: self-service, policy-based management, metered chargeback and capacity planning.
Hybrid Cloud: workloads portable between private and public clouds, offering federation, cloud bursting, and interoperability.
Rex felt the standard "Buy vs Rent" argument in the business world applies to IT as well, and that there could be break-even points over long-term TCO analysis that favors one over the other. He cited internal research that showed 28 percent of Oracle customers have internal or private cloud, and 14 percent use public cloud. 25 percent use Application PaaS, 21 percent database PaaS, 5 percent Identity management PaaS, 10 percent Compute IaaS, 18 percent storage IaaS, and 15 percent Test/Dev IaaS.
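Rex's "Buy vs Rent" break-even point can be sketched with a few lines of arithmetic. All the dollar figures below are assumptions of mine for illustration, not numbers from the presentation:

```python
# Hypothetical buy-vs-rent break-even sketch; all dollar figures
# are illustrative assumptions, not figures from the presentation.
buy_upfront = 120_000   # purchase plus installation (assumed)
buy_monthly = 2_000     # power, cooling, admin (assumed)
rent_monthly = 6_000    # public-cloud equivalent (assumed)

def cumulative_cost(upfront, monthly, months):
    return upfront + monthly * months

# Find the first month where owning becomes cheaper than renting.
breakeven = next(
    m for m in range(1, 121)
    if cumulative_cost(buy_upfront, buy_monthly, m)
       <= cumulative_cost(0, rent_monthly, m)
)
print(f"Buying breaks even after {breakeven} months")
```

With these assumed numbers the purchase pays for itself at month 30; shift any of the three inputs and the break-even point moves, which is exactly why a long-term TCO analysis can favor either option.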
Rex felt that in all the hype around taking a single host and dividing it into multiple VMs, people have forgotten that the opposite approach, combining multiple instances into clusters, is also important. He also felt you have to look at the entire "Application Lifecycle", which runs through these stages:
IT sets up the equipment as an internal PaaS or IaaS
Developers write the application
End users are trained and use the application
Application owners manage and monitor the application
IT meters the usage and does chargeback to each application owner
Oracle's ExaData and ExaLogic compete directly against IBM's Smart Analytics System, IBM CloudBurst, and IBM Smart Business Storage Cloud.
Next up was Jonathan Levine, COO of [LinkShare], a subsidiary of Rakuten in Japan. This is an [Affiliate Marketing] company. Instead of pay-per-view or pay-per-click web advertising, this company only gets paid when the end user actually buys something after clicking on web advertising.
The business runs on an 8 TB data warehouse and a 1 TB OLTP database, ingesting 50 GB daily, with 400 million transactions per day at 8.5 GB/sec throughput.
They discovered that the Oracle ExaData did not work right out of the box. In fact, it took them about a year to get it working for them, roughly the same amount of time as their last Oracle 10 to Oracle 11 conversion.
Part of their business allows advertisers and web content publishers to generate reports on activity. Jonathan indicates that if the response is longer than 5 seconds, it might as well be an hour. He called this the "Excel" rule, that results need to be as fast as local PC Microsoft Excel pivot table processing.
With the new Exadata, they met this requirement. Over 84 percent of their transactions happen under 2 seconds, 9 percent take 2-4 seconds, and another 4 percent in the 4-8 second range. They hope that as they approach the winter holiday season that they can handle 2-3x more traffic without negatively impacting this response time.
Continuing my coverage of the Data Center 2010 conference, Tuesday morning I attended several sessions. The first was a serious IT discussion with Mazen Rawashdeh, Technology Executive from eBay, the second a lighthearted review of the benefits of Cloud Computing from humorist Dave Barry, and the third focused on re-architecting backup strategies.
eBay – How One Fast Growing Company is Solving its Infrastructure and Data Center Challenges
"It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is the most adaptable to change." -- Charles Darwin
So far, this has been the best session I have attended. eBay operates in 32 countries in seven languages, helping 90 million users to buy or sell 245 million items in 50,000 categories. Let's start with some statistics of the volume of traffic that eBay handles:
$2000 traded every second
a cell phone sold every six seconds
a pair of shoes sold every nine seconds
a major appliance sold every minute
93 billion database actions every day
50 TB of data ingested daily
code changes to the eBay application are rolled in every day
In 2007, eBay discovered a disturbing trend: infrastructure costs were growing linearly with business listing volume, which was an unsustainable model. Mazen Rawashdeh, of eBay Marketplace Technology Operations, presented their strategy to break free from this problem. They want to double the number of listings without doubling their costs. They are 2 years into their 4-year plan:
Switched from expensive 12U high servers consuming 3 Kilowatts over to open source software on commodity 1-2U server hardware. Mazen owns all the costs from cement floor up to the web server.
Replaced team-optimized key performance indicators (KPI) with a common KPI. The server team focused on transactions per minute. The storage team was focused on utilization. The network team was focused on MB/sec bandwidth. The problem is that changes to optimize one might have negative impact to other teams. The new KPI was "Watts per listing" that allowed all teams to focus on a common goal.
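The appeal of a single shared KPI is that it is trivial to compute and every team can move it. Here is a minimal sketch; the power and listing figures are made-up numbers of mine, not eBay's actual data (though I chose them to mirror the 70 percent reduction eBay reported):

```python
# "Watts per listing" sketch with made-up numbers, not eBay's actual
# figures: divide total facility power draw by active listings to get
# the single shared KPI the teams rallied around.
def watts_per_listing(facility_kw, active_listings):
    # Convert kW to W, then divide by listing count.
    return facility_kw * 1000 / active_listings

before = watts_per_listing(facility_kw=4000, active_listings=200_000_000)
after = watts_per_listing(facility_kw=2400, active_listings=400_000_000)
reduction = 100 * (1 - after / before)
print(f"Before: {before:.3f} W/listing, after: {after:.3f} W/listing")
print(f"Reduction: {reduction:.0f} percent")
```

Note how the metric rewards both sides of the ratio: the facilities teams can shrink the numerator while the application teams grow the denominator.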
Focused on changing the corporate culture for communicating clear measurable goals so that everyone understands the why and how of this new KPI. You have to spend money to save money in the long run. Consider costs at least 36 months out.
Changed from purchasing servers and depreciating them over 3 years to a lease model with a tech-refresh server replacement every 18 months. It is a bad idea to keep IT equipment after full depreciation, as the energy savings alone on new equipment easily justify an 18-month replacement.
Adopted storage tiers. Storage is purchased not leased because it is more difficult to swap out disk arrays. They have 10-40 PB of disk. They do not use traditional backup, but rather use disk replication across distant locations. They are quick to delete or archive data that does not belong on their production systems.
Their results so far? They have reduced the Watts per listing by 70 percent over the past two years. They were able to double their volume with a relatively flat IT budget.
The Wit and Wisdom of Dave Barry, Humorist and Author
Dave Barry is a humor columnist. For 25 years he was a syndicated columnist whose work appeared in more than 500 newspapers in the United States and abroad, including the [Funny Times] that I subscribe to. In 1988 he won the Pulitzer Prize for Commentary about the election and politics in general. Dave has also written a total of 30 books, of which two of his books were used as the basis for the CBS TV sitcom "Dave's World," in which Harry Anderson played a much taller version of Dave.
I first met Dave about ten years ago at a SHARE conference in Minneapolis, MN. It was good to see him again.
Backup and Beyond
The analyst covered the "Three C's" of backup: cost, capability and complexity. There are many ways to implement backup, and he predicts that 30 percent of all companies will re-evaluate and re-architect their backup strategy, or at least change their backup software, by 2014 to address these three issues. Another survey indicates that 43 percent of companies consider backup the primary reason they are investigating public cloud service providers.
The top three primary backup software vendors for the audience were Symantec, IBM, and Commvault. An interactive poll of the audience offered some insight:
There appears to be a shift away from using disk to emulate tape (Virtual Tape Library) and toward direct disk interfaces.
Some of the recommended actions were:
Exploit backup software features. On average, people keep 11 versions of backup; try cutting this down to four versions. IBM Tivoli Storage Manager allows this to be done via management class policies.
Implement a separate archive. Once data is archived and backed up, it reduces the backup load of production systems. Any chance to back up semi-static data less frequently will help.
Switch to capacity-based pricing which will allow more flexibility on server options to run backup software.
Implement data deduplication and compression, such as with IBM ProtecTIER data deduplication solution.
Consider a tiered recovery approach, where less critical applications have less backup protection. Many keep 1-2 years of backups, but 90 percent of all recoveries are for backups from the most recent 27 days. Reduce backup retention to 90 days.
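The payoff from the shorter retention window is easy to estimate. This is a deliberately simplified model of my own (one full copy plus one incremental per retained day, with assumed data sizes), not the analyst's math:

```python
# Back-of-the-envelope backup storage under a retention policy.
# Assumes a hypothetical 100 TB of protected data and a 2 percent
# daily change rate; real environments vary widely.
protected_tb = 100
daily_change = 0.02

def backup_storage_tb(retention_days):
    """One full copy plus one incremental per retained day (simplified)."""
    return protected_tb + protected_tb * daily_change * retention_days

two_years = backup_storage_tb(730)
ninety_days = backup_storage_tb(90)
print(f"2-year retention: {two_years:.0f} TB")
print(f"90-day retention: {ninety_days:.0f} TB")
print(f"Savings: {100 * (1 - ninety_days / two_years):.0f} percent")
```

Even with these rough assumptions, dropping from two years to 90 days of retention shrinks the backup footprint by roughly 80 percent while still covering the window where 90 percent of recoveries actually occur.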
Consider adopting a "Unified Recovery Management" strategy that protects laptops and desktops, remote office and branch offices, mission critical applications, and provide for business continuity and disaster recovery.
Regularly test your recovery to validate your procedures and assumptions about your recoverability.
While the conference is divided into seven major tracks, it quickly becomes obvious that many of these IT datacenter issues overlap, and that approaches and decisions in one area can easily impact other areas.
Continuing my coverage of the [Data Center 2010 conference], Tuesday afternoon I presented "Choosing the Right Storage for your Server Virtualization". In 2008 and 2009, I attended this conference as a blogger only, but this time I was also a presenter.
The conference asked vendors to condense their presentations down to 20 minutes. I am sure this was inspired by the popular 18-minute lectures from the [TED conference], or perhaps the [Pecha Kucha] night gatherings in Japan where each presenter speaks while showing 20 slides for 20 seconds each. This forces the presenters to focus on their key points and not fill the time slot with unnecessary marketing fluff. It also allows more vendors a chance to pitch their point of view.