Continuing my coverage of the Data Center 2010 conference, Monday afternoon included presentations from IBM executives.
- Blueprint for a Smart Data Center
Steve Sams, IBM Vice President, Global Site and Facilities Services, is well known at this conference. In charge of designing and building data center facilities for IBM and its clients, he has extensive experience with a wide variety of data center configurations.
The presentation was an update of last year's [Data Center Cost Saving Actions Your CFO Will Love]. Seventy cents of every IT dollar is spent just keeping existing systems running, leaving only 30 cents to fund growth and business transformation. Over 70 percent of data centers are more than seven years old, and may not have been designed to handle the density of today's IT equipment.
Many companies that want to virtualize are stalled. IBM's Server Virtualization Analytics services can help cut this transformation time in half, with an ROI in only 6 to 18 months for complex Wintel environments. This is just one of the 17 end-to-end data center analytics tools IBM offers. Results have shown 220 percent more VM instances per administrator FTE than traditional deployments. IBM drinks its own champagne, having saved over $4 billion (USD) in its own data center consolidation and virtualization projects.
- Want to Cut the Cost of Storage in Half? Here’s How
The speaker of this session started out with a startling prediction: on a petabyte basis, the amount of storage purchased in the five years from 2010 to 2014 will be 25 times what was purchased in 2009. Most attempts to stem this capacity growth have failed, so the focus for cutting storage costs needs to be elsewhere.
The first concern is poor utilization. Utilization on DAS averages 10 percent, and on SANs 40-50 percent; thin provisioning can raise this to 60-75 percent. Thin provisioning was first introduced for mainframe storage in the 1990s by StorageTek, whose product IBM resold as the IBM RAMAC Virtual Array (RVA), but many credit 3PAR with porting the concept to distributed operating systems in 2002. Other options include data deduplication and compression to reduce the cost of storing data on disk.
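To make the thin-provisioning math concrete, here is a little Python sketch of my own (the volume sizes are made up, and this is not any vendor's tool): thick provisioning reserves the full allocation of every volume up front, while thin provisioning only consumes the space actually written.

```python
# Illustrative numbers only -- not from the presentation.
# Each volume: (allocated_gb, actually_written_gb).
volumes = [(500, 60), (1000, 150), (250, 90), (2000, 400)]

allocated = sum(a for a, _ in volumes)  # thick: full allocation reserved up front
written = sum(w for _, w in volumes)    # thin: only written space is consumed

print(f"Thick-provisioned utilization: {written / allocated:.0%}")
# Real thin-provisioned arrays reserve headroom and allocate in coarse
# chunks, which is why they land in the 60-75 percent range, not 100.
print(f"Physical space freed by thin provisioning: {allocated - written:,} GB")
```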
The second approach is storage tiering. Here the speaker felt SATA was 3x cheaper ($/GB) but could also be 3x slower. Moving data from faster 10K and 15K RPM FC/SAS drives to slower 7200 RPM drives can offer some cost reduction.
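The mechanics behind tiering are simple enough to sketch. The Python fragment below is my own illustration with made-up extents and thresholds, not how any particular product implements it: extents that have gone cold get demoted to the cheaper SATA tier, and active extents stranded on SATA get promoted back.

```python
from datetime import datetime, timedelta

# Hypothetical extent records: (extent_id, last_access, current_tier).
extents = [
    ("ext-001", datetime(2010, 11, 1), "FC15K"),
    ("ext-002", datetime(2010, 6, 15), "FC15K"),
    ("ext-003", datetime(2010, 10, 30), "SATA"),
]

DEMOTE_AFTER = timedelta(days=90)  # made-up policy threshold
now = datetime(2010, 11, 8)

for ext_id, last_access, tier in extents:
    cold = now - last_access > DEMOTE_AFTER
    if cold and tier != "SATA":
        print(f"demote {ext_id} to SATA (idle {(now - last_access).days} days)")
    elif not cold and tier == "SATA":
        print(f"promote {ext_id} to FC15K (recently active)")
```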
Implementing "quotas" in email, file systems or other applications is one of the worst financial decisions an IT department can make, as it merely shifts storage management from experts (the IT staff) to non-experts (end users). The speaker recommended archiving instead. Keeping backup tapes long-term is not archiving; backups should be no more than eight weeks old.
Interactive polls of the audience gave some interesting insight:
- When asked their expected storage capacity compound annual growth rate (CAGR) for the next few years, 26 percent estimated 35-50 percent CAGR, 30 percent estimated 50-75 percent CAGR, and 15 percent estimated greater than 75 percent CAGR.
- For thin provisioning, 43 percent of the audience are already using it, and 33 percent plan to next year.
- Similarly, 41 percent of the audience are using data deduplication for their primary data, and 30 percent plan to next year.
- For automated tiering that moves portions of data automatically between fast and slow tiers of storage to optimize performance, like IBM's Easy Tier, 20 percent are already using it, and 44 percent plan to next year.
- 41 percent already have some archiving for file systems, and 17 percent plan to next year.
- Only 6 percent have an all-disk backup/replication environment, but 20 percent plan to adopt this next year.
The downside of trying to squeeze out costs with these approaches and technologies is that they can negatively impact performance. The speaker suggested a balanced approach: add lower-cost storage alongside existing fast storage to meet both capacity and performance requirements.
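Here is a back-of-the-envelope Python sketch of that balanced approach. All of the prices and performance figures are assumptions, roughly following the speaker's 3x-cheaper, 3x-slower rule of thumb for SATA: put just enough data on the fast tier to hit the performance target, and the rest on SATA.

```python
# Hypothetical figures: fast tier vs SATA, ~3x the cost and ~3x the performance.
FAST_COST, SLOW_COST = 6.0, 2.0  # $/GB (assumed)
FAST_PERF, SLOW_PERF = 0.9, 0.3  # IOPS per GB (assumed)

capacity_needed_gb = 100_000
iops_needed = 45_000

# Smallest fast-tier share that still meets the IOPS target.
fast_gb = max(0.0, (iops_needed - capacity_needed_gb * SLOW_PERF)
                   / (FAST_PERF - SLOW_PERF))
slow_gb = capacity_needed_gb - fast_gb

blended = (fast_gb * FAST_COST + slow_gb * SLOW_COST) / capacity_needed_gb
print(f"fast tier: {fast_gb:,.0f} GB, SATA: {slow_gb:,.0f} GB")
print(f"blended cost: ${blended:.2f}/GB vs ${FAST_COST:.2f}/GB all-fast")
```

With these made-up numbers, a 25/75 fast/SATA split meets both targets at half the all-fast cost per GB, which is the kind of trade-off the speaker was describing.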
- Smarter Infrastructures Deliver Better Economics
Elaine Lennox, IBM Vice President and Business Line Executive for System Software, presented the "3 D's" of a Smarter Infrastructure: design, data and delivery.
- Design: new technologies and approaches are forcing people to reconsider the design of their applications, their infrastructure and their facilities.
- Data: on average, companies store 17 copies of the same piece of production data. Data needs to be managed better in the future.
- Delivery: new types of cloud computing are changing the way IT services can be delivered, and how they are consumed by end users.
- Roadmap to Enterprise Cloud Computing
This was a combined vendor/customer presentation. Rex Wang from Oracle presented an overview of Oracle's service and product offerings, and then Jonathan Levine, COO of LinkShare, presented his experiences deploying Oracle Exadata.
Rex presented Oracle's "cloud maturity model," which takes customers through the following stages:
- Silo: each application on its own stack of software, server and storage.
- Grid: virtualization for shared infrastructure and platforms (internal IaaS and PaaS).
- Private cloud: self-service, policy-based management, metered chargeback and capacity planning.
- Hybrid cloud: workloads portable between private and public clouds, offering federation, cloud bursting, and interoperability.
Rex felt the standard "buy vs. rent" argument from the business world applies to IT as well, and that a long-term TCO analysis can reveal break-even points that favor one over the other. He cited internal research showing that 28 percent of Oracle customers have an internal or private cloud and 14 percent use public cloud; 25 percent use application PaaS, 21 percent database PaaS, 5 percent identity management PaaS, 10 percent compute IaaS, 18 percent storage IaaS, and 15 percent test/dev IaaS.
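Rex did not walk through the arithmetic, but the break-even idea is easy to sketch. Every figure below is hypothetical: buying is a large up-front cost plus a small monthly cost, renting is a flat monthly fee, and the break-even is wherever the cumulative curves cross.

```python
# Hypothetical buy-vs-rent TCO sketch; none of these figures are from the talk.
BUY_UPFRONT = 120_000   # capex: hardware plus setup (assumed)
BUY_MONTHLY = 2_000     # power, space, admin (assumed)
RENT_MONTHLY = 6_500    # public-cloud fee for equivalent capacity (assumed)

def buy_tco(months):  return BUY_UPFRONT + BUY_MONTHLY * months
def rent_tco(months): return RENT_MONTHLY * months

# First month where owning becomes cheaper than renting (within 10 years).
breakeven = next(m for m in range(1, 121) if buy_tco(m) <= rent_tco(m))
print(f"break-even at month {breakeven}: "
      f"buy ${buy_tco(breakeven):,} vs rent ${rent_tco(breakeven):,}")
```

With these assumed numbers the curves cross around month 27; shorter-lived or spikier workloads would favor renting, steady long-lived ones buying.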
Rex felt that amid all the hype around taking a single host and dividing it into multiple VMs, people have forgotten that the opposite approach, combining multiple instances into clusters, is also important. He also felt you have to look at the entire "application lifecycle," which runs through these steps:
- IT sets up the equipment as an internal PaaS or IaaS
- Developers write the application
- End users are trained and use the application
- Application owners manage and monitor the application
- IT meters the usage and charges back each application owner (sketched below)
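That last metering and chargeback step is the one many shops never automate. A minimal sketch, assuming a made-up rate card and metering records (this is not Oracle's or IBM's implementation), shows the basic shape: accumulate metered usage per application owner and price it against the rate card.

```python
from collections import defaultdict

# Hypothetical rate card: $ per metered unit.
RATES = {"vcpu_hours": 0.05, "gb_ram_hours": 0.02, "gb_storage_days": 0.003}

# Hypothetical metering records: (app_owner, resource, units consumed).
usage = [
    ("payroll",   "vcpu_hours",      1_440),
    ("payroll",   "gb_ram_hours",    5_760),
    ("web-store", "vcpu_hours",      8_640),
    ("web-store", "gb_storage_days", 15_000),
]

# Accumulate charges per application owner.
invoices = defaultdict(float)
for owner, resource, units in usage:
    invoices[owner] += units * RATES[resource]

for owner, amount in sorted(invoices.items()):
    print(f"{owner}: ${amount:,.2f} this period")
```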
Oracle's Exadata and Exalogic compete directly against IBM's Smart Analytics System, IBM CloudBurst, and IBM Smart Business Storage Cloud.
Next up was Jonathan Levine, COO of [LinkShare], a subsidiary of Rakuten in Japan. LinkShare is an [Affiliate Marketing] company: instead of pay-per-view or pay-per-click web advertising, it gets paid only when the end user actually buys something after clicking on a web advertisement.
The business runs on an 8 TB data warehouse and a 1 TB OLTP database, ingesting 50 GB daily and handling 400 million transactions per day at 8.5 GB/sec of throughput.
They discovered that Oracle Exadata did not work right out of the box. In fact, it took them about a year to get it working, roughly as long as their last Oracle 10 to Oracle 11 conversion.
Part of their business allows advertisers and web content publishers to generate reports on activity. Jonathan indicated that if a report takes longer than 5 seconds to come back, it might as well take an hour. He called this the "Excel rule": results need to come back as fast as a pivot table recalculates in Microsoft Excel on a local PC.
With the new Exadata, they met this requirement. Over 84 percent of their transactions complete in under 2 seconds, 9 percent take 2-4 seconds, and another 4 percent fall in the 4-8 second range. They hope that as the winter holiday season approaches, they can handle 2-3x more traffic without negatively impacting these response times.
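Checking a response-time rule like Jonathan's against live traffic amounts to simple bucketing. Here is a tiny sketch of my own with made-up sample latencies, using the same buckets LinkShare's numbers were quoted in:

```python
# Hypothetical check of the "Excel rule": bucket report response times
# into the ranges quoted above (<2s, 2-4s, 4-8s, >=8s).
latencies_sec = [0.4, 1.1, 0.8, 2.6, 1.5, 3.9, 5.2, 0.9, 1.7, 7.8]  # sample data

buckets = {"<2s": 0, "2-4s": 0, "4-8s": 0, ">=8s": 0}
for t in latencies_sec:
    if t < 2:   buckets["<2s"] += 1
    elif t < 4: buckets["2-4s"] += 1
    elif t < 8: buckets["4-8s"] += 1
    else:       buckets[">=8s"] += 1

total = len(latencies_sec)
for name, count in buckets.items():
    print(f"{name}: {count / total:.0%}")
```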
Attendees who want a copy of the IBM presentations can go to the IBM [Literature Fulfillment] website.