Continuing my post-week coverage of the [Data Center 2010 conference], Wednesday morning started with another keynote session, followed by some break-out sessions.
- Realities of IT Investment
Tighter budgets mean more business decisions. Future investments will come from cost savings. The analysts report that 77 percent of IT decisions are made by CFOs. Most organizations are spending less now than back in 2008 before the recession.
How we innovate through IT is changing. In bad times, risk trumps return, but only 21 percent of the audience have a formal "risk calculation" as part of their purchase plans.
Divestment matters as much as investment. Reductions in complexity bring the greatest long-term cost savings. Try to retire at least 20 percent of your applications next year. With the advent of Cloud Computing, companies might retire applications altogether and go entirely with public cloud offerings. Note that the years in this graph differ from the ones above, grouped in half-decade increments.
It is important to identify functional dependencies and link your IT risks to business outcomes. Focus on making costs visible, and re-think how you communicate IT performance measurements and their impact to business. Try to change the culture and mind-set so that projects are not referred to as "IT projects" focused on technology, but rather they are "business projects" focused on business results.
- Moving to the Cloud
Richard Whitehead from Novell presented challenges in moving to Cloud Computing. There are risks and challenges managing multiple OS environments. Users should have full access to all IT resources they need to do their jobs. Computing should be secure, compliant, and portable. Here is the shift he sees from physical servers to virtual and cloud deployments, years 2010 to 2015:
Richard considers a "workload" to be the combination of the operating system, middleware, and application. He then defines a "Business Service" as an appropriate combination of these workloads. For example, a business service that produces a particular report might involve a front-end application workload talking to a business-logic workload, which in turn talks to a back-end database workload.
To address this challenge, Novell introduces "Intelligent Workload Management", called WorkloadIQ. This manages the lifecycle to build, secure, deploy, manage and measure each workload. Their motto was to take the mix of physical, virtual and cloud workloads and "make it work as one". IBM is a business partner with Novell, and I am a big fan of Novell's open-source solutions including SUSE Linux.
- A Funny Thing Happened on the Way to the Cloud....
Bud Albers, CTO of Disney, shared their success in deploying their hybrid cloud infrastructure. Everyone recognizes the Disney brand for movies and theme parks, but may not be aware that they also own ABC News and ESPN television, travel cruises, virtual worlds, and mobile sites, and deploy applications like Fantasy Football and Fantasy Fishing.
Two years ago, each Line of Business (LOB) owned its own servers; they were continually out of space, and power and HVAC issues forced tactical build-outs of their data centers. But in 2008, the answer to every question was Cloud Computing; it slices and dices like something invented by [Ron Popeil], with no investment or IT staff required. However, continuing to ask the CFO for CAPEX to purchase assets that were only 1/7th utilized was not working out either. That's right, over 75 percent of their servers were running at less than 15 percent CPU utilization.
The compromise was named "D*Cloud". Internal IT infrastructure would be positioned for Cloud Computing, by adopting server virtualization, implementing REST/SOAP interfaces, and replicating the success across their various Content Distribution Networks (CDN). Disney is no stranger to Open Source software, using Linux and PHP. Their [Open Source] web page shows tools available from Disney Animation studios.
At the half-way point, they had half their applications running virtualized on just 4 percent of their servers. Today, they run over 20 VMs per host and have 65 percent of their apps virtualized. Their target is 80 percent of their apps virtualized by 2014.
Bud used the analogy that public clouds will be the "gas stations" of the IT industry. People will choose the cheapest gas among nearby gas stations. By focusing on "Application management" rather than "VM instance management", Disney is able to seamlessly move applications as needed from private to public cloud platforms.
Their results? Disney is now averaging 40 percent CPU utilization across all servers. Bud feels they have achieved better scalability, better quality of service, and increased speed, all while saving money. Disney is spending less on IT now than in 2008.
- UPMC Maximizes Storage Efficiency with IBM
Kevin Muha, UPMC Enterprise Architect & Technology Manager for Storage and Data Protection Services, was unable to present this in person, so Norm Protsman (IBM) presented Kevin's charts on the success at the University of Pittsburgh Medical Center [UPMC]. UPMC is Western Pennsylvania's largest employer, with roughly 50,000 employees across 20 hospitals, 400 doctors' offices and outpatient sites. They have frequently been rated one of the best hospitals in the US.
Their challenge was storage growth. Their storage environment had grown 328 percent over the past three years, to 1.6PB of disk and nearly 7 PB of physical tape. To address this, UPMC deployed four IBM TS7650G ProtecTIER gateways (2 clusters) and three XIV storage systems for their existing IBM Tivoli Storage Manager (TSM) environment. Since they were already using TSM over a Fibre Channel SAN, the implementation took only three days.
UPMC was backing up nearly 60TB per day, in a 15-hour backup window. Their primary data is roughly 60 percent Oracle, with the rest being a mix of Microsoft Exchange, SQL Server, and unstructured data such as files and images.
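As a back-of-the-envelope check on those figures, here is a quick sketch of the aggregate throughput the backup environment would need to sustain. The numbers come straight from the paragraph above; decimal (base-10) terabytes are an assumption on my part.

```python
# Aggregate throughput needed to back up ~60 TB within a 15-hour
# backup window (decimal TB assumed).
daily_backup_tb = 60
window_hours = 15

tb_per_hour = daily_backup_tb / window_hours                  # 4.0 TB/hour
gb_per_second = daily_backup_tb * 1000 / (window_hours * 3600)

print(f"{tb_per_hour:.1f} TB/hour, ~{gb_per_second:.2f} GB/s sustained")
```

That works out to roughly 1.1 GB/s sustained across the environment, which helps explain why a three-day Fibre Channel SAN implementation mattered to them.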
Their results? TSM reclamation is 30 percent faster. Hardware footprint reduced from 9 tiles to 5. Over 50 percent reduction in recovery time for Oracle DB, and 20 percent reduction in recovery of SQL Server, Microsoft Exchange, and Epic Cache. They average 24:1 deduplication overall, which can be broken down by data category as follows:
- 29:1 Cerner Oracle
- 18:1 EPIC Cache
- 10:1 Microsoft SQL Server
- 8:1 Unstructured files
- 6:1 Microsoft Exchange
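To make those ratios concrete, here is a small illustration of how much physical disk each category would consume after deduplication. Only the ratios come from the presentation; the 100 TB logical figure is a made-up number for the example.

```python
# Deduplication ratios reported by UPMC, by data category.
dedup_ratios = {
    "Cerner Oracle": 29,
    "EPIC Cache": 18,
    "Microsoft SQL Server": 10,
    "Unstructured files": 8,
    "Microsoft Exchange": 6,
}

def physical_tb(logical_tb, ratio):
    """Physical capacity consumed after deduplication at a given ratio."""
    return logical_tb / ratio

# Hypothetical 100 TB of logical backup data in each category:
for category, ratio in dedup_ratios.items():
    print(f"{category}: 100 TB logical -> {physical_tb(100, ratio):.1f} TB physical")
```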
UPMC still has lots of LTO-4 tapes onsite and offsite from before the change-over, so the next phase planned is to implement "IP-based remote replication" between ProtecTIER gateways to a third data center at extended distance. The plan is to only replicate the backups of production data, and not replicate the backups of test/dev data.
The presentation and supporting case study details are available on the [IBM Literature Fulfillment] website.
The show floor closed after Wednesday's lunch, so many people made their last attempts to meet the folks at the booth.
technorati tags: IBM, CFO, Novell, WorkloadIQ, Disney, UPMC, ProtecTIER, TS7650G, deduplication, XIV, Kevin Muha, Norm Protsman, Bud Albers
It's Tuesday, and you know what that means... IBM Announcements!
- IBM System Storage ProtecTIER
Today, IBM refreshed its IBM System Storage ProtecTIER data deduplication family with new hardware and software. On the hardware side, the [TS7650G gateway] now has 32 cores and 64GB RAM. The [TS7650 Appliance] now has 24 cores and 64GB of RAM, and the [TS7610 Appliance Express] has 4 cores and up to 16GB of RAM.
On the software side, all of these now support Symantec's proprietary "OpenStorage" OST API. This applies across the board: the [Enterprise Edition], the [Appliance Edition], and the [Entry Edition]. For those using Symantec NetBackup as their backup software, the OST API can provide advantages over the standard VTL interface.
- IBM Systems Director Storage Control
The second announcement has an interesting twist. I could file this in my "I Told You So" folder. Officially, it's called the [Cassandra Complex], where you accurately predict how something will turn out, but are unable to convince anyone else of what the future holds.
About ten years ago, I was asked to be lead architect of a new product to be called IBM TotalStorage Productivity Center, which was later renamed to IBM Tivoli Storage Productivity Center. This would combine three projects:
- Tivoli Storage Resource Manager (TSRM)
- Tivoli SAN Manager (TSANM)
- Multiple Device Manager (MDM)
The first two were based on Tivoli's internal GUI platform, and the MDM was a plug-in for IBM Systems Director. I argued that administrators would want everything on a single pane of glass, and that we should bring all the components under a common GUI platform, such as IBM Systems Director. Unfortunately, management did not agree with me on that, and preferred instead to leave each interface alone to minimize development effort. The only "unification" was to give them all similar-sounding names, four components packaged as a single product:
- Productivity Center for Data (formerly TSRM)
- Productivity Center for Fabric (formerly TSANM)
- Productivity Center for Disk (formerly MDM)
- Productivity Center for Replication (formerly MDM)
While this management decision certainly allowed version 1 to hit the market sooner, this was not a good "first impression" of the product for many of our clients.
In 2002, IBM acquired Trellisoft, Inc., which replaced the internally-developed TSRM with a much better interface, but again, this was a different GUI than the other components. For Version 2, a "launcher" was created that would launch the disparate interfaces of each component. At this point, we had development teams scattered across five locations, with the first two components being developed by the Tivoli software team, and the other two components being developed by the System Storage hardware team.
Oftentimes, when a technical lead architect and management do not agree, things do not end well. The lead architect has to leave the product, and management is forced to take alternative actions to keep the product going. In my case, management considered the idea of a common GUI as an expensive "nice-to-have" luxury we could not afford, but I considered this a "must-have". I moved on to a new job within IBM, and management, unable to continue without my leadership, gave up and handed the entire project over to the Tivoli Software team.
The Tivoli Software team took a whiff at the pile of code and agreed that it stunk. Dusting off my original design documents, they pretty much discarded most of the code and re-wrote much from scratch, with a common database, common app server, and common GUI platform. Unfortunately, Productivity Center for Replication was held up waiting for some hardware prerequisites, but the other three components would be packaged together as "Productivity Center v3 - Standard Edition" and was a big improvement over the prior versions.
In Version 4, TotalStorage Productivity Center was renamed to Tivoli Storage Productivity Center, and the Replication component was brought into the mix. A scaled-down version packaged as Productivity Center "Basic Edition" was made available as a hardware appliance named "System Storage Productivity Center" or SSPC. The idea was to provide a pre-installed 1U-high hardware console that had the basic functions of Productivity Center, with the option to upgrade to the full Tivoli Storage Productivity Center with just license keys.
So, now, years later, management recognizes that a common GUI platform is more than just a "nice-to-have". IBM now supports three very specific use cases:
- 1. Administration for a single product
For small clients who might have only a single IBM product, IBM is now focused on making the GUI browser-based, specifically to work with the Mozilla Firefox browser, but any similar browser should work as well. The new IBM Storwize V7000 GUI is a good example of this. In this case, the browser serves as the common GUI platform.
- 2. Administration for both servers and storage devices
For mid-sized companies that have administrators managing both servers and storage, IBM announced this month the new [IBM Systems Director Storage Control v4.2.1] plug-in, which provides Tivoli Storage Productivity Center "Basic Edition" support. This allows admins already familiar with IBM Systems Director for managing their servers to also manage basic storage functions. This is the "I Told You So" moment: connecting server and storage administration under the IBM Systems Director management platform makes a lot of sense now, just as it did when I came up with the idea 10 years ago! Hmmmm?
- 3. Administration for just the storage environment
For larger companies big enough to have separate server and storage admin teams, IBM continues to offer the full Tivoli Storage Productivity Center product for the storage admins. The most recent release enhanced the support for IBM DS8000, SVC, Storwize V7000 and XIV storage systems.
Today, analysts consider IBM's [Tivoli Storage Productivity Center] one of the leading products in its category. I am glad my original vision has finally come to life, even though it took a while longer than I expected.
To learn more about IBM storage hardware, software or services, see the updated [IBM System Storage] landing page.
technorati tags: IBM, ProtecTIER, TS7650G, TS7650, TS7610, Symantec, NetBackup, OpenStorage, API, OST, TPC, TSRM, Trellisoft, TSANM, SSPC, Systems Director, Storage Control, GUI
Continuing my discussion of this week's announcements of IBM storage products, I will cover the announcements that double storage capacity per footprint.
- Linear Tape Open - Generation 5
IBM announced [LTO-5 drives], the TS2250 half-height and the TS2350 full-height drives, as well as support for LTO-5 drives in its various tape libraries: TS3100, TS3200, and TS3500. The native 1.5TB capacity of the LTO-5 cartridge is nearly double the 800GB capacity of the LTO-4 predecessor. With 2:1 compression, that's 3TB of data per cartridge! Performance-wise, the data transfer rate is 140 MB/sec, about 17 percent improvement over the 120MB/sec of the LTO-4 technology. The TS2250, TS2350, TS3100 and TS3200 now all offer dual-SAS ports for higher availability.
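The arithmetic behind those announcement figures is easy to verify. This sketch computes the 2:1-compressed capacity, the speed-up over LTO-4, and, as an idealized streaming-only estimate, how long it would take to fill a cartridge at the native transfer rate (real-world times vary with compression, repositioning, and host throughput).

```python
# LTO-5 vs LTO-4 announcement figures, from the paragraph above.
native_capacity_gb = 1500   # LTO-5 native capacity (1.5 TB)
compression_ratio = 2.0     # marketing's assumed 2:1 compression
rate_lto5_mb_s = 140
rate_lto4_mb_s = 120

compressed_capacity_tb = native_capacity_gb * compression_ratio / 1000
speedup_pct = (rate_lto5_mb_s - rate_lto4_mb_s) / rate_lto4_mb_s * 100
fill_hours = native_capacity_gb * 1000 / rate_lto5_mb_s / 3600

print(f"{compressed_capacity_tb:.0f} TB compressed, "
      f"{speedup_pct:.0f}% faster, ~{fill_hours:.1f} h to fill a cartridge")
```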
LTO-5 carries forward many of the advancements of past generations. For example, LTO-5 continues the G-2/G-1 "backward compatibility" architecture, which means that the LTO-5 drive can read LTO-3 and LTO-4 cartridges, and can write LTO-4 cartridges. Like the LTO-3 and LTO-4, the same LTO-5 drive can read and write WORM or regular rewriteable cartridges. Like the LTO-4, the LTO-5 offers drive-level data-at-rest encryption. These use a symmetric 256-bit AES key, managed by IBM Tivoli Key Lifecycle Manager (TKLM).
One thing that is new in LTO-5 is the Long Term File System [LTFS] available on the TS2250 and TS2350, which allows you to treat the tape as a hierarchical file system, with files and folders, that you can drag and drop like any other file system.
- XIV storage system
IBM [doubles the capacity of the XIV storage system] by supporting 2TB SATA drives. A full 15-module frame can hold up to 161TB of usable capacity. The smallest 6-module system with 2TB drives can hold up to 55TB of usable capacity. At this time, all of the drives in an XIV must be the same type, so we do not yet allow intermix of 1TB and 2TB drives in the same frame. The 2TB drives are more energy efficient, with a full 15-module frame consuming on average 6.7 kVA, compared to 7.8 kVA for the 1TB drives. The performance is roughly the same, so if, for example, your application workload got 3700 IOPS per module with 1TB drives, it will get about the same 3700 IOPS per module with 2TB drives.
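A quick calculation puts those power figures in perspective, using only the numbers quoted above for a full 15-module frame.

```python
# XIV full-frame power draw: 2TB drives vs 1TB drives.
kva_2tb = 6.7         # full 15-module frame, 2TB drives
kva_1tb = 7.8         # full 15-module frame, 1TB drives
usable_tb_2tb = 161   # usable capacity, full frame with 2TB drives

savings_pct = (kva_1tb - kva_2tb) / kva_1tb * 100
va_per_usable_tb = kva_2tb / usable_tb_2tb * 1000

print(f"~{savings_pct:.0f}% less power per frame, "
      f"~{va_per_usable_tb:.0f} VA per usable TB with 2TB drives")
```

So the 2TB frame draws about 14 percent less power while holding roughly double the usable capacity, which compounds into a much better power-per-TB figure.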
- TS7650 ProtecTIER Data Deduplication
IBM now supports [many-to-one virtual tape volume mirroring] on the ProtecTIER. In other words, you can have two or more locations sending data to a single ProtecTIER disaster recovery site.
- N series disk system
The EXN1000 and EXN3000 can now double in capacity with 2TB SATA drives. These can be attached to the N3000 entry-level models, such as the N3400.
- DS3000 disk system
The DS3200, DS3300 and DS3400, as well as their related expansion drawers, now support 2TB SATA drives. This means that a single control unit with three expansion drawers can hold up to 96TB of raw capacity (48 drives).
- DS8700 disk system
The DS8700 also now supports 2TB SATA drives, for a maximum raw capacity over 2PB, as well as new 600GB Fibre Channel drives. Now that IBM offers [Easy Tier] functionality, pairing Solid State Drives with slower, energy-efficient SATA disk makes a lot of financial sense.
That's a lot of announcements! As always, feel free to dig into each of the links to learn more about each product.
technorati tags: IBM, LTO-5, TS2250, TS2350, TS3100, TS3200, TS3500, AES, TKLM, LTFS, XIV, 2TB, TS7650, TS7650G, EXN1000, EXN3000, N3400, DS3200, DS3300, DS3400, DS8700, SATA