While I am not trying to start a blogfight with fellow blogger Scott Waterhouse from EMC, his latest post about TSM is just distasteful.
Here's an excerpt from Scott's last post:
"So does TSM scale? Sure! Just add more servers. But this is not an economy of scale. Nothing gets less expensive as the capacity grows. You get a more or less linear growth of costs that is directly correlated to the growth of primary storage capacity. (Technically, it costs will jump at regular and predictable intervals, by regular and predictable and equal amounts, as you add TSM servers to the infrastructure--but on average it is a direct linear growth. Assuming you are right sized right now, if you were to double your primary storage capacity, you would double the size of the TSM infrastructure, and double your associated costs.)"
I talked about inaccurate vendor FUD in my post [The murals in restaurants], and recently, I saw StorageBod's piece, [FUDdy Waters]. So what would "economies of scale" look like? Using Scott's own words:
- Without Economies of Scale
"If it costs you $5 to backup a given amount of data, it probably costs you $50 to back up 10 times that amount of data, and $500 to back up 100 times that amount of data."
- With Economies of Scale
"If anybody can figure out how to get costs down to $40 for 10 times the amount of data, and $300 for 100 times the amount of data, they will have an irrefutable advantage over anybody that has not been able to leverage economies of scale."
So, let's do some simple examples. I'll focus on a backup solution just for employee workstations, where each employee has 100GB of personal data to back up on their laptop or PC. We'll look at a one-person company, a ten-person company, and a hundred-person company.
- Case 1: The one-person company
- The sole owner needs a backup solution. Here are all the steps she might perform:
- Spend hours evaluating the different backup products available, making sure her operating system, file system and applications are supported
- Spend hours shopping for external media, which could be an external USB disk drive, optical DVD drive, or tape drive, and confirm it is supported by the selected backup software.
- Purchase the backup software, external drive, and if optical or tape, blank media cartridges.
- Spend time learning the product, purchasing "Backup for Dummies" or a similar book, and/or taking a training class.
- Install and configure the software
- Operate the software, or set it up to run automatically, and take the media offsite at the end of each day and bring it back each morning
- Case 2: The ten-person company
- I guess if each of the ten employees went off and performed all of the same steps as above, there would be no economies of scale.
Fortunately, co-workers are amazingly efficient in avoiding unnecessary work.
- Rather than have all ten people evaluate backup solutions, have one person do it. If everyone runs the same or similar operating system, file systems and applications, this can be done in about the same time as in the one-person case.
- Ditto on the storage media. Why should 10 people go off and evaluate their own storage media? One person can do it for all ten in about the same time it takes to do it for one.
- Purchasing the software and hardware. Ok, here is where some costs may be linear, depending on your choices. Some software vendors give bulk discounts, so purchasing 10 seats of the same software could be less than 10 times the cost of one license. As for storage hardware, it might be possible to share drives and even media. Perhaps one or two storage systems can be shared by the entire team.
- For a lot of backup software, most of the work is in the initial setup; afterwards it runs automatically. That is the case for TSM: you create a "dsm.opt" file listing all of the include/exclude rules and other policies. Once the first person sets this up, they can share it with their co-workers (a sketch of such a file appears at the end of this list).
- If storage hardware is consolidated so that you have fewer drives than people, you can also have fewer people responsible for operations. For example, let's have the first five employees share one drive managed by Joe, and the second five share a second drive managed by Sally. Only two people need to spend time taking media offsite, bringing it back, and so on.
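To make the shared-configuration point concrete, here is a minimal sketch of what a shared client options file might contain. The node name, server address and include/exclude patterns are illustrative values, not a recommended configuration, and on UNIX clients some of these options would live in dsm.sys instead.

```
* dsm.opt -- shared TSM backup-archive client options (illustrative values)
NODENAME          WORKSTATION01
COMMMETHOD        TCPIP
TCPSERVERADDRESS  tsmserver.example.com
TCPPORT           1500
PASSWORDACCESS    GENERATE
SCHEDMODE         POLLING
DOMAIN            C:
EXCLUDE.DIR       "C:\Temp"
EXCLUDE           "*:\...\*.tmp"
INCLUDE           "C:\Users\...\*"
```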
- Case 3: The hundred-person company
- Again, it is possible that a hundred-person company consists of 10 departments of 10 people each, and they all follow the above approach independently, resulting in no economies of scale. But again, that is not likely.
- Here one or a few people can invest time to evaluate backup solutions. Certainly far less than 100 times the effort for a one-person company.
- Same with storage media. With 100 employees, you can now invest in a tape library with robotic automation.
- Purchase of software and hardware. Again, discounts will probably apply for large deployments, and purchasing one tape library for all one hundred people involves far less cost and effort than 10 departments each making independent purchases.
- With a hundred employees, you may have some differences in operating system, file systems and applications. Still, this might mean two to five versions of dsm.opt, and not 10 or 100 independent configurations.
- Operations is where the big savings happen. TSM has "progressive incremental backup", so it only backs up changed data. Other backup schemes involve taking periodic full backups, which tie up the network and consume a lot of back-end resources (see the sketch after this list). In head-to-head comparisons between IBM Tivoli Storage Manager and Symantec's NetBackup, IBM TSM was shown to use significantly less LAN bandwidth, less disk storage capacity, and fewer tape cartridges than NetBackup.
- The savings are even greater with data deduplication. Whether using hardware, like the IBM TS7650 ProtecTIER data deduplication solution, or software, like the data deduplication capability built into IBM TSM v6.1, you can take advantage of the fact that 100 employees might have a lot of common data between them.
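To illustrate the progressive-incremental point above, here is a back-of-the-envelope sketch. It is my own model with illustrative assumptions (a 2 percent daily change rate), not a published benchmark, comparing the data moved by a traditional weekly-full-plus-daily-incremental scheme against an incremental-forever scheme over a month for the hundred-person scenario.

```python
# Back-of-the-envelope model (illustrative assumptions, not a benchmark):
# data moved over four weeks by "weekly full + daily incrementals" versus
# a progressive-incremental ("incremental forever") scheme such as TSM's.
EMPLOYEES = 100
DATA_GB_PER_EMPLOYEE = 100   # from the scenario above
CHANGE_RATE = 0.02           # assume 2% of each employee's data changes per day
WEEKS = 4

total_gb = EMPLOYEES * DATA_GB_PER_EMPLOYEE
changed_gb_per_day = total_gb * CHANGE_RATE

full_plus_incr = 0.0
progressive = total_gb                         # one initial full, then changed data only
for week in range(WEEKS):
    full_plus_incr += total_gb                 # a full backup every week
    full_plus_incr += 6 * changed_gb_per_day   # six daily incrementals
    progressive += 7 * changed_gb_per_day      # only changed data, every day

print(f"weekly full + daily incrementals: {full_plus_incr:,.0f} GB moved")
print(f"progressive incremental:          {progressive:,.0f} GB moved")
```

With these assumed numbers, the weekly-full scheme moves roughly 44,800 GB in a month while the progressive-incremental scheme moves about 15,600 GB, which is the kind of bandwidth and capacity difference described above.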
So, I have demonstrated how savings through economies of scale are achieved using IBM Tivoli Storage Manager. Adding one more person in each case is cheaper than the first person. The situation is not linear as Scott suggests. But what about larger deployments? The IBM TS3500 Tape Library can hold one PB of data in only 10 square feet of data center floor space. The IBM TS7650G gateway can manage up to 1 PB of disk, holding as much as 25 PB of backup copies. IT analysts Tony Palmer, Brian Garrett and Lauren Whitehouse from Enterprise Strategy Group tried IBM TSM v6.1 out for themselves and wrote up a ["Lab Validation"] report. Here is an excerpt:
"Backup/recovery software that embeds data reduction technology can address all three of these factors handily. IBM TSM 6.1 now has native deduplication capabilities built into its Extended Edition (EE) as a no-cost option. After data is written to the primary disk pool, a deduplication operation can be scheduled to eliminate redundancy at the sub-file level. Data deduplication, as its name implies, identifies and eliminates redundant data.
TSM 6.1 also includes features that optimize TSM scalability and manageability to meet increasingly demanding service levels resulting from relentless data growth. The move from a proprietary back-end database to IBM DB2 improves scalability, availability, and performance without adding complexity; the DB2 database is automatically maintained and managed by TSM. IBM upgraded the monitoring and reporting capabilities to near real-time and completely redesigned the dashboard that provides visibility into the system. TSM and TSM EE include these enhanced monitoring and reporting capabilities at no cost."
The majority of Fortune 1000 customers use IBM Tivoli Storage Manager, and it is the backup software that IBM uses itself in its own huge data centers, including the cloud computing facilities. In combination with IBM Tivoli FastBack for remote office/branch office (ROBO) situations, and complemented with point-in-time and disk mirroring hardware capabilities such as IBM FlashCopy, Metro Mirror, and Global Mirror, IBM Tivoli Storage Manager can be an effective, scalable part of a complete Unified Recovery Management solution.
technorati tags: IBM, Tivoli, Storage Manager, TSM, TS7650, TS7650G, TS3500, Scalability, deduplication, economies+of+scale, Scott+Waterhouse, EMC, Symantec, NetBackup, backup, software, solutions, disk, tape, optical, drive
Now that the US Recession has been declared over, companies are looking to invest in IT again. To help you plan your upcoming investments, here are some upcoming events in April.
- SNW Spring 2010, April 12-15
IBM is a Platinum Plus sponsor at this [Storage Networking World event], to be held April 12-15 at the Rosen Shingle Creek Resort in Orlando, Florida. If you are planning to go, here's what to look for:
- IBM booth at the Solution Center featuring the DS8700 and XIV disk systems, SONAS and the Smart Business Storage Cloud (SBSC), and various Tivoli storage software
- IBM kiosk at the Platinum Galleria focusing on storage solutions for SAP and Microsoft environments
- IBM Senior Engineer Mark Fleming presenting "Understanding High Availability in the SAN"
- IBM sponsored "Expo Lunch" on Tuesday, April 13, featuring Neville Yates, CTO of IBM ProtecTIER, presenting "Data Deduplication -- It's not Magic - It's Math!"
- IBM CTO Vincent Hsu presenting "Intelligent Storage: High Performance and Hot Spot Elimination"
- IBM Senior Technical Staff Member (STSM) Gordon Arnold presenting "Cloud Storage Security"
- One-on-One meetings with IBM executives
I have personally worked with Mark, Neville, Vincent and Gordon, so I am sure they will do a great job in their presentations. Sadly, I won't be there myself, but fellow blogger [Rich Swain from IBM] will be at the event to blog about all the activities there.
- Systems for a Smarter Planet webinar, April 15
Can't travel to Orlando? On April 15, IBM will offer a [Systems for a Smarter Planet webinar] to highlight IBM's vision, strategy and latest offerings. Speakers include:
- Jim Stallings - General Manager, Global Markets, IBM Systems and Technology Group
- Scott Handy - Vice President, WW Marketing, Power Systems, IBM Systems and Technology Group
- Dan Galvan - Vice President, Marketing & Strategy, Storage and Networking Systems, IBM Systems and Technology Group
- Inna Kuznetsova - Vice President, Marketing and Sales Enablement, Systems Software, IBM Systems and Technology Group
- Jeanine Cotter - Vice President, Systems Services, IBM Global Technology Services
The webinar will include client testimonials from various companies as well.
- Dynamic Infrastructure Executive Summit, April 27-29
I will be there at this 2-and-a-half-day [Executive Summit] in Scottsdale, Arizona, to talk to company executives. Discover how IBM can help you manage your ever-increasing amount of information with an end-to-end, innovative approach to building a dynamic infrastructure. You will learn about all of our innovative solutions and find out how you can effectively transform your enterprise for a smarter planet.
It's looking to be a busy month!
technorati tags: IBM, SNW, DS8700, XIV, SONAS, SBSC, Tivoli, CTO, ProtecTIER, Deduplication, SAP, Microsoft, webinar, summit, Orlando, Florida, Scottsdale, Arizona, #ibmsystems, #dyninfra
My colleagues Harley Puckett (left) and Jack Arnold (right) were highlighted in today's Arizona Daily Star, our local newspaper, as part of an article on IBM's success and leadership in the IT storage industry. With 1,400 employees here in Tucson, IBM is Southern Arizona's 36th largest employer.
Highlighted in the article:
- DS8700 with the new Easy Tier feature
- TS7650 ProtecTIER virtual tape library with data deduplication capability
- LTO-5 tape and the new Long Term File System (LTFS)
- XIV with the new 2TB drive, for a maximum per-rack usable capacity of 161 TB.
Read the full article [IBMers Crank Out 4 New Offerings To Handle Data Deluge]
technorati tags: Arizona Daily Star, IBM Tucson, DS8700, Easy Tier, ProtecTIER, Deduplication, LTO-5, LTFS, XIV, IBM, Tucson, Arizona
Continuing this week's coverage of IBM's 3Q announcements, today it's all about storage for our mainframe clients.
- IBM System Storage DS8700
IBM is the leader in high-end disk attached to mainframes, with the IBM DS8700 being our latest model in a long series of successful products in this space. Here are some key features:
- Full Disk Encryption (FDE), which I mentioned in my post [Different Meanings of the word "Protect"]. FDE drives are special 15K RPM Fibre Channel drives that include their own encryption chip, so the IBM DS8700 can encrypt data at rest without impacting read or write performance. The encryption keys are managed by IBM Tivoli Key Lifecycle Manager (TKLM).
- Easy Tier, which I covered in my post [DS8700 Easy Tier Sub Lun Automatic Migration], offers what EMC promised but has yet to deliver: the ability for CKD volumes and FBA LUNs to straddle the fence between Solid State Drives (SSD) and spinning disk. For example, a 54GB CKD volume could have 4GB on SSD and the remaining 50GB on spinning drives. The hottest extents are moved automatically to SSD, and the coldest are moved down to spinning disk (a conceptual sketch of heat-based extent placement appears at the end of this section). To learn more about Easy Tier, watch my [7-minute video] on the IBM [Virtual Briefing Center].
- z/OS Distributed Data Backup (zDDB), announced this week, provides the ability for a program running on z/OS to back up data written by distributed operating systems like Windows or UNIX and stored in FBA format. In the past, backing up FBA LUNs involved a program such as the IBM Tivoli Storage Manager client reading the data natively and sending it over the Ethernet LAN to a TSM server, which could run on the mainframe and use mainframe resources. This feature eliminates the Ethernet traffic by allowing a z/OS program to read the FBA blocks through standard FICON channels; the data can then be written to z/OS disk or tape resources. See the [Announcement Letter] for more details.
One program that already takes advantage of this new zDDB feature is Innovation's [FDRSOS], which I pronounce "fudder sauce". If you are an existing FDRSOS customer, now is a good time to get rid of any EMC or HDS disk and replace it with the new IBM DS8700 system.
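Returning to the Easy Tier feature above, here is a purely conceptual sketch of heat-based extent placement. It is not IBM's actual Easy Tier algorithm; the volume name, I/O counts and SSD capacity are made-up values for illustration only.

```python
# Conceptual illustration of sub-LUN tiering (not IBM's Easy Tier algorithm):
# rank 1GB extents by recent I/O activity and keep the hottest ones on SSD.
extent_heat = {            # (volume, extent number) -> I/Os in the last interval
    ("CKD_vol", 0): 9500, ("CKD_vol", 1): 8700, ("CKD_vol", 2): 120,
    ("CKD_vol", 3): 45,   ("CKD_vol", 4): 15,   ("CKD_vol", 5): 3,
}
SSD_EXTENT_SLOTS = 2       # how many extents fit on the SSD tier

ranked = sorted(extent_heat, key=extent_heat.get, reverse=True)
placement = {ext: ("SSD" if rank < SSD_EXTENT_SLOTS else "HDD")
             for rank, ext in enumerate(ranked)}

for ext in sorted(placement):
    print(ext, "->", placement[ext])
```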
- IBM System Storage TS7680 ProtecTIER Deduplication Gateway for System z
When it comes to virtual tape libraries that attach to mainframes, the two main players are IBM TS7700 series and Oracle StorageTek Virtual Storage Manager (VSM). However, mainframe clients with StorageTek equipment are growing frustrated over Oracle's lack of commitment for mainframe-attachable storage. To make matters worse, Oracle recently missed a key delivery date for their latest enterprise tape drive.
Unfortunately, neither of these offers deduplication of the data. IBM solved this with the IBM TS7680. I covered the initial announcement six months ago in my post [TS7680 ProtecTIER Deduplication for the mainframe].
What's new this week is that IBM now supports native IP-based asynchronous replication of virtual tapes at distance, from one TS7680 to another TS7680. This replaces the previous method of replicating through the back-end disk features. The problem with disk replication is that all the virtual tapes get copied over. Instead, the ProtecTIER administrator can decide which subset of virtual tapes should be replicated to the remote site, which can reduce both storage requirements and bandwidth costs. See the [Announcement Letter] for more details.
These new solutions will work with existing mainframes, as well as the new IBM [zEnterprise mainframe models] announced this week.
technorati tags: IBM, DS8700, FDE, Easy+Tier, zDDB, SSD, TS7680, Deduplication, VTL, Oracle, Sun, StorageTek, STK, VSM, zEnterprise
Wrapping up my week's theme of storage optimization, I thought I would help clarify the confusion between data reduction and storage efficiency. I have seen many articles and blog posts that either use these two terms interchangeably, as if they were synonyms, or treat one as merely a subset of the other.
- Data Reduction is LOSSY
By "Lossy", I mean that reducing data is an irreversible process. Details are lost, but insight is gained. In his paper, [Data Reduction Techniques", Rajana Agarwal defines this simply:
"Data reduction techniques are applied where the goal is to aggregate or amalgamate the information contained in large data sets into manageable (smaller) information nuggets."
Data reduction has been around since the 18th century.
Take for example this histogram from [SearchSoftwareQuality.com]. We have taken ninety individual student scores and reduced them down to just five numbers, the counts in each range. This provides for easier comprehension and comparison with other distributions.
The process is lossy. I cannot determine or re-create an individual student's score from these five histogram values.
This next example, courtesy of [Michael Hardy], represents another form of data reduction known as ["linear regression analysis"]. The idea is to take a large set of data points between two variables, x along the horizontal axis and y along the vertical axis, and find the line that best fits them. Thus the data is reduced from many points to just two values, slope (a) and intercept (b), resulting in the equation y=ax+b.
The process is lossy. I cannot determine or re-create any original data point from this slope and intercept equation.
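As a quick illustration, here is how such a regression fit reduces hundreds of observations to the two numbers a and b. The data below is synthetic, not the points from the figure.

```python
# Linear-regression data reduction: many (x, y) points collapse to two numbers,
# slope a and intercept b, as in y = ax + b. The data here is synthetic.
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 10, 200)                     # 200 observations...
y = 3.0 * x + 5.0 + rng.normal(0, 2, size=200)  # ...scattered around y = 3x + 5

a, b = np.polyfit(x, y, deg=1)                  # reduce 200 points to 2 values
print(f"y = {a:.2f}x + {b:.2f}")

# Lossy: from (a, b) we can describe the trend, but no individual original
# point can be recovered from the slope and intercept alone.
```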
This last example, from [Yahoo Finance], reduces millions of stock trades to a single point per day, typically the closing price, to show the overall trend over the course of the past year.
The process is lossy. Even if I knew the low, high and closing price of a particular stock on a particular day, I would not be able to determine or re-create the actual price paid for individual trades that occurred.
- Storage Efficiency is LOSSLESS
By contrast, there are many IT methods that can be used to store data in ways that are more efficient, without losing any of the fine detail. Here are some examples:
- Thin Provisioning: Instead of storing 30GB of data on 100GB of disk capacity, you store it on 30GB of capacity. All of the data is still there, just none of the wasteful empty space.
- Space-efficient Copy: Instead of copying every block of data from source to destination, you copy over only those blocks that have changed since the copy began. The blocks not copied are still available on the source volume, so there is no need to duplicate this data.
- Archiving and Space Management: Data can be moved out of production databases and stored elsewhere on disk or tape. Enough XML metadata is carried along so that there is no loss in the fine detail of what each row and column represent.
- Data Deduplication: The idea is simple. Find large chunks of data that contain the same exact information as an existing chunk already stored, and merely set a pointer to avoid storing the duplicate copy. This can be done in-line as data is written, or as a post-process task when things are otherwise slow and idle.
When data deduplication first came out, some lawyers were concerned that this was a "lossy" approach, that somehow documents were coming back without some of their original contents. How else can you explain storing 25PB of data on only 1PB of disk?
(In some countries, companies must retain data in their original file formats, as there is concern that converting business documents to PDF or HTML would lose some critical "metadata" information such as modification dates, authorship information, underlying formulae, and so on.)
Well, the concern applies only to those data deduplication methods that calculate a hash code or fingerprint, such as EMC Centera or EMC Data Domain. If the hash code of new incoming data matches the hash code of existing data, then the new data is discarded and assumed to be identical. Such collisions are rare; I have only read of a few occurrences of unique data being discarded in the past five years. To ensure full integrity, the IBM ProtecTIER data deduplication solution and IBM N series disk systems chose instead to do full byte-for-byte comparisons.
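To make the fingerprint-versus-verify trade-off concrete, here is a minimal sketch of chunk-level deduplication. It is my own illustration, not the ProtecTIER, Centera or Data Domain implementation; the chunk size and helper names are arbitrary.

```python
# Minimal sketch of chunk-level deduplication with optional byte-for-byte
# verification of fingerprint matches. Illustrative only.
import hashlib

CHUNK_SIZE = 4096
store = {}   # fingerprint -> chunk bytes (the single stored instance)

def dedup_write(data: bytes, verify: bool = True) -> list:
    """Return a recipe of fingerprints; store only chunks not already present."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp in store:
            # byte-for-byte comparison guards against a (vanishingly rare) collision
            if verify and store[fp] != chunk:
                raise RuntimeError("fingerprint collision detected")
        else:
            store[fp] = chunk
        recipe.append(fp)
    return recipe

def dedup_read(recipe: list) -> bytes:
    """Reassemble the original data from stored chunks; completely lossless."""
    return b"".join(store[fp] for fp in recipe)

original = b"hello world " * 10_000
recipe = dedup_write(original)
assert dedup_read(recipe) == original      # nothing was lost
physical = sum(len(c) for c in store.values())
print(f"logical {len(original):,} bytes, physical {physical:,} bytes")
```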
- Compression: There are both lossy and lossless compression techniques. The lossless Lempel-Ziv algorithm is the basis for the LTO-DC algorithm used in IBM's Linear Tape Open [LTO] tape drives, the Streaming Lossless Data Compression (SLDC) algorithm used in IBM's [Enterprise-class TS1130] tape drives, and the Adaptive Lossless Data Compression (ALDC) algorithm used by the IBM Information Archive for its disk pool collections.
Last month, IBM announced that it was [acquiring Storwize]. Its Random Access Compression Engine (RACE) is also a lossless compression algorithm based on Lempel-Ziv. As servers write files, Storwize compresses those files and passes them on to the destination NAS device. When files are read back, Storwize retrieves and decompresses the data back to its original form.
For independent views on IBM's acquisition, read Lauren Whitehouse's (ESG) post [Remote Another Chair], Chris Mellor's (The Register) article [Storwize Swallowed], or Dave Raffo's (SearchStorage.com) article [IBM buys primary data compression].
As with tape, the savings from compression can vary, typically from 20 to 80 percent. In other words, 10TB of primary data could take up from 2TB to 8TB of physical space. To estimate what savings you might achieve for your mix of data types, try out the free [Storwize Predictive Modeling Tool].
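Here is a small sketch of the lossless round-trip using Python's zlib. DEFLATE is in the Lempel-Ziv family but is not the LTO-DC, SLDC or ALDC algorithm itself, and the sample data and resulting ratio are illustrative only.

```python
# Lossless compression round-trip with zlib (LZ77-based DEFLATE).
# The savings depend entirely on how repetitive the data is.
import zlib

original = b"account,region,balance\nACME Corp,US-West,1000.00\n" * 5000
compressed = zlib.compress(original, level=9)
restored = zlib.decompress(compressed)

assert restored == original                      # every byte comes back
saved = 100 * (1 - len(compressed) / len(original))
print(f"{len(original):,} bytes -> {len(compressed):,} bytes ({saved:.0f}% saved)")
```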
So why am I making a distinction on terminology here?
Data reduction is already a well-known concept among specific industries, like High-Performance Computing (HPC) and Business Analytics. IBM has the largest market share in supercomputers that do data reduction for all kinds of use cases, for scientific research, weather prediction, financial projections, and decision support systems. IBM has also recently acquired a lot of companies related to Business Analytics, such as Cognos, SPSS, CoreMetrics and Unica Corp. These use data reduction on large amounts of business and marketing data to help drive new sources of revenues, provide insight for new products and services, create more focused advertising campaigns, and help understand the marketplace better.
There are certainly enough methods of reducing the quantity of storage capacity consumed, like thin provisioning, data deduplication and compression, to warrant an "umbrella term" that refers to all of them generically. I would prefer we do not "overload" the existing phrase "data reduction" but rather come up with a new phrase, such as "storage efficiency" or "capacity optimization" to refer to this category of features.
IBM is certainly quite involved in both data reduction as well as storage efficiency. If any of my readers can suggest a better phrase, please comment below.
technorati tags: IBM, data reduction, storage efficiency, histogram, linear regression, thin provisioning, data deduplication, lossy, lossless, EMC, Centera, hash collisions, Information Archive, LTO, LTO-DC, SLDC, ALDC, compression, deduplication, Storwize, supercomputers, HPC, analytics
It's Tuesday, and you know what that means... IBM Announcements!
- IBM System Storage ProtecTIER
Today, IBM refreshed its IBM System Storage ProtecTIER data deduplication family with new hardware and software. On the hardware side, the [TS7650G gateway] now has 32 cores and 64GB of RAM. The [TS7650 Appliance] now has 24 cores and 64GB of RAM, and the [TS7610 Appliance Express] has 4 cores and up to 16GB of RAM.
On the software side, all of these now support Symantec's proprietary "OpenStorage" (OST) API. This applies across the board: the [Enterprise Edition], [Appliance Edition], and [Entry Edition]. For those using Symantec NetBackup as their backup software, the OST API can provide advantages over the standard VTL interface.
- IBM Systems Director Storage Control
The second announcement has an interesting twist. I could file this in my "I Told You So" folder. Officially, it's called the [Cassandra Complex]: accurately predicting how something will turn out, but being unable to convince anyone else of what the future holds.
About ten years ago, I was asked to be lead architect of a new product to be called IBM TotalStorage Productivity Center, which was later renamed to IBM Tivoli Storage Productivity Center. This would combine three projects:
- Tivoli Storage Resource Manager (TSRM)
- Tivoli SAN Manager (TSANM)
- Multiple Device Manager (MDM)
The first two were based on Tivoli's internal GUI platform, and the MDM was a plug-in for IBM Systems Director. I argued that administrators would want everything on a single pane of glass, and that we should bring all the components under a common GUI platform, such as IBM Systems Director. Unfortunately, management did not agree with me on that, and preferred instead to leave each interface alone to minimize development effort. The only "unification" was to give them all similar-sounding names, with four components packaged as a single product:
- Productivity Center for Data (formerly TSRM)
- Productivity Center for Fabric (formerly TSANM)
- Productivity Center for Disk (formerly MDM)
- Productivity Center for Replication (formerly MDM)
While this management decision certainly allowed version 1 to hit the market sooner, this was not a good "first impression" of the product for many of our clients.
In 2002, IBM acquired Trellisoft, Inc., whose product replaced the internally developed TSRM with a much better interface, but again, this was a different GUI from the other components. For Version 2, a "launcher" was created that would launch the various disparate interfaces for each component. At this point, we had different development teams scattered across five locations, with the first two components being developed by the Tivoli software team, and the other two being developed by the System Storage hardware team.
Oftentimes, when a technical lead architect and management do not agree, things do not end well. The lead architect has to leave the product, and management is forced to take alternative actions to keep the product going. In my case, management considered the idea of a common GUI an expensive "nice-to-have" luxury we could not afford, but I considered it a "must-have". I moved on to a new job within IBM, and management, unable to continue without my leadership, gave up and handed the entire project over to the Tivoli Software team.
The Tivoli Software team took a whiff of the pile of code and agreed that it stunk. Dusting off my original design documents, they discarded most of the code and rewrote much of it from scratch, with a common database, common app server, and common GUI platform. Unfortunately, Productivity Center for Replication was held up waiting for some hardware prerequisites, but the other three components were packaged together as "Productivity Center v3 - Standard Edition", which was a big improvement over the prior versions.
In Version 4, TotalStorage Productivity Center was renamed to Tivoli Storage Productivity Center, and the Replication component was brought into the mix. A scaled-down version packaged as Productivity Center "Basic Edition" was made available as a hardware appliance named "System Storage Productivity Center" or SSPC. The idea was to provide a pre-installed 1U-high hardware console that had the basic functions of Productivity Center, with the option to upgrade to the full Tivoli Storage Productivity Center with just license keys.
So now, years later, management recognizes that a common GUI platform is more than just a "nice-to-have". IBM now supports three very specific use cases:
- 1. Administration for a single product
For small clients who might have only a single IBM product, IBM is now focused on making the GUI browser-based, specifically to work with the Mozilla Firefox browser, but any similar browser should work as well. The new IBM Storwize V7000 GUI is a good example of this. In this case, the browser serves as the common GUI platform.
- 2. Administration for both servers and storage devices
For mid-sized companies that have administrators managing both servers and storage, IBM announced this month the new [IBM Systems Director Storage Control v4.2.1] plug-in, which provides Tivoli Storage Productivity Center "Basic Edition" support. This allows admins already familiar with IBM Systems Director for managing their servers to also manage basic storage functions. This is the "I Told You So" moment: connecting server and storage administration under the IBM Systems Director management platform makes a lot of sense now, just as it did when I came up with the idea 10 years ago! Hmmmm?
- 3. Administration for just the storage environment
For larger companies big enough to have separate server and storage admin teams, IBM continues to offer the full Tivoli Storage Productivity Center product for the storage admins. The most recent release enhanced the support for IBM DS8000, SVC, Storwize V7000 and XIV storage systems.
Today, analysts consider IBM's [Tivoli Storage Productivity Center] one of the leading products in its category. I am glad my original vision has finally come to life, even though it took a while longer than I expected.
To learn more about IBM storage hardware, software or services, see the updated [IBM System Storage] landing page.
technorati tags: IBM, ProtecTIER, TS7650G, TS7650, TS7610, Symantec, NetBackup, OpenStorage, API, OST, TPC, TSRM, Trellisoft, TSANM, SSPC, Systems Director, Storage Control, GUI
Continuing my coverage of the [Data Center 2010 conference]. Wednesday morning started with another keynote session, followed by some break-out sessions.
- Realities of IT Investment
Tighter budgets mean more IT decisions are business decisions. Future investments will have to come from cost savings. The analysts report that 77 percent of IT decisions are made by CFOs. Most organizations are spending less now than back in 2008, before the recession.
How we innovate through IT is changing. In bad times, risk trumps return, but only 21 percent of the audience have a formal "risk calculation" as part of their purchase plans.
Divestment matters as much as investment. Reductions in complexity yield the greatest long-term cost savings. Try to retire at least 20 percent of your applications next year; with the advent of Cloud Computing, companies might just retire an application and go entirely with public cloud offerings. Note that the years in this graph are grouped differently than those above, in half-decade increments.
It is important to identify functional dependencies and link your IT risks to business outcomes. Focus on making costs visible, and re-think how you communicate IT performance measurements and their impact to business. Try to change the culture and mind-set so that projects are not referred to as "IT projects" focused on technology, but rather they are "business projects" focused on business results.
- Moving to the Cloud
Richard Whitehead from Novell presented challenges in moving to Cloud Computing. There are risks and challenges managing multiple OS environments. Users should have full access to all IT resources they need to do their jobs. Computing should be secure, compliant, and portable. He showed the shift he sees from physical servers to virtual and cloud deployments over the years 2010 to 2015.
Richard considers a "workload" to be the combination of the operating system, middleware, and application. He then defines a "Business Service" as an appropriate combination of these workloads. For example, a business service that provides a particular report might involve a front-end application talking through a business-logic workload server to a back-end database workload server.
To address this challenge, Novell introduced "Intelligent Workload Management", called WorkloadIQ. This manages the lifecycle to build, secure, deploy, manage and measure each workload. Their motto was to take the mix of physical, virtual and cloud workloads and "make it work as one". IBM is a business partner with Novell, and I am a big fan of Novell's open-source solutions, including SUSE Linux.
- A Funny Thing Happened on the Way to the Cloud....
Bud Albers, CTO of Disney, shared the company's success in deploying its hybrid cloud infrastructure. Everyone recognizes the Disney brand for movies and theme parks, but may not be aware that the company also owns ABC News and ESPN television, travel cruises, virtual worlds, and mobile sites, and deploys applications like Fantasy Football and Fantasy Fishing.
Two years ago, each Line of Business (LOB) owned its own servers; they were continually out of space, and power and HVAC issues forced tactical build-outs of their datacenters. But in 2008, the answer to all questions was Cloud Computing, which slices and dices like something invented by [Ron Popeil], with no investment or IT staff required. However, continuing to ask the CFO for CAPEX to purchase assets that were only 1/7th used was not working out either. That's right: over 75 percent of their servers were running at less than 15 percent CPU utilization.
The compromise was named "D*Cloud". Internal IT infrastructure would be positioned for Cloud Computing, by adopting server virtualization, implementing REST/SOAP interfaces, and replicating the success across their various Content Distribution Networks (CDN). Disney is no stranger to Open Source software, using Linux and PHP. Their [Open Source] web page shows tools available from Disney Animation studios.
At the half-way point, they had half their applications running virtualized on just 4 percent of their servers. Today, they run over 20 VMs per host and have 65 percent of their apps virtualized. Their target is 80 percent of their apps virtualized by 2014.
Bud used the analogy that public clouds will be the "gas stations" of the IT industry. People will choose the cheapest gas among nearby gas stations. By focusing on "Application management" rather than "VM instance management", Disney is able to seamlessly move applications as needed from private to public cloud platforms.
Their results? Disney is now averaging 40 percent CPU utilization across all servers. Bud feels they have achieved better scalability, better quality of service, and increased speed, all while saving money. Disney is spending less on IT now than in 2008.
- UPMC Maximizes Storage Efficiency with IBM
Kevin Muha, UPMC Enterprise Architect & Technology Manager for Storage and Data Protection Services, was unable to present this in person, so Norm Protsman (IBM) presented Kevin's charts on the success at the University of Pittsburgh Medical Center [UPMC]. UPMC is Western Pennsylvania's largest employer, with roughly 50,000 employees across 20 hospitals, 400 doctors' offices and outpatient sites. They have frequently been rated one of the best hospitals in the US.
Their challenge was storage growth. Their storage environment had grown 328 percent over the past three years, to 1.6PB of disk and nearly 7 PB of physical tape. To address this, UPMC deployed four IBM TS7650G ProtecTIER gateways (2 clusters) and three XIV storage systems for their existing IBM Tivoli Storage Manager (TSM) environment. Since they were already using TSM over a Fibre Channel SAN, the implementation took only three days.
UPMC was backing up nearly 60TB per day in a 15-hour backup window. Their primary data is roughly 60 percent Oracle, with the rest being a mix of Microsoft Exchange, SQL Server, and unstructured data such as files and images.
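As a back-of-the-envelope check (my own arithmetic, not figures from the presentation), the stated window and the 24:1 deduplication ratio mentioned below imply roughly the following:

```python
# Rough arithmetic implied by the figures above (not from the presentation itself).
daily_backup_tb = 60     # data backed up per day
window_hours = 15        # backup window
dedup_ratio = 24         # average deduplication ratio

throughput_gb_per_s = daily_backup_tb * 1024 / (window_hours * 3600)
physical_tb_per_day = daily_backup_tb / dedup_ratio

print(f"~{throughput_gb_per_s:.1f} GB/s sustained ingest across the window")
print(f"~{physical_tb_per_day:.1f} TB/day of new physical capacity at 24:1")
```

That works out to roughly 1.1 GB/s of sustained ingest and about 2.5 TB of new physical capacity per day.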
Their results? TSM reclamation is 30 percent faster. Hardware footprint reduced from 9 tiles to 5. Over 50 percent reduction in recovery time for Oracle DB, and 20 percent reduction in recovery of SQL Server, Microsoft Exchange, and Epic Cache. They average 24:1 deduplication overall, which can be broken down by data category as follows:
- 29:1 Cerner Oracle
- 18:1 EPIC Cache
- 10:1 Microsoft SQL Server
- 8:1 Unstructured files
- 6:1 Microsoft Exchange
UPMC still has lots of LTO-4 tapes onsite and offsite from before the change-over, so the next phase planned is to implement "IP-based remote replication" between ProtecTIER gateways to a third data center at extended distance. The plan is to only replicate the backups of production data, and not replicate the backups of test/dev data.
The presentation and supporting case study details are available on the [IBM Literature Fulfillment] website.
The show floor closed after Wednesday's lunch, so many people made their last attempts to meet the folks at the booth.
technorati tags: IBM, CFO, Novell, WorkloadIQ, Disney, UPMC, ProtecTIER, TS7650G, deduplication, XIV, Kevin Muha, Norm Protsman, Bud Albers
Every January, we look back into the past as well as look into the future for trends to watch for the upcoming year. Ray Lucchesi of Silverton Consulting has a great post looking back at the [Top 10 storage technologies over the last decade]. I am glad to see that IBM has been involved with and instrumental in all ten technologies.
Looking into the future, Mark Cox of eChannel has an article [Storage Trends to Watch in 2011], based on his interviews with two fellow IBM executives: Steve Wojtowecz, VP of storage software development, and Clod Barrera, distinguished engineer and CTO for storage. Let's review the four key trends:
- Cloud Storage and Cloud Computing
No question: Cloud Computing will be the battleground of the IT industry this decade. I am amused by the latest spate of Microsoft commercials where problems are solved with someone saying "...to the cloud". Riding on the coat tails of this is "Cloud Storage", the ability to store data across an Internet Protocol (IP) network, such as 10GbE Ethernet, in support of Cloud Computing applications. Cloud Storage protocols in the running include NFS, CIFS, iSCSI and FCoE.
Mark writes "..vendors who aren't investing in cloud storage solutions will fall behind the curve."
- Economic Downturn forces Innovation
The old British adage applies: "Necessity is the mother of invention." The status quo won't do. In these difficult economic times, IT departments are running on constrained budgets and staff. This forces people to evaluate innovative technologies for storage efficiency like real-time compression and data deduplication to make better use of what they currently have. It also is forcing people to take a "good enough" attitude, instead of paying premium prices for best-of-breed they don't really need and can't really afford.
- IT Service Management
Companies are getting away from managing individual pieces of IT kit, and are focusing instead on the delivery of information, from the magnetic surface of disk and tape media, to the eyes and ears of the end users. The deployment mix of private, hybrid and public clouds makes this even more important to measure and manage IT as a set of services that are delivered to the business. IT Service Management software can be the glue, helping companies implement ITIL v3 best practices and management disciplines.
- Smarter Data Placement
A recent survey by "The Info Pro" analysts indicates that "managing storage growth" is considered more critical than "managing storage costs" or "managing storage complexity".
This tells me that companies are willing to spend a bit extra to deploy a tiered information infrastructure if it will help them manage storage growth, which typically runs around 40 to 60 percent per year. While I have discussed the concept of "Information Lifecycle Management" (ILM) for the past four years on this blog, I am glad to see it has gone mainstream, helped in part by automated storage tiering features like the IBM System Storage Easy Tier feature on the IBM DS8000, SAN Volume Controller and Storwize V7000 disk systems. Not all data is created equal, so the smart placement of data, based on the business value of the information it contains, makes a lot of sense.
These trends are influencing what solutions the various vendors will offer, and what companies will purchase and deploy.
technorati tags: IBM, Steve Wojtowecz, Clod Barrera, Mark Cox, Cloud Computing, Cloud, Storage, NFS, CIFS, iSCSI, FCoE, real-time compression, deduplication, IT Service Management, Easy Tier, DS8000, SVC, Storwize V7000
Continuing my coverage of the [IBM System x and System Storage Technical Symposium]. Here is a recap of Day 2:
- IBM Storage Strategy in the Smarter Computing Era
Since Clod Barrera introduced IBM's Smarter Computing initiative during yesterday's keynote session, I took it to the next lower level, with a presentation on how IBM's Storage Strategy aligns with the Smarter Computing approach.
- Deduplication -- It's Not Magic, It's Math!
Local IBMer Paul Rizio presented this high-level session on the concepts of data deduplication, and how it is implemented in IBM's N series, TSM and ProtecTIER virtual tape libraries. I first met Paul earlier this year when we were both instructors at Top Gun classes we held in Auckland, New Zealand and Sydney, Australia.
- IBM Information Archive for files, email and eDiscovery
This was a reprise of the presentation I gave last July in Orlando, Florida (see my blog post [IBM Storage University - Day 1]). I explained the differences between backup and archive, the differences between Tivoli Storage Manager and System Storage Archive Manager, and the Information Archive (IA). The Information Archive for files, email and eDiscovery bundle combines IA hardware with content collectors for files and email, plus eDiscovery Analyzer and eDiscovery Manager software.
- What are Industry Consultants saying about IBM Storage?
Vic Peltz, from our IBM Almaden Research Center, gave this lively presentation on how IT industry analysts gather their information and structure their findings into various models. For many in the audience, this was their first exposure to concepts like the "Magic Quadrant", "MarketScope" and the various stages of the "Hype Cycle".
- IBM SONAS and the Smart Business Storage Cloud
The title of this session just rolls off my tongue, similar to "James and the Giant Peach" or "Harold and the Purple Crayon". I had presented this back in July (see my blog post [IBM Storage University - Cloud Storage]). This time, I had updated the materials to reflect the new SONAS R1.3 release, and the new IBM SmartCloud offerings announced last month.
Of course the big news is that U.S. President Barack Obama is here in Australia, with a stop in Canberra (not far from Melbourne), followed by a stop in Darwin on the north side of this country. This is his first official visit to Australia as president.
technorati tags: IBM, Storage, Symposium, Melbourne, Australia, Storage+Strategy, Smarter+Computing, Deduplication, ProtecTIER, TSM, Information Archive, Magic Quadrant, Hype Cycle, SONAS, SmartCloud, Barack Obama
Continuing my coverage of the 30th annual [Data Center Conference]. Here is a recap of more of the Tuesday afternoon sessions:
- IBM CIOs and Storage
Barry Becker, IBM Manager of Global Strategic Outsourcing Enablement for Data Center Services, presented this session on Storage Infrastructure Optimization (SIO).
A bit of context might help. I started my career in DFHSM, which moved data from disk to tape to reduce storage costs. Over the years, I would visit clients, analyze their disk and tape environment, and provide a set of recommendations on how to run their operations better. In 2004, this was formalized into week-long "Information Lifecycle Management (ILM) Assessments", and I spent 18 months in the field training a group of folks on how to perform them. The IBM Global Technology Services team has taken a cross-brand approach, expanding the ILM assessment to include evaluations of application workloads and data types. These SIO studies take 3-4 weeks to complete.
Over the next decade, there will only be 50 percent more IT professionals than we have today, so new approaches will be needed for governance and automation to deal with the explosive growth of information.
SIO deals with both the demand and supply of data growth in five specific areas:
- Data reclamation, rationalization and planning
- Virtualization and tiering
- Backup, business continuity and disaster recovery
- Storage process and governance
- Archive, Retention and Compliance
The process involves gathering data and interviewing business, financial and technical stakeholders such as storage administrators and application owners. The interviews take less than one hour per person.
Over the past two years, the SIO team has uncovered disturbing trends. A big part of the problem is that 70 percent of data stored on disk has not been accessed in the past 90 days and is unlikely to be accessed at all in the near future, so it would probably be better stored on lower-cost storage tiers.
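As a rough illustration of how one might measure this on an individual file system, here is a small sketch of my own; it is not the SIO methodology, and it relies on file access times, which some systems mount with noatime or relatime and therefore do not keep current.

```python
# Estimate how much data under a directory tree has not been read in 90 days,
# using each file's last-access time (atime). Illustrative only.
import os, time

def cold_data_bytes(root: str, days: int = 90):
    cutoff = time.time() - days * 86400
    total = cold = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            try:
                st = os.stat(os.path.join(dirpath, name))
            except OSError:
                continue            # skip files that vanished or deny access
            total += st.st_size
            if st.st_atime < cutoff:
                cold += st.st_size
    return cold, total

cold, total = cold_data_bytes("/home")
if total:
    print(f"{100 * cold / total:.0f}% of {total / 1e9:.1f} GB untouched for 90+ days")
```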
Storage Resource Management (SRM) is also a mess, with over 85 percent of clients having serious reporting issues. Even rudimentary "showback" systems that report what each individual, group or department is using resulted in significant improvement.
Archive is not universally implemented mostly because retention requirements are often misunderstood. Barry attributed this to lack of collaboration between storage IT personnel, compliance officers, and application owners. A "service catalog" that identifies specific storage and data types can help address many of these concerns.
The results were impressive. Clients that follow SIO recommendations save on average 20 to 25 percent after one year, and 50 percent after three to five years. Implementing storage virtualization averaged 22 percent lower CAPEX costs. Those that implemented a "service catalog" saved on average $1.9 million US dollars. Internally, IBM's own operations have saved $13 million dollars implementing these recommendations over the past three years.
- Reshaping Storage for Virtualization and Big Data
The two analysts presenting this topic acknowledged there is no downturn on the demand for storage. To address this, they recommend companies identify storage inefficiencies, develop better forecasting methodologies, implement ILM, and follow vendor management best practices during acquisition and outsourcing.
To deal with new challenges like virtualization and Big Data, companies must decide to keep, replace or supplement their SRM tools, and build a scalable infrastructure.
One suggestion to get upper management to accept new technologies like data deduplication, thin provisioning, and compression is to refer to them as "Green" technologies, as they help reduce energy costs as well. Thin provisioning can help drive storage utilization up as high as you dare; typically, 60 to 70 percent is what most people are comfortable with.
A poll of the audience found that top three initiatives for 2012 are to implement data deduplication, 10Gb Ethernet, and Solid-State drives (SSD).
The analysts explained that there are two different types of cloud storage. The first kind is storage "for" the cloud, used for cloud compute instances (aka Virtual Machines), such as Amazon EBS for EC2. The second kind is storage "as" the cloud, storage as a data service, such as Amazon S3, Azure Blob and AT&T Synaptic.
The analysts feel that cloud storage deployments will be mostly private clouds, bursting as needed to public cloud storage. This creates the need for a concept called "Cloud Storage Gateways" that manage this hybrid of some local storage and some remote storage. IBM's SONAS Active Cloud Engine provides long-distance caching in this manner. Other smaller startups include cTera, Nasuni, Panzura, Riverbed, StorSimple, and TwinStrata.
A variation of this is the "storage gateway" for backup and archive providers, which acts as a staging area for data to be subsequently sent on to the remote location.
New projects like virtualization, Cloud computing and Big Data are giving companies a new opportunity to re-evaluate their strategies for storage, process and governance.
technorati tags: IBM, SIO, SRM, deduplication, 10GbE, SSD, Amazon, EBS, EC2, Azure, SONAS, Active Cloud Engine, Cloud Computing, virtualization, Big Data
This week I am in Orlando, Florida for the IBM Edge conference. Here is a recap of Day 3.
- Data Footprint Reduction: Understanding IBM Storage Efficiency Options
Earlier this year, I wrote a Web article titled [Data Footprint Reduction] which covered data deduplication and compression, and was asked to present this at IBM Edge. I have expanded it to include:
- Thin Provisioning
- Space-Efficient Point-in-Time copies
- Data Deduplication
After I presented the basic concepts, Sanjay Bhikot, a Unix and storage admin at RICOH, presented his real-world experiences with data deduplication using IBM ProtecTIER and his real-time compression beta experience using the SAN Volume Controller (SVC).
- IBM Active Cloud Engine Implementation on IBM SONAS 1.3 and IBM Storwize V7000 Unified
John Sing (IBM) presented the latest enhancements in the v1.3.2 release of SONAS and Storwize V7000 Unified.
- Introducing VMware vSphere Storage Features
Fellow blogger Stephen Foskett presented this session on VMware's storage features. This included the VMware APIs for Array Integration (VAAI), the vSphere APIs for Storage Awareness (VASA), vCenter plug-ins, and a new concept he called "vVol", which de-multiplexes the "I/O blender" effect that server hypervisors create by tagging individual requests to individual OS guests. IBM is a leading reseller of VMware, so it makes sense that most of our storage meets all of Steve's requirements for recommendation.
- IBM's Storage Strategy in the Smarter Computing Era
Last year, I presented this on the fourth day of the conference, and feedback from attendees was that it should have been presented earlier in the week, as it provides great context for the more detailed product presentations.
To address this concern, the IBM executives presented IBM strategy on Monday's keynote session, but allowed me to present this on Wednesday for several reasons:
- You may have missed the keynote session. For example, you may not have arrived in time to hear the executives speak due to weather or mechanical problems causing travel delays.
- You may have attended the keynote session, but want to hear it again. Maybe you were a bit hung-over, or just overwhelmed with the size and scope of this event. I have read that for strategic topics, audiences may have to hear the message five to seven times before they truly appreciate and understand it.
- You may want to ask questions, and explore the implications in more detail. While keynote sessions can reach a broader audience, the communication is very much uni-directional. In break-out sessions with a few hundred people, the venue is more intimate and affords opportunities for information exchange.
This was well attended, so the plan worked!
- IBM SONAS and the Cloud Storage Taxonomy
The title of this session rolls off the tongue nicely, much like "James and the Giant Peach", "Harold and the Purple Crayon", or "Charlie and the Chocolate Factory".
When people say they are interested in "Cloud Storage", what exactly do they mean? After discussions with hundreds of clients, IBM has worked out a "taxonomy" that identifies four distinct types of storage:
- Persistent storage
- Ephemeral storage
- Hosted storage
- Reference storage
In this session, I presented how IBM SONAS addresses all four of these categories, as well as other IBM storage products that can address specific categories in the taxonomy.
In the evening, the attendees at IBM Edge joined the attendees from Innovate2012 (focused on IBM Rational products) at SeaWorld, with BBQ dinner, rides, Shamu the whale show, and a concert featuring Foreigner!
technorati tags: IBM, Stephen Foskett, Sanjay Bhikot, Data Footprint Reduction, Compression, Deduplication, Space-Efficient, Point-in-time, RICOH, SVC, Storwize V7000, SONAS, Active Cloud Engine, Smarter Computing, Smarter Storage, Foreigner, SeaWorld, Innovate2012