"Politics makes strange bedfellows."
--- Charles Dudley Warner
In my September 2007 post [Supermarkets and Specialty Shops], I explained that there are two kinds of clients:
- Those that prefer the one-stop shopping of an IT Supermarket, working with companies like IBM, HP and Dell that offer a complete set of servers, storage, switches, software and services, what we call "The Five S's".
- Those that prefer shopping for components at individual specialty shops, like butchers, bakers, and candlestick makers, hoping that this singular focus means the products are best-of-breed in the market. Companies like HDS for disk, Quantum for tape, and Symantec for software come to mind.
My how the IT landscape for vendors has evolved in just the past five years! Cisco started selling servers, and entered a "mini-mall" alliance with EMC and VMware to offer the Vblock, an integrated stack of servers, storage and switches with VMware as the software hypervisor. For those not familiar with the concept, a mini-mall is typically a row of specialty shops. Shoppers can park their car once and do all their shopping at the various shops in the mini-mall. It is not quite the "one-stop" shopping of a supermarket, but it tries to address the same need.
("Who do I call when it breaks?" -- The three companies formed a puppet company, the Virtual Computing Environment company, or VCE, to help answer that question!)
Among the many things IBM has learned in its 100+ years of experience is that clients want choices. Cisco figured this out also, and partnered with NetApp to offer the aptly-named FlexPod reference architecture. In effect, Cisco has two boyfriends: when she is with EMC, it is called a Vblock, and when she is with NetApp, it is called a FlexPod. I was lucky enough to find this graphic to help explain the three-way love triangle.
Did this move put a strain on the relationship between Cisco and EMC? Last month, EMC announced VSPEX, a FlexPod-like approach that provides a choice of servers, and some leeway for resellers to make choices to fit client needs better. Why limit yourself to Cisco servers, when IBM and HP servers are better? Is this an admission that Vblock has failed, and that VSPEX is the new way of doing things? No, I suspect it is just EMC's way to strike back at both Cisco and NetApp in what many are calling the "Stack Wars". (See [The Stack Wars have Begun!], [What is the Enterprise Stack?], or [The Fight for the Fully Virtualized Data Center] for more on this.)
(FTC Disclosure: I am both an employee and shareholder of IBM, so the U.S. Federal Trade Commission may consider this post a paid, celebrity endorsement of the IBM PureFlex system. IBM has working relationships with Cisco, NetApp, and Quantum. I was not paid to mention, nor have I any financial interest in, any of the other companies mentioned in this blog post.)
Chris Mellor and Timothy Prickett Morgan at The Register have a great series of posts exploring this new development: [EMC VSPEX storage torpedo could sink FlexPods], [El Reg hurls EMC onto the rack, drills into VSPEX], [We were right: EMC's VSPEX will take on FlexPods], and [How EMC stuffs channel cakeholes with VSPEX recipes].
Last month, IBM announced its new PureSystems family, ushering in a [new era in computing]. I invite you all to check out the many "Patterns of Expertise" available at the [IBM PureSystems Centre]. This is like an "app store" for the data center, and it is what I feel truly differentiates IBM's offerings from the rest.
The trend is obvious. Clients who previously purchased from specialty shops are discovering that building workable systems from piece-parts from separate vendors is expensive and challenging. IBM PureFlex™ systems eliminate a lot of that complexity and effort, but still offer plenty of flexibility: a choice of server processor types, a choice of server and storage hypervisors, and a choice of operating systems.
technorati tags: IBM, Stack Wars, PureSystems, PureFlex, VSPEX, EMC, Vblock, Cisco, VMware, NetApp, FlexPod, HDS, Quantum, Symantec
Tags: 
ibm
symantec
vspex
pureflex
netapp
vblock
flexpod
cisco
hds
puresystems
quantum
vmware
emc
stack+wars
|
Continuing my coverage of the 30th annual [Data Center Conference]. Here is a recap of some of the Tuesday afternoon sessions:
- Brocade: Maximizing Your Cloud: How Data Centers Must Evolve
This was a session sponsored by Brocade to promote their concept of the "Ethernet Fabric". The first speaker, John McHugh, was from Brocade; the second, Jamie Shepard, EVP for International Computerware, Inc., provided a client testimonial.
John had an interesting take on today's network challenges. He feels that most LANs are organized for "North-South" traffic, referring to uploads and downloads between clients and servers. However, the networks of tomorrow will need to focus on "East-West" traffic, referring to servers talking to other servers.
John was also opposed to integrated stacks that combine servers, storage and networking into a single appliance, as this prevents independent scaling of resources.
- The Future of Backup is Not Backup
Primary data is growing at a 40 to 60 percent compound annual growth rate (CAGR), but backup data is growing even faster. Why? Because data that was not backed up before is now being backed up, including test data, development data, and mobile application data.
Backup costs are 19 times higher than production software costs. There is an enormous gap in data protection because companies fail to factor this into their budgets. It is not uncommon for IT departments to use multiple backup tools, for example one tool for VMs, another for physical servers, and a third for desktops.
Part of the problem is identifying who "buys" the backup software. The server team might focus on the operating systems supported. The storage team focuses on the disk and tape media supported. The application owners focus on the backup features and capabilities that minimize impact to their applications.
The analyst organized these issues into the three "C's" of backup concerns: Cost, Capability and Complexity. Cost is not just the license fee for the backup software, but also the cost of backup media, courier fees, and transmission bandwidth. Capability refers to features and functions; IT folks are tired of having to augment their backup solution with additional tools and scripts to compensate for lack of capability. Complexity refers to the challenge of getting existing backup software to tackle new sources like virtual machines, mobile apps, and so on.
Has everyone moved to a tape-less backup system? Polling results showed that people are shifting back to tape, either in a tape-only environment, or using tape to supplement disk or a disk-based virtual tape library (VTL). Here are the polling results:
The poll also showed the top three backup software vendors were Symantec, IBM and Commvault, which is consistent with market share. However, the analyst feels that by 2014, an estimated 30 percent of companies will change their backup software vendor out of frustration over cost, capability and/or complexity.
There are a lot of new backup software products specifically for dealing with Virtual Machines. Some are focused exclusively on VMware. When asked what tool people used to back up their VMs, the polling results showed the following. Note that the 20 percent for "Other" includes products from major vendors, like IBM Tivoli Storage Manager for Virtual Environments, as the analyst was more interested in the uptake of backup software from startups.
Some companies are considering Cloud Computing for backup. This is one area where having the cloud service provider at a distance is an actual advantage, for added protection. A poll asking whether some or most data is backed up to the Cloud, either already today or planned within the next 12 to 24 months, showed the following:
In addition to backup service providers, there are now several startups that offer file sharing, and some are adding "versioning", which can serve as an alternative to backup. These include Dropbox, SugarSync, iCloud, SpiderOak and ShareFile.
The final topic was Snapshot and Disk Replication. These tend to be hardware-based, so they may not have the versioning, scheduling, or application-aware capabilities normally associated with backup software. Space-efficient snapshots, which point unchanged data back to the original source, may not provide the full data protection that separate backup copies would. Here are the polling results on whether snapshot/replication was used to augment or replace some or most backups:
Some of his observations and recommendations:
- Maintenance is more expensive than acquisition cost, so don't focus on just the tip of the iceberg. Some backup software is more efficient with bandwidth and media, which will save tons of money in the long run.
- Try to optimize what you have. He calls this the "Starbucks effect": if you just need one coffee, then paying $4.50 for a cup makes sense, but if you need 100 coffees, you might be better off buying the beans.
- Design backups to meet service level agreements (SLAs). In the past, backup was treated as one-size-fits-all, but today you can tune it on a workload-by-workload basis.
- Be conservative in adopting new technologies until you have your backup procedures in place to handle data protection.
- Backup is for operational recovery, not long-term retention of data. A poll showed two-thirds of the audience kept backup versions for longer than 60 days! Re-evaluate how long you keep backups, and how many versions you keep. If you need long-term retention, use an archive process instead (see the sketch after this list).
- Recovery testing is a dying art. Practice recovery procedures so that you can do it safely and correctly when it matters most.
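On the retention point above, here is a minimal sketch of how versions and retention limits are expressed with IBM Tivoli Storage Manager administrative commands; the domain, policy set, management class and storage pool names are illustrative, and the values are examples to re-evaluate, not recommendations:

    /* Backup copy group: keep at most 3 versions of files that still exist,
       inactive versions for 30 days, and the last version of a deleted file for 60 days */
    define copygroup standard standard standard type=backup destination=backuppool verexists=3 verdeleted=1 retextra=30 retonly=60
    /* Long-term retention belongs in an archive copy group instead, for example seven years */
    define copygroup standard standard standard type=archive destination=archivepool retver=2555
    /* Validate and activate the updated policy set */
    validate policyset standard standard
    activate policyset standard standard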
The analyst had a series of awesome pictures of large structures, the pyramids of Giza, the Chrysler Building, and so on, showing how they would look without their foundations in place. Backup is a foundation, and should be treated as such in all IT planning.
IT is evolving, but some basic needs like networking and backup procedures don't change. As companies re-evaluate their IT operations for Big Data, Cloud Computing and other new technologies, it is best to remember that some basic needs must be met as part of those evaluations.
Tags: 
ibm
vtl
d2t
cloud+computing
snapshot
brocade
ethernet+fabric
john+mchugh
d2d
dropbox
client-server
symantec
sugarsync
commvault
spideroak
icloud
replication
d2d2t
|
I have been working on Information Lifecycle Management (ILM) since before they coined the phrase. There were several break-out sessions on the third day at the [IBM System Storage Technical University 2011] related to new twists on ILM.
- The Intelligent Storage Service Catalog (ISSC) and Smarter ILM
Hans Ammitzboll, Solution Rep for IBM Global Technology Services (GTS), presented an approach to ILM focused on using different storage products for different tiers. Is this new? Not at all! The phrase "Information Lifecycle Management" was coined in the early 1990s by StorageTek to help sell automated tape libraries.
Unfortunately, disk-only vendors started using the term ILM to refer to disk-to-disk tiering inside the disk array. Hans feels it does not make sense to put the least expensive penny-per-GB 7200 RPM disk inside the most expensive enterprise-class high-end disk arrays.
IBM GTS manages not only IBM's internal operations, but also the IT operations of hundreds of other clients. To help manage all this storage, they developed software to streamline the reporting, monitoring and movement of data from one tier to another.
The Intelligent Storage Service Catalog (ISSC) can save up to 80 percent of the planning time for managing storage. What did people use before? Hans poked fun at chargeback and showback systems that "offer savings" but don't actually "impose savings". He referred to these as "Name-and-Shame" reports, listing the top 10 offenders of storage usage.
His storage pyramid involves a variety of devices: IBM DS8000, SVC and XIV at the high end, midrange disk like the Storwize V7000 in the middle, and blended disk-and-tape solutions like SONAS and the Information Archive (IA) for the lower tiers.
To help people understand these concepts, IBM developed the [Thinking Worlds Player] game.
- Managing your Data with Policies on SONAS
Mark Taylor, IBM Advanced Technical Services, presented the policy-driven automation of IBM's Scale-Out NAS (SONAS). A SONAS system can hold 1 to 256 file systems, and each file system is further divided into fileset containers. Think of fileset containers as 'tree branches' of the file system.
SONAS supports policies for file placement, file movement, and file deletion. These are SQL-like statements that are applied to specific file systems in the SONAS system. Input variables include date last modified, date last accessed, file name, file size, fileset container name, user id and group id. You can choose to have the rules be case-sensitive or case-insensitive. The rules also support macros: a macro pre-processor can help simplify calculations and other definitions that are used repeatedly.
Each file system in SONAS consists of one or more storage pools. For file systems with multiple pools, file placement policies determine which pool each file is placed in. On other NAS systems, all the files in a given sub-directory normally sit on the same type of disk. With SONAS, some files can be placed on 15K RPM drives, and other files on slower 7200 RPM drives. This file virtualization separates the logical grouping of files from their physical placement.
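To make this concrete, here is a minimal sketch of placement rules in the GPFS-style policy language that SONAS builds on; the pool, fileset and file-extension names are purely illustrative, not from the session:

    /* Place database files from the 'projects' fileset on fast 15K RPM disk;
       LOWER() makes the file-name match case-insensitive */
    RULE 'dbOnFast' SET POOL 'fast15k' FOR FILESET ('projects') WHERE LOWER(NAME) LIKE '%.db'
    /* Everything else lands in the less expensive 7200 RPM pool */
    RULE 'default' SET POOL 'sata7200'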
Once files are placed, other policies can be written to migrate files from one disk pool to another, migrate them from disk to tape, or delete them. Migrating from one disk pool to another is done by relocation: the next time the file is accessed, it is read directly from the new pool. When migrating from disk to tape, a stub is left in the directory structure metadata, so that a subsequent access causes the file to be recalled automatically from tape back to disk. Policies can determine which storage pool files are recalled to when this happens.
Migrating from disk to tape involves sending the data from SONAS to an external storage pool manager, such as an IBM Tivoli Storage Manager (TSM) server connected to a tape library. SONAS supports pre-migration, which allows data to be copied to tape but left on disk until space needs to be freed up. For example, a policy with THRESHOLD(90,70,50) kicks in when the file system is 90 percent full: files are migrated (moved) to tape until utilization reaches 70 percent, and then files are pre-migrated (copied) to tape until it reaches 50 percent.
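In the same sketch style (the macro, interface script path and pool names are illustrative assumptions), such a migration policy might look like this:

    /* m4-style macro for a calculation used repeatedly */
    define(DAYS_IDLE, (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)))
    /* Declare an external pool managed by a TSM server; the script path is hypothetical */
    RULE 'tsmPool' EXTERNAL POOL 'tsm' EXEC '/opt/ibm/sonas/bin/tsmInterface'
    /* At 90 percent full, move the least-recently-accessed files to tape until 70 percent,
       then pre-migrate (copy) more files until 50 percent */
    RULE 'spillToTape' MIGRATE FROM POOL 'sata7200' THRESHOLD(90,70,50) WEIGHT(DAYS_IDLE) TO POOL 'tsm' WHERE DAYS_IDLE > 30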
Policies to delete files can apply to both disk and tape pools. Deleting a file on tape removes the stub from the directory structure metadata and notifies the external storage pool manager to clean up its records for the tape data.
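A deletion rule reads much the same way; this sketch re-uses the DAYS_IDLE macro, assuming it sits in the same policy file as the define above:

    /* Purge files not accessed in over a year, whether they live on disk or on tape */
    RULE 'expire' DELETE WHERE DAYS_IDLE > 365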
If this all sounds like a radically new way of managing data, it isn't. Many of these functions are based on IBM's Data Facility Storage Management Subsystem (DFSMS) for the mainframe. In effect, SONAS brings mainframe-class functionality to distributed systems.
- Understanding IBM SONAS Use Cases
For many, the concept of a scale-out NAS is new. Stephen Edel, IBM SONAS product offering manager, presented a variety of use cases where SONAS has been successful.
First, let's consider backup. IBM SONAS has built-in support for Tivoli Storage Manager (TSM), and also supports the NDMP industry-standard protocol for use with Symantec NetBackup, Commvault Simpana, and EMC Legato NetWorker. While many NAS solutions support NDMP, IBM SONAS can support up to 128 sessions per interface node, and up to 30 interface nodes, for parallel processing. SONAS has a high-speed file scan to identify files to be backed up, and will pre-fetch small files into cache to speed up the backup process. A SONAS system can hold up to 256 file systems, and each file system can be backed up on its own schedule if you like. Different file systems can be backed up to different backup servers.
SONAS also has anti-virus support, with your choice of Symantec or McAfee. An anti-virus scan can be run on demand as needed, or as files are individually accessed. When a Windows client reads a file, SONAS determines whether it has already been scanned with the most recent anti-virus signatures, and if not, scans it before allowing the file to be read. SONAS will also scan newly created files.
Successful SONAS deployments addressed the following workloads:
- content capture including video capture
- content distribution
- file production/rendering
- high performance computing, research and business analytics
- "Cheap and Deep" archive
- worldwide information exchange and geographically distant collaboration
- cloud hosting
SONAS is selling well in Government, Universities, Healthcare, and Media/Entertainment, but is not limited to these industries. It can be used for private cloud deployments and public cloud deployments. Having centralized management for Petabytes of data can be cost-effective either way.
IBM SONAS brings the latest technologies and a Smarter ILM approach to a variety of workloads and use cases.
technorati tags: IBM, ILM, SONAS, ISSC, GTS, Hans Ammitzbol, Mark Taylor, Stephen Edel, Scale-Out, NAS, NDMP, Symantec, McAfee, Anti-Virus, Policy-Driven Automation
Tags: 
stephen+edel
symantec
scale-out
ibm
mcafee
issc
ilm
nas
hans+ammitzbol
ndmp
gts
mark+taylor
sonas
anti-virus
policy-driven+automation
|
It's Tuesday, and you know what that means... IBM Announcements!
- IBM System Storage ProtecTIER
Today, IBM refreshed its IBM System Storage ProtecTIER data deduplication family with new hardware and software. On the hardware side, the [TS7650G gateway] now has 32 cores and 64GB of RAM, the [TS7650 Appliance] now has 24 cores and 64GB of RAM, and the [TS7610 Appliance Express] has 4 cores and up to 16GB of RAM.
On the software side, all of these now support Symantec's proprietary "OpenStorage" (OST) API. This applies across the board: the [Enterprise Edition], the [Appliance Edition], and the [Entry Edition]. For those using Symantec NetBackup as their backup software, the OST API can provide advantages over the standard VTL interface.
- IBM Systems Director Storage Control
The second announcement has an interesting twist. I could file this one in my "I Told You So" folder. Officially, it's called the [Cassandra Complex]: accurately predicting how something will turn out, but being unable to convince anyone else of what the future holds.
About ten years ago, I was asked to be lead architect of a new product to be called IBM TotalStorage Productivity Center, which was later renamed to IBM Tivoli Storage Productivity Center. This would combine three projects:
- Tivoli Storage Resource Manager (TSRM)
- Tivoli SAN Manager (TSANM)
- Multiple Device Manager (MDM)
The first two were based on Tivoli's internal GUI platform, and MDM was a plug-in for IBM Systems Director. I argued that administrators would want everything on a single pane of glass, and that we should bring all the components under a common GUI platform, such as IBM Systems Director. Unfortunately, management did not agree with me, preferring instead to leave each interface alone to minimize development effort. The only "unification" was to give them all similar-sounding names, four components packaged as a single product:
- Productivity Center for Data (formerly TSRM)
- Productivity Center for Fabric (formerly TSANM)
- Productivity Center for Disk (formerly MDM)
- Productivity Center for Replication (formerly MDM)
While this management decision certainly allowed version 1 to hit the market sooner, this was not a good "first impression" of the product for many of our clients.
In 2002, IBM acquired Trellisoft, Inc., which replaced the internally-developed TSRM with a much better interface, but again, this was a different GUI than the other components. For Version 2, a "launcher" was created that would launch the disparate interfaces of each component. At this point, we had development teams scattered across five locations, with the first two components being developed by the Tivoli software team, and the other two by the System Storage hardware team.
Oftentimes, when a technical lead architect and management do not agree, things do not end well. The lead architect has to leave the product, and management is forced to take alternative actions to keep the product going. In my case, management considered the idea of a common GUI an expensive "nice-to-have" luxury we could not afford, but I considered it a "must-have". I moved on to a new job within IBM, and management, unable to continue without my leadership, gave up and handed the entire project over to the Tivoli Software team.
The Tivoli Software team took a whiff of the pile of code and agreed that it stunk. Dusting off my original design documents, they discarded most of the code and re-wrote much of it from scratch, with a common database, common app server, and common GUI platform. Unfortunately, Productivity Center for Replication was held up waiting for some hardware prerequisites, but the other three components were packaged together as "Productivity Center v3 - Standard Edition", a big improvement over the prior versions.
In Version 4, TotalStorage Productivity Center was renamed to Tivoli Storage Productivity Center, and the Replication component was brought into the mix. A scaled-down version packaged as Productivity Center "Basic Edition" was made available as a hardware appliance named "System Storage Productivity Center" or SSPC. The idea was to provide a pre-installed 1U-high hardware console that had the basic functions of Productivity Center, with the option to upgrade to the full Tivoli Storage Productivity Center with just license keys.
So now, years later, management recognizes that a common GUI platform is more than just a "nice-to-have". IBM now supports three very specific use cases:
- 1. Administration for a single product
For small clients who might have only a single IBM product, IBM is now focused on making the GUI browser-based, specifically to work with the Mozilla Firefox browser, though any similar browser should work as well. The new IBM Storwize V7000 GUI is a good example of this. In this case, the browser serves as the common GUI platform.
- 2. Administration for both servers and storage devices
For mid-sized companies that have administrators managing both servers and storage, IBM announced this month the new [IBM Systems Director Storage Control v4.2.1] plug-in, which provides Tivoli Storage Productivity Center "Basic Edition" support. This allows admins already familiar with IBM Systems Director for managing their servers to also manage basic storage functions. This is the "I Told You So" moment: connecting server and storage administration under the IBM Systems Director management platform makes a lot of sense today, just as it did when I came up with the idea ten years ago! Hmmmm?
- 3. Administration for just the storage environment
For larger companies big enough to have separate server and storage admin teams, IBM continues to offer the full Tivoli Storage Productivity Center product for the storage admins. The most recent release enhanced the support for IBM DS8000, SVC, Storwize V7000 and XIV storage systems.
Today, analysts consider IBM's [Tivoli Storage Productivity Center] one of the leading products in its category. I am glad my original vision has finally come to life, even though it took a while longer than I expected.
To learn more about IBM storage hardware, software or services, see the updated [IBM System Storage] landing page.
technorati tags: IBM, ProtecTIER, TS7650G, TS7650, TS7610, Symantec, NetBackup, OpenStorage, API, OST, TPC, TSRM, Trellisoft, TSANM, SSPC, Systems Director, Storage Control, GUI
Tags: 
ts7650
trellisoft
tsrm
systems+director
symantec
deduplication
ibm
ost
ts7610
api
protectier
ts7650g
storage+control
tsanm
gui
netbackup
openstorage
tpc
sspc
|