Tony Pearson is a Master Inventor and Senior IT Architect for the IBM Storage product line at the
IBM Systems Client Experience Center in Tucson, Arizona, and a featured contributor
to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is
the author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson)
My books are available on Lulu.com! Order your copies today!
"The postings on this site solely reflect the personal views of each author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management."
(c) Copyright Tony Pearson and IBM Corporation.
All postings are written by Tony Pearson unless noted otherwise.
Tony Pearson is employed by IBM. Mentions of IBM Products, solutions or services might be deemed as "paid
endorsements" or "celebrity endorsements" by the US Federal Trade Commission.
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Well, I'm back safely from my tour of Asia. I am glad to report that Tokyo, Beijing and Kuala Lumpur are pretty much how I remember them from the last time I was there in each city. I have since been fighting jet lag by watching the last thirteen episodes of LOST season 6 and the series finale.
Recently, I have started seeing a lot of buzz around the term "Storage Federation". The concept is not new, but rather based on the work in database federation, first introduced in 1985 in [A federated architecture for information management] by Heimbigner and McLeod. For those not familiar with database federation, you can take several independent, autonomous databases and treat them as one big federated system. For example, this would allow you to issue a single query and get results across all the databases in the federated system. The advantage is that it is often easier to federate several disparate heterogeneous databases than to merge them into a single database. [IBM InfoSphere Federation Server] is a market leader in this space, with the capability to federate DB2, Oracle and SQL Server databases.
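To make the federation idea concrete, here is a minimal sketch in Python of a query layer fanning one query out to several autonomous databases and merging the results. The database files and the query are hypothetical, and a real product like InfoSphere Federation Server does far more (query rewriting, pushdown optimization, distributed joins); this only illustrates the core concept.

```python
import sqlite3

# Hypothetical member databases of the federation; each one is
# fully autonomous and knows nothing about the others.
MEMBER_DBS = ["sales_east.db", "sales_west.db", "sales_asia.db"]

def federated_query(sql):
    """Issue one query against every member database and merge
    the result sets, the way a federation layer would."""
    rows = []
    for db in MEMBER_DBS:
        conn = sqlite3.connect(db)
        try:
            rows.extend(conn.execute(sql).fetchall())
        finally:
            conn.close()
    return rows

# One query, answered across the whole federated system:
# federated_query("SELECT region, SUM(amount) FROM orders GROUP BY region")
```

Storage Federation applies the same principle to storage systems rather than databases, and several use cases follow naturally: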
Storage expansion: You want to increase the storage capacity of an existing storage system that cannot accommodate the total amount of capacity desired. Storage Federation allows you to add additional storage capacity by adding a whole new system.
Storage migration: You want to migrate from an aging storage system to a new one. Storage Federation allows the two systems to be joined, the data evacuated from the first onto the second, and the first system then removed (see the sketch after this list).
Safe system upgrades: System upgrades can be problematic for a number of reasons. Storage Federation allows a system to be removed from the federation and be re-inserted again after the successful completion of the upgrade.
Load balancing: Similar to storage expansion, but on the performance axis, you might want to add additional storage systems to a Storage Federation in order to spread the workload across multiple systems.
Storage tiering: In a similar light, storage systems in a Storage Federation could have different capacity/performance ratios that you could use for tiering data. This is similar to the idea of dynamically re-striping data across the disk drives within a single storage system, such as with 3PAR's Dynamic Optimization software, but extends the concept to cross storage system boundaries.
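Here is the sketch promised above: a toy model, in Python, of the storage-migration use case. All the class and method names are hypothetical; a real federation moves data in extents, throttles the background copy, and preserves host I/O throughout, but the essential trick is the same: hosts address the federation, never its members.

```python
class StorageSystem:
    def __init__(self, name):
        self.name = name
        self.volumes = {}                    # volume name -> data

    def copy_volume_from(self, other, vol):
        self.volumes[vol] = other.volumes.pop(vol)

class Federation:
    def __init__(self):
        self.members = []
        self.volume_map = {}                 # volume name -> owning system

    def add(self, system):
        self.members.append(system)

    def read(self, vol):
        # Hosts address the federation, never a member directly --
        # that indirection is what makes evacuation non-disruptive.
        return self.volume_map[vol].volumes[vol]

    def evacuate(self, old, new):
        """Storage migration: drain every volume off 'old' onto 'new',
        then retire 'old' from the federation."""
        for vol, owner in list(self.volume_map.items()):
            if owner is old:
                new.copy_volume_from(old, vol)   # background data copy
                self.volume_map[vol] = new       # remap after copy completes
        self.members.remove(old)                 # safe to power 'old' down

aging, replacement = StorageSystem("aging"), StorageSystem("replacement")
fed = Federation()
fed.add(aging); fed.add(replacement)
aging.volumes["vol1"] = b"application data"
fed.volume_map["vol1"] = aging
fed.evacuate(aging, replacement)
print(fed.read("vol1"))                      # hosts never noticed the move
```

Because the volume map is updated only after each copy completes, a host reading through the federation never sees an interruption.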
To some extent, IBM SAN Volume Controller (SVC), XIV, Scale-Out NAS (SONAS), and Information Archive (IA) offer most, if not all, of these capabilities. EMC claims its VPLEX will be able to offer storage federation, but only with other VPLEX clusters, which brings up a good question: what about heterogeneous storage federation? Before anyone accuses me of throwing stones at glass houses, let's take a look at each IBM solution:
IBM SAN Volume Controller
The IBM SAN Volume Controller has been doing storage federation since 2003. Not only can the SAN Volume Controller bring together storage from a variety of heterogeneous storage systems, the SVC cluster itself can be a mix of different hardware models. You can have a 2145-8A4 node pair, a 2145-8G4 node pair, and the new 2145-CF8 node pair, all combined into a single SVC cluster. Upgrading SVC hardware nodes in an SVC cluster is always non-disruptive.
IBM XIV storage system
The IBM XIV has two kinds of independent modules. Data modules have processor, cache and 12 disks. Interface modules are data modules with additional processor, FC and Ethernet (iSCSI) adapters. Because these two module types play different roles in an XIV "colony", the number of each type is predetermined. Entry-level six-module systems have 2 interface and 4 data modules. Full 15-module systems have 6 interface and 9 data modules. Individual modules can be added or removed non-disruptively in an XIV.
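Just to spell out the arithmetic of the two configurations mentioned above, here is a trivial sketch; only the six- and fifteen-module mixes and the 12-disks-per-module figure come from the text, so treat the printout as illustration.

```python
# Fixed interface/data module mix for the two XIV configurations above.
XIV_CONFIGS = {
    6:  {"interface": 2, "data": 4},    # entry-level system
    15: {"interface": 6, "data": 9},    # full system
}

for total, mix in XIV_CONFIGS.items():
    assert mix["interface"] + mix["data"] == total
    print(f"{total} modules: {mix['interface']} interface, "
          f"{mix['data']} data, {total * 12} disks")
```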
IBM Scale-Out NAS
The SONAS comprises three kinds of nodes that work together in concert: a management node, one or more interface nodes, and two or more storage nodes. The storage nodes are paired to manage up to 240 drives in a storage pod. Individual interface or storage nodes can be added or removed non-disruptively in the SONAS. The underlying technology, the General Parallel File System, has been doing storage federation since 1996 for some of the largest top-500 supercomputers in the world.
IBM Information Archive (IA)
For the IA, there are 1, 2 or 3 nodes, which manage a set of collections. A collection can be either file-based, using industry-standard NAS protocols, or object-based, using the popular System Storage™ Archive Manager (SSAM) interface. Normally, you have as many collections as you have nodes, but each node is powerful enough to manage two collections, providing N-1 availability. This allows a node to be removed, and a new node added into the IA "colony", in a non-disruptive manner.
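A minimal sketch of that N-1 behavior: each node normally serves one collection but is sized to carry two, so when a node leaves the "colony" its collection can be rehomed to a survivor. The function and names are hypothetical, not an IA interface.

```python
def rehome_collections(assignment, departing_node):
    """assignment maps node -> list of collections it serves."""
    orphans = assignment.pop(departing_node)
    for collection in orphans:
        # Pick the least-loaded surviving node; each can carry up to 2.
        target = min(assignment, key=lambda n: len(assignment[n]))
        assert len(assignment[target]) < 2, "no N-1 headroom left"
        assignment[target].append(collection)
    return assignment

nodes = {"node1": ["coll1"], "node2": ["coll2"], "node3": ["coll3"]}
print(rehome_collections(nodes, "node2"))
# {'node1': ['coll1', 'coll2'], 'node3': ['coll3']}
```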
Even in an ant colony, there are only a few types of ants, with typically one queen, several males, and lots of workers. But all the ants are red; you don't see colonies that mix different species of ants. For databases, federation was a way to avoid the much harder task of merging databases from different platforms. For storage, I am surprised people have latched on to the term "federation", given our mixed results in the other "federations" we have formed, which I have conveniently (IMHO) ranked from least effective to most effective:
The Union of Soviet Socialist Republics (USSR)
My father used to say, "If the Soviet Union were in charge of the Sahara desert, they would run out of sand in 50 years." The [Soviet Union] actually lasted 68 years, from 1922 to 1991.
The United Nations (UN)
After the previous League of Nations failed, the UN was formed in 1945 to facilitate cooperation in international law, international security, economic development, social progress and human rights, to achieve world peace by stopping wars between countries, and to provide a platform for dialogue.
The European Union (EU)
With the collapse of the Greek economy, and the [rapid growth of debt] in the UK, Spain and France, there are concerns that the EU might not last past 2020.
The United States of America (USA)
My own country is a federation of states, each with its own government. California's financial crisis was compared to the one in Greece. My own state of Arizona is under boycott from other states because of its recent [immigration law]. However, I think the US has managed better than the EU because it has evolved over the past 200 years.
The Organization of the Petroleum Exporting Countries [OPEC]
Technically, OPEC is not a federation of cooperating countries, but rather a cartel of competing countries that have agreed on total industry output of oil to increase individual members' profits. Note that it was a non-OPEC company, BP, that could not "control their output" in what has now become the worst oil spill in US history. OPEC was formed in 1960, and is expected to collapse sometime around 2030 when the world's oil reserves run out. Matt Savinar has a nice article on [Life After the Oil Crash].
United Federation of Planets
The [Federation] fictitiously described in the Star Trek series appears to work well, offering an optimistic view of what federations could become if you let them evolve long enough.
Given the mixed results with "federation", I think I will avoid using the term for storage, and stick to the original term "scale-out architecture".
Here I am, day 11 of a 17-day business trip, on the last leg of the trip this week, in Kuala Lumpur, Malaysia. I have been flooded with requests to give my take on EMC's latest re-interpretation of storage virtualization, VPLEX.
I'll leave it to my fellow IBM Master Inventor Barry Whyte to cover the detailed technical side-by-side comparison. Instead, I will focus on the business side of things, using Simon Sinek's Why-How-What sequence. Here is a [TED video] from Garr Reynolds' post [The importance of starting from Why].
Let's start with the problem we are trying to solve.
Problem: migration from old gear to new gear, old technology to new technology, from one vendor to another vendor, is disruptive, time-consuming and painful.
Given that IT storage is typically replaced every 3-5 years, pretty much every company with an internal IT department has this problem, the exceptions being those companies that don't last that long and those that use public cloud solutions. IT storage can be expensive, so companies would like their new purchases to be fully utilized on day 1, and completely empty on day 1500 when the lease expires. I have spoken to clients who have spent 6-9 months planning for the replacement or removal of a storage array.
A solution that makes data migration non-disruptive would benefit clients (making it easier for their IT staff to keep their data center modern and current) as well as vendors (reducing the obstacles to selling and deploying new features and functions). Storage virtualization can be employed to help solve this problem. I define virtualization as "technology that makes one set of resources look and feel like a different set of resources, preferably with more desirable characteristics". By making different storage resources, old and new, look and feel like a single type of resource, migration can be performed without disrupting applications.
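Here is a minimal sketch, in Python, of the definition above: a mapping layer makes two different arrays look like one set of virtual volumes, so the data behind a virtual LUN can move while hosts keep addressing the same name. This illustrates the concept only, not any vendor's implementation.

```python
class VirtualizationLayer:
    def __init__(self):
        self.mapping = {}                  # virtual LUN -> (array, physical LUN)

    def map(self, vlun, array, plun):
        self.mapping[vlun] = (array, plun)

    def read(self, vlun):
        array, plun = self.mapping[vlun]   # hosts only ever see 'vlun'
        return array[plun]

    def migrate(self, vlun, new_array, new_plun):
        """Move the data behind 'vlun' without the host noticing."""
        old_array, old_plun = self.mapping[vlun]
        new_array[new_plun] = old_array.pop(old_plun)  # copy the blocks
        self.mapping[vlun] = (new_array, new_plun)     # then switch the map

old_array, new_array = {"lun0": b"payroll volume"}, {}
virt = VirtualizationLayer()
virt.map("v1", old_array, "lun0")
virt.migrate("v1", new_array, "lun0")      # host I/O to "v1" never stops
print(virt.read("v1"))
```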
Before VPLEX, here is a breakdown of each solution:

Why?
- IBM: Non-disruptive tech refresh, and a unified platform to provide management and functionality across heterogeneous storage.
- HDS: Non-disruptive tech refresh, and a unified platform to provide management and functionality between internal tier-1 HDS storage and external tier-2 heterogeneous storage.
- EMC: Non-disruptive tech refresh, with a unified multi-pathing driver that allows host attachment of heterogeneous storage.

How?
- IBM: New in-band storage virtualization device
- HDS: Add in-band storage virtualization to an existing storage array
- EMC: New out-of-band storage virtualization device with new "smart" SAN switches

What?
- IBM: SAN Volume Controller
- HDS: USP-V and USP-VM
- EMC: Invista
For IBM, the motivation was clear: protect customers' existing investments in older storage arrays and introduce new IBM storage with a solution that allows both to be managed through a single set of interfaces and provides a common set of functionality, improving capacity utilization and availability. IBM SAN Volume Controller eliminated vendor lock-in, giving clients a choice of multi-pathing driver and allowing any-to-any migration and copy services. For example, IBM SVC can be used to help migrate data from an old HDS USP-V to a new HDS USP-V.
With EMC, however, the motivation appeared to be protecting software revenues from its PowerPath multi-pathing driver and its TimeFinder and SRDF copy services. Back in 2005, when EMC Invista was first announced, these three products represented 60 percent of EMC's bottom-line profit. (Ok, I made that last part up, but you get my point! EMC charges a lot for these.)
Back in 2006, fellow blogger Chuck Hollis (EMC) suggested that SVC was just a [bump in the wire] which could not possibly improve performance of existing disk arrays. IBM showed clients that putting cache (SVC) in front of other cache (back-end devices) does indeed improve performance, in the same way that multi-core processors successfully use L1/L2/L3 cache. Now, EMC is claiming their cache-based VPLEX improves performance of back-end disk. My how EMC's story has changed!
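For intuition, here is a back-of-the-envelope sketch of why cache in front of cache helps. The latencies and hit rates are made-up round numbers, not measurements of any product.

```python
# Hypothetical service times in milliseconds.
FRONT_HIT, BACK_HIT, DISK = 0.1, 0.5, 5.0

def avg_latency(front_hit_rate, back_hit_rate):
    """Average I/O latency with a front cache layered over a
    back-end cache, which in turn sits over spinning disk."""
    miss_front = 1.0 - front_hit_rate
    return (front_hit_rate * FRONT_HIT
            + miss_front * back_hit_rate * BACK_HIT
            + miss_front * (1.0 - back_hit_rate) * DISK)

print(avg_latency(0.0, 0.7))   # back-end cache alone: 1.85 ms
print(avg_latency(0.5, 0.7))   # add a front cache:    0.975 ms
```

Every hit absorbed by the front layer is an I/O the slower layers never see, which is exactly the L1/L2/L3 argument.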
So now, EMC announces VPLEX, which sports a blend of SVC-like and Invista-like characteristics. Based on blogs, tweets and publicly available materials I found on EMC's website, I have been able to put together the following comparison. (Of course, VPLEX is not yet generally available, so what is eventually delivered may differ.)
Hardware
- IBM SVC: Scalable, 1 to 4 node-pairs
- EMC Invista: One size fits all, single pair of CPCs
- EMC VPLEX: SVC-like, 1 to 4 director-pairs

SAN Fabric
- IBM SVC: Works with any SAN switches or directors
- EMC Invista: Required special "smart" switches (vendor lock-in)
- EMC VPLEX: SVC-like, works with any SAN switches or directors

Multi-pathing driver
- IBM SVC: Broad selection: the IBM Subsystem Device Driver (SDD) offered at no additional charge, OS-native drivers (Windows MPIO, AIX MPIO, Solaris MPxIO, HP-UX PV-Links, VMware MPP, Linux DM-MP), and the commercial third-party driver Symantec DMP
- EMC Invista: Limited selection, with focus on the priced PowerPath driver
- EMC VPLEX: Invista-like, PowerPath and Windows MPIO

Cache
- IBM SVC: Read cache, and a choice of fast-write or write-through cache, offering the ability to improve performance
- EMC Invista: No cache; the Split-Path architecture cracked open Fibre Channel packets in flight, delayed every I/O by about 20 microseconds, and redirected modified packets to the appropriate physical device
- EMC VPLEX: SVC-like; read and write-through cache, offering the ability to improve performance

Space-Efficient Point-in-Time copies
- IBM SVC: FlashCopy supports up to 256 space-efficient targets, copies of copies, read-only or writeable targets, and incremental persistent pairs
- EMC Invista: No
- EMC VPLEX: Like Invista, no

Remote distance mirror
- IBM SVC: Choice of Metro Mirror (synchronous, up to 300km) and Global Mirror (asynchronous), or use the functionality of the back-end storage arrays
- EMC Invista: No native support; use the functionality of the back-end storage arrays, or purchase the separate EMC RecoverPoint product to cover this lack of functionality
- EMC VPLEX: Limited synchronous remote-distance mirror within VPLEX (up to 100km only); no native asynchronous support; use the functionality of the back-end storage arrays

Thin Provisioning
- IBM SVC: Provides thin provisioning to devices that don't offer it natively
- EMC Invista: No
- EMC VPLEX: Like Invista, no

Campus-wide access
- IBM SVC: Split-Cluster allows concurrent read/write access to data from hosts at two different locations several miles apart
- EMC Invista: I don't think so
- EMC VPLEX: VPLEX Metro, similar in concept but implemented differently

Non-disruptive tech refresh
- IBM SVC: Can upgrade or replace storage arrays, SAN switches, and even the SVC nodes' software AND hardware themselves, non-disruptively
- EMC Invista: Tech refresh for storage arrays, but not for the Invista CPCs
- EMC VPLEX: Tech refresh of back-end devices, and upgrade of VPLEX software, non-disruptively. Not clear whether the VPLEX engines themselves can be upgraded non-disruptively like the SVC.

Heterogeneous Storage Support
- IBM SVC: Broad support of over 140 different storage models from all major vendors, including all CLARiiON, Symmetrix and VMAX models from EMC, and storage from many smaller startups you may not have heard of
- EMC Invista: Limited support
- EMC VPLEX: Invista-like. VPLEX claims to support a variety of arrays from a variety of vendors, but as far as I can find, only the DS8000 is supported from the list of IBM devices. Fellow blogger Barry Burke (EMC) suggests [putting SVC between VPLEX and third party storage devices] to get the heterogeneous coverage most companies demand.

Back-end storage requirement
- IBM SVC: Must define quorum disks on any IBM or non-IBM back-end storage array; SVC can run entirely on non-IBM storage arrays
- EMC Invista: None
- EMC VPLEX: HP SVSP-like, requires at least one EMC storage array to hold metadata

Internal storage
- IBM SVC: The 2145-CF8 model supports up to four solid-state drives (SSD) per node that can be treated as managed disks to store end-user data
- EMC Invista: None
- EMC VPLEX: Invista-like. VPLEX has an internal 30GB SSD, but it is used only for the operating system and logs, not for end-user data.
In-band virtualization solutions from IBM and HDS dominate the market. Being able to migrate data from old devices to new ones non-disruptively turned out to be only the [tip of the iceberg] of benefits from storage virtualization. In today's highly virtualized server environment, being able to non-disruptively migrate data comes in handy all the time. SVC is one of the best storage solutions for VMware, Hyper-V, Xen and PowerVM environments. EMC watched and learned from the shadows, taking notes on what people like about the SVC, and decided to follow IBM's time-tested leadership with a similar offering.
EMC re-invented the wheel, and it is round. On a scale from Invista (zero) to SVC (ten), I give EMC's new VPLEX a six.
Continuing my drawn-out coverage of IBM's big storage launch of February 9, today I'll cover the IBM System Storage TS7680 ProtecTIER data deduplication gateway for System z.
On the host side, the TS7680 connects to mainframe systems running z/OS or z/VM over FICON attachment, emulating an automated tape library with 3592-J1A devices. The TS7680 includes two controllers that emulate the 3592 C06 model, with 4 FICON ports each. Each controller emulates up to 128 virtual 3592 tape drives, for a total of 256 virtual drives per TS7680 system. The mainframe sees up to 1 million virtual tape cartridges, each with up to 100GB raw capacity before compression. For z/OS, the automated library has the full SMS Tape and Integrated Library Management capability you would expect.
Inside, the two control units are both connected to a redundant cluster pair of ProtecTIER engines running the HyperFactor deduplication algorithm, which processes the deduplication inline, as data is ingested, rather than using the post-processing approach of other deduplication solutions. These engines are similar to the TS7650 gateway machines for distributed systems.
On the back end, these ProtecTIER deduplication engines are then connected to external disk, up to 1PB. If you get a 25x data deduplication ratio on your data, that would be 25PB of mainframe data stored on only 1PB of physical disk. The disk can be any disk supported by ProtecTIER over FCP protocol: not just the IBM System Storage DS8000, but also the IBM DS4000, DS5000 or IBM XIV storage system, various models of EMC and HDS, and of course the IBM SAN Volume Controller (SVC) with all of its supported disk systems.
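As an illustration of inline deduplication in general, here is a toy sketch that detects duplicate chunks at ingest time, before anything is written, which is what distinguishes inline designs from post-process ones. Note that HyperFactor actually uses a similarity-matching approach rather than the simple hash-identity index shown here, so treat this strictly as a conceptual stand-in; it also reproduces the 25x capacity arithmetic from the paragraph above.

```python
import hashlib

class InlineDedupStore:
    """Toy inline dedup: duplicates are caught at ingest, so no
    second post-process pass over the stored data is needed."""

    def __init__(self):
        self.chunks = {}                  # digest -> chunk bytes
        self.logical = self.physical = 0

    def ingest(self, chunk: bytes):
        self.logical += len(chunk)
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in self.chunks:     # only new data hits the disk
            self.chunks[digest] = chunk
            self.physical += len(chunk)
        return digest                     # reference kept in metadata

store = InlineDedupStore()
for _ in range(25):                       # same backup sent 25 times
    store.ingest(b"weekly full backup of the payroll volume")
print(store.logical / store.physical)     # 25.0 -- a "25x" dedup ratio,
# i.e. 25PB of mainframe backup data held on 1PB of physical disk
```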
"With Cisco Systems, EMC, and VMware teaming up to sell integrated IT stacks, Oracle buying Sun Microsystems to create its own integrated stacks, and IBM having sold integrated legacy system stacks and rolling in profits from them for decades, it was only a matter of time before other big IT players paired off."
Once again we are reminded that IBM, as an IT "supermarket", is able to deliver integrated software/server/storage solutions, and our competitors are scrambling to form their own alliances to be "more like IBM." This week, IBM announced new ordering options for storage software with System x servers, including BladeCenter blade servers and IntelliStation workstations. Here's a quick recap:
FastBack v6.1
IBM Tivoli Storage Manager FastBack v6.1 supports both Windows and Linux! FastBack is a data protection solution for ROBO (Remote Office, Branch Office) locations. It can protect Microsoft Exchange, Lotus Domino, DB2, and Oracle applications. FastBack can provide full volume-level recovery, as well as individual file recovery, and in some cases Bare Machine Recovery. FastBack v6.1 can run stand-alone, or be integrated with a full IBM Tivoli Storage Manager (TSM) unified recovery management solution.
FlashCopy Manager v2.1
FlashCopy Manager uses point-in-time copy capabilities, such as SnapShot or FlashCopy, to protect application data using an application-aware approach for Microsoft Exchange, Microsoft SQL Server, DB2, Oracle, and SAP. It can be used with IBM SAN Volume Controller (SVC), DS8000 series, DS5000 series, DS4000 series, DS3000 series, and XIV storage systems. When applicable, FlashCopy Manager coordinates its work with Microsoft's Volume Shadow Copy Services (VSS) interface. FlashCopy Manager can provide data protection using just point-in-time disk-resident copies, or can be integrated with a full IBM Tivoli Storage Manager (TSM) unified recovery management solution to move backup images to external storage pools, such as low-cost, energy-efficient tape cartridges.
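The heart of the application-aware approach is the sequencing: flush and quiesce the application, take the near-instantaneous point-in-time copy, then resume. Here is a minimal sketch of that sequence; all the hooks below are hypothetical stand-ins, and in practice FlashCopy Manager drives this through interfaces such as Microsoft VSS rather than anything this simple.

```python
import contextlib

class App:                         # stand-in for a database or mail server
    def flush_buffers(self):  print("flushed in-flight transactions")
    def suspend_writes(self): print("writes suspended")
    def resume_writes(self):  print("writes resumed")

class Array:                       # stand-in for the disk system
    def flashcopy(self, src, tgt):
        print(f"point-in-time copy {src} -> {tgt} (logically instantaneous)")

@contextlib.contextmanager
def quiesced(app):
    """Hold the application in a consistent, flushed state."""
    app.flush_buffers()
    app.suspend_writes()
    try:
        yield
    finally:
        app.resume_writes()        # the app is paused only for seconds

def application_aware_backup(app, array, source_lun, target_lun):
    with quiesced(app):
        array.flashcopy(source_lun, target_lun)
    # The target copy can now be verified or moved to tape at leisure.

application_aware_backup(App(), Array(), "lun_db", "lun_db_snap")
```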
General Parallel File System (GPFS) v3.3 Multiplatform
GPFS supports AIX, Linux, and Windows! Version 3.3 adds support for Windows Server 2008 on 64-bit chipset architectures from AMD and Intel. Now you can have a common GPFS cluster with AIX, Linux and Windows servers all sharing and accessing the same files. A GPFS cluster can have up to 256 file systems, and each file system can hold up to 1 billion files and up to 1PB of data, with up to 256 snapshots. GPFS can be used stand-alone, or integrated with a full IBM Tivoli Storage Manager (TSM) unified recovery management solution with parallel backup streams.
For full details on these new ordering options, see the IBM [Press Release].
Continuing my coverage of the Data Center Conference, Dec 1-4, 2009, here in Las Vegas, this post focuses on data protection strategies.
Two analysts co-presented this session, which provided an overview of various data protection techniques. A quick survey of the audience found that 27 percent have only a single data center, 13 percent have load sharing of their mission-critical applications across multiple data centers, and the rest use a failover approach to either development/test resources, standby resources or an outsourced facility.
There are basically five ways to replicate data to secondary locations:
Array-based replication. Many high-end disk arrays offer this feature; IBM's DS8000 and XIV both have synchronous and asynchronous mirroring (a sketch of the synchronous/asynchronous distinction follows this list). Data deduplication can help reduce the amount of data transmitted between locations.
NAS-based replication. I consider this just another variant of the first, but this can be file-based instead of block-based, and can often be done over the public internet rather than dark fiber.
Network-based replication. This is how IBM SAN Volume Controller, EMC RecoverPoint, and others replicate. The analysts liked this approach because it is storage-vendor-independent.
Host-based replication. This is often done by the host's Operating System, such as through a Logical Volume Manager (LVM) component.
Application/Database replication. There are a variety of techniques, including log shipping of transactions, SQL replication, and active/active application-specific implementations.
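Here is the sketch promised above: the synchronous/asynchronous distinction that cuts across all five approaches, in a few lines of Python. The 'local' and 'remote' dictionaries stand in for whichever layer (array, network appliance, host, or application) does the replicating; a real implementation would batch, order, and checkpoint the updates.

```python
import queue, threading

replication_queue = queue.Queue()

def write_sync(local, remote, key, value):
    """Synchronous: acknowledge only after BOTH copies are written.
    Zero data loss, but every write pays the round-trip latency."""
    local[key] = value
    remote[key] = value              # wait for the remote site
    return "ack"

def write_async(local, key, value):
    """Asynchronous: acknowledge after the local write; ship the update
    in the background. Fast, but a disaster can lose queued updates."""
    local[key] = value
    replication_queue.put((key, value))
    return "ack"

def replicator(remote):              # background thread drains the queue
    while True:
        key, value = replication_queue.get()
        remote[key] = value
        replication_queue.task_done()

local, remote = {}, {}
threading.Thread(target=replicator, args=(remote,), daemon=True).start()
write_sync(local, remote, "a", 1)
write_async(local, "b", 2)
replication_queue.join()             # a real design tracks RPO instead
```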
The analysts felt that "DR Testing" has become a lost art. People are just not doing it as often as they should, or not doing it properly, resulting in surprises when a real disaster strikes.
A question came up about the confusion between "Disaster Recovery Tiers" and the Uptime Institute's "Data Center Facilities Tiers". I agree this is confusing. Many clients label their most mission-critical applications Tier 1, less critical Tier 2, and least critical Tier 3. In 1983, the IBM user group GUIDE came up with "Business Continuity Tiers", where Tier 1 was the slowest recovery from manual tape, and Tier 7 was the fastest recovery with completely automated site, network, server and storage failover. For data center facility tiers, however, Uptime ranks the simplest, least available (99.3 percent uptime) data center as Tier 1, and the most advanced, redundant, highest-availability (99.995 percent) data center as Tier 4. This just goes to show that when one person starts using "Tier 1" or "Tier 4" terminology, it can be misinterpreted by others.
This week several IBM executives will present at the 28th Annual Data Center Conference here in Las Vegas. Here is a quick recap:
Steve Sams: Data Center Cost Saving Actions Your CFO Will Love
A startling 78 percent of today's data centers were built in the last century, before the "dot com" era and the adoption of high-density blade servers. IBM Vice President of Global Site and Facility Services, Steve Sams, presented actions that can help extend the life of existing data centers, help rationalize the infrastructure across the company, and design a new data center that is flexible and responsive to changing needs.
In one example, an 85,000 square foot datacenter in Lexington had reached 98 percent capacity based on power/cooling requirements. They estimated it would take $53 million US dollars to either upgrade the facility or build a new facility to meet projected growth. Instead, IBM was able to consolidate servers six-to-one, an 85 percent reduction. IBM also made changes to the cooling equipment, redirected airflow, changed out the tiles, re-oriented the servers for more optimal placement, and implemented measurement and management tools. The end result? The facility now has eight times the compute capability and enjoys 15 percent headroom for additional growth. All this for an investment of only $1.5 million US dollars, instead of $53 million.
IBM builds hundreds of data centers for clients large and small. In addition to the "Portable Modular Data Center" (PMDC) shipping container on display at the Solution Showcase, IBM offers the "Scalable Modular Data Center", a turn-key system in a small 500 to 2500 square foot size for smaller customers. For larger deployments, the "Enterprise Modular Data Center" offers standardized deployments in 5000 square foot increments. IBM also offers "High Density Zones", which can be a perfect way to avoid a full site retrofit.
Helene Armitage: IT-wide Virtualization
Helene is IBM General Manager of the newly formed IBM System Software division. A smarter planet will require a more dynamic infrastructure, and IBM's approach is to help clients through the virtualization journey. The virtualization of resources, workloads and business processes will require end-to-end management. To help, IBM offers IBM Systems Director.
Helene indicated that there are four stages of adoption:
Physical consolidation - VMware and Hyper-V are the latest examples of running many applications on fewer physical servers. Of course, IBM has been doing this for decades with mainframes, and has had virtualization on System i and System p POWER systems as well. A quick survey of the audience found that about 20 percent were doing server virtualization on non-x86 platforms (for example, PowerVM or System z mainframe z/VM).
Pools of resources - SAN Volume Controller is an example solution to manage storage as a pool of disparate storage resources. Supercomputers manage pools of servers.
Integrated Service Management - in the past, resources were managed by domain, resulting in islands of management. Now, with IBM Systems Director, you can manage AIX, IBM i, Linux and Windows servers, including non-IBM servers running Linux and Windows. Service management can provide monitoring, provisioning, a service catalog, self-service, and business-aligned processes.
Cloud computing - Helene agreed that not everyone will get to this stage. Some will adopt cloud computing, whether public, private or some kind of hybrid, and others may be fine at stage 3.
For those clients that want assistance, IBM offers three levels of help:
Help me decide what is best for me
Help me implement what I have decided to do
Help me manage and run my operations
With IBM's compelling vision for the future, best-of-breed solutions, leadership in management software, extensive experience in services, and solid business and industry knowledge, it makes sense to tap IBM for help with your next IT transformation.
Well, it's Tuesday again, and today we had our third big storage launch of 2009! A lot was announced today as part of IBM's big "Dynamic Infrastructure" marketing campaign. I will focus on just the disk-related announcements:
IBM System Storage DS8700
IBM adds a new model to its DS8000 series with the [IBM System Storage DS8700]. Earlier this month, fellow blogger and arch-nemesis Barry Burke from EMC posted [R.I.P DS8300], based on the mistaken assumption that the new DS8700 meant the DS8300 was going away, or that anyone who bought a DS8300 recently would be out of luck. Obviously, I could not respond until today's announcement, as the last thing I want to do is lose my job by disclosing confidential information. BarryB is wrong on both counts:
IBM will continue to sell the DS8100 and DS8300, in addition to the new DS8700.
Clients can upgrade their existing DS8100 or DS8300 systems to DS8700.
BarryB's latest post [What's In a Name - DS8700] is fair game, given all the fun and ridicule everyone had at his expense over EMC's "V-Max" name.
So the DS8700 is new hardware with only 4 percent new software. On the hardware side, it uses faster POWER6 processors instead of POWER5+, has faster PCI-e buses instead of the RIO-G loops, and faster four-port device adapters (DAs) for added bandwidth between cache and drives. The DS8700 can be ordered as a single-frame dual 2-way that supports up to 128 drives and 128GB of cache, or as a dual 4-way, consisting of one primary frame, and up to four expansion frames, with up to 384GB of cache and 1024 drives.
Not mentioned explicitly in the announcements were the things the DS8700 does not support:
ESCON attachment - Now that FICON is well-established for the mainframe market, there is no need to support the slower, bulkier ESCON options. This greatly reduced testing effort. The 2-way DS8700 can support up to 16 four-port FICON/FCP host adapters, and the 4-way can support up to 32 host adapters, for a maximum of 128 ports. The FICON/FCP host adapter ports can auto-negotiate between 4Gbps, 2Gbps and 1Gbps as needed.
LPAR mode - When IBM and HDS introduced LPAR mode back in 2004, it sounded like a great idea the engineers came up with. Most other major vendors followed our lead to offer similar "partitioning". However, it turned out to be what we call in the storage biz a "selling apple" not a "buying apple". In other words, something the salesman can offer as a differentiating feature, but that few clients actually use. It turned out that supporting both LPAR and non-LPAR modes merely doubled the testing effort, so IBM got rid of it for the DS8700.
Update: I have been reminded that both IBM and HDS delivered LPAR mode within a month of each other back in 2004, so it was wrong for me to imply that HDS followed IBM's lead when obviously development happened in both companies for the most part concurrently prior to that. EMC was late to the "partition" party, but who's keeping track?
Initial performance tests show up to 50 percent improvement for random workloads, up to 150 percent improvement for sequential workloads, and up to 60 percent improvement in background data movement for FlashCopy functions. The results varied slightly between Fixed Block (FB) LUNs and Count-Key-Data (CKD) volumes, and I hope to see some SPC-1 and SPC-2 benchmark numbers published soon.
The DS8700 supports Metro Mirror, Global Mirror, and Metro/Global Mirror with the rest of the DS8000 series, as well as the ESS model 750, ESS model 800 and DS6000 series.
New 600GB FC and FDE drives
IBM now offers [600GB drives] for the DS4700 and DS5020 disk systems, as well as the EXP520 and EXP810 expansion drawers. In each case, we are able to pack up to 16 drives into a 3U enclosure.
Personally, I think the DS5020 should have been given a DS4xxx designation, as it resembles the DS4700 more than the other models of the DS5000 series. Back in 2006-2007, I was the marketing strategist for the IBM System Storage product line, and part of my job involved all of the meetings to name or rename products. Mostly I gave reasons why products should NOT be renamed, and why it was important to name products correctly at the beginning.
IBM System Storage SAN Volume Controller hardware and software
Fellow IBM Master Inventor Barry Whyte has been covering the latest on the [SVC 2145-CF8 hardware]. IBM put out a press release last week, and today is the formal announcement with prices and details. Barry's latest post, [SVC CF8 hardware and SSD in depth], covers just part of the entire announcement.
The other part of the announcement was the [SVC 5.1 software], which can be loaded on earlier SVC models 8F2, 8F4, and 8G4 to gain better performance and functionality.
To avoid confusion on what is hardware machine type/model (2145-CF8 or 2145-8A4) and what is software program (5639-VC5 or 5639-VW2), IBM has introduced two new [Solution Offering Identifiers]:
5465-028 Standard SAN Volume Controller
5465-029 Entry Edition SAN Volume Controller
The latter is designed for smaller deployments, supports only a single SVC node-pair managing up to 150 disk drives, and is available in Raven Black or Flamingo Pink.
EXN3000 and EXP5060 Expansion Drawers
IBM offers the [EXN3000 for the IBM N series]. These expansion drawers can pack 24 drives into a 4U enclosure. The drives can be either all-SAS or all-SATA, in 300GB, 450GB, 500GB and 1TB capacities.
The [EXP5060 for the IBM DS5000 series] is a high-density expansion drawer that can pack up to 60 drives into a 4U enclosure. A DS5100 or DS5300
can handle up to eight of these expansion drawers, for a total of 480 drives.
IBM System Storage Productivity Center v1.4
The latest [System Storage Productivity Center (SSPC) v1.4] can manage all of your DS3000, DS4000, DS5000, DS6000, DS8000 series disk, and SAN Volume Controller. You can get the SSPC built in two modes:
Pre-installed with Tivoli Storage Productivity Center Basic Edition. Basic Edition can be upgraded with license keys to the Data, Disk and Standard Editions, extending reporting and management to XIV, N series, and non-IBM disk systems.
Pre-installed with Tivoli Key Lifecycle Manager (TKLM). This can be used to manage the Full Disk Encryption (FDE) encryption-capable disk drives in the DS8000 and DS5000, as well as LTO and TS1100 series tape drives.
IBM Tivoli Storage FlashCopy Manager v2.1
The [IBM Tivoli Storage FlashCopy Manager V2.1] replaces two products with one. IBM used to offer IBM Tivoli Storage Manager for Copy Services (TSM for CS), which protected Windows application data, and IBM Tivoli Storage Manager for Advanced Copy Services (TSM for ACS), which protected AIX application data.
The new product has some excellent advantages. FlashCopy Manager offers application-aware backup of LUNs containing SAP, Oracle, DB2, SQL Server and Microsoft Exchange data. It supports the IBM DS8000, SVC and XIV point-in-time copy functions, as well as the Volume Shadow Copy Services (VSS) interfaces of the IBM DS5000, DS4000 and DS3000 series disk systems. It is priced by the number of TB you copy, not by the speed or number of CPU processors inside the server.
Don't let the name fool you. IBM FlashCopy Manager does not require that you use Tivoli Storage Manager (TSM) as your backup product. You can run IBM FlashCopy Manager on its own, and it will manage your FlashCopy target versions on disk, and these can be backed up to tape or another disk using any backup product. However, if you are lucky enough to also be using TSM, then there is optional integration that allows TSM to manage the target copies, move them to tape, inventory them in its DB2 database, and provide complete reporting.
Yup, that's a lot to announce in one day. And this was just the disk-related portion of the launch!
Well, it's Wednesday! Normally, IBM makes its announcements on Tuesdays, but this week that landed on the 13th, and some people are superstitious, so we pushed it back to today. Fellow IBM Master Inventor Barry Whyte starts the first in a series of posts with: New SVC v5 CF8 node with native SSD support.
There are really two separate items being announced for the IBM System Storage SAN Volume Controller (SVC):
SVC v5 software
The software moves from a 32-bit kernel to a 64-bit kernel. Fortunately, IBM had the foresight back in 2005 to anticipate this, so models 8F2, 8F4 and 8G4 can be upgraded to this new software level and gain the new functionality, because those models have 64-bit capable processors. Those with the six-year-old 4F2 model will continue to run SVC 4.3.1, but should consider that it's about time to upgrade.
New 2145-CF8 model
The CF8 is based on the IBM System x3550 M2. Each node can have up to 4 solid-state drives (SSDs) that can be treated as SVC Managed Disk Groups. Virtual disks can easily be migrated from hard disk drives (HDD) over to SSD, processed, and then moved back to HDD. By treating the SSDs as managed disks, rather than as an extension of the cache, we are able to support all of the SVC features and functions in a seamless manner.
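Conceptually, because the SSDs form an ordinary Managed Disk Group, "promote to SSD" is just another migration between groups. Here is a rough sketch of the HDD-to-SSD-and-back workflow; the names are hypothetical, not the real SVC command set.

```python
class SVCCluster:
    def __init__(self):
        self.mdisk_groups = {"hdd_pool": set(), "ssd_pool": set()}
        self.vdisk_home = {}                     # vdisk -> group name

    def create_vdisk(self, vdisk, group="hdd_pool"):
        self.mdisk_groups[group].add(vdisk)
        self.vdisk_home[vdisk] = group

    def migrate_vdisk(self, vdisk, target_group):
        """Extents are copied in the background; hosts see no outage."""
        source = self.vdisk_home[vdisk]
        self.mdisk_groups[source].discard(vdisk)
        self.mdisk_groups[target_group].add(vdisk)
        self.vdisk_home[vdisk] = target_group

svc = SVCCluster()
svc.create_vdisk("batch_db")
svc.migrate_vdisk("batch_db", "ssd_pool")   # promote for month-end run
svc.migrate_vdisk("batch_db", "hdd_pool")   # demote when the run is done
```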
As Barry says, IBM has been working on this for quite a while, and based on initial responses it looks to be quite successful in the market!