“In times of universal deceit, telling the truth will be a revolutionary act.”
-- George Orwell
Well, it has been over two years since I first covered IBM's acquisition of the XIV company. Amazingly, I still see a lot of misperceptions out in the blogosphere, especially those regarding double drive failures for the XIV storage system. Despite various attempts to [explain XIV resiliency] and to [dispel the rumors], there are still competitors making stuff up, putting fear, uncertainty and doubt into the minds of prospective XIV clients.
Clients love the IBM XIV storage system! In this economy, companies are not stupid. Before buying any enterprise-class disk system, they ask the tough questions, run evaluation tests, and perform all the other due diligence often referred to as "kicking the tires". Here is what some IBM clients have said about their XIV systems:
“3-5 minutes vs. 8-10 hours rebuild time...”
-- satisfied XIV client
“...we tested an entire module failure - all data is re-distributed in under 6 hours...only 3-5% performance degradation during rebuild...”
-- excited XIV client
“Not only did XIV meet our expectations, it greatly exceeded them...”
In this blog post, I hope to set the record straight. It is not my intent to embarrass anyone in particular, so I will instead focus on a fact-based approach.
Fact: IBM has sold THOUSANDS of XIV systems
XIV is "proven" technology with thousands of XIV systems in company data centers. And by systems, I mean full disk systems with 6 to 15 modules in a single rack, twelve drives per module. That equates to hundreds of thousands of disk drives in production TODAY, comparable to the number of disk drives studied by [Google], and [Carnegie Mellon University] that I discussed in my blog post [Fleet Cars and Skin Cells].
Fact: To date, no customer has lost data as a result of a Double Drive Failure on an XIV storage system
This has always been true, both when XIV was a stand-alone company and since the IBM acquisition two years ago. When examining the resilience of an array to any single or multiple component failure, it is important to understand the architecture and design of the system, and not assume all systems are alike. At its core, XIV is a grid-based storage system. IBM XIV does not use traditional RAID-5 or RAID-10 methods; instead, data is distributed across loosely connected data modules which act as independent building blocks. XIV divides each LUN into 1MB "chunks", and stores two copies of each chunk on separate drives in separate modules. We call this "RAID-X".
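To make the placement idea concrete, here is a minimal sketch in Python of how chunk pairs could be distributed. This is my own illustration of the concept, assuming uniform pseudo-random placement; it is not the actual (proprietary) XIV distribution algorithm:

```python
import random

MODULES = 15            # modules in a full XIV rack
DRIVES_PER_MODULE = 12  # drives per module

def place_chunk(rng: random.Random):
    """Pick (module, drive) slots for the two copies of one 1MB chunk,
    guaranteeing that the copies land in different modules."""
    m1 = rng.randrange(MODULES)
    m2 = rng.choice([m for m in range(MODULES) if m != m1])
    return (m1, rng.randrange(DRIVES_PER_MODULE)), (m2, rng.randrange(DRIVES_PER_MODULE))

rng = random.Random(42)
lun_chunks = [place_chunk(rng) for _ in range(1000)]  # a 1GB LUN = 1000 chunks

# Any single drive (or even whole module) failure leaves one intact copy
# of every chunk, because the two copies never share a module.
assert all(primary[0] != secondary[0] for primary, secondary in lun_chunks)
```

The consequence of this placement is exactly what the client quotes above describe: a rebuild engages every surviving drive at once, instead of hammering the handful of drives in a single RAID rank.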
Spreading all the data across many drives is not unique to XIV. Many disk systems, including the EMC V-Max, HP EVA, and Hitachi Data Systems (HDS) USP-V, allow customers to get XIV-like performance by spreading LUNs across multiple RAID ranks. This is known in the industry as "wide-striping". Some vendors use the terms "metavolumes" or "extent pools" to refer to their implementations of wide-striping. Clients have coined their own phrases, such as "stripes across stripes", "plaid stripes", or "RAID 500". It is highly unlikely that an XIV will experience a double drive failure that ultimately requires recovery of files or LUNs, and XIV is substantially less vulnerable to data loss than an EVA, USP-V or V-Max configured in RAID-5. Fellow blogger Keith Stevenson (IBM) compared XIV's RAID-X design to other forms of RAID in his post [RAID in the 21st Century].
Fact: IBM XIV is designed to minimize the likelihood and impact of a double drive failure
The independent failure of two drives is a rare occurrence. More data has been lost from hash collisions on EMC Centera than from double drive failures on XIV, and hash collisions are also very rare. While the published worst-case time to re-protect from a 1TB drive failure on a fully-configured XIV is 30 minutes, field experience shows XIV regaining full redundancy in 12 minutes on average. That exposure window is roughly 40 to 50 times shorter than the typical 8-10 hours of a RAID-5 rebuild.
A lot of bad things can happen in those 8-10 hours of a traditional RAID rebuild. Performance can be seriously degraded. Other components may be affected, as they share cache, are connected to the same backplane or bus, or are co-dependent in some other manner. An engineer supporting the customer onsite during a RAID-5 rebuild might pull the wrong drive, thereby causing the very double drive failure everyone was hoping to avoid. Having IBM XIV rebuild in only a few minutes addresses this "human factor".
In his post [XIV drive management], fellow blogger Jim Kelly (IBM) covers a variety of reasons why storage admins feel double drive failures are more than just random chance. XIV avoids the load stress normally associated with a traditional RAID rebuild by evenly spreading the workload across all drives. This is known in the industry as "wear-leveling". When the first drive fails, the recovery is spread across the remaining 179 drives, so each drive only processes about 1 percent of the data. The [Ultrastar A7K1000] 1TB SATA disk drives that IBM uses from HGST have a specified mean time between failures [MTBF] of 1.2 million hours, which works out to about one drive failure every nine months in a 180-drive XIV system. However, field experience shows that an XIV system experiences, on average, one drive failure per 13 months, comparable to what companies experience with more robust Fibre Channel drives. That's innovative XIV wear-leveling at work!
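The arithmetic behind those failure intervals is simple enough to verify yourself; here is the back-of-the-envelope calculation (my own worked example of the numbers quoted above):

```python
MTBF_HOURS = 1_200_000   # specified MTBF of the Ultrastar A7K1000
DRIVES = 180             # drives in a fully-configured XIV

hours_between_failures = MTBF_HOURS / DRIVES   # ~6,667 hours system-wide
months = hours_between_failures / (24 * 30)    # ~9.3 months
print(f"Spec sheet: one drive failure every {months:.1f} months")
# Field experience: one failure per ~13 months, better than the spec predicts.
```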
Fact: In the highly unlikely event that a double drive failure (DDF) were to occur, you would still have full read/write access to nearly all of your data on the XIV, all but a few GB.
Even though it has NEVER happened in the field, some clients and prospects are curious what a double drive failure on an XIV would look like. First, a critical alert message would be sent to both the client and IBM, and a "union list" would be generated, identifying all the chunks the two failed drives have in common. The worst case on a 15-module XIV fully loaded with 79TB of data is approximately 9000 chunks, or 9GB of data. The remaining 78.991 TB of unaffected data are fully accessible for read or write. Any I/O request for a chunk on the "union list" simply receives no response, so there is no way for host applications to read outdated information or cause any corruption.
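Here is a back-of-the-envelope estimate of where a "union list" of that size comes from, again assuming uniform pseudo-random chunk placement (my illustration, not the actual XIV algorithm):

```python
DATA_TB = 79
MODULES, DRIVES_PER_MODULE = 15, 12

chunks = DATA_TB * 1_000_000            # ~79 million 1MB chunks
copies_per_drive = chunks * 2 // (MODULES * DRIVES_PER_MODULE)  # ~878,000
# The partner copy of each chunk lives on a drive in a *different* module:
partner_candidates = (MODULES - 1) * DRIVES_PER_MODULE          # 168 drives
shared = copies_per_drive / partner_candidates
print(f"~{shared:,.0f} chunks (~{shared / 1000:.1f} GB) shared by any two drives")
# Prints roughly 5,200 chunks (~5.2 GB), the same order of magnitude as the
# ~9,000-chunk (9GB) worst case quoted above.
```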
(One blogger compared losing data on the XIV to drilling a hole through the phone book. Mathematically, the drill bit would be only 1/16th of an inch, or about 1.59 millimeters for you folks outside the USA. Enough to knock out perhaps one character from a name or phone number on each page. If you have ever seen an actor in the movies look up a phone number in a telephone booth, then yank out a page from the phone book, the XIV equivalent would be cutting out 1/8th of a page from an 1,100-page phone book. In both cases, all of the rest of the unaffected information is fully accessible, and it is easy to identify which information is missing.)
If the second drive fails several minutes after the first, the process of restoring full redundancy is already well under way. This means the union list is considerably shorter or completely empty, and substantially fewer chunks are impacted. Contrast this with RAID-5, where being 99 percent complete on the rebuild when the second drive fails is just as catastrophic as having both drives fail simultaneously.
Fact: After a DDF event, the files on these few GB can be identified for recovery.
Once IBM receives notification of a critical event, an IBM engineer immediately connects to the XIV using the remote service support method. There is no need to send someone physically onsite; the repair actions can be done remotely. The IBM engineer has tools from HGST to recover, in most cases, all of the data.
Any "union" chunk that the HGST tools are unable to recover will be set to "media error" mode. The IBM engineer can provide the client a list of the XIV LUNs and LBAs that are on the "media error" list. From this list, the client can determine which hosts these LUNs are attached to, and run file scan utility to the file systems that these LUNs represent. Files that get a media error during this scan will be listed as needing recovery. A chunk could contain several small files, or the chunk could be just part of a large file. To minimize time, the scans and recoveries can all be prioritized and performed in parallel across host systems zoned to these LUNs.
As with any file or volume recovery, keep in mind that these might be part of a larger consistency group, and that your recovery procedures should make sense for the applications involved. In any case, you are probably going to be up and running in less time with XIV than a recovery from a RAID-5 double failure would take, and certainly nowhere near the "beyond repair" scenario that other vendors might have you believe.
Fact: This does not mean you can eliminate all Disaster Recovery planning!
To put this in perspective, you are more likely to lose XIV data from an earthquake, hurricane, fire or flood than from a double drive failure. As with any unlikely disaster, it is better to have a disaster recovery plan than to hope it never happens. All disk systems that sit on a single datacenter floor are vulnerable to such disasters.
For mission-critical applications, IBM recommends using disk mirroring. The IBM XIV storage system natively offers synchronous and asynchronous mirroring, both included at no additional charge.
If you store your VMware bits on external SAN or NAS-based disk storage systems, this post is for you. The subject of the post, VM Volumes, is a potential storage management game changer!
Fellow blogger Stephen Foskett mentioned VM Volumes in his [Introducing VMware vSphere Storage Features] presentation at the IBM Edge 2012 conference. His session on VMware's storage features included the VMware APIs for Array Integration (VAAI), the VMware APIs for Storage Awareness (VASA), vCenter plug-ins, and a new concept he called "vVol", now more formally known as VM Volumes. This post provides a follow-up, describing the VM Volumes concepts, architecture, and value proposition.
"VM Volumes" is a future architecture that VMware is developing in collaboration with IBM and other major storage system vendors. So far, very little information about VM Volumes has been released. At VMworld 2012 Barcelona, VMware highlights VM Volumes for the first time and IBM demonstrates VM Volumes with the IBM XIV Storage System (more about this demo below). VM Volumes is worth your attention -- when it becomes generally available, everyone using storage arrays will have to reconsider their storage management practices in a VMware environment -- no exaggeration!
But enough drama. What is this all about?
(Note: for the sake of clarity, this post refers to block storage only. However, the VM Volumes feature applies to NAS systems as well. Special thanks to Yossi Siles and the XIV development team for their help on this post!)
The VM Volumes concept is simple: VM disks are mapped directly to special volumes on a storage array system, as opposed to storing VMDK files on a vSphere datastore.
The following images illustrate the differences between the two storage management paradigms.
You may still be asking yourself: bottom line, how will I benefit from VM Volumes?
Well, take a VM snapshot for example. With VM Volumes, vSphere can simply offload the operation by invoking a hardware snapshot of the hardware volume. This has significant implications:
VM-Granularity: Only the right VMs are copied (with datastores, backing up or cloning an individual VM's portion of a hardware snapshot of a datastore would require more complex configuration, tools and work)
Hardware Offload: No ESXi server resources are consumed
XIV advantage: With XIV, snapshots consume no space upfront and are completed instantly.
Here's the first takeaway: With VM Volumes, advanced storage services (which cost a lot when you buy a storage array) will become available at the individual VM level. In a cloud world, this means that applications can be provisioned easily with advanced storage services, such as snapshots and mirroring.
Now, let's take a closer look at another relevant scenario where VM Volumes will make a lot of difference - provisioning an application with special mirroring requirements:
VM Volumes case: The application is ordered via the private cloud portal. The requestor checks a box requesting an asynchronous mirror and changes the default RPO to suit the application's needs. When the request is submitted, the process wraps up automatically: volumes are created on one of the storage arrays, configured with a mirror and RPO exactly as specified. A few minutes later, the requestor receives an automatic mail pointing to the application virtual machine.
Datastores case #1: As may be expected, a datastore that is mirrored with the special RPO does not exist. As a result, the automated workflow sets a pending status on the request, creates an urgent ticket for a VMware administrator, and aborts. When the VMware admin handles that ticket, she re-assigns it to the storage administrator, asking for a new volume which is mirrored with the special RPO and mapped to the right ESXi cluster. The next day, the volume is created and the ticket is re-assigned back to the VMware admin, pointing to the new LUN. The VMware administrator follows up and creates the datastore on top of it. Since the automated workflow was aborted, she re-assigns the ticket to the cloud administrator, who sometime later completes the application provisioning manually.
Datastores case #2: Luckily for the requestor, a datastore that is mirrored with the special RPO does exist. However, that particular datastore is consuming space from a high performance XIV Gen3 system with SSD caching, while the application does not require that level of performance, so the workflow requires a storage administrator approval. The approval is given to save time, but the storage administrator opens a ticket for himself to create a new volume on another array, as well as a follow-up ticket for the VMware admin to create a new datastore using the new volume and migrate the application to the other datastore. In this case, provisioning was relatively rapid, but required manual follow up, involving the two administrators.
Here's the second takeaway: With VM Volumes, management is simplified, and end-to-end automation is much more applicable. The reason is that there are no datastores. Datastores physically group VMs that may otherwise be totally unrelated, and require close coordination between storage and VMware administrators.
Now, the above mainly focuses on the VMware or cloud administrator perspective. How does VM Volumes impact storage management?
VMs are the new hosts: Today, storage administrators have visibility of physical hosts in their management environment. In a non-virtualized environment, this visibility is very helpful. The storage administrator knows exactly which applications in a data center are storage-provisioned or affected by storage management operations, because the applications are running on well-known hosts. However, in virtualized environments, the association of an application with a physical host is temporary. To keep at least the same level of visibility as in physical environments, VMs should become part of the storage management environment, like hosts. Hosts are still interesting, for example to manage physical storage mapping, but without VM visibility, storage administrators will know less about their operation than they are used to, or need to. VM Volumes enables such visibility, because volumes are provided to individual VMs. The XIV VM Volumes demonstration at VMworld Barcelona, although experimental, shows a view of VM volumes in XIV's management GUI.
Here's a screenshot:
That's not all!
Storage Profiles and Storage Containers: A Storage Profile is a vSphere specification of a set of storage services. A storage profile can include properties like thin or thick provisioning, mirroring definition, snapshot policy, minimum IOPS, etc.
Storage administrators define a portfolio of supported storage services, maintained as a set of storage profiles, and published (via VASA integration) to vSphere.
VMware or cloud administrators define the required storage profiles for specific applications.
VMware and storage administrators need to coordinate on the typical storage requirements and the automatically-available storage services. When a request to provision an application is made, the associated storage profiles are matched against the published set of available storage profiles. The matching published profiles are used to create volumes, which are bound to the application VMs. All of that happens automatically, as sketched below.
Note that when a VM is created today, a datastore must be specified. With VM Volumes, a new management entity called Storage Container (also known as Capacity Pool) replaces the use of datastore as a management object. Each Storage Container exposes a subset of the available storage profiles, as appropriate. The storage container also has a capacity quota.
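Since VMware has published very little detail yet, here is a deliberately hypothetical sketch of the matching idea, with invented profile names and properties, just to show the mechanics of matching a requested profile against the published portfolio:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StorageProfile:
    name: str
    thin: bool
    mirrored: bool
    rpo_minutes: int   # lower is stricter; ignored when mirrored is False

# Profiles the storage admin has published to vSphere (hypothetical portfolio)
published = [
    StorageProfile("gold",   thin=False, mirrored=True,  rpo_minutes=5),
    StorageProfile("silver", thin=True,  mirrored=True,  rpo_minutes=60),
    StorageProfile("bronze", thin=True,  mirrored=False, rpo_minutes=0),
]

def match(need_mirror: bool, max_rpo: int) -> StorageProfile:
    """Return the first published profile that satisfies the VM's request."""
    for p in published:
        if p.mirrored == need_mirror and (not need_mirror or p.rpo_minutes <= max_rpo):
            return p
    raise LookupError("no published profile satisfies the request")

print(match(need_mirror=True, max_rpo=30))   # -> the "gold" profile
```

In the real feature, this matching would happen inside vSphere against profiles published over VASA, and the winning profile would drive volume creation on the array.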
Here are some more takeaways:
New way to interface vSphere and storage management: Storage administrators structure and publish storage services to vSphere via storage profiles and storage containers.
Automated provisioning, out of the box: The provisioning process automatically matches application-required storage profiles against storage profiles available from the specified storage containers. There is no need to build custom scripts and custom processes to automate storage provisioning to applications.
The XIV advantage:
XIV services are very simple to define and publish. The typical number of available storage profiles would be low. It would also be easy to define application storage profiles.
XIV provides consistent high performance, up to very high capacity utilization levels, without any maintenance. As a result, automated provisioning (which inherently implies less human attention) will not create an elevated risk of reduced performance.
Note: A storage vendor VASA provider is required to support VM Volumes, storage profiles, storage containers and automated provisioning. The IBM Storage VASA provider runs as a standalone service that needs to be deployed on a server.
To summarize the VM Volumes value proposition:
Streamline cloud operation by providing storage services at VM and application level, enabling end-to-end provisioning automation, and unifying VMware and storage administration around volumes and VMs.
Increase storage array ROI, improve vSphere scalability and response time, and reduce cloud provisioning lag, by offloading VM-level provisioning, failover, backup, storage migration, storage space recycling, monitoring, and more, to the storage array, using advanced storage operations such as mirroring and snapshots.
Simplify the adoption of VM Volumes using XIV, with smaller and simpler sets of storage profiles. Apply XIV's supremely fast cloning to individual VMs, and keep automation risks at bay with XIV's consistent high performance.
Until you can get your hands on a VM Volumes-capable environment, the VMware and IBM developer groups will be collaborating and working hard to realize this game-changing feature. I expect the above information will trigger questions and comments, and our development teams are eager to learn from them and respond. Enter your comments below, and I will try to answer them and help shape the next post on this subject. There's much more to be told.
This week, IBM made over a dozen announcements related to IBM storage products. Here is part 2 of my overview:
IBM System Storage® DS8000 series microcode
One of the advantages of acquiring XIV as IBM's other high-end disk system is that it allows the DS8000 team to focus on the IBM i and z/OS operating systems. As a result, IBM DS8000 has over half of the mainframe-attach market share.
For both the DS8700 and DS8800 models, IBM Easy Tier now supports sub-LUN automated tiering across three storage tiers: solid-state drives, high-performance spinning disk drives (15K and 10K RPM), and high-capacity disk drives (7200 RPM).
For System z customers, the latest DS8000 microcode has synergy with z/OS and GDPS, now supporting 4x larger EAV volumes, faster high-performance FICON (zHPF), and Workload Manager (WLM) integration with the I/O Priority Manager. IBM has a world record SAP performance of 59 million account postings per hour. DB2 v10 for z/OS queries were measured at 11x faster using the new zHPF feature.
IBM System Storage® DS8800 systems
On the hardware side, the DS8800 now supports a fourth frame to hold a total of over 1,500 disk drives. Yes, we have customers for whom three frames weren't enough, and they wanted more.
IBM is now also offering new drive options. Small form factor (2.5 inch) drives now include 300GB 15K RPM and 900GB 10K RPM drives. But wait! There's more! The DS8800 is no longer an SFF-only box; it now allows for mixing in large form factor (3.5 inch) drives, starting with the 3TB NL-SAS 7200 RPM drive.
IBM XIV® Storage System Gen3
We announced the XIV Gen3 already, but we have two enhancements.
First, we now offer a model based entirely on 3TB NL-SAS drives. If you are wondering whether IBM is going to put 3TB drives into everything: yup. Once we go through all the pain and suffering of qualifying a drive, we make sure we get our money's worth!
Second, we now have an iPad application to manage the XIV. This has nothing to do with Apple CEO Steve Jobs passing away last week; it was merely coincidence.
IBM Real-time Compression Appliances™ STN6500 and STN6800 V3.8
The latest software for the RtCA now supports Microsoft SMB v2, and offers enhanced reporting so that storage admins know exactly what compression ratios they are getting for different file extensions.
IBM System Storage EXP2500 Express®
The EXP2500 is for direct-attach situations, like the IBM BladeCenter. IBM adds LFF 3.5-inch 3TB NL-SAS drives, SFF 2.5-inch 300GB 15K RPM SAS drives, and 900GB 7200 RPM NL-SAS drives.
My colleague Curtis Neal refers to these as "B.F.D" announcements, which of course stands for Bigger, Faster, Denser!
Full VMware vStorage API for Array Integration (VAAI). Back in 2008, VMware announced new vStorage APIs for its vSphere ESX hypervisor: the vStorage API for Site Recovery Manager, the vStorage API for Data Protection, and the vStorage API for Multipathing. Last July, VMware added a new API called the vStorage API for Array Integration [VAAI], which offers three primitives:
Hardware-assisted Block Zeroing. Sometimes referred to as "Write Same", this SCSI command will zero out a large section of blocks, presumably as part of a VMDK file. This can then be used to reclaim space on thin-provisioned LUNs on the XIV.
Hardware-assisted Copy. Make an XIV snapshot of data without any I/O on the server hardware.
Hardware-assisted Locking. On mainframes, this is called Parallel Access Volumes (PAV). Instead of locking an entire LUN using standard SCSI reserve commands, this primitive allows an ESX host to lock just an individual block, so as not to interfere with other hosts accessing other blocks on that same LUN.
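Conceptually, hardware-assisted locking is an atomic compare-and-write on a single block, rather than a reservation of the whole LUN. Here is a toy illustration of that semantic (concept only, not actual SCSI code; a threading lock stands in for the atomicity the array provides):

```python
import threading

class ToyLun:
    """Toy LUN where locking happens one block at a time."""
    def __init__(self, blocks: int):
        self.data = [b"\x00" * 512] * blocks
        self._atomic = threading.Lock()  # stand-in for the array's atomicity

    def compare_and_write(self, lba: int, expected: bytes, new: bytes) -> bool:
        """Atomically replace block `lba` only if it still holds `expected`.
        Unlike a SCSI reserve, nothing stays locked between calls, so hosts
        touching other blocks on the same LUN are never held up."""
        with self._atomic:
            if self.data[lba] != expected:
                return False   # another host won the race; caller retries
            self.data[lba] = new
            return True

lun = ToyLun(blocks=1024)
old = lun.data[7]
assert lun.compare_and_write(7, old, b"\x01" * 512)        # succeeds
assert not lun.compare_and_write(7, old, b"\x02" * 512)    # stale, rejected
```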
Quality of Service (QoS) Performance Classes.
When XIV was first released, it treated all hosts and all data the same, even when deployed for a variety of different applications. This worked for some clients, such as [Medicare y Mucho Más]. They migrated their databases, file servers and email system from EMC CLARiiON to an IBM XIV Storage System. In conjunction with VMware, the XIV provides a highly flexible and scalable virtualized architecture, which enhances the company's business agility.
However, other clients were skeptical, and felt they needed additional "knobs" to prioritize different workloads. The new 10.2.4 microcode allows you to define four different "performance classes". This is like the door of a nightclub: all the regular people are waiting in a long line, but when a celebrity arrives in a limo, the bouncer unclips the cord and lets the celebrity in. For each class, you provide IOPS and/or MB/sec targets, and the XIV manages to those goals. Performance classes are assigned to each host based on its value to the business.
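Under the covers, an IOPS target behaves like a rate limiter on each class. Here is a sketch of the general idea using a token bucket (my illustration of how any QoS class works, not XIV's actual implementation):

```python
import time

class PerformanceClass:
    """Toy token-bucket limiter: each class is given an IOPS target."""
    def __init__(self, iops_target: int):
        self.rate = iops_target
        self.tokens = float(iops_target)
        self.last = time.monotonic()

    def admit_io(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True    # the celebrity walks right in
        return False       # the regular line: I/O waits until tokens refill

gold = PerformanceClass(iops_target=50_000)    # hypothetical class targets
bronze = PerformanceClass(iops_target=5_000)
```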
Offline Initialization for Asynchronous Mirror.
Internally, we called this Truck Mode. Normally, when a customer decides to start using Asynchronous Mirror, they already have a lot of data at the primary location, and so there is a lot of data to send over to the new XIV box at the secondary location. This new feature allows the data to be dumped to tape at the primary location. Those tapes are shipped to the secondary location and restored on the empty XIV. The two XIV boxes are then connected for Asynchronous Mirroring, and checksums of each 64KB block are compared to determine what has changed at the primary during this "tape delivery time". This greatly reduces the time it takes for the two boxes to get past the initial synchronization phase.
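The resynchronization step is a classic checksum diff: hash every 64KB block on both sides and ship only the blocks whose hashes differ. A simplified sketch (I am using SHA-1 as a stand-in; the actual checksum algorithm XIV uses is not published):

```python
import hashlib

BLOCK = 64 * 1024

def block_digests(volume: bytes) -> list:
    """Checksum every 64KB block of a volume."""
    return [hashlib.sha1(volume[i:i + BLOCK]).digest()
            for i in range(0, len(volume), BLOCK)]

def blocks_to_resync(primary: bytes, secondary: bytes) -> list:
    """Blocks that changed at the primary while the tapes were in transit."""
    return [i for i, (p, s) in enumerate(zip(block_digests(primary),
                                             block_digests(secondary))) if p != s]

primary = bytearray(b"\x00" * (10 * BLOCK))
secondary = bytes(primary)      # the copy restored from tape at the DR site
primary[3 * BLOCK] = 0xFF       # one block changed during "tape delivery time"
print(blocks_to_resync(bytes(primary), secondary))   # -> [3]
```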
IP-based Replication. When IBM first launched the Storwize V7000 last October, people commented that the one feature they felt was missing was IP-based replication. Sure, we offered FCP-based replication, as most other enterprise-class disk systems do today, but many midrange systems also offer IP-based replication to reduce the need for expensive FCIP routers. [IBM Tivoli Storage FastBack for Storwize V7000] provides IP-based replication for Storwize V7000 systems.
Network Attached Storage
IBM announced two new models of the IBM System Storage N series. The midrange N6240 supports up to 600 drives, replacing the N6040 system. The entry-level N6210 supports up to 240 drives, and replaces the N3600 system. Details for both are available on the latest [data sheet].
IBM Real-Time Compression appliances work with all N series models to provide additional storage efficiency. Last October, I provided the [Product Name Decoder Ring] for the STN6500 and STN6800 models. The STN6500 supports 1 GbE ports, and the STN6800 supports 10GbE ports (or a mix of 10GbE and 1GbE, if you prefer). The IBM versions of these models were announced last December, but some people were on vacation and might have missed it. For more details of this, read the [Resources page], the [landing page], or [watch this video].
IBM System Storage DS3000 series
IBM System Storage [DS3524 Express DC and EXP3524 Express DC] models are powered with direct current (DC) rather than alternating current (AC). The DS3524 packs dual controllers and two dozen small form factor (2.5 inch) drives into a compact 2U-high rack-optimized module. The EXP3524 provides additional disk capacity that can be attached to the DS3524 for expansion.
Large data centers, especially those in the telecommunications industry, receive AC from their power company, convert it to DC, and store it in a large battery called an Uninterruptible Power Supply (UPS). DC-powered equipment can run directly off this battery source, but for AC-powered equipment, the DC has to be converted back to AC, and some energy is lost in the conversion. Thus, DC-powered equipment is more energy efficient, or "green", for the IT data center.
Whether you get the DC-powered or AC-powered models, both are NEBS-compliant and ETSI-compliant.
New Tape Drive Options for Autoloaders and Libraries
IBM System Storage [TS2900 Autoloader] is a compact 1U-high tape system that supports one LTO drive and up to 9 tape cartridges. The TS2900 can support either an LTO-3, LTO-4 or LTO-5 half-height drive.
IBM System Storage [TS3100 and TS3200 Tape Libraries] were also enhanced. The TS3100 can accommodate one full-height LTO drive, or two half-height drives, and holds up to 24 cartridges. The TS3200 offers twice as many drives and twice as many cartridge slots.
Well, I'm back safely from my tour of Asia. I am glad to report that Tokyo, Beijing and Kuala Lumpur are pretty much how I remember them from the last time I was there in each city. I have since been fighting jet lag by watching the last thirteen episodes of LOST season 6 and the series finale.
Recently, I have started seeing a lot of buzz around the term "Storage Federation". The concept is not new, but rather based on the work in database federation, first introduced in 1985 in [A federated architecture for information management] by Heimbigner and McLeod. For those not familiar with database federation, you can take several independent, autonomous databases and treat them as one big federated system. For example, this would allow you to issue a single query and get results across all the databases in the federated system. The advantage is that it is often easier to federate several disparate, heterogeneous databases than to merge them into a single database. [IBM InfoSphere Federation Server] is a market leader in this space, with the capability to federate DB2, Oracle and SQL Server databases.
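To make the database analogy concrete, here is a minimal federation sketch: one query fanned out across several independent SQLite databases, with the results merged. This is a toy stand-in for what InfoSphere Federation Server does across DB2, Oracle and SQL Server:

```python
import sqlite3

def make_member(rows):
    """One autonomous member database."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
    db.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    return db

# Three independent databases that are never merged into one
members = [make_member([("acme", 100.0)]),
           make_member([("acme", 250.0), ("globex", 75.0)]),
           make_member([("initech", 40.0)])]

def federated_query(sql, params=()):
    """Issue a single query against every member and union the results."""
    rows = []
    for db in members:
        rows.extend(db.execute(sql, params).fetchall())
    return rows

print(federated_query("SELECT customer, amount FROM orders WHERE customer = ?",
                      ("acme",)))   # rows gathered from all members at once
```

Applied to storage systems rather than databases, the same idea suggests several use cases: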
Storage expansion: You want to increase the storage capacity of an existing storage system that cannot accommodate the total amount of capacity desired. Storage Federation allows you to add additional storage capacity by adding a whole new system.
Storage migration: You want to migrate from an aging storage system to a new one. Storage Federation allows the two systems to be joined, the data to be evacuated from the first onto the second, and then the first system to be removed.
Safe system upgrades: System upgrades can be problematic for a number of reasons. Storage Federation allows a system to be removed from the federation and be re-inserted again after the successful completion of the upgrade.
Load balancing: Similar to storage expansion, but on the performance axis, you might want to add additional storage systems to a Storage Federation in order to spread the workload across multiple systems.
Storage tiering: In a similar light, storage systems in a Storage Federation could have different capacity/performance ratios that you could use for tiering data. This is similar to the idea of dynamically re-striping data across the disk drives within a single storage system, such as with 3PAR's Dynamic Optimization software, but extends the concept to cross storage system boundaries.
To some extent, IBM SAN Volume Controller (SVC), XIV, Scale-Out NAS (SONAS), and Information Archive (IA) offer most, if not all, of these capabilities. EMC claims its VPLEX will be able to offer storage federation, but only with other VPLEX clusters, which brings up a good question: what about heterogeneous storage federation? Before anyone accuses me of throwing stones at glass houses, let's take a look at each IBM solution:
IBM SAN Volume Controller
The IBM SAN Volume Controller has been doing storage federation since 2003. Not only can the SAN Volume Controller bring together a variety of heterogeneous storage, the SVC cluster itself can be a mix of different hardware models. You can have a 2145-8A4 node pair, a 2145-8G4 node pair, and the new 2145-CF8 node pair, all combined into a single SVC cluster. Upgrading SVC hardware nodes in an SVC cluster is always non-disruptive.
IBM XIV storage system
The IBM XIV has two kinds of independent modules. Data modules have processor, cache and 12 disks. Interface modules are data modules with additional processor, FC and Ethernet (iSCSI) adapters. Because these two module types play different roles in an XIV "colony", the number of each type is predetermined. Entry-level six-module systems have 2 interface and 4 data modules. Full 15-module systems have 6 interface and 9 data modules. Individual modules can be added or removed non-disruptively in an XIV.
IBM Scale-Out NAS
SONAS is composed of three kinds of nodes that work together in concert: a management node, one or more interface nodes, and two or more storage nodes. The storage nodes are paired to manage up to 240 disk drives in a storage pod. Individual interface or storage nodes can be added or removed non-disruptively in the SONAS. The underlying technology, the General Parallel File System, has been doing storage federation since 1996 for some of the largest Top 500 supercomputers in the world.
IBM Information Archive (IA)
For the IA, there are 1, 2 or 3 nodes, which manage a set of collections. A collection can either be file-based, using industry-standard NAS protocols, or object-based, using the popular System Storage™ Archive Manager (SSAM) interface. Normally, you have as many collections as you have nodes, but nodes are powerful enough to manage two collections each, to provide N-1 availability. This allows a node to be removed, and a new node added into the IA "colony", in a non-disruptive manner.
Even in an ant colony, there are only a few types of ants, with typically one queen, several males, and lots of workers. But all the ants are red. You don't see colonies that mix between different species of ants. For databases, federation was a way to avoid the much harder task of merging databases from different platforms. For storage, I am surprised people have latched on to the term "federation", given our mixed results in the other "federations" we have formed, which I have conveniently (IMHO) ranked from least effective to most effective:
The Union of Soviet Socialist Republics (USSR)
My father used to say, "If the Soviet Union were in charge of the Sahara desert, they would run out of sand in 50 years." The [Soviet Union] actually lasted 68 years, from 1922 to 1991.
The United Nations (UN)
After the previous League of Nations failed, the UN was formed in 1945 to facilitate cooperation in international law, international security, economic development, social progress, human rights, and the achieving of world peace by stopping wars between countries, and to provide a platform for dialogue.
The European Union (EU)
With the collapse of the Greek economy, and the [rapid growth of debt] in the UK, Spain and France, there are concerns that the EU might not last past 2020.
The United States of America (USA)
My own country is a federation of states, each with its own government. California's financial crisis was compared to the one in Greece. My own state of Arizona is under boycott from other states because of its recent [immigration law]. However, I think the US has managed better than the EU because it has evolved over the past 200 years.
The Organization of the Petroleum Exporting Countries [OPEC]
Technically, OPEC is not a federation of cooperating countries, but rather a cartel of competing countries that have agreed on total industry output of oil to increase individual members' profits. Note that it was a non-OPEC company, BP, that could not "control their output" in what has now become the worst oil spill in US history. OPEC was formed in 1960, and is expected to collapse sometime around 2030 when the world's oil reserves run out. Matt Savinar has a nice article on [Life After the Oil Crash].
United Federation of Planets
The [Federation] fictitiously described in the Star Trek series appears to work well, an optimistic view of what federations could become if you let them evolve long enough.
Given the mixed results with "federation", I think I will avoid using the term for storage, and stick to the original term "scale-out architecture".
Continuing my drawn out coverage of IBM's big storage launch of February 9, today I'll cover the IBM System Storage TS7680 ProtecTIER data deduplication gateway for System z.
On the host side, TS7680 connects to mainframe systems running z/OS or z/VM over FICON attachment, emulating an automated tape library with 3592-J1A devices. The TS7680 includes two controllers that emulate the 3592 C06 model, with 4 FICON ports each. Each controller emulates up to 128 virtual 3592 tape drives, for a total of 256 virtual drives per TS7680 system. The mainframe sees up to 1 million virtual tape cartridges, up to 100GB raw capacity each, before compression. For z/OS, the automated library has full SMS Tape and Integrated Library Management capability that you would expect.
Inside, the two control units are both connected to a redundant cluster pair of ProtecTIER engines running the HyperFactor deduplication algorithm, which processes the deduplication inline, as data is ingested, rather than using the post-process approach of other deduplication solutions. These engines are similar to the TS7650 gateway machines for distributed systems.
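HyperFactor itself is similarity-based rather than hash-based, but the "inline" part, which is the key differentiator here, can be sketched with a generic hash-based stand-in: each chunk is deduplicated as it is ingested, before it ever reaches the back-end disk.

```python
import hashlib

class InlineDedupStore:
    """Toy hash-based inline dedup (a stand-in for, not a model of, HyperFactor)."""
    def __init__(self):
        self.chunks = {}    # digest -> chunk data (the physical disk pool)
        self.volume = []    # ordered digests (the virtual tape image)

    def ingest(self, chunk: bytes):
        digest = hashlib.sha256(chunk).digest()
        # Dedup happens here, inline, not in a post-process pass later
        self.chunks.setdefault(digest, chunk)
        self.volume.append(digest)

store = InlineDedupStore()
for block in [b"A" * 4096, b"B" * 4096, b"A" * 4096, b"A" * 4096]:
    store.ingest(block)
print(len(store.volume), "logical blocks;", len(store.chunks), "actually stored")  # 4; 2
```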
On the back end, these ProtecTIER deduplication engines are then connected to external disk, up to 1PB. If you get 25x data deduplication ratio on your data, that would be 25PB of mainframe data stored on only 1PB of physical disk. The disk can be any disk supported by ProtecTIER over FCP protocol, not just the IBM System Storage DS8000, but also the IBM DS4000, DS5000 or IBM XIV storage system, various models of EMC and HDS, and of course the IBM SAN Volume Controller (SVC) with all of its supported disk systems.
Well, it's Tuesday again, and today we had our third big storage launch of 2009! A lot got announced as part of IBM's big "Dynamic Infrastructure" marketing campaign. I will just focus on the disk-related announcements today:
IBM System Storage DS8700
IBM adds a new model to its DS8000 series with the
[IBM System Storage DS8700]. Earlier this month, fellow blogger and arch-nemesis Barry Burke from EMC posted [R.I.P DS8300] on the mistaken assumption that the new DS8700 meant that the DS8300 was going away, or that anyone who bought a DS8300 recently would be out of luck. Obviously, I could not respond until today's announcement, as the last thing I want to do is lose my job disclosing confidential information. BarryB is wrong on both counts:
IBM will continue to sell the DS8100 and DS8300, in addition to the new DS8700.
Clients can upgrade their existing DS8100 or DS8300 systems to DS8700.
BarryB's latest post [What's In a Name - DS8700] is fair game, given all the fun and ridicule everyone had at his expense over EMC's "V-Max" name.
So the DS8700 is new hardware with only 4 percent new software. On the hardware side, it uses faster POWER6 processors instead of POWER5+, has faster PCI-e buses instead of the RIO-G loops, and faster four-port device adapters (DAs) for added bandwidth between cache and drives. The DS8700 can be ordered as a single-frame dual 2-way that supports up to 128 drives and 128GB of cache, or as a dual 4-way, consisting of one primary frame, and up to four expansion frames, with up to 384GB of cache and 1024 drives.
Not mentioned explicitly in the announcements were the things the DS8700 does not support:
ESCON attachment - Now that FICON is well-established for the mainframe market, there is no need to support the slower, bulkier ESCON options. This greatly reduced testing effort. The 2-way DS8700 can support up to 16 four-port FICON/FCP host adapters, and the 4-way can support up to 32 host adapters, for a maximum of 128 ports. The FICON/FCP host adapter ports can auto-negotiate between 4Gbps, 2Gbps and 1Gbps as needed.
LPAR mode - When IBM and HDS introduced LPAR mode back in 2004, it sounded like a great idea the engineers had come up with. Most other major vendors followed with similar "partitioning" offerings. However, it turned out to be what we call in the storage biz a "selling apple", not a "buying apple". In other words, something the salesman can offer as a differentiating feature, but that few clients actually use. It turned out that supporting both LPAR and non-LPAR modes merely doubled the testing effort, so IBM got rid of it for the DS8700.
Update: I have been reminded that both IBM and HDS delivered LPAR mode within a month of each other back in 2004, so it was wrong for me to imply that HDS followed IBM's lead when obviously development happened in both companies for the most part concurrently prior to that. EMC was late to the "partition" party, but who's keeping track?
Initial performance tests show up to 50 percent improvement for random workloads, up to 150 percent improvement for sequential workloads, and up to 60 percent improvement in background data movement for FlashCopy functions. The results varied slightly between Fixed Block (FB) LUNs and Count-Key-Data (CKD) volumes, and I hope to see some SPC-1 and SPC-2 benchmark numbers published soon.
The DS8700 is compatible for Metro Mirror, Global Mirror, and Metro/Global Mirror with the rest of the DS8000 series, as well as the ESS model 750, ESS model 800 and DS6000 series.
New 600GB FC and FDE drives
IBM now offers [600GB drives] for the DS4700 and DS5020 disk systems, as well as the EXP520 and EXP810 expansion drawers. In each case, we are able to pack up to 16 drives into a 3U enclosure.
Personally, I think the DS5020 should have been given a DS4xxx designation, as it resembles the DS4700
more than the other models of the DS5000 series. Back in 2006-2007, I was the marketing strategist for IBM System Storage product line, and part of my job involved all of the meetings to name or rename products. Mostly I gave reasons why products should NOT be renamed, and why it was important to name the products correctly at the beginning.
IBM System Storage SAN Volume Controller hardware and software
Fellow IBM Master Inventor Barry Whyte has been covering the latest on the [SVC 2145-CF8 hardware]. IBM put out a press release last week on this, and today is the formal announcement with prices and details. Barry's latest post [SVC CF8 hardware and SSD in depth] covers just part of the entire announcement.
The other part of the announcement was the [SVC 5.1 software] which can be loaded
on earlier SVC models 8F2, 8F4, and 8G4 to gain better performance and functionality.
To avoid confusion on what is hardware machine type/model (2145-CF8 or 2145-8A4) and what is software program (5639-VC5 or 5639-VW2), IBM has introduced two new [Solution Offering Identifiers]:
5465-028 Standard SAN Volume Controller
5465-029 Entry Edition SAN Volume Controller
The latter is designed for smaller deployments, supports only a single SVC node-pair managing up to 150 disk drives, and is available in Raven Black or Flamingo Pink.
EXN3000 and EXP5060 Expansion Drawers
IBM offers the [EXN3000 for the IBM N series]. These expansion drawers can pack 24 drives into a 4U enclosure. The drives can be either all-SAS or all-SATA, in 300GB, 450GB, 500GB and 1TB capacities.
The [EXP5060 for the IBM DS5000 series] is a high-density expansion drawer that can pack up to 60 drives into a 4U enclosure. A DS5100 or DS5300
can handle up to eight of these expansion drawers, for a total of 480 drives.
Pre-installed with Tivoli Storage Productivity Center Basic Edition. Basic Edition can be upgraded with license keys to support Data, Disk and Standard Edition to extend support and functionality to report and manage XIV, N series, and non-IBM disk systems.
Pre-installed with Tivoli Key Lifecycle Manager (TKLM). This can be used to manage the Full Disk Encryption (FDE) encryption-capable disk drives in the DS8000 and DS5000, as well as LTO and TS1100 series tape drives.
IBM Tivoli Storage FlashCopy Manager v2.1
The [IBM Tivoli Storage FlashCopy Manager V2.1] combines two products into one. IBM used to offer IBM Tivoli Storage Manager for Copy Services (TSM for CS), which protected Windows application data, and IBM Tivoli Storage Manager for Advanced Copy Services (TSM for ACS), which protected AIX application data.
The new product has some excellent advantages. FlashCopy Manager offers application-aware backup of LUNs containing SAP, Oracle, DB2, SQL Server and Microsoft Exchange data. It can support the IBM DS8000, SVC and XIV point-in-time copy functions, as well as the Volume Shadow Copy Services (VSS) interfaces of the IBM DS5000, DS4000 and DS3000 series disk systems. It is priced by the number of terabytes you copy, not by the speed or number of CPU processors inside the server.
Don't let the name fool you. IBM FlashCopy Manager does not require that you use Tivoli Storage Manager (TSM) as your backup product. You can run IBM FlashCopy Manager on its own, and it will manage your FlashCopy target versions on disk, and these can be backed up to tape or another disk using any backup product. However, if you are lucky enough to also be using TSM, then there is optional integration that allows TSM to manage the target copies, move them to tape, inventory them in its DB2 database, and provide complete reporting.
Yup, that's a lot to announce in one day. And this was just the disk-related portion of the launch!
Well, it was Tuesday again, and we had quite a lot of announcements here at IBM this week!
Over 1,800 clients attended the [Live February 5 webcast]! The announcements were all part of IBM's SmartCloud Storage portfolio. Here are the highlights:
STN7800 Real-time Compression Appliance
Back in October 2010, IBM announced the acquisition of Storwize, Inc., renaming its NAS-compression units to the IBM Real-time Compression appliances. Some folks were confused, so I had a blog post [IBM Storwize Product Name Decoder Ring].
IBM initially offered two models:
The [STN6500 model] had 16 Ethernet ports 1GbE (16x1GbE) and a pair of four-core processors.
The [STN6800 model] had either eight 10GbE ports (8x10GbE), or four 10GbE plus eight 1GbE ports (4x10GbE+8x1GbE). It has a pair of six-core processors.
Now, IBM offers the [STN7800 model], which can replace either of the ones above, offering 16x1GbE, 8x10GbE, and 4x10GbE+8x1GbE port configurations. It has a pair of eight-core processors to handle more robust cloud storage environments. See [Announcement Letter 113-012] for more details.
New XIV Gen3 model 214
With its awesome support for VMware, the XIV is often chosen for Cloud storage. The new XIV model 214 now offers up to a dozen 10GbE ports, or you can stay with the 22 1GbE ports available on previous models. These can be used for iSCSI host attachment and/or IP-based replication.
IBM strives to make each new model of every storage device more energy efficient than the last.
The new XIV model is no exception. The original XIV, introduced in 2008, consumed 8.4 kVA fully loaded. The XIV Gen 3 model 114 consumed 7.0 kVA. This new model 214 consumes only 5.9 kVA!
It has been almost three years since my now-infamous post [Double Drive Failure Debunked: XIV Two Years Later]. Back then, the XIV offered only 1TB and 2TB drives, with rebuild times of less than 30 minutes for a 1TB drive and less than 60 minutes for a 2TB drive.
The new XIV Gen3 software 11.2 release, available for both the 114 and 214 models, can now rebuild a 2TB drive in less than 26 minutes, and a 3TB drive in less than 39 minutes. There is also support specific to Windows Server 2012 including thin provisioning, MSCS, VSS, and Hyper-V. See [Announcement Letter 113-013] for more details.
SmartCloud Storage Access
IBM is the first major storage vendor to offer a product of this kind, so understanding it may be a bit difficult.
The concept is simple. Rather than end-users having to ask IT every time they need some storage space, IBM created a self-service portal that frees up the IT department to work on more important transformational projects.
This is basically what people can do with "Public Cloud" storage service providers; IBM is now giving you the same capability in your "Private Cloud" storage deployment.
Here is the sequence of events. End users point their favorite web browser at the self-service portal and log in using their credentials stored in your Active Directory or LDAP server.
Once validated, the end-user can request new storage space, expand their existing space, or return space to the IT department. For new storage requests, users have a choice of storage classes, such as Gold, Silver and Bronze, defined in the Tivoli Storage Productivity Center (TPC), either stand-alone or in the SmartCloud Virtual Storage Center.
But wait! Do you want to give every end-user a blank check to provision their own storage? Most IT staff are horrified at the thought.
Knowing this, IBM has included an option to put in an approval process, based on the end-user and the amount of capacity requested. The approver can be the cloud administrator, or someone delegated for approvals, known as an environment owner.
For some users, policies may restrict the storage classes as well. For example, Fred can only have Silver or Bronze, but not Gold.
Once the approval is obtained, TPC then issues the appropriate commands to the appropriate SONAS or Storwize V7000 Unified device. SmartCloud Storage Access can do this for thousands of storage devices across dozens of geographically dispersed locations.
Before, the Cloud Admin had to configure storage pools of managed disks, define file systems, dole out file sets to hundreds or thousands of users with hard quotas, and then configure shares based on the protocols required, like CIFS, NFS, HTTPS, etc.
With SmartCloud Storage Access, the cloud admin still defines the pools and file systems, but then lets the self-service capability of the software create the file sets, set the quotas, and configure shares with the appropriate protocols. This greatly reduces the work on the IT staff, and greatly improves the turn-around time for end-users to get exactly what they want, when they need it.
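Stripped to its essentials, the self-service flow looks something like the sketch below. All names and thresholds are hypothetical; SmartCloud Storage Access exposes this through its portal and policies, not a Python API:

```python
ALLOWED_CLASSES = {"fred": {"silver", "bronze"},          # policy per user
                   "alice": {"gold", "silver", "bronze"}}
AUTO_APPROVE_GB = 500   # requests above this size go to an approver

def request_storage(user: str, storage_class: str, size_gb: int) -> str:
    if storage_class not in ALLOWED_CLASSES.get(user, set()):
        return "rejected: storage class not permitted for this user"
    if size_gb > AUTO_APPROVE_GB:
        return "pending: routed to the cloud administrator / environment owner"
    # Below the threshold: provision immediately (TPC would create the file
    # set, set the hard quota, and configure the CIFS/NFS share)
    return f"provisioned: {size_gb}GB of {storage_class} for {user}"

print(request_storage("fred", "gold", 100))    # rejected by class policy
print(request_storage("fred", "silver", 100))  # provisioned automatically
print(request_storage("alice", "gold", 2000))  # waits for an approval
```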
The next time you withdraw money from an ATM, fill up your gas tank at the self-service station, serve your own salad at the salad bar, and fill your own soft drink at the fast food restaurant, you will realize and appreciate that SmartCloud Storage Access is a similarly brilliant move for the IT staff.
Cloud administrators, environment owners, and end-users can all use SmartCloud Storage Access to monitor and report on storage usage.
Well it's Wednesday, and you know what that means... IBM Announcements.
(Normally, announcements are on Tuesdays, but we moved this one over to Wednesday to line up with our big launch event in Pinehurst, NC.)
A lot was announced today, so I decided to break it up into several separate posts. I will start with our Enterprise Systems: DS8870, TS7700 Release 3, and XIV Gen3.
Enterprise systems are the servers, storage and software at the core of an enterprise IT infrastructure. Enterprise systems enable a private cloud infrastructure at enterprise scale, with flexible service delivery models that provide dynamic efficiency for resource and workload management. They make sure critical data is always available across the enterprise, making it accessible in new ways so that actionable insights can be derived from advanced and operational analytics. They also provide ultimate security, ensuring the integrity of critical data while mitigating risk and providing assured compliance.
IBM System Storage DS8870® disk system
This new storage system is the next generation in IBM's DS8000 series, based on IBM's POWER7 chipset. Each CEC can have 2, 4, 8 or 16 cores. Like the DS8800, you can have a mix of 2.5-inch and 3.5-inch disk drives of different speeds and capacities, up to 1,536 drives in a four-frame configuration. The maximum cache is now 1TB usable. The combination of faster chipset and more cache can triple performance for some workloads!
All DS8870s ship standard with Full Disk Encryption (FDE-capable) drives. The problem in the past was that people would buy a DS8000 with non-FDE drives, later want to activate encryption, and discover that they had to swap out their drives for ones with the encryption chip built in. Now, all drives on the DS8870 have the encryption chip. This also allows Easy Tier sub-volume automated tiering to move encrypted data between all media types.
Flash optimization with DS8000 Easy Tier can improve performance up to 3 times with 3% of data on solid-state storage. Easy Tier is easy to deploy and runs automatically.
Support of the American National Standards Institute's (ANSI) T10 Data Integrity Field (DIF) standard. This is a feature that the mainframe has had for years, and is now being extended to distributed operating systems. The concept is simple. When sending data between server and storage, generate a checksum at the source, and then validate the checksum at the target. When you write a block of data, the server generates the checksum, and the DS8870 validates the checksum on arrival. When you read the data back, the DS8870 generates the checksum, and the server validates it on arrival. This ensures that data was not corrupted in between. There is a great write-up on IBM developerWorks: [End-to-end data protection using T10 standard data integrity field].
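The guard tag in each 8-byte DIF is a CRC computed over the 512-byte data block. For the curious, here is the T10-DIF CRC-16 (polynomial 0x8BB7) in a few lines, showing what the server and the DS8870 each compute and compare:

```python
def crc16_t10dif(block: bytes) -> int:
    """CRC-16/T10-DIF: polynomial 0x8BB7, zero init, no bit reflection."""
    crc = 0
    for byte in block:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

block = b"\xA5" * 512                  # one 512-byte data block
guard = crc16_t10dif(block)            # computed at the source (the server)
assert crc16_t10dif(block) == guard    # re-validated at the target (the array)
```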
Energy Efficient. The DS8870 consumes less energy than its predecessor, the DS8800. For example, a fully-configured four-frame DS8870 with 1,536 disk drives consumes only 23.2 kW, compared to 26.3 kW for the same number of drives in a DS8800. By comparison, the DS8700 with five frames and 1,024 drives consumed 29.2 kW.
Support for the new System z load balancing algorithm. System z Workload Manager now interacts with the DS8870 I/O Priority Manager to optimize designated Quality of Service (QoS) levels. We also have the fastest operational analytics solution, with DB2 List Prefetch cache optimization and DS8870 High Performance FICON (zHPF) integration. This solution increases DB2 query performance up to 11 times with disk, and up to 60 times with solid-state drives (SSD). File scans are up to 30 percent faster using DS8870 zHPF support for the sequential access methods (QSAM, BPAM, and BSAM).
VMware vStorage APIs for Array Integration (VAAI) support. Why should the IBM DS8000 series support VMware when IBM already offers great VMware support with the SAN Volume Controller (SVC), Storwize V7000 and XIV storage systems? Good question. This was hotly debated between development and marketing. Several DS8000 customers have already added SVC to provide full VMware VAAI support. As a consultant, I am neither development nor marketing, but felt it necessary to weigh in with my opinion. The DS8000 is a consolidation platform. According to one analyst survey, 22 percent of companies run on a single disk platform, so for the DS8000 to be the one, it needs to support VMware and exploit these special APIs.
Six Nines Availability. Critical enterprise systems need to deliver continuous data availability, or very close to it. IBM solutions can help deliver up to six "nines" of availability, or 99.9999 percent, when combining DS8000 Metro Mirror with GDPS HyperSwap. That's only about 30 seconds of downtime per year.
The TS7700 Release 3 represents a refresh of our existing virtual tape libraries. These are mainframe-only, offered in two models: the TS7720 is a disk-only device, and the TS7740 is a blended disk-and-tape solution.
Industry-standard hardware encryption. This applies to user data stored in the TS7700 system cache (disk), and to data transferred between TS7700 systems. This is especially important for regulations like the Payment Card Industry Data Security Standard (PCI-DSS). In previous models, the data would not be encrypted until it was moved off disk and written to tape. Now, it is encrypted the minute it lands on the disk cache, and stays encrypted as it is replicated from one TS7700 to another in the grid.
Up to 4 Million logical volume capacity. This is twice the previous support.
More physical capacity for TS7720 systems. The maximum capacity for the disk-only model is raised from 440TB to 620TB, a roughly 40 percent increase.
My latest book "Inside System Storage: Volume V" is now available!
I have published the fifth volume in my "Inside System Storage" series! Currently, it is only available in paperback. My editor, Susan Pollard, is hoping to have the eBook and hardcover versions ready for Cyber Monday. The foreword was written by Dr. Sondra Ashmore.
You can order this, and all my other books, in all formats, directly from my [Author Spotlight] page. The paperback will also be available soon from other online booksellers, search for ISBN 978-1-300-26223-7.
Improved Scalability. A new Multi-system Manager (MSM) server reduces the operational complexity of large and multi-site XIV deployments. Previously, admins connected directly to XIV boxes; if you had 10 admins logged in, then every XIV box was managing 10 admin conversations. The new MSM acts as a go-between: the admins connect to the MSM, and the MSM connects to the XIV boxes. The MSM polls and caches the status of each XIV, greatly increasing the number of XIV boxes that an admin can manage (a rough sketch of this pattern follows this list of enhancements).
Enhanced User Interface. The XIV Mobile Dashboard app for iPhone and iPad has been spiffed up. We also added support for IPsec and U.S. Government (USGv6) certification for administering the XIV over IPv6 networks. Finally, the GUI has been internationalized and translated into Japanese.
Enhanced Integration for Cloud. For OpenStack, XIV now offers a Nova-volume driver which provides persistent storage to OpenStack compute nodes. The Nova task force is now looking to move storage into its own project called Cinder. For VMware, XIV has full support for Site Recovery Manager (SRM) v4.1 and v5.0 releases. XIV now also supports the Microsoft System Center Virtual Machine Manager, which can manage Hyper-V, VMware and Citrix XenServer hypervisors.
Smaller entry point. The original XIV supported 1TB and 2TB drives, with the smallest offering being 27TB usable. When IBM introduced the XIV Gen3, the two choices were 2TB and 3TB disk drives. Unfortunately, this meant that the initial entry model grew to 55TB, and each additional module became more expensive as well. IBM will now offer 1TB drive support for XIV Gen3 at a lower price point; these are actually 2TB drives with half the capacity turned off.
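For the Improved Scalability item above, here is a rough sketch of the poll-and-cache pattern behind the MSM. Everything here, including the query_xiv_status helper, is a hypothetical stand-in of my own, not a real XIV API; it just shows why ten admins no longer cost ten sessions per box:

    import time

    def query_xiv_status(endpoint):
        return {"endpoint": endpoint, "state": "OK"}   # placeholder stub

    class MultiSystemManager:
        def __init__(self, xiv_endpoints, poll_interval=60):
            self.endpoints = xiv_endpoints
            self.poll_interval = poll_interval
            self.status_cache = {}   # endpoint -> (timestamp, status)

        def get_status(self, endpoint):
            ts, status = self.status_cache.get(endpoint, (0, None))
            if time.time() - ts > self.poll_interval:
                # One MSM-owned session per box, however many admins ask.
                status = query_xiv_status(endpoint)
                self.status_cache[endpoint] = (time.time(), status)
            return status   # every admin shares this one cached answer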
This week I am in Orlando, Florida for the IBM Edge conference. Thursday evening after all the other sessions, we had a Free-for-All, a Q&A panel across all storage topics, moderated by Scott Drummond. The conference officially ends at noon tomorrow, but for many, this is the last session, as people fly out Friday morning. Here are the questions and the panel responses during the session.
When will IBM unify their storage management between Mainframe z/OS and the distributed systems platforms?
IBM offers the Change and Configuration Management Database (CCMDB) for this purpose, with appropriate collectors for z/OS and distributed systems, but it hasn't sold well.
When will IBM devices have RESTful interfaces?
Both IBM Systems Director and IBM Tivoli Storage Productivity Center (TPC) offer RESTful APIs. IBM Systems Director can manage z/VM and Linux on System z, as well as Power Systems and x86-based distributed systems. Back in October 2008, IBM's Project Zero introduced RESTful interfaces to PHP and Groovy software running in WebSphere sMash environments, but we have not heard much about it since.
Will IBM TPC support NPIV on Power Systems?
TPC 5.1 has toleration support for this, showing the first port connection discovered, but not all connections, and we expect to retrofit this toleration to TPC 4.2.2 Fixpack 2. Hopefully, we will have full support in a future release.
We would like TPC for Replication to run on Linux for System z. We do not run z/OS at the disaster recovery site location.
Submit an IBM Request for Enhancement [RFE] for this. We have TPC for Replication on z/OS, as well as the distributed systems version that runs on Windows, Linux and AIX.
We have enhancements we would like to see for XIV and SONAS also, can we use the RFE process for this also?
Yes, submit the requirements for our review.
We heard the Statement of Direction that there would be storage integrated into the PureSystems. What exactly does that mean?
The PureSystems family of expert-integrated systems is based on a new chassis that has a front part, a midplane, and a back-part. All IBM System Storage products that support x86 and Power Systems can work with PureSystems. However, IBM does not yet offer storage that fits in the front part of the PureFlex chassis, but the Statement of Direction indicates that we intend to offer that option. Until then, the IBM Storwize V7000 is the storage of choice that can be put into the PureSystems rack, but outside the individual chasses.
We see some features like Real-Time Compression being put into the SAN Volume Controller (SVC), and other features put into the back-end devices. How are we supposed to make sense of this?
IBM's new pilot program, the SmartCloud Virtual Storage Center, aims to bring these all together. In general, we have design teams of system architects who determine which features go in which products, and prioritize accordingly.
We heard the IBM Executives during the opening session indicate that IBM's strategy involves supporting Big Data, but I haven't seen any storage that supports native Hadoop interfaces. Did I miss something?
First, I want to emphasize that Big Data is more than just MapReduce workloads. IBM offers Streams and BigInsights software to handle text, as well as Business Intelligence and Data Warehouse solutions for structured data. IBM's General Parallel File System (GPFS) has a Shared-Nothing-Cluster (SNC) mode with Hadoop interfaces that runs twice as fast as Hadoop's native HDFS file system. The storage products we recommend for Big Data are the SONAS and the DCS3700 disk systems, as both are optimized for the sequential workloads Big Data represents.
Every time we upgrade our SVC, we review the SDDPCM multi-pathing support list and see that we need to upgrade our back-end DS8000 microcode to the recommended levels. Can we get a list of combinations that have worked for other customers?
The advantage of storage hypervisors like SVC is that we can separate the multi-pathing driver from the back-end managed disk systems. You only need the SDDPCM to support the SVC, not the back-end devices. For the most part, SVC has not dropped support for any level of previously supported OS or multi-pathing software.
On SVC, when we migrate volumes (vDisks) from one storage pool to another, we would like to throttle this process during FlashCopy.
Yes, we have had several requests like this, which is why we now recommend using Volume Mirroring to perform migrations. In fact, the GUI wizard uses Volume Mirroring by default when migrations are performed. As for throttling, IBM has implemented the "I/O Priority Manager" that offers Quality of Service classes for the DS8000 and XIV Gen3, and might consider porting this to other products in our portfolio.
Sizing systems is an art. I just need to know if the DS8000 is running hot. Can we have the equivalent of "red lines" for our disk systems similar to automobile engines?
Storage Optimizer was added to TPC 4.2 to help in this area, identifying heat-maps for IBM DS8000, DS6000, DS5000, DS4000, SVC and Storwize V7000. We recommend you look at the performance violation reports.
How can we evaluate the characteristics of our workloads?
Yes, TPC can do this.
When we are replacing non-IBM storage with IBM, we don't have good tools to evaluate the non-IBM equipment. What is IBM doing for this?
IBM's Disk Magic modeling tool can take inputs from a variety of sources, including iostat from the servers themselves. You can also install a 90-day trial of TPC to help with this.
We really like EMC's "Grab" program, does IBM have one also?
Updating the Host Attachment Kit (HAK) for AIX is quite painful for the SVC. We prefer the method employed for the XIV.
Thanks for the feedback.
For SVC, we need to correlate disk with VMware and VIOS. Can we get vSCSI information on VIOS?
TPC 5.1 has this support, and we believe it has been retrofitted to TPC 4.2.2 Fixpack 2, coming out this month.
Currently, with SVC, when volumes are part of a Global Mirror (GM) session, we need to cancel GM, expand the source volume, expand the target volume, then restart GM. We would like this to be fully automated and non-disruptive.
Sounds like a great requirement to submit for the RFE process.
Can we get an RSS feed for the RFE community?
Yes, you can subscribe to it. You can also set up "Watch Lists".
Thanks to all of the IBM experts on the panel for their participation at this event!
Have you ever noticed that sometimes two movies come out that seem eerily similar to each other, released by different studios within months or weeks of each other? My sister used to review film scripts for a living; she would read ten of them and have to pick her top three favorites, and she tells me that scripts for nearly identical concepts came in all the time. Here are a few of my favorite examples:
1993-1994: [Wyatt Earp] and [Tombstone] were Westerns recounting the famed gunfight at the O.K. Corral. Tombstone, Arizona is near Tucson, and the gunfight is recreated fairly often for tourists.
1998: [Armageddon] and [Deep Impact] were a pair of disaster movies dealing with a large rock heading to destroy all life on earth. I was in Mazatlan, Mexico to see the latter, dubbed in Spanish as "Impacto Profundo".
1998: [A Bug's Life] and [Antz] were computer-animated tales of the struggle of one individual ant in an ant colony.
2000: [Mission to Mars] and [Red Planet] were sci-fi pics exploring what a manned mission to our neighboring planet might entail.
This is different from copy-cat movies that are re-made or re-imagined many years later based on the success of an original. Ever since my 2010 blog post [VPLEX: EMC's Latest Wheel is Round], comparing EMC's copy-cat product that came out seven years after IBM's SAN Volume Controller (SVC), I've noticed EMC doesn't talk about VPLEX that much anymore.
This week, IBM announced [XIV Gen3 Solid-State Drive support] and our friends over at EMC announced [VFCache SSD-based PCIe cards]. Neither of these should be a surprise to anyone who follows the IT industry, as IBM had announced its XIV Gen3 as "SSD-Ready" last year specifically for this purpose, and EMC has been touting its "Project Lightning" since last May.
Fellow blogger Chuck Hollis from EMC has a blog post [VFCache means Very Fast Cache indeed] that provides additional detail. Chuck claims the VFCache is faster than popular [Fusion-IO PCIe cards] available for IBM servers. I haven't seen the performance spec sheets, but typically SSD is four to five times slower than the DRAM cache used in the XIV Gen3. The VFCache's SSD is probably similar in performance to the SSD supported in the IBM XIV Gen3, DS8000, DS5000, SVC, N series, and Storwize V7000 disk systems.
Nonetheless, I've been asked my opinions on the comparison between these two announcements, as they both deal with improving application performance through the use of Solid-State Drives as an added layer of read cache.
(FTC Disclosure: I am both a full-time employee and stockholder of the IBM Corporation. The U.S. Federal Trade Commission may consider this blog post as a paid celebrity endorsement of IBM servers and storage systems. This blog post is based on my interpretation and opinions of publicly-available information, as I have no hands-on access to any of these third-party PCIe cards. I have no financial interest in EMC, Fusion-IO, Texas Memory Systems, or any other third party vendor of PCIe cards designed to fit inside IBM servers, and I have not been paid by anyone to mention their name, brands or products on this blog post.)
The solutions are different in that with the IBM XIV Gen3 the SSD is "storage-side", inside the external storage device, while the EMC VFCache is "server-side", a PCI Express [PCIe] card. Aside from that, both implement SSD as an additional read cache layer in front of spinning disk to boost performance (see the sketch below). Neither is an industry first: IBM has offered server-side SSD since 2007, and both IBM and EMC have offered storage-side SSD in many of their other external storage devices. The use of SSD as read cache has already been available in the IBM N series using [Performance Accelerator Module (PAM)] cards.
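Here is a minimal sketch of the common idea, an LRU read cache in front of slower media. This is my own simplification, not either vendor's code:

    from collections import OrderedDict

    class SSDReadCache:
        def __init__(self, disk, capacity_blocks):
            self.disk = disk                   # dict-like backing store
            self.cache = OrderedDict()         # LRU order: lba -> data
            self.capacity = capacity_blocks

        def read(self, lba):
            if lba in self.cache:              # hit: SSD speed
                self.cache.move_to_end(lba)
                return self.cache[lba]
            data = self.disk[lba]              # miss: spinning-disk speed
            self.cache[lba] = data
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False) # evict least-recently-used
            return data

        def write(self, lba, data):
            self.disk[lba] = data              # write-through to disk
            self.cache.pop(lba, None)          # keep the cache consistent

Note the write path: a storage-side cache can always invalidate stale entries, which is exactly what a server-side card with no knowledge of the back-end array cannot do, as the comparison below shows.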
IBM has offered cooperative caching synergy between its servers and its storage arrays for some time now. The predecessors of today's POWER7-based servers were the iSeries i5 servers, which used PCI-X IOP cards with cache to connect i5/OS applications to IBM's external disk and tape systems. To compete in this space, EMC created their own PCI-X cards to attach their own disk systems. In 2006, IBM did the right thing for our clients and fostered competition by entering into a [Landmark agreement] with EMC to [license the i5 interfaces]. Today, VIOS on IBM POWER systems allows a much broader choice of disk options for IBM i clients, including the IBM SVC, Storwize V7000 and XIV storage systems.
Can a little SSD really help performance? Yes! An IBM client running a [DB2 Universal Database] cluster across eight System x servers was able to replace an 800-drive EMC Symmetrix by putting eight Fusion-IO SSD cards in each server, for a total of 64 solid-state drives, saving money and improving performance. DB2's Data Partitioning Feature allows multi-system DB2 configurations using a grid-like architecture, similar to how XIV is designed. Most IBM System x and BladeCenter servers support internal SSD storage options, and many offer PCIe slots for third-party SSD cards. Sadly, you can't do this with a VFCache card: you can have only one VFCache card in each server, the data on it is unprotected, and it is suitable only for ephemeral data like transaction logs or other temporary files. With multiple Fusion-IO cards in an IBM server, you can configure a RAID rank across the SSD and use it for persistent storage like DB2 databases.
Here then is my side-by-side comparison:
Servers supported
- EMC VFCache: Selected x86-based models of Cisco UCS, Dell PowerEdge, HP ProLiant DL, and IBM xSeries and System x servers.
- IBM XIV Gen3 SSD Caching: All of these, plus any other blade or rack-optimized server currently supported by XIV Gen3, including Oracle SPARC, HP Itanium, IBM POWER systems, and even IBM System z mainframes running Linux.

Operating System support
- EMC VFCache: Linux RHEL 5.6 and 5.7, VMware vSphere 4.1 and 5.0, and Windows 2008 x64 and R2.
- IBM XIV Gen3 SSD Caching: All of these, plus all the other operating systems supported by XIV Gen3, including AIX, IBM i, Solaris, HP-UX, and Mac OS X.

Protocols supported
- IBM XIV Gen3 SSD Caching: FCP and iSCSI.

Vendor-supplied driver required on the server
- EMC VFCache: Yes, the VFCache driver must be installed to use this feature.
- IBM XIV Gen3 SSD Caching: No, IBM XIV Gen3 works with the native OS-based multi-pathing drivers.

External disk storage systems required
- EMC VFCache: None; it appears the VFCache has no direct interaction with the back-end disk array, so in theory the benefits are the same whether you use the VFCache card in front of EMC storage or IBM storage.
- IBM XIV Gen3 SSD Caching: XIV Gen3 is required, as the SSD slots are not available on older models of the IBM XIV.

Shared disk support
- EMC VFCache: No, VFCache has to be disabled and removed for vMotion to take place.
- IBM XIV Gen3 SSD Caching: Yes! XIV Gen3 SSD caching supports shared disk, including VMware vMotion and Live Partition Mobility.

Support for multiple servers and active/active server clusters
- IBM XIV Gen3 SSD Caching: An advantage of the storage-side approach is that the cache can be dynamically allocated to the busiest data from any server or servers.

Aware of changes made to back-end disk
- EMC VFCache: No; it appears the VFCache has no direct interaction with the back-end disk array, so any changes to the data on the array itself are not communicated back to the VFCache card to invalidate its cache contents.

Sequential access detection
- EMC VFCache: None identified. However, VFCache only caches blocks 64KB or smaller, so any sequential processing with larger blocks will bypass the VFCache.
- IBM XIV Gen3 SSD Caching: Yes! XIV algorithms detect sequential access and avoid polluting the SSD with these blocks of data. (See the sketch after this comparison.)

Number of SSD supported
- EMC VFCache: One, which seems odd, as IBM supports multiple Fusion-IO cards in its servers. However, this is not really a single point of failure (SPOF): an application experiencing a VFCache failure merely drops down to external disk array speed, and no data is lost since it is only read cache.
- IBM XIV Gen3 SSD Caching: 6 to 15 (one per XIV module) for high availability.

Pin data in SSD cache
- EMC VFCache: Yes; using split-card mode, you can designate a portion of the 300GB to serve as direct-attached storage (DAS). All data written to the DAS portion will be kept in SSD. However, since only one card is supported per server and the data is unprotected, this should only be used for ephemeral data like logs and temp files.
- IBM XIV Gen3 SSD Caching: No, there is no option to designate an XIV Gen3 volume as SSD-only. Consider using a Fusion-IO PCIe card as a DAS alternative, or another IBM storage system for that requirement.

Pre-sales estimating tools
- IBM XIV Gen3 SSD Caching: Yes! CDF and Disk Magic tools are available to help cost-justify the purchase of SSD based on workload performance analysis.
IBM has the advantage that it designs and manufactures both servers and storage, and can design optimal solutions for our clients in that regard.
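As promised in the comparison above, here is a hedged sketch of sequential-access detection. This is my own illustration of the general technique, not XIV's actual algorithm: if reads from a host keep landing where the previous read left off, treat the stream as a scan and bypass the SSD read cache so scans don't evict the random-access hot data:

    class SequentialDetector:
        def __init__(self, threshold=4):
            self.streams = {}       # host -> (next_expected_lba, run_length)
            self.threshold = threshold

        def is_sequential(self, host, lba, nblocks):
            expected, run = self.streams.get(host, (None, 0))
            run = run + 1 if lba == expected else 1
            self.streams[host] = (lba + nblocks, run)
            return run >= self.threshold

    detector = SequentialDetector()

    def should_cache_in_ssd(host, lba, nblocks):
        # Scans go straight to spinning disk; random reads stay cacheable.
        return not detector.is_sequential(host, lba, nblocks)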
Well, it's Tuesday again, and you know what that means: IBM Announcements! Typically, IBM System Storage has three to five major product launches per year. Making announcements every Tuesday would be too frequent, and having one big announcement every two or three years would be too far apart. Worldwide combined revenues for storage hardware and software grew double digits last year, comparing full-year 2011 to 2010, and I am sure that 2012 will be a good year for IBM as well! This week we have announcements for both disk and tape, but since 2012 is the 60th Diamond Anniversary for tape, I will start with tape systems first.
TS1140 support for JA/JJ tape cartridges
The TS1140 enterprise tape drive was announced at the [Storage Innovation Executive Summit] last May. It supported the new E07 format on three different new tape cartridges: model "JC" was a 4.0TB standard re-writeable tape, "JY" was a 4.0TB WORM tape, and "JK" was a 500GB economy tape that was less expensive but offered faster random access.
Generally, IBM has adopted an N-2 read, N-1 write [backward compatibility] policy. This means that the TS1140 can read E05- and E06-formatted tapes on JB and JX media, and can write the E06 format on JB and JX media. However, there is a lot of older JA and JJ media out there, especially as part of TS7740 environments, so IBM now supports TS1140 drives reading J1A-formatted JA and JJ media. This is not just for TS7740 environments; any TS1140 in a stand-alone or tape library configuration will support this as well. (A sketch of the general rule follows.)
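For those who like the rule in executable form, here is a small sketch of the general N-2 read, N-1 write policy applied to the 3592 formats. This is my illustration; the JA/JJ support announced above is a deliberate exception to it:

    FORMATS = ["J1A", "E05", "E06", "E07"]      # oldest to newest

    def can_read(drive_fmt, media_fmt):
        d, m = FORMATS.index(drive_fmt), FORMATS.index(media_fmt)
        return 0 <= d - m <= 2                  # N-2 read

    def can_write(drive_fmt, media_fmt):
        d, m = FORMATS.index(drive_fmt), FORMATS.index(media_fmt)
        return 0 <= d - m <= 1                  # N-1 write

    print(can_read("E07", "E05"))    # True: two generations back
    print(can_write("E07", "E06"))   # True: one generation back
    print(can_read("E07", "J1A"))    # False under the rule; now special-cased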
TS7700 R2.1 enhancements
IBM is a leader in tape virtualization with or without physical tape as back-end media. There are two hardware models of the [IBM Virtualization Engine TS7700 family] for the IBM System z mainframe. These virtual libraries are referred to as "clusters" in IBM literature.
The TS7740 Virtual Tape Library supports putting virtual tape images on disk first, then moving less-active data to physical tape, which I covered in my blog post [IBM Announcements - July 2007].
A unique feature of the TS7700 series is support for a Grid configuration, which allows up to six different TS7700 clusters to be grouped into a single instance image. These clusters can be in local or remote locations, connected via WAN or LAN connections.
R2.1 is the latest software release for IBM's successful TS7700 series.
True Sync Mode Copy. Before R2.1, the TS7700 offered "immediate mode copy": an application would write to a virtual tape, and when it was done with the tape and performed an unmount, the TS7700 would then replicate the tape contents to a secondary cluster on the grid. With True Sync Mode, data contents are replicated at implicit or explicit SYNC points instead. This is another IBM first in the IT tape industry. (A toy sketch of the difference follows this list of enhancements.)
Remote Mount Fail-over. When you have two or more TS7700 clusters in a grid configuration, you can do remote mounts. We've added fail-over multi-pathing up to four paths, so that if a link to a remote cluster is down, it will try one of the others instead.
Parallel Copies and Pre-Migration. One of my 19 patents is for the pre-migration feature of the IBM 3494 Virtual Tape Server (VTS), which carries forward into the TS7700 and is also used in the SONAS and Information Archive products. However, when the grid architecture was introduced, the engineers decided not to allow pre-migration and copies to secondary clusters to occur concurrently. Now these two operations can be done in parallel.
Merge two grids into one grid. Now that we can support up to six clusters in a single grid, we have people with 2-cluster and 3-cluster grids looking to merge them into one. Of course, all the logical and physical volume serials (VOLSER) must be unique!
Accelerate off JA/JJ Media. There are a lot of older JA and JJ media still in TS7700 libraries. This feature allows customers to speed up the transition to newer physical tape media.
Copy Export to E06 format on JB media. This one is clever, and I have to say I would never have thought of it. Let's say you have a TS7740 with TS1140 drives, but you want to export some virtual tapes to physical media to be sent to someone who only has a TS7740 connected to older TS1130 drives. Those older drives can't read the new JC media nor make sense of the E07 format. This feature will let you export to older JB media in E06 format so that it will be fully readable at the new location on the TS1130 drives.
Copy Export Merge service offering. Thanks to mergers and acquisitions, it is sometimes necessary to split off a portion of data from a TS7700 grid. In the past, IBM supported sending this export to a completely empty TS7700 library, but this new service offering allows the export to be merged into an existing TS7700 that already contains data.
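As mentioned under True Sync Mode Copy above, here is a toy sketch of the difference between immediate-mode copy and sync-mode copy. It is purely illustrative, not TS7700 microcode: immediate mode replicated only at unmount, while sync mode pushes the accumulated delta at every sync point, shrinking the window of unreplicated data:

    class VirtualTape:
        def __init__(self, peer):
            self.peer = peer          # list standing in for the grid peer
            self.records = []
            self.replicated = 0       # count of records the peer already has

        def write(self, record):
            self.records.append(record)

        def sync(self):
            # True Sync Mode Copy: replicate the delta at each sync point.
            self.peer.extend(self.records[self.replicated:])
            self.replicated = len(self.records)

        def unmount(self):
            self.sync()   # immediate-mode copy only ever replicated here

    peer = []
    tape = VirtualTape(peer)
    tape.write(b"rec1"); tape.sync()     # peer already holds rec1
    tape.write(b"rec2"); tape.unmount()  # rec2 follows at unmount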
LTFS-SDE support for Mac OS X 10.7 Lion
How do people still not know about the Linear Tape File System [LTFS]? I mentioned it in my blog posts back in 2010 in [April], [September], and [November]. Last year, LTFS won the [NAB Show Pick Hits Award] and an [Emmy] for revolutionizing the use of digital tape in television broadcasting.
In layman's terms, the Single Drive Edition [LTFS-SDE] allows a tape cartridge to be treated like a USB memory stick. It is supported on LTO5 tape drives for systems running various levels of Windows, Linux and Mac OS X. Prior to this announcement, IBM supported Leopard (10.5.6) and Snow Leopard (10.6), and now supports the Mac OS X 10.7 "Lion" release.
IBM first introduced Solid-State Drives (SSD) back in 2007, where they made the most sense: in [drive-for-drive replacements on blade servers in the IBM BladeCenter]. Blade servers typically have only a single drive, and SSD are both faster and use less energy on a drive-for-drive comparison, so this provided immediate benefit. Today, SSD are available on a variety of System x and POWER system servers.
In 2008, IBM rocked the world by being the first to reach [1 Million IOPS with Project Quicksilver]. This was an all-SSD configuration which many considered unrealistic (at the time), but it showed the potential for solid state drives.
When the [XIV Gen3 was announced - July 2011], each module included a 1.8-inch "SSD-Ready" slot in the back. IBM made a Statement of Direction that it would someday offer SSD drives to put in these slots. Today's announcement is that IBM has finalized the qualification process, so XIV Gen3 clients can now have 400GB of usable non-volatile SSD read cache added to each module. The SSD can be added to existing XIV Gen3 boxes in the field, or factory-installed in new shipments. If you have a 15-module XIV, that's 6TB of additional read cache! The SSD is entirely managed by the XIV Gen3, so you won't have to spend weeks reading manuals or specifying configuration parameters.
When you carve volumes on the XIV, you now have an option to enable or disable use of the SSD cache for each volume. Since XIV is being used in private and public cloud deployments, this offers the ability to deliver premium performance at premium prices. The use of SSD is complementary to IBM XIV Quality of Service (QoS) performance levels, which are determined per host instead.
Well, that's the first major IBM System Storage launch of 2012. Let me know what you think in the comment section below.
This week I was aboard the Queen Mary in Long Beach, California! This was a business event organized by [Key Info Systems], a valued IBM Business Partner. Key Info resells IBM servers, storage and switches.
The Queen Mary retired in 1967, and has been converted into a hotel and events venue. The locals just parked their car and walked on board, but I got to stay Tuesday through Thursday in one of the cabins. It was long and narrow, with round windows! There were four dials for the bathtub: Cold Salt, Hot Fresh, Cold Fresh, and Hot Salt.
Stepping onto the ship was like walking back in time! If you decide to go see it, check out the [Art Deco bar] at the front of the Promenade deck. The ship is still in the water, but is permanently docked. It is sectioned off to prevent ocean waves from affecting it, so we did not have the nauseating rocking back and forth normally associated with cruise ships.
(It is with a bit of irony that we were on the Queen Mary just days after the tragedy of the [Costa Concordia], the largest Italian cruise ship, which ran aground near Isola del Giglio. The captain will have to explain how he [fell into a lifeboat] before he had a chance to wait for everyone else to get safely off the shipwreck. He was certainly no [Captain Sully]! I am thankful that most of the 4,200 people aboard survived the incident.)
Lief Morin, Founder and Chief Executive for Key Info Systems, kicked off the meeting with highlights of 2011 successes. I have known Lief for years, as Key Info comes to the Tucson EBC on a frequent basis. This event was designed to give his sellers an update of what is the latest for each product line, and what to look forward to in the next 12-18 months.
The next speaker was from Vision Solutions, which provides High Availability solutions for IBM i on Power Systems. In 2010, the company nearly doubled in size with the acquisition of Double-Take, which provides data replication for x86 servers running Windows, Linux, VMware, Hyper-V and other hypervisors. The capabilities of Double-Take sounded similar to what IBM offers with [Tivoli Storage Manager FastBack] and [Tivoli Storage Manager for Virtual Environments].
Dinner at Sir Winston's
Rather than take the "Ghosts and Legends" tour, I opted for dinner at the Queen Mary's signature restaurant, Sir Winston's. This is a fancy place, so dress accordingly. If you want the Raspberry soufflé, order it early as it takes 30 minutes to prepare!
I presented the [Storwize V7000], including the new Storwize V7000 Unified configuration.
Storage is an important part of the Key Info Systems revenue stream, so I was glad to have lots of questions and interactions from the audience.
Murder Mystery Dinner
The acting troupe from [Dinner Detective] put on quite the show for us! With all that is going on in the world, it is good to laugh out loud every now and then.
In other murder mystery dinners I have participated in, each person is assigned a "character" and given a script of what to say and when to say it. This was different: we got to pick our own characters. I chose "Doctor Watson" from the Sherlock Holmes series. Several attendees thought it was a double reference to [IBM Watson], the computer that figured out the clues on the Jeopardy! television game show and has since been [put to work at Wellpoint] to help out the healthcare industry.
After the "murder" happened, two actors portraying policemen selected members of the audience to answer questions. We didn't get a script of what to say, so everyone had to "ad lib". I was singled out as a suspect, and had fun playing along in character. One of the attendees afterwards said he was impressed that I was able to fabricate such amusing and elaborate responses to their personal and embarassing questions. As a public speaker for IBM, I have had a lot of practice thinking quickly on my feet.
Fibre Channel and Ethernet Switches
The next two speakers gave us an update on Fibre Channel and Ethernet switches, and their thoughts on the inevitability of Fibre Channel over Ethernet (FCoE). One exciting new development is the [Brocade Network Subscription], which creates a flexible pay-per-use Ethernet port rental model for customers. This is especially timely given the Financial Accounting Standards Board's proposed [FASB Change 13], which affects how operating leases are carried on the balance sheet.
With the Brocade Network Subscription, you pay monthly for the ports you are using. Need more ports? Brocade will install the added gear. Using fewer ports? Brocade will take the equipment back. There is no term endpoint or residual value like traditional leasing, so when you are done using the equipment, you can give it back at any time. This is ideal for companies that may need a lot of Ethernet ports for the next 2-3 years, but then plan to taper down, and don't want to get stuck with a long-term commitment or capital depreciation.
The last speaker was from VMware. IBM is the #1 reseller of VMware, and VMware commands an impressive 81 percent market share in the x86 virtualization space. The speaker presented VMware's strategy going forward, which aligns well with IBM's own: to help companies cloud-enable their existing IT infrastructures, in preparation for eventual moves to hybrid or public cloud deployments.
Special thanks to Lief Morin for sponsoring this event, Raquel Hernandez from IBM for coordinating my travel, and Pete, Christina and Kendrell from Key Info Systems for organizing the activities!
Continuing my coverage of the 30th annual [Data Center Conference]. Here is a recap of the Monday afternoon sessions:
IBM Watson and your Data Center
Steve Sams, IBM VP of Site and Facilities Services, cleverly used IBM Watson as a way to explain how analytics can be used to help manage your data center. Sadly, most of the people at my table missed the connection between IBM Watson and analytics. How does answering a single trivia question in under three seconds relate to the ongoing operations of a data center? If you were similarly confused, take a peek at my series of IBM Watson blog posts:
The analyst who presented this topic was probably the fastest-speaking Texan I have ever met. He covered various aspects of Cloud Computing that people need to consider. Why hasn't Cloud taken off sooner? The analyst feels that Cloud Computing wasn't ready for us, and we weren't ready for Cloud Computing. The fundamentals of Cloud Computing have not changed, but we as a society have. Now that many end users are comfortable consuming public cloud resources, from Facebook to Twitter to Gmail, they are beginning to ask for similar services from their corporate IT.
Legal issues - see this hour-long video, [Cloud Law & Order], which discusses legal issues related to Cloud Computing.
Employee staffing - need to re-tool and re-train IT employees to start thinking of their IT as a service provider internally.
Hybrid Cloud - rather than struggle choosing between private and public cloud methodologies, consider a combination of both.
University of Rochester Medical Center (URMC) Cracks Code on Data Growth
Oftentimes, the hour is split: 30 minutes of the sponsor talking about various products, followed by 30 minutes of the client giving a user experience. Instead, I decided to let the client speak for 45 minutes, and then I moderated the Q&A for the remaining 15 minutes. This revised format seemed to be well-received!
University of Rochester is in New York, about 60 miles east of Buffalo, and 90 miles from Toronto across Lake Ontario. Six years ago, Rick Haverty joined URMC as the Director of Infrastructure services, managing 130 of the 300 IT personnel at the Medical Center. I met Rick back in May, when he presented at the IBM [Storage Innovation Executive Summit] in New York City.
URMC has DS8000, DS5000, XIV, SONAS, Storwize V7000 and is in the process of deploying Storwize V7000 Unified. He presented how he has used these for continuous operations and high availability, while controlling storage growth and costs.
The Q&A was lively, focusing on how his team manages 1PB of disk storage with just four storage administrators, his choice of a "Vendor Neutral Archive" (VNA), and his experiences with integration.
This was a great afternoon, and I was glad to get all my speaking gigs done early in the week. I would like to thank Rick Haverty of URMC for doing a great job presenting this afternoon!
Special thanks to Anthony Vandewerdt, who sent me his version of this presentation that he planned to present in Australia next week. I "smartened it up" (whatever the appropriate opposite of "dumbed it down" is) for the technical audience.
Recovery procedures for single and double drive failures. A double drive failure on an XIV typically involves less recovery effort than traditional RAID5-based disk systems, and in many cases results in no data loss whatsoever. I provided details on this in my blog post [Double Drive Failure Debunked: XIV Two Years Later], so no need to repeat myself here.
Replacing the Automatic Transfer Switch (ATS) non-disruptively. To support either single-phase or three-phase power sources, the XIV uses an ATS to take two independent power feeds and distribute the power to the three Uninterruptible Power Supplies (UPS).
Built-in Migration capability to copy data off other disk systems over to the XIV.
Configuring Synchronous and Asynchronous mirroring using either the Fibre Channel or Internet Protocol ports.
Optimizing the use of XIV for VMware, AIX and other operating systems.
The IBM XIV Storage System is quite popular in New Zealand, with four times more boxes sold per capita than the other countries in the Asia Pacific region. I covered both the A14 model as well as the new Gen3 model.
Business Continuity/Disaster Recovery (BC/DR) Update: Lessons, Planning, Solutions
My colleague Vic Peltz from IBM Almaden presented lessons learned from Hurricane Katrina and various other natural disasters. Unlike traditional presentations that focus on technology, Vic took a different approach, focusing on people and procedures. I was here last year when the earthquake hit Christchurch on the South Island, so I was well aware that BC/DR was top of mind for many of the attendees. Throughout this week, I have felt tremors, and many of the locals told me that these happen all the time.
Introduction to IBM Storwize V7000
I knew I was in trouble when the request for me to present Storwize sounded like something from [Mission Impossible]:
"Good morning, Mr. Pearson. Your mission, should you choose to accept it, involves presenting Storwize V7000 in Auckland, New Zealand. You may also present the Storwize V7000 Unified, but it is essential that you not cover the SAN Volume Controller or SONAS products from which they are based upon, as you will not have enough time. The audience is very technical, so be careful. As always, should any questions come up that you cannot answer, the conference coordinators will disavow all knowledge of your actions, nor reimburse your laundry charges. This message will self-destruct in five seconds."
Well, I accomplished my mission in 75 minutes. I was able to cover the block-only version of the IBM Storwize V7000, with support for clustering the control enclosures, expansion drawers and external storage virtualization. I then spent a few minutes on the block-and-file Storwize V7000 Unified, which adds support for CIFS, NFS, HTTPS, FTP and SCP protocols through two new "file modules", with integrated support for backup and anti-virus checking. I covered both IBM Easy Tier for sub-LUN automated tiering between Solid-State Drives (SSD) and spinning disk, as well as Active Cloud Engine for file-based movement between disk and tape.
An exciting new addition to the IBM storage line, the Storwize V7000 is a very versatile and solid choice as a midrange storage device. This session will cover a technical overview of the controller as well as its positioning within the overall IBM storage line.
xST04 - XIV Implementation, Migration and Optimization
Attend this session to learn how to integrate the IBM XIV Storage System in your IT environment. After this session, you should understand where the IBM XIV Storage system fits, and understand how to take full advantage of the performance capabilities of XIV Storage by using the massive parallelism of its grid architecture. You will learn how to migrate data onto the XIV and hear about real world client experiences.
xST05 - IBM's Storage Strategy in the Smarter Computing Era
Want to understand IBM's storage strategy better? This session will cover the three key themes of IBM's Smarter Computing initiative: Big Data, Optimized Systems, and Cloud. IBM System Storage strategy has been aligned to meet the storage efficiency, data protection and retention required to meet these challenges.
IBM offers encryption in a variety of ways. Data can be encrypted on the server, in the SAN switch, or on the disk or tape drive. This session will explain how encryption works, and explain the pros and cons with each encryption option.
sAC01 - IBM Information Archive for email, Files and eDiscovery
IBM has focused on data protection and retention, and the IBM Information Archive is the ideal product to achieve it. Come to this session to discuss archive solutions, compliance regulations, and support for full-text indexing and eDiscovery to support litigation.
sGE04 - IBM's Storage Strategy in the Smarter Computing Era
Want to understand IBM's storage strategy better? This session will cover the three key themes of IBM's Smarter Computing initiative: Big Data, Optimized Systems, and Cloud. IBM System Storage strategy has been aligned to meet the storage efficiency, data protection and retention required to meet these challenges.
sSM03 - IBM Tivoli Storage Productivity Center – Overview and Update
IBM's latest release of IBM Tivoli Storage Productivity Center is v4.2.2, a storage resource management tool that manages both IBM and non-IBM storage devices, including disk systems, tape libraries, and SAN switches. This session will give an overview of the various components of Tivoli Storage Productivity Center and provide an update on what's new in this product.
sSN06 - SONAS and the Smart Business Storage Cloud (SBSC)
Confused over IBM's Cloud strategy? Trying to figure out how IBM Storage plays in private, hybrid or public cloud offerings? This session will cover both the SONAS integrated appliance and the Smart Business Storage Cloud customized solution, and will review available storage services on the IBM Cloud.
sTA01 - Tape Storage Reinvented: What's New and Exciting in the Tape World?
This very informative session will keep you up to date with the latest tape developments. These include the TS3500 tape library connector Model SC1 (Shuttle). The shuttle enables extreme scalability of over 300,000 tape cartridges in a single library image by interconnecting multiple tape libraries with a unique, high speed transport system. The world's fastest tape drive, the TS1140 3592-E07, will also be presented. The performance and functionality of the new TS1140 as well as the new 4TB tape media will be discussed. Also, the IBM System Storage Linear Tape File System (LTFS), including the Library Edition, will be presented. LTFS allows a disk-like, drag-and-drop interface for tape. This is a not-to-be-missed session for all you tape lovers out there!
In December, I will be going to Gartner's Data Center Conference in Las Vegas, but the agenda has not been finalized, so I will save that for another post.
Jim is an IBM Fellow for IBM Systems and Technology Group. There are only 73 IBM Fellows currently working for IBM, and this is the highest honor IBM can bestow on an employee. He has been working with IBM since 1968.
He is tasked with predicting the future of IT and helping drive IBM's strategic direction. Cost pressures, requirements for growth, accelerating innovation and changing business needs all influence this direction.
IBM's approach is to integrate four different "IT building blocks":
Scale-up Systems, like the IBM System Storage DS8000 and TS3500 Tape Library
Resource Pools, such as IBM Storage Pools formed from managed disks by IBM SAN Volume Controller (SVC)
Integrated stacks and appliances: integrated software and hardware stacks, from the Storwize V7000 to full rack systems like the IBM Smart Analytics Server or CloudBurst.
Mobility of workloads and resources requires unified end-to-end service management. Fortunately, IBM is the #1 leader in IT Service Management solutions.
Jim addressed three myths:
Myth 1: IT Infrastructures will be homogenous.
Jim feels that innovations are happening too rapidly for this ever to happen, nor is it a desirable end-goal. Instead, finding the right balance of the IT building blocks might be a better approach.
Myth 2: All of your problems can be solved by replacing everything with product X.
Jim feels that the days of "rip-and-replace" are fading away. As IBM Executive Steve Mills said, "It isn't about the next new thing, but how well new things integrate with established applications and processes."
Myth 3: All IT will move to the Cloud model.
Jim feels a substantial portion of IT will move to the Cloud, but not all of it. There will always be exceptions where the old traditional ways of doing things might be appropriate. Clouds are just one of the many building blocks to choose from.
Jim's focus lately has been finding new ways to take advantage of virtualization concepts. Server, storage and network virtualization are helping address these challenges through four key methods:
Sharing - virtualization that allows a single resource to be used by multiple users. For example, hypervisors allow several guest VM operating systems share common hardware on a single physical server.
Aggregation - virtualization that allows multiple resources to be managed as a single pool. For example, SAN Volume Controller can virtualize the storage of multiple disk arrays and create a single storage pool. (See the sketch after this list.)
Emulation - virtualization that allows one set of resources to look and feel like a different set of resources. Some hypervisors can emulate different kinds of CPU processors, for example.
Insulation - virtualization that hides the complexity from the end-user application or other higher levels of infrastructure, making it easier to make changes of the underlying managed resources. For example, both SONAS and SAN Volume Controller allow disk capacity to be removed and replaced without disruption to the application.
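As an example of the aggregation concept listed above, here is a toy sketch of a storage pool carving a volume across multiple managed disks. The class names are my own, not SVC code; the point is that the host never knows which array its extents live on:

    from collections import Counter

    class ManagedDisk:
        def __init__(self, name, free_extents):
            self.name, self.free = name, free_extents

    class StoragePool:
        def __init__(self, mdisks):
            self.mdisks = mdisks

        def create_volume(self, extents_needed):
            allocation, remaining = [], extents_needed
            for md in self.mdisks:            # draw extents across arrays
                take = min(md.free, remaining)
                md.free -= take
                allocation += [md.name] * take
                remaining -= take
                if remaining == 0:
                    return allocation
            raise RuntimeError("pool exhausted")

    pool = StoragePool([ManagedDisk("DS8000", 10), ManagedDisk("V7000", 10)])
    print(Counter(pool.create_volume(15)))   # extents come from both arrays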
In today's economy, IT transformation costs must be low enough to yield near-term benefits. The long-term benefits are real, but near-term benefits are needed for projects to get started.
What sets IBM ahead of the pack? Here was Jim's list:
100 Years of Innovation, including being the U.S. Patent leader for the last 18 years in a row
IBM's huge investment in IBM Research, with labs all over the globe
Leadership products in a broad portfolio
Workload-optimized designs with integration from middleware all the way down to underlying hardware
Comprehensive management software for IBM and non-IBM equipment
Clod is an IBM Distinguished Engineer and Chief Technical Strategist for IBM System Storage. His presentation focused on trends and directions in the IT storage industry. Clod started with five workload categories:
To address these unique workload categories, IBM will offer workload-optimized systems. The four drivers on the design for these are performance, efficiency, scalability, and integration. For example, to address performance, companies can adopt Solid-State Drives (SSD). Unfortunately, these are 20 times more expensive dollar-per-GB than spinning disk, and the complexity involved in deciding what data to place on SSD was daunting. IBM solved this with an elegant solution called IBM System Storage Easy Tier, which provides automated data tiering for IBM DS8000, SAN Volume Controller (SVC) and Storwize V7000.
For scalability, IBM has adopted scale-out architectures, as seen in the XIV, SVC, and SONAS. SONAS is based on the highly scalable IBM General Parallel File System (GPFS). File systems are like wine: they get better with age. GPFS was introduced 15 years ago, and is more mature than many of the other "scalable file systems" from our competition.
Areal density advancements on Hard Disk Drives (HDD) are slowing down. During the 1990s, the IT industry enjoyed 60 to 100 percent annual improvement in areal density (bits per square inch). In the 2000s, this dropped to 25 to 40 percent, as engineers started to hit various physical limitations. Compounded over a decade, the difference is dramatic, as the back-of-envelope calculation below shows.
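A quick calculation, using 80 percent and 30 percent as representative mid-range rates from those two eras:

    # Compound representative annual areal-density gains over ten years.
    for label, rate in [("1990s at ~80%/year", 0.80), ("2000s at ~30%/year", 0.30)]:
        print(label, "->", round((1 + rate) ** 10), "times denser after a decade")
    # Prints roughly 357x for the 1990s rate versus about 14x for the 2000s rate.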
Storage efficiency features like compression have been around for a while, but are being deployed in new ways. For example, IBM invented the WAN compression needed for mainframe HASP, and WAN compression became an industry standard. Then IBM introduced compression on tape, and now tape compression is an industry standard. ProtecTIER and Information Archive combine compression with data deduplication to store backup and archive copies. Lastly, IBM now offers compression on primary data through the IBM Real-time Compression appliance.
For the rest of this decade, IBM predicts that tape will continue to enjoy (at least) 10 times lower dollar-per-GB than the least expensive spinning disk. Disk and Tape share common technologies, so all of the R&D investment for these products apply to both types of storage media.
For integration, IBM is leading the effort to help companies converge their SAN and LAN networks. By 2015, Clod predicts that more FCoE will be purchased than FCP. IBM is also driving integration between hypervisors and storage virtualization. For example, IBM already supports the VMware vStorage APIs for Array Integration (VAAI) in various storage products, including XIV, SVC and Storwize V7000.
Lastly, Clod could not finish a presentation without mentioning Cloud Computing. Cloud storage is expected to grow 32 percent CAGR from year 2010 to 2015. Roughly 10 percent of all servers and storage will be in some type of cloud by 2015.
As is often the case, I am torn between getting short posts out in a timely manner and spending more time to improve the length and quality of the information, posting much later. I will spread out the blog posts in consumable amounts over the next week or two, to achieve this balance.
Did IBM XIV force EMC's hand to announce VMAXe? Let's take a stroll down memory lane.
In 2008, IBM XIV showed the world that it could ship a Tier-1, high-end, enterprise-class system using commodity parts. Technically, prior to its acquisition by IBM, the XIV team had boxes in production since 2005. EMC incorrectly argued that this announcement meant the death of the IBM DS8000. Just because EMC was unable to figure out how to have more than one high-end disk product doesn't mean IBM or other storage vendors were equally challenged. Both the IBM XIV and DS8000 are Tier-1, high-end, enterprise-class storage systems, as are the IBM N series N7900 and the IBM Scale-Out Network Attached Storage (SONAS).
In April 2009, EMC followed IBM's lead with their own V-Max system, based on Symmetrix Enginuity code, but on commodity x86 processors. Nobody at EMC suggested that the V-Max meant the death of their other Symmetrix box, the DMX-4, which means that EMC proved to themselves that a storage vendor could offer multiple high-end disk systems. Hitachi Data Systems (HDS) would later offer the VSP, which also includes some commodity hardware.
In July 2009, analysts at International Technology Group published their TCO findings that IBM XIV was 63 percent less expensive than EMC V-Max, in a whitepaper titled [Cost/Benefit Case for IBM XIV Storage System: Comparing Costs for IBM XIV and EMC V-Max Systems]. Not surprisingly, EMC cried foul, feeling that since the V-Max had not yet been proven in the field, it was too soon to compare newly minted EMC gear with a mature product like XIV that had been in production accounts for several years. Big companies like to wait for "Generation 1" of any new product to mature a bit before they purchase.
To compete against IBM XIV's very low TCO, EMC was forced to either deeply discount their Symmetrix, or counter-offer with lower-cost CLARiiON, their midrange disk offering. An ex-EMCer that now works for IBM on the XIV sales team put it in EMC terms -- "the IBM XIV provides a Symmetrix-like product at CLARiiON-like prices."
(Note: Somewhere in 2010, EMC dropped the hyphen, changing the name from V-Max to VMAX. I didn't see this formally announced anywhere, but the new spelling seems to be the officially correct usage. A common marketing rule is that you should only rename failed products, so perhaps dropping the hyphen was EMC's way of keeping people from finding older reviews of the V-Max product.)
This month, IBM introduced the IBM XIV Gen3 model 114. The analysts at ITG updated their analysis, as there are now more customers with either or both products, allowing a more thorough comparison. Their latest whitepaper, titled [Cost/Benefit Case for IBM XIV Systems: Comparing Cost Structures for IBM XIV and EMC VMAX Systems], shows that IBM maintains its substantial cost savings advantage, representing 69 percent lower Total Cost of Ownership (TCO) than EMC, on average, over the course of three years.
In response, EMC announced its new VMAXe, following the naming convention EMC established for VNX and VNXe. Customers cannot upgrade VNXe to VNX, nor VMAXe to VMAX, so at least EMC was consistent in that regard. Like the IBM XIV and XIV Gen3, the new EMC VMAXe eliminated "unnecessary distractions" like CKD volumes and FICON attachment needed for the IBM z/OS operating system on IBM System z mainframes. Fellow blogger Barry Burke from EMC explains everything about the VMAXe in his blog post [a big thing in a small package].
So, you have to wonder, did IBM XIV force EMC's hand into offering this new VMAXe storage unit? Surely, EMC sales reps will continue to lead with the more profitable DMX-4 or VMAX, and then only offer the VMAXe when the prospective customer mentions that the IBM XIV Gen3 is 69 percent less expensive. I haven't seen any list or street prices for the VMAXe yet, but I suspect it is less expensive than VMAX, on a dollar-per-GB basis, so that EMC will not have to discount it as much to compete against IBM.
The new [IBM System Storage Tape Controller 3592 Model C07] is an upgrade to the previous C06 controller. Like the C06, the new 3592-C07 can have up to four FICON (4Gbps) ports, four FC ports, and connect up to 16 drives. The difference is that the C07 supports 8Gbps speed FC ports, and can support the [new TS1140 tape drives that were announced on May 9]. A cool feature of the C07 is that it has a built-in library manager function for the mainframe. On the previous models, you had to have a separate library manager server.
Crossroads ReadVerify Appliance (3222-RV1)
IBM has entered an agreement to resell [Crossroads ReadVerify Appliance], or "RV1" for short. The RV1 is a 1U-high server with software that gathers information on the utilization, performance and health for a physical tape environment, such as an IBM TS3500 Tape Library. The RV1 also offers a feature called "ArchiveVerify" which validates long-term retention archive tapes, providing an audit trail on the readability of tape media. This can be useful for tape libraries attached behind IBM Information Archive compliance storage solution, or the IBM Scale-Out Network Attached Storage (SONAS).
As an added bonus, Crossroads has great videos! Here's one, titled [Tape Sticks]
Linear Tape File System (LTFS) Library Edition Version 2.1
While the hardware is all refreshed, the overall "scale-out" architecture is unchanged. Kudos to the XIV development team for designing a system that is based entirely on commodity hardware, allowing new hardware generations to be introduced with minimal changes to the vast number of field-proven software features like thin provisioning, space-efficient read-only and writeable snapshots, synchronous and asynchronous mirroring, and Quality of Service (QoS) performance classes.
The new XIV Gen3 features an InfiniBand interconnect, faster 8Gbps FC ports, more iSCSI ports, a faster motherboard and processors, SAS-NL 2TB drives, and 24GB of cache memory per XIV module, all in a single-frame IBM rack that supports the IBM Rear Door Heat Exchanger. The results are a 2x to 4x boost in performance for various workloads.
Disclaimer: Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here. Your mileage may vary.
In a Statement of Direction, IBM also noted that the Gen3 modules are designed to be "SSD-ready", which means that you will be able to insert up to 500GB of solid-state drive capacity per XIV module, up to 7.5TB in a fully-configured 15-module frame. This SSD would act as an extension of the DRAM cache, similar to how Performance Accelerator Module (PAM) cards work on the IBM N series.
IBM will continue to sell XIV Gen2 systems for the next 12-18 months, as some clients prefer the smaller 1TB disk drives; the new Gen3 only comes with 2TB drives. There are some clients that love the XIV so much that they also use it for less stringent Tier 2 workloads. If you don't need the blazing speed of the new Gen3, perhaps the lower-cost XIV Gen2 might be a great fit!
As if I haven't said this enough times already: the IBM XIV is a Tier-1, high-end, enterprise-class disk storage system, optimized for mission-critical workloads on Linux, UNIX and Windows operating systems, and is the ideal cost-effective replacement for EMC Symmetrix VMAX, HDS USP-V and VSP, and HP P9000 series disk systems. Like the XIV Gen2, the XIV Gen3 can be used with IBM System i using VIOS, and with IBM System z mainframes running Linux, z/VM or z/VSE. If you run z/OS or z/TPF with Count-Key-Data (CKD) volumes and FICON attachment, go with the IBM System Storage DS8000 instead, IBM's other high-end disk system.
Dan Galvan, IBM VP of Marketing for Storage, was the next speaker. With 300 billion emails being sent per day, 4.6 billion cell phones in the world, and 26 million MRIs per year, there is going to be a huge demand for file-based storage. In fact, a recent study found that file-based storage will grow at 60 percent per year, compared to 15 percent growth for block-based storage.
Dan positioned IBM's Scale-out Network Attached Storage (SONAS) as the big "C:" drive for a company. SONAS offers a global namespace, a single point of management, with the ability to scale capacity and performance tailored for each environment.
The benefits of SONAS are great. We can consolidate dozens of smaller NAS filers, we can virtualize files across different storage pools, and increase overall efficiency.
Powering advanced genomic research to cure cancer
The next speaker was supposed to be Bill Pappas, Senior Enterprise Network Storage Architect, Research Informatics at [St. Jude Children’s Research Hospital]. Unfortunately, St. Jude is near the flooding of the Mississippi river, and he had to stay put. An IBM team was able to capture his thoughts on video that was shown on the big screen.
Thanks to the Human Genome Project, St. Jude is able to cure people. They see 5,700 patients per year and have an impressive 70 percent cure rate. The first genetic scan took 10 years; now the technology allows a genome to be mapped in about a week. Having this genomic information is driving vast strides in healthcare. It is the difference between fishing in a river and casting a wide net to catch all the fish in the Atlantic Ocean at once.
Recently, St. Jude migrated 250 TB of files from other NAS to an IBM SONAS solution. The SONAS can handle a mixed set of workloads, and allows internal movement of data from fast disk, to slower high-capacity disk, and then to tape. SONAS is one of the few storage systems that supports a blended disk-and-tape approach, which is ideal for the type of data captured by St. Jude.
IBM's own IT transformation
Pat Toole, IBM's CIO, presented the internal transformation of IBM's IT operations. He started in 2002, in the midst of IBM's effort to restructure its processes and procedures. They identified four major data sources: employee data, client data, product data, and financial data. They focused on understanding outcomes and setting priorities.
The result? A 3-to-1 payback on CIO investments. This allowed IBM to go from server sprawl to consolidated pooling of resources with the right levels of integration. In 1997, IBM had 15,000 different applications running across 155 separate datacenters. Today, they have reduced this to 4,500 applications and 7 datacenters. Their goal is to reduce further to 2,225 applications by 2015. Of these, only 250 are mission critical.
Pat's priorities today: server and storage virtualization, IT service management, cloud computing, and data-centered consolidation. IBM runs its corporate business on the following amount of data:
9 PB of block-based storage, SVC and XIV
1 PB of file-based storage, SONAS
15 PB of tape for backup and archive
Pat indicated that this environment is growing 25 percent per year, and that an additional 70-85 PB relates to other parts of the business.
By taking this focused approach, IBM was able to increase storage utilization from 50 to 90 percent, and to cut storage costs by 50 percent. This was done through thin provisioning, storage virtualization and pooling.
Looking forward to the future, Pat sees the following challenges: (a) that 120,000 IBM employees have smart phones and want to connect them to IBM's internal systems; (b) the increase in social media; and (c) the use of business analytics.
After the last session, people gathered in the "Hall of the Universe" for the evening reception, featuring food, drinks and live music. It was a great day. I got to meet several bloggers in person, and their feedback was that this was a very blogger-friendly event. Bloggers were given the same level of access as corporate executives and industry analysts.
Continuing my post-week coverage of the [Data Center 2010 conference], Wednesday morning started with another keynote session, followed by some break-out sessions.
Realities of IT Investment
Tighter budgets mean IT spending is increasingly treated as a business decision, and future investments will have to be funded from cost savings. The analysts report that 77 percent of IT decisions are made by CFOs. Most organizations are spending less now than back in 2008, before the recession.
How we innovate through IT is changing. In bad times, risk trumps return, yet only 21 percent of the audience has a formal "risk calculation" as part of their purchase plans.
Divestment matters as much as investment. Reductions in complexity have the greatest long-term cost savings, so try to retire at least 20 percent of your applications next year. With the advent of Cloud Computing, companies might even retire applications altogether and go entirely with public cloud offerings. Note that the years in this chart are grouped in half-decade increments, unlike the charts above.
It is important to identify functional dependencies and link your IT risks to business outcomes. Focus on making costs visible, and re-think how you communicate IT performance measurements and their impact to business. Try to change the culture and mind-set so that projects are not referred to as "IT projects" focused on technology, but rather they are "business projects" focused on business results.
Moving to the Cloud
Richard Whitehead from Novell presented the challenges in moving to Cloud Computing. There are risks and challenges in managing multiple OS environments. Users should have full access to all IT resources they need to do their jobs, and computing should be secure, compliant, and portable. He showed the shift he sees from physical servers to virtual and cloud deployments over the years 2010 to 2015.
Richard considers a "workload" to be the combination of operating system, middleware, and application. He then defines a "Business Service" as an appropriate combination of these workloads. For example, a business service that produces a particular report might involve a front-end application, talking through a business-logic workload server, talking to a back-end database workload server.
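As a concrete illustration of this composition, here is a tiny Python sketch modeling a business service as an ordered set of workloads. The product names and placements are hypothetical placeholders of my own choosing, not anything Novell prescribes:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    """Per the definition above: OS + middleware + application, one unit."""
    os: str
    middleware: str
    application: str
    placement: str = "physical"   # "physical", "virtual", or "cloud"

# A "business service" is then just an ordered set of cooperating workloads.
report_service = [
    Workload("SLES 11", "Apache", "report front-end", placement="cloud"),
    Workload("SLES 11", "JBoss", "business logic", placement="virtual"),
    Workload("SLES 11", "PostgreSQL", "back-end database", placement="physical"),
]

for w in report_service:
    print(f"{w.application:20s} runs {w.placement}")
```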
To address this challenge, Novell introduced "Intelligent Workload Management", called WorkloadIQ. This manages the lifecycle to build, secure, deploy, manage and measure each workload. Their motto is to take the mix of physical, virtual and cloud workloads and "make it work as one". IBM is a business partner with Novell, and I am a big fan of Novell's open-source solutions, including SUSE Linux.
A Funny Thing Happened on the Way to the Cloud....
Bud Albers, CTO of Disney, shared their success in deploying a hybrid cloud infrastructure. Everyone recognizes the Disney brand for movies and theme parks, but may not be aware that they also own ABC News and ESPN television, travel cruises, virtual worlds, and mobile sites, and deploy applications like Fantasy Football and Fantasy Fishing.
Two years ago, each Line of Business (LOB) owned its own servers; they were continually out of space, and power and HVAC issues forced tactical build-outs of their datacenters. By 2008, the answer to every question was Cloud Computing, which slices and dices like something invented by [Ron Popeil], supposedly with no investment or IT staff required. However, continuing to ask the CFO for CAPEX to purchase assets that were only 1/7th utilized was not working out either. That's right, over 75 percent of their servers were running at less than 15 percent CPU utilization.
The compromise was named "D*Cloud". Internal IT infrastructure would be positioned for Cloud Computing, by adopting server virtualization, implementing REST/SOAP interfaces, and replicating the success across their various Content Distribution Networks (CDN). Disney is no stranger to Open Source software, using Linux and PHP. Their [Open Source] web page shows tools available from Disney Animation studios.
At the half-way point, they had half their applications running virtualized on just 4 percent of their servers. Today, they run over 20 VMs per host and have 65 percent of their apps virtualized. Their target is 80 percent of their apps virtualized by 2014.
Bud used the analogy that public clouds will be the "gas stations" of the IT industry. People will choose the cheapest gas among nearby gas stations. By focusing on "Application management" rather than "VM instance management", Disney is able to seamlessly move applications as needed from private to public cloud platforms.
Their results? Disney is now averaging 40 percent CPU utilization across all servers. Bud feels they have achieved better scalability, better quality of service, and increased speed, all while saving money. Disney is spending less on IT now than it did in 2008.
UPMC Maximizes Storage Efficiency with IBM
Kevin Muha, UPMC Enterprise Architect & Technology Manager for Storage and Data Protection Services, was unable to present this in person, so Norm Protsman (IBM) presented Kevin's charts on the success at the University of Pittsburgh Medical Center [UPMC]. UPMC is Western Pennsylvania's largest employer, with roughly 50,000 employees across 20 hospitals, 400 doctors' offices and outpatient sites. They have frequently been rated one of the best hospitals in the US.
Their challenge was storage growth. Their storage environment had grown 328 percent over the past three years, to 1.6PB of disk and nearly 7 PB of physical tape. To address this, UPMC deployed four IBM TS7650G ProtecTIER gateways (2 clusters) and three XIV storage systems for their existing IBM Tivoli Storage Manager (TSM) environment. Since they were already using TSM over a Fibre Channel SAN, the implementation took only three days.
UPMC was backing up nearly 60TB per day, in a 15-hour backup window. Their primary data is roughly 60 percent Oracle, with the rest being a mix of Microsoft Exchange, SQL Server, and unstructured data such as files and images.
Their results? TSM reclamation is 30 percent faster. Hardware footprint reduced from 9 tiles to 5. Over 50 percent reduction in recovery time for Oracle DB, and 20 percent reduction in recovery of SQL Server, Microsoft Exchange, and Epic Cache. They average 24:1 deduplication overall, which can be broken down by data category as follows (see the quick arithmetic sketch after this list):
29:1 Cerner Oracle
18:1 EPIC Cache
10:1 Microsoft SQL Server
8:1 Unstructured files
6:1 Microsoft Exchange
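To see how per-category ratios blend into an overall figure, here is a back-of-envelope Python sketch. The nightly mix below is my own assumption, anchored only to the "roughly 60 percent Oracle" figure above; the blended ratio also depends on retention and full-versus-incremental policies, which is why UPMC's overall 24:1 can exceed this single-night toy number:

```python
# Only the per-category ratios come from UPMC's figures; the nightly mix is
# an illustrative assumption (36 of 60 TB is Oracle, i.e. roughly 60 percent).
nightly_tb = {"Cerner Oracle": 36, "EPIC Cache": 8, "SQL Server": 6,
              "Unstructured": 6, "Exchange": 4}
dedup_ratio = {"Cerner Oracle": 29, "EPIC Cache": 18, "SQL Server": 10,
               "Unstructured": 8, "Exchange": 6}

logical_tb = sum(nightly_tb.values())
physical_tb = sum(tb / dedup_ratio[cat] for cat, tb in nightly_tb.items())
print(f"{logical_tb} TB logical -> {physical_tb:.1f} TB physical "
      f"({logical_tb / physical_tb:.0f}:1 blended)")
```

The blended figure is sensitive to the mix: the more of the load that comes from the high-ratio Oracle data, and the longer the retention, the closer the blend gets to the headline number.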
UPMC still has lots of LTO-4 tapes onsite and offsite from before the change-over, so the next phase planned is to implement "IP-based remote replication" between ProtecTIER gateways to a third data center at extended distance. The plan is to only replicate the backups of production data, and not replicate the backups of test/dev data.
Wrapping up my seven-city romp through Australia and New Zealand, the final city was Canberra, which is the capital of Australia. As with Wellington, this meant many of the clients in the audience work in government agencies.
I had not taken any photos of Anna Wells, IBM Storage Sales Leader for ANZ, but I was able to find this caricature of her on a poster from an award she won within IBM.
I also did not have a picture of Robert, my videographer for this trip, who was always behind the camera himself.
The event went smoothly, just like the rest of them. Anna presented IBM's storage strategy and highlighted specific IBM storage solutions.
I had several emails asking if this event was called "Storage Optimisation Breakfast" because it was held in the mornings, or did we actually serve food at these events. The answer is we actually served food, a variation of the [Full English Breakfast], and most of the attendees gobbled it down while Anna spoke.
The fare was quite similar across all seven locations: scrambled or poached eggs on toast or an English muffin; ham, bacon or sausages; potatoes or mushrooms; and half of a baked tomato with bits of something toasted on top.
One morning, for a change, I decided instead to have a bowl of Weet-Bix cereal. Tasted like cardboard. I learned my lesson.
Next, we had Will Quodling, Manager of Infrastructure Operations at Australia's Department of Innovation, Industry, Science and Research. The Department consists of 3,200 staff who strive to encourage the sustainable growth of Australian industries. It develops policies and delivers programs to provide lasting economic benefits for Australia's competitive future, undertakes analysis, and provides services and advice to the business, science and research community. US President Barack Obama visited Australia and was interested in adopting a similar concept for the United States.
The department was looking to replace their existing IBM System Storage DS4800 disk systems with something more energy efficient. They selected the IBM XIV storage system, with an expected power savings of 10kW (roughly 87,600 kWh over a year of continuous operation). They are able to run 800 VMware images and 150 VDI workstations using storage on one XIV, replicate the data to a second XIV at a remote location, and have a third XIV for their Web-serving environment. They tested both single-drive and full-module failures, and experienced better-than-expected rebuild times, with no impact to users and no impact to performance.
After 17 days without a functioning government, Australia finally selected a prime minister. Her name is Julia Gillard, shown here. She won in part by promising to build a National Broadband Network (NBN) for the entire country, including the rural areas.
[Canberra] is an interesting town, a fully planned community designed in 1913 by Chicago's husband-and-wife architect team of Walter Burley Griffin and Marion Mahony Griffin. The location was selected as a half-way compromise between Australia's two largest cities, Sydney and Melbourne.
I would like to thank all the wonderful people in both Australia and New Zealand for making this a successful trip!
Continuing my romp through Australia and New Zealand, the last Storage Optimisation Breakfast of the week was Brisbane, which the locals here refer to as [Brisvegas], probably for all of the nightlife and casinos here.
The IBM office building is conveniently across the street from my hotel, the [Sofitel Brisbane]. The hotel also sits above central station, which allows quick transportation to the airport.
This time, we had a tag team of two people from James Cook University (JCU) to present their success story. First up was Kent Adams, the Director of Information Technology and Resources. JCU is recognized as being in the top 5 percent of universities worldwide, and as a result, their data storage requirements are growing at 400 percent per year! Their latest purchase, put out for RFP, was for at least 40TB that could handle at least 20,000 IOPS. The winning solution was an IBM XIV disk system.
Behind the scenes at all the events this week here in Australia were, from left to right: Natalie from GPJ Australia, the local subsidiary of the George P. Johnson events management firm we use in the States; Sonia Phillips, IBM Advisory Marketing Lead for Dynamic Infrastructure Optimisation and Cloud Computing, Demand Programs, for Australia and New Zealand; and Monika Lovgren, IBM Marketing and Execution Lead for Workload Optimised Systems for Australia.
The second speaker was Lee Askew, one of the Storage Administrators. Overall, the JCU team have been amazed at how well this box works. When they started it up, they expected to spend the next 24-36 hours formatting RAID ranks, but not with the XIV: it was ready in 2 minutes, and they started provisioning storage right away. Their own test of failing a drive found they can do a full rebuild to redundancy in 9 minutes; it took 8-36 hours on their previous disk array. Failing a full data module took only 75 minutes to bring back to redundancy.
After a long and tiring week, I was able to relax by walking through the beautiful King Edward park near the IBM building. It had a nice variety of plants and flowers, plus the surprise visit of a lizard about the length of my arm that crossed my path.
JCU also uses Asynchronous Mirror to replicate data to another XIV at distance. Again, as with all aspects of IBM XIV, the solution works as advertised. They are well positioned to grow from the 18,000 students they have today, to their target goal of 25,000 students they want to have by 2015.
Worldwide, IBM has done well with colleges and universities, and this was a great example of how partnering with IBM for your IT infrastructure can make a huge difference!
Continuing my coverage of the annual [2010 System Storage Technical University], I participated in the storage free-for-all, a long-time tradition started at the SHARE user group conference and carried forward to other IT conferences. The free-for-all is a Q&A panel of experts where anyone can ask any question. These are sometimes called "Birds of a Feather" (BOF). Last year, they were called "Meet the Experts", one for mainframe storage, and the other for storage attached to distributed systems. This year, we had two: one focused on Tivoli Storage software, and the second covering storage hardware. This post provides a recap of the Storage Hardware free-for-all.
The emcee for the event was Scott Drummond. The other experts on the panel included Dan Thompson, Carlos Pratt, Jack Arnold, Jim Blue, Scott Schroder, Ed Baker, Mike Wood, Steve Branch, Randy Arseneau, Tony Abete, Jim Fisher, Scott Wein, Rob Wilson, Jason Auvenshine, Dave Canan, Al Watson, and myself, yours truly, Tony Pearson.
What can I do to improve performance on my DS8100 disk system? It is running a mix of sequential batch processing and my medical application (EPIC). I have 16GB of cache and everything is formatted as RAID-5.
We are familiar with EPIC. It does not "play well with others", so IBM recommends you consider dedicating resources for just the EPIC data. Also consider RAID-10 instead for the EPIC data.
How do I evaluate IBM storage solutions in regards to [PCI-DSS] requirements?
Well, we are not lawyers, and some aspects of the PCI-DSS requirements are outside the storage realm. In March 2010, IBM was named ["Best Security Company"] by SC Magazine, and we have secure storage solutions for both disk and tape systems. IBM DS8000 and DS5000 series offer Full Disk Encryption (FDE) disk drives. IBM LTO-4/LTO-5 and TS1120/TS1130 tape drives meet FIPS requirements for encryption. We will provide you contact information on an encryption expert to address the other parts of your PCI-DSS specific concerns.
My telco will only offer FCIP routing for long-distance disk replication, but my CIO wants to use Fibre Channel routing over CWDM, what do I do?
IBM XIV, DS8000 and DS5000 all support FC-based long distance replication across CWDM. However, if you don't have dark fiber, and your telco won't provide this option, you may need to re-negotiate your options.
My DS4800 sometimes reboots repeatedly, what should I do?
This was a known problem with microcode level 760.28 when it detects a failed drive. You need to replace the failed drive and upgrade to the latest microcode.
Should I use VMware snapshots or DS5000 FlashCopy?
VMware snapshots are not free: you need to upgrade to the appropriate level of VMware to get this function, and it is limited to your VMware data only. The advantage of DS5000 FlashCopy is that it applies to all of your operating systems and hypervisors in use, and eliminates the consumption of VMware overhead. It provides crash-consistent copies of your data. If your DS5000 disk system is dedicated to VMware, then you may want to compare costs versus trade-offs.
Any truth to the rumor that Fibre Channel protocol will be replaced by SAS?
SAS has some definite cost advantages, but is limited to 8 meters in length. Therefore, you will see more and more usage of SAS within storage devices, but outside the box, there will continue to be Fibre Channel, including FCP, FICON and FCoE. The Fibre Channel Industry Alliance [FCIA] has a healthy roadmap for 16 Gbps support and 20 Gbps interswitch link (ISL) connections.
What about Fibre Channel drives, are these going away?
We need to differentiate the connector from the drive itself. Manufacturers are able to produce 10K and 15K RPM drives with SAS instead of FC connectors. While many have suggested that a "Flash-and-Stash" approach of SSD+SATA would eliminate the need for high-speed drives, IBM predicts there just won't be enough SSD produced to meet the performance needs of our clients, so 15K RPM drives, most likely with SAS rather than FC connectors, will continue to be deployed for the next five years.
We'd like more advanced hands-on labs, and to have the certification exams be more product-specific rather than exams for midrange disk or enterprise disk that are too wide-ranging.
Ok, we will take that feedback to the conference organizers.
IBM Tivoli Storage Manager is focused on disaster recovery from tape; how do I incorporate remote disk replication?
This is IBM's Unified Recovery Management, based on the seven tiers of disaster recovery established in 1983 at the GUIDE conference. You can combine local recovery with FastBack, data center server recovery with TSM and FlashCopy Manager, and combine that with IBM Tivoli Storage Productivity Center for Replication (TPC-R), GDOC and GDPS to manage disk replication across business continuity/disaster recovery (BC/DR) locations.
IBM Tivoli Storage Productivity Center for Replication only manages the LUNs, what about server failover and mapping the new servers to the replicated LUNs?
There are seven tiers of disaster recovery. The sixth tier is to manage the storage replication only, as TPC-R does. The seventh tier adds full server and network failover. For that you need something like IBM GDPS or GDOC that adds this capability.
All of my other vendor kit has bold advertising, prominent lettering, neon lights, bright colors, but our IBM kit is just black, often not even identifying the specific make or model, just "IBM" or "IBM System Storage".
IBM has opted for simplified packaging and our sleek, signature "raven black" color, and passes these savings on to you.
Bring back the SHARK fins!
We will bring that feedback to our development team. ("Shark" was the codename for IBM's ESS 800 disk model. Fiberglass "fins" were made as promotional items and placed on top of ESS 800 disk systems to help "identify them" on the data center floor. Unfortunately, professional golfer [Greg Norman] complained, so IBM discontinued the use of the codename back in 2005.)
Where is Infiniband?
Like SAS, Infiniband has limited distance, about 10 to 15 meters, which proved unusable for server-to-storage network connections across data center floorspace. However, there are now 150-meter optical cables available, and you will find Infiniband used in server-to-server communications and inside storage systems. IBM SONAS uses Infiniband internally today. The IBM DCS9900 offers Infiniband host attachment for HPC customers.
We need midrange storage for our mainframe please?
In addition to the IBM System Storage DS8000 series, the IBM SAN Volume Controller and IBM XIV are able to connect to Linux on System z mainframes.
We need "Do's and Don'ts" on which software to run with which hardware.
IBM [Redbooks] are a good source for that, and we prioritize our efforts based on all those cards and letters you send the IBM Redbooks team.
The new TPC v4 reporting tool involves a bit of a learning curve.
The new reporting tool, based on Eclipse's Business Intelligence Reporting Tool [BIRT], is now standardized across most of the Tivoli portfolio. Check out the [Tivoli Common Reporting] community page for assistance.
An unfortunate side-effect of using server virtualization like VMware is that it worsens management and backup issues. We now have many guests on each blade server.
IBM is the leading reseller of VMware, and understands that VMware adds a layer of complexity. Thankfully, IBM Tivoli Storage Manager backup uses a lightweight agent. IBM [System Director VMcontrol] can help you manage a variety of hypervisor environments.
This was a great interactive session. I am glad everyone stayed late Thursday evening to participate in this discussion.
Continuing coverage of my week in Washington DC for the annual [2010 System Storage Technical University], I attended several XIV sessions throughout the week. There were many XIV sessions; I could not attend them all. Jack Arnold, one of my colleagues at the IBM Tucson Executive Briefing Center, often presents XIV to clients and Business Partners. He covered all the basics of XIV architecture, configuration, and features like snapshots and migration. Carlos Lizarralde presented "Solving VMware Challenges with XIV". Ola Mayer presented "XIV Active Data Migration and Disaster Recovery".
Here is my quick recap of two in particular that I attended:
XIV Client Success Stories - Randy Arseneau
Randy reported that IBM had its best quarter ever for the XIV, reflecting an unexpected surge shortly after my blog post debunking the DDF myth last April. He presented successful case studies of client deployments. Many followed a familiar pattern. First, the client would only purchase one or two XIV units. Second, the client would beat the crap out of them, putting all kinds of stress from different workloads. Third, the client would discover that the XIV is really as amazing as IBM and IBM Business Partners have told them. Finally, in the fourth phase, the client would deploy the XIV for mission-critical production applications.
A large US bank holding company managed to get 5.3 GB/sec from a pair of XIV boxes for their analytics environment. They now have 14 XIV boxes deployed in mission-critical applications.
A large equipment manufacturer compared the offerings among seven different storage vendors, and IBM XIV came out the winner. They now have 11 XIV boxes in production and another four boxes for development/test. They have moved their entire VMware infrastructure to IBM XIV, running over 12,000 guest instances.
A financial services company bought their first XIV in early 2009 and now has 34 XIV units in production attached to a variety of Windows, Solaris, AIX and Linux servers and VMware hosts. Their entire Microsoft Exchange environment was moved from HP and EMC disk to IBM XIV, and they experienced noticeable performance improvement.
When a University health system replaced two competitive disk systems with XIV, their data center temperature dropped from 74 to 68 degrees Fahrenheit. In general, XIV systems are 20 to 30 percent more energy efficient per usable TB than traditional disk systems.
A service provider that had used EMC disk systems for over 10 years evaluated the IBM XIV versus upgrading to EMC V-Max. The three year total cost of ownership (TCO) of EMC's V-Max was $7 Million US dollars higher, so EMC counter-proposed CLARiiON CX4 instead. But, in the end, IBM XIV proved to be the better fit, and now the customer is happy having made the switch.
The manager of an information communications technology service provider was impressed that the XIV was up and running in just a couple of days. They now have over two dozen XIV systems.
Another XIV client had lost all of their Computer Room Air Conditioning (CRAC) units for several hours. The data center heated up to 126 degrees Fahrenheit, but the customer did not lose any data on either of their two XIV boxes, which continued to run in these extreme conditions.
Optimizing XIV Performance - Brian Cormody
This session was an update from the [one presented last year] by Izhar Sharon. Brian presented various best practices for optimizing the performance when using specific application workloads with IBM XIV disk systems.
Oracle ASM: Many people allocate lots of small LUNs, because this made sense a long time ago when all you had was just a bunch of disks (JBOD). In fact, many of the practices that DBAs use to configure databases across disks become unnecessary with XIV. With XIV, you are better off allocating a small number of very large LUNs. The best option was a 1-volume ASM pool with an 8MB AU stripe. A single LUN can contain multiple Oracle databases, and a single LUN can be used to store all of the logs.
VMware: Over 70 percent of XIV customers use it with VMware. For VMFS, IBM recommends allocating a small number of large LUNs; you can specify the maximum of 2181 GB. Do not use VMware's internal LUN extension capability, as IBM XIV already has thin provisioning, and it works better to let the XIV do this for you. XIV snapshots provide crash-consistent copies without all the overhead of VMware snapshots.
SAP: For planning purposes, the "SAPS" unit equates roughly to 0.4 IOPS for ERP OLTP workloads, and 0.6 IOPS for BW/BI OLAP workloads. In general, an XIV can deliver 25,000-30,000 IOPS at 10-15 msec response time, and 60,000 IOPS at 30 msec response time. With SAP, our clients have managed to get 60,000 IOPS at less than 15 msec.
Microsoft Exchange: Even my friends in Redmond could not believe how awesome XIV was during ESRP testing. Five Exchange 2010 servers connected to a pair of XIV boxes using the new 2TB drives managed 40,000 mailboxes at the high profile (0.15 IOPS per mailbox). Another client found that four XIV boxes (720 drives) were able to handle 60,000 mailboxes (5GB max), which would have taken over 4,000 drives if internal disk drives were used instead. Who said SANs are obsolete for MS Exchange?
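Here is a minimal Python sketch of the two sizing rules of thumb just mentioned; the 50,000-SAPS landscape in the example is a hypothetical figure of my own, used only to show the arithmetic:

```python
def saps_to_iops(saps, workload="oltp"):
    """Session rule of thumb: ~0.4 IOPS per SAPS for ERP OLTP,
    ~0.6 IOPS per SAPS for BW/BI OLAP."""
    return saps * {"oltp": 0.4, "olap": 0.6}[workload]

def mailbox_iops(mailboxes, iops_per_mailbox=0.15):
    """ESRP-style sizing: profile IOPS times mailbox count."""
    return mailboxes * iops_per_mailbox

# A hypothetical 50,000-SAPS ERP landscape, and the 40,000-mailbox example:
print(saps_to_iops(50_000, "oltp"))  # 20000.0 IOPS -- within one XIV's range
print(mailbox_iops(40_000))          # 6000.0 IOPS for the pair of XIV boxes
```

At the stated 0.15 IOPS-per-mailbox profile, the 40,000-mailbox example works out to about 6,000 IOPS, comfortably within the 25,000-30,000 IOPS range quoted above for a single XIV.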
Asynchronous Replication: IBM now has an "Async Calculator" to model and help design an XIV async replication solution. In general, dark fiber works best, and MPLS clouds had the worst results. The latest 10.2.2 microcode for the IBM XIV can now handle 10 Mbps at less than 250 msec roundtrip. During the initial sync between locations, IBM recommends setting the "schedule=never" to consume as much bandwidth as possible. If you don't trust the bandwidth measurements your telco provider is reporting, consider testing the bandwidth yourself with [iPerf] open source tool.
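For a first-pass feasibility check before reaching for the Async Calculator, a steady-state floor is easy to compute: the link must at least sustain the write-change rate. This Python sketch is my own back-of-envelope arithmetic, with the 70 percent usable-bandwidth fraction and the 50 GB/hour change rate both being illustrative assumptions; it says nothing about latency effects, which, as the session showed, matter a great deal:

```python
def min_link_mbps(change_rate_gb_per_hour, usable_fraction=0.7):
    """Steady-state floor: the link must at least sustain the write-change
    rate. usable_fraction is an assumed share of the nominal bandwidth
    left after protocol overhead and sharing -- adjust to taste."""
    change_mbps = change_rate_gb_per_hour * 8 * 1024 / 3600  # GB/h -> Mbit/s
    return change_mbps / usable_fraction

# e.g. a volume group whose data changes at 50 GB per hour:
print(f"need at least {min_link_mbps(50):.0f} Mbps nominal bandwidth")
```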
By combining multiple components into a single "integrated system", IBM can offer blended disk-and-tape storage solutions. This provides the best of both worlds: high-speed access using disk, with lower costs and more energy efficiency using tape. According to a study by the Clipper Group, tape can be 23 times less expensive than disk over a 5-year total cost of ownership (TCO).
I've also covered Hierarchical Storage Management, such as my post [Seven Tiers of Storage at ABN Amro], and my role as lead architect for DFSMS on z/OS in general, and DFSMShsm in particular.
However, some explanation of these two terms as they apply to SONAS might be warranted. In this case, ILM refers to policy-based file placement, movement and expiration on internal disk pools. This is actually a GPFS feature that has existed for some time, and was tested to work in this new configuration. Files can be individually placed on either SAS (15K RPM) or SATA (7200 RPM) drives. Policies can be written to move them from SAS to SATA based on size, age and days non-referenced.
HSM is also a form of ILM, in that it moves data from SONAS disk to external storage pools managed by IBM Tivoli Storage Manager. A small stub is left behind in the GPFS file system, indicating the file has been "migrated". Any attempt to read or update the file will cause it to be "recalled" from TSM back to SONAS for processing. The external storage pools can be disk, tape or any other media supported by TSM. Some estimate that as much as 60 to 80 percent of files on NAS are rarely referenced and could be stored on tape instead of disk, and now SONAS with HSM makes that possible.
This distinction allows the ILM movement to be done internally, within GPFS, and the HSM movement to be done externally, via TSM. Both take advantage of the GPFS high-speed policy engine, which can process 10 million files per node, running in parallel across all interface nodes. Note that TSM is not required for ILM movement. In effect, SONAS brings the policy-based management features of DFSMS for z/OS mainframe to all the other operating systems that access SONAS.
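To give a feel for what such policies decide, here is a toy Python sketch of the placement and migration logic described above. The thresholds are invented for illustration; real SONAS policies are written in the GPFS policy language, not Python:

```python
import time

DAY = 86_400  # seconds

def place_or_migrate(size_bytes, last_access_epoch, now=None):
    """Toy version of the two kinds of rules: internal ILM placement
    (SAS vs SATA pools) and external HSM migration (stub to TSM).
    All thresholds here are invented for illustration."""
    now = now if now is not None else time.time()
    idle_days = (now - last_access_epoch) / DAY
    if idle_days > 180:
        return "HSM: migrate to TSM external pool, leave stub in GPFS"
    if size_bytes > 100 * 2**20 or idle_days > 30:
        return "ILM: move to SATA (7200 RPM) internal pool"
    return "ILM: keep on SAS (15K RPM) internal pool"

print(place_or_migrate(5 * 2**20, time.time() - 2 * DAY))    # small, hot file
print(place_or_migrate(1 * 2**30, time.time() - 400 * DAY))  # large, cold file
```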
HTTP and NIS support
In addition to NFS v2, NFS v3, and CIFS, the SONAS v1.1.1 adds the HTTP protocol. Over time, IBM plans to add more protocols in subsequent releases. Let me know which protocols you are interested in, so I can pass that along to the architects designing future releases!
SONAS v1.1.1 also adds support for Network Information Service (NIS), a client/server based model for user administration. In SONAS, NIS is used for netgroup and ID mapping only. Authentication is done via Active Directory, LDAP or Samba PDC.
SONAS already had synchronous replication, which was limited in distance. Now, SONAS v1.1.1 provides asynchronous replication, using rsync, at the file level. This is done over a Wide Area Network (WAN) to any other SONAS at any distance.
Interface modules can now be configured with either 64GB or 128GB of cache. Storage now supports both 450GB and 600GB SAS (15K RPM) and both 1TB and 2TB SATA (7200 RPM) drives. However, at this time, an entire 60-drive drawer must be either all one type of SAS or all one type of SATA. I have been pushing the architects to allow each 10-pack RAID rank to be independently selectable. For now, a storage pod can have 240 drives, 60 drives of each type of disk, to provide four different tiers of storage. You can have up to 30 storage pods per SONAS, for a total of 7200 drives.
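As a quick capacity sketch under these rules (my own arithmetic, counting raw drive capacity before any RAID or GPFS overhead):

```python
DRIVES_PER_DRAWER = 60
MAX_DRAWERS_PER_POD = 4   # 240 drives per pod
MAX_PODS = 30             # up to 7200 drives per SONAS

def pod_raw_tb(drawer_drive_tb):
    """Raw capacity of one storage pod, where each 60-drive drawer is a
    single drive type per the v1.1.1 rule above."""
    assert len(drawer_drive_tb) <= MAX_DRAWERS_PER_POD
    return sum(DRIVES_PER_DRAWER * tb for tb in drawer_drive_tb)

# One drawer of each supported type: 450GB/600GB SAS and 1TB/2TB SATA.
print(pod_raw_tb([0.45, 0.6, 1.0, 2.0]), "TB raw in a four-tier pod")
print(MAX_PODS * MAX_DRAWERS_PER_POD * DRIVES_PER_DRAWER, "drives max per SONAS")
```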
An alternative to internal drawers of disk is a new "Gateway" iRPQ that allows the two storage nodes of a SONAS storage pod to connect via Fibre Channel to one or two XIV disk systems. You cannot mix and match: a storage pod is either all internal disk or all external XIV. A SONAS gateway combined with external XIV is referred to as a "Smart Business Storage Cloud" (SBSC), which can be configured off premises and managed by third-party personnel so your IT staff can focus on other things.
See the Announcement Letters for the SONAS [hardware] and [software] for more details.
For those who are wondering how this positions against IBM's other NAS solution, the IBM System Storage N series, the rule of thumb is simple. If your capacity needs can be satisfied with a single N series box per location, use that. If not, consider SONAS instead. For those with non-IBM NAS filers that realize now that SONAS is a better approach, IBM offers migration services.
Both the Information Archive and the SONAS can be accessed from z/OS or Linux on System z mainframe, from "IBM i", AIX and Linux on POWER systems, all x86-based operating systems that run on System x servers, as well as any non-IBM server that has a supported NAS client.
Continuing my coverage of the IBM Dynamic Infrastructure Executive Summit at the Fairmont Resort in Scottsdale, Arizona, we had a day full of main-tent sessions. Here is a quick recap of the sessions presented in the afternoon.
Taming the Information Explosion
Doug Balog, IBM Vice President and Disk Storage Business Line Executive, presented on the information explosion. Storage Admins are focused on managing storage growth and the related costs and complexity, proper forecasting and capacity planning, and backup administration. IBM's strategy is to help clients in the following areas:
Storage Efficiency - getting the most use out of the resources you invest
Service Delivery - ensuring that information gets to the right people at the right time
Data Protection - protecting data against unethical tampering, unauthorized access, and unexpected loss and corruption
Cory Vokey, Senior Manager of IT Systems Operations at Research in Motion, Ltd., the people who bring you BlackBerry phone service, provided a client testimonial for the XIV storage system. Before the XIV, RIM suffered high storage costs and per-volume software licensing. Over the past 15 months, RIM deployed XIV as a corporate standard. With the XIV, they have had 100 percent uptime, and enjoyed 50 percent cost savings compared to their previous storage systems. They have increased capacity 300 percent without any increase to their storage admin staff. XIV has greatly improved their procurement process, as they no longer need to "true up" their software licenses to the volume of data managed, a sore point with their previous storage vendor.
Mainframe Innovations and Integration
Tom Rosamilia, IBM General Manager of the System z mainframe platform, presented on mainframe servers. After 40 years, IBM's mainframe remains the gold standard, able to handle hundreds of workloads on a single server and facilitate immediate growth with scalability. The key values of the System z mainframe are:
Industry leading virtualization, management and qualities of service
A comprehensive portfolio for business intelligence and data warehousing
The premier platform for modernizing the enterprise
A large and growing portfolio of leading ISV application support
Steve Phillips, CIO of Avnet, presented the client testimonial for their use of a System z10 mainframe. Last year, Avnet was ranked Number One in Fortune's "Most Admired" list for technology distribution. Avnet distributes technology from 300 suppliers to over 100,000 resellers, ISVs and end users. They have modernized their system running SAP on System z with DB2 as the database management system, using the HiperSockets virtual LAN inside the server to communicate between logical partitions (LPARs). The folks at Avnet especially like the ability for on-the-fly re-assignment of capacity. This is used for end-of-quarter peak processing, and to adjust between test and development workloads. They also like the various special-purpose engines available:
z Integrated Information Processor (zIIP) for DB2 workloads
z Application Assist Processor (zAAP) for Java processing under WebSphere
Integrated Facility for Linux (IFL) for Linux applications
Cloud Computing: Real Capabilities, Real Stories
Mike Hill, IBM Vice President of Enterprise Initiatives, presented on IBM's leadership in cloud computing. He covered three trends that are driving IT today. First, there is a consumerization and industrialization of IT interfaces. Second, a convergence of the infrastructure that is driving a new focus on standards. Third, delivering IT as a service has brought about new delivery choices. The result is cloud computing, with on-demand self-service, ubiquitous network access, location-independent resource pooling, rapid elasticity, and flexible pricing models. Government agencies and businesses in Retail, Manufacturing and Utilities are leading the charge to cloud computing.
Mike covered IBM's five cloud computing deployment models, and shared his views on which workloads might be ready for cloud, and which may not be there yet. Organizations are certainly seeing significant results: reduced labor costs, improved capital utilization, reduced provisioning cycle times, improved quality through reduced software defects, and reduced end user IT support costs.
Mitch Daniels, Director of Technology at ManTech International Corporation, presented the customer testimonial for an IBM private cloud for development and test. ManTech chose a private cloud because they work with US Federal agencies like the Department of Defense, Homeland Security and the Intelligence community. The private cloud was built from:
IBM Cloudburst virtualized server environment
Tivoli Unified Process to document process and workflow
Tivoli Service Automation Manager to request, deliver and manage IT services
Tivoli Self-Service Portal and Service Catalog to allow developers and testers to request resources as needed
The result: ManTech saved 50 percent in labor costs, and can now provision development and test resources in minutes instead of weeks.
The IBM Transformation Story
Leslie Gordon, IBM Vice President of Application and Infrastructure Services Management, presented IBM's own transformation story, becoming the premier "Globally Integrated Enterprise". Based on IBM's 2009 CIO study, CIOs must balance three roles with seemingly contradictory demands:
Make innovations real: be both an insightful visionary and an able pragmatist
Raise the Return on Investment (ROI) of IT: determine savvy ways to create value, but also be ruthless at cutting costs
Expand the business impact of IT: be a collaborative business leader with the other C-level executives, but also an inspiring manager for the IT staff
In this case, IBM drinks its own champagne, using its own solutions to help run its internal operations. In 1997, IBM used over 15,000 applications, but this has been simplified down to 4500 applications today. Thousands of servers were consolidated to Linux on System z mainframes. The applications workloads were categorized as Blue, Bronze, Silver, and Gold to help prioritize the consolidation. IBM's key lessons from all this were:
Gather data at the business unit level, but build the business case from an enterprise view.
Start small and monitor progress continually, run operations concurrently with transformational projects
Address cultural and organizational changes by deploying transformation in waves
I found the client testimonials insightful. It is always good to hear that IBM's solutions work "as advertised" right out of the box.
While clients and IBM executives were in meetings today at the Fairmont resort here in Scottsdale, Arizona, I helped to set up the "Solutions Showcase". There were three stations:
David Ayd and I manned the first station, covering storage and server systems. From left to right: a fully-populated 15-module XIV storage system; my laptop running the XIV GUI; a two-socket 16-core POWER p770 server; a solid-state drive; a PS702 POWER blade; my book Inside System Storage: Volume I; an HX5 x86 blade; a four-socket 16-core x3850 M3 server with MAX5 memory extension; David's laptop with various POWER and System x presentations; and our Kaon V-Osk interactive plasma screen display.
Eric Kern manned the Smarter Clouds station. He had live guest images on the IBM Developer and Test cloud, which won the "Best of Interop" award up in Las Vegas this week. I covered IBM's cloud offering in my post [Three Things To Do on the IBM Cloud].
Smarter Data Centers
Ken Schneebeli manned the "Smarter Data Centers" station. He directed people out to the parking lot to see Brian Canney and the Portable Modular Data Center (PMDC). The one here is 8.5 feet by 8.5 feet by 40 feet in size and can be configured and deployed to any location in 12-14 weeks. It can fit any mix of IBM and non-IBM equipment, provided it fits the physical dimensions. Want a DS8700 disk system? The PMDC can hold up to 3-frame configurations of the DS8700. Want an eclectic mix of Sun, HP and Dell servers with HDS and EMC disk in your PMDC? IBM can do that too.
After we finished setup, we joined the clients at the "Welcome Reception" on the Lagoon Lawn. The weather was quite pleasant.
Special thanks to Jasdeep Purdhani, Lisa Gates, and Kelly Olson for their help organizing this event.
This week, Tuesday, Wednesday and Thursday, I am at the IBM Dynamic Infrastructure Executive Summit at the beautiful Fairmont Resort in Scottsdale, Arizona. This is a mix of indoor and outdoor meetings, one-on-ones with IBM executives, and main-tent sessions.
The Solutions Showcase will cover the following:
As the bar for performance gets higher and the need to manage, store and analyze massive amounts of information escalates, systems must scale to meet the needs of the business. On display will be the latest server and storage technology innovations, including POWER7, eX5, XIV, ProtecTIER, SONAS, and System z Solution Editions.
Smarter Data Centers
Today’s data centers are under extreme power and cooling pressures and space constraints. How can you get more out of your existing facility, while planning for future requirements? IBM energy efficiency consultants will tell you how you can reduce both CAPEX and OPEX costs and plan for future growth with consolidation and virtualization, energy efficient (energy star) equipment and modular data center solutions. Be sure to check out the IBM Portable Modular Data Center (PMDC) that fits in a standard shipping crate!
IBM's Cloud Computing solutions provide you with flexible, dynamic, secure and cost-efficient delivery choices: pay-per-use (by the hour, week or year) at IBM cloud centers around the world, conditioning your infrastructure to build your own private cloud, or out-of-the-box cloud solutions that are quick and easy to deploy. Which workloads are the best fit for cloud computing? How do you decide which cloud computing model is right for your organization? Cloud experts will talk about the options, give you recommendations based on your business objectives, and help you get started.
My colleagues Harley Puckett (left) and Jack Arnold (right) were highlighted in today's Arizona Daily Star, our local newspaper, as part of an article on IBM's success and leadership in the IT storage industry. With 1,400 employees here in Tucson, IBM is Southern Arizona's 36th largest employer.
Highlighted in the article:
DS8700 with the new Easy Tier feature
TS7650 ProtecTIER virtual tape library with data deduplication capability
LTO-5 tape and the new Long Term File System (LTFS)
XIV with the new 2TB drive, for a maximum per-rack usable capacity of 161 TB.
Continuing my discussion of this week's announcements of IBM storage products, I will cover the announcements that double storage capacity per footprint.
Linear Tape Open - Generation 5
IBM announced [LTO-5 drives], the TS2250 half-height and the TS2350 full-height drives, as well as support for LTO-5 drives in its various tape libraries: TS3100, TS3200, and TS3500. The native 1.5TB capacity of the LTO-5 cartridge is nearly double the 800GB capacity of the LTO-4 predecessor. With 2:1 compression, that's 3TB of data per cartridge! Performance-wise, the data transfer rate is 140 MB/sec, about 17 percent improvement over the 120MB/sec of the LTO-4 technology. The TS2250, TS2350, TS3100 and TS3200 now all offer dual-SAS ports for higher availability.
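As a quick sanity check on those figures (my own arithmetic, using decimal units as tape capacity numbers are quoted), here is how long it takes to stream a full cartridge at the sustained native rate:

```python
def hours_to_fill(capacity_tb, rate_mb_per_s):
    """Time to stream one full cartridge at the drive's sustained rate
    (decimal units, as tape capacity figures are quoted)."""
    return capacity_tb * 1_000_000 / rate_mb_per_s / 3600

print(f"LTO-5: {hours_to_fill(1.5, 140):.1f} hours per cartridge")  # ~3.0 h
print(f"LTO-4: {hours_to_fill(0.8, 120):.1f} hours per cartridge")  # ~1.9 h
```

With 2:1 compression, the effective capacity and the effective data rate both roughly double, so the time to fill a cartridge stays about the same three hours.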
LTO-5 carries forward many of the advancements of past generations. For example, LTO-5 continues the G-2/G-1 "backward compatibility" architecture, which means that the LTO-5 drive can read LTO-3 and LTO-4 cartridges, and can write LTO-4 cartridges. Like the LTO-3 and LTO-4, the same LTO-5 drive can read and write WORM or regular rewriteable cartridges. Like the LTO-4, the LTO-5 offers drive-level data-at-rest encryption. These use a symmetric 256-bit AES key, managed by IBM Tivoli Key Lifecycle Manager (TKLM).
One thing that is new in LTO-5 is the Long Term File System [LTFS] available on the TS2250 and TS2350, which allows you to treat the tape as a hierarchical file system, with files and folders, that you can drag and drop like any other file system.
XIV storage system
IBM [doubles the capacity of the XIV storage system] by supporting 2TB SATA drives. A full 15-module frame can hold up to 161TB of usable capacity; the smallest 6-module system with 2TB drives can hold up to 55TB of usable capacity. At this time, all of the drives in an XIV must be the same type, so we do not yet allow intermixing 1TB and 2TB drives in the same frame. The 2TB drives are more energy efficient, with a full 15-module frame consuming on average 6.7 kVA, compared to 7.8 kVA with the 1TB drives, a savings of roughly 14 percent. The performance is roughly the same, so if, for example, your application workload got 3700 IOPS per module with 1TB drives, it will get about the same 3700 IOPS per module with 2TB drives.
N series disk system
The EXN1000 and EXN3000 expansion drawers can now double in capacity with 2TB SATA drives. These can be attached to the N3000 entry-level models, such as the N3400.
DS3000 disk system
The DS3200, DS3300 and DS3400, as well as their related expansion drawers, now support 2TB SATA drives. This means that a single control unit with three expansion drawers can hold up to 96TB of raw capacity (48 drives at 2TB each).
DS8700 disk system
The DS8700 also now supports 2TB SATA drives, for a maximum raw capacity over 2PB, as well as new 600GB Fibre Channel drives. Now that IBM offers [Easy Tier] functionality, pairing Solid State Drives with slower, energy-efficient SATA disk makes a lot of financial sense.
That's a lot of announcements! As always, feel free to dig into each of the links to learn more about each product.
Now that the US Recession has been declared over, companies are looking to invest in IT again. To help you plan your upcoming investments, here are some upcoming events in April.
SNW Spring 2010, April 12-15
IBM is a Platinum Plus sponsor at this [Storage Networking World event], to be held April 12-15 at the Rosen Shingle Creek Resort in Orlando, Florida. If you are planning to go, here's what you can go look for:
IBM booth at the Solution Center featuring the DS8700 and XIV disk systems, SONAS and the Smart Business Storage Cloud (SBSC), and various Tivoli storage software
IBM kiosk at the Platinum Galleria focusing on storage solutions for SAP and Microsoft environments
IBM Senior Engineer Mark Fleming presenting "Understanding High Availability in the SAN"
IBM sponsored "Expo Lunch" on Tuesday, April 13, featuring Neville Yates, CTO of IBM ProtecTIER, presenting "Data Deduplication -- It's not Magic - It's Math!"
IBM CTO Vincent Hsu presenting "Intelligent Storage: High Performance and Hot Spot Elimination"
IBM Senior Technical Staff Member (STSM) Gordon Arnold presenting "Cloud Storage Security"
One-on-One meetings with IBM executives
I have personally worked with Mark, Neville, Vincent and Gordon, so I am sure they will do a great job in their presentations. Sadly, I won't be there myself, but fellow blogger [Rich Swain from IBM] will be at the event to blog about all the activities there.
Also coming up in April is an IBM webinar featuring the following speakers:
Jim Stallings - General Manager, Global Markets, IBM Systems and Technology Group
Scott Handy - Vice President, WW Marketing, Power Systems, IBM Systems and Technology Group
Dan Galvan - Vice President, Marketing & Strategy, Storage and Networking Systems, IBM Systems and Technology Group
Inna Kuznetsova - Vice President, Marketing and Sales Enablement, Systems Software, IBM Systems and Technology Group
Jeanine Cotter - Vice President, Systems Services, IBM Global Technology Services
The webinar will include client testimonials from various companies as well.
Dynamic Infrastructure Executive Summit, April 27-29
I will be there, at this 2-and-a-half-day [Executive Summit] in Scottsdale, Arizona, to talk to company executives. Discover how IBM can help you manage your ever-increasing amount of information with an end-to-end, innovative approach to building a dynamic infrastructure. You will learn about all of our innovative solutions and find out how you can effectively transform your enterprise for a smarter planet.
Well, it's Tuesday again, and that means IBM announcements! Right on the heels of our big storage launch on February 9, today IBM announced some exciting options for its modular disk systems. Let's take a look:
2TB SATA-II drives
That's right, you can now DOUBLE your capacity with 2TB SATA type-II drives on the DS3950, DS4200, DS4700, DS5020, DS5100 and DS5300 disk controllers, as well as the DS4000 EXP420, EXP520, EXP810, EXP5000 and EXP5060 expansion drawers. Here are the Announcement Letters for the [HVEC] and [AAS] ordering systems.
300GB Solid State Drives
IBM also announced 300GB solid state drives (SSD) for the DS5100 and DS5300. These are four times larger than the 73GB drives IBM offered last year, for those workloads that need high read IOPS, such as Online Transaction Processing (OLTP) and Enterprise Resource Planning (ERP) applications. Here is the [Announcement Letter].
New N series model N3400
For customers that need less than the minimum 21TB that our IBM Scale-Out Network Attached Storage (SONAS) provides, IBM offers the new N3400 unified storage disk system, with support for NFS, CIFS, iSCSI and FCP. This is a 2U-high, 12-drive model that can be expanded up to 136 drives, basically doubling all the stats of last year's N3300 model. Fellow blogger Rich Swain (IBM) does a great job recapping the speeds and feeds over on his blog [News and Information about IBM N series].
It also appears that the reports and rumors of the death of the DS6800 are premature. Don't believe misleading statements from competitors, such as those found written by fellow blogger BarryB (EMC), aka "the Storage Anarchist", in his latest post [Bring Out Your Dead] showing a cute little tombstone with "Feb 2010" on the bottom. Actually, if he had bothered to read IBM's [Announcement Letter], he would have realized that IBM plans to continue to sell these until June. Of course, IBM will continue to support both new and existing DS6800 customers for many years to come.
Technically, BarryB does not make any factually incorrect statements for me to correct on his blog. The idea that a product is "dead" is, of course, just opinion, and competitors poke fun at each other's announcements every day. One could argue that the EMC V-Max was "dead" after the ITG whitepaper [Cost/Benefit Case for IBM XIV Storage System - Comparing Costs for IBM XIV and EMC V-Max Systems] demonstrated back in July 2009 that the IBM XIV cost 63 percent less than a comparable EMC V-Max over a three-year total cost of ownership (TCO). The comparison was made with data from clients in a variety of industries, including manufacturing, health care, life sciences, telecommunications, financial services, and the public sector. This could explain why so many EMC customers are buying or investigating the IBM XIV and the rest of the IBM storage portfolio.
Continuing my coverage of the Data Center Conference, held Dec 1-4 in Las Vegas, an analyst presented the challenges of managing the rapid growth in storage capacity. Administrators' ability to manage storage is not keeping up with the growth. His recommendations:
Aim to just meet but not exceed service level agreements (SLAs)
Revisit past IT decisions. This includes evaluating your SAN to NAS ratio.
Embrace new technologies when they are effective; this includes cloud storage, solid state drives, and interconnect technologies like FCoCEE.
Follow vendor management best practices, update your vendor "short list".
A survey of the audience found:
20 percent have a single external storage vendor
39 percent have two external storage vendors
18 percent have three external storage vendors
23 percent have four or more external storage vendors
Throughout the industry, storage vendors are following IBM's example of using commodity hardware parts. This is because custom ASICs are expensive, and changes take a minimum of three months development time. Software-based implementations can be updated more quickly.
In terms of technologies deployed (SAN, NAS, Compliance Archive such as the IBM Information Archive, and Virtual Tape Library (VTL) such as the IBM TS7650 ProtecTIER data deduplication solution), here was the survey of the audience:
8 percent: SAN only
14 percent: SAN and NAS
23 percent: SAN, NAS and Compliance Archive
9 percent: SAN and VTL
14 percent: SAN, NAS and VTL
32 percent: SAN, NAS, Compliance Archive and VTL
Cost reduction techniques include thin provisioning, compression, data deduplication, Quality of Service tiers, and archiving. To reduce power and cooling requirements, switch from FC to SATA disk wherever possible, and move storage out of the data center, such as onto tape cartridges or cloud storage.
For emerging technologies, the survey found:
16 percent have already implemented a new emerging technology (IBM XIV, Pillar, 3PAR, etc.)
30 percent plan to do so in 12-24 months
4 percent plan to do so in 24-48 months
50 percent have no plans, and will continue to stick with traditional storage technologies
As for adopting Cloud storage, the survey found:
14 percent already have
31 percent plan to use Cloud storage in 12-24 months
13 percent plan to use Cloud storage in 24-48 months
42 percent have no plans to adopt Cloud storage
My take-away from this is that many companies are still "exploring" the different options available to them. Fortunately, IBM offers a broad portfolio of complete end-to-end solutions to help you acquire the right mix of technologies, optimized for your workloads.
Well, it's Tuesday again, and we have more IBM announcements.
XIV asynchronous mirror
For those not using XIV behind SAN Volume Controller, [XIV now offers native asynchronous mirroring] support to another XIV far, far away. Unlike other disk systems that are limited to two or three sites, an XIV can mirror to up to 15 other sites. The mirroring can be at the individual volume, or a consistency group of multiple volumes. Each mirror pair can have its own recovery point objective (RPO). For example, a consistency group of mission critical application data might be given an RPO of 30 seconds, but less important data might be given an RPO of 20 minutes. This allows the XIV to prioritize packets it sends across the network.
As with XIV synchronous mirror, this new asynchronous mirror feature can send the data over either its Fibre Channel ports (via FCIP) or its Ethernet ports.
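To illustrate what per-pair RPOs imply for scheduling, here is a toy Python sketch that orders replication work by how soon each pair's RPO would be violated. This is my own illustration of the prioritization idea, not XIV's actual scheduler:

```python
from dataclasses import dataclass, field
import heapq
import time

@dataclass(order=True)
class MirrorPair:
    deadline: float                 # moment the RPO would next be violated
    name: str = field(compare=False)
    rpo_seconds: int = field(compare=False)

def replication_order(pairs, now=None):
    """Pairs whose RPO deadline is nearest get replicated first."""
    now = now if now is not None else time.time()
    heap = [MirrorPair(now + rpo, name, rpo) for name, rpo in pairs]
    heapq.heapify(heap)
    return [heapq.heappop(heap).name for _ in range(len(heap))]

pairs = [("reporting volume", 1200),    # RPO of 20 minutes
         ("mission-critical CG", 30)]   # RPO of 30 seconds
print(replication_order(pairs))  # ['mission-critical CG', 'reporting volume']
```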
SAN directors and routers
The IBM System Storage SAN384B and SAN768B directors now offer [two new blades!]
A 24-port FCoCEE blade, where each port can handle 10Gb Converged Enhanced Ethernet (CEE). CEE can be used to transmit Fibre Channel, TCP/IP, iSCSI and other Ethernet protocols. This connects directly to a server's converged network adapter (CNA) cards.
A 24-port mixed blade, with 12 FC ports (1Gbps, 2Gbps, 4Gbps, 8Gbps), 10 Ethernet ports (1GbE) and 2 Ethernet ports (10GbE). This would connect to traditional server NIC, TOE and HBA cards, as well as traditional NAS, iSCSI and FC-based storage devices.
IBM also announced the IBM System Storage [SAN06B-R Fibre Channel router]. This has 16 FC ports (1Gbps up to 8Gbps) and six Ethernet ports (1GbE), with support for both FC routing and FCIP extended distance.
With the holiday season coming up at the end of the year, now is a great time to ask Santa for a new shiny pair of XIV systems, and some extra networking gear to connect them.