Tony Pearson is a Master Inventor and Senior IT Architect for the IBM Storage product line at the
IBM Systems Client Experience Center in Tucson, Arizona, and a featured contributor
to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is
the author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services.
"The postings on this site solely reflect the personal views of each author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management."
(c) Copyright Tony Pearson and IBM Corporation.
All postings are written by Tony Pearson unless noted otherwise.
Tony Pearson is employed by IBM. Mentions of IBM Products, solutions or services might be deemed as "paid
endorsements" or "celebrity endorsements" by the US Federal Trade Commission.
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
For most of us in the United States, it is cold out there, so you better "bundle up!"
Today, IBM announces the [IBM Storwize Rapid Application Storage] bundle, an integrated solution that improves storage efficiency and application availability. It comes in two offerings:
Basic
The "Basic" offering includes a single IBM Storwize V7000 controller enclosure, and three year warranty package that includes software licenses for IBM Tivoli Storage FlashCopy Manager (FCM) and IBM Tivoli Storage Productivity Center for Disk - Midrange Edition (MRE). Planning, configuration and testing services for the software are included and can be performed by either IBM or an IBM Business Partner.
Standard
The "Standard" offering allows for multiple IBM Storwize V7000 enclosures, provides three year warranty package for the FCM and MRE software, and includes implementation services for both the hardware and the software components. These services can be performed by IBM or an IBM Business Partner.
Why bundle? Here are the key advantages for these offerings:
Increased storage utilization! First introduced in 2003, IBM SAN Volume Controller is able to improve storage utilization by 30 percent through virtualization and thin provisioning. IBM Storwize V7000 carries on this tradition. Space-efficient FlashCopy is included in this bundle at no additional charge and can reduce the amount of storage normally required for snapshots by 75 percent or more. IBM Tivoli Storage FlashCopy Manager can manage these FlashCopy targets easily.
Improved storage administrator productivity! The new IBM Storwize V7000 Graphical User Interface can help improve administrator productivity by up to 2 times compared to other midrange disk solutions. IBM Tivoli Storage Productivity Center for Disk - Midrange Edition provides real-time performance monitoring for faster analysis.
Increased application performance! This bundle includes the "Easy Tier" feature at no additional charge. Easy Tier is IBM's implementation of sub-LUN automated tiering between Solid-State Drives (SSD) and spinning disk. Easy Tier can help improve application throughput by up to 3 times and response time by up to 60 percent, and can help meet or exceed application performance levels with its internal "hot spot" analytics.
Increased application availability! IBM Tivoli Storage FlashCopy Manager provides easy integration with existing applications like SAP, Microsoft Exchange, IBM DB2, Oracle, and Microsoft SQL Server. Reduce application downtime to just seconds with backups and restores using FlashCopy. The built-in online migration feature, included at no additional charge, allows you to seamlessly migrate data from your old disk to the new IBM Storwize V7000.
Significantly reduced implementation time! This bundle will help you cut implementation time in half, with little or no impact on storage administration staff. This will help you realize your return on investment (ROI) much sooner.
A client asked me to explain "Nearline storage" to them. This was easy, I thought, as I started my IBM career on DFHSM, now known as DFSMShsm for z/OS, which was created in 1977 to support the IBM 3850 Mass Storage System (MSS), a virtual storage system that blended disk drives and tape cartridges with robotic automation. Here is a quick recap:
Online storage is immediately available for I/O. This includes DRAM memory, solid-state drives (SSD), and always-on spinning disk, regardless of rotational speed.
Nearline storage is not immediately available, but can be brought online quickly without human intervention. This includes optical jukeboxes and automated tape libraries, as well as spin-down massive array of idle disks (MAID) technologies.
Offline storage is not immediately available, and requires some human intervention to bring online. This can include USB memory sticks, CD/DVD optical media, shelf-resident tape cartridges, or other removable media.
These terms and their definitions have been used for decades, and are consistent with or at least similar to definitions I found on [Wikipedia], [Webopedia], [WiseGEEK], and [SearchStorage].
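To make the taxonomy concrete, here is a minimal sketch (in Python, purely illustrative) of the decision rule behind these three terms:

```python
def classify_storage(immediately_available: bool, needs_human: bool) -> str:
    """Classify media per the online/nearline/offline definitions above."""
    if immediately_available:
        return "online"       # DRAM, SSD, always-on spinning disk
    if not needs_human:
        return "nearline"     # automated tape library, optical jukebox, MAID
    return "offline"          # shelf-resident tape, USB stick, CD/DVD

print(classify_storage(True, False))   # online
print(classify_storage(False, False))  # nearline (a robot can mount it)
print(classify_storage(False, True))   # offline (a person must fetch it)
```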
Sadly, it appears a few storage manufacturers and vendors have been misusing the term "Nearline" to refer to "slower online" spinning disk drives. I found this [June 2005 technology paper from Seagate] and this [2002 NetApp Press Release]; the latter includes this contradiction about their "NearStore" disk array. Here is the excerpt:
"Providing online access to reference information—NetApp nearline storage solutions quickly retrieve and replicate reference and archive information maintained on cost-effective storage—medical images, financial models, energy exploration charts and graphs, and other data-intensive records can be stored economically and accessed in multiple locations more quickly than ever"
Which is it, "online access" or "nearline storage"?
If a client asked why slower drives consume less energy or generate less heat, I could explain that; but if they ask why slower drives must have SATA connections, that is a different discussion. The speed of a drive and its connection technology are, for the most part, independent: a 10K RPM drive can be made with an FC, SAS or SATA connection.
I am opposed to using "Nearline" just to distinguish between four-digit speeds (such as 5400 or 7200 RPM) versus "online" for five-digit speeds (10,000 and 15,000 RPM). The difference in performance between 10K RPM and 7200 RPM spinning disks is minuscule compared to the differences between solid-state drives and any spinning disk, or the difference between spinning disk and tape.
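A quick back-of-the-envelope calculation supports this point. Average rotational latency is half a revolution, so the gap between 7200 RPM and 10K RPM is about a millisecond, while SSD access times sit one or two orders of magnitude below any spinning disk. This sketch uses the standard textbook formula, not vendor measurements:

```python
# Average rotational latency is half a revolution: 60,000 ms/min / RPM / 2.
def avg_rotational_latency_ms(rpm: int) -> float:
    return 60_000 / rpm / 2

for rpm in (5400, 7200, 10_000, 15_000):
    print(rpm, round(avg_rotational_latency_ms(rpm), 2), "ms")
# 5400 -> 5.56 ms, 7200 -> 4.17 ms, 10000 -> 3.0 ms, 15000 -> 2.0 ms,
# versus roughly 0.1 ms or less for an SSD; the RPM gaps are small by comparison.
```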
I am also opposed to using the term "Nearline" for online storage systems just because they are targeted for the typical use cases like backup, archive or other reference information that were previously directed to nearline devices like automated tape libraries.
Can we all just agree to refer to drives as "fast" or "slow", or give them RPM rotational speed designations, rather than try to incorrectly imply that FC and SAS drives are always fast, and SATA drives are always slow? Certainly we don't need new terms like "NL-SAS" just to represent a slower SAS connected drive.
Continuing my week in Washington DC for the annual [2010 System Storage Technical University], I presented a session on Storage for the Green Data Center, and attended a System x session on Greening the Data Center. Since they were related, I thought I would cover both in this post.
Storage for the Green Data Center
I presented this topic in four general categories:
Drivers and Metrics - I explained the three key drivers for consuming less energy, and the two key metrics: Power Usage Effectiveness (PUE) and Data Center Infrastructure Efficiency (DCiE). (A short worked example follows this list.)
Storage Technologies - I compared the four key storage media types: Solid State Drives (SSD), high-speed (15K RPM) FC and SAS hard disk, slower (7200 RPM) SATA disk, and tape. I had comparison slides that showed how IBM disk was more energy efficient than the competition; for example, the DS8700 consumes less energy than an EMC Symmetrix configured with the exact same number and type of physical drives. Likewise, IBM LTO-5 and TS1130 tape drives consume less energy than comparable HP or Oracle/Sun tape drives.
Integrated Systems - IBM combines multiple storage tiers in a set of integrated systems managed by smart software. For example, the IBM DS8700 offers [Easy Tier] for smart data placement and movement across Solid-State Drives and spinning disk. I also covered several blended disk-and-tape solutions, such as the Information Archive and SONAS.
Actions and Next Steps - I wrapped up the talk with actions that data center managers can take to be more energy efficient, from deploying the IBM Rear Door Heat Exchanger to improving the management of their data.
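For readers unfamiliar with the two metrics: PUE is total facility power divided by IT equipment power, and DCiE is simply its reciprocal expressed as a percentage. A minimal sketch with made-up numbers:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power (1.0 is ideal)."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data Center infrastructure Efficiency: the reciprocal of PUE, as a percent."""
    return 100.0 * it_equipment_kw / total_facility_kw

# Illustrative numbers only: a site drawing 1,300 kW in total to run
# 1,000 kW of IT gear has PUE 1.3 (the Boulder figure below) and DCiE ~77%.
print(pue(1300, 1000))   # 1.3
print(dcie(1300, 1000))  # 76.9...
```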
Greening of the Data Center
Janet Beaver, IBM Senior Manager of Americas Group facilities for Infrastructure and Facilities, presented on IBM's success in becoming more energy efficient. The price of electricity has gone up 10 percent per year, and in some locations, 30 percent. For every 1 Watt used by IT equipment, there are an additional 27 Watts for power, cooling and other uses to keep the IT equipment comfortable. At IBM, data centers represent only 6 percent of total floor space, but 45 percent of all energy consumption. Janet covered two specific data centers, Boulder and Raleigh.
At Boulder, IBM keeps a 48-hour reserve of gasoline (to generate electricity in case of an outage from the power company) and 48 hours of chilled water. Many power outages last less than 10 minutes, which can easily be handled by the UPS systems. At least 25 percent of the Computer Room Air Conditioners (CRAC) are also on UPS, so there is some cooling during those minutes, keeping within the ASHRAE guidelines of 72-80 degrees Fahrenheit. Since gasoline gets stale, IBM runs the generators once a month, which serves as a monthly test of the system and clears out the lines to make room for fresh fuel.
The IBM Boulder data center is the largest in the company: 300,000 square feet (the equivalent of five football fields)! Because of its location in Colorado, IBM enjoys "free cooling" using outside air 63 percent of the year, resulting in a PUE rating of 1.3. Electricity there costs only 4.5 US cents per kWh. The center also uses 1 million kWh per year of wind energy.
The Raleigh data center is only 100,000 square feet, with a PUE rating of 1.4. The Raleigh area enjoys 44 percent "free cooling", and electricity costs 5.7 US cents per kWh. The Leadership in Energy and Environmental Design [LEED] certification program has been updated to cover data centers. The IBM Boulder data center has achieved LEED Silver certification, and the IBM Raleigh data center has LEED Gold certification.
Free cooling, electricity costs, and disaster susceptibility are just three of the 25 criteria IBM uses to locate its data centers. In addition to the 7 data centers it manages for its own operations and 5 data centers for web hosting, IBM manages over 400 data centers for other clients.
It seems that Green IT initiatives are more important to the storage-oriented attendees than the x86-oriented folks. I suspect that is because many System x servers are deployed in small and medium businesses that do not have data centers, per se.
Continuing this week's coverage of IBM's 3Q announcements, today it's all about storage for our mainframe clients.
IBM System Storage DS8700
IBM is the leader in high-end disk attached to mainframes, with the IBM DS8700 being our latest model in a long series of successful products in this space. Here are some key features:
Full Disk Encryption (FDE), which I mentioned in my post [Different Meanings of the word "Protect"]. FDE drives are special 15K RPM Fibre Channel drives that include their own encryption chip, so the IBM DS8700 can encrypt data at rest without impacting read or write performance. The encryption keys are managed by IBM Tivoli Key Lifecycle Manager (TKLM).
Easy Tier, which I covered in my post [DS8700 Easy Tier Sub Lun Automatic Migration], offers what EMC promised but has yet to deliver: the ability for CKD volumes and FBA LUNs to straddle the fence between Solid State Drives (SSD) and spinning disk. For example, a 54GB CKD volume could have 4GB on SSD and the remaining 50GB on spinning drives. The hottest extents are moved automatically to SSD, and the coldest moved down to spinning disk. (A toy sketch of this hot/cold extent idea appears below.) To learn more about Easy Tier, watch my [7-minute video] on the IBM [Virtual Briefing Center].
z/OS Distributed Data Backup (zDDB), announced this week, provides the ability for a program running on z/OS to back up data written by distributed operating systems like Windows or UNIX and stored in FBA format. In the past, backing up FBA LUNs involved a program like the IBM Tivoli Storage Manager client reading the data natively and sending it over an Ethernet LAN to a TSM server, which could run on the mainframe and consume mainframe resources. This feature eliminates the Ethernet traffic by allowing a z/OS program to read the FBA blocks through standard FICON channels, which can then be written to z/OS disk or tape resources. Here is the [Announcement Letter] for more details.
One program that already takes advantage of this new zDDB feature is Innovation's [FDRSOS], which I pronounce "fudder sauce". If you are an existing FDRSOS customer, now is a good time to get rid of any EMC or HDS disk and replace it with the new IBM DS8700 system.
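To illustrate the Easy Tier idea mentioned above, here is a toy sketch of sub-LUN tiering: rank extents by recent I/O "heat" and keep the hottest ones on SSD. This is emphatically not IBM's actual algorithm; all names and values are invented for illustration:

```python
# Toy illustration of sub-LUN tiering: promote the hottest extents to SSD,
# demote whatever no longer makes the cut. Purely illustrative.
def plan_moves(extent_heat: dict[str, int], ssd_capacity_extents: int,
               on_ssd: set[str]) -> tuple[set[str], set[str]]:
    """Return (promote_to_ssd, demote_to_hdd) for one planning cycle."""
    hottest = sorted(extent_heat, key=extent_heat.get, reverse=True)
    want_on_ssd = set(hottest[:ssd_capacity_extents])
    return want_on_ssd - on_ssd, on_ssd - want_on_ssd

heat = {"ext1": 900, "ext2": 15, "ext3": 700, "ext4": 3}
promote, demote = plan_moves(heat, ssd_capacity_extents=2, on_ssd={"ext2"})
print(promote, demote)  # promote ext1 and ext3, demote ext2
```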
IBM System Storage TS7680 ProtecTIER Deduplication Gateway for System z
When it comes to virtual tape libraries that attach to mainframes, the two main players are IBM TS7700 series and Oracle StorageTek Virtual Storage Manager (VSM). However, mainframe clients with StorageTek equipment are growing frustrated over Oracle's lack of commitment for mainframe-attachable storage. To make matters worse, Oracle recently missed a key delivery date for their latest enterprise tape drive.
Unfortunately, neither of these offers deduplication of the data. IBM solved this with the IBM TS7680. I covered the initial announcement six months ago in my post [TS7680 ProtecTIER Deduplication for the mainframe].
What's new this week is that IBM now supports native IP-based asynchronous replication of virtual tapes at distance, from one TS7680 to another TS7680. This replaces the earlier method of replicating with the back-end disk features. The problem with disk replication is that all virtual tapes get copied over. Instead, the ProtecTIER administrator can decide which subset of virtual tapes should be replicated to the remote site, which can reduce both storage requirements and bandwidth costs. See the [Announcement Letter] for more details.
These new solutions will work with existing mainframes, as well as the new IBM [zEnterprise mainframe models] announced this week.
Here I am, day 11 of a 17-day business trip, on the last leg of the trip this week, in Kuala Lumpur, Malaysia. I have been flooded with requests to give my take on EMC's latest re-interpretation of storage virtualization, VPLEX.
I'll leave it to my fellow IBM Master Inventor Barry Whyte to cover the detailed technical side-by-side comparison. Instead, I will focus on the business side of things, using Simon Sinek's Why-How-What sequence. Here is a [TED video] from Garr Reynolds' post [The importance of starting from Why].
Let's start with the problem we are trying to solve.
Problem: migrating from old gear to new gear, from old technology to new technology, or from one vendor to another, is disruptive, time-consuming and painful.
Given that IT storage is typically replaced every 3-5 years, pretty much every company with an internal IT department has this problem; the exceptions are companies that don't last that long, and those that use public cloud solutions. IT storage can be expensive, so companies would like their new purchases to be fully utilized on day 1, and completely empty on day 1500 when the lease expires. I have spoken to clients who have spent 6-9 months planning for the replacement or removal of a storage array.
A solution that makes data migration non-disruptive would benefit the clients (making it easier for their IT staff to keep their data center modern and current) as well as the vendors (reducing the obstacle of selling and deploying new features and functions). Storage virtualization can be employed to help solve this problem. I define virtualization as "technology that makes one set of resources look and feel like a different set of resources, preferably with more desirable characteristics." By making different storage resources, old and new, look and feel like a single type of resource, migration can be performed without disrupting applications.
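Here is a minimal sketch of why this works: hosts address a stable virtual volume while a mapping table resolves each virtual extent to a physical location, so a migration is just a data copy plus a map update. This illustrates the general in-band virtualization concept, not SVC's actual implementation:

```python
# Hosts see a virtual volume; a map resolves each virtual extent to
# (array, physical_extent). Migration copies an extent and flips the map;
# the host-visible address never changes. Purely illustrative.
class VirtualVolume:
    def __init__(self, mapping):
        # mapping: virtual extent number -> (array_name, physical_extent)
        self.mapping = dict(mapping)

    def resolve(self, vext: int):
        return self.mapping[vext]

    def migrate_extent(self, vext: int, new_array: str, new_pext: int):
        # In a real system the data copy happens first, then the map
        # flips atomically; host I/O continues throughout.
        self.mapping[vext] = (new_array, new_pext)

vol = VirtualVolume({0: ("old_array", 17), 1: ("old_array", 18)})
vol.migrate_extent(0, "new_array", 5)
print(vol.resolve(0))  # ('new_array', 5), with no change visible to the host
```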
Before VPLEX, here is a breakdown of each solution:

Why?
- IBM: Non-disruptive tech refresh, and a unified platform to provide management and functionality across heterogeneous storage.
- HDS: Non-disruptive tech refresh, and a unified platform to provide management and functionality between internal tier-1 HDS storage and external tier-2 heterogeneous storage.
- EMC: Non-disruptive tech refresh, with a unified multi-pathing driver that allows host attachment of heterogeneous storage.

How?
- IBM: A new in-band storage virtualization device.
- HDS: In-band storage virtualization added to an existing storage array.
- EMC: A new out-of-band storage virtualization device with new "smart" SAN switches.

What?
- IBM: SAN Volume Controller
- HDS: USP-V and USP-VM
- EMC: Invista
For IBM, the motivation was clear: protect customers' existing investment in older storage arrays and introduce new IBM storage, with a solution that allows both to be managed through a single set of interfaces and provides a common set of functionality, improving capacity utilization and availability. IBM SAN Volume Controller eliminated vendor lock-in, giving clients a choice of multi-pathing driver and allowing any-to-any migration and copy services. For example, IBM SVC can be used to help migrate data from an old HDS USP-V to a new HDS USP-V.
With EMC, however, the motivation appeared to be protecting software revenues from the PowerPath multi-pathing driver and the TimeFinder and SRDF copy services. Back in 2005, when EMC Invista was first announced, these three products represented 60 percent of EMC's bottom-line profit. (Ok, I made that last part up, but you get my point! EMC charges a lot for these.)
Back in 2006, fellow blogger Chuck Hollis (EMC) suggested that SVC was just a [bump in the wire] which could not possibly improve performance of existing disk arrays. IBM showed clients that putting cache (SVC) in front of other cache (back-end devices) does indeed improve performance, in the same way that multi-core processors successfully use L1/L2/L3 cache. Now, EMC is claiming their cache-based VPLEX improves performance of back-end disk. My, how EMC's story has changed!
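The arithmetic behind "cache in front of cache" is straightforward: each layer that absorbs a fraction of I/Os at lower latency pulls the average down, exactly as in CPU cache hierarchies. A sketch with illustrative numbers (not measured SVC figures):

```python
# Average access time with a front (virtualization-layer) cache ahead of
# an array cache. h_* are hit rates, t_* are latencies in milliseconds.
def effective_latency(h_front, t_front, h_back, t_back, t_disk):
    return h_front * t_front + (1 - h_front) * (
        h_back * t_back + (1 - h_back) * t_disk)

# Array cache alone: 50% hits at 0.5 ms, misses go to 8 ms disk.
print(effective_latency(0.0, 0.1, 0.5, 0.5, 8.0))  # 4.25 ms
# Add a front cache absorbing 30% of I/Os at 0.1 ms:
print(effective_latency(0.3, 0.1, 0.5, 0.5, 8.0))  # ~3.0 ms
```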
So now, EMC announces VPLEX, which sports a blend of SVC-like and Invista-like characteristics. Based on blogs, tweets and publicly available materials I found on EMC's website, I have been able to determine the following comparison table. (Of course, VPLEX is not yet generally available, so what is eventually delivered may differ.)
Hardware
- IBM SVC: Scalable, 1 to 4 node-pairs.
- EMC Invista: One size fits all, a single pair of CPCs.
- EMC VPLEX: SVC-like, 1 to 4 director-pairs.

SAN Fabric
- IBM SVC: Works with any SAN switches or directors.
- EMC Invista: Required special "smart" switches (vendor lock-in).
- EMC VPLEX: SVC-like, works with any SAN switches or directors.

Multi-pathing driver
- IBM SVC: Broad selection: the IBM Subsystem Device Driver (SDD) offered at no additional charge, OS-native drivers (Windows MPIO, AIX MPIO, Solaris MPxIO, HP-UX PV-Links, VMware MPP, Linux DM-MP), and the commercial third-party Symantec DMP driver.
- EMC Invista: Limited selection, with focus on the priced PowerPath driver.
- EMC VPLEX: Invista-like; PowerPath and Windows MPIO.

Cache
- IBM SVC: Read cache, and a choice of fast-write or write-through cache, offering the ability to improve performance.
- EMC Invista: No cache; the Split-Path architecture cracked open Fibre Channel packets in flight, delayed every I/O by 20 nanoseconds, and redirected modified packets to the appropriate physical device.
- EMC VPLEX: SVC-like; read and write-through cache, offering the ability to improve performance.

Space-efficient point-in-time copies
- IBM SVC: FlashCopy supports up to 256 space-efficient targets, copies of copies, read-only or writeable targets, and incremental persistent pairs.
- EMC Invista: No.
- EMC VPLEX: Like Invista, no.

Remote distance mirror
- IBM SVC: Choice of Metro Mirror (synchronous, up to 300km) and Global Mirror (asynchronous), or use the functionality of the back-end storage arrays.
- EMC Invista: No native support; use the functionality of the back-end storage arrays, or purchase a separate product, EMC RecoverPoint, to cover this gap.
- EMC VPLEX: Limited synchronous remote-distance mirror within VPLEX (up to 100km only); no native asynchronous support; use the functionality of the back-end storage arrays.

Thin provisioning
- IBM SVC: Provides thin provisioning to devices that don't offer it natively.
- EMC Invista: No.
- EMC VPLEX: Like Invista, no.

Campus-wide access
- IBM SVC: Split-Cluster allows concurrent read/write access to data from hosts at two different locations several miles apart.
- EMC Invista: I don't think so.
- EMC VPLEX: VPLEX-Metro, similar in concept but implemented differently.

Non-disruptive tech refresh
- IBM SVC: Can upgrade or replace storage arrays, SAN switches, and even the SVC node software AND hardware themselves, non-disruptively.
- EMC Invista: Tech refresh for storage arrays, but not for the Invista CPCs.
- EMC VPLEX: Tech refresh of back-end devices, and upgrade of VPLEX software, non-disruptively. Not clear whether the VPLEX engines themselves can be upgraded non-disruptively like the SVC.

Heterogeneous storage support
- IBM SVC: Broad support for over 140 different storage models from all major vendors, including all CLARiiON, Symmetrix and VMAX models from EMC, plus storage from many smaller startups you may not have heard of.
- EMC Invista: Limited support.
- EMC VPLEX: Invista-like. VPLEX claims to support a variety of arrays from a variety of vendors, but as far as I can find, only the DS8000 is supported from the list of IBM devices. Fellow blogger Barry Burke (EMC) suggests [putting SVC between VPLEX and third party storage devices] to get the heterogeneous coverage most companies demand.

Back-end storage requirement
- IBM SVC: Must define quorum disks on an IBM or non-IBM back-end storage array; SVC can run entirely on non-IBM storage arrays.
- EMC Invista: None.
- EMC VPLEX: HP SVSP-like; requires at least one EMC storage array to hold metadata.

Internal storage
- IBM SVC: The 2145-CF8 model supports up to four solid-state drives (SSD) per node that can be treated as managed disk to store end-user data.
- EMC Invista: None.
- EMC VPLEX: Invista-like. VPLEX has an internal 30GB SSD, but it is used only for the operating system and logs, not for end-user data.
In-band virtualization solutions from IBM and HDS dominate the market. Being able to migrate data from old devices to new ones non-disruptively turned out to be only the [tip of the iceberg] of benefits from storage virtualization. In today's highly virtualized server environment, being able to non-disruptively migrate data comes in handy all the time. SVC is one of the best storage solutions for VMware, Hyper-V, XEN and PowerVM environments. EMC watched and learned in the shadows, taking notes of what people like about the SVC, and decided to follow IBM's time-tested leadership to provide a similar offering.
EMC re-invented the wheel, and it is round. On a scale from Invista (zero) to SVC (ten), I give EMC's new VPLEX a six.
This week I got a comment on my blog post [IBM Announces another SSD Disk offering!]. The exchange involved Solid State Disk storage inside the BladeCenter and System x server line. Sandeep offered his amazing performance results, but we have no way to get in contact with him. So, for those interested, I have posted on SlideShare.net a quick five-chart presentation on recent tests with various SSD offerings on the eX5 product line.
Continuing my coverage of last week's Data Center Conference 2009, my last breakout session of the week was an analyst presentation on Solid State Drive (SSD) technology. There are two different classes of SSD: consumer-grade multi-level cell (MLC), currently running at $2 (US) per GB, and enterprise-grade single-level cell (SLC), running at $4.50 per GB. Roughly 80 to 90 percent of SSD production goes to consumer use cases, such as digital cameras, cell phones, mobile devices, USB sticks, camcorders, media players, gaming devices and automotive.
While the two classes are different, the large R&D budgets spent on consumer-grade MLC carry forward to help out enterprise-grade SLC as well. SLC means there is a single level of charge for each cell, so each cell can hold only a single bit of data, a one or a zero. MLC means the cell can hold multiple levels of charge, each level representing a different bit pattern; a cell storing n bits must distinguish 2^n charge levels. Typically, MLC designs hold 3 to 4 bits of data per cell.
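The relationship between bits per cell and charge levels is exponential, which is why density gains come at a cost; a one-line check:

```python
# A cell storing n bits must distinguish 2**n charge levels.
for bits in (1, 2, 3, 4):
    print(f"{bits} bit(s) per cell -> {2**bits} charge levels")
# SLC: 1 bit / 2 levels; 4-bit MLC: 16 levels, which is why endurance
# and retention get harder as bits per cell increase.
```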
Back in 1997, enterprise-grade SLC SSD cost roughly $7,870 per GB. By 2013, consumer-grade 4-bit MLC is expected to cost only 24 cents per GB. Engineers are working on trade-offs between endurance cycles and retention periods. Flash management software is the key differentiator, such as clever wear-leveling algorithms.
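As an illustration of what such management software does, here is a toy wear-leveling sketch: always direct the next write to the least-worn erase block. Real flash translation layers are far more sophisticated; this only shows the principle:

```python
import heapq

class WearLeveler:
    """Toy wear leveling: spread erase cycles evenly across blocks."""
    def __init__(self, num_blocks: int):
        # min-heap of (erase_count, block_id)
        self.heap = [(0, b) for b in range(num_blocks)]
        heapq.heapify(self.heap)

    def allocate_block(self) -> int:
        erases, block = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (erases + 1, block))
        return block  # the least-worn block receives the next write

wl = WearLeveler(4)
print([wl.allocate_block() for _ in range(8)])
# [0, 1, 2, 3, 0, 1, 2, 3]: eight writes wear each block exactly twice
```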
SSD is 10-15 times more expensive than spinning hard disk drives (HDD), and this price difference is expected to continue for a while because of production volumes: in 4Q09, manufacturers will produce 50 Exabytes of HDD capacity, but only 2 Exabytes of SSD. The analyst thinks that SSD will represent only roughly 2 percent of the total SAN storage deployed over the next few years.
How well did the audience know about SSD technology?
4 percent not at all
30 percent some awareness
30 percent enough to make purchase decision
21 percent able to quantify benefits and trade-offs
15 percent experts
SSD does not change the design objectives of disk systems. We want disk systems that are more scalable and have higher performance. We want to fully utilize our investment. We want intelligent self-management similar to caching algorithms. We want an extensible architecture.
What will happen to fast Fibre Channel drives? Take out your Mayan calendar. Already, 84mm 10K RPM drives reached end of life (EOL) in 2009. The analyst expects 67mm and 70mm 10K drives to EOL in 2010, and 15K drives to EOL by 2012. A lot of this is because HDD performance has not kept up with CPU advancements, resulting in an I/O bottleneck. SSD is roughly 10x slower than DRAM, and some architectures use SSD as a cache extension, the IBM N series PAM II card and the Sun 7000 series being two examples.
Let's take a look at a disk system with 120 drives, comparing 73GB HDDs versus 32GB SSDs:

Attribute   | HDD system | per HDD drive | SSD system | per SSD drive
IOPS        | 36,000     | 300           | 4,200,000  | 35,000
Throughput  | 12 GB/sec  | 100 MB/sec    | 36 GB/sec  | 300 MB/sec
Watts       | 1,452      | 12            | 288        | 2
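Multiplying the per-drive figures by 120 reproduces the system IOPS and throughput; the system wattage figures run slightly above 120 times the per-drive numbers, presumably reflecting enclosure and controller overhead. A quick verification sketch:

```python
# Per-drive figures from the chart above; system totals are per-drive x 120.
drives = 120
per_drive = {
    "HDD": {"iops": 300, "mb_s": 100, "watts": 12},
    "SSD": {"iops": 35_000, "mb_s": 300, "watts": 2},
}
for name, d in per_drive.items():
    print(f"{name}: {drives * d['iops']:,} IOPS, "
          f"{drives * d['mb_s'] / 1000:.0f} GB/sec, "
          f"{drives * d['watts']} Watts")
# HDD: 36,000 IOPS, 12 GB/sec, 1440 Watts (chart shows 1,452)
# SSD: 4,200,000 IOPS, 36 GB/sec, 240 Watts (chart shows 288)
```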
There are various use cases for SSD: internal direct-attached storage (DAS), stand-alone Tier 0 storage, replacing or complementing HDD in disk arrays, and extending read or write cache. The analyst believes there will be mixed MLC/SLC devices that will allow for mixed workloads. His recommendations:
Use SSD to eliminate performance and throughput bottlenecks
Consolidate workloads to maximize value
Use SLAs to identify workload candidates
Evaluate emerging technologies along with established vendors
Do not expect SSD to drastically reduce power/cooling
SSD will continue to complement HDD, primarily SATA disk
Trust but verify: check out customer references offered by storage vendors