Tony Pearson is a Master Inventor and Senior IT Architect for the IBM Storage product line at the
IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor
to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is the
author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services.
After the amount of flak Jon Toigo had to endure for not giving advance notice of his upcoming webcast, I thought I had better remind people about my own webinar happening next Tuesday, August 23.
So here's the scoop: next Tuesday, August 23, from 1pm to 2pm EDT, I will be presenting [The Future of Storage]. You can register to attend at the [Infoboom Registration Page]. Infoboom is a social community for business and IT leaders of small and midsize businesses, brought to you by IBM.
But that's not all! After the webinar, I will then travel to various cities for face-to-face lectures. Here are the first two:
September 7 - Indianapolis
September 8 - Boston area
If you are near either of these two locations, contact your local IBM storage specialist or IBM business partner to participate.
The IBM Storwize V7000 was introduced last October, and has proven to be wildly successful. I saw two awesome reviews recently of the IBM Storwize V7000 disk system that I thought I would bring to your attention.
The first review is [IBM Storwize V7000] from Roger Howorth of ZDNet UK. Here are some quotes:
"Under the hood, the Storwize V7000 is built from technologies originally developed for IBM's enterprise-class storage systems, so the V7000 benefits from a comprehensive set of high-end features that have been scaled down for mid-range buyers."
"Initial configuration couldn't be simpler."
"We really liked the layout and functionality of the GUI."
"Storwize V7000 is virtual storage that offers efficiency and flexibility through built-in SSD optimization and "thin provisioning" technologies while enabling users to virtualize and re-use existing disk systems..."
"Storwize V7000 advanced functionality also enables non-disruptive migration of data from existing storage, simplifying implementation and minimizing disruption to users."
"The Storwize V7000 graphical user interface is a browser-based, easy to navigate intuitive GUI."
"ESG Lab found that getting started with the Storwize V7000 disk system was intuitive and straightforward."
"Easy Tier increases the efficiency and simplicity of deploying SSD drives."
Full VMware vStorage API for Array Integration (VAAI). Back in 2008, VMware announced new vStorage APIs for its vSphere ESX hypervisor: vStorage API for Site Recovery Manager, vStorage API for Data Protection, and vStorage API for Multipathing. Last July, VMware added a new API called vStorage API for Array Integration [VAAI], which offers three primitives:
Hardware-assisted block zeroing. Sometimes referred to as "Write Same", this SCSI command zeroes out a large section of blocks, presumably as part of a VMDK file. This can then be used to reclaim space on thin-provisioned LUNs on the XIV.
Hardware-assisted Copy. Make an XIV snapshot of data without any I/O on the server hardware.
Hardware-assisted locking. On mainframes, this is called Parallel Access Volumes (PAV). Instead of locking an entire LUN with standard SCSI reserve commands, this primitive allows an ESX host to lock just an individual block, so as not to interfere with other hosts accessing other blocks on that same LUN.
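To make the three primitives concrete, here is a rough Python sketch of their semantics, modeling a LUN as an in-memory array of blocks. This is illustration only: the class and method names are invented, and no real SCSI commands are issued.

```python
# Hypothetical model of the three VAAI primitive semantics (not real SCSI).
class Lun:
    def __init__(self, num_blocks, block_size=512):
        self.blocks = [b"\x00" * block_size for _ in range(num_blocks)]
        self.block_size = block_size
        self.locks = {}          # per-block locks (hardware-assisted locking)

    def write_same(self, start, count, pattern=b"\x00"):
        """WRITE SAME: the array fills a range itself; the host sends one
        command instead of `count` individual writes."""
        fill = pattern * (self.block_size // len(pattern))
        for i in range(start, start + count):
            self.blocks[i] = fill

    def xcopy(self, src_start, dst_start, count):
        """Hardware-assisted copy: data moves inside the array;
        no blocks travel through the host."""
        for i in range(count):
            self.blocks[dst_start + i] = self.blocks[src_start + i]

    def atomic_test_and_set(self, block, host):
        """Hardware-assisted locking: lock one block, not the whole LUN."""
        if self.locks.get(block) not in (None, host):
            return False
        self.locks[block] = host
        return True

lun = Lun(num_blocks=1024)
lun.write_same(0, 256)                      # zero a VMDK region in one command
lun.xcopy(0, 512, 16)                       # clone blocks inside the array
assert lun.atomic_test_and_set(10, "esx1")  # esx1 locks block 10
assert lun.atomic_test_and_set(11, "esx2")  # esx2 can still lock block 11
assert not lun.atomic_test_and_set(10, "esx2")  # but not block 10
```

The key point in the locking sketch is the granularity: two hosts contend only when they touch the same block, not whenever they share a LUN.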
Quality of Service (QoS) Performance Classes.
When XIV was first released, it treated all hosts and all data the same, even when deployed for a variety of different applications. This worked for some clients, such as [Medicare y Mucho Más]. They migrated their databases, file servers and email system from EMC CLARiiON to an IBM XIV Storage System. In conjunction with VMware, the XIV provides a highly flexible and scalable virtualized architecture, which enhances the company's business agility.
However, other clients were skeptical and felt they needed additional "knobs" to prioritize different workloads. The new 10.2.4 microcode allows you to define four different "performance classes". This is like the door of a nightclub: all the regular people wait in a long line, but when a celebrity arrives in a limo, the bouncer unclips the cord and lets the celebrity in. For each class, you provide IOPS and/or MB/sec targets, and the XIV manages to those goals. Performance classes are assigned to each host based on its value to the business.
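The nightclub analogy can be sketched as a simple token-bucket rate limiter, one bucket per performance class. The class names, targets, and host assignments below are invented for illustration and do not reflect actual XIV microcode interfaces.

```python
# Illustrative per-class IOPS throttling with token buckets (invented names).
import time

class PerformanceClass:
    def __init__(self, name, iops_target):
        self.name = name
        self.iops_target = iops_target      # tokens refilled per second
        self.tokens = float(iops_target)
        self.last = time.monotonic()

    def admit(self):
        """Admit one I/O if the class is under its IOPS target."""
        now = time.monotonic()
        self.tokens = min(self.iops_target,
                          self.tokens + (now - self.last) * self.iops_target)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                        # over target: I/O is queued

# Hosts are assigned to classes by business value (example names only).
classes = {"celebrity": PerformanceClass("celebrity", 10000),
           "regular":   PerformanceClass("regular", 500)}
host_class = {"prod-db": "celebrity", "test-box": "regular"}

def handle_io(host):
    return classes[host_class[host]].admit()

assert handle_io("prod-db")   # high-value host admitted immediately
```

The "celebrity" class simply refills its bucket twenty times faster, so it is starved last when the system approaches its aggregate limits.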
Offline Initialization for Asynchronous Mirror.
Internally, we called this Truck Mode. Normally, when a customer decides to start using Asynchronous Mirror, they already have a lot of data at the primary location, and so there is a lot of data to send over to the new XIV box at the secondary location. This new feature allows the data to be dumped to tape at the primary location. Those tapes are shipped to the secondary location and restored on the empty XIV. The two XIV boxes are then connected for Asynchronous Mirroring, and checksums of each 64KB block are compared to determine what has changed at the primary during this "tape delivery time". This greatly reduces the time it takes for the two boxes to get past the initial synchronization phase.
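The checksum comparison step can be illustrated with a short Python sketch that hashes each 64KB block on both sides and flags only the blocks that differ. Using SHA-1 here is an assumption for illustration; XIV's actual comparison scheme is internal to the product.

```python
# Simplified sketch: hash each 64KB block and resync only changed blocks.
import hashlib

BLOCK = 64 * 1024

def block_digests(volume: bytes):
    return [hashlib.sha1(volume[i:i + BLOCK]).digest()
            for i in range(0, len(volume), BLOCK)]

def changed_blocks(primary: bytes, secondary: bytes):
    """Return indexes of 64KB blocks that differ between the two copies."""
    return [i for i, (p, s) in
            enumerate(zip(block_digests(primary), block_digests(secondary)))
            if p != s]

restored = bytes(BLOCK * 4)               # what the tapes delivered
current = bytearray(restored)
current[BLOCK * 2] = 0xFF                 # primary changed block 2 since the dump
print(changed_blocks(bytes(current), restored))   # → [2]
```

Only block 2 crosses the wire, rather than the entire volume, which is why "truck mode" shrinks the initial synchronization so dramatically.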
IP-based Replication. When IBM first launched the Storwize V7000 last October, people commented that the one feature they felt was missing was IP-based replication. Sure, we offered FCP-based replication, as most other enterprise-class disk systems do today, but many midrange systems also offer IP-based replication to reduce the need for expensive FCIP routers. [IBM Tivoli Storage FastBack for Storwize V7000] provides IP-based replication for Storwize V7000 systems.
Network Attached Storage
IBM announced two new models of the IBM System Storage N series. The midrange N6240 supports up to 600 drives, replacing the N6040 system. The entry-level N6210 supports up to 240 drives, and replaces the N3600 system. Details for both are available on the latest [data sheet].
IBM Real-Time Compression appliances work with all N series models to provide additional storage efficiency. Last October, I provided the [Product Name Decoder Ring] for the STN6500 and STN6800 models. The STN6500 supports 1 GbE ports, and the STN6800 supports 10GbE ports (or a mix of 10GbE and 1GbE, if you prefer). The IBM versions of these models were announced last December, but some people were on vacation and might have missed it. For more details of this, read the [Resources page], the [landing page], or [watch this video].
IBM System Storage DS3000 series
IBM System Storage [DS3524 Express DC and EXP3524 Express DC] models are powered with direct current (DC) rather than alternating current (AC). The DS3524 packs dual controllers and two dozen small form factor (2.5-inch) drives in a compact 2U-high rack-optimized module. The EXP3524 provides additional disk capacity that can be attached to the DS3524 for expansion.
Large data centers, especially those in the telecommunications industry, receive AC from their power company, convert it to DC, and store it in large batteries as part of an Uninterruptible Power Supply (UPS). DC-powered equipment can run directly off this battery source, but for AC-powered equipment, the DC has to be converted back to AC, and some energy is lost in the conversion. Thus, DC-powered equipment is more energy efficient, or "green", for the IT data center.
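A back-of-envelope calculation shows why skipping the conversion step matters. The 90 percent inverter efficiency below is an illustrative assumption, not a measured figure for any particular product.

```python
# Back-of-envelope sketch of DC-to-AC conversion loss (efficiency assumed).
def watts_drawn_from_ups(load_watts, inverter_efficiency):
    """Power the UPS battery must supply to deliver `load_watts` to the gear."""
    return load_watts / inverter_efficiency

dc_gear = watts_drawn_from_ups(1000, 1.00)   # DC gear: no inversion step
ac_gear = watts_drawn_from_ups(1000, 0.90)   # AC gear: ~90%-efficient inverter
print(round(ac_gear - dc_gear, 1))           # → 111.1 watts lost per kW of load
```

At data-center scale, that overhead per kilowatt (plus the cooling needed to remove the waste heat) adds up quickly, which is the "green" argument for DC-powered models.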
Whether you get the DC-powered or AC-powered models, both are NEBS-compliant and ETSI-compliant.
New Tape Drive Options for Autoloaders and Libraries
IBM System Storage [TS2900 Autoloader] is a compact 1U-high tape system that supports one LTO drive and up to 9 tape cartridges. The TS2900 can support either an LTO-3, LTO-4 or LTO-5 half-height drive.
IBM System Storage [TS3100 and TS3200 Tape Libraries] were also enhanced. The TS3100 can accommodate one full-height LTO drive or two half-height drives, and holds up to 24 cartridges. The TS3200 offers twice as many drives and cartridge slots.
From New York, Rolf went to London, Paris, Madrid, Morocco, Cairo, South Africa, Bangkok Thailand, Malaysia, Singapore, New Zealand, Australia, and then back to the United States. I was hoping to run into him while I was in Australia and New Zealand last month, but our schedules did not line up.
Traveling without baggage is more than just a convenience; it is a metaphor for the philosophy that we should keep only what we need and leave behind what we don't. This was the approach IBM took in the design of the IBM Storwize V7000 midrange disk system.
The IBM Storwize V7000 disk system consists of 2U enclosures. Controller enclosures have dual controllers and drives; expansion enclosures have just drives. Enclosures can hold either 24 small form factor (SFF) 2.5-inch drives or twelve larger 3.5-inch drives. A controller enclosure can be connected to up to nine expansion enclosures.
The drives are all connected via 6 Gbps SAS, and come in a variety of speeds and sizes: 300GB Solid-State Drive (SSD); 300GB/450GB/600GB high-speed 10K RPM; and 2TB low-speed 7200 RPM drives. The 12-bay enclosures can be intermixed with 24-bay enclosures on the same system, and within an enclosure different speeds and sizes can be intermixed. A half-rack system (20U) could hold as much as 480TB of raw disk capacity.
This new system, freshly designed entirely within IBM, competes directly against systems that carry a lot of baggage, including the HDS AMS, HP EVA, and EMC CLARiiON CX4 systems. Instead, we decided to keep what we wanted from our other successful IBM products.
Inspired by our successful XIV storage system, IBM has developed a web-based GUI that focuses on ease-of-use. This GUI uses the latest HTML5 and dojo widgets to provide an incredible user experience.
Borrowed from our IBM DS8000 high-end disk systems, state-of-the-art device adapters provide 6 Gbps SAS connectivity with a variety of RAID levels: 0, 1, 5, 6, and 10.
From our SAN Volume Controller, the embedded [ SVC 6.1 firmware] provides all of the features and functions normally associated with enterprise-class systems, including Easy Tier sub-LUN automated tiering between Solid-State Drives and Spinning disk, thin provisioning, external disk virtualization, point-in-time FlashCopy, disk mirroring, built-in migration capability, and long-distance synchronous and asynchronous replication.
Finally, the various internal NDAs that kept me from publishing this sooner have expired, so now I have the long-awaited [Inside System Storage: Volume II], documenting IBM's transformation of its storage strategy, including behind-the-scenes commentary about IBM's acquisitions of XIV and Diligent. It is available initially in paperback form; I am still working on the hardcover and eBook editions.
For those who have not yet read my first book, Inside System Storage: Volume I, it is still available from my publisher Lulu, in [hard cover], [paperback] and [eBook] editions.
IBM System Storage DS8800
A lesson IBM learned long ago was not to make radical changes to high-end disk systems, as clients who run mission-critical applications are more concerned about reliability, availability and serviceability than they are performance or functionality. Shipping any product before it was ready meant painfully having to fix the problems in the field instead.
(EMC apparently is learning this same lesson now with their VMAX disk system. Their Enginuity code from the Symmetrix DMX4 was ported over to new CLARiiON-based hardware. With several hundred boxes in the field, they have already racked up over 150 severity-1 problems, roughly half of which resulted in data loss or unavailability. For the sake of our mutual clients that have both IBM servers and EMC disk, I hope they get their act together soon.)
To avoid this, IBM made incremental changes to the successful design and architecture of its predecessors. The new DS8800 shares 85 percent of the stable microcode from the DS8700 system. Functions like Metro Mirror, Global Mirror, and Metro/Global Mirror, are compatible with all of the previous models of the DS8000 series, as well as previous models of the IBM Enterprise Storage Server (ESS) line.
The previous models of the DS8000 series were designed to take in cold air from both front and back and route the hot air out the top, known as a chimney design. However, many companies are re-arranging their data centers into separate cold aisles and hot aisles. The new DS8800 has front-to-back cooling to accommodate this design.
My colleague Curtis Neal would call the rest of this a "BFD" announcement, which of course stands for "Bigger, Faster and Denser". The new DS8800 scales up to more drives than its DS8700 predecessor, and can scale out from a single-frame 2-way system to a multi-frame 4-way system. IBM has upgraded to faster 5GHz POWER6+ processors, with dual-core 8 Gbps FC and FICON host adapters, 8 Gbps device adapters, and 6 Gbps SAS connectivity to small form factor (SFF) 2.5-inch SAS drives. IBM Easy Tier provides sub-LUN automated tiering between solid-state drives and spinning disk. The denser packaging with SFF drives means that we can pack over 1,000 drives into only three frames, compared to the five frames required for the DS8700.
The [IBM System Storage SAN Volume Controller] software release v6.1 brings Easy Tier sub-LUN automated tiering to the rest of the world. IBM Easy Tier moves the hottest, most active extents up to Solid-State Drives (SSD) and moves the coldest, least active down to spinning disk. This works whether the SSD is inside the SVC 2145-CF8 nodes, or in the managed disk pool.
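The promotion/demotion idea can be sketched in a few lines: rank extents by recent I/O activity and keep the hottest ones on the SSD tier. The heat counters and policy below are invented for illustration and are not the actual Easy Tier algorithm.

```python
# Simplified sub-LUN tiering sketch: promote the hottest extents to SSD.
def rebalance(extents, ssd_capacity):
    """extents: dict of extent_id -> I/O count in the last interval.
    Returns (ssd_resident, hdd_resident) extent-id lists."""
    by_heat = sorted(extents, key=extents.get, reverse=True)
    hot = set(by_heat[:ssd_capacity])       # hottest extents fit on SSD
    cold = set(by_heat[ssd_capacity:])      # everything else on spinning disk
    return sorted(hot), sorted(cold)

heat = {"e1": 9000, "e2": 12, "e3": 4700, "e4": 3, "e5": 880}
promote, demote = rebalance(heat, ssd_capacity=2)
print(promote)   # → ['e1', 'e3']  (most active extents move up)
print(demote)    # → ['e2', 'e4', 'e5']
```

Because the decision is made per extent rather than per LUN, a mostly-cold volume with one hot database index still gets that index onto SSD.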
Tired of waiting for EMC to finally deliver FAST v2 for your VMAX? It has been 18 months since they first announced that someday they would have sub-LUN automatic tiering. What is taking them so long? Why not virtualize your VMAX with SVC, and you can have it sooner!
SVC 6.1 also upgrades to a sexy new web-based GUI, which like the one for the IBM Storwize V7000, is based on the latest HTML5 and dojo widget standards. Inspired by the popular GUI from the IBM XIV Storage System, this GUI has greatly improved ease-of-use.
A client asked me to explain "Nearline storage" to them. This was easy, I thought, as I started my IBM career on DFHSM, now known as DFSMShsm for z/OS, which was created in 1977 to support the IBM 3850 Mass Storage System (MSS), a virtual storage system that blended disk drives and tape cartridges with robotic automation. Here is a quick recap:
Online storage is immediately available for I/O. This includes DRAM memory, solid-state drives (SSD), and always-on spinning disk, regardless of rotational speed.
Nearline storage is not immediately available, but can be made online quickly without human intervention. This includes optical jukeboxes, automated tape libraries, as well as spin-down massive array of idle disk (MAID) technologies.
Offline storage is not immediately available, and requires some human intervention to bring online. This can include USB memory sticks, CD/DVD optical media, shelf-resident tape cartridges, or other removable media.
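A toy classifier captures the distinction: what matters is how the media becomes available, not how fast it spins. The media names below are illustrative examples, not an exhaustive taxonomy.

```python
# Toy classifier for the three recall tiers (example media names only).
TIERS = {
    "online":   {"dram", "ssd", "spinning-disk"},
    "nearline": {"optical-jukebox", "tape-library", "maid"},
    "offline":  {"usb-stick", "cd-dvd", "shelf-tape"},
}

def storage_tier(media: str) -> str:
    """Classify media by how it becomes available: immediately (online),
    automatically but not immediately (nearline), or by hand (offline)."""
    for tier, media_types in TIERS.items():
        if media in media_types:
            return tier
    raise ValueError(f"unknown media type: {media}")

assert storage_tier("ssd") == "online"
assert storage_tier("tape-library") == "nearline"   # a robot mounts it
assert storage_tier("shelf-tape") == "offline"      # someone must fetch it
```

Note that "spinning-disk" lands in the online tier regardless of RPM, which is exactly the point of the argument that follows.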
Sadly, it appears a few storage manufacturers and vendors have been misusing the term "Nearline" to refer to "slower online" spinning disk drives. I found this [June 2005 technology paper from Seagate] and this [2002 NetApp Press Release], the latter of which included this contradiction for their "NearStore" disk array. Here is the excerpt:
"Providing online access to reference information—NetApp nearline storage solutions quickly retrieve and replicate reference and archive information maintained on cost-effective storage—medical images, financial models, energy exploration charts and graphs, and other data-intensive records can be stored economically and accessed in multiple locations more quickly than ever"
Which is it, "online access" or "nearline storage"?
If a client asked why slower drives consume less energy or generate less heat, I could explain that, but if they ask why slower drives must have SATA connections, that is a different discussion. The speed of a drive and its connection technology are for the most part independent. A 10K RPM drive can be made with FC, SAS or SATA connection.
I am opposed to using "Nearline" just to distinguish four-digit speeds (such as 5400 or 7200 RPM) from "online" five-digit speeds (10,000 and 15,000 RPM). The difference in performance between 10K RPM and 7200 RPM spinning disks is minuscule compared to the difference between solid-state drives and any spinning disk, or between spinning disk and tape.
I am also opposed to using the term "Nearline" for online storage systems just because they are targeted for the typical use cases like backup, archive or other reference information that were previously directed to nearline devices like automated tape libraries.
Can we all just agree to refer to drives as "fast" or "slow", or give them RPM rotational speed designations, rather than try to incorrectly imply that FC and SAS drives are always fast, and SATA drives are always slow? Certainly we don't need new terms like "NL-SAS" just to represent a slower SAS connected drive.
Well, it feels like Tuesday, and you know what that means... "IBM Announcement Day!" Actually, today is Wednesday, but since Monday was the Memorial Day holiday here in the USA, my week is day-shifted. Yesterday, IBM announced its latest IBM FlashCopy Manager v2.2 release. Fellow blogger Del Hoobler (IBM) has also posted something on this over at the [Tivoli Storage Blog].
IBM FlashCopy Manager replaces two previous products. One was called Tivoli Storage Manager for Copy Services; the other was called Tivoli Storage Manager for Advanced Copy Services. To say people were confused between these two was an understatement: the first was for Windows, and the second was for UNIX and Linux operating systems. The solution? A new product that replaces both to support Windows, UNIX and Linux! Thus, IBM FlashCopy Manager was born. I introduced this product back in 2009 in my post [New DS8700 and other announcements].
IBM Tivoli Storage FlashCopy Manager provides what most people with "N series SnapManager envy" are looking for: application-aware point-in-time copies. This product takes advantage of the underlying point-in-time interfaces available on various disk storage systems:
FlashCopy on the DS8000 and SAN Volume Controller (SVC)
Snapshot on the XIV storage system
Volume Shadow Copy Service (VSS) interface on the DS3000, DS4000, DS5000 and non-IBM gear that supports this Microsoft Windows protocol
For Windows, IBM FlashCopy Manager can coordinate the backup of Microsoft Exchange and SQL Server. The new version 2.2 adds support for Exchange 2010 and SQL Server 2008 R2. This includes the ability to recover an individual mailbox or mail item from an Exchange backup. The data can be recovered directly to an Exchange server, or to a PST file.
For UNIX and Linux, IBM FlashCopy Manager can coordinate the backup of DB2, SAP and Oracle databases. Version 2.2 adds support for specific Linux and Solaris operating systems and provides a new capability for database cloning. Basically, database cloning restores a database under a new name, with all the appropriate changes, to allow its use for other purposes like development, test or training. A new "fcmcli" command-line interface allows IBM FlashCopy Manager to be used with custom applications or file systems.
A common misperception is that IBM FlashCopy Manager requires IBM Tivoli Storage Manager backup software to function. That is not true. You have two options:
In stand-alone mode, it's just you, the application, IBM FlashCopy Manager and your disk system. IBM FlashCopy Manager coordinates the point-in-time copies, maintains the correct number of versions, and allows you to back up and restore directly disk-to-disk.
Unified Recovery Management with Tivoli Storage Manager
Of course, the risk of relying only on point-in-time copies is that, in most cases, they are on the same disk system as the original data (the exception being virtual disks from the SAN Volume Controller). IBM FlashCopy Manager can be combined with IBM Tivoli Storage Manager so that the point-in-time copies are also copied off to a local or remote TSM server; if the disk system containing both the source and the point-in-time copies fails, you have a backup copy from TSM. In this approach, you can still restore from the point-in-time copies, but you can also restore from the TSM backups.
IBM FlashCopy Manager is an excellent platform to connect application-aware functionality with hardware-based copy services.
Well, I'm back safely from my tour of Asia. I am glad to report that Tokyo, Beijing and Kuala Lumpur are pretty much how I remember them from the last time I was there in each city. I have since been fighting jet lag by watching the last thirteen episodes of LOST season 6 and the series finale.
Recently, I have started seeing a lot of buzz around the term "Storage Federation". The concept is not new; it is based on the work in database federation, first introduced in 1985 in [A federated architecture for information management] by Heimbigner and McLeod. For those not familiar with database federation, you can take several independent, autonomous databases and treat them as one big federated system. For example, this allows you to issue a single query and get results across all the databases in the federated system. The advantage is that it is often easier to federate several disparate, heterogeneous databases than to merge them into a single database. [IBM InfoSphere Federation Server] is a market leader in this space, with the capability to federate DB2, Oracle and SQL Server databases.
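Here is a minimal illustration of the federation idea, using two independent SQLite databases: a single query interface fans the query out to every member and merges the rows. Real federation servers handle heterogeneous engines and query optimization; this sketch only shows the fan-out-and-merge concept.

```python
# Minimal "federated query" sketch across two independent SQLite databases.
import sqlite3

def make_db(rows):
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE customers (name TEXT, region TEXT)")
    db.executemany("INSERT INTO customers VALUES (?, ?)", rows)
    return db

east = make_db([("Acme", "east"), ("Globex", "east")])
west = make_db([("Initech", "west")])

def federated_query(sql, databases):
    """Run one query against every member database and merge the rows."""
    results = []
    for db in databases:
        results.extend(db.execute(sql).fetchall())
    return results

rows = federated_query("SELECT name FROM customers", [east, west])
print(sorted(r[0] for r in rows))   # → ['Acme', 'Globex', 'Initech']
```

Each database stays independent and autonomous; the federation layer is what makes the pair answer as one system, which is the property the storage analogy borrows.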
Storage expansion: You want to increase the storage capacity of an existing storage system that cannot accommodate the total amount of capacity desired. Storage Federation allows you to add additional storage capacity by adding a whole new system.
Storage migration: You want to migrate from an aging storage system to a new one. Storage Federation allows the joining of the two systems and the evacuation from storage resources on the first onto the second and then the first system is removed.
Safe system upgrades: System upgrades can be problematic for a number of reasons. Storage Federation allows a system to be removed from the federation and be re-inserted again after the successful completion of the upgrade.
Load balancing: Similar to storage expansion, but on the performance axis, you might want to add additional storage systems to a Storage Federation in order to spread the workload across multiple systems.
Storage tiering: In a similar light, storage systems in a Storage Federation could have different capacity/performance ratios that you could use for tiering data. This is similar to the idea of dynamically re-striping data across the disk drives within a single storage system, such as with 3PAR's Dynamic Optimization software, but extends the concept to cross storage system boundaries.
To some extent, IBM SAN Volume Controller (SVC), XIV, Scale-Out NAS (SONAS), and Information Archive (IA) offer most, if not all, of these capabilities. EMC claims its VPLEX will be able to offer storage federation, but only with other VPLEX clusters, which brings up a good question: what about heterogeneous storage federation? Before anyone accuses me of throwing stones at glass houses, let's take a look at each IBM solution:
IBM SAN Volume Controller
The IBM SAN Volume Controller has been doing storage federation since 2003. Not only can the SAN Volume Controller bring together storage from a variety of heterogeneous storage systems, the SVC cluster itself can be a mix of different hardware models. You can have a 2145-8A4 node pair, a 2145-8G4 node pair, and the new 2145-CF8 node pair, all combined into a single SVC cluster. Upgrading SVC hardware nodes in an SVC cluster is always non-disruptive.
IBM XIV storage system
The IBM XIV has two kinds of independent modules. Data modules have processor, cache and 12 disks. Interface modules are data modules with additional processor, FC and Ethernet (iSCSI) adapters. Because these two module types play different roles in an XIV "colony", the number of each type is predetermined. Entry-level six-module systems have 2 interface and 4 data modules; full 15-module systems have 6 interface and 9 data modules. Individual modules can be added or removed non-disruptively in an XIV.
IBM Scale-Out NAS
The SONAS comprises three kinds of nodes that work together in concert: a management node, one or more interface nodes, and two or more storage nodes. The storage nodes are paired to manage up to 240 disk drives in a storage pod. Individual interface or storage nodes can be added or removed non-disruptively in the SONAS. The underlying technology, the General Parallel File System (GPFS), has been doing storage federation since 1996 for some of the largest top-500 supercomputers in the world.
IBM Information Archive (IA)
For the IA, there are 1, 2 or 3 nodes, which manage a set of collections. A collection can be either file-based, using industry-standard NAS protocols, or object-based, using the popular System Storage™ Archive Manager (SSAM) interface. Normally, you have as many collections as nodes, but nodes are powerful enough to manage two collections each to provide N-1 availability. This allows a node to be removed, and a new node added into the IA "colony", in a non-disruptive manner.
Even in an ant colony, there are only a few types of ants, with typically one queen, several males, and lots of workers. But all the ants are red. You don't see colonies that mix between different species of ants. For databases, federation was a way to avoid the much harder task of merging databases from different platforms. For storage, I am surprised people have latched on to the term "federation", given our mixed results in the other "federations" we have formed, which I have conveniently (IMHO) ranked from least effective to most effective:
The Union of Soviet Socialist Republics (USSR)
My father used to say, "If the Soviet Union were in charge of the Sahara desert, they would run out of sand in 50 years." The [Soviet Union] actually lasted 68 years, from 1922 to 1991.
The United Nations (UN)
After the previous League of Nations failed, the UN was formed in 1945 to facilitate cooperation in international law, international security, economic development, social progress, human rights, and the achieving of world peace by stopping wars between countries, and to provide a platform for dialogue.
The European Union (EU)
With the collapse of the Greek economy, and the [rapid growth of debt] in the UK, Spain and France, there are concerns that the EU might not last past 2020.
The United States of America (USA)
My own country is a federation of states, each with its own government. California's financial crisis was compared to the one in Greece. My own state of Arizona is under boycott from other states because of its recent [immigration law]. However, I think the US has managed better than the EU because it has evolved over the past 200 years.
The Organization of the Petroleum Exporting Countries [OPEC]
Technically, OPEC is not a federation of cooperating countries, but rather a cartel of competing countries that have agreed on total industry output of oil to increase individual members' profits. Note that it was a non-OPEC company, BP, that could not "control their output" in what has now become the worst oil spill in US history. OPEC was formed in 1960, and is expected to collapse sometime around 2030 when the world's oil reserves run out. Matt Savinar has a nice article on [Life After the Oil Crash].
United Federation of Planets
The [Federation] fictitiously described in the Star Trek series appears to work well, an optimistic view of what federations could become if you let them evolve long enough.
Given the mixed results with "federation", I think I will avoid using the term for storage, and stick to the original term "scale-out architecture".
Here I am, day 11 of a 17-day business trip, on the last leg of the trip this week, in Kuala Lumpur, Malaysia. I have been flooded with requests to give my take on EMC's latest re-interpretation of storage virtualization, VPLEX.
I'll leave it to my fellow IBM Master Inventor Barry Whyte to cover the detailed technical side-by-side comparison. Instead, I will focus on the business side of things, using Simon Sinek's Why-How-What sequence. Here is a [TED video] from Garr Reynolds's post
[The importance of starting from Why].
Let's start with the problem we are trying to solve.
Problem: migration from old gear to new gear, old technology to new technology, from one vendor to another vendor, is disruptive, time-consuming and painful.
Given that IT storage is typically replaced every 3-5 years, then pretty much every company with an internal IT department has this problem, the exception being those companies that don't last that long, and those that use public cloud solutions. IT storage can be expensive, so companies would like their new purchases to be fully utilized on day 1, and be completely empty on day 1500 when the lease expires. I have spoken to clients who have spent 6-9 months planning for the replacement or removal of a storage array.
A solution that makes the data migration non-disruptive would benefit the clients (making it easier for their IT staff to keep their data center modern and current) as well as the vendors (reducing the obstacle of selling and deploying new features and functions). Storage virtualization can be employed to help solve this problem. I define virtualization as "technology that makes one set of resources look and feel like a different set of resources, preferably with more desirable characteristics." By making different storage resources, old and new, look and feel like a single type of resource, migration can be performed without disrupting applications.
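That definition can be sketched in code: hosts address a stable virtual volume while the virtualization layer remaps it from old to new backing storage, so the migration is invisible to the application. All class, array, and volume names below are invented for illustration.

```python
# Toy sketch of non-disruptive migration behind a virtualization layer.
class Array:
    def __init__(self):
        self.blocks = {}
    def read(self, block):
        return self.blocks.get(block)

class VirtualizationLayer:
    def __init__(self):
        self.vmap = {}                       # virtual volume -> backing array

    def present(self, vvol, backing):
        self.vmap[vvol] = backing

    def read(self, vvol, block):
        return self.vmap[vvol].read(block)   # host never sees which array

    def migrate(self, vvol, new_backing):
        """Copy blocks old -> new, then flip the mapping; host I/O
        continues addressing the same virtual volume throughout."""
        old = self.vmap[vvol]
        new_backing.blocks = dict(old.blocks)
        self.vmap[vvol] = new_backing

old, new = Array(), Array()
old.blocks[0] = b"data"
layer = VirtualizationLayer()
layer.present("vvol1", old)
before = layer.read("vvol1", 0)
layer.migrate("vvol1", new)                  # old array can now be retired
assert layer.read("vvol1", 0) == before == b"data"
```

The host's address ("vvol1") never changes, which is exactly why the 6-9 months of migration planning mentioned above can shrink so dramatically.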
Before VPLEX, here is a breakdown of each solution:
IBM SAN Volume Controller — a new in-band storage virtualization device. Non-disruptive tech refresh, and a unified platform to provide management and functionality across heterogeneous storage.
HDS USP-V and USP-VM — in-band storage virtualization added to an existing storage array. Non-disruptive tech refresh, and a unified platform to provide management and functionality between internal tier-1 HDS storage and external tier-2 heterogeneous storage.
EMC Invista — a new out-of-band storage virtualization device with new "smart" SAN switches. Non-disruptive tech refresh, with a unified multi-pathing driver that allows host attachment of heterogeneous storage.
For IBM, the motivation was clear: protect customers' existing investment in older storage arrays and introduce new IBM storage with a solution that allows both to be managed with a single set of interfaces and a common set of functionality, improving capacity utilization and availability. IBM SAN Volume Controller eliminated vendor lock-in, providing clients choice in multi-pathing driver, and allowing any-to-any migration and copy services. For example, IBM SVC can be used to help migrate data from an old HDS USP-V to a new HDS USP-V.
With EMC, however, the motivation appeared to be protecting software revenues from its PowerPath multi-pathing driver and its TimeFinder and SRDF copy services. Back in 2005, when EMC Invista was first announced, these three software products represented 60 percent of EMC's bottom-line profit. (OK, I made that last part up, but you get my point! EMC charges a lot for these.)
Back in 2006, fellow blogger Chuck Hollis (EMC) suggested that SVC was just a [bump in the wire] which could not possibly improve performance of existing disk arrays. IBM showed clients that putting cache (SVC) in front of other cache (back-end devices) does indeed improve performance, in the same way that multi-core processors successfully use L1/L2/L3 cache. Now EMC is claiming their cache-based VPLEX improves performance of back-end disk. My, how EMC's story has changed!
So now, EMC announces VPLEX, which sports a blend of SVC-like and Invista-like characteristics. Based on blogs, tweets and publicly available materials I found on EMC's website, I have been able to determine the following comparison table. (Of course, VPLEX is not yet generally available, so what is eventually delivered may differ.)
Scalability
SVC: Scalable, 1 to 4 node-pairs
Invista: One size fits all, single pair of CPCs
VPLEX: SVC-like, 1 to 4 director-pairs

SAN switch support
SVC: Works with any SAN switches or directors
Invista: Required special "smart" switches (vendor lock-in)
VPLEX: SVC-like, works with any SAN switches or directors

Multi-pathing drivers
SVC: Broad selection: IBM Subsystem Device Driver (SDD) offered at no additional charge, as well as OS-native drivers Windows MPIO, AIX MPIO, Solaris MPxIO, HP-UX PV-Links, VMware MPP, Linux DM-MP, and the commercial third-party driver Symantec DMP
Invista: Limited selection, with focus on the priced PowerPath driver
VPLEX: Invista-like, PowerPath and Windows MPIO

Cache
SVC: Read cache, and choice of fast-write or write-through cache, offering the ability to improve performance
Invista: No cache; its Split-Path architecture cracked open Fibre Channel packets in flight, delayed every I/O by 20 nanoseconds, and redirected modified packets to the appropriate physical device
VPLEX: SVC-like, read and write-through cache, offering the ability to improve performance

Space-Efficient Point-in-Time copies
SVC: FlashCopy supports up to 256 space-efficient targets, copies of copies, read-only or writeable, and incremental persistent pairs
Invista: No
VPLEX: Like Invista, no

Remote distance mirror
SVC: Choice of Metro Mirror (synchronous up to 300km) and Global Mirror (asynchronous), or use the functionality of the back-end storage arrays
Invista: No native support; use functionality of back-end storage arrays, or purchase a separate product called EMC RecoverPoint to cover this lack of functionality
VPLEX: Limited synchronous remote-distance mirror within VPLEX (up to 100km only), no native asynchronous support; use functionality of back-end storage arrays

Thin provisioning
SVC: Provides thin provisioning to devices that don't offer this natively
Invista: No
VPLEX: Like Invista, no

Stretched-cluster access across sites
SVC: Split-Cluster allows concurrent read/write access to data from hosts at two different locations several miles apart
Invista: I don't think so
VPLEX: VPLEX Metro, similar in concept but implemented differently

Non-disruptive tech refresh
SVC: Can upgrade or replace storage arrays, SAN switches, and even the SVC nodes' software AND hardware themselves, non-disruptively
Invista: Tech refresh for storage arrays, but not for Invista CPCs
VPLEX: Tech refresh of back-end devices, and upgrade of VPLEX software, non-disruptively. Not clear if VPLEX engines themselves can be upgraded non-disruptively like the SVC

Heterogeneous Storage Support
SVC: Broad support of over 140 different storage models from all major vendors, including all CLARiiON, Symmetrix and VMAX from EMC, and storage from many smaller startups you may not have heard of
VPLEX: Invista-like. VPLEX claims to support a variety of arrays from a variety of vendors, but as far as I can find, only the DS8000 is supported from the list of IBM devices. Fellow blogger Barry Burke (EMC) suggests [putting SVC between VPLEX and third party storage devices] to get the heterogeneous coverage most companies demand

Back-end storage requirement
SVC: Must define quorum disks on any IBM or non-IBM back-end storage array; SVC can run entirely on non-IBM storage arrays
VPLEX: HP SVSP-like, requires at least one EMC storage array to hold metadata

Solid-state drive (SSD) support
SVC: The 2145-CF8 model supports up to four solid-state drives (SSD) per node that can be treated as managed disk to store end-user data
VPLEX: Invista-like. VPLEX has an internal 30GB SSD, but this is used only for the operating system and logs, not for end-user data
In-band virtualization solutions from IBM and HDS dominate the market. Being able to migrate data from old devices to new ones non-disruptively turned out to be only the [tip of the iceberg] of benefits from storage virtualization. In today's highly virtualized server environment, being able to non-disruptively migrate data comes in handy all the time. SVC is one of the best storage solutions for VMware, Hyper-V, XEN and PowerVM environments. EMC watched and learned in the shadows, taking notes of what people like about the SVC, and decided to follow IBM's time-tested leadership to provide a similar offering.
EMC re-invented the wheel, and it is round. On a scale from Invista (zero) to SVC (ten), I give EMC's new VPLEX a six.
Last week, I presented "An Introduction to Cloud Computing" for two hours to the local Institute of Management Accountants [IMA] for their Continuing Professional Education [CPE]. Since I present IBM's leadership in Cloud Storage offerings, I have had to become an expert in Cloud Computing overall. The audience was a mix of bookkeepers, accountants, auditors, comptrollers, CPAs, and accounting teachers.
Here is a sample of the questions I took during and after my presentation:
If I need to shut down host machine, I lose all my virtual machines as well?
No, it is possible to seamlessly move virtual machines from one host to another. If you need to shut down a host machine, move all the VMs to other hosts first; then you can shut down the empty host without impacting business.
Does the SaaS provider have to build their own app, can they not buy an app and then rent it out?
Yes, but they won't have competitive differentiation, and the software developer they buy from will want a big cut of the action. SaaS providers that build their own applications can keep more of the profits for themselves.
How do backups work in cloud computing? Do I have to contact someone at the cloud computing company to find the backup tape?
Large datacenters often keep the most recent backups on disk, and older versions on tape in automated tape libraries that can fetch your backup in less than 2 minutes. Because of this, there is no need to talk to anyone, you can schedule or invoke your own backups, and often perform the recovery yourself using self-service tools.
Last month, my sister tried to rent a car during the week of the Tucson Gem Show, but they were out of the cars she wanted to drive. Could this happen with Cloud Computing?
Not likely. With rental cars, the cars have to be physically in Tucson to rent them; the rental companies could have brought cars down from Phoenix to satisfy demand. With Cloud Computing, everything is accessible over the global network, so you are not limited to the cloud providers nearest you.
Is there a reason why Amazon Web Services (AWS) charges more for a Windows image than a Linux image?
Yes, Amazon and Microsoft have a patent cross-licensing agreement under which Amazon pays Microsoft for the privilege of offering Windows-based images on their EC2 cloud infrastructure. It just makes business sense to pass those costs on to the consumer. Linux is a free open source operating system, and is often the better choice.
So if we rent a machine from Amazon, they send it to my accounting office? What exactly am I getting for 12 cents per hour?
No. The computer remains in their datacenter. You get a virtual machine with a 1.2GHz Intel processor, 1700MB of RAM, and 160GB of hard disk space, with a Windows operating system running on it, comparable to a machine you could buy at the local Best Buy. But instead of running in the next room, it runs in a datacenter somewhere else in the United States, with electricity and air conditioning.
You access it remotely from your desktop or laptop PC.
Why would I ever rent more than one computer?
It depends on your workload. For example, Derek Gottfrid at the New York Times needed to convert 11 million articles from TIFF format to PDF format so that he could put them up on the web. This would have taken him months using a single computer, so he rented 100 computers and got the entire batch converted in 24 hours, for a cost of about $240. See the articles [Self-Service, Prorated, Super Computing] and [TimesMachine] for details.
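The arithmetic behind that example is straightforward. The $0.10/hour rate below is my assumption, chosen to be consistent with the roughly $240 total reported:

```python
# Back-of-envelope math for the New York Times conversion job described
# above; the 10-cents-per-hour rate is an assumption, not a quoted price.
instances = 100
hours = 24
rate_cents_per_hour = 10

total_dollars = instances * hours * rate_cents_per_hour / 100
print(f"${total_dollars:.2f}")  # $240.00

# The same job on one machine is ~2400 machine-hours, i.e. 100 days
days_on_one_machine = instances * hours / 24
```

Renting 100 machines for a day costs the same as renting one machine for 100 days; the only difference is how long you wait for the answer.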
What about throughput? Won't I need to run cables from my accounting office to this cloud computing data center?
You will need connectivity, most likely through connections provided by your local telephone or cable company, or through the Internet. Certainly, there are cases where privately-owned fiber optic cables, known as "dark fiber", can directly connect consumers to local Cloud service providers for added security.
What about medical records? Will Cloud Computing help the Healthcare industry?
Yes, hospitals are finding that digitizing their records greatly reduces costs. IBM offers the Grid Medical Archive Solution [GMAS] as a private cloud storage solution to store X-ray images and other electronic medical records on disk and tape, and these records can be accessed from multiple hospitals and clinics, wherever the doctor or patient happens to be.
The advantage of personal computers was individualization, I could put on my own choices of software, and customize my own settings, won't we lose this with Cloud Computing?
Yes, some of that is lost, but individualization has a price: customized software and settings cost companies millions of dollars in help desk calls. Cloud Computing attempts to provide some standardization, reducing the amount of effort to support IT operations.
Won't putting all the computers into a big datacenter make them more vulnerable to hackers?
Security is a well-known concern, but this is being addressed with encryption, access control lists, multi-tenancy isolation, and VPN connections.
My daughter has a BlackBerry or iPod or something, and when we mentioned that someone in Phoenix wore a monkey suit to avoid photo-radar speed cameras, she was able to pull up a picture on her little hand-held thing, is this the future?
Yes, mobile phones and other hand-held devices now have internet access to take advantage of Cloud Computing services. People will be able to access the information they need from wherever they happen to be. (You can see the picture here: [Man Dons Mask for Speed-Camera Photos])
IBM offers a variety of Cloud Computing services, as well as customized solutions and integrated systems that can be deployed on-premises behind your corporate firewall. To learn more, go to [ibm.com/cloud].
The second speaker was local celebrity Dan Ryan presenting the financials for the upcoming [Rosemont Copper] mining operations. Copper is needed for emerging markets, such as hybrid vehicles and wind turbines. Copper is a major industry in Arizona.
This week I got a comment on my blog post [IBM Announces another SSD Disk offering!]. The exchange involved Solid State Disk storage inside the BladeCenter and System x server line. Sandeep offered his amazing performance results, but we have no way to get in contact with him. So, for those interested, I have posted on SlideShare.net a quick five-chart presentation on recent tests with various SSD offerings on the eX5 product line here:
Continuing my drawn out coverage of IBM's big storage launch of February 9, today I'll cover the IBM System Storage TS7680 ProtecTIER data deduplication gateway for System z.
On the host side, TS7680 connects to mainframe systems running z/OS or z/VM over FICON attachment, emulating an automated tape library with 3592-J1A devices. The TS7680 includes two controllers that emulate the 3592 C06 model, with 4 FICON ports each. Each controller emulates up to 128 virtual 3592 tape drives, for a total of 256 virtual drives per TS7680 system. The mainframe sees up to 1 million virtual tape cartridges, up to 100GB raw capacity each, before compression. For z/OS, the automated library has full SMS Tape and Integrated Library Management capability that you would expect.
Inside, the two control units are both connected to a redundant clustered pair of ProtecTIER engines running the HyperFactor deduplication algorithm, which processes deduplication inline, as data is ingested, rather than using the post-process approach of other deduplication solutions. These engines are similar to the TS7650 gateway machines for distributed systems.
On the back end, these ProtecTIER deduplication engines are then connected to external disk, up to 1PB. If you get 25x data deduplication ratio on your data, that would be 25PB of mainframe data stored on only 1PB of physical disk. The disk can be any disk supported by ProtecTIER over FCP protocol, not just the IBM System Storage DS8000, but also the IBM DS4000, DS5000 or IBM XIV storage system, various models of EMC and HDS, and of course the IBM SAN Volume Controller (SVC) with all of its supported disk systems.
It's Tuesday, and that means more IBM announcements!
I haven't even finished blogging about all the other stuff that got announced last week, and here we are with more announcements. Since IBM's big [Pulse 2010 Conference] is next week, I thought I would cover this week's announcement on Tivoli Storage Manager (TSM) v6.2 release. Here are the highlights:
Client-Side Data Deduplication
This is sometimes referred to as "source-side" deduplication, as storage admins can get confused about which servers are clients in a TSM client-server deployment. The idea is to identify duplicates at the TSM client node, before sending data to the TSM server. This is done at the block level, so even files that are similar but not identical, such as slight variations from a master copy, can benefit. The dedupe process is based on a shared index across all clients and the TSM server, so if you have a file that is similar to a file on a different node, the blocks that are identical in both would be deduplicated.
This feature is available for both backup and archive data, and can also be useful for archives using the IBM System Storage Archive Manager (SSAM) v6.2 interface.
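A toy sketch of the source-side idea, using fixed-size blocks and SHA-256 hashes as my own simplification (TSM's actual chunking and wire protocol differ):

```python
# Toy sketch of source-side deduplication: hash each block, and only
# send blocks the shared index has never seen before.
import hashlib

BLOCK_SIZE = 4096
shared_index = set()   # in TSM, this index spans all clients and the server

def backup(data: bytes) -> int:
    """Return the number of bytes that actually cross the network."""
    sent = 0
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in shared_index:   # only new blocks are transmitted
            shared_index.add(digest)
            sent += len(block)
    return sent

first = backup(b"A" * 4096 + b"B" * 4096)   # both blocks new: 8192 sent
second = backup(b"A" * 4096 + b"B" * 4096)  # all duplicates: 0 sent
third = backup(b"A" * 4096 + b"C" * 4096)   # similar file: only 4096 sent
```

The third call shows why block-level dedupe helps with files that are similar but not identical: only the changed block travels over the network.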
Simplified management of Server virtualization
TSM 6.2 improves its support of VMware guests by adding auto-discovery. Now, when you spontaneously create a new virtual machine OS guest image, you won't have to tell TSM; it will discover this automatically! TSM's legendary support of VMware Consolidated Backup (VCB) now eliminates the manual process of keeping track of guest images. TSM also added support of the vStorage API for file-level backup and recovery.
While IBM is the #1 reseller of VMware, we also support other forms of server virtualization. In this release, IBM adds support for Microsoft Hyper-V, including support using Microsoft's Volume Shadow Copy Services (VSS).
Automated Client Deployment
Do you have clients at all different levels of TSM backup-archive client code deployed all over the place? TSM v6.2 can upgrade these clients up to the latest client level automatically, using push technology, from any client running v5.4 and above. This can be scheduled so that only certain clients are upgraded at a time.
Simultaneous Background Tasks
The TSM server has many background administrative tasks:
Migration of data from one storage pool to another, based on policies, such as moving backups and archives from a disk pool over to tape pools to make room for new incoming data.
Storage pool backup, typically data on a disk pool is copied to a tape pool to be kept off-site.
Copy active data. In TSM terminology, if you have multiple backup versions, the most recent version is called the active version, and the older versions are called inactive. TSM can copy just the active versions to a separate, smaller disk pool.
In previous releases, these were done one at a time, so it could make for a long service window. With TSM v6.2, these three tasks are now run simultaneously, in parallel, so that they all get done in less time, greatly reducing the server maintenance window, and freeing up tape drives for incoming backup and archive data. Often, the same file on a disk pool is going to be processed by two or more of these scheduled tasks, so it makes sense to read it once and do all the copies and migrations at one time while the data is in buffer memory.
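The read-once idea can be sketched like this (illustrative only, not TSM internals): each file comes off the disk pool once, and every interested task consumes the same buffer.

```python
# Illustrative sketch: rather than three separate passes over the disk
# pool, each file is read into buffer memory once and handed to every
# task that needs it.
def process_pool(files, tasks):
    for name, data in files.items():
        # one read from the disk pool...
        for task in tasks:
            task(name, data)   # ...consumed by all tasks in a single pass

migrated, offsite_copy, active_copy = [], [], []
process_pool(
    {"backup1.dat": b"..."},
    [lambda n, d: migrated.append(n),      # migration to tape pool
     lambda n, d: offsite_copy.append(n),  # storage pool backup
     lambda n, d: active_copy.append(n)],  # copy of active data
)
```

One pass instead of three means fewer reads, a shorter maintenance window, and tape drives freed up sooner for incoming backups.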
Enhanced Security during Data Transmission
Previous releases of TSM offered secure in-flight transmission of data for Windows and AIX clients. This security uses Secure Socket Layer (SSL) with 256-bit AES encryption. With TSM v6.2, this feature is expanded to support Linux, HP-UX and Solaris.
Improved support for Enterprise Resource Planning (ERP) applications
I remember back when we used to call these TDPs (Tivoli Data Protectors). TSM for ERP allows backup of ERP applications, seamlessly integrating with database-specific tools like IBM DB2, Oracle RMAN, and SAP BR*Tools. This allows one-to-many and many-to-one configurations between SAP servers and TSM servers. In other words, you can have one SAP server back up to several TSM servers, or several SAP servers back up to a single TSM server. This is done by splitting databases into "sub-database objects", and then processing each object separately. This can be extremely helpful if you have databases over 1TB in size. In the event that backing up an object fails and has to be re-started, it does not impact the backup of the other objects.
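A hypothetical sketch of the sub-database object idea. The 128GB object size, the names, and the API below are all mine for illustration; they are not TSM for ERP's actual interfaces:

```python
# Hypothetical sketch: split a large database into independent objects
# so a failed transfer restarts only that object, never the whole backup.
from concurrent.futures import ThreadPoolExecutor

def split(db_size_gb: int, object_size_gb: int = 128):
    count = -(-db_size_gb // object_size_gb)   # ceiling division
    return [f"object_{i}" for i in range(count)]

def send(obj: str):
    # A real implementation retries just this object on failure, so one
    # failed transfer never forces a restart of the entire backup.
    return (obj, "done")

objects = split(1024)   # a 1TB database becomes 8 independent objects
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(send, objects))
```

Splitting also enables the one-to-many case: different objects can be directed at different TSM servers in parallel.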
Continuing on the [IBM Storage Launch of February 9], John Sing has offered to write the following guest post about the [announcement] of IBM Scale Out Network Attached Storage [IBM SONAS]. John and I have known each other for a while, traveled the world to work with clients and speak at conferences. He is an Executive IT Consultant on the SONAS team.
Guest Post written by John Sing, IBM San Jose, California
What is IBM SONAS? It’s many things, so let’s start with this list:
It’s IBM’s productized, pre-packaged Scale Out NAS global virtual file server, delivered in an easy-to-use appliance
IBM’s solution for large enterprise file-based storage requirements, where massive scale in capacity and extreme performance is required, especially for today’s modern analytics-based Competitive Advantage IT applications
Scales to many petabytes of usable storage and billions of files in a single global namespace
Provides integrated central management, central deployment of petabyte levels of storage
Modular commercial-off-the-shelf [COTS] building blocks. I/O, storage, network capacity scale independently of each other. Up to 30 interface nodes and 60 storage nodes, in an IBM General Parallel File System [GPFS]-based cluster. Each 10Gb CEE interface node port is capable of streaming at 900 MB/sec
Files are written in block-sized chunks, striped across multiple disk drives in parallel – aggregating throughput on a massive scale (both read and write), as well as providing auto-tuning and auto-balancing
Functionality delivered via one program product, IBM SONAS Software, which provides all of above functions, along with clustered CIFS, NFS v2/v3 with session auto-failover, FTP, high availability, and more
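The chunk-striping described in the list above can be sketched as follows (purely illustrative; GPFS's real block allocation is far more sophisticated):

```python
# Round-robin striping sketch: each block-sized chunk lands on the next
# disk, so a large file can be read or written by all disks in parallel.
def stripe(data: bytes, disks: int, block: int = 4):
    layout = [[] for _ in range(disks)]
    for i in range(0, len(data), block):
        layout[(i // block) % disks].append(data[i:i + block])
    return layout

layout = stripe(b"ABCDEFGHIJKL", disks=3)
# each of the three disks holds one chunk: ABCD, EFGH, IJKL
```

With the file spread evenly, aggregate throughput scales with the number of disks, and no single spindle becomes a hot spot.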
IBM SONAS makes automated tiered storage achievable and realistic at petabyte levels:
Integrated high performance parallel scan engine capable of identifying files at over 10 million files per minute per node
Integrated parallel data movement engine to physically relocate the data within tiered storage
And we’re just scratching the surface. IBM has plans to deploy additional protocols, storage hardware options, and software features.
However, the real question of interest should be, “who really needs that much storage capacity and throughput horsepower?”
The answer may surprise you. IMHO, the answer is: almost any modern enterprise that intends to stay competitive. Hmmm…… Consider this: the reason that IT exists today is no longer to simply save cost (that may have been true 10 years ago). Everyone is reducing cost… but how much competitive advantage is purchased through “let’s cut our IT budget by 10% this year”?
Notice that in today’s world, there are many bright people out there, changing our world every day through New Intelligence Competitive Advantage analytics-based IT applications such as real-time GPS traffic data, real-time energy monitoring and redirection, real-time video feeds with analytics, text analytics, entity analytics, real-time stream computing, image recognition applications, HDTV video on demand, etc. Think of how the GPS industry, cell phone / Twitter / Facebook, iPhone and iPad applications, as examples, are creating whole new industries and markets almost overnight.
Then start asking yourself, “What's behind these Competitive Advantage IT applications – as they are the ones that are driving all my storage growth? Why do they need so much storage? What do those applications mean for my storage requirements?”
To be “real-time”, long-held IT paradigms are being broken every day. Take “data proximity”: we can no longer extract terabytes of data from production databases and load them into a data warehouse – where’s the “real-time” in that? Instead, today’s modern analytics-based applications demand:
Multiple processes and servers (sometimes numbering in the 100s) simultaneously ….
Running against hundreds of terabytes of data of live production data, streaming in from expanding number of smarter sensors, input devices, users
Producing digital image-intensive results that must be programmatically sent to an ever-increasing number of mobile devices in geographically dispersed locations
Requiring parallel performance levels, that used to be the domain only of High Performance Computing (HPC)
This is a major paradigm shift in storage – and that is the solution and storage capabilities that IBM SONAS is designed to address. And of course, you should be able to save significant cost through the SONAS global virtual file server consolidation and virtualization as well.
Certainly, this topic warrants more discussion. If you found it interesting, contact me, your local IBM Business Partner or IBM Storage rep to discuss Competitive Advantage IT applications and SONAS further.
Wrapping up my coverage of the Data Center Conference 2009, the week ends with a celebration. This year we had six "Hospitality Suites" sponsored by various different vendors. Each suite has its own theme, decorations and entertainment. The first suite was VMware's "Cloud 9 Ultra Lounge" which offered blue cotton candy martinis. IBM is the leading reseller of VMware.
When the red martini liquid was poured on top of the blue cotton candy, the result was a nasty muddish brown grey color. The guy on the left chose to get the martini without the blue cotton candy. We joked that this is perhaps a good metaphor for cloud computing in general. It looks good on paper, until you actually put it all together and realize it does not look as blue and puffy as you were expecting. However, it tasted good!
Next suite was sponsored by Cisco, one of IBM's storage networking partners. Cisco also decorated in blue, as the guy Jake in the middle demonstrates.
Next suite was sponsored by Brocade, our supplier for IBM-branded networking gear. They went with a red-and-black color scheme. Sadly, many of my pictures inside involved straight jackets and unicycles, so not appropriate for this blog. However, it was easy to remember that they were talking about their "extraordinary networks". Makes you want to help out Brocade by contacting your nearest IBM storage sales rep and buy yourself a SAN768B or two.
Somewhere along the way, we picked up Hawaiian leis at the "Margaritaville" Hospitality Suite, compliments of sponsor APC by Schneider Electric. We had the best "Filet Mignon" appetizers at "Club Dedupe" by our competitor DataDomain, and some fun with my friends over at Computer Associates' "Top Gun" suite. Pictured at right are Paula Koziol with Christian Barrera from Argentina. A good time was had by all.
Well, it's Tuesday again, but this time, today we had our third big storage launch of 2009! A lot got announced today as part of IBM's big "Dynamic Infrastructure" marketing campaign. I will just focus on the
disk-related announcements today:
IBM System Storage DS8700
IBM adds a new model to its DS8000 series with the
[IBM System Storage DS8700]. Earlier this month, fellow blogger and arch-nemesis Barry Burke from EMC posted [R.I.P DS8300] on the mistaken assumption that the new DS8700 meant the DS8300 was going away, or that anyone who bought a DS8300 recently would be out of luck. Obviously, I could not respond until today's announcement, as the last thing I want to do is lose my job by disclosing confidential information. BarryB is wrong on both counts:
IBM will continue to sell the DS8100 and DS8300, in addition to the new DS8700.
Clients can upgrade their existing DS8100 or DS8300 systems to DS8700.
BarryB's latest post [What's In a Name - DS8700] is fair game, given all the fun and ridicule everyone had at his expense over EMC's "V-Max" name.
So the DS8700 is new hardware with only 4 percent new software. On the hardware side, it uses faster POWER6 processors instead of POWER5+, has faster PCI-e buses instead of the RIO-G loops, and faster four-port device adapters (DAs) for added bandwidth between cache and drives. The DS8700 can be ordered as a single-frame dual 2-way that supports up to 128 drives and 128GB of cache, or as a dual 4-way, consisting of one primary frame, and up to four expansion frames, with up to 384GB of cache and 1024 drives.
Not mentioned explicitly in the announcements were the things the DS8700 does not support:
ESCON attachment - Now that FICON is well-established for the mainframe market, there is no need to support the slower, bulkier ESCON options. This greatly reduced testing effort. The 2-way DS8700 can support up to 16 four-port FICON/FCP host adapters, and the 4-way can support up to 32 host adapters, for a maximum of 128 ports. The FICON/FCP host adapter ports can auto-negotiate between 4Gbps, 2Gbps and 1Gbps as needed.
LPAR mode - When IBM and HDS introduced LPAR mode back in 2004, it sounded like a great idea the engineers came up with. Most other major vendors followed our lead to offer similar "partitioning". However, it turned out to be what we call in the storage biz a "selling apple" not a "buying apple". In other words, something the salesman can offer as a differentiating feature, but that few clients actually use. It turned out that supporting both LPAR and non-LPAR modes merely doubled the testing effort, so IBM got rid of it for the DS8700.
Update: I have been reminded that both IBM and HDS delivered LPAR mode within a month of each other back in 2004, so it was wrong for me to imply that HDS followed IBM's lead when obviously development happened in both companies for the most part concurrently prior to that. EMC was late to the "partition" party, but who's keeping track?
Initial performance tests show up to 50 percent improvement for random workloads, and up to 150 percent improvement for sequential workloads, and up to 60 percent improvement in background data movement for FlashCopy functions. The results varied slightly between Fixed Block (FB) LUNs and Count-Key-Data (CKD) volumes, and I hope to see some SPC-1 and SPC-2 benchmark numbers published soon.
The DS8700 is compatible for Metro Mirror, Global Mirror, and Metro/Global Mirror with the rest of the DS8000 series, as well as the ESS model 750, ESS model 800 and DS6000 series.
New 600GB FC and FDE drives
IBM now offers [600GB drives] for the DS4700 and DS5020 disk systems, as well as the EXP520 and EXP810 expansion drawers. In each case, we are able to pack up to 16 drives into a 3U enclosure.
Personally, I think the DS5020 should have been given a DS4xxx designation, as it resembles the DS4700
more than the other models of the DS5000 series. Back in 2006-2007, I was the marketing strategist for IBM System Storage product line, and part of my job involved all of the meetings to name or rename products. Mostly I gave reasons why products should NOT be renamed, and why it was important to name the products correctly at the beginning.
IBM System Storage SAN Volume Controller hardware and software
Fellow IBM Master Inventor Barry Whyte has been covering the latest on the [SVC 2145-CF8 hardware]. IBM put out a press release last week on this, and today is the formal announcement with prices and details. Barry's latest post
[SVC CF8 hardware and SSD in depth] covers just part of the entire announcement.
The other part was the [SVC 5.1 software], which can be loaded
on earlier SVC models 8F2, 8F4, and 8G4 to gain better performance and functionality.
To avoid confusion on what is hardware machine type/model (2145-CF8 or 2145-8A4) and what is software program (5639-VC5 or 5639-VW2), IBM has introduced two new [Solution Offering Identifiers]:
5465-028 Standard SAN Volume Controller
5465-029 Entry Edition SAN Volume Controller
The latter is designed for smaller deployments, supports only a single SVC node-pair managing up to
150 disk drives, available in Raven Black or Flamingo Pink.
EXN3000 and EXP5060 Expansion Drawers
IBM offers the [EXN3000 for the IBM N series]. These expansion drawers can pack 24 drives into a 4U enclosure. The drives can be either all-SAS or all-SATA, in 300GB, 450GB, 500GB and 1TB capacities.
The [EXP5060 for the IBM DS5000 series] is a high-density expansion drawer that can pack up to 60 drives into a 4U enclosure. A DS5100 or DS5300
can handle up to eight of these expansion drawers, for a total of 480 drives.
Pre-installed with Tivoli Storage Productivity Center Basic Edition. Basic Edition can be upgraded with license keys to support Data, Disk and Standard Edition to extend support and functionality to report and manage XIV, N series, and non-IBM disk systems.
Pre-installed with Tivoli Key Lifecycle Manager (TKLM). This can be used to manage the Full Disk Encryption (FDE) encryption-capable disk drives in the DS8000 and DS5000, as well as LTO and TS1100 series tape drives.
IBM Tivoli Storage FlashCopy Manager v2.1
The [IBM Tivoli Storage FlashCopy Manager V2.1] replaces two products with one. IBM used
to offer IBM Tivoli Storage Manager for Copy Services (TSM for CS), which protected Windows application data, and IBM Tivoli Storage Manager for Advanced Copy Services (TSM for ACS), which protected AIX application data.
The new product has some excellent advantages. FlashCopy Manager offers application-aware backup of LUNs containing SAP, Oracle, DB2, SQL server and Microsoft Exchange data. It can support IBM DS8000, SVC and XIV point-in-time copy functions, as well as the Volume Shadow Copy Services (VSS) interfaces of the IBM DS5000, DS4000 and DS3000 series disk systems. It is priced by the amount of TB you copy, not on the speed or number of CPU processors inside the server.
Don't let the name fool you. IBM FlashCopy Manager does not require that you use Tivoli Storage Manager (TSM) as your backup product. You can run IBM FlashCopy Manager on its own, and it will manage your FlashCopy target versions on disk, and these can be backed up to tape or another disk using any backup product. However, if you are lucky enough to also be using TSM, then there is optional integration that allows TSM to manage the target copies, move them to tape, inventory them in its DB2 database, and provide complete reporting.
Yup, that's a lot to announce in one day. And this was just the disk-related portion of the launch!
I saw this as an opportunity to promote the new IBM Tivoli Storage Manager v6.1 which offers a variety of new scalability features, and continues to provide excellent economies of scale for large deployments, in my post [IBM has scalable backup solutions].
"So does TSM scale? Sure! Just add more servers. But this is not an economy of scale. Nothing gets less expensive as the capacity grows. You get a more or less linear growth of costs that is directly correlated to the growth of primary storage capacity. (Technically, the costs will jump at regular and predictable intervals, by regular and predictable and equal amounts, as you add TSM servers to the infrastructure--but on average it is a direct linear growth. Assuming you are right-sized right now, if you were to double your primary storage capacity, you would double the size of the TSM infrastructure, and double your associated costs.)"
I talked about inaccurate vendor FUD in my post [The murals in restaurants], and recently, I saw StorageBod's piece, [FUDdy Waters]. So what would "economies of scale" look like? Using Scott's own words:
Without Economies of Scale
"If it costs you $5 to backup a given amount of data, it probably costs you $50 to back up 10 times that amount of data, and $500 to back up 100 times that amount of data."
With Economies of Scale
"If anybody can figure out how to get costs down to $40 for 10 times the amount of data, and $300 for 100 times the amount of data, they will have an irrefutable advantage over anybody that has not been able to leverage economies of scale."
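Scott's two cost curves are easy to sketch numerically. Here is a toy Python model; the dollar figures come from his quotes, but the 0.9 scaling exponent is an illustrative assumption of mine, not a measurement:

```python
# Toy model of the two cost curves above (the 0.9 exponent is an
# illustrative assumption; the $5 base figure is from Scott's example).

def linear_cost(base_cost, scale):
    """No economies of scale: cost grows in direct proportion to the data."""
    return base_cost * scale

def sublinear_cost(base_cost, scale, exponent=0.9):
    """With economies of scale: each additional unit costs less than the last."""
    return base_cost * scale ** exponent

base = 5  # $5 to back up the initial amount of data
for scale in (1, 10, 100):
    print(scale, linear_cost(base, scale), round(sublinear_cost(base, scale)))
```

With an exponent of 0.9, ten times the data costs about $40 instead of $50, close to Scott's hypothetical numbers.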
So, let's do some simple examples. I'll focus on a backup solution just for employee workstations, where each employee has 100 GB of personal data to back up on their laptop or PC. We'll look at a one-person company, a ten-person company, and a hundred-person company.
Case 1: The one-person company
Here the sole owner needs a backup solution. Here are all the steps she might perform:
Spend hours evaluating the different backup products available, making sure her operating system, file system and applications are supported
Spend hours shopping for external media, this could be an external USB disk drive, optical DVD drive, or tape drive, and confirm it is supported by the selected backup software.
Purchase the backup software, external drive, and if optical or tape, blank media cartridges.
Spend time learning the product: purchase "Backup for Dummies" or a similar book, and/or take a training class.
Install and configure the software
Operate the software, or set it up to run automatically, and take the media offsite at the end of the day, and back each morning
Case 2: The ten-person company
I guess if each of the ten employees went off and performed all of the same steps as above, there would be no economies of scale.
Fortunately, co-workers are amazingly efficient in avoiding unnecessary work.
Rather than have all ten people evaluate backup solutions, have one person do it. If everyone runs the same or similar operating system, file systems and applications, this can be done about the same as the one-person case.
Ditto on the storage media. Why should 10 people go off and evaluate their own storage media? One person can do it for all ten in about the same time as it takes for one.
Purchasing the software and hardware. Ok, here is where some costs may be linear, depending on your choices. Some software vendors give bulk discounts, so purchasing 10 seats of the same software could be less than 10 times the cost of one license. As for storage hardware, it might be possible to share drives and even media. Perhaps one or two storage systems can be shared by the entire team.
For a lot of backup software, most of the work is in the initial set up, then it runs automatically afterwards. That is the case for TSM. You create a "dsm.opt" file, and it can list all of the include/exclude files and other rules and policies. Once the first person sets this up, they share it with their co-workers.
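For illustration, a minimal shared dsm.opt for a Windows workstation might look like the fragment below. The node name, server address, and include/exclude rules are hypothetical placeholders, not defaults:

```text
* Hypothetical dsm.opt shared by co-workers (placeholders throughout)
NODENAME         WORKSTATION01
COMMMETHOD       TCPIP
TCPSERVERADDRESS tsmserver.example.com
TCPPORT          1500
PASSWORDACCESS   GENERATE
DOMAIN           C:
EXCLUDE          C:\...\*.tmp
INCLUDE          C:\Users\...\*
```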
Hopefully, if storage hardware was consolidated so that you have fewer drives than people, you can also have fewer people responsible for operations. For example, let's have the first five employees sharing one drive managed by Joe, and the second five employees sharing a second drive managed by Sally. Only two people need to spend time taking media offsite, bringing it back and so on.
Case 3: The hundred-person company
Again, it is possible that a hundred-person company consists of 10 departments of 10 people each, and they all follow the above approach independently, resulting in no economies of scale. But again, that is not likely.
Here one or a few people can invest time to evaluate backup solutions. Certainly far less than 100 times the effort for a one-person company.
Same with storage media. With 100 employees, you can now invest in a tape library with robotic automation.
Purchase of software and hardware. Again, discounts will probably apply for large deployments. Purchasing 1 tape library for all one hundred people is less than 10 times the cost and effort of 10 departments all making independent purchases.
With a hundred employees, you may have some differences in operating system, file systems and applications. Still, this might mean two to five versions of dsm.opt, and not 10 or 100 independent configurations.
Operations is where the big savings happen. TSM has "progressive incremental backup" so it only backs up changed data. Other backup schemes involve taking periodic full backups which tie up the network and consume a lot of back-end resources. In head-to-head comparisons between IBM Tivoli Storage Manager and Symantec's NetBackup, IBM TSM was shown to use significantly less network LAN bandwidth, less disk storage capacity, and fewer tape cartridges than NetBackup.
The savings are even greater with data deduplication. Either using hardware, like the IBM TS7650 ProtecTIER data deduplication solution, or software like the data deduplication capability built into IBM TSM v6.1, you can take advantage of the fact that 100 employees might have a lot of common data between them.
So, I have demonstrated how savings through economies of scale are achieved using IBM Tivoli Storage Manager. Adding one more person in each case is cheaper than the first person. The situation is not linear as Scott suggests. But what about larger deployments? IBM TS3500 Tape Library can hold one PB of data in only 10 square feet of data center floorspace. The IBM TS7650G gateway can manage up to 1 PB of disk, holding as much as 25 PB of backup copies. IT Analysts Tony Palmer, Brian Garrett and Lauren Whitehouse from Enterprise Strategy Group tried IBM TSM v6.1 out for themselves and wrote up a ["Lab Validation"] report. Here is an excerpt:
"Backup/recovery software that embeds data reduction technology can address all three of these factors handily. IBM TSM 6.1 now has native deduplication capabilities built into its Extended Edition (EE) as a no-cost option. After data is written to the primary disk pool, a deduplication operation can be scheduled to eliminate redundancy at the sub-file level. Data deduplication, as its name implies, identifies and eliminates redundant data.
TSM 6.1 also includes features that optimize TSM scalability and manageability to meet increasingly demanding service levels resulting from relentless data growth. The move from a proprietary back-end database to IBM DB2 improves scalability, availability, and performance without adding complexity; the DB2 database is automatically maintained and managed by TSM. IBM upgraded the monitoring and reporting capabilities to near real-time and completely redesigned the dashboard that provides visibility into the system. TSM and TSM EE include these enhanced monitoring and reporting capabilities at no cost."
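The sub-file deduplication the report describes can be illustrated with a toy sketch: split the data into fixed-size chunks, hash each chunk, and store only chunks not seen before. This is a simplification for illustration, not TSM's actual implementation:

```python
import hashlib

def dedup_store(data: bytes, store: dict, chunk_size: int = 4096) -> int:
    """Store data in fixed-size chunks, skipping chunks already present.
    Returns the number of new chunks actually written."""
    new_chunks = 0
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:       # only redundant data is eliminated
            store[digest] = chunk
            new_chunks += 1
    return new_chunks

store = {}
first = dedup_store(b"A" * 8192, store)   # two identical 4 KB chunks -> 1 stored
second = dedup_store(b"A" * 8192, store)  # same data again -> 0 new chunks
print(first, second)
```

A hundred employees backing up the same operating system files is exactly the "same data again" case: after the first copy, the rest cost almost nothing.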
The majority of Fortune 1000 customers use IBM Tivoli Storage Manager, and it is the backup software that IBM uses itself in its own huge data centers, including the cloud computing facilities. In combination with IBM Tivoli FastBack for remote office/branch office (ROBO) situations, and complemented with point-in-time and disk mirroring hardware capabilities such as IBM FlashCopy, Metro Mirror, and Global Mirror, IBM Tivoli Storage Manager can be an effective, scalable part of a complete Unified Recovery Management solution.
This week, scientists at IBM Research and the California Institute of
Technology announced a scientific advancement that could be a major
breakthrough in enabling the semiconductor industry to pack more power
and speed into tiny computer chips, while making them more energy
efficient and less expensive to manufacture. IBM is a leader in
solid-state technology, and this scientific breakthrough shows promise.
But first, a discussion of how solid-state chips are made in the first place. Basically, a round thin wafer is etched using [photolithography]
with lots of tiny transistor circuits. The same chip is repeated over
and over on a single wafer, and once the wafer is complete, it is
chopped up into little individual squares. Wikipedia has a nice article
on [semiconductor device fabrication], but I found this
[YouTube video] more illuminating.
Up until now, the industry was able to get features down to 22
nanometers, and was hitting physical limitations to get down to
anything smaller. The new development from IBM and Caltech is to use
self-assembling DNA strands, folded into specific shapes using other
strands that act as staples, and then using these folded structures as
scaffolding to place in nanotubes. The result? Features as small as 6 nanometers. How cool is that?
While NAND Flash Solid-State Drives are available today, this new
technique can help develop newer, better technologies like Phase Change Memory (PCM).
Continuing my week in Chicago for the IBM Storage Symposium 2008, we had sessions that focused on individual products. IBM System Storage SAN Volume Controller (SVC) was a popular topic.
SVC - Everything you wanted to know, but were afraid to ask!
Bill Wiegand, IBM ATS, who has been working with SAN Volume Controller since it was first introduced in 2003, answered some frequently asked questions about IBM System Storage SAN Volume Controller.
Do you have to upgrade all of your HBAs, switches and disk arrays to the recommended firmware levels before upgrading SVC? No. These are recommended levels, but not required. If you do plan to update firmware levels, focus on the host end first, switches next, and disk arrays last.
How do we request special support for stuff not yet listed on the Interop Matrix?
Submit an RPQ/SCORE, same as for any other IBM hardware.
How do we sign up for SVC hints and tips? Go to the IBM
[SVC Support Site] and select the "My Notifications" under the "Stay Informed" box on the right panel.
When we call IBM for SVC support, do we select "Hardware" or "Software"?
While the SVC is a piece of hardware, there are very few mechanical parts involved. Unless there are sparks,
smoke, or front bezel buttons dangling from springs, select "Software". Most of the questions are
related to the software components of SVC.
When we have SVC virtualizing non-IBM disk arrays, who should we call first?
IBM has world-renowned service, with some of IT's smartest people working the queues. All of the major storage vendors play nice
as part of the [TSAnet Agreement] when a mutual customer is impacted.
When in doubt, call IBM first, and if necessary, IBM will contact other vendors on your behalf to resolve.
What is the difference between livedump and a Full System Dump?
Most problems can be resolved with a livedump. While not complete information, it is generally enough,
and is completely non-disruptive. Other times, the full state of the machine is required, so a Full System Dump
is requested. This involves rebooting one of the two nodes, so virtual disks may temporarily run slower on that node.
What does "svc_snap -c" do? The "svc_snap" command on the CLI generates a snap file, which includes the cluster error log and trace files from all nodes. The "-c" parameter includes the configuration and virtual-to-physical mapping that can be useful for
disaster recovery and problem determination.
I just sent IBM a check to upgrade my TB-based license on my SVC, how long should I wait for IBM to send me a software license key?
IBM trusts its clients. No software license key will be sent. Once the check clears, you are good to go.
During migration from old disk arrays to new disk arrays, I will temporarily have 79TB more disk under SVC management, do I need to get a temporary TB-based license upgrade during the brief migration period?
Nope. Again, we trust you. However, if you are concerned about this at all, contact IBM and they will print out
a nice "Conformance Letter" in case you need to show your boss.
How should I maintain my Windows-based SVC Master Console or SSPC server?
Treat this like any other Windows-based server in your shop, install Microsoft-recommended Windows updates,
run Anti-virus scans, and so on.
Where can I find useful "How To" information on SVC?
Specify "SAN Volume Controller" in the search field of the
[IBM Redbooks] vast library of helpful books.
I just added more managed disks to my managed disk group (MDG), can I get help writing a script to redistribute the extents to improve wide-striping performance?
Yes, IBM has scripting tools available for download on
[AlphaWorks]. For example, svctools will take
the output of the "lsinfo" command, and generate the appropriate SVC CLI to re-migrate the disks around to optimize
performance. Of course, if you prefer, you can use IBM Tivoli Storage Productivity Center instead for a more automated approach.
Any rules of thumb for sizing SVC deployments?
IBM's Disk Magic tool includes support for SVC deployments. Plan for 250 IOPS/TB for light workloads,
500 IOPS/TB for average workloads, and 750 IOPS/TB for heavy workloads.
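Applying these rules of thumb is simple arithmetic. A quick sketch (the function name is mine, not part of Disk Magic):

```python
# Rough sizing sketch using the SVC rules of thumb quoted above.
IOPS_PER_TB = {"light": 250, "average": 500, "heavy": 750}

def required_iops(capacity_tb: float, workload: str) -> float:
    """Estimate the IOPS an SVC deployment must sustain for a given capacity."""
    return capacity_tb * IOPS_PER_TB[workload]

print(required_iops(79, "average"))  # e.g. 79 TB of average workload
```

For real deployments, IBM's Disk Magic tool accounts for cache hit ratios, RAID levels and back-end device characteristics that this one-liner ignores.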
Can I migrate virtual disks from one managed disk group (MDG) to another of different extent size?
Yes, the new Vdisk Mirroring capability can be used to do this. Create the mirror for your Vdisk between the
two MDGs, wait for the copy to complete, and then split the mirror.
Can I add or replace SVC nodes non-disruptively? Absolutely, see the Technotes
[SVC Node Replacement] page.
Can I really order an SVC EE in Flamingo Pink? Yes. While my blog post that started all
this [Pink It and Shrink It] was initially just some Photoshop humor, the IBM product manager for SVC accepted this color choice as an RPQ option.
The default color remains Raven Black.
The focus on square footage resulted in higher density. This reminds me of the classic IBM commercial ["The Heist"] where Gil panics that the roomful of servers is missing, and Ned explains that it was all consolidated onto a single IBM server.
I suspect few people picked up on the fact that the acronym for ["new enterprise datacenter"] spells "Ned", our donut-eating hero in this series of videos.
Costs in the data center are proportional to power usage rather than space.
Power efficiency is more of a behavior problem than it is a technology problem.
This is definitely a step in the right direction. Both servers and storage systems consume a large portion of the energy on the data center floor. IBM Tivoli Usage and Accounting Manager can include energy consumption as part of the chargeback calculations.
However, I have to assume his real question is ... "what is the quick and easy way for me to build a lightweight database app like Microsoft Access that I can distribute as a standalone executable?"
To which I would say "Lotus has a program called Approach, which is part of Lotus SmartSuite, which some people still use. However, a lot of the focus in IBM now centers around the lightweight Cloudscape database which IBM acquired from Informix, which is now known as the [open source project called Derby]. Many IBM and Lotus products, such as Lotus Expeditor use the JDBC connection to Derby, which allows you to use Windows, Linux, Flash, etc. ... with no vendor lock in".
I am familiar with Cloudscape, and I evaluated it as a potential database for IBM TotalStorage Productivity Center, when I was the lead architect defining the version 1 release. It runs entirely on Java, which is both a plus and a minus: a plus in that it runs anywhere Java runs, but a minus in that it is not optimized for high performance or large scalability. Because of this, we decided to use the full commercial DB2 database instead for Productivity Center.
Not to be outdone, my colleagues over at DB2 offered a different alternative, [DB2 Express-C], which runs on a variety of Windows, Linux-x86, and Linux on POWER platforms. It is "free" as in beer, not free as in speech, which means you can download and use it today at no charge, and even ship products with it included, but you are not allowed to modify and distribute altered versions of it, as you can with "free as in speech" open source code, as in the case of Derby above (see [Apache License 2.0] for details).
As I see it, DB2 Express-C has two key advantages. First, if you like the free version, you can purchase a "support contract" for those who need extra hand-holding, or are using this as part of a commercial business venture. Second, for those who do prefer vendor lock-in, it is easy to upgrade Express-C to the full IBM DB2 database product, so if you are developing a product intended for use with DB2, you can develop it first with DB2 Express-C, and migrate up to the full commercial version when you are ready.
This is perhaps more information than you expected for such a simple question. Meanwhile, I am still trying to figure out MySQL as part of my [OLPC volunteer project].
Well, we had another successful event in Second Life today.
Unlike our April 26 launch of our System Storage products for IBM Business Partners only, this time we decided to use a "Meet the Storage Experts" Q&A Panel format, and open up registration to everyone. The subject matter experts sat at the front of the room on four stools. We had six rows of chairs arranged semi-circularly.
Shown above, from left to right, are the avatars of our four experts:
IBM System Storage N series, focusing on recent N3000 disk system announcements
Harold Pike (holding the microphone while speaking)
IBM System Storage DS3000 and DS4000 series, focusing on recent DS3000 disk system announcements
IBM System Storage TS series, focusing on recent TS2230, TS3400 and TS7700 tape system announcements
IBM storage networking, focusing on recent IBM SAN256B director blade announcements
While Eric was a veteran Second Lifer, having presented at our April event, the other three were trained on how to raise their hand, speak into the microphone, sit on the stool, and so on. I want to thank all of our experts for putting in this effort!
The event was produced by Katrina H Smith. She did a great job, and made sure we were on top of all the issues and tasks required to get the job done. Running a Second Life event is every bit as hard as running a real face-to-face event. We had several meetings to discuss venue details, placement of chairs, placement of product demos, audio/video recording, wall decorations, tee-shirt and coffee mug design, logistics, and so on.
I acted as moderator/emcee for the event. That is my back in the picture above. The process was simple, modeled after the "Birds of a Feather" sessions at events like SHARE and the IBM Storage and Storage Networking Symposium. We threw out a list of topics the experts would cover, and people in the audience would "raise their left hand". I, as the moderator, would then walk over to each person, and hold out the microphone for them to ask the question. I would then repeat the question and ask the appropriate expert to provide an answer. We defined gestures on how to "raise hand" and "put hand down" that we gave to each registered participant.
We had four dedicated "camera-avatars" in world to capture both video and screenshots. Our video editors are now working to edit "highlight videos" that we can use at future events, for training materials, and for our internal "BlueTube" online video system.
The room was filled with examples of each of our products, made into 3D objects that were dimensionally correct, and "textured" with photographs of the actual products. If you click on an object, you get a "notecard" that provides more information. Special thanks to Scott Bissmeyer for making all of these objects for us.
We made posters of each expert and placed them in all four corners of the room. On the bottom of each coffee mug was a picture of one of the experts, and if you walked under a poster, you were "dispensed" a coffee mug matching the expert shown in the poster. Participants could "Collect all Four!" When you bring the coffee mug up to take a sip, the picture on the bottom of the mug is exposed for all to see. And as a final give-away to the audience, we made a variety of event tee-shirts and polo-shirts.
At the end of the session, we asked everyone to click on the "Survey" kiosk near the exit door. We asked six simple questions using SurveyMonkey.com that took only a few minutes to answer. We found asking questions immediately at the end of the event was the best way to capture this feedback.
From a "Green" perspective, we had people registered from the following countries: US, India, Mexico, Australia, United Kingdom, Brazil, Germany, Argentina, Chile, China, Canada, and Venezuela. Second Life allows all these people, who probably could not travel or could not afford the time and expense to travel, to participate in a simulated face-to-face meeting without the energy consumption of traditional travel.
More importantly, we got several leads for business. People often ask, "Yes, but is there any business associated with this?" This time, there was: based on the answers to the questions, several avatars asked for a real sales call to follow up on the products and offerings that were discussed.
With such a great success, we have already scheduled our next Second Life event, November 8. Mark your calendars! I'll post more details on the registration process of the November event when available.
Well, it's Tuesday again, and you know what that means? IBM Announcements! I am here in New York for the exciting news!
(FCC Disclosure: I work for IBM. This blog post can be considered a "paid celebrity endorsement" for the IBM z14 mainframe and DS8880 Storage System.)
In support of the [IBM z14] mainframe announcement, IBM has also disclosed R8.3 enhancements for the DS8880 Storage System. Here is a quick recap:
New Tier-1 Flash Capacities available for HPFE Gen2 drawers
IBM introduces a new Tier-1 flash card with 3.84 TB capacity. In the past, the IBM DS8880 only supported Tier-0 cards rated at 10 Drive Writes per Day (10 DWPD), with capacities of 400, 800, 1600 and 3200 GB. The Tier-1 flash card handles only 1 DWPD, and such devices are often dubbed "Read-Intensive", but it can actually handle about 90 percent of typical production workloads.
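A DWPD rating translates directly into a daily write budget: capacity times DWPD. A quick sketch of the arithmetic:

```python
# Daily write budget implied by a DWPD (Drive Writes per Day) rating.

def daily_write_budget_tb(capacity_tb: float, dwpd: float) -> float:
    """How many TB may be written to the card per day within its endurance rating."""
    return capacity_tb * dwpd

tier0 = daily_write_budget_tb(3.2, 10)   # 3.2 TB Tier-0 card at 10 DWPD
tier1 = daily_write_budget_tb(3.84, 1)   # 3.84 TB Tier-1 card at 1 DWPD
print(tier0, tier1)
```

So a Tier-0 card tolerates roughly 32 TB of writes per day, while the larger Tier-1 card tolerates about 3.84 TB per day, which is still plenty for read-heavy production workloads.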
zHyperLink™ drastically reduces the latency between the IBM z14 mainframe and the DS8880 storage system. Traditional FICON paths through SAN switches or directors introduce about 140 to 175 microseconds of latency between systems. zHyperLink is a direct cable connection, with only 20 microseconds of latency.
The I/O bays on the DS8880 used for HPFE Gen2 already have zHyperLink ports on them. This direct cable is limited to 150 meters, however, so plan accordingly.
Transparent Cloud Tiering
IBM already announced Transparent Cloud Tiering to IBM Bluemix, IBM Cloud Object Storage and the IBM TS7760 virtualization engine in the R8.2.3 release. The new Release 8.3 of DS8880 now adds support for Amazon S3, providing yet another choice of where to migrate data sets. IBM also adds replication, allowing a data set to be migrated to two separate target locations for added availability, much like writing to separate ML2 tape cartridges.
Cascading FlashCopy is a feature that has existed for a while on the IBM XIV and SAN Volume Controller platforms, so this is a port of that concept over to the DS8880 microcode. Now, your FlashCopy target can become the source of a follow-on FlashCopy request. You can make copies of copies. This applies to both the volume and data set level functions.
Why would anyone do this? Well, you might suspend your application at midnight and create a clean FlashCopy of a 24-by-7 ever-changing database. Then, the following morning, workers who need a static "midnight version" of the database can use this copy as their source and perform additional FlashCopy requests for their own needs.
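The copies-of-copies idea can be modeled in a few lines. This is a conceptual toy, not the DS8880 microcode, and the names are mine:

```python
# Toy model of cascading point-in-time copies: any target can itself
# serve as the source of a follow-on copy.

class Volume:
    def __init__(self, name, data=b""):
        self.name, self.data = name, data

def flash_copy(source: Volume, target_name: str) -> Volume:
    """Create a point-in-time copy; the result is itself a valid source."""
    return Volume(target_name, source.data)

db = Volume("prod_db", b"midnight state")
midnight = flash_copy(db, "midnight_copy")     # clean copy taken at midnight
analyst = flash_copy(midnight, "analyst_copy") # a copy of a copy
print(analyst.data == b"midnight state")
```

The production volume can keep changing all morning; each analyst works from the frozen midnight state without touching the original source.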
IBM DS8880 MES Support
MES is an abbreviation for "Miscellaneous Equipment Specification", one of the many Three-Letter Acronyms (TLAs) that isn't much clearer even once you know what the words stand for. In short, an MES is a formally supported option to upgrade a piece of hardware that is already installed and running at a client location. IBM will offer MES upgrades for existing DS8880 systems to add HPFE Gen2 drawers, and to upgrade the I/O bays to support zHyperLink connections.
(Final note: you might notice the change in upper and lower case. The IBM z14 (lower case) refers to the specific mainframe model, consistent with its predecessors the z13 and z13s, but the family name "IBM z Systems" has been shortened to "IBM Z®" (upper case). IBM Storage Systems and IBM POWER Systems were already upper case, so the mainframe guys just wanted to follow suit. I suspect "IBM i" will remain lower case, however.)
Well, it's Tuesday again, and you know what that means? IBM Announcements!
IBM Elastic Storage Server
Replacing the older "GSn" and "GLn" models, IBM announces the "Second Generation" GSnS and GLnS models (the second "S" stands for Second Generation), the "n" continues to refer to the number of storage drawers. All of these have a pair of POWER8 servers to drive amazing performance at a low price point.
The "GSnS" models are based on smaller 2U, 24-drive storage drawers, with 3.84 and 15.36 TB Tier-1 Read-intensive Solid-State Drives (SSD). The "GLnS" models are based on larger 5U, 84-drive storage drawers, with 4TB, 8TB and 10TB nearline (7200 rpm) spinning disk.
These new models have the latest IBM Spectrum Scale software pre-installed.
In addition to IBM's two existing Hyperconverged offerings--IBM Spectrum Accelerate for x86 servers, and IBM Spectrum Scale for x86, POWER and z Systems servers--IBM Power Systems now offers a third option. This integrated offering combines Nutanix's Enterprise Cloud Platform software with IBM Power Systems™ hardware to deliver a turnkey hyperconverged solution that targets critical workloads in large enterprises.
Nutanix is offered and will be defaulted/required on these Power® servers only:
While "Hyperconvergence" is still fairly new, and only about 1 percent of data centers have deployed this new technology, I am glad that IBM is a leader in this space with multiple offerings across both x86 and POWER systems platforms.
This week, I am presenting at the IBM Systems Technical University in Orlando, Florida, May 22-26, 2017. Here is my recap of the sessions on the morning of Day 5, the last day of the conference.
Integrating IBM Storage in Container Environments
Dr. Robert Haas, IBM CTO Storage for Europe, presented IBM Storage for Docker containers. These are different from containers in IBM Cloud Object Storage, and different from the Container Pools used in Spectrum Protect.
Robert gave an overview of IBM Spectrum Conductor, part of the IBM Software Defined Infrastructure (SDI) Spectrum Compute family of software products. The goal is to analyze large amounts of data, access these data efficiently, and protect the data, results and insights as intellectual property.
IBM Spectrum Compute comes in several offerings. IBM Spectrum LSF (Load Sharing Facility) manages long-running batch jobs for modeling, design and simulations. IBM Spectrum Symphony provides low-latency for risk analytics in the financial services sector. IBM Spectrum Conductor comes in two flavors. Conductor for Spark (CFS) manages Spark analytics. Conductor for Containers (CFC) handles Docker and Kubernetes containers.
Docker is the run-time platform. While there are other container run-time platforms like RKT and LXD, Docker is clearly the marketshare leader, growing 40 percent per year.
Statistics from the latest DockerCon2016 conference showed the most popular use cases and workloads for Docker. What can run in Docker: Lots of applications can be "containerized", including Redis, MongoDB, PostgreSQL, OracleDB, Java, to name a few. Docker is well established in enterprises, including service providers, healthcare, insurance and financial services, public sector, and technology firms.
Kubernetes, Mesos and Docker/Swarm are a layer above, as orchestrators. Spectrum Conductor for Containers uses Kubernetes and other open source tools to coordinate activity. Orchestrators restart failed applications, and can scale up or scale down the number of instances as needed. Orchestrators can manage groups of applications, across clusters on-premises and off-premises Cloud.
From a storage perspective, containers access storage like bare-metal operating systems, bypassing all of the layers normally associated with bloated Virtual Machine hypervisors. It also eliminates single root I/O virtualization (SR-IOV) that VMs use to compensate.
Persistent storage can be isolated, so that containers cannot see the files of other containers. This provides multi-tenancy.
Internal persistent storage (directory on host file system). However, if you move a container from one host to another, you may lose access to this internal storage.
External volume, manually mounted.
Volume driver plug-in REST API that automatically mounts it.
The last of these methods is preferred. Plug-ins are available for IBM Spectrum Scale, GlusterFS, Portworx, Rancher Convoy, RexRay, and Contiv. The start-up Flocker went out of business last year.
The Docker hosts can attach to IBM Spectrum Scale through all of its supported protocols, including POSIX, NFS and SMB. Containerized applications can move from one Docker host to another, and continue to access the IBM Spectrum Scale namespace.
IBM has created the "Ubiquity Volume Service" that provides a consistent API for Docker and Kubernetes. This will use IBM Spectrum Control Base Edition to support IBM Spectrum Scale, Spectrum Accelerate, Spectrum Virtualize and DS8000 storage systems. For IBM Spectrum Scale, volumes are mapped to iSCSI volumes, filesets or directories. For other devices, volumes are mapped to block LUNs. Ubiquity is publicly available on GitHub.
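For a sense of what a volume driver plug-in actually does, here is a minimal in-memory sketch of the Docker volume plug-in protocol's JSON endpoints. The back-end here is just a dict standing in for a real storage system; Ubiquity's actual implementation is on GitHub and differs from this toy:

```python
# Minimal in-memory sketch of the Docker volume plug-in protocol.
# Endpoint names follow the Docker plug-in API; the "storage" is a dict.

volumes = {}

def handle(endpoint: str, payload: dict) -> dict:
    if endpoint == "/Plugin.Activate":
        return {"Implements": ["VolumeDriver"]}
    if endpoint == "/VolumeDriver.Create":
        volumes[payload["Name"]] = {"Opts": payload.get("Opts", {})}
        return {"Err": ""}
    if endpoint == "/VolumeDriver.Mount":
        # a real driver would attach the LUN or fileset and return its path
        return {"Mountpoint": "/mnt/" + payload["Name"], "Err": ""}
    if endpoint == "/VolumeDriver.Remove":
        volumes.pop(payload["Name"], None)
        return {"Err": ""}
    return {"Err": "unsupported endpoint " + endpoint}

print(handle("/VolumeDriver.Create", {"Name": "db_vol"}))
```

Docker (or Kubernetes, via a flex-volume shim) sends these JSON requests over a Unix socket; the driver's only job is to map volume names onto real storage and report mountpoints back.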
Enterprise Applications for IBM Cloud Object Storage
Andy Kutner, IBM Cloud Architect, presented the various options available for NAS gateways that can front IBM Cloud Object Storage.
Ctera offers NAS gateways, and endpoint agents for backup and Enterprise File Sync & Share (EFSS). This vendor targets Remote Office/Branch Office (ROBO) and small NAS consolidation deployments with less than 60 TB per office. IBM is a reseller of Ctera, so you can get both Ctera and IBM COS from the same IBM sales rep.
Nasuni offers a global file system, accessible from any device, smartphone, tablet or desktop. They are focused on taking out EMC and NetApp NAS solutions. Performance at the edge, combined with capacity in the client's chosen Cloud (including IBM Cloud Object Storage or IBM Bluemix). Infinite snapshots replace backups, offering RPO of 1 minute for Disaster Recovery. Their global file system "UniFS" offers file locking.
Panzura focuses on Cloud Integrated NAS, File Distribution, and Collaboration. This can help eliminate "islands of storage". The File Distribution can be any type of file, but was originally designed for Media and Entertainment, such as videos. Collaboration employs EFSS features for workgroup shared file folders, such as CAD/CAM or engineering blueprints.
IBM Spectrum Scale can provide NFS and SMB access to files, and then move colder, less active data to IBM Cloud Object Storage, using Transparent Cloud Tiering feature. Spectrum Scale offers WAN caching across locations.
IBM COS now offers a native NFS v3 interface. This allows read/write NFS access, with S3 API read of the same content. Each file is mapped to a single object.
This is targeted for large scale archive, static-and-stable data, NFS-based backup software, and applications going through the transition from file-based to object-based. This is not intended for multi-site collaboration or primary NAS replacement. Regardless of the number of geographically dispersed IBM COS sites, the NAS can run on only one or two sites initially.
To provide NFS v3 support, IBM introduces new F5100 File Accessers, which talk to an IBM COS Accesser, which in turn acts on specific Vaults in the storage pools. The file-to-object mapping metadata is replicated on-premises across three File Accessers, and optionally replicated asynchronously to a second site for High Availability. The S3 API can read each file by file name or by object URI.
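Since each file is mapped to a single object, you can picture the translation as deriving an object key from the file's path relative to the NFS export. Here is a minimal sketch of that idea; the bucket name and key scheme are illustrative assumptions on my part, not the actual File Accesser implementation:

```python
# Hypothetical sketch: mapping an NFS file path to an S3-style object URI,
# illustrating the one-file-to-one-object model described above.
def file_to_object_uri(export_root: str, file_path: str, bucket: str) -> str:
    """Derive an object URI from a file path relative to the NFS export."""
    if not file_path.startswith(export_root):
        raise ValueError("file is outside the export")
    key = file_path[len(export_root):].lstrip("/")
    return f"s3://{bucket}/{key}"

uri = file_to_object_uri("/exports/archive",
                         "/exports/archive/2017/scan001.tif",
                         "archive-vault")
# → s3://archive-vault/2017/scan001.tif
```

The same content is then reachable both as a file over NFS and as an object over the S3 API.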
Initially, the "File Accesser" is only available as a pre-built system, not as software-only.
There was not enough time to cover other solutions, including Avere, NetApp AltaVault, or Open Source S3FS.
This was a great event, just the right size, between 1,500 and 2,000 attendees. Similar IBM Technical University events coming up later this year:
This week, I am presenting at the IBM Systems Technical University in Orlando, Florida, May 22-26, 2017. Thursday evening, we had the "Meet The Experts" sessions. There were four: Storage, Power Systems, z/OS, and a fourth focused on z/VM and Linux on z Systems. I was on the expert panel for Storage.
Mo McCullough was the emcee. Special thanks to Shelly Howrigon for her help with this event.
(Disclaimer: Do not shoot the messenger! We had a dozen or so experts on the panel, representing System Storage hardware, software and services. I took notes, trying to capture the essence of the questions, and the answers given by the various IBM experts. The answers from individual IBMers may not reflect the official position of IBM management. I leave out any references to unannounced plans or products. Where appropriate, my own commentary will be in italics.)
When will IBM offer a single pane of glass management for all of its IBM storage products?
IBM is working hard on this. Our strategy is to focus on IBM Spectrum Control as the primary answer. We have extended support across block, file and object, with support for IBM Spectrum Scale and IBM Cloud Object Storage System. We have also provided plug-ins for VMware, Cisco UCS Director, and OpenStack Horizon, for those who prefer those management systems instead.
What we really need are REST APIs!
Good point. IBM already has some REST APIs for the DS8000, XIV and Spectrum Protect. Now that IBM has a browser-based GUI across its entire product line, it is our strategy to offer REST APIs across our product line as well.
What is the next generation of ProtecTIER Data Deduplication going to look like?
IBM is focused on providing "data deduplication" for backup workloads directly through IBM Spectrum Protect backup software. IBM continues to sell IBM ProtecTIER.
(Virtual Tape Libraries like IBM ProtecTIER and Dell EMC Data Domain were created because much backup software was originally designed only for tape drives and libraries. A VTL was disk that pretended to be a tape library. Now that IBM Spectrum Protect, NetBackup, Commvault, and all of the other modern backup products write natively to disk, object storage or Cloud services, there really isn't a need for VTL products any more.)
Why does IBM bother with all-Flash version of DS8000 when it already has IBM FlashSystem?
Different products for different workloads. IBM DS8000 offers unique support for z System mainframe FICON attachment and 520-byte block support for IBM i. IBM also offers all-Flash Elastic Storage Server, all-Flash SVC and Storwize products, that complement the IBM FlashSystem product line.
We like how XIV can hot-enable encryption, even with existing data on it. Why doesn't DS8000 offer this?
Two separate implementations. At the time IBM DS8000 encryption was designed, it was decided that the client needed to enable encryption before writing any data.
Will we see a spinning disk version of the FlashSystem A9000?
Flash is now less expensive than spinning disk, I don't see why IBM would go backwards. The future is Flash.
We would like Spectrum Control to manage our Dell EMC Isilon
Yes, we have heard that from others. We are working on extending our third party support. Send in your cards and letters to help us prioritize. Or, better yet, submit a "Request For Enhancement" (RFE).
The difference between Tier 0 (Write Endurance) flash and Tier 1 (Read Intensive) flash is confusing. Are there any plans in the IT industry to simplify this?
No, if anything it will get worse. Today, IBM's Tier 0 is 10 Drive Writes Per Day (DWPD), and Tier 1 is 1 DWPD. Other SSD drives offer 2, 3, 5, 10, 15 and 25 DWPD. As people buy more Flash and less disk, expect more differentiation in this area.
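DWPD is easy to translate into total endurance: multiply the rating by the drive's capacity and the number of days in the warranty period. A quick back-of-envelope check, assuming an illustrative five-year warranty:

```python
# DWPD (Drive Writes Per Day) x capacity x warranty days gives the total
# data that can be written before the drive wears out.
def endurance_tb(capacity_tb: float, dwpd: float, warranty_years: int = 5) -> float:
    """Total TB writable over the warranty period."""
    return capacity_tb * dwpd * 365 * warranty_years

# Comparing a 1 TB Tier 1 (1 DWPD) drive with a 1 TB Tier 0 (10 DWPD) drive:
tier1 = endurance_tb(1.0, 1)   # 1,825 TB over 5 years
tier0 = endurance_tb(1.0, 10)  # 18,250 TB over 5 years
```

The tenfold DWPD difference is exactly a tenfold difference in lifetime writes, which is why Tier 0 drives command a premium for write-heavy workloads.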
We would like to tune Easy Tier on the Storwize products
Understood. IBM typically implements new features on the DS8000 platform first, then rolls them over to Spectrum Virtualize. The ability to influence allocation order, pin or avoid tiers, and have application API to influence the placement are already in DS8000.
What will the future of Storwize look like?
We don't have enough time to cover that in this meeting.
Recently, you raised the maximum Storwize FlashCopy background copy rate from 64 MB/sec to 2 GB/sec, but is that realistic?
The setting gives the background task a target "grains per second" to try to achieve. It may not be possible depending on your configuration and the number of concurrent tasks. Set it too high, and your Storwize may be so busy with background activity that it won't take host I/O.
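To see why the rate is really a grains-per-second target, you can convert the copy rate into grains. This sketch assumes the 256 KB grain size (the grain size is configurable, and this is my simplification, not the exact internal accounting):

```python
# Back-of-envelope conversion of FlashCopy background copy rate into
# grains per second, assuming a 256 KB grain (an illustrative choice).
GRAIN_KB = 256

def grains_per_second(rate_mb_per_sec: float) -> float:
    return rate_mb_per_sec * 1024 / GRAIN_KB

old_rate = grains_per_second(64)    # 256 grains/sec at the old 64 MB/s cap
new_rate = grains_per_second(2048)  # 8192 grains/sec at the new 2 GB/s cap
```

The jump from 256 to 8192 grains per second per task shows why the system may not always sustain the maximum in practice.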
We have been giving you our wishlist, but are there any questions the IBM experts have for the audience?
Yes. Are any clients here being asked to secure storage against Ransomware and insider threats from disgruntled employees?
(Several hands went up, and we collected their names to have further discussions.)
How should we assign business value to data?
IBM Spectrum Virtualize allows you to assign metadata tags to files, so that these can be used to drive different policies.
(The process of assigning business value is often called "Data Rationalization" and is part of ILM, BC/DR, and Data Governance efforts.)
I am concerned that AES 256 encryption is not good enough now that there is Quantum Computing.
It will be decades before Quantum Computing will be good enough to break these codes.
Will Blockchain drive huge or unique storage requirements?
No. The entries are small. You are appending small transactions to the end of existing ledgers. Nothing unique or different.
Were there any topics not adequately covered at this conference?
IBM didn't have much to offer for Spectrum Compute family of software, the Software Defined Infrastructure (SDI) that runs on both x86 and POWER systems. This should be done under the POWER brand, but many clients use Spectrum Compute with x86 servers. Ironically, Spectrum Compute products are managed under the Storage division, since Spectrum Compute and Spectrum Storage work well together.
We would like Storwize's clever NPIV to be implemented in all of the other IBM arrays, starting with DS8000.
That probably won't happen, as they are different architectures. Whereas Storwize and the rest of IBM Spectrum Virtualize family were designed for nodes to fail, and take their ports down with them, the DS8000 has independent I/O bays that continue to run independent of either POWER8 node. Likewise, FlashSystem 900 has similar separation between the FCP adapters and the processing nodes.
Can we have consistent licensing across the entire IBM Spectrum Virtualize set of products, please?
We have a task force to investigate this, and will gladly add your name to the list for input and feedback.
While the conference continues Friday morning, for many attendees, this was the last event.
IBM Spectrum Scale was formerly called GPFS and has been around since 1998. I am glad it was renamed, as GPFS suffered from "guilt by association" with other file systems, AFS, DFS, XFS, ZFS, and so on.
Spectrum Scale does much more: it supports volume, file and object level access; POSIX standards on Windows, AIX and Linux; Hadoop and Spark with a 100 percent compatible HDFS Transparency Connector; NFS, SMB and iSCSI protocols; and OpenStack Swift and Amazon S3 object-based access.
Initially designed for video streaming and High Performance Computing (HPC), IBM has extended its reach to work in a variety of workloads across different industries. More than 5,000 production systems are running at client locations.
IBM Spectrum Protect solution design: Server, Deduplication and Disaster Recovery decisions
Dan Thompson, IBM Storage Software Technical Sales Specialist, presented this session.
To make it easier to deploy, IBM Spectrum Protect now has a set of tested "blueprints" that are organized into small, medium and large. Find the one that fits your needs, and it will tell you exactly how the server should be configured. Dan recommends having a "test system" to try out new releases of IBM Spectrum Protect.
For multiple server configurations, Dan recommends adopting a standard naming convention, and to make use of Enterprise Configuration and server-side Client Option Sets. You may want to consider discrete instances for special non-backup functions, like library manager or Operations Center hub server, which allows you to upgrade more aggressively without affecting your backup clients.
If you plan to run multiple Spectrum Protect instances on the same VMware host, set the DBmemPercent to avoid having DB2 consume all of the memory, which would interfere with the other Spectrum Protect instances.
For clustered servers, IBM supports Active/Passive, Active/Active, Many/One, and Many/Few configurations. You can mix and match these as needed.
For data spill remediation, consider NIST 800-88 data shredding. This depends on the type of storage media used.
IBM Spectrum Protect for Data Retention, formerly called System Storage Archive Manager (SSAM), offers Non-Erasable, Non-Rewriteable (NENR) enforced immutability protection. (This used to be called Write-Once-Read-Many, or WORM for short, but since WORM applies only to tape and optical media, and IBM Spectrum Protect now supports Flash, Disk, Object Storage and Cloud repositories, IBM has adopted the term NENR instead.) Third party KPMG has certified, to their satisfaction, that IBM Spectrum Protect for Data Retention meets the requirements of SEC 17a-4 regulations.
When sizing your server, Dan recommends that you always "over-size" it and grow into it. Use the published "Performance Optimization Guide" to help. Monitor the server and storage using OS and device specific monitoring, in combination with IBM Spectrum Protect reports.
If you are still on BC Tiers 1 or 2, transmitting tapes to a remote vaulting facility or secondary data center, consider upgrading to BC Tier 3 at least. This can be done via electronic vaulting to an Automated Tape Library (ATL), Virtual Tape Library (VTL) or IBM Cloud Object Storage, or a Cloud service provider such as IBM Bluemix or Amazon Web Services. This can be supplemented using DB2 HADR for the IBM Spectrum Protect database.
While Spectrum Protect server can run bare-metal or as a VM, the VM instance will not have support for FCP-based tape or Virtual Tape Library. Many people are moving off tape, especially VTL, and using native Disk, Directory or Cloud container pools instead.
Lastly, take advantage of the fact that the Operations Center can view all Spectrum Protect servers across all locations. This can be helpful.
Enabling Mission Critical NoSQL workloads using IBM trillions of operations technology
TJ Harris, from the IBM Storage CTO office, and Scott Brewer, FlashSystem Team Lead, co-presented this session.
They gave a background on NoSQL, the most popular being MongoDB. The IT industry estimates that NoSQL will grow 38 percent CAGR from 2015-2020.
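A 38 percent CAGR compounds quickly. As a sanity check on what that growth estimate implies over the 2015-2020 window:

```python
# Compound annual growth: 38 percent per year over five years
# multiplies the starting market size by roughly 5x.
growth = 1.38 ** 5  # ~5.0
```

In other words, the estimate implies the NoSQL market roughly quintuples over those five years.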
The problem occurs when NoSQL applications go through a full file system stack to work with low-latency devices like Flash, especially when the writes are small, often just a few dozen bytes to 100 KB. Fortunately, IBM Research has created the "Trillions of Operations" project to explore ways to reduce the software stack and make use of the NVMe protocol.
The top three challenges for NoSQL deployments are: (a) Cost, (b) Data management and retention, and (c) Data relevancy.
To enable innovation, MongoDB offers a "Storage Engine API" that allows others to compete in this space. Currently MMAP v1 and WiredTiger are supported. IBM Research implemented its "Trillions of Operations" project as a plug-in to this API, optimized for high rates of data ingest. Compared to Facebook's RocksDB, IBM was 14x faster on writes and 2.1x faster on reads.
Another challenge is coordinating backups and disaster recovery when applications mix traditional RDBMS with these new NoSQL databases.
The week is nearly over, and I can see the light at the end of the tunnel. Everyone had a great time at last night's event at the Universal City Walk and Blue Man Group.
This week, I am presenting at the IBM Systems Technical University in Orlando, Florida, May 22-26, 2017. Here is my recap of the sessions on the morning of Day 4.
Configurable IBM Spectrum Scale
Kent Koeninger presented IBM Spectrum Scale software, which Kent refers to as "Configurable Spectrum Scale" (or CSS for short), as opposed to the pre-built system known as Elastic Storage Server (ESS).
Why choose CSS versus ESS? Lower entry price. You can start with just two single-socket servers and a drawer of disk.
IBM Spectrum Scale was formerly called IBM General Parallel File System (GPFS). Many who tried earlier versions of GPFS found it difficult to configure, because it only had a command line interface. Now, Spectrum Scale has a fully-functional GUI, and clients have been able to install and configure Spectrum Scale in just 30 minutes!
How big can Spectrum Scale grow? As much as your budget can afford! With an architecture that can support YottaBytes of data and 900 quintillion files, you won't hit any limits anytime soon.
There are some unique capabilities of ESS not available in CSS. For example, ESS offers Spectrum Scale Native RAID (erasure coding) with fast rebuild times, and ESS is certified for SAP HANA. You can combine CSS and ESS in the same Spectrum Scale cluster to create a "data lake" for mixed workloads.
A good use case for Spectrum Scale, either CSS or ESS, is backup. Kent explained why it is an excellent option to store backups with enterprise backup software such as IBM Spectrum Protect or Commvault.
VersaStack - Hybrid Cloud like no other
This session was jointly presented by Chris Vollmar, IBM Storage Architect, and Brent Anderson, Cisco Global Consulting Systems Engineer. IBM and Cisco have been partners for more than 25 years.
VersaStack combines Cisco UCS x86 servers, Cisco Nexus and MDS switches, and IBM FlashSystem or Spectrum Virtualize storage.
What if you have a SAN Infrastructure built entirely from IBM b-type or Brocade-based switches? Cisco supports their SAN switches for this, but nobody has tested VersaStack in this combination, and UCS Director does not manage this combination, so IBM does not support this. Instead, for this situation, IBM recommends doing external connection via Ethernet, or using direct-attach configurations.
Each Cisco Validated Design undergoes four months of testing, giving you a bulletproof process to deploy the solution.
There is a difference between Cisco UCS Manager and UCS Director. UCS Manager is available at no additional charge, but only manages the Cisco x86 servers. UCS Director is an optional, separately priced product that manages Cisco servers, Cisco networking, and IBM Spectrum Virtualize storage.
Brent explained the benefits of UCS Management through policies and profiles.
Chris covered Cisco CloudCenter, which the Cisco team shortens to just "C3". IBM Spectrum Copy Data Management can be used to move snapshots of data between on-premises and off-premises Cloud to help in Hybrid Cloud configurations.
How to Design an IBM Spectrum Scale solution
Tomer Perry, IBM Spectrum Scale I/O Development, presented this session.
For those who want to bring up a quick IBM Spectrum Scale environment to play around with, you can do this in as little as 30 minutes. But to design a mission critical deployment, additional requirements may need to be addressed. You may need to consult with not just storage admins, but also application owners, network admins and security personnel.
Large companies have hundreds or thousands of applications, so Tomer recommends grouping these into "workload families", based on data set types, access patterns and performance requirements. For NAS take-out, 80 percent of NAS I/O is "get attribute" calls that can easily be served directly from cache memory.
For each workload family, you may need to decide on snapshots, quotas, namespace (bind mounts, symlinks, etc.), security (ACL, encryption), estimated capacity, replication BC/DR, backup and ILM requirements.
Unless this is completely greenfield deployment, the existing infrastructure needs to be evaluated. This includes the LAN and WAN network topology, name resolution (DNS), time services (NTP), Authentication (AD, LDAP, NIS, Keystone), Keyserver (IBM SKLM), Monitoring and Migration requirements.
Tomer suggests designing the environment in this order: Cluster, File System, Storage Pools, Fileset, Replication, and finally Monitoring.
Generally, you need three NSD servers per cluster. For those licensing Spectrum Scale Standard Edition by the socket, you may be tempted to put everything into one big cluster. The new capacity-based Spectrum Scale Data Management Edition eliminates that concern, so Tomer recommends having separate compute clusters and storage clusters, connected by cross-cluster mount. All nodes in a cluster are considered an "ssh" administration domain.
A single Spectrum Scale namespace can support up to 256 file systems. There are various reasons to have multiple file systems: block size, backup/recovery, snapshots, quotas, and cross-cluster isolation. If a file system gets corrupted, it will not affect other file systems. In an internal test, an "fsck" on a file system with 1 billion files and 1 PB of data took only 30 minutes to repair.
Storage Pool design can separate metadata from content, and workloads can be separated to different storage media. With ILM, HSM and TCT, you can move colder data to Cloud, Object Storage, Spectrum Protect or Spectrum Archive.
Filesets are tree branches for each file system. IBM Spectrum Scale supports both dependent and independent filesets. Filesets can be used for Non-erasable, Non-Rewriteable (NENR) Immutability, policies, quotas, snapshots. Consider using a fileset instead of carving off a new file system.
Spectrum Scale offers both synchronous and asynchronous replication. For Synchronous, the ReadReplicaPolicy can be set to default, local or fastest. For Asynchronous, there are a variety of AFM modes (Read-only, Local-Update, Single-Writer, Independent-Writer, and Disaster Recovery). You may need to decide if your AFM gateways are dedicated or collocated. You will need to tune your TCP buffers for WAN performance to get the RPO you desire.
The nice thing about IBM solutions is that you can start small, and grow big. In all of these examples above, IBM offers sizes to match nearly any IT budget.
This week, I am presenting at the IBM Systems Technical University in Orlando, Florida, May 22-26, 2017. Here's my recap of the sessions of Day 3.
Ethernet-only SANs -- Myth or Reality?
Anuj Chandra, IBM Advisory Engineer, presented an excellent overview of Ethernet-based SANs. He started with a quick history of Ethernet, starting with Robert Metcalfe's original drawing for his concept.
In the past, Ethernet was used for email and message transfer, and so dropped packets were tolerated. However, with the use of Ethernet for SANs, many standards have been adopted to make Ethernet networks more robust. These meet requirements for Flow Control, Congestion management, low latency, data integrity and confidentiality, network isolation, and high availability.
These standards are known as IEEE 802.1Q "Data Center Bridging", including 802.1Qbb Priority Flow Control, 802.1Qaz Enhanced Transmission Selection, and 802.1Qau Congestion Notification. There is also the IETF Transparent Interconnection of Lots of Links (TRILL) to replace Spanning Tree Protocol (STP). All of these features are negotiated between server and storage endpoints. Ethernet that supports these new standards is often referred to as "Converged Ethernet", since it handles both traditional email/message traffic and SAN data traffic.
In addition to 1 GbE and 10 GbE, we now have 2.5, 5, 20, 40, 50 and 100 Gb Ethernet speeds. By 2020, Anuj estimates over half of all Ethernet ports will be 25 GbE or faster. Amazingly, some of these speeds can run on existing twisted-pair copper cabling.
Anuj also covered Remote Direct Memory Access (RDMA), and the RDMA-capable Network Interface Cards (RNIC) that support them. In one chart, shown here, Anuj explained Infiniband, RDMA over Converged Ethernet (RoCE) and RoCE v2, and Internet Wide Area RDMA Protocol (iWARP).
While many of these enhancements were intended for Fibre Channel over Ethernet (FCoE), the beneficiary has been iSCSI. Now there is iSCSI Extensions for RDMA (iSER) to take even more advantage of these changes, and can work with Infiniband, RoCE or iWARP. All of these networks can also be used as the basis for NVMe over Fabric (NVMeOF).
Ethernet is the backbone of Cloud usage, and IBM is well positioned to take advantage of these new networking technologies.
Digital Video Surveillance solutions for extended video evidence protection
Dave Taylor, IBM Executive Architect for Software Defined Storage solutions, presented this session on Digital Video Surveillance (DVS).
Most video surveillance is either analog-based, going to standard VHS tapes, or file-based. Sadly, security guards that watch live camera feeds lose their attention span after 22 minutes.
There are an estimated 72 million cameras globally, with 1.5 million more every year.
City governments spend 57 percent of their budget on "public safety". This can include body cams for police departments. Taser International, now called AXON, dominates the body-cam market.
City budgets may not be prepared to store all of this video content into a cloud that complies with Criminal Justice Information Services (CJIS) standards. These Cloud services tend to be more expensive, as the videos must be treated as evidence, tamper-proof, and with appropriate chain of custody.
DVS is not just storing movies. IBM offers Intelligent Video Analytics. It is important to be able to derive insight and actionable response.
Storage capacity adds up quickly. A standard 1080p (1920 by 1080 pixel) camera generates 2.92 GB per hour, 70 GB per day, and over 2 TB per month. If you have 1,000 cameras, that's over 2 PB of data.
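The arithmetic behind those figures is straightforward to verify:

```python
# Checking the capacity figures for a 1080p camera at ~2.92 GB/hour.
gb_per_hour = 2.92
gb_per_day = gb_per_hour * 24               # ~70 GB per day
tb_per_month = gb_per_day * 30 / 1000       # ~2.1 TB per month per camera
pb_for_1000_cameras = tb_per_month * 1000 / 1000  # ~2.1 PB/month for 1,000 cameras
```

And that is per month of retained video; multi-year evidence-retention mandates multiply these numbers again.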
For xProtect servers running Windows, the Tiger Bridge Connector can be used to move the video files to either IBM Spectrum Scale or IBM Cloud Object Storage.
Deep Dive into HyperSwap for Active-Active applications and Disaster Recovery
Andrew Greenfield, IBM Global Engineer for Storage, explained the different ways HyperSwap is implemented across the IBM storage portfolio.
For IBM DS8000, HyperSwap is based on Metro Mirror synchronous replication. In the event that the primary DS8000 fails, the host server can automatically re-direct all I/O to the secondary DS8000. This is often referred to as "High Availability" (HA), and in some cases can serve as Disaster Recovery.
For IBM Spectrum Virtualize products, including SAN Volume Controller (SVC), FlashSystem V9000, Storwize V7000 and V5000 products, as well as Spectrum Virtualize sold as software, the implementation is different.
Previously, SVC offered Stretched Clusters, which put one node of a pair in one site and the second node at another site, allowing an Active/Active configuration. Unfortunately, the nodes in FlashSystem V9000 and Storwize are "connected at the hip", effectively bolted together, so putting separate nodes in different locations was not possible. To solve this, IBM developed HyperSwap, which allows one node-pair to replicate across sites to another node-pair in the same Spectrum Virtualize cluster.
However, even though it is called "HyperSwap", it is not implemented in any way similar to the DS8000 method. Instead, Spectrum Virtualize uses the Global Mirror with Change Volumes to replicate data between sites.
IBM Storage and VMware Integration
This session was co-presented by Brian Sherman, IBM Distinguished Engineer, and Steve Solewin, IBM Corporate Solutions Architect.
For nearly two decades, IBM has been a "Technology Alliance Partner" with VMware. To provide consistent integration with all the features and functions of VMware, IBM Spectrum Control Base Edition (SCBE) is provided at no additional charge for IBM DS8000, XIV, FlashSystem and Spectrum Virtualize products.
SCBE is downloadable as an RPM for RedHat Enterprise Linux (RHEL), and can run bare-metal or as a VM.
For those using Hyper-Scale Manager, it will automatically install a special version of SCBE that manages only the A-line products (FlashSystem A9000, FlashSystem A9000R, XIV and Spectrum Accelerate).
Storage admins can define "storage services" that can be assigned to vCenter. This allows VMware admins to allocate storage in self-service mode.
After the meetings were over, IBM had a special event at the Universal City Walk to enjoy some drinks, food, and conversation, and to watch Blue Man Group.
This week, I am presenting at the IBM Systems Technical University in Orlando, Florida, May 22-26, 2017. Here's my recap of the afternoon sessions of Day 2.
IBM Spectrum Protect deep dive into Container Storage Pools
Ron Henkhaus, IBM Certified Consulting IT Specialist, presented the new Spectrum Protect concept of "Container Pools" that can either be "Directory Pools" on SAN or NAS-based disk storage, or "Cloud Pools". Container pools can contain deduplicated and non-dedupe data.
Ron cautioned that directory pools should not be placed on the same file system as your Spectrum Protect database or logs. Also, a best practice for any directory pool is to assign an "overflow" pool of a non-directory type, such as a disk, tape or cloud container pool.
Cloud pools can use OpenStack Swift (including V1 Swift), the Amazon S3 protocol, Amazon Web Services, IBM Bluemix, or IBM Cloud Object Storage. You can pre-define the vaults and buckets in the configuration.
For off-premises Cloud pools, the data is encrypted by default. For other container pools, encryption is optional. Performance to Cloud pools has been improved by using "accelerator storage", basically a disk cache to collect data before sending it over to the Cloud pool. Backups to Cloud pools can reach 8 TB per hour. Restore times vary from 500 to 1500 GB per hour.
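The gap between backup and restore rates matters for recovery planning. A quick calculation using the session's figures, for a hypothetical 10 TB restore:

```python
# Time to restore a dataset at the quoted cloud-pool restore rates
# (500-1500 GB/hour). The 10 TB dataset size is an illustrative example.
def restore_hours(dataset_tb: float, rate_gb_per_hour: float) -> float:
    return dataset_tb * 1000 / rate_gb_per_hour

worst_case = restore_hours(10, 500)   # 20 hours at the slow end
best_case = restore_hours(10, 1500)   # ~6.7 hours at the fast end
```

So a restore can take anywhere from a working day to the better part of a weekend, which is worth factoring into your Recovery Time Objective.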
Container Pools were designed for the new "Deduplication 2.0" feature introduced in version 7. Traditional Dedupe 1.0 to Device Class FILE is still available, but not recommended.
Version 7.1.6 changed the compression algorithm from LZW to LZ4. In all cases, Spectrum Protect performs these actions in this order: deduplication, compression, encryption. Data that is encrypted by the Spectrum Protect client is therefore not deduped.
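That fixed ordering explains the last point: if the client encrypts before the server sees the data, identical plaintext no longer produces identical chunks, so deduplication finds nothing. Here is a toy sketch of the pipeline; the fixed-chunk hashing, zlib (standing in for LZ4), and XOR "cipher" are illustrative stand-ins, not Spectrum Protect internals:

```python
import hashlib
import zlib

def ingest(chunks, store):
    """Sketch of the order: dedupe by content hash, then compress, then encrypt."""
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest in store:
            continue                       # duplicate chunk: store nothing
        compressed = zlib.compress(chunk)  # compression after dedupe
        encrypted = bytes(b ^ 0x5A for b in compressed)  # toy cipher, last step
        store[digest] = encrypted

store = {}
ingest([b"block-A", b"block-B", b"block-A"], store)
# Three chunks ingested, but only two unique chunks stored.
```

Reverse the order (encrypt first) and the two `block-A` chunks would hash differently, so nothing would deduplicate.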
The "Protect Storage Pool" command can replicate a directory pool to either a remote directory pool or Cloud pool. In addition to this remote replication, you can copy a directory pool to tape to offer air-gap protection against ransomware. Such tapes are considered part of the "Copy Container Pool". In the event of directory pool corruption, the data can be repaired from either replication or tape.
IBM Aspera can now be used for replication, using SSL and AES-128 encryption. If your latency is greater than 50 msec, and you have more than 0.5 percent packet loss, Aspera might help. This is available for Linux on x86 platforms running v7.1.6 or higher.
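That rule of thumb is easy to capture as a predicate for your own link measurements (my paraphrase of the guidance, not an official sizing formula):

```python
# Rule of thumb from the session: consider Aspera when round-trip latency
# exceeds 50 ms AND packet loss exceeds 0.5 percent.
def aspera_might_help(latency_ms: float, loss_pct: float) -> bool:
    return latency_ms > 50 and loss_pct > 0.5

clean_lan = aspera_might_help(2, 0.0)    # healthy local link: no benefit
lossy_wan = aspera_might_help(80, 1.2)   # long, lossy WAN link: worth a look
```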
For existing customers, IBM Spectrum Protect allows you to convert your FILE, VTL and TAPE device class pools to directory or Cloud pools.
Introduction to IBM Cloud Object Storage (powered by Cleversafe)
In 2015, IBM acquired Cleversafe, recognized as the #1 Object Storage vendor. Their flagship product was officially renamed to the IBM Cloud Object Storage System, which some abbreviate informally as IBM COS. IBM offers the IBM Cloud Object Storage System in three ways: as software, as pre-built systems, and as a cloud service on IBM Bluemix (formerly known as SoftLayer).
Since then, IBM has been busy integrating IBM COS into the rest of the storage portfolio. I explained how IBM COS can be used for all kinds of static-and-stable data, but is not suited for frequently changed data, such as virtual machines or databases.
Object storage can be accessed via NFS or SMB NAS protocols using a gateway product, like IBM Spectrum Scale, or those from third-party partners like Ctera, Avere, Nasuni or Panzura. It can also be used as an alternative to tape for backup copies, and is already supported by major backup software like IBM Spectrum Protect, Commvault Simpana, and Veritas NetBackup.
While other cloud service providers have offered data storage in the cloud, this new offering also allows hybrid configurations with geographically dispersed erasure coding.
Unlike RAID which protects against the loss of one or two drives, erasure coding can protect against a larger number of concurrent failures. For example, using an Information Dispersal Algorithm (IDA) of "7+5", where seven pieces of data are encoded on twelve independent disks, the system can lose up to five disk drives without losing any data.
Combining this with a Geographically Dispersed Configuration across three or more sites means that you can lose an entire data center, four of the twelve disks, and still have instant full access to all of your data from the eight drives at the other locations. In the graphic, you see two on-premises data centers combined with a third location in IBM SoftLayer.
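The availability arithmetic behind that "7+5" dispersal is simple: data is encoded into 12 slices and any 7 of them suffice to reconstruct it. A small sketch of that threshold logic:

```python
# "7+5" Information Dispersal: 12 slices total, any 7 reconstruct the data.
TOTAL_SLICES = 12
NEEDED = 7

def readable(slices_lost: int) -> bool:
    """Can the data still be reconstructed after losing this many slices?"""
    return TOTAL_SLICES - slices_lost >= NEEDED

max_losses = TOTAL_SLICES - NEEDED     # 5 concurrent failures tolerated
site_plus_drive = readable(4 + 1)      # whole 4-slice site down, plus 1 drive: OK
six_losses = readable(6)               # six losses is one too many
```

This is why losing a whole four-slice site still leaves full access: eight surviving slices exceed the seven-slice threshold, with one failure to spare.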
New Generation of Storage Tiering: Simpler Management, Lower Costs, and Improved Performance
With ever changing amounts of storage, it is hard to find metrics that are consistent year to year. Fortunately, I found I/O density to be the metric to focus my efforts on, armed with real data from Intelligent Information Lifecycle Management (IILM) studies done at various clients. From that, I was able to talk about storage tiering on three fronts:
Storage tiering between Flash and disk. IBM FlashSystem and IBM Easy Tier on DS8000 and Spectrum Virtualize family for hybrid Flash-and-disk configurations.
Storage tiering between disk, tape, and Cloud. HSM and Information Lifecycle Management (ILM) on Spectrum Scale, Elastic Storage Server (ESS), Spectrum Archive and IBM Cloud Object Storage System.
Storage tiering automation across your entire environment. IILM studies can help identify a target mix of Tier 0, Tier 1, Tier 2 and Tier 3 storage. IBM Spectrum Storage Suite and the Virtual Storage Center (VSC) can recommend or perform the movement of LUNs to more appropriate tiers, based on age and I/O density measurements.
It's hard to say what the correct sequence of presentations should be. Some thought it might have been better to schedule my talk on the IBM Cloud Object Storage System before Ron's talk on Cloud container pools, but perhaps hearing Ron first helped drive more interest to my session.
I have been involved with Business Continuity and Disaster Recovery my entire career at IBM System Storage. However, with new workloads like Hadoop analytics and new Hybrid Cloud deployments, I thought it would be good to provide a refresh.
The need for Business Continuity and Disaster Recovery has increased recently due to (a) climate change caused by human activity, (b) ransomware and other cyber attacks, and (c) disgruntled employees.
Back in 1983, a task force of IBM clients at a GUIDE conference developed "Seven Business Continuity Tiers for Disaster Recovery", which I refer to as "BC Tiers". I divided the presentation into three sections:
Backup and Restore: BC tiers 1 through 3 are based on backup and restore methodologies. I explained how to back up Hadoop analytics data, all of the various options for IBM Spectrum Protect software, and how to encrypt the tape data that gets sent off premises.
Rapid Data Recovery: BC tiers 4 and 5 reduce the Recovery Point Objective (RPO) and Recovery Time Objective (RTO) with snapshots, database journal shadowing, and IBM Cloud Object Storage.
Continuous Operations: BC tiers 6 and 7 provide data replication mirroring across locations. I covered 2-site, 3-site and 4-site configurations.
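The progression across these three groupings can be summarized in a short sketch; the RPO/RTO ranges below are my own illustrative approximations, not values from the GUIDE task force definitions:

```python
# Illustrative (not canonical) recovery objectives by BC tier grouping,
# showing how RPO and RTO shrink as you climb the tiers.
bc_tiers = {
    "Backup and Restore (tiers 1-3)":    {"RPO": "up to 24 hours",   "RTO": "days"},
    "Rapid Data Recovery (tiers 4-5)":   {"RPO": "minutes to hours", "RTO": "hours"},
    "Continuous Operations (tiers 6-7)": {"RPO": "seconds to zero",  "RTO": "minutes"},
}

for group, objectives in bc_tiers.items():
    print(f"{group}: RPO {objectives['RPO']}, RTO {objectives['RTO']}")
```

The trade-off, of course, is cost: each step up the tiers buys tighter objectives with more expensive infrastructure.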
IBM Spectrum Virtualize - How it works - Deep dive
Barry Whyte, IBM Master Inventor and ATS for Spectrum Virtualize, covered a variety of internal topics "under the hood" of Spectrum Virtualize. This covers the SAN Volume Controller (SVC), FlashSystem V9000, Storwize V7000 and V5000 products, as well as Spectrum Virtualize sold as software.
In version 7.7, IBM raised the limits. You can now have 10,000 virtual disks per cluster, rather than 2,048 per node-pair. Also, you can now have up to 512 compressed volumes per node-pair. With the new 5U-high 92-drive expansion drawers, Storwize V7000 can now support up to 3,040 drives, and Storwize V5030 can support up to 1,520 drives.
While each Spectrum Virtualize node has redundant components, the architecture is designed to handle the failure of an entire node. The term "I/O Group" was created to refer to the node-pair of Spectrum Virtualize engines and the set of virtual disks it manages. This made sense when virtual disks were dedicated to a single node-pair. Now, virtual disks can be assigned to multiple node-pairs, and node-pairs can be dynamically added or removed as needed for each virtual disk.
However, even if you have a virtual disk assigned to multiple node-pairs, only one node-pair would manage its cache, causing all other node-pairs to coordinate I/O through the cache-owning node-pair. The other node-pairs are called "access I/O groups".
The architecture allows for linear scalability: double the number of nodes, and you double your performance. Some competitors use n-way caching across four or more nodes, and the pros and cons of each approach are a semi-religious argument. Barry feels the 2-way caching implemented by Spectrum Virtualize is the most effective and efficient for performance.
All of the nodes are connected over an IP network, but one is designated as the "config node", and one, often the same node, as the "boss node".
A cluster can have up to three physical quorum disks (either drive or mDisk) and optionally up to five IP-based quorums. The IP-based quorum is just a Java program that runs on any server or Cloud, provided it can respond within 80 msec.
Either an IP-based or a physical quorum can be used for "tie-breaking" in split-brain situations. In the event there is no "active" quorum, the administrator can now serve as the tie-breaker manually. For Storwize clusters, where physical quorum disks are attached to a single node-pair, Barry recommends having at least one IP-based quorum for tie-breaking.
However, only physical quorum can be used for T3 Recovery. T3 Recovery happens after power outages. All of the nodes update the quorum disk with critical information of all of the virtual mappings of blocks to volumes, and this is used when bringing up the nodes again.
To protect against one pool consuming all of the cache, Spectrum Virtualize partitions the cache, preventing any one pool from consuming more than a certain percentage of the total. The percentage depends on the number of pools: a single pool is uncapped at 100 percent, two pools are capped at 66 percent each, three pools at 40 percent, four pools at 30 percent, and with 5 or more pools no single pool may use more than 25 percent of the total cache.
Barry explained how failover works in the event of node failure. There is voting involved, and the majority remains in the cluster. In the case of an even split, called a "split brain" situation, the quorum decides. Orphaned nodes in a node-pair go into write-through mode, since the cache is no longer mirrored.
The I/O forwarding layer has been split between upper and lower roles. The upper layer handles access I/O groups. The lower layer handles asymmetric access to drives, mDisks and arrays.
N-port ID Virtualization (NPIV) drastically improves multi-pathing. Perhaps one of the coolest improvements in a while, NPIV allows "virtual" WWPNs to be assigned to other ports. When an I/O sent to a single port fails, it retries one or more times, then waits 30 seconds, and then invokes multi-pathing to find a completely different path to the data. With NPIV, when a port fails, its WWPN is re-assigned to a different port, so the retries are likely to succeed before having to wait out the full 30 seconds!
Lastly, Barry covered the delicate art of software upgrades. Software is rolled forward one node at a time, and the "cluster state" is maintained throughout.
Different presentations this week are at different technical levels. My session was meant to be an overview of the concepts of Business Continuity, independent of specific operating system platform, using specific IBM products to help illustrate specific examples. Barry's was a deep dive into a single product family.
This week, I am presenting at the IBM Systems Technical University in Orlando, Florida, May 22-26, 2017. Here's my recap of the afternoon sessions of Day 1.
Storage Brand Opening Session - Craig Nelson
Craig Nelson, Brocade manager for the IBM Field Sales Channel, indicated that network equipment is the bridge that brings servers and storage together.
The squeeze: faster servers and Flash storage cause storage networking to become the bottleneck. Fibre Channel will remain the protocol of choice for the next decade.
"Speed is the net currency of Business" -- Marc Benioff, Salesforce CEO.
Craig drew an analogy. We have been focused on making hard disk drives faster, and then Flash changed the game. Likewise, car manufacturers have focused on making gas engines better, and then Tesla Motors introduced an electric car with insane performance. The early models actually had an "Insane Mode".
The new Gen6 models of IBM b-type SAN equipment will support 32Gbps and 128Gbps ports. That's Insane!
Later models of Tesla Motors offer a "Ludicrous Mode". For flash storage, it is NVMe. NVMe can get storage down to 20 microsecond latency. That's Ludicrous!
Craig put in a plug for two Brocade sessions: "BEWARE - The four potholes on your road to success when deploying flash storage" and "Tune up your storage network! Is it healthy enough for flash storage and next-gen server platforms?"
Storage Brand Opening Session - Clod Barrera
Clod Barrera, IBM Distinguished Engineer and Chief Technical Strategist, presented storage industry trends.
IDC predicts data capacity to grow at 60-80 percent CAGR. This would require a 44 percent drop in $/GB per year to maintain a flat budget. Unfortunately, flash media cost is only dropping 25-30 percent per year, and spinning disk only 19 percent per year.
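The arithmetic behind the 44 percent figure is quick to check, taking the 80 percent end of the growth range:

```python
# Flat-budget check: if capacity grows 80% in a year, 1.8x the gigabytes must
# cost the same as before, so the price per GB has to fall by 1 - 1/1.8.
growth = 0.80
required_drop = 1 - 1 / (1 + growth)
print(f"required $/GB drop: {required_drop:.0%}")   # required $/GB drop: 44%

# Actual media price declines cited in the talk fall short of that:
flash_drop, disk_drop = 0.275, 0.19   # ~25-30% flash (midpoint), 19% disk
print(flash_drop < required_drop and disk_drop < required_drop)   # True
```

That gap between the required and actual price declines is exactly why the other technologies below are needed.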
Since storage media will not offset capacity growth, we need other technologies to compensate, including compression, deduplication, defensible disposal, and "cold" storage to tape or optical media.
The smallest persistent storage that IBM has been able to achieve is 12 atoms. Current disk technology is 1200 atoms. Since 1956, IBM and the rest of the IT industry have improved storage 9 orders of magnitude, and now there are only 2 orders of magnitude left.
Clod poked fun at the "Star Wars: Rogue One" movie, indicating that their idea of the future of storage was a huge tape library. See my December 2016 blog post [Has your data gone rogue?]
What does it take to store information forever? Tape will certainly be around. IBM Zurich demonstrated a 220 TB tape cartridge back in 2015 as a proof of technology.
A good example of the need for long-term retention is US films. Of those from the silent era, over 90 percent are lost. Over half of the films made prior to 1950 are lost. The silver nitrate film stock that the reels were made of has deteriorated. Now that more movies are made digitally, can we do better?
Clouds will move from 10GbE to 25GbE. There is no slowdown for FC in data centers. Flash storage and object storage are both growing quickly.
Move over Software-Defined Storage, Converged and Hyperconverged systems: the new up-and-coming thing is "Composable Systems deployed in Pods", adjustable hourly by workload requirements.
To protect against ransomware, use "air gap" protection, keeping backup copies off the same network as production workloads.
New storage models are needed for Cognitive workloads. Clod put in a plug for Joe Dain's presentation "Introducing cognitive index and search for IBM Cloud Object Storage leveraging Watson"
Storage Brand Opening Session - Axel Koester
Axel Koester, IBM Storage Chief Technologist, presented more storage industry directions.
What will the world look like in 10 years? Today, computing is mostly procedural programming, with some statistical big data, and a bit of machine learning. In 10 years, it will be mostly statistical and machine learning, with very little procedural programming. Why? Because it is faster to train computers with Machine Learning than to program them procedurally.
Examples of machine learning are IBM Watson, Google AlphaGo, and Drive.ai. Axel would rather be a passenger in a machine-learned self-driving car than a procedurally-programmed one.
Neural networks can interpret hand-written numbers. Welcome to "unsupervised learning".
A major breakthrough came in 2006 with Deep Learning, a subset of Machine Learning that uses three or more layers of neural networks. For example, face recognition "deep learning" algorithms can also be used to detect defects through visual inspection of circuit boards.
How does this impact storage?
Procedural -- archive the test cases used
Statistical -- store all data for parallel processing
Machine Learning -- train on sample data, then archive and re-train yearly. Five minutes of driving generates 4 TB of sensor data for self-driving cars
For neural processing, x86 CPUs are suitable for prototyping. GPU co-processors are better: efficient, but less common. IBM has developed the "TrueNorth" chip, which does nothing but neural processing: 4096 cores consuming only 70 mW of energy. It has no clock; instead, it models dendrites, synapses, axons and neurons.
Instead of "Build or Buy?" the new question is "Train or Buy?" Train with confidential data, or buy ready-to-run 100% pre-trained cognitive systems as a service.
AI Frameworks are available on Docker containers with Kubernetes with Persistent storage (Ubiquity) such as Spectrum Scale. These frameworks include DL4J, Chainer, Caffe, torch, theano, tensorflow.
NVMe -- since NVM is local to a single server, how do you provide HA and DR? There are three options:
DB asynchronous shadowing
DB mirroring over NVMeOF
Cluster file system replication of persistent data, such as IBM Spectrum Scale
As an example, a car manufacturer runs 50 SAP HANA in-memory instances on 4 Spectrum Scale nodes. IBM achieved 50,000 new files per second. Most NAS systems do much less.
Faster media on smaller electronics: Holmium atoms on a Magnesium Oxide substrate over a silver base, resulting in "single atom storage." An STM needle tip magnetizes the atom, and the state is measured with Tunnel Magneto-resistance. Unfortunately, reading the data causes it to lose its value, so it is not as persistent as the 12-atom method described by Clod earlier.
As the title suggests, I explained why there is so much interest in Software-Defined Storage in the IT industry, what software-defined storage is, and how to deploy these solutions in your existing infrastructure without the full rip-and-replace. I covered which IBM products are available as software, pre-built systems and/or Cloud services.
This week, I am presenting at the IBM Systems Technical University in Orlando, Florida, May 22-26, 2017. Day 1 included keynote sessions. Here is my recap for the morning.
General Session "The Quantum Age"
Amy Hirst, IBM Director of Systems Training, served as emcee for the General Session. The theme this week is "Power of Knowledge, Power of Technology, Power of You. You to the IBM'th power".
Chris Schnabel, IBM Q Offering Manager, explained what "IBM Q" is.
Chris feels "our intuition of what we can compute is wrong". Classical (non-Quantum) computing has evolved over the past 100 years.
Consider molecular geometry. The best supercomputer can only handle the smallest molecules, those with 40 to 50 electrons, and even then it is unable to calculate bond lengths to within 10 percent accuracy. Quantum computing can.
Another area is what computer scientists call the "Traveling Salesman Problem". If you had a list of 57 cities, what would be the optimal path to minimize the distance traveled to visit all of them? An exhaustive search would require on the order of 10 to the 76th power operations. Dynamic Programming techniques provide some shortcuts, reducing this down to 10 to the 20th power, but even that is impossible on most computers.
Chris mentioned that there are easy problems solvable in polynomial time, and hard problems that are exponential, getting worse and worse the bigger the input set. There will always be hard problems.
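Those orders of magnitude can be checked with a few lines of arithmetic, assuming roughly n! operations for brute force and n² × 2ⁿ for the Held-Karp dynamic-programming approach:

```python
import math

# Rough operation counts for a 57-city tour, matching the orders of magnitude
# quoted in the talk. Assumptions: brute force ~ n! tour orderings, and the
# Held-Karp dynamic-programming algorithm ~ n^2 * 2^n operations.
n = 57
brute_force = math.factorial(n)   # every ordering of the cities
held_karp = n**2 * 2**n           # dynamic-programming shortcut

print(f"exhaustive search: ~10^{int(math.log10(brute_force))}")   # ~10^76
print(f"dynamic programming: ~10^{int(math.log10(held_karp))}")   # ~10^20
```

Even after a 56-orders-of-magnitude improvement, 10^20 operations remains out of reach for ordinary machines, which is the point Chris was making.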
"Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy."
-- Richard Feynman
Nature encodes information, but not in ones and zeros. Quantum computers are measured on the number of Qubits, their error rate, etc. The three factors that IBM focuses on are Coherence, Controllability and Connectivity.
Chris explained how Superposition and Entanglement are used in Quantum Computers. I won't bore you with the details here, but rather save this for a future post.
Today: 5 to 16 Qubits (can be simulated with today's classical computers; 5 Qubits is roughly the power of a typical laptop)
Near future: 50-100 Qubits (too big to simulate on supercomputers), with answers that are approximate or correct only 2/3 of the time.
Future: millions of Qubits, fault-tolerant to provide exact, precise answers consistently.
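One way to see why roughly 50 qubits is the cutoff for classical simulation: a brute-force state-vector simulator must hold 2ⁿ complex amplitudes. Assuming 16 bytes per amplitude (one common representation, not the only simulator design):

```python
# Memory needed for a brute-force state-vector simulation of n qubits:
# 2^n complex amplitudes at 16 bytes each (8-byte real + 8-byte imaginary).
def statevector_bytes(qubits):
    return 2**qubits * 16

print(f"{statevector_bytes(5):,} bytes")           # 512 bytes -- trivial
print(f"{statevector_bytes(30) / 2**30:.0f} GiB")  # 16 GiB -- a big laptop
print(f"{statevector_bytes(50) / 2**50:.0f} PiB")  # 16 PiB -- beyond any machine
```

The memory requirement doubles with every added qubit, which is why the 50-100 qubit range leaves supercomputers behind.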
Quantum Computing opens up a new range of problems, what Chris calls "Quantum Easy" problems: problems that might take years to solve on classical supercomputers could be solved in seconds on a Quantum computer.
Chris showed a picture of [Colossus], the first digital electronic computer used in the 1940s. Quantum computing today is like 1940's of classical computing.
IBM is now working on Hybrid Quantum-Classical algorithms, for example:
Quantum Chemistry - can be used in material design, healthcare pharmaceuticals
Optimization - logistics/shipping, risk analytics
There are different ways to build a quantum computer. IBM chose a single-junction transmon design, using Josephson junctions. While the chips are small, the refrigerators they are contained in are huge, and have to keep the chips at very cold 15 milliKelvin temperature (minus 459 Fahrenheit)!
To get people excited about Quantum computing, IBM created the "IBM Q Experience" [ibm.com/ibmq] that allows the public to run algorithms on a basic 5 Qubit system using a simple drag-and-drop interface to put different transformational gates in sequence.
The IBM Research team was shocked to see 17 publications in prestigious journals make practical use of this 5-qubit system! Since then, IBM has also offered a Software Developers Kit (SDK) called QISkit (pronounced Cheese-kit) as a text-based alternative to the drag-and-drop interface.
Amy Hirst came back on stage to remind people to use Twitter hashtag #ibmtechu to follow the event. There are two more events like this planned for the end of the year. A Power/Storage conference in New Orleans, October 16-20, and another event focused on z Systems mainframe, November 13-17.
Pendulum Swings Back -- Understanding Converged and Hyperconverged Systems
This presentation has an interesting back-story. At a client briefing, I was asked to explain the difference between "Converged" and "Hyperconverged" Systems, which I did with the analogy of a pendulum. I used the whiteboard, and then later made it into a single chart.
At the far left of the pendulum, I start with mainframe systems of the early 1950s that had internal storage. As the pendulum swings to the middle, I discuss the added benefits of external storage, from RAID protection and Cache memory to centralized management and backup.
To the far right of the pendulum, it swings over to networked storage, from NAS to SAN attached devices for flash, disk and tape. This offers excellent advantages, including greater host connectivity, and greater distances supported to help with things like disaster recovery.
Here is where the pendulum swings back. IBM introduced the AS/400 a long while ago, and more recently IBM PureSystems that combined servers, storage and switches into a single rack configuration. Other vendors had similar offerings, such as VCE Vblock, Flexpod from NetApp and Cisco, and Oracle Exadata.
Lately, the pendulum has swung fully back to internal storage, with storage-rich commodity servers running specialized software. There are two kinds:
Pre-built systems like Nutanix, SimpliVity or EVO:RAIL, which are x86-based server systems pre-installed with software and internal flash and disk storage.
Software that can be deployed on your own choice of hardware, such as IBM Spectrum Accelerate, IBM Spectrum Scale FPO, or VMware VSAN.
So, over time, my single slide has evolved and fleshed out into a full-blown, hour-long presentation!
Cloud storage comes in four flavors: persistent, ephemeral, hosted, and reference. The first two I refer to as "Storage for the Computer Cloud" and the latter two I refer to as "Storage as the Storage Cloud".
I also explained the differences between block, file and object access, and why different Cloud storage types use different access methods.
Finally, I covered some of our new public cloud storage offerings, using OpenStack Swift and Amazon S3 protocols to access objects off premises, including the new Cold Vault and Flex pricing on IBM Cloud Object Storage System in IBM Bluemix Cloud.
(FCC Disclosure: I work for IBM. I have no financial interest in SUSE, Scality, or any other storage vendor mentioned in this post. This blog post can be considered a "paid celebrity endorsement" for IBM Storwize, IBM Cloud Object Storage, and IBM Spectrum Storage software mentioned below.)
The study takes a realistic request for 250 TB of storage, at 25 percent compound annual growth rate (CAGR), to store infrequently accessed data in an online archive, and then looks at the Total Cost of Ownership (TCO) over five year period.
The study compares five different Software-Defined Solutions and three pre-built systems. The Software-defined solutions come as software-only, requiring that you purchase the hardware separately and build it yourself. The three pre-built systems were chosen from the top three storage vendors in the marketplace: Dell EMC, IBM and NetApp.
The cost of support is factored in, as it should be. To keep things equal, no data reduction like data deduplication or compression were used.
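For reference, the study's growth assumption compounds like this over the five-year TCO window:

```python
# Capacity implied by the study's scenario: 250 TB growing at 25% CAGR.
start_tb, cagr, years = 250, 0.25, 5
capacity = [round(start_tb * (1 + cagr) ** y) for y in range(years + 1)]
print(capacity)   # [250, 312, 391, 488, 610, 763]
```

In other words, any solution in the comparison has to scale to roughly three times the initial request by year five, which is why support and expansion costs matter as much as the day-one price.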
In an odd approach, the study mixes block, file and object based approaches all in the same study.
You can read the full 14-page study (linked above). I have organized the results into a single table, ranked from best to worst, color-coded with the best deals in green ($100K to $200K), moderate solutions in yellow ($200K to $300K), and the most expensive in red (over $300K). I put the software-only options on the left and pre-built systems on the right.
SUSE Enterprise Storage 4
IBM Storwize V5010
DataCore SAN Symphony
Red Hat Ceph Storage
Dell EMC Unity 300
I am often asked, "Isn't the software-only, build-it-yourself approach, always the lowest cost option?" Now, I can answer, "Sometimes yes, sometimes no." Fortunately, IBM offers Software-Defined Storage in a variety of packaging options including software-only, pre-built systems, and in the Cloud as a service.
IBM Storwize V5010 is based on IBM Spectrum Virtualize software, which you can deploy as software-only on your own x86 servers. This was not mentioned in the study, and perhaps it is my job to remind people that this option is also available for those who want to build their own storage.
For that matter, IBM Cloud Object Storage System -- available as software-only, pre-built systems, and in the Cloud -- might also be a cost-effective alternative.
Next week I will be in Orlando, Florida for the IBM Systems Technical University. If you are attending, stop by one of my presentations, or look for me at the Solution Center at one of the IBM peds, or attend the "Meet the Experts for IBM Storage" on Thursday!
I have been blogging for more than 10 years now, so I am no stranger to commenting on competitive comparisons. In some cases, I am setting the record straight, and other times, poking fun at competitor results, claims or conclusions. This comparison from Brian Carmody was too juicy to ignore.
(FCC Disclosure: I work for IBM. I have no financial interest in Infinidat, Dell EMC, nor Pure Storage, mentioned in this post. I do have friends and former co-workers who now work for Infinidat. This blog post can be considered a "paid celebrity endorsement" for IBM FlashSystem products.)
Here is an excerpt, I have added (Infinidat) wherever Brian says "we" just so there is no confusion:
"... So last week we (Infinidat) finally got around to running the same profiles against an INFINIDAT F6230 in our Waltham Solution Center, configured with 1.1TB of DDR-4 DRAM, 200TB TLC NAND, and 480 3TB Nearline HDDs.
In summary, we (Infinidat) wrecked the Pure and EMC systems. Here are the results side by side with EMC's data:
(versus Pure Storage and the EMC Unity 600F)
16K IOPS (80% Read): 9x Pure, 5x Unity
256K bandwidth (MBps): 10.6x Pure, 3x Unity
4.5x Pure, 1.6x Unity
Steady-state latency (ms): 1/7 Pure, 1/2 Unity
By the way, we (Infinidat) took the liberty of running the test with a 200TB data set instead of Pure and EMC's 50TB because modern workloads require performance at scale, and we ran it with in-line compression enabled because our compression algorithm doesn't hurt performance.
This was an interesting test to run, and we (Infinidat) hope it helps the storage industry move away from media type wars and benchmarks (you will lose every time on performance if INFINIDAT is in the mix) ..."
Notice anything wrong here? Anything missing?
The Tortoise beat "Hare 1" and "Hare 2", but did not invite the Cheetah to the race?
Brian was smart enough not to compare their product to anything from IBM. IBM has a wide variety of All-Flash Arrays, including the DS8880F models, the Storwize V7000F and V5030F models, and Elastic Storage Server models. However, for this workload, IBM would probably recommend the FlashSystem V9000, A9000 or A9000R.
Any All-Flash Array with a steady-state latency of 2 milliseconds or greater is embarrassing, but then the Infinibox is not really an All-Flash Array.
The architecture of their Infinibox appears much like the original XIV. It has a mix of DRAM memory and SSD cache, combined with spinning drives. It offers only compression, not data deduplication. Unlike the IBM XIV powered by six to 15 servers, the Infinibox appears under-powered with just three servers.
The Infinibox uses software-based in-line compression, which must put a huge tax on the few CPUs they have in those three servers. Infinidat chose not to compress the data in their cache, probably to reduce the additional overhead on their over-taxed CPUs.
The IBM FlashSystem V9000 has an innovative design, based on IBM Spectrum Virtualize, the mature software that you also find in the IBM SAN Volume Controller and Storwize family of products.
The FlashSystem V9000 offers hardware-accelerated compression. IBM takes advantage of the integrated Intel QuickAssist co-processor which runs the compression algorithm 20 times faster than standard Intel Broadwell CPU.
IBM compresses its cache, using a two-tier approach. The "upper cache" receives the data uncompressed, so that it can then tell the application to continue, for fastest turn-around time. Then the data is compressed, and stored in the "lower cache", optimizing the value and benefits of DRAM memory. Many databases get up to 80 percent savings, resulting in a 5-to-1 benefit in DRAM cache memory.
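The 5-to-1 figure follows directly from the savings percentage:

```python
# How a compression-savings percentage translates into effective cache capacity:
# 80% savings means each block occupies only 20% of its original size, so the
# same DRAM holds 1 / (1 - 0.80) = 5x as much data.
def cache_multiplier(savings):
    return 1 / (1 - savings)

print(round(cache_multiplier(0.80), 2))   # 5.0 -> the "5-to-1" DRAM benefit
print(round(cache_multiplier(0.50), 2))   # 2.0 -> even 50% savings doubles cache
```

The same arithmetic explains why compressing the lower cache is worth the extra step: the effective cache grows by the full compression ratio.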
The IBM FlashSystem A9000 and A9000R also have an innovative design, based on IBM Spectrum Accelerate, the code originally developed for the IBM XIV storage system.
(Fun fact: Infinidat's founder, [Moshe Yanai], was formerly the founder and designer of XIV, and it appears that Infinidat is just a re-design of old XIV technology architecture, re-packaged with a few differences. Since Moshe left, IBM has drastically enhanced the IBM XIV.)
Like the IBM Spectrum Virtualize family, the IBM FlashSystem A9000 and A9000R have hardware-accelerated in-line compression, and two-tier approach to cache. The "upper cache" receives the data uncompressed, then the data is compressed and deduplicated, and stored in the "lower cache", optimizing the value and benefits of DRAM memory.
The IBM FlashSystem A9000 and A9000R also offer in-line data deduplication. Modern workloads are virtualized, and Virtual Machine (VM) and Virtual Desktop Infrastructure (VDI) get significant benefits from data deduplication. Infinidat does not play here. For the FlashSystem A9000, most of the metadata related to data deduplication is in cache, minimizing the overhead.
IBM FlashSystem A9000 and A9000R have full performance that blows these published Infinibox results away WITH compression and deduplication turned on.
Brian ran a workload that used the DRAM and SSD cache exclusively, ignoring the reality that any REAL WORLD workload would have to tap into those much slower spinning drives. This is not really a side-by-side benchmark: he is comparing his live run on the Infinibox to published numbers from a previous comparison run on a completely different set of data.
This raises the question, why pay for all those spinning drives at all, if you plan to only use the DRAM and Flash storage for your workloads?
A week later, Brian followed up with another post [The INFINIDAT Challenge], acknowledging his comparison was bogus. Here's an excerpt. Again, I have added (Infinidat) wherever Brian is referring to his employer just so there is no confusion:
"... It's not likely that a room full of storage engineers will ever agree on parameters for a synthetic benchmark since storage evaluations are competitive and control of test parameters will invariably predetermine the 'winner'. However, I hope we can all agree that synthetic benchmarks are a waste of time, and that real world performance is what matters in the data center.
So, what can we (Infinidat) do about it?
We (Infinidat) cordially invite every enterprise storage customer who wants lower latency and lower storage cost to visit [FasterThanAllFlash.com] and sign up for The INFINIDAT Challenge.
We (Infinidat) will Give you an Infinibox system to test
We (Infinidat) will Help you clone and test your environment with Infinibox
We (Infinidat) Guarantee your applications will run faster on Infinibox than your All-Flash Array.
If we (Infinidat) fail, we'll take the system back and Donate $10,000 to the charity of your choice.
If our technology delivers, you can keep the system, and we'll (Infinidat) Donate $10,000 in your name to the charity of our choice (The American Cancer Society).
Thanks again to all who participated in the dialog over the past week. I know the post generated some controversy. Traditional storage companies are fighting for their lives trying to keep enterprise storage expensive; indeed their business models are predicated upon maintaining price levels from a bygone era...."
As a consolidation play offering a full range of data services, I do not see this Infinibox working out. Talking to clients who have the Infinibox, performance deteriorates in REAL WORLD workloads as you add more data to the unit.
The Infinibox seems fine for workloads that do not demand high performance, so I was surprised Brian compared it to All-Flash arrays. The Infinibox is out of its league!
(To be fair, Pure Storage and EMC XtremeIO aren't really in the same league as IBM FlashSystem, either, given that both of those products are based on commodity SSD. IBM FlashSystem models are consistently 4 to 10 times lower latency than these Commodity-SSD based competitors.)
The Infinibox also lacks features many people expect in an Enterprise-class storage array, like Call-Home capability to identify problems quickly, and synchronous remote mirroring for disaster recovery. It is common for startups like Infinidat to deliver a [Minimum Viable Product] as their first offering.
To paraphrase Brian himself, your applications will lose every time on performance if INFINIDAT is in your datacenter.
The new TS1155 enterprise tape drive can write up to 15 TB uncompressed data to existing JD/JZ/JL media.
It can read and write existing 10TB-formatted JD media and 7TB-formatted JC media written by earlier TS1150 drives. It also offers read-only support for older 4TB-formatted JC media from TS1140 drives.
These are uncompressed capacities, and some clients achieve 2x or 3x compression on top of these capacities. This depends heavily on the type of data. Your mileage may vary, as they say.
Most of the rest of the features of the TS1150 drives carry forward. The 360 MB/sec performance is the same, encryption via IBM Security Key Lifecycle Manager (SKLM) is unchanged, and support for IBM Spectrum Archive via the Linear Tape File System (LTFS) format continues.
An interesting development is that the TS1155, in addition to standard 8Gb Fibre Channel attach, is the first IBM enterprise drive to also offer 10Gb Ethernet support. IBM will offer both RDMA over Converged Ethernet (RoCE) as well as iSCSI support.
The newest member of the IBM Spectrum Storage software family, IBM Spectrum Copy Data Management automates the creation of snapshot images (FlashCopy for those familiar with IBM terminology) on IBM, NetApp and EMC storage arrays. These copies can be made for various uses, such as DevOps, Dev/Test, Backup/Restore, and Disaster Recovery.
At some data centers, these copies can consume as much as 60 percent of total storage space, because each developer and tester often generates their own copies. Instead, having copies automated, registered, cataloged, and made available to developers and testers eliminates rogue copies.
This release adds support for additional databases, including Microsoft SQL Server on physical machines, SAP HANA in-memory databases, and Epic/Caché from InterSystems used in Electronic Health Records (EHR) management systems.
IBM also adds support for long-distance vMotion of VMware virtual machine images. The target for this movement is IBM Spectrum Accelerate running on the IBM Bluemix Cloud, supporting Hybrid Cloud configurations.
Over the past ten years, my co-workers have asked to write a "guest post" on this blog. This time, Moshe Weiss, IBM Senior Manager, Development and Design, has offered the following post, not in his own voice, but in the voice of his "baby", the Hyper-Scale Manager software.
You might think this is a strange approach, but today we have robots that can dance, and cars that can drive themselves! If software could talk, this is what IBM Hyper-Scale Manager would say:
"I was born a year ago.
It wasn't an easy birth… there were many complications. In fact, so many, that I was almost prematurely born!
Most of my development, in preparation for labor and delivery, was done within the last 6 months of the overall 18 months. I was shaped and designed, and sometimes re-shaped, three times. Lots of assumptions had to be made in hopes of easing delivery and bringing me to full term.
During my first year of maturity, I focused on learning how customers used me; what frustrated them the most, and what they loved or 'almost' loved, while still needing refinement and redesign.
The number of customers adopting me grew higher and higher, as did the number of complaints and bugs I had to deal with. My users were frustrated because I wasn't yet a complete solution and still had some missing features.
I was renewed four times! Each renewal improved me, sharpened my senses, made me faster, and added new capabilities that helped make me more approachable, intuitive and delightful.
Choosing how to renew, and what to add to each renewal, is not an easy task. Basically, it was about weighing user experience against gaps deferred from my birth, against differentiators to make me unique and sell more, against features on my roadmap, and against investing huge effort in my quality.
Each renewal was a complex process with lots of features and behaviors to add, while trying to make my customers' lives a bit easier, since features that were important to them were sometimes considered low priority.
But, there were also good times during my first year:
Huge customer adoption rate
100 new customers in two months!
Growing was a great thing and my parents were, and still are, so proud! But, like most things, it came with a price: a lot of sustaining issues from the field, requests for changes, and bad feedback that I am hard to use and missing core elements.
Being a new baby in the Storage world is not a simple thing, as expectations are huge (mainly because of my successful elder brother, the XIV GUI) and I must quickly keep up with all of them.
Still, I am getting tons of good feedback for being revolutionary and unique. People are emotionally engaged with me, and being a baby, I love to see emotions!
Huge marketing efforts to put me center stage
However, because of some initial problems at the start -- I am a new product, remember? -- I was thrown out of multiple customer sites, and some sales/marketing guys just stopped believing in me. That made me sad.
My parents did a great job, though, in talking, explaining and demonstrating what I can do, together with what I can’t do now, but will do soon. This really helped in some areas, and customers began to see what my parents saw in me for so many years.
I’m really enthusiastic to hear what people will think of me when I’m two years old!
As part of the renewal I had four times during my first year, design elements were reconsidered, redesigned and rewritten to find the best solutions ever. No product has come even close to what I suggest to the world… I am so proud of myself!
Additionally, my parents wrote approximately 20 patents on my User Interface (UI) elements and User Experience (UX) concepts, which makes me extremely unique.
Prioritizing what goes in and what doesn't, especially while fewer and fewer babysitters handled me during the year, was a real challenge. Read my parents' post [How to drive forward an exhausted team?] for more details.
But my parents did it! They succeeded in adding cool features like:
Filter analytics and free text, making the filter a great experience that everyone is using.
Great UX improvements like redesigning the tabs, adding right click menus, and adding more on-boarding enablers
Improving the dashboard.
Improving my core business, capacity management (four different times!), and still working on it.
Adding features that were initially deferred in my birth. Deferring features back then was the way to make my birth go smoother. Now, these missing features annoy people.
Improving quality dramatically, adding automation to the way people test me.
Adding differentiators, like the health widget, with more than 20 best practices that provide helpful tips to the customer when there’s a need to change something in their environment, to avoid future issues.
Continuing to bring added value to the 'A-family'. I monitor FlashSystem A9000/R, XIV and Spectrum Accelerate, both on and off premises. This added value gives the family the most powerful management solution and experience."
If you are planning to attend the upcoming IBM Systems Technical University, May 22-26 in Orlando, Florida, there will be a variety of hands-on labs. I recommend participating in the hands-on session to see and experience the next release of IBM Hyper-Scale Manager.
This week, I was part of an all-day event called "Healthcare and Research Trends & Directions in a Cognitive World" at the IBM Executive Briefing Center (EBC) in Rochester, MN. I was one of many presenters covering Information Technology to improve healthcare outcomes. Todd Stacy, IBM Director Server Sales for US Public Market, served as our emcee.
This was a great day. Special thanks to Kathy Lehr, Trish Froeschle, and Scott Gass for organizing this event! We had clients from a variety of Health Care and Life Science industry backgrounds. I certainly learned a few things myself.
Dr. Michael Weiner, IBM Chief Medical Information Officer, Watson Health, covered some of the real challenges not just facing the United States, but also other countries. On average, healthcare in USA [costs over $10,000 USD per American citizen]! Compare that to only $3,700 USD for the folks in the United Kingdom! In fact, nearly all industrial nations spend between $2,000 and $5,000 per person. Where does all the U.S. money go?
A big challenge is our ever-aging population. Every day, 10,000 [Baby Boomers] reach their 65th birthday, with fewer people in the 25-44 age group to work as nurses to take care of them. About 15 percent of the US population is elderly (over age 65), and this is expected to grow to 20 percent by 2040. The situation is even worse in Japan, where 25 percent of the population today is elderly, and this is expected to reach 40 percent by 2060.
New Care Models
In some countries, like Australia and Japan, postal workers who once spent their time only delivering mail can now stop to check in on elderly people. As people send less mail, using social media or email instead, this keeps postal workers employed in a way that provides value to society.
The USA enjoys some of the lowest food costs, but suffers from an epidemic of obesity, with over 34 percent of Americans obese. When New York City eliminated trans fats, heart attacks dropped considerably.
In 2009, the Health Information Technology for Economic and Clinical Health [HITECH] Act required the digitization of medical information, known as "Meaningful Use", which has greatly influenced healthcare facilities. This was implemented by a combination of incentives and penalties. Now, more than 92 percent of hospitals in the USA have digitized medical information! The rest are still using paper and X-ray film images. Some places, such as Assisted Living Homes, were initially exempted, so there is still more work to be done.
An advantage of computer-based solutions like Artificial Intelligence is that they can help eliminate bias. When a woman walks into an Emergency Room complaining about chest pains, few health staff would consider this a sign of a heart attack. When a man does the same, health staff consider a heart attack as the first diagnosis, at the risk of missing other possibilities.
Every year, over a million articles related to healthcare research are published. Who can read all this in a timely manner? IBM Watson! After [winning in Jeopardy], IBM Watson was "sent to medical school" to learn how to assist doctors in diagnosing patients.
Transforming Health Care Data Management with IBM Spectrum Storage
Greg Tevis, IBM Software Defined Storage Architect, and Raj Tandon, IBM Senior Strategist, co-presented this introduction to the IBM Spectrum Storage family of products. They covered examples with IBM Spectrum Virtualize, IBM Spectrum Control, IBM Spectrum Protect, IBM Spectrum Scale, IBM Cloud Object Storage, and IBM Spectrum Copy Data Management. The latter has direct support for Epic and Caché databases.
Cognitive Imaging Solutions for Healthcare Providers
Jason Crites, IBM Healthcare and Life Sciences Data Solutions Leader, and Wayland Vacek, Enterprise Sales Manager for Merge, presented IBM Watson Imaging Clinical Review, from IBM's acquisition of the Merge company. The solution is based on IBM Spectrum Scale as the back-end storage repository.
Merge has been around for more than 20 years, with clinical workflow offerings in Cardiology, Radiology, Orthopedics and Eye care. Often, IBM Watson is able to identify things in medical images that escape the review of radiologists or other medical specialists.
At the HIMSS conference earlier this year, human radiologists were shown a collection of images used to train IBM Watson. The human radiologists identified only 20 percent of the images correctly, while IBM Watson got all of them right, every time. In many cases, human radiologists have only a few seconds to look at an X-ray image. Computers like IBM Watson are now fast enough to compete directly with human radiologists in that same number of seconds.
Building a Foundation for the Cognitive Era in Healthcare and Life Sciences
Dr. Jane Yu, IBM Systems Architect, Healthcare & Life Sciences, and Dr. Frank Lee, IBM Global Sales Leader, IBM Software Defined Infrastructure & Life Sciences, co-presented this topic. They presented five challenges:
Growing data volumes are making it more difficult to manage, process and store this data.
Scientists find themselves spending more than 80 percent of their time manually integrating data from silos, and less than 20 percent of their time doing actual research and deriving insights from their analyses.
Compute- and data-intensive workflows may take days to complete on existing server and storage systems.
IT organizations must keep up with rapidly evolving applications, development frameworks, and databases preferred for Healthcare and Life Sciences (HCLS) applications. This includes SAS, Matlab, Hadoop, Spark, and NoSQL databases, as well as Deep Learning and Machine Learning workloads.
Scientific integrity and government mandates increasingly require collaboration across organizational boundaries.
In one example, Sidra Medical and Research Center plans to map the genomes of all 250,000 citizens in the Middle Eastern country of Qatar. If mapping each Qatari citizen's genome generates 200 GB of data, this project will result in 50 Petabytes (PB) of data!
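The arithmetic behind that 50 PB figure is easy to verify, using decimal storage units:

```python
citizens = 250_000      # Qatari citizens whose genomes would be mapped
gb_per_genome = 200     # data generated per citizen, in GB (per the example)

total_gb = citizens * gb_per_genome
total_pb = total_gb / 1_000_000   # 1 PB = 1,000,000 GB in decimal units

print(f"{total_pb:.0f} PB")       # 50 PB
```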
Combining IBM Spectrum Compute products with IBM Spectrum Scale storage can help address these challenges.
Modernize & Transform Healthcare with IBM Storage Solutions
Finally, I presented a 90-minute breakout session that covered three solution areas:
Flash storage to speed up medical records and research. Those who have already implemented Electronic Health Records (EHR) for "Meaningful Use" compliance recognize the value this provides to improving healthcare. Adding All-Flash Arrays such as IBM FlashSystem, Storwize V7000F or DS8000F can drastically improve application performance.
Spectrum Scale and IBM Cloud Object Storage for Vendor Neutral Archive. It seems silly that each PACS vendor has its own little island of storage. A better approach is to send all PACS data from various vendors into a "Vendor-Neutral" storage repository. Both IBM Spectrum Scale and IBM Cloud Object Storage System, either linked together or used separately, can be part of a VNA solution.
VersaStack to simplify deployments. VersaStack is a Converged System that combines best-of-breed Cisco servers and switches with best-of-breed IBM storage, pre-cabled, pre-configured, and pre-loaded with all the necessary software to manage the environment as a single entity. This can reduce the time it takes to deploy new medical applications from weeks to just hours.
Next month, I will be presenting at the IBM Systems Technical University in Orlando, Florida, May 22-26, 2017. There will not be an "IBM Edge" conference this year, so this is your best opportunity to hear the latest information on all of the IBM server and storage products at one conference.
I will be there! Here are the topics I will be presenting:
The pendulum swings back -- Understanding Converged and Hyperconverged environments
IBM cloud storage options
Software Defined Storage -- Why? What? How?
Business continuity -- The seven tiers of business continuity and disaster recovery
Introduction to object storage and its applications - Cleversafe
New generation of storage tiering: Less management, lower investment and increased performance
IBM Spectrum Scale for file and object storage
This conference is not all lectures, which some refer to as "Death by Powerpoint".
There will also be a variety of hands-on labs. I recommend participating in the hands-on session to feel and witness the next release of IBM Hyper-Scale Manager, which is the management application for what IBM calls its A-line storage family -- FlashSystem A9000/R, XIV Storage System, and Spectrum Accelerate software.
Hyper-Scale Manager, the most advanced GUI on the market today, may help cut your management total cost of ownership (TCO) in half!
You can [Enroll Today!] There is an "early-bird" special to save hundreds of dollars if you enroll by April 16!
Well, it's Tuesday again, and you know what that means? IBM Announcements!
Last week, at the InterConnect conference, IBM's premiere Cloud and Mobile event, there were announcements regarding IBM Cloud Object Storage.
IBM Cloud Object Storage Cold Vault
It seems that all the major Cloud Service Providers (CSPs) offer storage for warm, cool, and cold data.
IBM Cloud Object Storage Cold Vault service gives clients access to cold storage data on the IBM Cloud and is designed to lead the category for cold data among major competitors. Cold Vault joins the existing Standard and Vault tiers.
Cold Vault -- for rarely accessed mixed workloads and applications: archiving, long-term data retention, and historical data compliance.
Vault -- for cooler workloads accessed once a month or less: archiving, short-term data retention, digital asset preservation, tape replacement, and disaster recovery.
Standard -- for warm data and active workloads accessed multiple times per month: DevOps, social, mobile, collaboration, and analytics.
These tiers are available in both "Regional" and "Cross Regional" deployment models. Customers can choose between Regional Service, which spreads their data across multiple data centers in a given region, and Cross Regional Service, which spans at least three geographically dispersed regions. For example, Regional may have data in three data centers all located in Texas, but Cross-Regional might have data in California, Texas and New York.
IBM Cloud Object Storage Flex
In the past, clients used "Information Lifecycle Management" (ILM) policies to place data where it was initially needed, then move it to less expensive tiers of storage as it aged and was accessed less frequently. This was done between Flash, Disk and Tape storage on premises, but the concept can also be applied to off-premises Cloud storage.
Flex is a new cloud storage service offering simplified pricing for clients whose data usage patterns are difficult to predict. Flex enables clients to benefit from the cost savings of cold storage for rarely accessed data, while maintaining high accessibility to all data.
Data that is accessed several times per month (warm) will be charged at Standard rates, data that is accessed once a month or so (cool) will be charged Vault rates, and data that is rarely accessed (cold) will be charged at Cold Vault rates.
In effect, this gives you ILM cost savings without ever having to move your data between Standard, Vault or Cold Vault tiers.
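As a rough sketch of how this kind of blended billing works, consider a month's data split by access pattern, with each slice billed at the matching tier rate. The per-GB rates below are hypothetical, invented purely for illustration; they are not IBM's actual prices:

```python
# Hypothetical per-GB monthly rates -- illustrative only, not actual pricing
rates = {"standard": 0.030, "vault": 0.012, "cold_vault": 0.006}

# One month's data profile (GB), split by how often each slice was accessed;
# under Flex-style billing each slice is charged at the rate of the tier its
# access pattern matches, without the data ever being moved
profile = {"standard": 2_000, "vault": 5_000, "cold_vault": 43_000}

blended = sum(profile[tier] * rates[tier] for tier in profile)
flat_standard = sum(profile.values()) * rates["standard"]

print(f"Blended charge:      ${blended:,.2f}/month")
print(f"All-Standard charge: ${flat_standard:,.2f}/month")
```

With these made-up numbers, the blended charge is a fraction of what billing everything at the warm-tier rate would cost, which is the ILM-style saving without any data movement.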
IBM Cloud Object Storage partnership with NetApp AltaVault
Many backup software products have been enhanced to write directly to IBM Cloud Object Storage, including IBM Spectrum Protect (formerly known as IBM Tivoli Storage Manager or TSM for short), as well as Commvault Simpana and Veritas NetBackup.
IBM extends this partnership to NetApp. NetApp AltaVault cloud backup solutions will now automatically be able to send backups to IBM Cloud Object Storage on IBM Cloud. Incorporating native support for IBM Cloud Object Storage into AltaVault solutions allows NetApp users to deploy backup data to the IBM Cloud.
IBM Cloud Object Storage for Bluemix Garage
For software developers, [IBM Bluemix Garage] provides a global consultancy with the DNA of a startup that combines the best practices of Design Thinking, Agile Development and Lean Startup to accelerate application development and cloud innovation.
Now, you can use IBM Cloud Object Storage in the IBM Cloud to support Garage method projects.
Want to try it out? With Promo Code "COSFREE", you can get a free year of Standard Cross-region IBM Cloud Object Storage with up to 25 GB/month access! See [Free Storage Promotion] for details.
This week, IBM InterConnect conference is going on in Las Vegas, Nevada.
One time in Las Vegas, I took the gondola ride at the Venetian Hotel. These are not boats with a motor on a chain or track, but are actually steered and propelled independently by the gondolier. At various points along our path, our gondolier would serenade our group with beautiful Italian songs.
As the ride was ending, I asked our gondolier how long their training program was to do this job. He told me "six weeks". I said "Wow, I would love to learn how to sing Italian songs like that in six weeks". He corrected me, "No, silly, they only hire experienced singers, and teach them six weeks to manage the gondola by turning the oar in the water."
(FCC Disclosure: I work for IBM. I have no financial interest in the Venetian Hotel, CBS Studios, or the producers of any television shows mentioned in this post. David Spark has provided me a complimentary copy of his book. This blog post can be considered an "unpaid celebrity endorsement" for the book reviewed below.)
InterConnect 2017 includes "Concourse", a trade show floor with people showing off the latest technologies. In the past 25 years, I have attended many conferences, and on occasion I have worked "booth duty". I am not in Las Vegas this week, so this post is advice to those that are.
One time, when the coordinators for an upcoming conference announced at an all-hands meeting that they were looking for "a number of knowledgeable and outgoing volunteers" to work the IBM booth, one of the employees in the audience asked, "How many of each?" While this might have been meant to draw laughs, it underscored a real problem.
In many IT and engineering fields, the terms "knowledgeable" and "outgoing" are seen as mutually exclusive: people are either one or the other. A study titled [Personality types in software engineering], by Luiz Fernando Capretz of the University of Western Ontario, analyzed the Myers-Briggs Type Indicator personality types of software engineers and found that the majority were "Introverts".
This line of thinking is further reinforced by the characters on television shows like "The Big Bang Theory". If you are familiar with the show, Sheldon and Amy are the most knowledgeable but also the most socially awkward, while Penny and Howard are less knowledgeable but at the more outgoing end of the spectrum.
I understand that for many engineers, working a booth at a trade show is far outside their "comfort zone". But what do you think is more likely, that you can train an engineer to work a booth in six weeks, be more outgoing, hold the right conversations, tell the right stories -- or -- train a professional model, a young, good looking man or woman, who is already outgoing and friendly, to answer technical engineering questions about your products and services?
I have been attending conferences for over 25 years, and occasionally have worked a booth or two. I started out as an engineer, but went through extensive training for public speaking, talking to the media and press, and moderating Q&A Expert panels.
Sadly, most people who work the booth get little to no training at all. You might be told your scheduled hours, how to scan bar codes on badges, and where the brochures and swag are stored. Then, you get your official "shirt" and told to wear it with a certain color pants, so that everyone looks like part of the team.
Fortunately, fellow blogger David Spark, of Spark Media Solutions, has written a book titled "Three feet from Seven Figures" with loads of advice on how to work a booth with one-on-one engagement techniques to qualify more leads at trade shows.
The title of his book warrants a bit of explanation. When you are working a booth, potential buyers and influencers are walking by, often just three feet away from you, and these could represent million-dollar opportunities.
Too often, the folks working a booth take a passive approach. They look down at their phones, chat with their colleagues, and basically wait for complete strangers to ask them a question or request a demo. This non-verbal communication can really be a turn-off. David explains this in all-too-familiar detail and how to be more actively engaged.
David shows how to break the ice and build rapport with each attendee, how to qualify them as legitimate leads, and how to handle each type of situation.
For qualified leads, you need to maximize the opportunity. If you imagine how much a company spends to send its employees to work the booth, plus the cost of the booth itself, and divide it by the limited number of hours that the trade show floor is open, you quickly realize that each hour is precious.
Your time is valuable, and certainly their time is valuable also. Let's not spend too much time on a single lead, but rather capture the information, end the conversation, and move on.
If you are working a booth at IBM InterConnect, or plan to work a booth at an event later this year, I highly recommend getting this book! It is available in a variety of hard copy and online formats at [ThreeFeetBook.com].
I am not in Las Vegas this week for this year's event, but the sessions will be streamed live through [IBM GO].
IBM Systems Technical University - May 22-26, 2017 - Orlando, FL
IBM Systems Technical University is the evolution of a variety of other conferences related to servers, storage and software. It started out as the "IBM Storage Symposium", then added "System x" servers and was renamed "Storage and System x University", then dropped "System x" when IBM sold off that business to Lenovo.
A few years ago, it was renamed "Edge", initially focused just on storage, but two years ago it was combined with System z mainframe servers and POWER Systems for the IBM i and AIX platforms. It also covers software products that previously had their own conferences, like IBM Pulse and MaximoWorld.
Last year, the IBM Marketing team tried a daring experiment. Let's change "Edge" to be a "Cognitive Solutions and Cloud Platform" conference, with emphasis on IT Infrastructure.
The experiment failed. Not because IBM Systems don't support these new initiatives, but because the audience was more interested in hearing how IBM Systems help their current day-to-day business. As many attendees told me, "If we wanted to hear about Cognitive or Cloud, we have plenty of other conferences that cover that already!"
While 40 percent of IBM revenues are generated from Cognitive Solutions and Cloud Platform, the other 60 percent come from traditional, on-premises, systems-of-record application workloads, the kind that businesses, non-profit groups, and government agencies have been using for the past few decades!
To address this need, IBM offered three-day "IBM Systems Technical University" events at various locations. Last year, I presented storage topics at events in Atlanta, Austin, Bogota, Boston, Chicago, Dubai, Nairobi, and São Paulo.
We will have several of those this year as well. The main one will be a full 5-day event, May 22-26, in Orlando Florida. I will be there presenting various sessions on storage!
IBM World of Watson - October 29-November 2, 2017 - Las Vegas, NV
This is a Cognitive Solutions and Cloud Platform conference, with an emphasis on Analytics and Database technologies.
I did not attend World of Watson, or WoW for short, last year, but it was an evolution of the conference previously called "IBM Insight". I am sure everything from DB2 and Open Source databases to Hadoop and Spark will be covered this year as well.
In writing this post, I realize that this year will be like a "Conference Sandwich". Cognitive-and-Cloud at the top and bottom, with all the meat, veggies and garnish in the middle!
This week, IBM sponsored a nice multi-client event in San Juan, Puerto Rico. I was quite impressed with the quality of this video. Our marketing department has really done a good job on this!
This event was not just multi-client, but also spanned different industry sectors. IBM recently has realigned to five different sectors, and we had clients from different sectors attending the event.
The night before, I was able to meet most of the other IBM executives who came down for the event. Unfortunately, two were delayed because of the snow storms in the Northeast part of the United States, but they were able to arrive the next day.
The venue was the El Touro restaurant, near the Caribe Hilton. The weather was just right, about 75 degrees and breezy. It was a little humid for me, but everyone else was just happy to be out of the cold. Meanwhile, it was nearly 90 degrees back in Tucson, Arizona, where I am from.
This was billed as a "Lunch and Learn" and the food was delicious! In an effort to keep it simple, we had small dishes of fish with a fruit-based cream sauce, paella with rabbit meat and rice, pork belly, and Crema Catalana and a churro for dessert. This gave everyone a taste of everything, without having to order off a menu.
We basically took the same approach with the presentation. First, Marcos Obermaeir and Marcos Otero, the two leads for this event, thanked the audience and explained their new roles. Marcos Obermaeir focuses on the Financial and Insurance sector, while Marcos Otero focuses on the Communications sector.
Next we had Debbie Niven and Roopam Master, both IBM Executives, explain their roles, and how IBM can help both clients and Business Partners in Puerto Rico.
I presented samples of much larger presentations on three topics. First, the excitement over Software Defined Storage with the IBM Spectrum Storage family of products. Second, IBM Spectrum Scale as a better replacement for the Hadoop Distributed File System (HDFS) in Hadoop, IBM BigInsights and Hortonworks analytics deployments. Third, IBM Cloud Object Storage, and how it can be combined with IBM Spectrum Protect to back up your data to object storage either on premises or in the Cloud.
I could have easily spoken an hour on each topic, but instead, we shortened to about 20 minutes each, in keeping with the "Tapas" theme of the restaurant. This allowed those clients who wanted to hear more to have a reason to request a follow-up visit or call.
After the clients left, the IBM team had a reception for the IBM Business Partners. About 80 percent of IBM's storage business in Puerto Rico is done through IBM Business Partners, so they are an important link in IBM's "Go-to-Market" strategy.
The moon was nearly full, and the breeze and waves were a spectacular backdrop to the conversations I had with each person I met.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
IBM Storwize V5030F and V7000F all-flash high-density expansion enclosure
The 5U-high, 92-drive expansion enclosure introduced for the IBM Storwize V5000 and V7000 is now available for the all-flash models V5030F and V7000F. High-density expansion enclosure Model A9F requires IBM Spectrum Virtualize Software V7.8, or later, for operation.
The enclosure allows any mix of "Tier 0" write-endurance SSD at 1.6TB and 3.2TB capacities, and "Tier 1" read-intensive SSD at 1.92TB, 3.84TB, 7.68TB and 15.36TB capacities.
Storwize V5030F control enclosure models support attachment of up to 40U of expansion enclosures, which equates to eight high-density expansion enclosures, up to 760 drives per control enclosure, and up to 1,056 per clustered system.
Storwize V7000F control enclosure models support attachment of up to eight high-density expansion enclosures, up to 760 drives per control enclosure, and up to 3,040 drives per clustered system.
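The V7000F maximums quoted above can be sanity-checked with a little arithmetic, assuming a 24-slot control enclosure (the slot count is my assumption, not stated in the announcement):

```python
hd_enclosure_drives = 92       # drives per 5U high-density expansion enclosure
max_hd_enclosures = 8          # HD expansion enclosures per control enclosure
control_enclosure_drives = 24  # assumed drive slots in the control enclosure itself

per_control = max_hd_enclosures * hd_enclosure_drives + control_enclosure_drives
print(per_control)             # 760, matching the stated per-control-enclosure maximum

clustered = per_control * 4    # assuming up to four control enclosures per cluster
print(clustered)               # 3,040, matching the stated clustered-system maximum
```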
IBM has adopted an "Agile" process for all of its IBM Spectrum Storage software, which means quarterly delivery of new features and functions! Spectrum Virtualize is offered in a variety of forms: in the FlashSystem V9000, SAN Volume Controller, and Storwize family, and as software that runs on Lenovo and SuperMicro servers.
Lots of small enhancements were added in this release:
Apply Quality-of-Service (QoS) limits to a Host Cluster in terms of IOPS and/or MB/s throughput.
SAN Congestion reporting, via buffer credit starvation reporting in Spectrum Control and via the XML statistics reporting, for the 16Gbps FCP Host Bus Adapter (HBA).
Resizing for Metro Mirror and Global Mirror remote copy services of thin provisioned volumes.
Consistency Protection for Metro Mirror and Global Mirror. You can now define "Change Volumes" to be used in the event of problems with MM or GM; the system will switch over to GMCV mode.
Increased FlashCopy Background Copy Rates
Proactive Host Failover during temporary and permanent node removals from cluster
IBM Aspera® Files cloud service helps to enable fast, easy, and secure exchange of files and folders of any size between users, even across separate organizations. Aspera Files is currently available in three all-inclusive editions of Personal, Business, and Enterprise. Clients can subscribe either to a committed amount of data transferred on a monthly or annual basis or as a pay-per-use option.
Personal edition now includes 20 authorized users and a single workspace.
Business edition now includes 100 authorized users, 100 workspaces, support for IBM Aspera Drive, support for IBM Mobile applications, and support for Single-Sign-On.
Enterprise edition now includes 500 authorized users, no limit on number of workspaces, support for IBM Aspera Drive, support for IBM Mobile applications, and support for Single-Sign-On.
IBM is now introducing a new "Elite edition", which includes 2,500 authorized users, no limit on the number of workspaces, support for IBM Aspera Drive, support for IBM Mobile applications, support for Single-Sign-On, and access to the IBM Aspera Developer Network and a nonproduction organization.
With the addition of the new Elite edition, clients have the flexibility to subscribe to additional functionality in Aspera Files that helps provide higher value and greater differentiation. The Elite edition is available as a subscription and on a pay-per-use basis.
In addition to the existing charge metric of data transferred, a user subscription metric is now included for all four editions. Each edition comes with an included number of authorized users in addition to other key features and capabilities.
Well, it's Tuesday again, and you know what that means? IBM Announcements! There were lots of announcements today, so I have split this up into two posts. One for the Tape and Cloud announcements, and the other for the Spectrum Storage family.
IBM Spectrum Virtualize Software V7.8.1
IBM Spectrum Virtualize™ V7.8.1 is the latest software for FlashSystem V9000, SAN Volume Controller and Storwize products.
Last release, IBM introduced "Host Groups" for clusters that needed to share a common set of volumes. This release offers "Host cluster I/O throttling": I/O throttling can be managed at the host level (individual or groups) and at the managed-disk level for improved performance management, with full GUI support.
Increased background FlashCopy transfer rates: This feature enables you to increase the rate of background FlashCopy transfers, providing faster copies as the infrastructure allows. This takes advantage of the higher performance capabilities of today's systems, processing the copy in a shorter period of time. The default was 64 MB/sec, and now we can go up to 2 GB/sec, for those who want their FlashCopy to be done as fast as possible.
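As a back-of-envelope illustration (my own arithmetic, not figures from the announcement), the difference between the old default and the new maximum is dramatic:

```python
def copy_time_seconds(volume_gb: float, rate_mb_per_sec: float) -> float:
    """Rough background-copy duration, ignoring grain overhead and I/O contention."""
    return (volume_gb * 1024) / rate_mb_per_sec

# A 2 TB volume at the old 64 MB/sec default vs. the new 2 GB/sec maximum:
old = copy_time_seconds(2048, 64)    # 32768 seconds, about 9.1 hours
new = copy_time_seconds(2048, 2048)  # 1024 seconds, about 17 minutes
print(f"64 MB/sec: {old / 3600:.1f} h, 2 GB/sec: {new / 60:.1f} min")
```

Real elapsed times will of course depend on what the back-end storage can sustain, which is why the feature is described as "as fast as the infrastructure allows."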
Port Congestion Statistic: A zero-buffer-credit statistic helps detect SAN congestion when diagnosing performance-related issues, improving support in high-performance environments. IBM had this for the 8 Gbps FCP cards, but not for the 16 Gbps cards, so now that's fixed.
Resizing of volumes in remote mirror relationships: Target volumes in remote mirror relationships will be automatically resized when source volumes are resized. Lots of clients asked for this, and IBM delivered!
Consistency protection for Metro/Global Mirror relationships: An automatic restart of mirroring relationships after a link fails between the mirror sites improves disaster recovery scenarios, helping to ensure the applications are protected throughout the process.
When IBM introduced "Global Mirror with Change Volumes" (GMCV), I wanted to call it "Trickle Mirror", because the primary site takes a FlashCopy, trickles the data over, then takes a FlashCopy at the remote site. Now, clients using traditional Metro or Global Mirror can add "Change Volumes" as protection. In the unlikely event a network disruption occurs, the relationship drops down to GMCV until the link resumes full speed.
Support of SuperMicro servers for the Spectrum Virtualize as Software Only offering: Support for x86-based Intel™ servers by SuperMicro for Spectrum Virtualize Software is available with this release.
Last year, IBM offered Spectrum Virtualize as software that could run on Lenovo servers. However, now there are clients who want alternative server choices.
Supermicro SuperServer 2028U-TRTP+ is supported to run Spectrum Virtualize Software. This is a great option for end clients, managed service or cloud service providers deploying private clouds, building hosted services, or using software-defined storage on third party Intel servers. This is a fully inclusive license with all key features available on Spectrum Virtualize in a single, downloadable image.
IBM Spectrum Control V5.2.13 and IBM Virtual Storage Center V5.2.13
We often joke that IBM Virtual Storage Center is the [Happy Meal] combining storage virtualization with Spectrum Virtualize hardware like FlashSystem V9000, SAN Volume Controller or Storwize as the "hamburger", Spectrum Control as the "fries" and "Spectrum Protect Snapshot" as the "soft drink". Storage Analytics was included as a "prize inside" only available in the VSC bundle to entice clients to choose this option.
Whenever IBM updates Spectrum Control, they often put out a new version of the Virtual Storage Center bundle as well. I was the Chief Architect for Spectrum Control from 2001 to 2002, and Technical Evangelist for SVC in 2003 when we first introduced the product, so I have a long history with both products.
This release provides additional information and performance metrics on Dell EMC VMAX and EMC VNX devices. This is done natively; the devices do not need to be virtualized behind Spectrum Virtualize as was often done in the past.
IBM now offers better visibility of drives within IBM Cloud Object Storage Slicestor® nodes. IBM acquired Cleversafe 18 months ago, and is working to get it under the Spectrum Control management umbrella.
IBM Spectrum Scale™ file system to external pool correlation. Spectrum Scale can migrate data to three different types of "external pools":
Cloud Object pool, either on-premise Object Storage or off-premise Cloud Service Provider storage.
Spectrum Protect pool, where Spectrum Protect manages the migrated data on one of 700 supported devices, including tape, virtual tape, optical, flash, disk, object storage or cloud.
Spectrum Archive pool, where data is written directly to physical tape using the Industry-standard LTFS format.
This release provides additional information on the copy data panel about SAN Volume Controller (SVC) HyperSwap® and vDisk mirror.
While the "Virtual Storage Center" bundle is an awesome deal, some clients have asked for the "Vegetarian Option" (Fries and Drink only). Why? Because they want the advanced storage analytics (prize inside) for other devices like DS8000, XIV, etc. So, IBM created the "IBM Spectrum Control Advanced Edition", which has everything in VSC except the Spectrum Virtualize itself.
Advanced edition adds improvements to the chargeback report. It also includes IBM Spectrum Protect™ Snapshot V8.1 release.
IBM Spectrum Control Storage Insights Software as a Service
Storage Insights is IBM's "Software-as-a-Service" reporting-only offering, a subset of Spectrum Control Advanced Edition. It includes direct support for Dell EMC VMAX, VNX, and VNXe storage systems. This is huge! Now, clients who have only EMC hardware can, on a monthly basis, figure out where they are wasting money and decrease their costs.
Other features carried over include the enhanced drive support for IBM® Cloud Object Storage, enhanced external capacity views for IBM Spectrum Scale™ and additional replication views for vDisk mirror and HyperSwap® relationships for SAN Volume Controller (SVC) and Storwize® devices that I mention above.
Well, it's Tuesday again, and you know what that means? IBM Announcements! There were lots of announcements today, so I have split this up into two posts. One for the Tape and Cloud announcements, and the other for the Spectrum Storage family.
IBM TS7700 Virtual Tape System
IBM TS7700 release 4.1.1 now supports seven- and eight-way grids with approved RPQs. Before this, grids could only have up to six TS7700 systems connected together.
IBM also plans to extend the capacity of the TS7760 base frame to over 600 TB, and to extend the capacity of a fully configured TS7760 system to over 2.45 PB, before compression, by supporting 8 TB disk drives. This is a huge increase over the 4TB and 6TB drives used today.
IBM offers the IBM Cloud Object Storage System in three ways: as software, as pre-built systems, and as a cloud server on IBM Bluemix (formerly known as SoftLayer).
For those not familiar with IBM Cloud Object Storage (IBM COS), consider it "Valet Parking" for your storage. In a valet parking environment, you have valet parking attendants that drive the cars, parking garages that hold the cars, and a manager that oversees the operation. With IBM COS, you have Accesser® nodes that receive and retrieve your data like valet parking attendants, you have Slicestor® nodes that store your objects like cars in a parking garage, and you have IBM COS Manager to oversee the operation.
Today, IBM announced new HDD options for the S01, S02 and S03 models of Slicestor nodes. These are all 7200 rpm, 3.5-inch Nearline drives, at capacities of 4 TB, 6 TB, 8 TB and 10 TB.
In addition, a short-range 40 GbE SFP+ transceiver is available for ordering on IBM Cloud Object Storage Accesser models A00, A01, and A02, and IBM Cloud Object Storage Slicestor models S01 and S02. This improves the performance of data transfer between the Accesser nodes and the Slicestor nodes. Think of it like shortening the distance valet parking attendants have to drive your car to the garage and run back.
I have been presenting Cloud Storage for nearly 10 years now. People are often shocked to learn that most of the major cloud providers -- including Amazon, Google, Microsoft -- do not offer "Data at Rest" encryption on their storage offerings.
Why not? Because it would mean investing in Self-Encrypting Drives, Key management software, and other related technology to make it happen. Instead, Cloud Service Providers (CSPs) expect you to encrypt the data in software. Most users encrypt data before it lands on the cloud, but what if you create the data in the cloud?
IBM solved this by offering IBM Cloud Object Storage in its IBM Cloud (formerly known as SoftLayer). It has integrated encryption software that takes care of this for you.
This new product, IBM Multi-Cloud Data Encryption V1.0, enables you to encrypt files, folders, and volumes in any cloud while maintaining local control of encryption keys. It integrates with IBM Security Key Lifecycle Manager (SKLM). This is designed to allow you to move cipher data between clouds that are running Multi-Cloud Data Encryption without decrypting and re-encrypting the data.
For example, you can use IBM Multi-Cloud Data Encryption to protect your data on Amazon, Google or Microsoft, then later realize that you can save a ton of money moving to IBM Cloud instead, and you are now able to move the data over seamlessly!
(Back in 2010, I poked fun at EMC with my post [VPLEX: EMC's Latest Wheel is Round]. I pointed out that EMC's announcement of "new features" that already existed in IBM's SAN Volume Controller. Oops! They did it again!)
Basically, Dell EMC is working on a new "2 Tiers" approach that combines high-performance flash tier with high-capacity object storage. Guess what? IBM already offers this! Why wait?
IBM Spectrum Scale, formerly known as the General Parallel File System (GPFS), supports POSIX, HDFS, OpenStack Swift, Amazon S3, NFS, SMB and iSCSI protocols.
Spectrum Scale can provide this front-end abstraction layer between flash and object storage, including IBM Cloud Object Storage system and IBM Bluemix (formerly SoftLayer) cloud services.
But why limit yourself to just two tiers? IBM Spectrum Scale can also support 15K, 10K and 7200 RPM spinning disk drive tiers, as well as a virtual or physical tape tier, the ultimate low-cost, high-capacity tier!
Several years ago, IBM coined the phrase "FLAPE" to discuss the two-tier approach of combining Flash with Tape using Spectrum Scale as the front-end abstraction layer.
Perhaps we should call combinations of Flash and Object "FLobject" storage? If the name catches on, you read it here first!
IBM is in a transition from being a "Systems, Software and Services" company, to become the leading "Cognitive Solutions and Cloud Platform" company. IBM has been in this transformation for the past three years or so, and [over 40 percent of its revenue] now comes from these strategic initiatives.
The purpose of AI and cognitive systems developed and applied by the IBM company is to augment human intelligence. Our technology, products, services and policies will be designed to enhance and extend human capability, expertise and potential. Our position is based not only on principle but also on science.
Cognitive systems will not realistically attain consciousness or independent agency. Rather, they will increasingly be embedded in the processes, systems, products and services by which business and society function -- all of which will and should remain within human control.
For cognitive systems to fulfill their world-changing potential, it is vital that people have confidence in their recommendations, judgments and uses. Therefore, the IBM company will make clear:
When and for what purposes AI is being applied in the cognitive solutions we develop and deploy.
The major sources of data and expertise that inform the insights of cognitive solutions, as well as the methods used to train those systems and solutions.
The principle that clients own their own business models and intellectual property and that they can use AI and cognitive systems to enhance the advantages they have built, often through years of experience. We will work with our clients to protect their data and insights, and will encourage our clients, partners and industry colleagues to adopt similar practices.
The economic and societal benefits of this new era will not be realized if the human side of the equation is not supported. This is uniquely important with cognitive technology, which augments human intelligence and expertise and works collaboratively with humans.
Therefore, the IBM company will work to help students, workers and citizens acquire the skills and knowledge to engage safely, securely and effectively in a relationship with cognitive systems, and to perform the new kinds of work and jobs that will emerge in a cognitive economy.
(As IBM is focused on its transformation from a "Systems, Software and Services" company to a "Cognitive Solutions and Cloud Platform" company, it seems appropriate to highlight my 1,000th blog post on the concept of cognitive solutions.)
A lot of people ask me to explain what exactly does IBM mean by "cognitive", which is a fair question. Let's start with the [Dictionary definition]:
of or relating to cognition; concerned with the act or process of knowing, perceiving, etc.
of or relating to the mental processes of perception, memory, judgment, and reasoning, as contrasted with emotional and volitional processes.
What exactly does IBM mean by Cognitive? IBM has taken this definition, and focused on four key strategic areas:
I spent the summer of 1981 debugging a "Pascal" compiler at the University of Texas at Austin. I wasn't told that was what I was doing. Rather, I was tasked with writing sample Pascal programs that would demonstrate the features and capabilities of the language.
Every day, I would come up with a concept of a program, punch up the cards, run it through the CDC hopper, and verify that it would work properly. If I didn't have it working by lunch, I would take it to the "help desk", they would look it over, and tell me how to fix it after I got back.
Most of the time, it was a mistake in my software. A few times, however, it was a flaw in the compiler itself. My programs were basically test cases, and the Pascal Compiler development team was fixing or enhancing the compiler code every time I had a problem.
Compilers basically work by parsing the program text, looking for fixed keywords that are entered in a specifically prescribed order to make sense. Other keywords may represent data types, variables, constants or pre-defined macros.
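To make the contrast concrete, here is a toy sketch of fixed-keyword tokenizing, my own illustration rather than anything from a real compiler. Everything either matches a prescribed keyword exactly or it does not; there is no room for ambiguity:

```python
import re

# Toy tokenizer in the spirit of a fixed-keyword parser. A real Pascal
# compiler is far more elaborate, but the principle is the same: the input
# must match prescribed keywords and symbols in a prescribed order.
KEYWORDS = {"program", "begin", "end", "var", "if", "then", "else"}
TOKEN_RE = re.compile(r"[A-Za-z_][A-Za-z0-9_]*|\d+|[:=;().]+")

def tokenize(source: str):
    tokens = []
    for lexeme in TOKEN_RE.findall(source):
        if lexeme.lower() in KEYWORDS:
            tokens.append(("KEYWORD", lexeme.lower()))
        elif lexeme.isdigit():
            tokens.append(("NUMBER", lexeme))
        elif lexeme[0].isalpha() or lexeme[0] == "_":
            tokens.append(("IDENT", lexeme))
        else:
            tokens.append(("SYMBOL", lexeme))
    return tokens

print(tokenize("program Hello; begin x := 42 end."))
```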
But compilers are not cognitive. Cognitive solutions can understand natural language, and have to handle all the ambiguity of words not being in the correct order, or different words having different meanings.
As an Electrical Engineer, I had to take many classes on classical analog signal processing. In fact, all computers have some amount of analog components, where threshold processing is used to differentiate a zero (0) from a one (1).
For example, if a "zero" value was represented by 1 volt, and a "one" value by 5 volts, then you can set a threshold at 3 volts. Any voltage less than 3 would be considered a "zero" value, and anything 3 volts or greater a "one" value.
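That fixed rule is trivial to express in code; this sketch just restates the 3-volt example above:

```python
def to_bit(voltage: float, threshold: float = 3.0) -> int:
    """Fixed-threshold detection: below the threshold reads as 0, at or above as 1."""
    return 1 if voltage >= threshold else 0

samples = [1.0, 2.9, 3.0, 4.7, 5.0]
print([to_bit(v) for v in samples])  # [0, 0, 1, 1, 1]
```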
But threshold processing is not cognitive. Cognitive solutions also use thresholds, but their thresholds are dynamically determined, through advanced analytics and statistical mathematical models, and may adjust up and down as needed, based on machine learning over time.
IBM Research is proud to have developed the world's most advanced caching algorithms for its storage systems. Cache memory is very fast, but also very expensive, so it is offered in limited quantities. Caching algorithms decide which blocks of data should remain in cache, and which should be kicked out.
Ideally, a block in read cache would be kicked out precisely after the last time it was read, with little or no expectation for being read again anytime soon. Likewise, a block in write cache would be destaged to persistent storage precisely after the last time it was updated, with little or no expectation for being updated again anytime soon.
The traditional approach is "Least Recently Used" or [LRU]. Cache entries that were read or updated recently are placed at the top of the list, and the least recently referenced fall to the bottom. When space is needed in cache, the entries at the bottom of the list are kicked out.
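A minimal LRU sketch, my own illustration and far simpler than any real storage controller's cache, shows the fixed, rule-based nature of the approach:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: recently touched entries move to the end of the
    ordering; when the cache is full, the entry at the front (least recently
    used) is the one kicked out."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None                       # cache miss
        self.entries.move_to_end(key)         # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        self.entries[key] = value

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # touching "a" makes "b" the least recently used
cache.put("c", 3)      # evicts "b"
print(cache.get("b"))  # None
```

Note how the eviction rule never changes: the cache reacts to the access order but never learns from it, which is exactly the distinction drawn below.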
IBM's [Adaptive Cache Algorithm outperforms LRU]. For example, on a workstation disk drive workload with a 16 MB cache, LRU delivers a hit ratio of 4.24 percent while ARC (Adaptive Replacement Cache) achieves 23.82 percent; and, for an SPC-1 benchmark with a 4 GB cache, LRU delivers a hit ratio of 9.19 percent while ARC achieves 20 percent.
But caching algorithms, including IBM's Adaptive Cache, are not cognitive. These algorithms respond pragmatically based on the current state of the cache. Cognitive solutions learn, and improve with usage. This is often referred to as "Machine Learning".
The human-computer interface (HCI) has much room for improvement in a variety of areas.
Take for example a snack vending machine. In college, we had assignments to simulate its computing logic. We had to interact with the buyer, accept coins entered into the slot--nickels, dimes and quarters worth 5, 10 and 25 cents--keep a running monetary balance, dispense snacks of various prices, and return an appropriate amount of change, if any. There is even a [greedy algorithm] designed to optimize how the change is returned.
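For US coin denominations, the greedy approach is easy to sketch (my own minimal version, handling only quarters, dimes and nickels as in the assignment above):

```python
def make_change(cents: int, coins=(25, 10, 5)):
    """Greedy change-making: always dispense the largest coin that fits.
    This happens to be optimal for US denominations, though not for
    arbitrary coin sets."""
    change = {}
    for coin in coins:
        count, cents = divmod(cents, coin)
        if count:
            change[coin] = count
    return change

# A $1.00 payment for a 35-cent snack: 65 cents back as 2 quarters, 1 dime, 1 nickel.
print(make_change(100 - 35))  # {25: 2, 10: 1, 5: 1}
```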
But vending machines are not cognitive. Like the caching algorithms, vending machines interact based on fixed programmatic logic, treating all buyers in the same manner. Cognitive solutions can interact with different users in different ways, customized to their needs, and these interactions can improve over time, based on machine learning.
IBM is exploring the use of Cognitive Solutions in a variety of different industries, from Healthcare to Retail, Financial Services to Manufacturing, and more.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
(Yes, OK, it's actually Thursday. I wrote this post weeks ago, but was embargoed until Jan 10, and then was asked to wait until Jan 12 so that the IBM Marketing team could translate my text into 15 different languages.)
This week, the IBM DS8000 team announces a new High Performance Flash Enclosure (HPFE-Gen2) and a series of All-Flash Array DS8880F models that exploit this new technology.
New High Performance Flash Enclosure (HPFE-Gen2)
The original HPFE was 1U high with 16 or 30 flash cards, and could support RAID-5 or RAID-10. Most used RAID-5, resulting in four array sites of 6+P each, leaving two cards for spare. These 1.8-inch cards were only 400 or 800 GB in size, so the maximum raw capacity was only 24TB per 1U enclosure.
The new HPFE-Gen2 enclosure is a complete re-design, consisting of two Microbays and two TeraPacks. The I/O Bays attach to the Microbays via PCIe Gen3. The Microbays in turn attach to both TeraPacks via redundant 6 Gb or 12 Gb SAS.
Each TeraPack holds 24 flash cards. Since the TeraPacks come in pairs, you can install 16, 32 or 48 flash cards per enclosure. Each 16-card set represents two array sites, for a maximum of six array sites per HPFE-Gen2.
RAID-5 for 400/800 GB. Two 6+P arrays, four 7+P arrays, and two spares.
RAID-6 for 400/800/1600/3200 GB. Two 5+P+Q arrays, four 6+P+Q arrays, and two spares.
RAID-10 for 400/800/1600/3200 GB. Two 3+3 arrays, four 4+4 arrays, and four spares.
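For a rough sense of scale (my own arithmetic based on the card counts and sizes above, not an official capacity specification), the raw capacity of a fully populated enclosure works out as follows:

```python
def raw_capacity_tb(cards: int, card_gb: int) -> float:
    """Raw (pre-RAID, pre-spare) capacity of an enclosure, in TB."""
    return cards * card_gb / 1000

# Fully populated HPFE-Gen2 enclosure (48 cards) at each flash-card size:
for size_gb in (400, 800, 1600, 3200):
    print(f"{size_gb} GB cards: {raw_capacity_tb(48, size_gb):.1f} TB raw")
```

Compare that top-end figure of roughly 150 TB raw per enclosure against the 24 TB maximum of the original 1U HPFE.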
(Technically, these new "Flash cards" are 2.5-inch Solid State Drives (SSD) placed into the HPFE-Gen2 connected to the PCIe Gen3 interface, with 50 percent additional capacity to tolerate up to 10 drive writes per day (DWPD). IBM will continue to call them "Flash Cards" for naming consistency between the two generations of HPFE.)
The new HPFE-Gen2 enclosures are substantially faster, offering up to 90 percent more IOPS, and up to 268 percent more throughput (GB/sec). The Microbays use a new flash-optimized ASIC to perform the RAID calculations.
New All-Flash Array DS8880F models
IBM introduces the DS8884F, DS8886F and DS8888F that are based entirely on the HPFE-Gen2 enclosures described above.
DS8884 - Hybrid: HDD/SSD/HPFE mix
DS8886 - Hybrid: HDD/SSD/HPFE mix
DS8888 - AFA: HPFE only
DS8884F - AFA: HPFE-Gen2 only
DS8886F - AFA: HPFE-Gen2 only
DS8888F - AFA: HPFE-Gen2 only
New zHyperLink connection
Also, as a "Statement of Direction", IBM intends to deliver field upgradable support for zHyperLink on existing IBM System Storage DS8880 machines for connection to z System servers. zHyperLink is a short-distance, mainframe-attach link designed for lower latency than High Performance FICON.
Typical latency with FICON/zHPF is around 140-170 microseconds, and this new zHyperLink is estimated to reduce this down to 20-30 microseconds, but is limited to 150 meter fiber optic cable distance. zHyperLink is intended to speed up DB2® for z/OS® transaction processing and improve active log throughput.
Fellow blogger Chris Mellor from The Register has an interesting post titled [It's a ratchet: Old storage guard face incoming tech squeeze]. Chris opines that the big traditional storage vendors -- which he refers to as the "old guard": Dell EMC, HDS, HPE, IBM and NetApp -- are being squeezed out by startups with new technologies.
Last week, I saw the play [Fiddler on the Roof], a musical production by Arizona Theater Company (ATC), and thought of various parallels with Chris's post.
For those not familiar, the story centers around a father named Tevye and his wife trying to stick to tradition, with five daughters who are open to breaking with tradition to get married. The family lives in a small rural town, back in a time long ago when people were persecuted for their religious and ethnic background. Aren't you glad we live in [more enlightened times]!
Back to Chris Mellor, he writes in his post:
"This old guard has so far failed to squash newcomers in the all-flash array, hyperscale, object and software-defined storage areas. This is despite the established firms adopting these technologies and acquiring some startups."
Should the old guard try to squash newcomers? Often, these startups provide much needed innovations that move the IT industry forward.
In the play, Tevye wants to stick to tradition, whereby the town's matchmaker would find a husband for each daughter, and he, as father of each bride, would then provide his permission and blessing to the match.
Obviously, these startups are neither asking the old guard for their permission nor their blessing. While I can't speak for the rest of the "old guard", IBM is leading in these various spaces. Let's look at each of these new trends.
All-Flash Arrays (AFA)
The category of "All-Flash Arrays" includes both purpose-built hardware as well as traditional devices based on solid-state drives (SSD). While the R&D investment needed for purpose-built hardware can limit this to some of the largest vendors, nearly any startup can slap commodity SSD into traditional HDD controllers and call it AFA.
IBM offers the world's fastest AFA, and has been a leader in the AFA category for the past three years, investing over $1 Billion USD on its FlashSystem, DS8000, Elastic Storage Server (ESS), SVC and Storwize product families.
Software-Defined Storage (SDS)
While the definition for SDS is still in a bit of flux, IDC has tried to identify three characteristics:
Storage software stack that can be installed on commodity resources (x86 hardware, hypervisors, or cloud) and/or off-the-shelf computing hardware
SDS should offer a full suite of storage services
Federation between the underlying persistent data placement resources to enable data mobility of its tenants between these resources
IBM has been ranked [Number 1 in Software Defined Storage] for several years now, investing over $1 Billion USD in its IBM Spectrum Storage family. This collection of software is implemented in a variety of offerings, including pre-built systems, software that you can deploy on commodity off-the-shelf servers, and in the Cloud.
Object storage breaks tradition with block and file-based storage solutions. Rather than reading and writing files using POSIX, NFS or SMB protocols, objects are accessed via HTTP GET and PUT requests. The two most common protocols are Amazon S3 and OpenStack Swift.
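The semantics are easy to sketch in a few lines; this is purely my own in-memory illustration, not a real S3 or Swift client (a real endpoint adds buckets or containers, authentication, and HTTP on the wire):

```python
# Illustrative in-memory sketch of object-storage semantics: whole objects
# are written with PUT and read back with GET by key. There is no POSIX-style
# seek-and-rewrite of bytes inside an object; a PUT replaces it entirely.
class TinyObjectStore:
    def __init__(self):
        self._objects = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data    # PUT always replaces the whole object

    def get(self, key: str) -> bytes:
        return self._objects[key]    # GET returns the whole object

store = TinyObjectStore()
store.put("backups/2017-01/db.dump", b"archive bytes")
print(store.get("backups/2017-01/db.dump"))
```

That whole-object replacement model is exactly why object storage suits the static, infrequently changing data described next.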
Object storage is ideal for static and stable data that either never changes, or changes infrequently. A lot of new workloads are based on unstructured data that falls in this category, such as Big Data Analytics, High-performance Computing (HPC), and active archives.
In the latest IDC MarketScape, [IBM is ranked #1 in Object Storage]. IBM actually has three software-defined storage offerings that support object access methods: IBM Spectrum Scale, IBM Spectrum Archive, and IBM Cloud Object Storage System. The latter comes from the 2015 acquisition of Cleversafe.
"Hyperscale leverages commodity servers and a software-defined approach, scaling the resources needed for applications and storage separately. As storage needs grow, companies can add servers running software-defined storage (SDS) to the storage tier to expand capacity... Data is automatically distributed across the entire cluster of storage servers as new nodes are added to the system... With hyperscale, ... cluster nodes network together to form a storage resource pool."
This breaks from the tradition of dual-controller high-end arrays, which scale-up, rather than scale-out. IBM offers its IBM Spectrum Accelerate, IBM Spectrum Scale, and IBM Cloud Object Storage System to fill this hyperscale requirement.
In the play, Tevye realizes the world is changing all around him: he can either fight these changes and stick to tradition, or accept that he must change too, and move on. After 105 years, IBM continues to lead the IT industry, primarily by adopting new trends and technologies and moving to new business opportunities as they present themselves.