This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson)
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years with IBM Systems, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently, he has been leading efforts across the Communication/CSI market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, Lloyd supported industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this solution, a key part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and on LinkedIn as Lloyd Dean.
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Today, I'll cover the announcements related to our IBM System Storage N series disk systems, which tie in nicely with the Valentine's Day theme. The phrase we use for "unified storage" is that N series allows you to "share the closet, not necessarily the clothes". Couples recognize the value of a shared closet over having one closet for just the man's clothes and a separate closet for just the woman's clothes. (For some couples, the man's closet would be terribly underutilized!) By analogy, the N series allows you to share one solution for LUNs that can be accessed via FCP or iSCSI protocols, and NAS file systems that can be accessed via NFS and CIFS protocols. In most data centers, Windows and UNIX applications are about as likely to share files as men and women are to wear each other's clothes, so the analogy is intact.
Let's take a look at what got announced:
N7700 and N7900
There are actually [eight new high-end N series] models. The N7900 has 4 processors and 32GB of cache. The N7700 has 2 processors and 16GB of cache. Each has two appliance models (A11 single node and A21 dual node) and two gateway models (G11 single node and G21 dual node).
The appliance models support both FC and SATA disk. The N7900 A models support a maximum of 1176 drives; the N7700 A models support 840 drives. The gateway models provide FCP, iSCSI and NAS host access through external disk attachment. The N7900 gateway models support 1176 LUNs on external disk systems; the N7700 gateway models support 840 external LUNs.
N series now supports 1 TB SATA disk
The [EXN1000 expansion drawer] can now hold up to fourteen 1TB SATA drives. This is in addition to previous announcements supporting 500GB and 750GB drive capacities. These drawers support the entire N series line.
With 1 TB drives, the N7900 now supports up to 1176 TB of raw capacity, which is over 1PB of usable data in 12+2P RAID-DP mode. This is greater than the internal disk capacity limits of current IBM DS8000, EMC DMX and HDS USP-V models.
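If you want to check the raw-to-usable arithmetic yourself, here is a quick sketch. The drive count and the 12+2P RAID-DP geometry come from the text above; the calculation deliberately ignores spares, right-sizing and file system overhead, so treat it as a back-of-envelope figure only.

```python
# Raw vs. usable capacity for an N7900 fully populated with 1 TB SATA drives.
# RAID-DP in 12+2P mode: each 14-drive group holds 12 data + 2 parity drives.
drives = 1176
drive_tb = 1.0                      # 1 TB per SATA drive (decimal TB)

raw_tb = drives * drive_tb          # 1176 TB raw
groups = drives // 14               # 84 complete 12+2P groups
usable_tb = groups * 12 * drive_tb  # parity drives excluded

print(raw_tb)                       # 1176.0
print(usable_tb)                    # 1008.0 -> just over 1 PB usable, as stated
```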
At the low end, both the N3300 and N3600 now support 500GB, 750GB and 1TB SATA drives in addition to the SAS drives they already supported.
SnapManager for Microsoft SharePoint
There is a new SnapManager in town. This one is for Microsoft SharePoint data. See the announcement for the [N3300 and N3600] for details.
On Jan 24, IBM signed agreements with [Ingram Micro, Tech Data, and Synnex] to distribute the N series products and work with IBM to recruit new solution providers to the line. These three are all well-respected, world-class distribution providers, so we are glad to have increased our partnership with them.
Well, I'm back in Tucson, and thought I would close out my coverage of this year's Data Center Conference 2009 with some pictures. These first few are from the Solution Showcase.
There were four stations at the IBM booth. I had the "Information Infrastructure" station; here you can see I had my blook (blog-based book) "Inside System Storage: Volume I" on display, a solid-state drive (in clear plexiglas to show all the chips inside), and the GUI panel for XIV.
What really stole the show was the IBM Portable Mobile Data Center (PMDC), which is a shipping crate with a fully running data center inside. In the one shown here, we had iDataPlex servers connected to an IBM XIV Storage System. Here is David Bricker striking a pose.
Inside, Monica Martinez shows off the iDataPlex servers. These are 1U servers that are only half as deep as regular servers, so you can pack 84 servers in the floorspace of 42 traditional 1U servers.
Two of these fit into a 2U chassis to share a common power supply and fan set. The trouble with traditional 1U servers is that fans do not have enough radius, so putting wider 2U fans for two servers gives you much better airflow.
Monica Martinez, Ruth Weinheimer, and Tamara Rice.
Normally, IBM only makes announcements on Tuesdays, but today, Friday, IBM announces that it acquired Diligent Technologies. What? I got a lot of questions about this, so I thought I would start with this...
When I posted in January that [IBM Acquires XIV], fellow EMC blogger Mark Twomey of StorageZilla fame sent me a comment:
"Ah now Tony I wasn't poking fun. Indeed I find it fascinating that Moshe who's been sitting out on the fringes for years having been banished for being an obstructionist to EMC entering the mid-market is now back.
Which reminds me what happens with Diligent? There his as well aren't they or has he packed his stake in that in?"
As you might have guessed, I am privy to a lot of stuff going on behind the scenes at IBM that I can't talk about in this blog, and all these rumors in the blogosphere about an IBM acquisition of Diligent were a topic I couldn't officially recognize, defend or deny until official IBM announcements were made.
In his latest post, Mark wonders about [the last Tape and Mainframe sales person on earth]. He recounts my interaction with fellow HDS blogger Hu Yoshida about the energy benefits of Virtual Tape Libraries. Knowing that we were going to announce IBM's acquisition of Diligent soon, I thought this would be a worthy exchange, driving up the sales of Diligent boxes (whether you buy them from IBM or HDS). Diligent already had reselling arrangements with HDS, and IBM plans to continue those arrangements going forward. As I have explained before in my post [Supermarkets and Specialty Shops], IBM and HDS cater to different customers: a customer who wants the best technology from a specialty shop can buy IBM Diligent products from HDS, but one who wants one-stop shopping can buy IBM Diligent directly from IBM or its other IBM Business Partners.
(Perhaps a trickier situation is that Diligent also had an arrangement with Sun Microsystems, which competes directly against IBM as another IT supermarket vendor, but I have not heard how IBM has decided to handle this going forward.)
For more on this intricate mess of interconnected companies, alliances and partnerships, read Dave Raffo's article [Data dedupe dance card filling up] over at Storage Soup.
So, let's tackle the first question:
Q1. What will happen to IBM's real tape library business?
Come on! IBM is number one in tape; we've had virtual tape libraries since 1997 (the first in the industry) and continue to do well in both virtual and real tape libraries. Both provide value to the customer, and both have their place as part of the overall "information infrastructure". This acquisition provides yet another choice for clients on our "supermarket" shelf.
(For those following the ["which is greener"] discussion, the robot of the IBM TS3500 real tape library consumes 185W per frame (when moving) and each tape drive consumes 50W (when actively working on a tape). Compared to 13W per SATA disk drive, each 6-drive frame of a TS3500 consumes as much electricity as 37 SATA disk drives. If you are not running backups 24x7, the total kWh per day for your tape library is actually quite a bit less, but as several people have pointed out, there are customers that do run backups 80-90 percent of the time. LTO-4 tapes can hold 800GB uncompressed, and SATA disks are now available in 1TB (1000 GB) sizes, so you can have fun with your own comparisons.)
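For readers who do want to have fun with their own comparisons, here is a small sketch of the arithmetic. The wattage figures (185W robot, 50W per drive, 13W per SATA disk) are the ones quoted above; the 25 percent duty cycle is just an example of a shop that runs backups six hours a day.

```python
# Back-of-envelope power comparison: a TS3500 frame with 6 tape drives
# vs. SATA disks, using the figures quoted in the post.
def tape_frame_watts(drives=6, robot_w=185, drive_w=50):
    return robot_w + drives * drive_w       # 485 W when fully active

def kwh_per_day(watts, duty_cycle=1.0):
    # duty_cycle = fraction of the day the library is actively working
    return watts * 24 * duty_cycle / 1000

active_w = tape_frame_watts()
print(active_w)                             # 485
print(round(active_w / 13))                 # ~37 SATA drives' worth of power
print(kwh_per_day(active_w, duty_cycle=0.25))  # 2.91 kWh/day for 6h of backups
```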
Meanwhile, Scott Waterhouse, one of the few people at EMC who understand tape workloads like backup and archive, takes me to task in his Backup Blog with his post [I want a Red Ferrari]. For those who are surprised that anyone at EMC might understand backup workloads, EMC did acquire a company called Legato, and perhaps Scott came from that acquisition. I've never met Scott in person, but based solely on his writings, he seems to know his stuff and makes strong arguments for using IBM Tivoli Storage Manager (TSM) with deduplication and virtual tape libraries.
While TSM does a good job of "deduplicating" at the client first, backing up only changed data, Scott feels database and email repositories must be backed up entirely each time, which is what happens in many other backup software products. Some clients might have 80 percent database/email and only 20 percent files, while others might have less than 20 percent database/email and 80 percent files, so this mix influences whether deduplication will have a small or big benefit. If TSM has to back up the entire database, even though little has changed since the last backup, that is where deduplication on a virtual tape library can come in handy. For IBM DB2 and Oracle databases, TSM's application-aware Tivoli Data Protection module backs up only changed data, not the entire file. Thanks to IBM's FilesX acquisition (also coincidentally from Israel), IBM can now extend this support to SQL Server databases as well. However, to be fair, Scott is partly correct: TSM does back up some database and email repositories in their entirety, which is why it is a good idea to have BOTH an IBM virtual tape library with deduplication and Tivoli Storage Manager to handle all cases. This brings us to the next question:
Q2. What will happen to IBM's patented "progressive backup" technology?
IBM will continue to use TSM's progressive backup technology. TSM already works great with Diligent virtual tape libraries. One example is LAN-free backup. In this configuration, the TSM client writes its backups directly to a virtual or real tape library over the SAN, and then sends the list of files backed up to the TSM server over the LAN to record in its database. This can greatly reduce IP traffic on your LAN during peak backup periods. For more about this, see the IBM Redbook titled ["Get More Out of Your SAN with IBM Tivoli Storage Manager"].
Jon Toigo from DrunkenData asks [Did IBM Do Due Diligence Before Making Diligent Acquisition a Done Deal?], which is probably always a valid question. Unlike XIV, I wasn't part of the Diligent acquisition team, so I can't provide a first-hand account of the process. I am told that the IBM team did all the right things to make sure everything is going to turn out right. Sadly, many companies that make acquisitions in the IT industry fail to make them work. Fortunately, IBM is one of the few companies with a great success record, with over 60 acquisitions in the past six years. In the Xconomy forum, Wade Roush writes [IBM and the Art of Acquisitions] and gives some insight into why IBM is different. Jon did not understand why Cindy Grossman, IBM VP of tape and archive solutions, ran the analyst conference call for this announcement, which brings me to the next question:
Q3. What is Diligent virtual tape library going to be categorized as, a disk system or a tape system?
IBM organizes its storage systems based on the host application workloads. Products to address disk workloads (SVC, DS8000 series, DS6000 series, DS4000 series, DS3000 series, N series, XIV Nextra) are in our disk systems group. Storage that appears to host applications like a tape system, to address workloads like backup and archive (tape drives, libraries and tape virtualization), is in our tape and archive group. IBM Diligent has two products, one for big workloads and one for medium workloads. Both look like tape systems, so our tape and archive team, who understand tape workloads like backup and archive the best, are obviously the best choice to support IBM Diligent in the mix.
IBM will offer both N series and Diligent deduplication capabilities. For disk workloads, IBM N series offers a post-process deduplication feature at no additional charge. For tape workloads, IBM will now offer an in-line deduplication feature with Diligent Technologies. Different workloads, different offerings.
As with any acquisition, there will be some changes. The 100 folks from Diligent will get to learn the IBM way of doing things. This brings me to our fifth and final question:
Q5. What is the correct spelling: deduplication or de-duplication?
It appears that Diligent had a corporate-wide standard to hyphenate this term (de-duplication), but the "word police" at IBM who control and standardize all "proper spellings, trademarks, and capitalization" sent me corporate instructions a few days ago that IBM does not hyphenate this term (deduplication). So, going forward, it will be "deduplication", or "dedupe" for short. I suspect one of the first tasks for our new IBMers from Diligent will be removing all those hyphens from the [Diligent Technologies website]!
That's all for now, I'm off to Chicago, Illinois tomorrow!
Jamie Thomas, IBM General Manager of Storage and Software Defined Environments
Jamie announced [IBM Elastic Storage], a new offering that is available as a software defined storage solution, based on IBM's General Parallel File System (GPFS) technology already deployed at 45,000 installations.
IBM Elastic Storage provides a global namespace view across data center locations. It can manage up to a yottabyte of information, combining Flash, disk and tape resources. It supports OpenStack interfaces, Hadoop and standard POSIX file system conventions.
IBM Elastic Storage provides automated tiering to move data from different storage media types. Infrequently accessed files can be migrated to tape and automatically recalled back to disk when required. Unlike traditional storage, it allows you to smoothly grow or shrink your storage infrastructure without application disruption or outages.
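As a toy illustration of the kind of age-based rule such automated tiering applies, here is a sketch. The pool names and the 90-day threshold are invented for illustration only; the actual product uses its own policy engine and decides placement on richer criteria than last-access time.

```python
from datetime import datetime, timedelta

# Toy policy: files untouched for longer than a threshold are candidates
# for migration to a colder pool (and would be recalled on next access).
THRESHOLD = timedelta(days=90)   # hypothetical threshold, not a product default

def placement(last_access: datetime, now: datetime) -> str:
    return "tape" if now - last_access > THRESHOLD else "disk"

now = datetime(2014, 5, 12)
print(placement(datetime(2014, 5, 1), now))   # disk  (recently used)
print(placement(datetime(2013, 11, 1), now))  # tape  (cold, migrate out)
```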
IBM Elastic Storage software can run on a cluster of x86 and/or POWER-based servers, and can be used with internal disk, commodity storage, or advanced storage systems from IBM or other vendors.
IBM partnered with various clients in different industries in a special beta program. Jamie led a client panel to discuss their experiences with IBM Elastic Storage:
Alan Malek, Director of IT, Cypress Semiconductor.
"Total cycle time is key". Over the past 31 years, they bought whatever file storage was available. Now, with IBM Elastic Storage, the performance was very consistent for their engineering workloads with full load balancing.
Russell Schneider, Principal Storage Consultant, Jeskell.
Russell's company works with a lot of federal agencies, "Big Data has become Bigger Data". For example, research on Global Warming and Climate Change requires a large amount of storage across agencies.
In another example, when the tsunami hit Japan a few years ago, an agency here in the USA realized they had 14PB of data stored as a single copy in a data center at sea level less than a mile from the coast. They realized they needed to have a secondary copy, and an option to cache to a third location depending on regional disasters.
Matthew Richards, Products, OwnCloud.
For those not familiar with OwnCloud, it provides a Dropbox-like file sharing service, but in the Enterprise, with on-premise storage. It has been fully tested and certified with IBM Elastic Storage to provide a secure file sharing platform.
With IBM Elastic Storage, they were able to scale linearly up to 20,000 users, and are now testing 100,000 users. The need to have intelligent access to files at scale is what Matthew likes about IBM Elastic Storage.
Dr. Michael Factor, IBM Distinguished Engineer at IBM Research
Michael started out explaining there are three areas for storage: block, file and object. The fastest growing type of data is unstructured fixed content with associated metadata. This is ideal for object storage. Michael has been working with OpenStack Swift, an open source interface defined for object storage. He defined "storlets" as follows:
Storlets extend an object store by moving computation to the data -- filtering, transforming, analyzing -- instead of bringing data to the computation.
Storlets have been deployed on a variety of European Union research projects. For example, in partnership with Philips, a pathology storlet can count the number of cancer cells in an image. By bringing the computation to the data, it eliminates having to transfer large amounts of data over the network.
Storlets can run on-premise and on IBM's SoftLayer IaaS cloud offering.
Bruce Hillsberg, IBM Director of Storage Systems at IBM Research
Bruce led another panel discussion, this time of IBM storage experts:
Vincent Hsu, IBM Fellow and CTO of Storage.
The problem is the isolation of data into "storage silos". Isolation causes problems in managing large amounts of data at scale, and costs more as storage is not fully utilized. IBM Elastic Storage brings everything together, eliminating storage silos.
Michael explained how IBM works with clients all over the world to ensure that storage solutions meet client requirements. For example, storlets can be used to use rich metadata to manage photographs, and display them based on GPS satellite location, or other content that makes it easier to manage these images.
IBM Elastic Storage will support OpenStack Cinder and Swift interfaces. IBM is a platinum sponsor of the OpenStack Foundation, and is now its second most prolific contributor, with hundreds of full-time employees working on this.
Tom Clark, IBM Distinguished Engineer, Chief Architect, Storage Software, Cloud & Smarter Infrastructure.
Storage Management is a critical piece of Software Defined Storage. This is done in three ways:
The use of analytics to optimize the deployment of storage, based on workload requirements. Storage admins set policies, and then IBM Elastic Storage analytics gather metrics and optimize data placement and movement based on these policies. IBM Elastic Storage has 70 percent lower TCO than competitive offerings.
The focus on backup services. Backups are not just for data protection, but rather can be used to duplicate or replicate data for testing, for training, and for other purposes. IBM Elastic Storage is fully supported by IBM Tivoli Storage Manager.
Being able to support Hybrid Cloud environments, where some data can be on-premise, and other data off-premise. Storage Management challenges will need to deal with this possibility. IBM Elastic Storage is well positioned for this.
Carl Kraenzel, IBM Distinguished Engineer, Director of Watson Cloud Technology and Support.
Watson is ground-breaking technology, and IBM Elastic Storage technology was at the heart of the Watson that was first introduced in 2011.
To consider IBM Elastic Storage based on lower-cost and higher-scalability is not the full picture. Rather, this is an important platform for Cognitive Computing, which we are just at the tip of the iceberg in exploring. IT systems need to be aware of the context of what we are doing.
While the Grand Challenge demonstration on Jeopardy! was exciting, it is time we stop playing games and apply IBM Elastic Storage to business, to help with health care and medical research, and other problems in society. IBM has already deployed this at MD Anderson Cancer Center and Memorial Sloan Kettering Cancer Center, for example.
Tom Rosamilia provided closing remarks. IBM Elastic Storage is not just for new workloads in Cloud, Analytics, Mobile and Social (CAMS) but also traditional workloads as well. IBM Elastic Storage provides "data democracy" and allows for "better rested storage administrators" that make fewer mistakes.
Tom opened the floor for questions from the audience:
Q1. Data integrity, not just security but also quality? IBM Elastic Storage has end-to-end data integrity checking built-in.
Q2. How does IT transition from full control to auto-pilot? IBM allows you to tap into existing storage. This is not rip-and-replace. With storage virtualization, IBM hides the complexity that normally requires full control over specific assets.
Q3. Storage admins would rather have a root canal without Novocaine than move their data. What is IBM doing to offer automation to help storage admins move to this new infrastructure? IBM storage virtualization breaks that hard link between applications and specific storage devices. IBM Elastic Storage eliminates application downtime previously associated with data movement.
Tom Rosamilia assured the audience that IBM is fully committed to its storage portfolio. IBM Elastic Storage is not just about the profoundness of what IBM announced today, but also where IBM is investing in the future of storage.
On Tuesday, I covered much of the Feb 26 announcements, but left the IBM System Storage DS8000 for today so that it can have its own special focus.
Many of the enhancements relate to z/OS Global Mirror, which we formerly called eXtended Remote Copy or "XRC", not to be confused with our "regular" Global Mirror that applies to all data. For those not familiar with z/OS Global Mirror, here is how it works. The production mainframe writes updates to the DS8000, and the DS8000 keeps track of these in cache until a "reader" can pull them over to the secondary location. The "reader" is called the System Data Mover (SDM), which runs in its own address space under the z/OS operating system. Thanks to some work my team did several years ago, z/OS Global Mirror was able to extend beyond z/OS volumes and include Linux on System z data. Linux on System z can use a "Compatible Disk Layout" (CDL) format (now the default) that meets all the requirements to be included in the copy session.
IBM has over 300 deployments of z/OS Global Mirror, mostly banks, brokerages and insurance companies. The feature can keep tens of thousands of volumes in one big "consistency group" and asynchronously mirror them to any distance on the planet, with the secondary copy recovery point objective (RPO) only a few seconds behind the primary.
Extended Distance FICON
Extended Distance FICON is an enhancement to the industry-standard FICON architecture (FC-SB-3) that can help avoid degradation of performance at extended distances by implementing a new protocol for "persistent" Information Unit (IU) pacing. This deals with the number of packets in flight between servers and storage separated by long distances, and can keep a 4 Gbps FICON link fully utilized up to 50 kilometers. This is particularly important for the z/OS Global Mirror "reader", the System Data Mover (SDM). By having many "reads" in flight, this enhancement can help reduce the need for spoofing or channel-extender equipment, or allow you to choose lower-cost channel extenders based on "frame-forwarding" technology. All of this helps reduce your total cost of ownership (TCO) for a complete end-to-end solution.
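The "packets in flight" problem comes down to the bandwidth-delay product: to keep a link busy, enough data must be outstanding to cover a full round trip. A rough calculation, where the speed of light in fiber, the effective payload rate, the IU size and the pacing window are all illustrative assumptions on my part, not figures from the announcement:

```python
import math

# Bandwidth-delay product for a long FICON link.
# Assumptions: ~200,000 km/s propagation in fiber, ~400 MB/s effective
# payload rate on a 4 Gbps link, 8 KB Information Units.
link_mb_s = 400
distance_km = 50
rtt_s = 2 * distance_km / 200_000        # 0.0005 s round trip

inflight_kb = link_mb_s * 1000 * rtt_s   # data needed in flight to fill the pipe
print(inflight_kb)                       # 200.0 KB

# With a small fixed pacing window (say 16 IUs of 8 KB), only 128 KB can be
# outstanding and the link stalls; persistent pacing allows a larger window.
ius_needed = math.ceil(inflight_kb / 8)
print(ius_needed)                        # 25 IUs of 8 KB to keep the link full
```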
This feature will be available in March as a no-charge update to the DS8000 microcode. For more details, see the [IBM Press Release].
z/OS Global Mirror process offload to zIIP processors
To understand this one, you need to understand the different "specialty engines" available on the System z.
On distributed systems where you run a single application on a single piece of server hardware, you might pay "per server", "per processor" or lately "per core" for dual-core and quad-core processors. Software vendors were looking for a way to charge smaller companies less and larger companies more. However, you might end up paying the same whether you use a 1GHz or a 4GHz Intel processor, even though the latter can do four times more work per unit time.
The mainframe has a few processors for hundreds or thousands of business applications. In the beginning, all engines on a mainframe were general-purpose "Central Processor" or CP engines. Based on their cycle rate, IBM was able to publish the number of Millions of Instructions Per Second (MIPS) that a machine with a given number of CP engines can do. With the introduction of side co-processors, this was changed to "Millions of Service Units" or MSU. Software licensing can charge per MSU, and this allows applications running in as little as one percent of a processor to be charged appropriately.
One of the first specialty engines was the IFL, the "Integrated Facility for Linux". This was a CP designated to only run z/VM and Linux on the mainframe. You could "buy" an IFL on your mainframe much cheaper than a CP, and none of your z/OS application software would count it in the MSU calculations, because z/OS can't run on the IFL. This made it very practical to run new Linux workloads.
In 2004, IBM introduced "z Application Assist Processor" (zAAP) engines to run Java, and in 2006, the "z Integrated Information Processor" (zIIP) engines to run database and background data movement activities. By not having these counted in the MSU number for business applications, it greatly reduced the cost of mainframe software.
Tuesday's announcement is that the SDM "reader" will now run on a zIIP engine, reducing the costs for applications that run on that machine. Note that the CP, IFL, zAAP and zIIP engines are all identical cores. The z10 EC has up to 64 of these (16 quad-core) and you can designate any core as any of these engine types.
Faster z/OS Global Mirror Incremental Resync
One way to set up 3-site disaster recovery protection is to have your production data synchronously mirrored to a second site nearby, and at the same time asynchronously mirrored to a remote location. On the System z, you can have site "A" using synchronous IBM System Storage Metro Mirror over to nearby site "B", and also have site "A" sending data over to site "C" using z/OS Global Mirror. This is called "Metro z/OS Global Mirror", or "MzGM" for short.
In the past, if the disk in site A failed, you would switch over to site B, and then send all the data all over again. This is because site B was not tracking what the SDM reader had or had not yet processed. With Tuesday's announcement, IBM has developed an "incremental resync" where site B figures out the incremental delta needed to connect to the z/OS Global Mirror at site "C", and this is 95% faster than sending all the data over.
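The incremental-resync idea can be sketched as change tracking over a set of tracks. This is a toy model of my own; the actual mechanism is internal to the DS8000 and z/OS Global Mirror, and the numbers here are made up purely to show why sending only the delta is so much faster.

```python
# Toy model of incremental resync: after a site-A failure, site B knows
# which tracks changed since the point the site-C reader last processed,
# so only those tracks are sent to C instead of the whole volume.
tracks = 1000
changed = {17, 42, 413}              # tracks B wrote after the failover

def full_resync():
    return tracks                    # old behavior: send every track again

def incremental_resync():
    return len(changed)              # new behavior: send only the delta

saved = 1 - incremental_resync() / full_resync()
print(f"{saved:.1%} of the data avoided")   # 99.7% in this toy example
```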
IBM Basic HyperSwap for z/OS
What if you are sending all of your data from one location to another, and one disk system fails? Do you declare a disaster and switch over entirely? With HyperSwap, you only switch over the disk systems, and leave the rest of the servers alone. In the past, this involved hiring IBM Global Technology Services to implement a Geographically Dispersed Parallel Sysplex (GDPS) with software that monitors the situation and updates the z/OS operating system when a HyperSwap has occurred. All application I/Os that were writing to the primary location are automatically re-routed to the disks at the secondary location. HyperSwap can do this for all the disk systems involved, allowing applications at the primary location to continue running uninterrupted.
HyperSwap is a very popular feature, but not everyone has implemented the advanced GDPS capabilities. To address this, IBM now offers "Basic HyperSwap", which will actually be shipped as IBM TotalStorage Productivity Center for Replication Basic Edition for System z. This will run in a z/OS address space, and use either the DB2 RDBMS you already have, or provide an Apache Derby database for those few out there who don't have DB2 on their mainframe already.
Update: There has been some confusion on this last point, so let me explain the key differences between the different levels of service:
Basic HyperSwap: single-site high availability for the disk systems only
GDPS/PPRC HyperSwap Manager: single- or multi-site high availability for the disk systems, plus some entry-level disaster recovery capability
GDPS/PPRC: highly automated end-to-end disaster recovery solution for servers, storage and networks
I apologize to all my colleagues who thought I implied that Basic HyperSwap was a full replacement for the more full-function GDPS service offerings.
Extended Address Volumes (EAV)
Up until now, the largest volume you could have was only 54 GB in size, and many customers are still using 3 GB and 9 GB volume sizes. Now, IBM will introduce 223 GB volumes. You can have any kind of data set on these volumes, but only VSAM data sets can reside on cylinders beyond the first 65,280. That is because many applications still think that 65,280 is the largest cylinder number you can have.
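Where do the 54 GB and 223 GB figures come from? They fall out of standard 3390 geometry (15 tracks per cylinder, 56,664 bytes per raw track) times the cylinder limits; my sketch below uses raw track capacity, so the results land slightly above the marketed sizes, which account for formatting overhead.

```python
# 3390 disk geometry: 15 tracks per cylinder, 56,664 bytes per track.
bytes_per_cyl = 15 * 56_664          # 849,960 bytes per cylinder

def vol_gb(cylinders):
    return cylinders * bytes_per_cyl / 10**9

print(round(vol_gb(65_280)))         # ~55 raw -> the "54 GB" pre-EAV limit
print(round(vol_gb(262_668)))        # ~223 -> the new EAV volume size
```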
This is important because a mainframe, or a set of mainframes clustered together, can only have about 60,000 disk volumes total. The 60,000 is actually the Unit Control Block (UCB) limit, and besides disk volumes, you can have "virtual" PAVs that serve as aliases to existing volumes to provide concurrent access.
Aside from the first item, Extended Distance FICON, the other enhancements are "preview announcements", which means that IBM has not yet worked out the final details of price, packaging or delivery dates. In many cases, the work is done, has been tested in our labs, or is running beta in select client locations, but for completeness I am required to make the following disclaimer:
All statements regarding IBM's plans, directions, and intent are subject to change or withdrawal without notice. Availability, prices, ordering information, and terms and conditions will be provided when the product is announced for general availability.
Normally, IBM has its announcements on Tuesdays, but this week it was on Monday!
I am here in New York City, at the Kaufmann Theater of the American Museum of Natural History, for the [IBM Storage Innovation Executive Summit]. We have about 250 clients here, as well as many bloggers and storage analysts.
My day started out being interviewed by Lynda from Stratecast, a division of [Frost & Sullivan]. This interview will be part of a video series that Stratecast is doing about the storage industry.
(About the venue: American Museum of Natural History was built in 1869. It was featured in the film "Night at the Museum". In keeping with IBM's focus on scalability and preservation, the museum here boasts skeletons of the largest dinosaurs. The five-story building takes up several city blocks, and the Kaufmann theater is buried deep in the bottom level, well shielded from cell phone or Wi-Fi signals allowing me to focus on taking notes the traditional way, with pen and paper.)
Deon Newman, IBM VP of Marketing for North America, was our Master of Ceremonies. Today would be filled with market insight, best practices, thought leadership, and testimonials of powerful results.
This is my first in a series of blog posts on this event.
Information Explosion on a Smarter Planet
Bridget van Kralingen, IBM General Manager for North America, indicated that storage is finally having its day in the sun, moving from the "back office" to the "front office". According to Google's Eric Schmidt, we now create, capture and replicate more data in two days than all of the information recorded from the dawn of time to the year 2003.
1928: IBM's innovative 80-column punch card stored nearly twice as much as its 50-column predecessor.
1947: Bing Crosby decided to do his radio show by recording it at his convenience on magnetic tape, rather than doing it live. This was the motivation for IBM researchers to investigate tape media, delivering the first commercial tape drive in 1952. One tape reel could hold the equivalent of 30,000 punch cards.
1956: the IBM RAMAC mainframe was the first computer to access data randomly with an externally-attached disk system, the "350 Disk Unit", which stored 5 million 7-bit characters (about 5MB) and weighed over 500 pounds. Compare that to today's cell phone, which can store several GB of data in a handheld device.
1978: IBM invented Redundant Array of Independent Disks (RAID) through a collaboration with the University of California, Berkeley.
1993: IBM introduces the [IBM 9337 Disk Storage Array], the first external disk storage system for distributed operating systems. This was based on the Serial Storage Architecture [SSA] protocol.
1995: IBM launches products that support Storage Area Networks (SAN), based on the Fibre Channel Protocol. IBM's internal codenames for disk products were all names of sharks, and so our internal mantra was that a healthy storage diet was comprised of "Plenty of Fish and Fibre".
2010: IBM ships Easy Tier, the world's easiest-to-use sub-LUN automated tiering capability, for the IBM System Storage DS8700 disk system.
Storage is growing (in capacity) at 40 percent per year, but IT budgets are only growing (in dollars) by a measly 1 to 5 percent. She cited the success at [Sprint], presented at the October 2010 launch. By combining IBM SAN Volume Controller with a three-tier storage architecture, Sprint lowered their raw capacity from 10PB to 8.4PB, increasing utilization from 35 to 78 percent. This involved shrinking from six storage vendors to three, and reducing total number of disk arrays from 166 down to 96. The resulting system has only 38 percent of their data on their most expensive Tier-1 storage, the rest is now living on less expensive Tier-2 and Tier-3 storage.
Companies are entering the era of Big Data with an insatiable appetite for collecting and analyzing data for marketplace insights. IBM [InfoSphere BigInsights], based on the Apache Hadoop, has helped customers make sense of it all. Innovative technology, expertise and marketplace insight will provide the competitive path forward in the coming decade.
Storage Challenges and Opportunities in 2011 and Beyond
I always enjoy hearing Stan Zaffos, Gartner Research VP, present at the annual [Data Center Conference] in Las Vegas every December. His analysis and research focuses on storage systems and emerging storage technologies.
Stan provided his perspective on the storage industry. He suggested a top-down approach, based on the market trends that Gartner is closely monitoring. He suggests focusing heavily on managing data growth, using SLAs to improve efficiency, and following Gartner's recommended actions. His statement, "If something is not sustainable, then it is unsustainable," resonated well with the audience. His three key points:
Design to meet but not exceed Service Level Agreements (SLAs)
Re-evaluate your ratio of SAN versus NAS based on growth of unstructured data content
Explore the variety of Cloud options available.
Those of us who have been in this business a long time recognize that the problems haven't changed, just the dimensions. When in the past three decades were IT budgets generous and plentiful? When was there more than enough IT staff to handle all the requests in a timely manner? When hasn't there been a period of information growth? Gartner's analysis shows revenue for external controller-based (RAID-protected) disk systems growing at 8.7 percent. Raw disk capacity in TB is growing at 55 percent, and is expected to reach 100 Exabytes by 2015.
SAN has four times more revenue than NAS today, but NAS is growing faster. NAS had only 9 percent market share in 2010, but is projected to grow to 32 percent by 2015. SAN can offer higher price/performance for traditional OLTP and database workloads, but NAS is better suited for unstructured data, backups and archives, assisted by storage efficiency features like real-time compression and data deduplication. Which industries create the most unstructured data? The ones involved in filling out forms! This includes government, insurance agencies, manufacturing, mining and pharmaceuticals.
The phrase "good enough" should no longer be considered an insult. Too often IT departments design solutions that far exceed negotiated Service Level Agreements (SLAs), when they should instead focus on just meeting them. Modular storage systems are often sufficient for most workloads. Slower 7200RPM SATA disks can be one third the price of faster 15K RPM Fibre Channel drives, and often provide sufficient performance for the tasks required. Unified storage, such as IBM N series, can help simplify capacity planning, as storage can be re-purposed if different workloads grow at different rates. The key is to focus on meeting SLAs based on the price-vs-risk factor. Take a minimalist approach with fewer SLAs, fewer management classes, and fewer storage vendors.
Stan suggests a two-pronged approach: Capacity management through content analytics and classification, and Efficient Utilization through Thin Provisioning, storage virtualization, Quality of Service (QoS), compression and deduplication capabilities. These features will be ubiquitous by 2013. If you are worried that these technologies mean more information packed onto fewer devices, Stan's response was "If it's not there, it can't break." Storing data on fewer disks or tape cartridges means less chance something will fail.
Stan feels IT shops using Thin Provisioning should continue to charge their end-users for what they ask for (the full allocation request) rather than what the thin-provisioned amount actually occupies on the storage devices themselves. For example, if someone asks for a 100GB LUN to be allocated to their system, but this only takes up 30GB of actual data space, charge back the full 100GB!
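The chargeback advice above can be sketched in a few lines of code. This is purely illustrative; the function, field names and rates are hypothetical, not part of any IBM or Gartner tool.

```python
# Sketch of chargeback on allocated (not consumed) capacity.
# All names here are illustrative assumptions.
def chargeback(luns, rate_per_gb):
    """Bill each owner for the full allocated size, even when thin
    provisioning means far less capacity is physically consumed."""
    bills = {}
    for lun in luns:
        owner = lun["owner"]
        bills[owner] = bills.get(owner, 0) + lun["allocated_gb"] * rate_per_gb
    return bills

luns = [
    {"owner": "payroll", "allocated_gb": 100, "consumed_gb": 30},
    {"owner": "payroll", "allocated_gb": 50,  "consumed_gb": 10},
    {"owner": "web",     "allocated_gb": 200, "consumed_gb": 60},
]
# payroll is billed for 150 GB allocated, web for 200 GB, even though
# they consume only 40 GB and 60 GB respectively.
print(chargeback(luns, rate_per_gb=2))
```

Billing on the allocation keeps end-user behavior honest: there is no incentive to over-request capacity just because thin provisioning makes it "free".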
It can take five years for new technology to reach 50 percent adoption. The Romans took eight years to build the [Colosseum]. His research on "network convergence" found that 42 percent planned to use iSCSI, 32 percent Fibre Channel over Ethernet (FCoE) or other Top-of-Rack (TOR) converged switches, and 16 percent were looking for full convergence of servers, switches and storage. Features like IBM Easy Tier automatic sub-LUN tiering were introduced later, and so have not been adopted as widely as other features like Thin Provisioning, which has been around since the 1990s with the IBM RAMAC Virtual Array.
Stan felt that Public and Private clouds were two different approaches. Public clouds offer reservation-less provisioning. Private clouds offer improved agility, but can be more complex to set up, and have the risk of idle capacity similar to traditional IT datacenter deployments. Storage and file virtualization should be considered a pre-req for adopting Cloud technologies.
Storage IT teams need to adopt more than just technical skills. They need to learn about legal and government regulatory compliance issues, financial considerations, and would even benefit from doing some "marketing". Why marketing? Because often IT departments need end-users to change their attitudes and behaviours, and this can be accomplished through internal marketing campaigns.
I've blogged about some of these videos already, but since there are probably a few out there buying the brand new Apple iPhone looking for YouTube videos to play on them, these links might provide some example entertainment on your new handheld device.
Next week, the "Fourth of July" Independence Day holiday in the USA falls smack in the middle of the week, so I suspect the blogosphere will quiet down a bit. So whether you are working next week or not, in the USA or elsewhere, take some time to enjoy your friends and family.
Lakota Industries made news with the introduction of its [Sarah-Cuda Hunting Bow], named after moose-hunting U.S. Vice President nominee and Governor of Alaska [Sarah Palin]. This has all the same features as their other high-end hunting bows, but is lighter, smaller and available in Pink Camo. This "pink-it-and-shrink-it" move was designed to broaden the market share of hunting bows by reaching out to the needs of women hunters.
Not to be outdone, today, at the Storage Networking World Conference, IBM announced the new IBM System Storage SAN Volume Controller Entry Edition [SVC EE].
The new SVC Entry Edition, available in Flamingo Pink* or traditional Raven Black.
* RPQ required. Default color is Raven Black.
You might be thinking: "Wait! IBM SVC is already the leading storage virtualization product among SMB clients today, why introduce a less expensive model?" With the global economy in the tank, IBM thought it would be nice to help out our smaller SMB clients with this new option.
This new offering is actually a combination of new software (SVC 4.3.1) and new hardware (2145-8A4). Here are the key differences:
SVC Classic:
- Licensed by usable capacity managed, up to 8 PB
- Hardware: 2145-4F2, 8F2, 8F4, 8G4 or 8A4 nodes
- 1, 2, 3 or 4 node-pairs, depending on performance requirements
- FlashCopy, Metro Mirror and Global Mirror, licensed by subset of capacity used

SVC Entry Edition:
- Licensed by number of disk drives, up to 60 drives
- Only one node-pair needed
- FlashCopy, Metro Mirror and Global Mirror, but with simplified licensing
The SVC EE is not a "dumbed-down" version of the SVC Classic. It has all the features and functions of the SVC Classic, including thin provisioning with "Space-efficient volumes", Quality of Service (QoS) performance prioritization for more important applications, point-in-time FlashCopy, and both synchronous and asynchronous disk mirroring (Metro and Global Mirror).
While IBM does not yet have SPC-1 benchmarks published, IBM is positioning the SVC EE at roughly 60 percent of the performance, at 60 percent of the list price, of a comparable SVC Classic 2145-8G4 configuration. The SVC Classic is already one of the fastest disk systems in the industry; by comparison, the SVC EE is twice as fast as the original SVC 2145-4F2 introduced five years ago. If you outgrow the SVC EE, no problem! The 2145-8A4 can be used in traditional SVC Classic mode, and the SVC EE software can be converted into the SVC Classic software license for upgrade purposes, protecting your original investment!
For those considering an HP EVA 4400 or EMC CX-4 disk system, you might want to look at combining an SVC EE with [IBM System Storage DS3400] disk. The combination offers more features and capabilities, and helps reduce your IT costs at the same time.
And if you are worried you can't afford it right now, IBM Global Financing is offering a ["Why Wait?" world-wide deferral of interest and payments] for 90 days, so you don't have to make your first payment until 2009, applicable to all IBM System Storage products, including the SVC EE, SVC Classic and DS3400 disk systems.
Just in time for [Cyber Monday], Volume II of my "Inside System Storage" book series is now available. As I mentioned in my post on the [October 7th Launch announcement], I finally got past all the internal restrictions that prevented this volume from being published earlier.
My first book covered my initial 12 months of blogging experience, from September 2006 to August 2007. That book covers the history of my career transition from software engineer to marketing strategist.
My second book covers the next 8 months, from September 2007 to April 2008, spanning the acquisitions of the XIV and Diligent companies, which were part of an overall strategic re-alignment of storage within the broader "Systems and Technology Group" of IBM.
The books come in a variety of formats, including hardcover with dust jacket, paperback, and online eBook (PDF). My publisher, Lulu, now supports ePub format, so I am investigating the time and effort required to build this format from the source files.