Tony Pearson is a Master Inventor and Senior IT Architect for the IBM Storage product line at the
IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor
to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is
author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
My books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Jamie Thomas, IBM General Manager of Storage and Software Defined Environments
Jamie announced [IBM Elastic Storage], a new software-defined storage offering based on IBM's General Parallel File System (GPFS) technology, already deployed at 45,000 installations.
IBM Elastic Storage provides a global namespace across data center locations. It can manage up to a yottabyte of information, combining Flash, disk and tape resources. It supports OpenStack interfaces, Hadoop and standard POSIX file system conventions.
IBM Elastic Storage provides automated tiering to move data between different storage media types. Infrequently accessed files can be migrated to tape and automatically recalled back to disk when required. Unlike traditional storage, it allows you to smoothly grow or shrink your storage infrastructure without application disruption or outages.
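To make the tiering idea concrete, here is a toy Python sketch of a "migrate when idle" rule. Everything here is illustrative: Elastic Storage actually expresses such rules in the GPFS policy language, and the 90-day threshold and function names are my own assumptions, not anything from the product.

```python
# Hypothetical sketch of policy-driven tiering: files untouched for 90
# days move to the tape pool; reading a tape-resident file would trigger
# a recall to disk. Illustrative only -- not the GPFS policy language.
import time

MIGRATE_AFTER_DAYS = 90
SECONDS_PER_DAY = 86400

def choose_pool(last_access_epoch: float, now: float) -> str:
    """Return the pool a file should live in under the idle-age rule."""
    idle_days = (now - last_access_epoch) / SECONDS_PER_DAY
    return "tape" if idle_days > MIGRATE_AFTER_DAYS else "disk"

now = time.time()
print(choose_pool(now - 120 * SECONDS_PER_DAY, now))  # tape: idle 120 days
print(choose_pool(now - 5 * SECONDS_PER_DAY, now))    # disk: recently read
```

In the real system, the policy engine evaluates rules like this across billions of files and drives the migration and recall automatically.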
IBM Elastic Storage software can run on a cluster of x86 and/or POWER-based servers, and can be used with internal disk, commodity storage, or advanced storage systems from IBM or other vendors.
IBM partnered with various clients in different industries in a special beta program. Jamie led a client panel to discuss their experiences with IBM Elastic Storage:
Alan Malek, Director of IT, Cypress Semiconductor.
"Total cycle time is key." Over the past 31 years, Cypress bought whatever file storage was available. With IBM Elastic Storage, performance has been very consistent for their engineering workloads, with full load balancing.
Russell Schneider, Principal Storage Consultant, Jeskell.
Russell's company works with a lot of federal agencies, where "Big Data has become Bigger Data". For example, research on Global Warming and Climate Change requires a large amount of storage across agencies.
In another example, when the tsunami hit Japan a few years ago, an agency here in the USA realized they had 14PB of data stored as a single copy in a data center at sea level less than a mile from the coast. They realized they needed to have a secondary copy, and an option to cache to a third location depending on regional disasters.
Matthew Richards, Products, OwnCloud.
For those not familiar with OwnCloud, it provides a Dropbox-like file sharing service, but in the Enterprise, with on-premise storage. It has been fully tested and certified with IBM Elastic Storage to provide a secure file sharing platform.
With IBM Elastic Storage, they were able to scale linearly up to 20,000 users, and are now testing 100,000 users. The need to have intelligent access to files at scale is what Matthew likes about IBM Elastic Storage.
Dr. Michael Factor, IBM Distinguished Engineer at IBM Research
Michael started out explaining there are three areas for storage: block, file and object. The fastest growing type of data is unstructured fixed content with associated metadata. This is ideal for object storage. Michael has been working with OpenStack Swift, an open source interface defined for object storage. He defined "storlets" as follows:
Storlets extend an object store by moving computation to the data -- filtering, transforming, analyzing -- instead of bringing data to the computation.
Storlets have been deployed on a variety of European Union research projects. For example, in partnership with Phillips, a pathology storlet can count the number of cancer cells in an image. By bringing the computation to the data, it eliminates having to transfer large amounts of data over the network.
Storlets can run on-premise and on IBM's SoftLayer IaaS cloud offering.
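Since a storlet is just computation shipped to where the object lives, the idea can be sketched in a few lines of Python. The names below are hypothetical and not the real OpenStack Storlets API; the point is that only the small result, not the large object, crosses the network.

```python
# Toy illustration of the storlet idea: run the analysis next to the
# object and ship back only the (small) answer. Names are hypothetical.

def count_matches(object_bytes: bytes, needle: bytes) -> int:
    """A 'storlet': filters/analyzes the object where it is stored."""
    return object_bytes.count(needle)

# Simulated object store; in Michael's pathology example the object
# would be a large medical image, not a short byte string.
store = {"pathology/slide-001": b"cell cell tumor cell tumor"}

def run_storlet(store, name, storlet, *args):
    # Only the count crosses the "network", not the object itself.
    return storlet(store[name], *args)

print(run_storlet(store, "pathology/slide-001", count_matches, b"tumor"))
```

With a multi-gigabyte pathology image, the difference between shipping the image and shipping a single integer is exactly the savings Michael described.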
Bruce Hillsberg, IBM Director of Storage Systems at IBM Research
Bruce led another panel discussion, this time of IBM storage experts:
Vincent Hsu, IBM Fellow and CTO of Storage.
The problem is the isolation of data into "storage silos". Isolation causes problems in managing large amounts of data at scale, and costs more as storage is not fully utilized. IBM Elastic Storage brings everything together, eliminating storage silos.
Michael explained how IBM works with clients all over the world to ensure that storage solutions meet client requirements. For example, storlets can be used to use rich metadata to manage photographs, and display them based on GPS satellite location, or other content that makes it easier to manage these images.
IBM Elastic Storage will support OpenStack Cinder and Swift interfaces. IBM is a platinum sponsor of OpenStack foundation, and is now its second most prolific contributor, with hundreds of full-time employees working on this.
Tom Clark, IBM Distinguished Engineer, Chief Architect, Storage Software, Cloud & Smarter Infrastructure.
Storage Management is a critical piece of Software Defined Storage. This is done in three ways:
The use of analytics to optimize the deployment of storage, based on workload requirements. Storage admins set policies; IBM Elastic Storage analytics gather metrics and optimize data placement and movement based on those policies. IBM Elastic Storage has 70 percent lower TCO than competitive offerings.
The focus on backup services. Backups are not just for data protection, but rather can be used to duplicate or replicate data for testing, for training, and for other purposes. IBM Elastic Storage is fully supported by IBM Tivoli Storage Manager.
Being able to support Hybrid Cloud environments, where some data can be on-premise, and other data off-premise. Storage Management challenges will need to deal with this possibility. IBM Elastic Storage is well positioned for this.
Carl Kraenzel, IBM Distinguished Engineer, Director of Watson Cloud Technology and Support.
Watson is ground-breaking technology, and IBM Elastic Storage technology was at the heart of the Watson that was first introduced in 2011.
To consider IBM Elastic Storage based only on lower cost and higher scalability is not the full picture. Rather, this is an important platform for Cognitive Computing, which we have only begun to explore. IT systems need to be aware of the context of what we are doing.
While the Grand Challenge demonstration on Jeopardy! was exciting, it is time we stop playing games and apply IBM Elastic Storage to business, to help with health care, medical research, and other problems in society. IBM has already deployed this at MD Anderson Cancer Center and Memorial Sloan Kettering Cancer Center, for example.
Tom Rosamilia provided closing remarks. IBM Elastic Storage is not just for new workloads in Cloud, Analytics, Mobile and Social (CAMS) but also for traditional workloads. IBM Elastic Storage provides "data democracy" and allows for "better rested storage administrators" who make fewer mistakes.
Tom opened the floor for questions from the audience:
Q1. Data integrity, not just security but also quality? IBM Elastic Storage has end-to-end data integrity checking built-in.
Q2. How does IT transition from full control to auto-pilot? IBM allows you to tap into existing storage. This is not rip-and-replace. With storage virtualization, IBM hides the complexity that normally requires full control over specific assets.
Q3. Storage admins would rather have a root canal without Novocaine than move their data. What is IBM doing to offer automation to help storage admins move to this new infrastructure? IBM storage virtualization breaks that hard link between applications and specific storage devices. IBM Elastic Storage eliminates application downtime previously associated with data movement.
Tom Rosamilia assured the audience that IBM is fully committed to its storage portfolio. IBM Elastic Storage is not just about the profoundness of what IBM announced today, but also where IBM is investing in the future of storage.
Well it's Tuesday again, and you know what that means? IBM announcements! Many of the announcements were made by IBM Executives at the [IBM Pulse 2014 conference].
IBM Bluemix is the newest cloud offering from IBM, a Platform-as-a-Service (PaaS) based on the Cloud Foundry open source project that promises to deliver enterprise-level features and services that are easy to integrate into cloud applications.
This week, my fifth-line manager Tom Rosamilia, IBM Senior Vice President IBM Systems & Technology Group and Integrated Supply Chain made two announcements at Pulse. First, in addition to x86-based servers, SoftLayer will also offer POWER-based servers to run AIX, IBM i and [Linux on POWER] applications.
Second, SoftLayer will support PureApplication Patterns of Expertise. What is a pattern of expertise? It can be as simple as a virtual machine encapsulated in [Open Virtual Format], to more dynamic architectures, packaged with required platform services, that are deployed and managed by the system according to a set of policies.
Patterns simplify and automate tasks across the lifecycle of the application. Customers and partners alike are [seeing significant reductions in cost and time] across the application lifecycle with the deployment of a PureApplication System.
Also, this week at Pulse, Robert LeBlanc, IBM Senior Vice President of Software and Cloud Solutions, announced [IBM plans to Acquire Cloudant] which offers an open, cloud Database-as-a-Service (DBaaS) that helps organizations simplify mobile, web app and big data development efforts.
When I introduced [SmartCloud Virtual Storage Center] back in October 2012, I mentioned that it was a great solution for large enterprise that have all of their disk behind SAN Volume Controller (SVC).
To reach smaller accounts, IBM has announced two new offerings:
IBM SmartCloud Virtual Storage Entry for customers that have less than 250TB of disk behind two or four SVC nodes. It is priced per terabyte, by the amount of capacity that is virtualized.
IBM SmartCloud Virtual Storage for Storwize Family for customers that have other Storwize family products (Storwize V7000 or V5000, for example). It is priced per the number of storage enclosures that are managed by the Storwize family hardware.
In the photo, the marketing people staggered the various components to give it a stylized [Dagwood Sandwich] effect. I can assure you that these are just standard 19-inch rack components that fit into 6U of space in standard IT racks.
From top to bottom, we have the first FlashSystem V840 Control Enclosure, its 1U-high UPS, a second FlashSystem V840 Control Enclosure and its UPS, and finally a 2U-high FlashSystem V840 Storage Enclosure.
You can have up to a dozen Flash modules, in either 2TB or 4TB sizes, for a maximum of 40TB usable RAID-protected capacity. These can be protected with AES 256-bit encryption. The FlashSystem modules are front-loaded, and slide in and out for easy maintenance.
The system is fully redundant and hot-swappable with concurrent code load to ensure high availability.
(Update: In the comments, readers thought that this was nothing more than just a two-node SVC with FlashSystem 840. There are differences, so I have added the following table.)
The key difference is the cabling from controllers to storage: an SVC with FlashSystem 840 connects through SAN fabric ports, while the FlashSystem V840 Controllers attach directly to the V840 Storage Enclosures. Call Home support and GUI screen branding also differ between the two configurations.
The system is fully VMware-certified, supporting VAAI interfaces, and an SRA for VMware's Site Recovery Manager (SRM). With Real-time Compression, you can get up to 80 percent capacity savings for workloads like Virtual Desktop Infrastructure (VDI). That in effect gives you up to 5x (200TB) of virtual capacity in 6U of rack space!
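A quick back-of-the-envelope check on that arithmetic: 80 percent savings means each physical terabyte holds five logical terabytes.

```python
# Back-of-the-envelope check on the "up to 5x" claim. Integer
# percentages keep the division exact.
physical_tb = 40            # usable Flash capacity in the V840 example
savings_pct = 80            # Real-time Compression savings for VDI data

effective_tb = physical_tb * 100 / (100 - savings_pct)
print(effective_tb)         # 200.0 TB of virtual capacity, i.e. 5x
```

Of course, the actual savings depend on how compressible your data is; VDI images happen to compress very well.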
You can either keep it as an All-Flash array, or you can virtualize external IBM and non-IBM disk systems, and use the Flash capacity in the Storage Enclosure for IBM's Easy Tier automated sub-volume tiering and data migration. With or without external storage, the FlashSystem V840 can provide local and remote mirroring and point-in-time copies.
However, I was speaking to various clients in Winnipeg, Canada Tuesday and Wednesday this week, so marketing moved the announcement date to today to accommodate my schedule. Sometimes, being the #1 most influential IBM employee in storage comes in handy!)
Here, then, is a quick review of the storage portion of today's announcements.
IBM FlashSystem 840
The [IBM FlashSystem 840] offers twice the capacity as its predecessors, the 810 and 820, with up to 48TB in a dense 2U package.
(Quick recap of previous models: Both the FlashSystem 810 and 820 supported ECC-protected memory and Variable-striped RAID (VSR). The [FlashSystem 810] supported RAID-0 striped across the modules, and the [FlashSystem 820] supported two-dimensional 2D-RAID across modules for higher availability. Fellow blogger Jim Kelley (IBM) on his Storage Buddhist blog has a great post on this: [IBM FlashSystem: Feeding the Hogs].)
The new FlashSystem 840 in effect replaces both, so you can choose RAID-0 striping or 2D-RAID, along with your ECC-protected memory and Variable-striped RAID. It offers hot-swappable Flash modules, redundant components, and non-disruptive concurrent code load (CCL).
The FlashSystem 840 also introduces military-grade AES-XTS 256 bit encryption to provide added protection to your data.
For host attachment, you have some great choices: 16Gb/8Gb/4Gb auto-negotiated Fibre Channel (FCP), 40Gb InfiniBand QDR, and 10Gb FCoE. Whatever you decide, you get 90 microsecond writes, and 135 microsecond reads.
Since its introduction just over a year ago, IBM has sold FlashSystem to over 1,000 clients! For more on how this compares to other all-flash arrays, read my previous post about [IBM FlashSystem].
Adding SAN Volume Controller provides some key advantages, including Real-time compression, Thin provisioning, FlashCopy point-in-time copies, Stretched Cluster support, Easy Tier sub-LUN automated tiering, and remote copy services like Metro Mirror (synchronous) and Global Mirror (asynchronous).
Adding the SVC also changes the host attachment options: 8Gb/4Gb/2Gb Fibre Channel (FCP), 1Gb and 10Gb iSCSI, and 10Gb FCoE. Depending on the options and features you choose, the SVC layer adds a modest 60 to 100 microseconds to each read and write.
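Putting the latency figures together is simple addition of the numbers quoted above; the little sketch below just makes the resulting ranges explicit.

```python
# Rough arithmetic on the latency numbers quoted above: FlashSystem 840
# base latencies plus the 60-100 microsecond overhead of the SVC layer.
BASE_READ_US, BASE_WRITE_US = 135, 90
SVC_MIN_US, SVC_MAX_US = 60, 100

def with_svc(base_us: int) -> tuple:
    """Latency range (min, max) once the SVC layer is in the path."""
    return base_us + SVC_MIN_US, base_us + SVC_MAX_US

print(with_svc(BASE_READ_US))   # reads:  (195, 235) microseconds
print(with_svc(BASE_WRITE_US))  # writes: (150, 190) microseconds
```

Even at the top of the range, that is still well under a quarter of a millisecond end to end.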
Each SVC node dedicates four of its six cores, and 2GB of its 24GB cache, to use with compression. Those interested in beefing up compression performance, either with FlashSystems or with any other disk, can choose the "Compression Hardware Upgrade Boosts Base I/O Efficiency" (affectionately known as the CHUBBIE) RPQ 8S1296 for SVC systems with software version 126.96.36.199 or higher. Basically, this RPQ adds another 6-core CPU and another 24GB of cache, so that each node can dedicate 8 cores for compression, and 26GB of cache for compression processing. Initial test results show this can increase performance 3x!
IBM Network Advisor
The [IBM Network Advisor v12.1] management software provides comprehensive management for data, storage and converged networks. This single application can deliver end-to-end visibility and insight across different network types--it supports Fibre Channel SANs (including Gen 5 Fibre Channel platform), IBM FICON and IBM b-type SAN FCoE networks--and provides new features to manage your Brocade and IBM b-type SAN switches.
Cisco MDS 9710 Multilayer Director
The [Cisco MDS 9710 Multilayer Director] is mainframe-ready, with full support for System z FICON and Fibre Channel protocol (FCP) environments. This director supports eight module slots for a maximum of 384 ports.
Well it's Tuesday again, and you know what that means? IBM Announcements!
You might be thinking, didn't IBM just have a [huge storage announcement October 8, 2013]? You would be right! IBM's $1B additional investment in Storage has been like a shot of adrenaline in getting new features and functions out sooner to our clients.
DS8870 Disk System Release 7.2
New IBM POWER7+ controllers. The previous models of DS8870 were based on POWER7 controllers, and these new models have POWER7+ processors. This change enhances performance across the board, from mainframe to distributed systems, from sequential to random. Customers with existing POWER7-based models will be able to do an MES upgrade to the new POWER7+ next year.
For comparison with older DS8000 models, here are some internal IBM measurements we took for Database workloads on both z/OS(mainframe) and Distributed systems with typical 70% read, 30% write and 50% cache hit:
IBM Internal Measurements (thousands of IOPS)
New 1.2TB (10K RPM) and 4TB (7200 RPM) self-encrypting enterprise drives (SED). This is a 33% capacity boost over the 900GB and 3TB drives previously available. As with all the other drives in the DS8870, these new drives include the encryption chip right on the drive itself, offering encryption with scalability.
Improved security. Release 7.2 will support the U.S. National Institute of Standards and Technology [NIST.gov] 800-131A specification, raising the minimum cryptographic strength to the required 112 bits on the customer IP network. This involves updates to the security firmware, management software and digital signatures on code loads.
Metro Mirror enhancement for System z. By avoiding serial conflicts of updated blocks, this enhancement can boost performance up to 100 percent when using Metro Mirror with z/OS applications on System z mainframes.
Easy Tier™ reporting and graphs to determine optimal mix. Now you can see for yourself how sub-LUN automated tiering is helping your applications.
Easy Tier Workload Categorization
New workload visuals help clients and IBM technical specialists compare activity across tiers within and across pools to help determine optimal drive mix for current workloads
Easy Tier Data Movement Daily Report
New Easy Tier summary report every 24 hours illustrating data migration activity (5-minute intervals) can help visualize migration types and patterns for current workloads
Easy Tier Workload Skew Curve
Shows the skew of all workloads across the system in a graph to help clients visualize and accurately tier configurations when adding capacity or a new system. Clients can import this data into Disk Magic.
All-Flash Optimization. Yesterday, in my post [IBM FlashSystem versus EMC XtremeIO], I mentioned that any hybrid system like the IBM Storwize V7000 that supports a mix of SSD and HDD can obviously be configured as SSD-only. Apparently, that was not obvious to many readers, so I apologize. For the DS8870, you can configure an all-Flash (SSD-only) system, and Release 7.2 adds some optimization for SSD-only configurations.
For example, compare 1,056 146GB 15K RPM drives in RAID-10 against 224 400GB SSDs in RAID-5. Both configurations deliver the same 72 TB of usable capacity, but the all-Flash system was 70 percent faster, required 33 percent less floor space, and consumed 62 percent less energy.
(Note: Performance results based on measurements and projections using IBM benchmarks in a controlled environment.)
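For the curious, here is a rough raw-to-usable sanity check on those two configurations. The RAID array widths below are my assumptions, and spares and formatting reduce both numbers a bit further toward the quoted 72 TB.

```python
# Hypothetical raw-to-usable estimate for the two configurations above.
# RAID-10 keeps half the raw capacity; RAID-5 (assuming 7+1 arrays)
# keeps 7/8. Spares and formatting shave both down further.
hdd_raw_tb = 1056 * 0.146        # 1,056 drives of 146GB at 15K RPM
ssd_raw_tb = 224 * 0.400         # 224 SSDs of 400GB

hdd_usable_tb = hdd_raw_tb * 1 / 2    # RAID-10 mirroring overhead
ssd_usable_tb = ssd_raw_tb * 7 / 8    # assumed 7+1 RAID-5 overhead

print(round(hdd_usable_tb), round(ssd_usable_tb))  # both near 72 TB
```

The striking part is not the capacity math but that 224 drives do the work of 1,056.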
OpenStack™ support. The DS8870 now offers the [OpenStack Cinder] interface for block LUN allocations in OpenStack environments. IBM is a Platinum sponsor of OpenStack, and OpenStack is the strategic platform for IBM private and hybrid clouds.
XIV Storage System
Following on the heels of the [XIV enhancements announced], IBM has now added 800GB Solid State Drives (SSD) as Read cache for its 4TB drive-based models.
DCS3860 Disk System
The DCS3860 is the next generation of the DCS3700 disk system. Designed with Linux-x86 servers in mind, the system offers direct SAS host attachment, 24GB of cache, and 60 drives in a compact 4U drawer. Like its predecessor, the drives are stored on five pull-out trays, with twelve hot-swappable drives per tray. You can add up to five more expansion units, with 60 drives each, for a total of 360 drives in 24U rack space.
These new models will help our clients deploy new workloads and consolidate existing workloads.
Each resident presented at least six proposals for blog post ideas. A proposal included a title and short description of what it would entail. Titles had to be less than 70 characters, and the short descriptions were typically just a few sentences.
These were presented to the entire team, and we picked them apart, suggested better wording for the titles, or different ways to approach the topic.
"I treat others respectfully, attacking ideas and not people. I also welcome respectful disagreement with my own ideas.
I believe in intellectual property rights, providing links, citing sources, and crediting inspiration where appropriate.
I disclose my material relationships, policies and business practices. My readers will know the difference between editorial, advertorial, and advertising, should I choose to have it. If I do sponsored or paid posts, they are clearly marked.
When collaborating with marketers and PR professionals, I handle myself professionally and abide by basic journalistic standards.
I always present my honest opinions to the best of my ability.
I own my words. Even if I occasionally have to eat them."
Words to live by.
The residents spent most of the day working on our blogs from the proposals that were approved. The target was around 400 to 600 words in length, with one or two stock photos.
IBM is the #1 vendor for Social Business tools, so it makes sense for us to use our own stuff to facilitate the submission process. The residents submit their blog posts to IBM Connections as an activity in the Cloud Social Media Residency community. All of the resources we used, and all the presentations we saw, are all here in the community.
As an incentive, prizes were given out to those who submitted the most posts by end of the day.
We were given certificates for completing the class, and a "Redbooks Thought Leader" emblem to put on our blog.
Ryan Boyles took a group photo! If it seems that the photo is slightly askew, it is to make me look taller. Yes, I could have used GIMP to fix the orientation, but why bother? I look tall! Woo hoo! I will have to remember this technique for future group photos.
ITSO Cloud Social Media Residency, Oct 2013. Photo by Ryan Boyles.
Lastly, I would like to thank Vasfi, Tamikia, Hillary, Caroline, Ric, Jane, LeeAnne, Tina, Karen, Michael, Shelbee, Farzad, Stewart, Arun, Eric, Chris, Hans, Odilon, Mohsin, Wolfgang and the rest of the ITSO team for a wonderful job organizing this week!
The gondolier propelled the boat with an oar, stopping a few times to belt out beautiful Italian songs.
Truly impressed, I asked the gondolier how long was the training for this job. "Six weeks!" he answered. Wow! Where can I learn to sing like that in six weeks?
He clarified. No, the Venetian hotel hires competent singers, and then spends six weeks to teach them to row the gondola. Duh!
I asked Vasfi Gucer, our ITSO project leader for this residency, why there were so many Cloud topics on the agenda for this social media training. He explained it was just as important to emphasize "why" people need to be passionate about Cloud, in addition to the "what" and "how" of blogging.
This reminded me of this quote from fellow author Hugh MacLeod. I highly recommend his series of books.
"Blogging requires passion and authority. Which leaves out most people."
--- Hugh MacLeod.
Vasfi had invited Cloud experts who already have the authority to blog, and the point of this residency is for the residents to become passionate in sharing their expertise.
Here are some of the people that spoke on Cloud:
Ric Telford, IBM VP of Cloud Services
Ric Telford shared with us IBM's point of view of where the Cloud industry is going. He has been in this position since 2009, and shared with us the history of how the IBM Cloud business has evolved in the past four years.
Jane Munn, IBM VP Business Line Executive for Cloud hardware
As the Center of Competency on Cloud for all 12 IBM Executive Briefing Centers in my group, I had to report to Jane Munn on a frequent basis. I was pretty candid on those calls about what we should change, and I am glad to see that many of my suggestions have been implemented, or are being considered for 2014.
Michael Fork, IBM Lead Architect for Hosted Private Cloud
Michael Fork gave two great presentations, one on [IBM SoftLayer] Cloud services, and the second on IBM's support of open standards, such as [OpenStack] and Cloud Foundry.
Hans Zai, IBM Cloud Service Line Leader; and Odilon Magroski Goulart Junior, IBM Technical Solution Architect
All the residents had to present in front of the class on their expertise. Hans and Odilon presented their work on [IBM SmartCloud for SAP Applications]. Hans is from Sweden, and Odilon from Brazil, so their perspectives on this were quite interesting.
When IBM renamed LotusLive to [SmartCloud for Social Business], I thought this would be the naming convention for all of our Software-as-a-Service (SaaS) offerings.
But SmartCloud for SAP Applications is a Platform-as-a-Service, providing the SAP environment as a platform, which allows clients to then deploy their customized SAP applications on this platform.
What did I present on for my "Share your expertise" session? IBM System Storage, of course! Storage is a critical part of Cloud!
So, my gentle readers, what topics do you want me to write about that combines Storage and Cloud? Enter your suggestions in the comments below.
"SmartCloud Enterprise Object Storage is switching from 3rd-party Nirvanix to its internal IBM Softlayer. This one involves more in-depth explanation which I will save for another post."
It's time to make good on that promise! Here is a quick diagram to help visualize the agreement (with sincere apologies to [Jessica Hagy]!) but not to scale, of course!
Last month, Nirvanix announced it was shutting down October 15. Here was the exact wording from their website:
"For the past seven years, we have worked to deliver cloud storage solutions. We have concluded that we must begin a wind-down of our business and we need your active participation to achieve the best outcome.
We are dedicating the resources we can to assisting our customers in either returning their data or transitioning their data to alternative providers who provide similar services including IBM SoftLayer, Amazon S3, Google Storage or Microsoft Azure.
We have an agreement with IBM, and a team from IBM is ready to help you. In addition, we have established a higher speed connection with some companies to increase the rate of data transfer from Nirvanix to their servers.
We are working hard to have resources available through October 15 to assist you with the transition process, and have set up a rapid response team that can be reached at (619) 764-5650 [press 2 for customer support during normal business hours] or (888) 791-0365 after business hours, or contact email@example.com.
Please check back to this web page periodically for status updates.
We thank you for your support and patience.
The Nirvanix team
UPDATE ON NIRVANIX
On October 1, 2013, Nirvanix voluntarily sought Chapter 11 bankruptcy protection in order to pursue all alternatives to maximize value for its creditors while continuing its efforts to provide the best possible transition for customers."
In response, IBM put out this press release:
"In light of reports that Nirvanix has decided to soon cease operations, IBM is moving quickly to help clients of our Nirvanix-based Object Storage offering to move their data to other solutions such as the robust and highly scalable IBM SoftLayer Object Storage or IBM's persistent storage solution."
To understand why this is a big deal, consider the difference between Cloud Computing and Cloud Storage. Cloud Computing is like buying gasoline at your favorite gas station. If the station is closed, you can just drive a few blocks to another gas station. The ease with which customers can switch from one Cloud Compute provider to another is part of the appeal, forcing Cloud Compute providers to be extremely efficient at what they do to offer the lowest price.
Cloud Storage is completely different, more like a safety-deposit box at the bank, or a storage unit to hold all of your boxes of tax receipts. Now if you have a small amount stored away in a safety-deposit box, this is probably just a minor inconvenience. You can take out the contents and store at home, or find another bank and open a new safe deposit account.
However, if you have a lot stored in a storage unit, it may be more difficult.
For example, I am in the process of remodeling my home, so I have moved a lot of my stuff to a 400 cubic-foot storage unit during the process. There were a variety of storage units within a few miles of my home. Some are fully air-conditioned, some offer 24x7 access, while others are not air-conditioned or only allow access during business hours. It has taken me several weekends to box up my belongings and move them to the storage unit. My car only holds 12-14 boxes at a time, so many trips were involved.
If the Storage Unit company told me that they were closing down, and that I would have to move all of these boxes to another facility, I would have to hire moving professionals to do all the work. This is in effect what companies need to do with their data. They must take the data off Nirvanix systems, and either store it in-house, or find another cloud storage provider.
IBM offers three options:
IBM [SoftLayer Object Storage], an OpenStack Swift-based Object Storage offering. It provides a robust, highly scalable solution, with the ability to retrieve and leverage data the way you want to, and to grow when you need to. You can choose to store your objects in Dallas, Texas (USA), Amsterdam (Europe), and/or Singapore (Asia).
SCE persistent storage solution, where you can manage storage resources by attaching persistent storage to an instance during the instance creation process.
An alternate storage solution of your choice. Yes, IBM will help you move your data to Amazon, Google, Microsoft, etc. While technically competitors, IBM also has strategic partnerships in place with each to facilitate the movement.
These options are not just for IBM's SmartCloud Enterprise Object Storage clients. Nirvanix has named IBM the savior for all of its other non-IBM customers as well. Why IBM? Well, IBM is one of the most recognized names in the IT industry. Not just one of the biggest Cloud Service providers, IBM also has an army of professionals in its Global Services division to help.
Well, it's Tuesday again, and you know what that means? Announcements!
Today, IBM's announcements are designed to change the economics of big data analytics, cloud, mobile and social media.
[Software Defined Environments] require [Software Defined Storage], combining storage virtualization with open, extensible, industry-led interfaces. The IBM SmartCloud Virtual Storage Center (VSC) and IBM Storwize Family are the market leaders in storage virtualization. SmartCloud VSC, Storwize Family, and XIV support the industry-led OpenStack interfaces.
Here are some of the announcements today:
IBM Storwize® Family
The [SAN Volume Controller] was first introduced 10 years ago, in 2003. Today, clients enjoy these storage virtualization capabilities across a variety of offerings, known collectively as the [IBM Storwize Family].
IBM adds a new member to the Storwize Family. In addition to SAN Volume Controller, Storwize V7000, Storwize V7000 Unified, Flex System V7000, Storwize V3700, and Storwize V3500, IBM is announcing the [IBM Storwize V5000]. Here's a quick side-by-side comparison:
Scalability: Maximum configuration
- Storwize V7000: Four control enclosures clustered together, 36 expansion enclosures, 960 drives, 64GB cache
- Storwize V5000: Two control enclosures clustered together, 12 expansion enclosures, 336 drives, 32GB cache
- Storwize V3700: One control enclosure, 4 expansion enclosures, 120 drives, 8GB cache upgradeable to 16GB, optional Turbo performance
Host interfaces
- Storwize V7000: 8Gbps FCP and 1GbE iSCSI standard; optional 10GbE iSCSI/FCoE. Can upgrade to Storwize V7000 Unified by adding NAS File Modules to add support for CIFS, NFS, HTTPS, SCP and FTP protocols
- Storwize V5000: 1GbE iSCSI, 6Gbps SAS, 8Gbps FCP and 10GbE iSCSI/FCoE standard
- Storwize V3700: 1GbE iSCSI, 6Gbps SAS standard; optional 8Gbps FCP and 10GbE iSCSI/FCoE
Storage virtualization/Data Migration
- Storwize V7000: Internal virtualization and Data Migration standard; optional external virtualization
- Storwize V5000: Internal virtualization and Data Migration standard; optional external virtualization
- Storwize V3700: Internal virtualization and Data Migration standard (external devices can be attached to ingest data only)
Remote mirroring
- All three: optional Metro Mirror, Global Mirror, and Global Mirror with Change Volumes
Sub-LUN Automated Tiering
- Storwize V7000: Easy Tier standard
- Storwize V5000: optional Easy Tier
- Storwize V3700: optional Easy Tier
Hypervisor and cloud integration
- All three: VMware VAAI, VASA, vCenter plug-in, and OpenStack Cinder APIs standard
Storwize V7000, V5000 and V3700 now support larger 800GB SSD drives. Previously, they only supported SSD drives up to 400GB.
VMware 5.5 and VASA support. VMware ships every release with built-in support for all members of the IBM Storwize Family, but it bears repeating here just in case you were interested. IBM is a leading reseller of VMware, so it makes sense for IBM's storage devices to support everything that VMware customers could possibly want in terms of VMware integration. IBM SmartCloud VSC, Storwize Family, and XIV Storage System are no exception!
New IP-based replication driving lower costs for replication. Previously, Metro Mirror, Global Mirror and Global Mirror with Change Volumes were FCP-based, and many clients bought extra equipment to run FCP packets over long-distance IP (known as FCIP). Now, clients can replicate across long distances over native IP connections, without FCIP routers.
In my blog posts covering [Edge 2013 - Day 3 Solution Center], I mentioned that IBM has certified Bridgeworks' SANSlide 150SVCV7K unit that provides a Riverbed-like WAN Optimization for long-distance replication. Now, IBM has fully integrated Bridgeworks' SANSlide network optimization technology directly into Storwize Family!
All members of the Storwize Family will support 1GbE remote disk replication, and this will be extended to 10GbE support at a later date.
The [Storwize V3700] is now offered in 48-volt Direct Current (DC) models with [NEBS/ETSI compliance] for telecommunications companies that require it, and now supports 4TB drives.
When we introduced [IBM SmartCloud Storage Access] in February, it was to offer self-service, automated policy-based provisioning for file storage on the SONAS and Storwize V7000 Unified. Today, we add self-service, automated policy-based provisioning for block storage. The first products to be supported are SmartCloud VSC, the entire Storwize Family, and XIV Storage Systems. In addition to the web portal, the Storage Cloud Integration API enables 3rd party ISV applications to support SmartCloud Storage Access.
Storage admins will no longer need to be bothered with tedious provisioning requests, freeing up more time for them to work on more strategic, transformational projects.
[IBM SmartCloud Virtual Storage Center] was introduced last year, combining SAN Volume Controller, Tivoli Storage Productivity Center, Tivoli FlashCopy Manager and the Storage Analytics Engine into a single license. The initial offering included cross-platform "Tiered Storage Optimization," which recommended which LUNs should be moved from one disk array to another to balance performance against cost. Today, IBM is first to market with an automated version, moving LUNs automatically from one disk array to another.
[SmartCloud Enterprise Object Storage] is switching from 3rd-party Nirvanix to its internal IBM Softlayer. This one involves more in-depth explanation which I will save for another post.
IBM XIV Storage System
As part of the [due diligence] team for IBM to acquire the XIV company back in 2007, I am glad to see how this system has evolved since then. I have certainly [blogged quite a bit on XIV] over the years.
Earlier this year, IBM introduced Hyper-Scale Mobility which allows the storage admin to move LUNs non-disruptively from one XIV frame to another. Today, Hyper-Scale Cross-system Consistency Groups allows you to have snapshots of collections of volumes across multiple XIV frames, up to 3PB of capacity snapped at the same instant in time.
The current supported releases of OpenStack are Folsom and Grizzly, and the newest release is Havana. XIV now offers OpenStack Cinder interfaces at the Havana level.
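For those curious what it takes to wire XIV into OpenStack, a Cinder backend is configured in the cinder.conf file. The fragment below is only a hedged sketch: the exact driver path has changed between OpenStack releases, and the address, credentials, and pool name shown are illustrative assumptions; check the IBM driver documentation for your release.

```ini
# Hypothetical cinder.conf fragment for an XIV backend.
# Verify the driver path against the IBM driver docs for your release.
[DEFAULT]
volume_driver = cinder.volume.drivers.ibm.xiv_ds8k.XIVDS8KDriver
san_ip = 10.0.0.50          ; XIV management IP (example address)
san_login = cinderadmin     ; storage credentials (examples)
san_password = secret
san_clustername = pool1     ; XIV storage pool to allocate volumes from
```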
XIV now offers a RESTful API for monitoring and provisioning. [REST] is a de-facto standard in WEB services and cloud implementations. XIV's RESTful API is a programmatic management interface that follows REST principles:
Resources are identified by global identifiers (URIs)
Data is sent as JSON/XML over HTTP
Manipulations of resources are done by HTTP methods (GET, PUT, POST, DELETE)
The interface is Stateless and Hypertext driven
The interface is universally supported, programming language and platform agnostic. For monitoring, the following GET example could show the list of volumes on a particular XIV storage system:
For provisioning, the following PUT example could create "vol1" on that XIV storage system.
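The GET and PUT examples mentioned above could look something like the sketch below. The management address and URI paths (such as /api/volumes) are illustrative assumptions, not XIV's documented endpoints; consult the XIV RESTful API reference for the real paths. The sketch simply builds the requests, to show the REST shape of the interface.

```python
import json

BASE = "https://xiv1.example.com/api"  # hypothetical XIV management address


def list_volumes_request():
    """Build the GET request that would list the volumes on one XIV system.
    (The URI path is illustrative, not the documented XIV endpoint.)"""
    return ("GET", BASE + "/volumes", None)


def create_volume_request(name, pool, size_gb):
    """Build the PUT request that would create a volume, such as "vol1".
    Per REST conventions, the data is sent as JSON over HTTP."""
    body = json.dumps({"name": name, "pool": pool, "size_gb": size_gb})
    return ("PUT", BASE + "/volumes/" + name, body)


method, url, body = create_volume_request("vol1", "pool1", 17)
print(method, url)   # PUT https://xiv1.example.com/api/volumes/vol1
print(body)
```

In a real script, the (method, url, body) triple would be handed to an HTTP client along with authentication headers; because the interface is stateless, each request carries everything the system needs.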
IBM SmartCloud Storage Access to allow self-service provisioning (see the SmartCloud section above).
Data-at-Rest encryption, using Self-Encrypting Drives (SED). XIV encrypts the data, with the encryption keys managed by IBM's Security Key Lifecycle Manager (SKLM) or Tivoli Key Lifecycle Manager (TKLM). If you have an XIV already, you may already have SED drives ready to use! The XIV will also encrypt the data on the SSD drives used for persistent read cache.
Other new and enhanced offerings
For our mainframe clients, the Virtualization Engine TS7700 now supports 60 percent more capacity, and can now support 8Gbps FICON attachment.
N series N3000, N6000 and N7000 support new disk drive types and sizes, as well as Data ONTAP 8.2 Cluster-Mode. You can now lash up to 16 N series systems together into a SONAS-like single system image.
Cisco MDS 9710 Multilayer Director for IBM® System Networking is a new 16 Gbps SAN director with robust security to support multi-tenancy cloud configurations.
Whew! That is a lot of things to discuss in one post. Since they were all related, I did not want to split it up into parts.
Wrapping up my coverage of the [IBM Edge 2013] conference, I have some photos of people I ran into at the Solutions Center.
Leslie Hattig and Lisa Stone, both account managers for [MarkIII Systems], an IBM Business Partner located in Houston, TX. These ladies are inseparable BFFs, I have never seen one without the other! I first met them at the [Storage Symposium in Chicago] back in 2009.
Stacy Tabor was our Community Manager for the [Storage Community]. This community covers IT Storage challenges, hot topics, architecture and solutions. You'll find industry news, videos, blog discussion threads on timely topics, exclusive analyst white papers and experts opinions. I am a frequent contributor, myself, and thank Stacy for her past service. She helped run a "Social Media Hour" at Edge for all the bloggers like me to get to meet each other.
I could not resist getting a picture with this Las Vegas [Cirque du Soleil] dancer. This was an invitation-only event, sponsored by IBM Business Partners, that I was invited to during the Social Media Hour. (See, it pays to be social!) I think the visual effects of the flag she was waving turned out really well in the picture! And yes, in case you are wondering, that is my favorite grape-flavored beverage (GFB) in my left hand. Posing for this picture was quite the balancing act, but then I am also a certified yoga instructor, so I was able to manage!
Tanaz Sowdagar is an IBM Storage Rep for our Business Development Team. This includes finding other companies to OEM our technology and re-brand it under their own names. I have worked with Tanaz for many years, helping answer questions that potential OEM partners have about our products and technologies for this purpose.
This was Michelle, my Conference Room Monitor. Each room had one, scanning the bar-codes on each badge for all the attendees, keeping count of the number of people for each session, supporting anything the speaker needs, like getting the A/V guy to come help set up the laptop projector.
Since this was Friday, the last day of the conference, I decided to dress casually, consistent with many companies' [Casual Friday] dress code policies. I am wearing the "IBM Edge Rocks" tee-shirt given out at the concert and Solutions Center the first few nights.
Getting this shot right took several takes, as the man I handed my camera to had apparently never used a digital camera before and did not know how to focus.
Finally, leaving Las Vegas, I sat next to Mrs. Joey Clark, wife of "Bulldog" Clark of the Utah band [Blammity Blam]. She also sometimes plays violin with the band. She is a newly-wed, and not sure if Joey is her name, or her husband's name. (Joey, if you are out there, and want me to correctly identify you, please write a comment in the section below.)
What I have learned however, is that if a beautiful girl is sitting next to me on the plane, she will either talk to me the entire flight, implying that she is single, or mention within the first 30 seconds of conversation that she is married. Sadly for me, it was the latter.
(We were both flying on to Dallas, TX, whereupon she was going to visit her parents in Florida, and I was on my way to Sao Paulo, Brazil to get stuck there amongst the protesters in what is now called the [V for Vinegar movement], but I will save that for another blog post!)
Well, that wraps up my coverage of Edge 2013. I am sorry it took so many months to cover all the material, but I did not want to have it go uncovered much longer.
Next year's [Edge 2014] is expected to be bigger and better. It will be in Las Vegas again, but this time at the Venetian Hotel, May 19-23, 2014. I plan to be there!
Monday marked the first official day of [IBM Edge 2013] conference. This is actually three conferences in one: Executive Edge for the high-level executives, Winning Edge for the Business Partners, and Technical Edge for storage administrators and IT manager/directors. I attended the latter.
The General Session was kicked off by an awesome drumbeat-heavy song performed by a band from North Carolina called [Delta Rae]. Their use of drums reminded me of Adam Ant.
Deon Newman, IBM VP of Marketing, Systems and Technology Group, North America, served as today's master of ceremonies. He was pleased to announce there were more than 4,700 attendees at this event -- representing more than 60 countries -- a huge increase over the attendance we had last year. Here are my notes of the opening General Session:
Stephen Leonard, IBM General Manager, Sales, Systems & Technology Group
Consumers expect an always-on technology experience. We, as consumers, are leaving a trail of data that is getting wider and wider every day. Data is the new "natural resource," but one that is plentiful and never-ending.
In 1996, about 29 percent of IT spend was for administration and management; today it has grown to 68 percent. Some 34 percent of IT projects deploy late.
Stephen emphasized the themes of Smarter Computing: (a) systems that are designed for the data, (b) software-defined environments, that are (c) open and collaborative.
Stephen cited a customer example from [Jaguar Land Rover], a manufacturer of sporty automobiles and rugged 4x4 vehicles. IBM developed a ["Virtual Dealership"] for them. Rather than trying to maintain additional physical bricks-and-mortar facilities, which can be expensive to staff and fill with vehicles across their wide portfolio, the virtual dealership allows prospective customers to try out vehicles through simulation. This virtual dealership could be taken to where prospective clients are, such as a sporting event or shopping mall.
Ed Walsh, IBM VP of Marketing, System Storage and Networking
Ed presented the "data economics" of all-Flash arrays. IBM recently acquired Texas Memory Systems, and renamed the RamSan products to IBM FlashSystem, and committed to invest an additional $1 Billion US dollars in flash technologies.
On a $-per-IOPS basis, IBM FlashSystem can deliver 30 percent lower total cost of ownership (TCO) than disk-based alternatives. The cost of Flash is offset by 17 percent fewer servers from having higher CPU utilization rates, resulting in 38 percent lower software license fees. Flash is also more efficient, with 74 percent lower environmental costs, and 35 percent lower operational support costs. For many situations, Flash is the solution for poorly written software applications.
Ed also mentioned IBM's strong support for open source and open standards. Over the past 15 years, IBM has been a major contributor to open source efforts like Linux, Eclipse and Apache. IBM continues that tradition, with contributions to OpenStack and Hadoop.
Without going into any details, Ed also hinted that IBM announced 65 new or refreshed products in Storage, Networking and PureSystems. The details of each announcement would be explained during the break-out sessions during the week.
Charles Long, Founder and CEO of Centerline Digital
[Centerline Digital] does computer-generated animations in support of corporate marketing efforts.
(FTC disclosure: I work for IBM, and have worked closely with Centerline Digital marketing agency when I was the chief marketing strategist for System Storage back in 2006-2007. I was not paid or provided any products or services to mention any of the clients mentioned in this post.)
Charles indicated that internet technologies have converted "analog dollars to digital pennies." Using IBM PureFlex with Storwize V7000 storage, real-time compression, and Tivoli Endpoint Manager, Centerline was able to drastically improve their business. He feels the old joke of "Better, Faster, Cheaper - Choose Any Two!" no longer applies with IBM solutions!
Ambuj Goyal, IBM General Manager, System Storage and Networking
Formerly my fifth-line manager in charge of Software and Systems, Ambuj switched to be the General Manager of System Storage and Networking group earlier this year.
In his former roles, Ambuj managed software and hardware product lines, but he feels storage is a completely different animal. In the past, clients focused on choosing the best servers, then chose their storage as an afterthought. Today, Ambuj feels that processors are now a commodity, and that storage is becoming the forethought.
Ambuj also highlighted the evolution of IBM's Software-Defined Environment:
In 2003, IBM introduced the SAN Volume Controller, a storage hypervisor. Now, over 10,000 clients enjoy the benefits of a Software-Defined Environment using SAN Volume Controller.
SmartCloud Virtual Storage Center represents the "third generation" for policy-driven management, combining SAN Volume Controller, Tivoli Storage Productivity Center, FlashCopy Manager and the Storage Analytics Engine.
IBM is trying to help people keep their business-critical apps running securely, start quickly, add value and functions at scale, and leverage data-intensive solutions to help drive new business and gain customer insight.
Joseph Balsamo, VP of Platform Engineering at Prudential Insurance
While the IT department of [Prudential Insurance] is focused on the three V's -- Volume, Velocity and Variety -- Joe is more focused on solutions, status and cost. His mission was to strengthen the role of IT as a partner through business aligned services. Prudential has deployed XIV, N series, SAN Volume Controller (SVC) and Storwize V7000 disk systems, with the following results:
Reduced their $-per-IOPS by 75 percent
No additional storage administrators
85 percent utilization through thick-to-thin migrations
Reduced their $-per-MB by 50 percent
Reduced their 72-hour RPO to 15 seconds
These benefits were achieved over the past 24 months of deployment.
Paulo Carvao, IBM Vice President, North America Systems & Technology Group
Paulo is Deon Newman's boss. He presented BlueInsight, IBM's internal "Business Analytics" cloud accessible by over 200,000 users, with over 1 PB of content.
Inside IBM, the deployment of a Smarter Infrastructure has allowed 25 percent capacity growth on a flat IT budget, using 30,000 fewer Megawatts of power and 103,000 fewer square feet of floor space.
Why is this significant? Today's disks write each bit of information across roughly 1,200 atoms, and the smallest number of atoms shown to retain a stable bit is 12, so sometime in the next 7 to 10 years, the improvements in magnetic bit density for disk will stop.
For silicon chips, the smallest practical feature is 7 nanometers, about 35 atoms wide. We are quickly approaching that limit also.
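A quick back-of-envelope calculation shows how the 7-to-10-year estimate for disk might fall out of those numbers. The doubling period below is my own assumption (roughly in the spirit of historical areal-density trends), not a figure from the talk.

```python
import math

atoms_per_bit_today = 1200   # figure quoted above for today's disk
atoms_per_bit_limit = 12     # smallest stable bit demonstrated in research

# Roughly 100x density headroom remains before hitting the atomic limit.
headroom = atoms_per_bit_today / atoms_per_bit_limit

# Assume areal density doubles about every 18 months (an assumption,
# not a figure from the keynote). Then the headroom lasts about:
doublings = math.log2(headroom)       # ~6.6 doublings
years_left = doublings * 1.5          # ~10 years

print(round(years_left, 1))
```

With a slightly slower doubling period the estimate stretches past a decade, which is consistent with the "next 7 to 10 years" framing above.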
I can already tell that it's going to be a busy week! Follow me on twitter (@az990tony) and tag your posts and tweets with #IBMedge hashtag.
Continuing this week's theme about the future, fellow blogger, published author, and futurist David Houle is coming out with a new book this month titled [Entering the Shift Age]. This is a follow-on to his book, [The Shift Age].
Since this book cites IBM studies explicitly, his PR department asked me to review it. If you are an aspiring author that has a book you want me review, and it relates to the topics my blog covers like Cloud, Big Data, storage, and the explosion of information, feel free to send me a copy!
(FTC Disclosure: I work for IBM. I was not paid by anyone to mention this book on my blog. I was provided an "Uncorrected Advanced Copy" of this book at no cost to me for this review. I do not know David Houle personally, have not read any of his prior works, nor have I ever seen him speak at public events. This post is neither a paid nor celebrity endorsement of this author, his book, nor any other books by this author.)
First, let's get a few details out of the way:
Title: Entering the Shift Age, 284 pages
Author: David Houle, futurist
Genre: Non-fiction, trends and predictions
Publisher: Sourcebooks, Inc.
Publish date: January 2013
As I mentioned in my post [Historians vs. Futurists], there is only one past, but there are many potential futures. There seems to be as many futurists out there as there are potential futures. I suspect not everyone will agree with all that David has written. However, this reminds me of one of my favorite quotes:
"When two futurists always agree, one is no longer necessary." -- old Italian adage
In his book, David asks a series of thought-provoking questions, then answers them with his views and opinions on how the future will roll out:
Is humanity now entering a new age that is different than the Information age?
If so, what should we call it?
Which forces are driving this new age?
How will this impact various aspects and institutions of society?
David feels humanity is indeed entering a new age, which he calls the Shift Age. This is driven by three forces: the shift to globalization of culture and politics, the flow of power and influence to individuals, and the acceleration of electronic connectedness.
In a sense, David is like a hunter-gatherer from the Stone age, hunting down trends and gathering ideas from others. In much the same way my compost brings renewed purpose to the rinds and pits of my fruits and vegetables, David's book does a good job paraphrasing the works of many of today's leading futurists.
David predicts the decade we are now in, the 2010's, will mark the end of the Information age, a transition period to this new era, that will lead to transformations in government, education, health, technology, and energy.
Over the past two weeks, I had time to enjoy a variety of movies. Several of them had stories wrapped around key moments of transition.
"Gone with the Wind", as well as the new offering "Lincoln" from Steven Spielberg. Both are set in the 1860's, the time of the [American Civil War], pitting the Industrial-age forces of the North, against the Agricultural-age economy of the South. This time saw the transition from slavery to freedom.
"Doctor Zhivago", set in the time of World War I, on the German-Russian front, as well as the Russian Revolution of 1917, and the resulting Civil War between the Red Guard and the White Army. This saw the transition from a Russian government ruled by Czars, to one ruled by the people through Communism.
"Lawrence of Arabia", also set in the time of World War I, but south in Arabia. T. E. Lawrence was able to bring several warring Arab tribes together to defeat the Turks, and was a key figure in the transition to an Arab National Council.
Some might call these completely unexpected [Black Swan] events, while others might feel they are merely fortunate (or misfortunate) sequences of events that led to inevitable social change. Has something happened, or will something happen later this decade, that will drive us to leave the Information Age?
David's previous book, The Shift Age, was published back in 2007, and a lot has happened in the past six years: a global financial melt-down recession; the Arab Spring uprisings in the Middle East; Barack Obama was elected and re-elected; man-made climate change in the form of hurricanes, tsunamis and superstorms hit various parts of the world; brush fires lit up Australia, and BP's Deepwater Horizon oil rig exploded off the Gulf coast, just to name a few.
David's new book reflects the impact of these recent events, from discussions on his [Evolutionshift] blog, to Q&A sessions he has after his public speaking presentations. For those who are not interested in the wide array of topics he covers in this one book, David also offers [a dozen different mini-eBooks] that cover specific topics like [Technology, Energy and Health].
My Rating: Moist and Flaky
Who should read this book: If you are a time-traveler from 1975 that came to this decade to learn all about what your future has in store, but can only select one book to read before you zoom back to your own time period, this would be the book I recommend.
I do not want to imply this is a quick read, or one that you can't put down once you start reading it. Just like you should not gulp down a full bottle of cheap Vodka in one sitting, this book should be read over a series of days, as I did, so that you can mull over in your mind the different points and thoughts he is trying to convey.
If you store your VMware bits on external SAN or NAS-based disk storage systems, this post is for you. The subject of the post, VM Volumes, is a potential storage management game changer!
Fellow blogger Stephen Foskett mentioned VM Volumes in his [Introducing VMware vSphere Storage Features] presentation at IBM Edge 2012 conference. His session on VMware's storage features included VMware APIs for Array Integration (VAAI), VMware Array Storage Awareness (VASA), vCenter plug-ins, and a new concept he called "vVol", now more formally known as VM Volumes. This post provides a follow-up to this, describing the VM Volumes concepts, architecture, and value proposition.
"VM Volumes" is a future architecture that VMware is developing in collaboration with IBM and other major storage system vendors. So far, very little information about VM Volumes has been released. At VMworld 2012 Barcelona, VMware highlights VM Volumes for the first time and IBM demonstrates VM Volumes with the IBM XIV Storage System (more about this demo below). VM Volumes is worth your attention -- when it becomes generally available, everyone using storage arrays will have to reconsider their storage management practices in a VMware environment -- no exaggeration!
But enough drama. What is this all about?
(Note: for the sake of clarity, this post refers to block storage only. However, the VM Volumes feature applies to NAS systems as well. Special thanks to Yossi Siles and the XIV development team for their help on this post!)
The VM Volumes concept is simple: VM disks are mapped directly to special volumes on a storage array system, as opposed to storing VMDK files on a vSphere datastore.
The following images illustrate the differences between the two storage management paradigms.
You may still be asking yourself: bottom line, how will I benefit from VM Volumes?
Well, take a VM snapshot for example. With VM Volumes, vSphere can simply offload the operation by invoking a hardware snapshot of the hardware volume. This has significant implications:
VM-Granularity: Only the right VMs are copied (with datastores, backing up or cloning individual VMs from a hardware snapshot of a datastore would require more complex configuration, tools and work)
Hardware Offload: No ESXi server resources are consumed
XIV advantage: With XIV, snapshots consume no space upfront and are completed instantly.
Here's the first takeaway: With VM Volumes, advanced storage services (which cost a lot when you buy a storage array), will become available at an individual VM level. In a cloud world, this means that applications can be provisioned easily with advanced storage services, such as snapshots and mirroring.
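The granularity difference can be made concrete with a toy model. The class and names below are purely illustrative, not a real vSphere or XIV API; the point is that with VM Volumes each VM disk is its own array volume, so a hardware snapshot naturally captures exactly one VM.

```python
class Array:
    """Toy storage array: a snapshot is a cheap, pointer-based metadata
    operation on one volume (as with XIV, no space is consumed upfront)."""
    def __init__(self):
        self.snapshots = []

    def snapshot(self, volume):
        self.snapshots.append(volume)     # O(1); no host resources used
        return "snap-of-" + volume


array = Array()

# Datastore model: unrelated VMs share one big volume, so a hardware
# snapshot captures all of them, and extracting a single VM afterwards
# needs extra configuration and tooling.
datastore_vms = ["vm1", "vm2", "vm3"]     # all live on "datastore1"
ds_snap = array.snapshot("datastore1")    # snaps every VM at once

# VM Volumes model: vm2's disk is its own volume, so only the right VM
# is copied, entirely offloaded to the array.
vm_snap = array.snapshot("vmvol-vm2")

print(ds_snap, vm_snap)
```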
Now, let's take a closer look at another relevant scenario where VM Volumes will make a lot of difference - provisioning an application with special mirroring requirements:
VM Volumes case: The application is ordered via the private cloud portal. The requestor checks a box requesting an asynchronous mirror. He changes the default RPO for his needs. When the request is submitted, the process wraps up automatically: Volumes are created on one of the storage arrays, configured with a mirror and RPO exactly as specified. A few minutes later, the requestor receives an automated email pointing to the application virtual machine.
Datastores case #1: As may be expected, a datastore that is mirrored with the special RPO does not exist. As a result, the automated workflow sets a pending status on the request, creates an urgent ticket to a VMware administrator and aborts. When the VMware admin handles that ticket, she re-assigns the ticket to the storage administrator, asking for a new volume which is mirrored with the special RPO, and mapped to the right ESXi cluster. The next day, the volume is created; the ticket is re-assigned back to the VMware admin, pointing to the new LUN. The VMware administrator follows up and creates the datastore on top of it. Since the automated workflow was aborted, the admin re-assigns the ticket to the cloud administrator, who sometime later completes the application provisioning manually.
Datastores case #2: Luckily for the requestor, a datastore that is mirrored with the special RPO does exist. However, that particular datastore is consuming space from a high performance XIV Gen3 system with SSD caching, while the application does not require that level of performance, so the workflow requires a storage administrator approval. The approval is given to save time, but the storage administrator opens a ticket for himself to create a new volume on another array, as well as a follow-up ticket for the VMware admin to create a new datastore using the new volume and migrate the application to the other datastore. In this case, provisioning was relatively rapid, but required manual follow up, involving the two administrators.
Here's the second takeaway: With VM Volumes, management is simplified, and end-to-end automation is much more applicable. The reason is that there are no datastores. Datastores physically group VMs that may otherwise be totally unrelated, and require close coordination between storage and VMware administrators.
Now, the above mainly focuses on the VMware or cloud administrator perspective. How does VM Volumes impact storage management?
VMs are the new hosts: Today, storage administrators have visibility of physical hosts in their management environment. In a non-virtualized environment, this visibility is very helpful. The storage administrator knows exactly which applications in a data center are storage-provisioned or affected by storage management operations because the applications are running on well-known hosts. However, in virtualized environments the association of an application to a physical host is temporary. To keep at least the same level of visibility as in physical environments, VMs should become part of the storage management environment, like hosts. Hosts are still interesting, for example to manage physical storage mapping, but without VM visibility, storage administrators will know less about their operation than they are used to, or need to. VM Volumes enables such visibility, because volumes are provided to individual VMs. The XIV VM Volumes demonstration at VMworld Barcelona, although experimental, shows a view of VM volumes, in XIV's management GUI.
Here's a screenshot:
That's not all!
Storage Profiles and Storage Containers: A Storage Profile is a vSphere specification of a set of storage services. A storage profile can include properties like thin or thick provisioning, mirroring definition, snapshot policy, minimum IOPS, etc.
Storage administrators define a portfolio of supported storage services, maintained as a set of storage profiles, and published (via VASA integration) to vSphere.
VMware or cloud administrators define the required storage profiles for specific applications
VMware and storage administrators need to coordinate the typical storage requirements and the automatically-available storage services. When a request to provision an application is made, the associated storage profiles are matched against the published set of available storage profiles. The matching published profiles will be used to create volumes, which will be bound to the application VMs. All that will happen automatically.
Note that when a VM is created today, a datastore must be specified. With VM Volumes, a new management entity called Storage Container (also known as Capacity Pool) replaces the use of datastore as a management object. Each Storage Container exposes a subset of the available storage profiles, as appropriate. The storage container also has a capacity quota.
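The matching flow described above can be sketched in a few lines. The profile fields, names, and container shape below are my own illustrative assumptions, not VMware's eventual VM Volumes or VASA objects.

```python
# Storage admins publish a portfolio of services as storage profiles
# (hypothetical field names for illustration).
published_profiles = [
    {"name": "gold",   "thin": True, "mirrored": True,  "min_iops": 5000},
    {"name": "silver", "thin": True, "mirrored": False, "min_iops": 1000},
]

# A Storage Container exposes a subset of the published profiles,
# plus a capacity quota.
container = {"quota_tb": 50, "profiles": published_profiles}


def match_profile(required, container_profiles):
    """Return the first published profile that satisfies the application's
    storage requirements, or None if the request cannot be met."""
    for p in container_profiles:
        if (p["mirrored"] == required["mirrored"]
                and p["min_iops"] >= required["min_iops"]):
            return p
    return None


# Provisioning request: the application needs a mirrored volume with
# at least 3000 IOPS. The matching profile is chosen automatically.
app_needs = {"mirrored": True, "min_iops": 3000}
chosen = match_profile(app_needs, container["profiles"])
print(chosen["name"])   # a volume would now be created and bound to the VM
```

In the real architecture this matching happens inside vSphere against profiles published via VASA; the sketch only shows why no custom scripting is needed once profiles and containers exist.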
Here are some more takeaways:
New way to interface vSphere and storage management: Storage administrators structure and publish storage services to vSphere via storage profiles and storage containers.
Automated provisioning, out of the box: The provisioning process automatically matches application-required storage profiles against storage profiles available from the specified storage containers. There is no need to build custom scripts and custom processes to automate storage provisioning to applications.
The XIV advantage:
XIV services are very simple to define and publish. The typical number of available storage profiles would be low. It would also be easy to define application storage profiles.
XIV provides consistent high performance, up to very high capacity utilization levels, without any maintenance. As a result, automated provisioning (which inherently implies less human attention) will not create an elevated risk of reduced performance.
Note: A storage vendor VASA provider is required to support VM Volumes, storage profiles, storage containers and automated provisioning. The IBM Storage VASA provider runs as a standalone service that needs to be deployed on a server.
To summarize the VM Volumes value proposition:
Streamline cloud operation by providing storage services at VM and application level, enabling end-to-end provisioning automation, and unifying VMware and storage administration around volumes and VMs.
Increase storage array ROI, improve vSphere scalability and response time, and reduce cloud provisioning lag, by offloading VM-level provisioning, failover, backup, storage migration, storage space recycling, monitoring, and more, to the storage array, using advanced storage operations such as mirroring and snapshots.
Simplify the adoption of VM Volumes using XIV, with smaller and simpler sets of storage profiles. Apply XIV's supremely fast cloning to individual VMs, and keep automation risks at bay with XIV's consistent high performance.
Until you can get your hands on a VM Volumes-capable environment, the VMware and IBM developer groups will be collaborating and working hard to realize this game-changing feature. I expect this information to trigger questions and comments, and our development teams are eager to learn from them and respond. Enter your comments below, and I will try to answer them and use them to help shape the next post on this subject. There is much more to be told.
This month, I am pleased to announce the new [IBM STG Executive Briefing Center] website, a huge improvement over the website we had been using for the past two years. STG refers to IBM's Systems and Technology Group, the division that focuses on servers, storage, switches, and the system software that makes them run. This new website is for the dozen STG EBCs that span the globe. The new website reminds me of this famous quote:
"Perfection is achieved, not when there is nothing left to add, but when there is nothing left to take away"
-- Antoine de Saint-Exupéry
Let's take a quick look at what makes it so much better.
The previous website required registration. At every briefing, those of us who work in the EBCs had to pass around a sign-up sheet to collect email addresses from each attendee, so that we could send them an invitation to register for the site. We often had a hard time reading people's handwriting, and some email invitations bounced as a result.
Inspired by self-service gas stations, automated teller machines, and the many self-service portals of Cloud Computing, the new website has everything up-front, without registration. IBM Business Partners and sales representatives can easily request a briefing at any of the dozen briefing centers represented!
IBM-managed and IBM-hosted
We had a difficult time explaining to our attendees why our previous website was hosted on a lone machine and maintained by a third party. Think about it, IBM manages the data centers of over 400 clients. IBM has provided web hosting to the most mission critical workloads, with high levels of availability and reliability, and is recognized as one of the "Big 5" Cloud companies. I have done web design myself in my career, and we were terribly disappointed with the third party chosen to create and maintain our previous website, constantly having to point out errors in their HTML and CSS.
For the new website, IBM took back control. Staff from each EBC, myself included, came up with a simple page to bring the essence of each location to life. Special thanks to my colleague Hal Jennings, from the Austin EBC, for bringing this all together!
Despite two years of manually registering attendees to use the previous website, Google Analytics showed that few people visited, and the few that did spent little time exploring the vast repository of content.
The new website is vastly simpler. The front page points to all twelve EBCs, and a single mouse click gets you to the location you are interested in, with all the details you need to make a decision to book a briefing, and the contact information to make it happen.
Elimination of Wasted and Duplicate Effort
In the previous website, we spent as much as 15 hours just to create, voice over, edit and produce a single 15-minute recorded presentation. Less than six percent of the previous website visitors watched more than five minutes of these videos, making us feel that most of our effort was wasted.
The EBC staff kept wasting their time, month after month, thanks to all-stick, no-carrot tactics that mandated minimum contributions of more and more content that nobody was ever looking at. Even more disappointing, much of our work duplicated the formal responsibilities of our IBM Marketing team. They weren't happy about this either, and it caused confusion between the roles of our two teams.
Finally, we said enough was enough! The new STG EBC website is a marvel in minimalism. If you want to see presentations, videos, expert profiles, or partake in on-going conversations, I welcome you to visit the [IBM Expert Network], the [IBM Storage YouTube Channel], and the [Storage Community] where they belong.
Can Structured Query Language [SQL] be considered a storage protocol?
Several months ago, I was asked to review a book on SQL, titled appropriately enough "The Complete Idiot's Guide to SQL", by Steven Holzner, Ph.D. As a published author myself, I get a lot of these requests, and I agreed in this case, given that SQL was invented by IBM, and is a good fundamental skill to have for Business Analytics and Database Management.
(FTC Disclosure: I work for IBM but was not part of the SQL development team. I was provided a copy of this book for free to review it. I was not paid to mention this book, nor told what to write. I do not know the author personally nor anyone that works for his publicist. All of my opinions of the book in this blog post are my own.)
Despite an agreed-upon standard for SQL, each relational database management system (RDBMS) has decided to customize it for its own purposes. First, SQL can be quite wordy, so some RDBMS have made certain keywords optional. Second, RDBMS offer extra features by adding keywords or programming language extensions, options or parameters above and beyond what the SQL standard calls for. Third, the SQL standard has changed over the years, and some RDBMS have opted to keep some backward compatibility with their prior releases. Fourth, some RDBMS want to discourage people from easily porting code from one RDBMS to another, known in the industry as vendor lock-in.
Throughout my career, I have managed various databases, including Informix, DB2, MySQL, and Microsoft SQL Server, so I am quite familiar with the differences in SQL and the problems and implications that arise.
Most authors who want to write about SQL typically choose between (a) sticking to the SQL standard, and expecting the reader to customize the examples to their particular RDBMS; or (b) sticking to a single RDBMS implementation, and offering examples that may not work on other RDBMS.
I found the book "The Complete Idiot's Guide to SQL" covered the basics quite well, but with an odd twist. The basics include creating databases and tables, defining columns, inserting and deleting rows, updating fields, and performing queries and joins. The odd twist is that Steven does not make the typical choice above, but rather shows how the various RDBMS differ from standard SQL syntax, with actual working examples for different RDBMS.
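As a runnable illustration of those basics, here is a short Python script using the standard library's sqlite3 module. SQLite is simply a convenient RDBMS for trying things out; the table names and sample data are my own, not taken from the book.

```python
import sqlite3

# Exercise the SQL basics the book covers: CREATE TABLE, INSERT,
# and a two-table JOIN, using an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("""CREATE TABLE books (
                   id INTEGER PRIMARY KEY,
                   title TEXT,
                   author_id INTEGER REFERENCES authors(id))""")

cur.execute("INSERT INTO authors (id, name) VALUES (1, 'Steven Holzner')")
# Note the doubled single quote, standard SQL's way to escape an apostrophe.
cur.execute("INSERT INTO books (id, title, author_id) "
            "VALUES (1, 'The Complete Idiot''s Guide to SQL', 1)")

# A simple inner join: which titles belong to which authors?
cur.execute("""SELECT a.name, b.title
               FROM authors a
               JOIN books b ON b.author_id = a.id""")
rows = cur.fetchall()
print(rows)
conn.close()
```

Even this tiny example touches one of the dialect issues discussed above: SQLite accepts the standard doubled-quote escape, while some RDBMS also allow backslash escapes that are not standard SQL.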
You might be thinking to yourself that only an idiot would work in a place that required knowledge of multiple RDBMS. The sad truth is that most of the medium and large companies I speak to have two or more in production, either through acquisitions or, in some cases, through individual business units or departments implementing their own via [Shadow IT].
(For those who want to learn SQL and try out the examples in this book, IBM offers a free version of DB2 called [DB2 Express-C] that runs on Windows, Linux, Mac OS, and Solaris.)
Last week, while I was in Russia for the [Edge Comes to You] event, I was interviewed by a journalist from [Storage News] on various topics. One question struck me as strange. He asked why I did not mention IBM's acquisition of Netezza in my keynote session about storage. I had to explain that Netezza is not in the IBM System Storage product line; it is in a different group, under Business Analytics, where it belongs.
While it is true that Netezza can store data, because it has storage components inside, the same could also be said about nearly every other piece of IT equipment, from servers with internal disk, to digital cameras, smart phones and portable music players. They can all be considered storage devices, but doing so would undermine what differentiates them from one another.
Which brings me back to my original question: Should we consider SQL to be a storage protocol? For the longest time, IT folks only considered block-based interfaces as storage protocols, then we added file-based interfaces like CIFS and NFS, and we also have object-based interfaces, such as IBM's Object Access Method (OAM) and the System Storage Archive Manager (SSAM) API. Could SQL interfaces be the next storage protocol?
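To illustrate what "SQL as a storage protocol" might look like, here is a toy sketch, my own illustration rather than any IBM product interface. It uses SQL statements the way a block or object protocol uses read/write verbs: storing and retrieving opaque binary objects by key, with SQL as the only access path.

```python
import sqlite3

# Toy "object store" whose only access protocol is SQL:
# PUT becomes INSERT OR REPLACE, GET becomes SELECT.
# Block and object protocols expose read/write verbs; here the verbs are SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE objects (key TEXT PRIMARY KEY, payload BLOB)")

def put(key, data):
    """Store an opaque binary object under a key."""
    conn.execute("INSERT OR REPLACE INTO objects VALUES (?, ?)", (key, data))

def get(key):
    """Retrieve the object, or None if the key does not exist."""
    row = conn.execute("SELECT payload FROM objects WHERE key = ?",
                       (key,)).fetchone()
    return row[0] if row else None

put("photo-001", b"\x89PNG...binary data...")
print(get("photo-001"))
```

Seen this way, the question is less far-fetched than it sounds: the caller neither knows nor cares how the bytes are laid out, only that the SQL verbs store and return them, which is exactly the contract a storage protocol provides.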
Let me know what you think on this. Leave a comment below.