Tony Pearson is a Master Inventor and Senior IT Architect for the IBM Storage product line at the
IBM Systems Client Experience Center in Tucson Arizona, and featured contributor
to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is
author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
My books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
SAP HANA is an in-memory, relational database management system supported on Linux for x86 and POWER servers. The "HANA" acronym is short for "High-Performance Analytic Appliance" software. By keeping the data in memory, analytics and queries can be performed much faster than from traditional disk repositories.
Server memory, however, is volatile, so the data must also reside on persistent storage such as flash or disk drives. SAP has certified several configurations, some of which involve IBM Spectrum Scale solutions. I will use the following graphic to explain the three configurations.
Linux on x86-64 with Spectrum Scale FPO
With SAP HANA on Lenovo x86-64 servers, SAP has certified internal flash or disk drives running IBM Spectrum Scale in "File Placement Optimization" (FPO) mode. FPO provides a shared-nothing architecture that matches the SAP HANA architecture. IBM Spectrum Protect can back up this configuration, providing data protection and disaster recovery support.
Linux on POWER with Elastic Storage Server
With SAP HANA on POWER servers, SAP has certified the external Elastic Storage Server (ESS). Not only is POWER a better platform than x86-64 for running SAP HANA, but the Elastic Storage Server's erasure coding also provides excellent rebuild times and storage efficiency.
The ESS is a pre-built system that combines IBM Spectrum Scale software with server and storage hardware. IBM Spectrum Protect can also back up this configuration, providing data protection and disaster recovery support.
Block-level Storage over Storage Area Network (SAN)
Various IBM block-level devices are supported for SAP HANA on both Linux on x86-64 and Linux on POWER. Unfortunately, to date SAP has certified only the XFS file system for this approach. The problem many clients mention with this configuration is the lack of end-to-end backup and disaster recovery, which the Spectrum Scale configurations in the previous two examples address.
Other combinations, such as SAP HANA on POWER with Spectrum Scale FPO, or on x86-64 servers with Elastic Storage Server, are not SAP-certified and are not directly supported without SAP's explicit approval.
IBM and SAP have worked closely together for many years, and I am glad to see SAP HANA and IBM Spectrum Scale based solutions continue this tradition.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
Last week, IBM announced a variety of tape system enhancements.
IBM TS7760 Virtual Tape System
The IBM TS7760 combines the benefits of the previous TS7720 and TS7740 offerings. Those with IBM z System mainframes will recognize both. The TS7740 has a small disk cache that pretends to be a tape library, with enough capacity to hold a few hours' to a few days' worth of data; after that, the data is moved to physical tape. The TS7720 is an all-disk solution, with up to 1 PB of disk holding weeks' or months' worth of data, but no tape attachment. Previously, IBM announced the TS7720T, a high-capacity offering with tape attachment. The new TS7760 replaces all three of these, powered by the latest POWER8 processor.
In addition to all the features available in the former models, the new TS7760 uses 4TB drives instead of 3TB drives, resulting in a maximum capacity of 1.3PB of disk capacity before compression. The disks are encrypted and protected by distributed RAID-6 referred to as "Dynamic Disk Pooling". While tape attachment is still optional, it supports both IBM TS3500 and TS4500 tape libraries.
This week, I am attending the [InterConnect Conference] in Las Vegas, Feb 21-25, 2016. This is IBM's premier Cloud & Mobile conference for the year.
Monday afternoon, I attended various break-out sessions.
1441A Data Resiliency: Data-Driven Analytics and Beyond
Ramani Routray (IBM) and B.J. Klingenberg, IBM, co-presented. Aggressive and differentiated Recovery Point Objectives (RPOs) and Recovery Time Objectives (RTOs) create data protection silos. Resiliency for an enterprise data center is often achieved via redundant components, periodic backup, continuous replication and/or highly available architectures. With the emergence of cloud delivery models, Backup-as-a-Service and DR-as-a-Service have gained wide acceptance. This uniquely challenges service providers to quickly analyze all the metadata from these environments to enable problem determination, fault isolation, capacity management, SLA violation, etc. Learn about a big data analytics framework that analyzes millions of resiliency metadata tuples in near real-time to generate actionable insights.
1267A Prudential and IBM: Integrating Application and Storage Management to Drive Cloud Service Levels
This was a 50/50 presentation, with the first half covered by client OJ Dua, supported by his boss, Scott Singerline, both from Prudential Financial.
Prudential described their successful approach for optimizing storage and improving service. Experts from Prudential Financial described their experiences integrating IBM Spectrum Control v5.2 (formerly IBM Tivoli Storage Productivity Center) inventory, availability, and performance data with Tivoli Application Dependency Discovery Manager (TADDM) and Netcool OMNIbus to improve services for core business applications.
(Over 10 years ago, I was the chief architect for IBM TotalStorage Productivity Center v1. The clients from Prudential could not emphasize enough how much better Spectrum Control v5.2 was compared to their experiences with the prior versions. It has come a long way, baby!)
The second half was covered by Brian Sherman, IBM Distinguished Engineer. He described how related IBM Spectrum Storage solutions are transforming storage. IBM Spectrum Storage solutions deliver reliable, flexible service levels at a significantly lower cost than traditional storage.
6523A VersaStack: Because Time and Cost are of the Essence for Cloud Service Providers
This was more of a 25/75 presentation. Ian Shave, IBM Business Line Executive for Spectrum Virtualize and VersaStack, kicked off the session with a quick overview of VersaStack, which combines Cisco UCS x86 blade servers and Cisco network switches with IBM Spectrum Virtualize storage solutions. This is often referred to as "Integrated Infrastructure" or "Converged Systems". While Integrated Infrastructure adoption overall is growing at 15 percent, storage within Integrated Infrastructure solutions is growing faster, at 44 percent.
VersaStack can be implemented as follows:
Cisco UCS Mini with Storwize V5000, either iSCSI or FCP
Cisco UCS with Storwize V7000 (block-only) or V7000 Unified (file and block access)
Cisco UCS with FlashSystem V9000, for high-speed, low-latency application requirements
John Buskermolen and Dan Simunic, both from i-Virtualize, covered their experiences with VersaStack. Founded in 2009, i-Virtualize is a Managed Services Provider (MSP), Cloud Service Provider (CSP) and value-added reseller, for clients in both USA and Canada, growing 41 percent year over year.
By implementing VersaStack, an integrated infrastructure solution that combines Cisco UCS Integrated Infrastructure with IBM storage built on IBM Spectrum Virtualize, they reduced time to market from weeks to days, cut new-environment provisioning time from days to minutes, and simplified management, while delivering extraordinary levels of performance and efficiency.
Why did i-Virtualize choose VersaStack?
79 percent reduced provisioning time
60 percent lower costs
10x performance acceleration
Higher flexibility, with clustered systems that scale up and out
Lets i-Virtualize administrators and management sleep at night
47 percent capacity savings with Real-time Compression
IBM Spectrum Virtualize HyperSwap for high availability
Storage-based replication across multiple datacenters
Cisco UCS director provides single-pane-of-glass management
Their latest project is called VIXO, a Cloud Managed Services Console which stacks Cloud Foundry, Docker, OpenStack, VMware and other 3rd party services on top of their VersaStack. This is a collaboration with Oxbury Group.
VersaStack is an ideal solution for Cloud Service Providers (CSP) or for any client interested in "cloud-in-a-box."
3690A Meet the Experts on IBM Cloud Storage Services
Ann Corrao and Mike Fork, both from IBM, presented IBM's various storage capabilities on SoftLayer and Cloud Managed Services (CMS). Of IBM's 43 Cloud datacenters, 28 are SoftLayer, and the other 15 are CMS.
For block-based volume storage, SoftLayer offers "Endurance" and "Performance". These are backed by multi-pathed iSCSI volumes.
With the "Endurance" option, you purchase a fixed I/O density: 0.5 IOPS/GB, 1 IOPS/GB or 4 IOPS/GB. If you choose a 100 GB volume at the 4 IOPS/GB tier, you are guaranteed 400 IOPS. Typical business applications like database or email consume about 0.7 IOPS/GB.
With the "Performance" option, you pick the IOPS for your volume, up to 6,000 IOPS, and then pick the size to match your needs, say 100 GB. This is best suited for clients who know their application well enough to specify this.
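The arithmetic behind both tiers is simple. Here is a quick sketch (the function names are mine for illustration, not any SoftLayer API):

```python
def endurance_iops(size_gb, iops_per_gb):
    """Endurance tier: guaranteed IOPS = volume size times the
    purchased I/O density (0.5, 1 or 4 IOPS/GB)."""
    return size_gb * iops_per_gb

def performance_iops(requested_iops, cap=6000):
    """Performance tier: you pick the IOPS directly, up to the
    6,000 IOPS maximum quoted in the session."""
    return min(requested_iops, cap)

# A 100 GB Endurance volume at the 4 IOPS/GB tier is guaranteed 400 IOPS.
print(endurance_iops(100, 4))      # 400
# A typical business workload at 0.7 IOPS/GB on 100 GB needs ~70 IOPS,
# so the 1 IOPS/GB tier usually suffices.
print(endurance_iops(100, 0.7))
```

Note how the Endurance tier scales IOPS with capacity, while the Performance tier decouples the two, which is why the latter suits clients who already know their application's I/O profile.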
IBM Bluemix also has a block service, based on OpenStack Cinder drivers. These are backed by internal disk on storage-rich servers. IBM SoftLayer can pack 4 drives into a 1U server, 12 drives into a 2U server and 36 drives into a 3U server.
For object store, IBM SoftLayer supports OpenStack Swift. They support content expiration, versioning and metadata search.
(When asked if this was Cleversafe or something else, Mike was quick to point out that IBM SoftLayer focuses on the "Service Level Agreement (SLA), the client experience, and the APIs" so however they chose to back this storage is internally determined. The client should not have to specify product xyz in their contract.)
An extra feature for object store is "Content Delivery Network" (CDN), which uses EdgeCast to cache content at the edges of the network to improve delivery performance. You designate which object containers you want accelerated, and you pay for the amount of bandwidth consumed.
For file space, IBM SoftLayer supports NFS and SFTP only. Supporting CIFS, or rather its replacement SMB, is a known requirement. In the meantime, there are a variety of 3rd party "Cloud Gateway" solutions, like NetApp AltaVault, Panzura global namespace, or CTERA.
For file sync-and-share, IBM has partnered with Box to provide Enterprise-class service.
How do clients ingest data into their IBM SoftLayer account? One option is to use Aspera, a recent IBM acquisition that is 3x faster than traditional SCP. Another option is to ship disk or tape cartridges to an IBM SoftLayer facility.
Well, it's Tuesday again, and you know what that means? IBM Announcements!
This week, IBM announces the second generation of Storwize V5000 flash and disk storage systems: the all-flash V5000F configurations, as well as the V5000, which supports a variety of flash and spinning disk drives.
There are three models:
The V5010 has dual 2-core/2-thread processors and 16GB of cache. It supports thin provisioning, FlashCopy, Easy Tier, and remote mirroring. The base unit includes 1GbE Ethernet ports for iSCSI host connectivity, with options to add 16Gb Fibre Channel, 12Gb SAS, and 10GbE iSCSI/FCoE as well.
The 2U controllers and expansion enclosures can hold either 24 small 2.5-inch drives, or 12 larger 3.5-inch drives. A single control enclosure has two active/active IBM Spectrum Virtualize nodes, and can attach up to 10 expansion enclosures for a maximum of 264 drives.
The V5020 unit has dual 2-core/4-thread processors and up to 32GB of cache. It supports everything the V5010 does, plus encryption. The encryption is done via the Intel AES-NI instruction set to eliminate the need for special "self-encrypting drives" (SED) that other storage devices may require.
The V5030 has dual 6-core/4-thread processors and up to 64GB of cache. It supports everything the V5010 and V5020 do, plus Real-time Compression and external virtualization. The Real-time Compression can achieve up to 80 percent space savings, representing a 5:1 compression ratio.
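The "80 percent savings" and "5:1 ratio" figures are two ways of stating the same thing; a one-line sanity check (the helper is my own, not an IBM tool):

```python
def savings_from_ratio(ratio):
    """Convert a compression ratio (e.g. 5 for 5:1) into fractional
    space savings: data shrinks to 1/ratio of its original size."""
    return 1.0 - 1.0 / ratio

# A 5:1 compression ratio means 1 - 1/5 = 80 percent space savings.
print(savings_from_ratio(5.0))   # 0.8
# For comparison, a 2:1 ratio yields 50 percent savings.
print(savings_from_ratio(2.0))   # 0.5
```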
Each control enclosure can attach to 20 expansion enclosures, supporting 504 internal drives per control enclosure, and up to 1,008 with two control enclosures (four Spectrum Virtualize nodes) clustered together. This is in addition to the drives in externally virtualized storage systems.
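Those maximums fall straight out of the enclosure math, assuming the 24-drive (2.5-inch) enclosures:

```python
drives_per_enclosure = 24       # 2.5-inch drives in one 2U enclosure
enclosures = 1 + 20             # one control enclosure plus 20 expansions
per_controller = drives_per_enclosure * enclosures
clustered = per_controller * 2  # two control enclosures clustered together

print(per_controller)  # 504
print(clustered)       # 1008
```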
Well it's Tuesday again, and you know what that means? IBM Announcements!
(FTC Disclosure: This official launch also includes October 6 announcements. In any case, the usual disclaimer applies: I currently work for IBM, and this blog post can be considered a "paid celebrity endorsement" of the IBM products mentioned below.)
IBM announced various updates to its Spectrum Storage product line. Here is a quick recap.
IBM Spectrum Virtualize 7.6
Spectrum Virtualize is the new name of the "storage hypervisor" code that resides in IBM SAN Volume Controller (SVC) and Storwize family products. When you buy an SVC, you will license Spectrum Virtualize software on it. It is NOT available separately as software-only that you can install on any other hardware. There are three major improvements:
Software-based Data-at-Rest Encryption
Earlier this year, IBM delivered data-at-rest encryption for the Storwize V7000 and V7000 Unified. This week, IBM extends this support to other products running the Spectrum Virtualize storage hypervisor.
Since this feature is based on the Intel processor that supports the Advanced Encryption Standard New Instructions (AES-NI), it applies only to the newer hardware: SAN Volume Controller 2145-DH8, the Storwize V7000 Gen2, FlashSystem V9000, and VersaStack converged systems that contain these. You can run Spectrum Virtualize v7.6 on older hardware models, but the encryption feature will be disabled.
Basically, by taking advantage of AES-NI commands, IBM can now offer data-at-rest encryption on any virtualized flash or disk arrays, eliminating the need for special "Self-Encrypting Drives", or SED.
The encryption keys are kept on USB memory sticks that you can either leave in the machine, or stash away in a vault or safe somewhere.
The second improvement is distributed RAID. Distributed RAID has been hugely popular on IBM XIV products, and has since found its way into the DCS3700, DCS3860 and Elastic Storage Server models.
With this new enhancement, storage admins can select "Distributed RAID-5" or "Distributed RAID-6" as alternate choices to traditional RAID ranks.
Why use it? All drives are now active, eliminating idle spares that sit collecting dust and cobwebs waiting for an opportunity to spin up, only to become a terrible bottleneck when finally used for a rebuild. Since all drives participate in reads and writes, the rebuild rate is an order of magnitude (5 to 10x) faster!
For those clients nervous about large 8TB drives and the number of days it would take to perform a traditional RAID rebuild, this should calm all of your fears.
The third improvement is IP-based quorum. This is one of those line items we have told clients was "just around the corner" and "coming soon, watch this space", and finally it is available. For clients using Stretched Cluster or HyperSwap across two buildings, best practice suggests keeping the quorum disk in a third building. This often meant having to dedicate a single 2U disk system in a closet somewhere, with expensive Fibre Channel cables connecting to the other two buildings.
To address this, IBM now allows the quorum disk to be based on Internet Protocol (the IP portion of TCP/IP), which can be any bare-metal or virtual machine that is LAN or WAN attached. The "quorum disk" is just a small Java program. It can also run on any cloud service provider, such as IBM SoftLayer, to which both buildings have connectivity.
A minor improvement worth mentioning is that the IBM "Comprestimator" tool, which estimates the capacity savings of Real-time Compression, is now integrated into the Spectrum Virtualize v7.6 command-line interface (CLI), allowing you to run the tool on demand on any virtual volume.
IBM Spectrum Scale v4.2
IBM plans to offer all of its solutions in any of three flavors: software-only that you can deploy on your own server hardware, pre-built system appliances, and cloud services on IBM SoftLayer, IBM Cloud Managed Services or third-party cloud providers. Spectrum Scale is the software-only flavor, and Elastic Storage Server and Storwize V7000 Unified are pre-built systems based on that software.
File and Object access
IBM published a "Redbook" on how to implement OpenStack Swift and Amazon S3 interfaces to an existing Spectrum Scale deployment. IBM supported it, but it was basically a do-it-yourself (DIY) implementation. This has now been resolved, with full integration of the OpenStack Swift and Amazon S3 object-protocol interfaces.
(For those unfamiliar with "Object storage", think of it like valet parking for your data. Before working for IBM, I was previously employed as a valet attendant, so I feel qualified to make this analogy.
If you park your car in a 10-story high parking structure, you have to remember where you parked to go find the car again. With valet parking, you hand over the keys to the valet attendant, the car gets parked, and you get a claim stub that you then use to get your car back. In the meantime, you don't know where your car is parked, and you don't care either!
Storing files in volume-level or file-level storage is like that 10-story high parking structure: you have to remember where you put it, which LUN or which sub-directory. With object storage, the system provides a "claim stub" in the form of a Uniform Resource Identifier, or URI, and simple HTTP commands like PUT and GET can be used to upload and download the content.)
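To make the valet-parking analogy concrete, here is a toy in-memory sketch of object-store semantics. The class name and URI layout are invented purely for illustration; this is not any IBM, Swift or S3 API:

```python
import uuid

class ToyObjectStore:
    """Toy illustration of the 'claim stub' idea: you hand over the
    data, the store decides where it lives, and you keep only the URI."""
    def __init__(self):
        self._objects = {}

    def put(self, container, data):
        # The caller never learns which LUN or directory holds the data;
        # the returned URI is the claim stub used to retrieve it later.
        uri = "/v1/%s/%s" % (container, uuid.uuid4().hex)
        self._objects[uri] = data
        return uri

    def get(self, uri):
        return self._objects[uri]

store = ToyObjectStore()
stub = store.put("photos", b"vacation.jpg bytes")
assert store.get(stub) == b"vacation.jpg bytes"
```

Just like the valet, the store is free to reorganize its "parking structure" internally; as long as the claim stub resolves, the client does not know or care where the object physically resides.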
Policy-driven Compression and Quality of Service (QoS)
If you want to differentiate the levels of service provided for files and objects stored in your infrastructure, look no further. A simple SQL-like language is used to set up policies that are invoked when needed.
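As a rough illustration, such SQL-like policy rules take a form along these lines. Note the pool names and clauses here are hypothetical and simplified, not verbatim Spectrum Scale syntax; consult the product documentation for the exact grammar:

```sql
/* Place newly created database files on the fast pool */
RULE 'place-db' SET POOL 'ssd' WHERE UPPER(NAME) LIKE '%.DB'

/* Demote files untouched for 30 days to a cheaper pool */
RULE 'demote-cold' MIGRATE FROM POOL 'ssd' TO POOL 'nearline'
  WHERE (CURRENT_TIMESTAMP - ACCESS_TIME) > INTERVAL '30' DAYS
```

The appeal is that storage admins can express tiering, compression and QoS intent declaratively, and the file system engine evaluates the rules when needed.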
Hadoop Connector for File and Objects
The IBM Hadoop Connector allows Hadoop and Spark analytics applications to treat Spectrum Scale as a 100 percent compatible alternative to the Hadoop Distributed File System (HDFS). Previously, this was only available for files, but now it has been extended to include objects as well.
Advanced Graphical User Interface (GUI)
Based on the award-winning GUI that has been used for IBM XIV, SVC, Storwize and various other members of the IBM System Storage family, IBM announces an HTML5-based web-browser GUI for configuring and managing Spectrum Scale and Elastic Storage Server (ESS).
Storwize V7000 Unified
The "file modules" that run IBM Spectrum Scale will get updated to R1.6 level, which supports SMB 3.0 and NFS 4.0 protocols. SMB support will now include both internal and externally-virtualized storage. You will also be able to use Active File Management to migrate to other Spectrum Scale implementations.
IBM Spectrum Control
As the former chief architect of IBM Tivoli Storage Productivity Center v1, I have been a big fan of the advancements and evolution of Spectrum Control. IBM offers three levels. The first level is "Basic Edition", entitled at no additional charge for IBM storage hardware clients. The second level is "Standard Edition" which offers configuration, provisioning and performance monitoring. The third level is "Advanced Edition", which includes advanced storage analytics, file-level reporting, storage tiering and data placement optimization.
You can imagine my skepticism when I was told that Spectrum Control was going to be enhanced to support Spectrum Scale. What could it offer? IBM Spectrum Scale already has built-in storage tiering and data placement optimization!
It turns out that effective "management tools" were the #1 requirement clients cited for implementing and deploying Spectrum Scale. Since 1998, back when it was called the General Parallel File System, or GPFS, the target market was High Performance Computing (HPC) shops familiar with command line interfaces (CLI).
But IBM wants to broaden the reach of IBM Spectrum Scale to financial services, health care and life sciences, government and education, and a variety of other industries. These clients won't tolerate being limited to CLI interfaces.
For clients with multiple Spectrum Scale clusters, Spectrum Control can offer the following:
Visibility across the capacity utilization (file systems, pools, file sets, quotas) and cluster health across all Spectrum Scale clusters in the data center
Ability to specify alerts which are applied across all Spectrum Scale clusters, for things like relative or absolute free space in a file system, or inodes used, nodes going down, etc.
Understand the cross-cluster relationships established by remote cluster mounts, and seamlessly navigate between them
If external SAN storage is used, Spectrum Control shows the correlation between Spectrum Scale Network Shared Disks (NSD) and their corresponding SAN volumes, again with the ability to navigate between them; also it can provide performance monitoring for the volumes backing the NSD
Ability to monitor file capacity usage in the context of applications, by adding Spectrum Scale "file set containers" to application groups defined in Spectrum Control
Compare file system activity across Spectrum Scale clusters, with the ability to drill into file system and node performance charts
Support for object storage on Spectrum Scale, determine which object-enabled clusters are closest to running out of free space
While the basic built-in GUI is great for smaller deployments, if you have a dozen or more Spectrum Scale clusters, or have Spectrum Scale clusters intermixed with traditional block-level and NAS storage devices, then Spectrum Control is for you!
It used to take weeks to deploy the original versions of Tivoli Storage Productivity Center, but Spectrum Control is now offered in the cloud, and you can deploy it in as little as 30 minutes.
Want to check it out? You can explore the Spectrum Control Storage Insights cloud service as a [Live Demo], or [Start your free trial]! The reporting capabilities for Spectrum Scale are identical between the on-premises version of Spectrum Control and this cloud service offering.
Here's a great quote from a leading IT industry analyst:
"In multi-petabyte, multivendor installations, overall storage costs of ownership for use of IBM Spectrum Storage solutions averaged 73 percent less than EMC, and 61 percent less than Hitachi equivalents" -- Brian Jeffery, Managing Director, International Technology Group, Naples, FL
As IBM continues its transition from a hardware-oriented company founded over a century ago, manufacturing meat scales and cheese slicers, to one more focused on higher value-add software and services, the Spectrum Storage software family will play a critical role in this transformation!
It's Tuesday, and you know what that means? IBM Announcements! This week I am in beautiful Orlando, Florida for the [IBM Systems Technical University] conference.
This week, IBM announced its latest tape offerings for the seventh generation of Linear Tape Open (LTO-7), providing huge gains in performance and capacity.
For capacity, the new LTO-7 cartridges can hold up to 6TB native, or 15TB effective capacity with the 2.5:1 compression typical for such data. That is 2.4x larger than the 2.5TB cartridges available with LTO-6. Performance is also nearly doubled, with a native throughput of 315 MB/sec, or an effective 780 MB/sec with compression. The LTO consortium, of which IBM is a founding member, has published the roadmap for future generations: LTO-8, LTO-9 and LTO-10.
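A quick check of the capacity figures quoted above:

```python
native_tb = 6.0                  # LTO-7 native cartridge capacity
effective_tb = native_tb * 2.5   # assuming the 2.5:1 compression ratio
growth = native_tb / 2.5         # versus LTO-6's 2.5 TB native cartridge

print(effective_tb)  # 15.0 -> the 15TB effective capacity
print(growth)        # 2.4  -> the "2.4x larger" claim
```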
IBM will offer both half-height and full-height LTO-7 tape drives. All the features you love from LTO-6 like WORM, partitioning and Encryption carry forward. These drives will be supported on a variety of distributed operating systems, including Linux on z System mainframes, and the IBM i platform on POWER Systems.
The Linear Tape File System (LTFS) can be used to treat LTO-7 cartridges in much the same way as Compact Discs or USB memory sticks, allowing one person to create content on an LTO-7 tape cartridge, and pass that cartridge to the next employee, or to another company. LTFS is also the basis for IBM Spectrum Archive, which allows tape data to be part of a global namespace with IBM Spectrum Scale.
LTO-7 will be supported on the TS2900 auto-loader, as well as all of IBM's tape libraries: TS3100, TS3200, TS3310, TS3500 and TS4500. You can connect up to 15 TS3500 tape libraries together with shuttle connectors, for a maximum of 2,700 drives serving 300,000 cartridges, or 1.8 exabytes of data in a single system environment.
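The shuttle-complex maximum follows directly from the cartridge count and the native cartridge capacity:

```python
cartridges = 300_000          # max cartridges in a 15-library complex
tb_per_cartridge = 6          # LTO-7 native capacity per cartridge
total_tb = cartridges * tb_per_cartridge

print(total_tb)               # 1800000 TB
print(total_tb / 1_000_000)   # 1.8 exabytes
```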
In addition to LTO-7 support, the IBM TS4500 tape library was also enhanced. You can now grow it up to 18 frames, with up to 128 drives serving 23,170 cartridges, for a maximum capacity of 139 PB of data. You can now also intermix LTO and 3592 frames in the same TS4500 tape library.
For compatibility, LTO-7 drives can read existing LTO-5 and LTO-6 tape cartridges, and can write to LTO-6 media, to help clients with the transition.
Officially, the VVol concept was still just a "technology preview" in 2012, to be fleshed out over the next few years through extensive collaboration between VMware and all the major players: IBM, HP, Dell, NetApp and EMC.
In 2013 and 2014, IBM attended VMworld with live demonstrations of VVol support. VMware vSphere v6 was not yet available, but when it was, we assured them, IBM would be one of the first vendors with support!
To understand why VVol is such a game-changer, you have to understand a major problem with VMware version 4 and version 5, namely their Virtual Machine File System, or [VMFS].
Here is a picture to help illustrate:
On the left, we see that a VMFS datastore is a set of LUNs from the storage admin's perspective, and a set of VMDK and related files from the vCenter admin's perspective.
If there was a storage-related problem, such as bandwidth performance or latency, how would the two admins communicate to perform troubleshooting? For many disk systems, it is not obvious which VMDK file sits on which LUN.
There are also a variety of hardware capabilities that work at the LUN level, such as snapshots, clones or remote distance mirroring, and this would apply to all the VMDK files in the data store across the set of LUNs, which may not be what you want.
There are two ways to address this in vSphere v4 and v5:
The first method is to have fewer VMDK files per datastore. By defining smaller datastores with just a few VMs associated with each, you can then have a closer mapping of VMDK files to datastore LUNs. Unfortunately, VMware ESXi has a limit of 256 datastores that can be attached, so this method has its own limitations.
The other method around this is "Raw Device Mapping" (RDM) which allowed Virtual Machines to be attached to specific LUNs. Some of the earlier restrictions and limitations for RDMs have since been relaxed over the releases, but your disk system still needs to expose the SCSI identifiers of each LUN to make this work, and additional setup is required if you plan to cluster two or more systems together, such as for a Microsoft Cluster Server (MSCS).
On the right side of the picture, using VMware v6, vCenter admins can now allocate VVols, which are mapped to specific "VVol Storage Containers" on specific storage systems. The storage admin knows exactly which VVol is in which container, so they can now communicate and collaborate on troubleshooting!
The vSphere ESXi host communicates to storage arrays via a new "virtual LUN id" called a "Protocol Endpoint". This is to allow FCP, iSCSI and FCoE traffic to flow correctly through SAN or LAN switches. For NFS, the Protocol Endpoint represents a "virtual mount point", so that traffic can be routed through LAN switches correctly.
Storage Policies can help determine which attributes or characteristics you want for your VVol. For example, you may want your VVol to be on a storage container that supports snapshots at the hardware level. The vCenter server can be aware of which storage arrays, and which storage containers in those arrays, through the VMware API for Storage Awareness, or VASA.
Different storage manufacturers can implement their VASA provider in different ways. IBM has opted to have a single VASA provider for all of its supported devices, so as to provide a consistent client experience. When you purchase any VVol-supported storage system from IBM, you are entitled to download the IBM VASA provider at no additional charge!
Initially, the IBM VASA provider will focus on IBM XIV Storage System, an ideal platform for your VVol needs. The XIV is a grid-based storage system, utilizing unique algorithms that give optimal data placement for every LUN or VVol created, and virtually guarantees there will be no hot spots. The XIV provides an impressive selection of Enterprise-class features, including snapshot, mirroring, thin provisioning, real-time compression, data-at-rest encryption, performance monitoring, multi-tenancy and data migration capabilities.
Let me give some real world examples from Paul Braren, an IBM XIV and FlashSystem Storage Technical Advisor from Connecticut, who has been working directly with clients over the past five years:
"Many of my customers have clearly said they really want the ability to have a granular snapshot that grabs a moment in time of just one VM, rather than all the VMs that happen to be on the same LUN. They also want to delete VMs, and have the storage array automatically present that newly available space. Even better, with VVol, these SAN related tasks appear to be executed nearly instantly, leaving behind those legacy shared VMFS datastore limitations and overhead.
The same benefits of VVol are evident when cloning or deploying VMs. Imagine being able to create a Windows Server VM with a 400GB thick-provisioned drive in under 20 seconds. Well, you don't have to imagine it! I recorded video of this actually happening over at IBM's European Storage Competence Center, featured in this 8-minute video: [IBM XIV Storage System and VMware vSphere Virtual Volumes (VVol). An ideal combination!]"
-- Paul Braren
In addition to XIV, all of IBM's Spectrum Virtualize products also support VVols, including SAN Volume Controller, the Storwize family (including the Storwize in VersaStack), and FlashSystem V9000.
I am not in San Francisco this week for VMworld, but lots of my IBM colleagues are, so please, stop by the IBM booth and tell them I sent you!
Every year, March 31 marks "World Backup Day". Sadly, many people forget the importance of backing up their critical information. This is not just true for businesses, non-profit organizations and government agencies, but also for all of your personal information that you keep on computer devices.
My friends over at Cloudwards have developed an awesome infographic related to World Backup Day. Here it is.
(FTC Disclosure: I work for IBM, which has no business relationship with Cloudwards. Cloudwards does not itself provide backup services, but rather reviews services provided by others. This post should not be considered an endorsement of Cloudwards or their reviews.)
Courtesy of: Cloudwards.net
I hope you find this information helpful and informative!
Well it's Tuesday again, and you know what that means? IBM Announcements!
Today, IBM announced an exciting new addition to the IBM System Storage™ product line, [IBM Spectrum Storage™], a family of software defined storage offerings.
To understand its significance, I need to explain a few things first. Software defined storage is part of a larger concept of software defined environment.
How is a software defined environment different from what you have now? In every data center, you need to map the business requirements of an application workload to an appropriate set of IT infrastructure, including server, network and storage resources.
The traditional approach involves an application owner or database administrator reviewing the business requirements documented for the application, then calling the server, network and storage administrators, who match those requirements to appropriate IT hardware and notify the folks in facilities to rack and stack the gear accordingly.
In a software defined environment, Application Programming Interfaces (API), Service Level Agreements (SLA) and Orchestration workflows can automate the request for the appropriate resources. This is referred to as the "Control Plane".
Responding to these requests, the software can provision the appropriate server, network and storage resources required. Server, network and storage virtualization, standard interfaces and deployment technologies exist to make this practical. This is referred to as the "Data Plane".
Any time a new way of doing things is introduced into the world, there can be some resistance. Let's tackle the three most frequently stated objections:
"IT infrastructure resources are rare and expensive! Administrators need to control or approve how resources are doled out!" An objection to self-service automation is the fear that employees would take too much.
If you have a bank account, Automated Teller Machines (ATM) can restrict the amount of cash you can take out, based on what is appropriate per request, or per day, with an upper limit of what you have in your personal checking or savings account. You enter your debit card and PIN into the "Control Plane" keypad and out comes a stack of 20-dollar bills from the "Data Plane" slot. In a software defined environment, you can limit requests through quotas and resource pools.
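The ATM analogy maps naturally onto code. Here is a minimal sketch, with hypothetical names of my own (not an IBM API), of a control plane enforcing per-request and pool-wide quotas:

```python
# Toy control plane: cap each provisioning request and the pool total,
# the way an ATM caps each withdrawal and your account balance.
class StoragePool:
    def __init__(self, capacity_gb, per_request_limit_gb):
        self.capacity_gb = capacity_gb
        self.per_request_limit_gb = per_request_limit_gb
        self.allocated_gb = 0

    def provision(self, request_gb):
        """Grant the request only if both quota checks pass."""
        if request_gb > self.per_request_limit_gb:
            raise ValueError("request exceeds the per-request quota")
        if self.allocated_gb + request_gb > self.capacity_gb:
            raise ValueError("pool capacity exhausted")
        self.allocated_gb += request_gb
        return request_gb

pool = StoragePool(capacity_gb=1000, per_request_limit_gb=200)
pool.provision(150)    # granted
# pool.provision(500)  # would raise ValueError: over the per-request quota
```

The administrator still sets the limits; the automation merely enforces them on every request, just as the bank sets your daily withdrawal cap.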
"Some application workloads are more important than others!" Another objection is that every workload will be treated in the same standard way, with mission-critical workloads and dev/test treated alike.
At the gas station, you can select different levels of octane gasoline. You enter your credit card and zip code into the "Control Plane" keypad and selected octane comes out of the "Data Plane" hose. In a software defined environment, resources can be provisioned with different Quality of Service (QoS) levels.
"Different applications require different combinations of resources!" Another objection is the fear that fixed combinations of server, storage and network resources will be stifling to innovation and productivity.
At the vending machine, you can choose which candy bar and which chips to have with whatever soft drink you choose for lunch. You enter your bills and coins into the "Control Plane" slot, select the row letter and column number for your snack of choice, and then fetch your purchases from the "Data Plane" flap. In a software defined environment, a Service Catalog can offer a virtual menu of different server, network and storage resources to be combined together as needed.
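Putting the gas-pump and vending-machine analogies together: a toy service catalog (the names and tiers here are illustrative, not an actual IBM Spectrum Storage interface) might look like this:

```python
# Toy service catalog: each resource type is offered at several QoS levels,
# like octane grades at the pump or rows in a vending machine.
CATALOG = {
    "storage": {"gold": "flash", "silver": "10K disk", "bronze": "7.2K disk"},
    "network": {"gold": "10GbE", "bronze": "1GbE"},
}

def order(resource, qos):
    """Return the backing technology for a resource at a given QoS level."""
    tiers = CATALOG.get(resource, {})
    if qos not in tiers:
        raise KeyError(f"{resource}/{qos} is not in the catalog")
    return tiers[qos]

# Combine resources as needed, each at its own service level:
vm_request = {"storage": order("storage", "gold"),
              "network": order("network", "bronze")}
print(vm_request)   # {'storage': 'flash', 'network': '1GbE'}
```

The point of the catalog is that the combinations are not fixed: each request mixes and matches resources and service levels, just like snacks from the vending machine.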
These concerns are addressed well enough in software defined environments, in general, and with IBM Spectrum Storage family of products, in particular.
(Nostalgia: I remember the days before self-service automation. At the bank, I had to stand in line until I could talk to a human bank teller to get cash from my savings account. At the gas station, attendants would come out to pump the gas for me, check my oil and wash my windshield. And at a restaurant, I felt like I waited an eternity from the time I ordered my meal to the time the short-order cook had it ready and the wait staff delivered it to my table. Doesn't this all seem silly today?)
Jamie Thomas, IBM General Manager of Storage and Software Defined Environments
Jamie announced [IBM Elastic Storage], a new offering that is available as a software defined storage solution, based on IBM's General Parallel File System (GPFS) technology already deployed at 45,000 installations.
IBM Elastic Storage provides a global namespace across data center locations. It can manage up to a yottabyte of information, combining Flash, disk and tape resources. It supports OpenStack interfaces, Hadoop and standard POSIX file system conventions.
IBM Elastic Storage provides automated tiering to move data from different storage media types. Infrequently accessed files can be migrated to tape and automatically recalled back to disk when required. Unlike traditional storage, it allows you to smoothly grow or shrink your storage infrastructure without application disruption or outages.
IBM Elastic Storage software can run on a cluster of x86 and/or POWER-based servers, and can be used with internal disk, commodity storage, or advanced storage systems from IBM or other vendors.
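To make the tiering behavior concrete, here is a rough simulation of my own devising; the real GPFS policy engine uses a SQL-like policy language, and none of these names come from the product:

```python
import time

# Simulated ILM policy: files untouched for 30 days migrate to tape,
# and are transparently recalled to disk the next time they are read.
THIRTY_DAYS = 30 * 24 * 3600

class TieredFile:
    def __init__(self, name, last_access):
        self.name = name
        self.last_access = last_access
        self.tier = "disk"

def apply_policy(files, now):
    """Migrate cold files from the disk pool to the tape pool."""
    for f in files:
        if f.tier == "disk" and now - f.last_access > THIRTY_DAYS:
            f.tier = "tape"

def read(f, now):
    """Reading a migrated file recalls it to disk first."""
    if f.tier == "tape":
        f.tier = "disk"
    f.last_access = now
    return f.name

now = time.time()
cold = TieredFile("q3_results.dat", last_access=now - 45 * 24 * 3600)
apply_policy([cold], now)
print(cold.tier)   # tape
read(cold, now)
print(cold.tier)   # disk
```

The application never sees the migration or the recall; it just reads the file, which is what makes the tiering non-disruptive.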
IBM partnered with various clients in different industries in a special beta program. Jamie led a client panel to discuss their experiences with IBM Elastic Storage:
Alan Malek, Director of IT, Cypress Semiconductor.
"Total cycle time is key". Over the past 31 years, they bought whatever file storage was available. Now, with IBM Elastic Storage, the performance was very consistent for their engineering workloads with full load balancing.
Russell Schneider, Principal Storage Consultant, Jeskell.
Russell's company works with a lot of federal agencies, where "Big Data has become Bigger Data". For example, research on Global Warming and Climate Change requires a large amount of storage across agencies.
In another example, when the tsunami hit Japan a few years ago, an agency here in the USA realized they had 14PB of data stored as a single copy in a data center at sea level less than a mile from the coast. They realized they needed to have a secondary copy, and an option to cache to a third location depending on regional disasters.
Matthew Richards, Products, OwnCloud.
For those not familiar with OwnCloud, it provides a Dropbox-like file sharing service, but in the Enterprise, with on-premise storage. It has been fully tested and certified with IBM Elastic Storage to provide a secure file sharing platform.
With IBM Elastic Storage, they were able to scale linearly up to 20,000 users, and are now testing 100,000 users. The need to have intelligent access to files at scale is what Matthew likes about IBM Elastic Storage.
Dr. Michael Factor, IBM Distinguished Engineer at IBM Research
Michael started out explaining there are three areas for storage: block, file and object. The fastest growing type of data is unstructured fixed content with associated metadata. This is ideal for object storage. Michael has been working with OpenStack Swift, an open source interface defined for object storage. He defined "storlets" as follows:
Storlets extend an object store by moving computation to the data -- filtering, transforming, analyzing -- instead of bringing data to the computation.
Storlets have been deployed on a variety of European Union research projects. For example, in partnership with Philips, a pathology storlet can count the number of cancer cells in an image. By bringing the computation to the data, it eliminates having to transfer large amounts of data over the network.
Storlets can run on-premise and on IBM's SoftLayer IaaS cloud offering.
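As a rough illustration of the idea (not the actual OpenStack Storlets API), a storlet-style filter runs next to the object and returns only its result, so the full object never crosses the network:

```python
# Toy storlet: compute at the object store; only the answer travels back.
def count_matches(object_stream, predicate):
    """Filter a large object record-by-record and return just a count."""
    return sum(1 for record in object_stream if predicate(record))

# e.g. counting cancer cells tagged in a (tiny, made-up) pathology object
records = ["cell:normal", "cell:cancer", "cell:normal", "cell:cancer"]
hits = count_matches(iter(records), lambda r: r.endswith("cancer"))
print(hits)   # 2
```

For a multi-gigabyte image, shipping back a single integer instead of the whole object is the entire value proposition.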
Bruce Hillsberg, IBM Director of Storage Systems at IBM Research
Bruce led another panel discussion, this time of IBM storage experts:
Vincent Hsu, IBM Fellow and CTO of Storage.
The problem is the isolation of data into "storage silos". Isolation causes problems in managing large amounts of data at scale, and costs more as storage is not fully utilized. IBM Elastic Storage brings everything together, eliminating storage silos.
Michael explained how IBM works with clients all over the world to ensure that storage solutions meet client requirements. For example, storlets can be used to use rich metadata to manage photographs, and display them based on GPS satellite location, or other content that makes it easier to manage these images.
IBM Elastic Storage will support OpenStack Cinder and Swift interfaces. IBM is a platinum sponsor of OpenStack foundation, and is now its second most prolific contributor, with hundreds of full-time employees working on this.
Tom Clark, IBM Distinguished Engineer, Chief Architect, Storage Software, Cloud & Smarter Infrastructure.
Storage Management is a critical piece of Software Defined Storage. This is done in three ways:
The use of analytics to optimize the deployment of storage, based on workload requirements. Storage admins set policies; IBM Elastic Storage analytics then gather metrics and optimize data placement and movement based on those policies. IBM Elastic Storage has a 70 percent lower TCO than competitive offerings.
The focus on backup services. Backups are not just for data protection; they can also be used to duplicate or replicate data for testing, for training, and for other purposes. IBM Elastic Storage is fully supported by IBM Tivoli Storage Manager.
Being able to support Hybrid Cloud environments, where some data can be on-premise, and other data off-premise. Storage Management challenges will need to deal with this possibility. IBM Elastic Storage is well positioned for this.
Carl Kraenzel, IBM Distinguished Engineer, Director of Watson Cloud Technology and Support.
Watson is ground-breaking technology, and IBM Elastic Storage technology was at the heart of the Watson that was first introduced in 2011.
To consider IBM Elastic Storage based only on lower cost and higher scalability is not the full picture. Rather, this is an important platform for Cognitive Computing, which we are only beginning to explore. IT systems need to be aware of the context of what we are doing.
While the Grand Challenge demonstration on Jeopardy! was exciting, it is time we stop playing games and apply IBM Elastic Storage to business, to help with health care and medical research, and other problems in society. IBM has already deployed this at Anderson Cancer Center and Memorial Sloan Kettering Cancer Center, for example.
Tom Rosamilia provided closing remarks. IBM Elastic Storage is not just for new workloads in Cloud, Analytics, Mobile and Social (CAMS), but for traditional workloads as well. IBM Elastic Storage provides "data democracy" and allows for "better rested storage administrators" who make fewer mistakes.
Tom opened the floor for questions from the audience:
Q1. Data integrity, not just security but also quality? IBM Elastic Storage has end-to-end data integrity checking built-in.
Q2. How does IT transition from full control to auto-pilot? IBM allows you to tap into existing storage. This is not rip-and-replace. With storage virtualization, IBM hides the complexity that normally requires full control over specific assets.
Q3. Storage admins would rather have a root canal without Novocaine than move their data. What is IBM doing to offer automation to help storage admins move to this new infrastructure? IBM storage virtualization breaks that hard link between applications and specific storage devices. IBM Elastic Storage eliminates application downtime previously associated with data movement.
Tom Rosamilia assured the audience that IBM is fully committed to its storage portfolio. IBM Elastic Storage is not just about the profoundness of what IBM announced today, but also where IBM is investing in the future of storage.
Well it's Tuesday again, and you know what that means? IBM announcements! Many of the announcements were made by IBM Executives at the [IBM Pulse 2014 conference].
IBM BlueMix is the newest cloud offering from IBM, providing Platform-as-a-Service (PaaS) offering based on the Cloud Foundry open source project that promises to deliver enterprise-level features and services that are easy to integrate into cloud applications.
This week, my fifth-line manager Tom Rosamilia, IBM Senior Vice President IBM Systems & Technology Group and Integrated Supply Chain, made two announcements at Pulse. First, in addition to x86-based servers, SoftLayer will also offer POWER-based servers to run AIX, IBM i and [Linux on POWER] applications.
Second, SoftLayer will support PureApplication Patterns of Expertise. What is a pattern of expertise? It can range from something as simple as a virtual machine encapsulated in [Open Virtual Format] to more dynamic architectures, packaged with required platform services, that are deployed and managed by the system according to a set of policies.
Patterns simplify and automate tasks across the lifecycle of the application. Customers and partners alike are [seeing significant reductions in cost and time] across the application lifecycle with the deployment of a PureApplication System.
Also, this week at Pulse, Robert LaBlanc, IBM Senior Vice President of Software and Cloud Solutions, announced [IBM plans to Acquire Cloudant] which offers an open, cloud Database-as-a-Service (DBaaS) that helps organizations simplify mobile, web app and big data development efforts.
When I introduced [SmartCloud Virtual Storage Center] back in October 2012, I mentioned that it was a great solution for large enterprise that have all of their disk behind SAN Volume Controller (SVC).
To reach smaller accounts, IBM has announced two new offerings:
IBM SmartCloud Virtual Storage Entry for customers that have less than 250TB of disk behind two or four SVC nodes. It is priced per terabyte, by the amount of capacity that is virtualized.
IBM SmartCloud Virtual Storage for Storwize Family for customers that have other Storwize family products (Storwize V7000 or V5000, for example). It is priced per the number of storage enclosures that are managed by the Storwize family hardware.
From the photo, the marketing people staggered the various components to give it a stylized [Dagwood Sandwich] effect. I can assure you that these are just standard 19-inch rack components that fit into 6U of space in standard IT racks.
Starting top to bottom, we have the first FlashSystem V840 Control Enclosure, its 1U-high UPS, a second FlashSystem V840 Control Enclosure and its UPS, and finally a 2U-high FlashSystem V840 Storage Enclosure.
You can have up to a dozen Flash modules, either 2TB or 4TB size, for a maximum of 40TB usable RAID-protected capacity. These can be protected with AES 256-bit encryption. The FlashSystem modules are front-loaded, and slide in and out for easy maintenance.
The system is fully redundant and hot-swappable with concurrent code load to ensure high availability.
(Update: In the comments, readers thought that this was nothing more than a two-node SVC with a FlashSystem 840. There are differences, so I have added the following comparison.)
How the FlashSystem V840 differs from an SVC with a FlashSystem 840:
Cabling from controllers to storage: the V840 Control Enclosures attach directly to the V840 Storage Enclosures, whereas an SVC pair connects to a FlashSystem 840 through SAN fabric ports.
Call Home support and GUI screen branding also differ between the two configurations.
The system is fully VMware-certified, supporting VAAI interfaces, and an SRA for VMware's Site Recovery Manager (SRM). With Real-time Compression, you can get up to 80 percent capacity savings for workloads like Virtual Desktop Infrastructure (VDI). That in effect gives you up to 5x (200TB) of virtual capacity in 6U of rack space!
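The 5x figure follows directly from the savings ratio. As a quick back-of-the-envelope check (assuming the full 80 percent savings applies across the workload):

```python
usable_tb = 40    # maximum usable capacity of the V840 Storage Enclosure
savings = 0.80    # up to 80% capacity savings with Real-time Compression

# Saving 80% means each physical TB holds 1 / (1 - 0.80) = 5 TB of data
effective_tb = usable_tb / (1 - savings)
print(round(effective_tb))   # 200
```

Real-world savings vary by workload, of course; VDI happens to compress well because so many desktop images share identical data.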
You can either keep it as an All-Flash array, or you can virtualize external IBM and non-IBM disk systems, and use the Flash capacity in the Storage Enclosure for IBM's Easy Tier automated sub-volume tiering and data migration. With or without external storage, the FlashSystem V840 can provide local and remote mirroring and point-in-time copies.
However, I was speaking with various clients in Winnipeg, Canada, on Tuesday and Wednesday this week, so marketing moved the announcement date to today to accommodate my schedule. Sometimes, being the #1 most influential IBM employee in storage comes in handy!)
Here, then, is a quick review of the storage portion of today's announcements.
IBM FlashSystem 840
The [IBM FlashSystem 840] offers twice the capacity of its predecessors, the 810 and 820, with up to 48TB in a dense 2U package.
(Quick recap of previous models: Both the FlashSystem 810 and 820 supported ECC-protected memory and Variable Stripe RAID (VSR). The [FlashSystem 810] supported RAID-0 striping across the modules, and the [FlashSystem 820] supported two-dimensional 2D-RAID across modules for higher availability. Fellow blogger Jim Kelley (IBM) on his Storage Buddhist blog has a great post on this: [IBM FlashSystem: Feeding the Hogs].)
The new FlashSystem 840 in effect replaces both, so you can choose RAID-0 striping or 2D-RAID, along with your ECC-protected memory and Variable-striped RAID. It offers hot-swappable Flash modules, redundant components, and non-disruptive concurrent code load (CCL).
The FlashSystem 840 also introduces military-grade AES-XTS 256 bit encryption to provide added protection to your data.
For host attachment, you have some great choices: 16Gb/8Gb/4Gb auto-negotiated Fibre Channel (FCP), 40Gb InfiniBand QDR, and 10Gb FCoE. Whatever you decide, you get 90 microsecond writes, and 135 microsecond reads.
Since its introduction just over a year ago, IBM has sold FlashSystem to over 1,000 clients! For more on how this compares to other all-flash arrays, read my previous post about [IBM FlashSystem].
Adding SAN Volume Controller provides some key advantages, including Real-time compression, Thin provisioning, FlashCopy point-in-time copies, Stretched Cluster support, Easy Tier sub-LUN automated tiering, and remote copy services like Metro Mirror (synchronous) and Global Mirror (asynchronous).
Adding the SVC also changes the host attachment options: 8Gb/4Gb/2Gb Fibre Channel (FCP), 1Gb and 10Gb iSCSI, and 10Gb FCoE. Depending on the options and features you choose, the SVC layer adds a modest 60 to 100 microseconds to each read and write.
Each SVC node dedicates four of its six cores, and 2GB of its 24GB cache, to use with compression. Those interested in beefing up compression performance, either with FlashSystems or with any other disk, can choose the "Compression Hardware Upgrade Boosts Base I/O Efficiency" (affectionately known as the CHUBBIE) RPQ 8S1296 for SVC systems with software version 184.108.40.206 or higher. Basically, this RPQ adds another 6-core CPU and another 24GB of cache, so that each node can dedicate 8 cores for compression, and 26GB of cache for compression processing. Initial test results show this can increase performance 3x!
IBM Network Advisor
The [IBM Network Advisor v12.1] management software provides comprehensive management for data, storage and converged networks. This single application can deliver end-to-end visibility and insight across different network types--it supports Fibre Channel SANs (including Gen 5 Fibre Channel platform), IBM FICON and IBM b-type SAN FCoE networks--and provides new features to manage your Brocade and IBM b-type SAN switches.
Cisco MDS 9710 Multilayer Director
The [Cisco MDS 9710 Multilayer Director] is mainframe-ready, with full support for System z FICON and Fibre Channel protocol (FCP) environments. This director supports eight module slots for a maximum of 384 ports.
Well it's Tuesday again, and you know what that means? IBM Announcements!
You might be thinking, didn't IBM just have a [huge storage announcement October 8, 2013]? You would be right! IBM's $1B additional investment in Storage has been like a shot of adrenaline, getting new features and functions out sooner to our clients.
DS8870 Disk System Release 7.2
New IBM POWER7+ controllers. The previous models of DS8870 were based on POWER7 controllers, and these new models have POWER7+ processors. This change enhances performance across the board, from mainframe to distributed systems, from sequential to random. Customers with existing POWER7-based models will be able to do an MES upgrade to the new POWER7+ next year.
For comparison with older DS8000 models, here are some internal IBM measurements we took for Database workloads on both z/OS(mainframe) and Distributed systems with typical 70% read, 30% write and 50% cache hit:
(Table: IBM internal measurements, in thousands of IOPS, for DB workloads on z/OS and on Distributed systems.)
New 1.2TB (10K RPM) and 4TB (7200 RPM) self-encrypting enterprise drives (SED). This is a 33% boost over the previous 900GB and 3TB drives previously available. As with all the other drives in the DS8870, these new drives include the encryption chip right on the drive itself, offering encryption with scalability.
Improved security. Release 7.2 will support the U.S. National Institute of Standards and Technology [NIST.gov] 800-131A specification, raising the minimum encryption security strength from 96 bits to the required 112 bits on the customer IP network. This involves updates to the security firmware, the management software, and the digital signatures on code loads.
Metro Mirror enhancement for System z. By avoiding serial conflicts of updated blocks, this enhancement can boost performance up to 100 percent when using Metro Mirror with z/OS applications on System z mainframes.
Easy Tier™ reporting and graphs to determine optimal mix. Now you can see for yourself how sub-LUN automated tiering is helping your applications.
Easy Tier Workload Categorization
New workload visuals help clients and IBM technical specialists compare activity across tiers within and across pools to help determine optimal drive mix for current workloads
Easy Tier Data Movement Daily Report
New Easy Tier summary report every 24 hours illustrating data migration activity (5-minute intervals) can help visualize migration types and patterns for current workloads
Easy Tier Workload Skew Curve
Shows the skew of all workloads across the system in a graph, to help clients visualize and accurately tier configurations when adding capacity or a new system. Clients can import the data into Disk Magic.
All-Flash Optimization. Yesterday, in my post [IBM FlashSystem versus EMC XtremeIO], I mentioned that any hybrid systems like the IBM Storwize V7000 that can support a mix of SSD and HDD can obviously be configured as SSD-only. Apparently, that was not obvious to many readers, so I apologize. For the DS8870, you can configure an all-Flash (SSD only) configuration, and Release 7.2 added some optimization when configured with SSD only.
Compared to a configuration of 1,056 15K RPM 146GB drives in RAID-10, a configuration of 224 400GB SSDs in RAID-5 provides the same 72 TB of usable capacity, but is 70 percent faster, requires 33 percent less floor space, and consumes 62 percent less energy.
(Note: Performance results based on measurements and projections using IBM benchmarks in a controlled environment.)
OpenStack™ support. The DS8870 now offers the [OpenStack Cinder] interface for block LUN allocations in OpenStack environments. IBM is a Platinum sponsor of the OpenStack Foundation, and OpenStack is the strategic platform for IBM private and hybrid clouds.
XIV Storage System
Following on the heels of the [XIV enhancements announced], IBM has now added 800GB Solid State Drives (SSD) as Read cache for its 4TB drive-based models.
DCS3860 Disk System
The DCS3860 is the next generation of the DCS3700 disk system. Designed with Linux-x86 servers in mind, the system offers direct SAS host attachment, 24GB of cache, and 60 drives in a compact 4U drawer. Like its predecessor, the drives are held on five pull-out trays, with twelve hot-swappable drives per tray. You can add up to five more expansion units, with 60 drives each, for a total of 360 drives in 24U of rack space.
These new models will help our clients deploy new workloads and consolidate existing workloads.
Each resident presented at least six proposals for blog post ideas. A proposal included a title and short description of what it would entail. Titles had to be less than 70 characters, and the short descriptions were typically just a few sentences.
These were presented to the entire team, and we picked them apart, suggested better wording for the titles, or different ways to approach the topic.
"I treat others respectfully, attacking ideas and not people. I also welcome respectful disagreement with my own ideas.
I believe in intellectual property rights, providing links, citing sources, and crediting inspiration where appropriate.
I disclose my material relationships, policies and business practices. My readers will know the difference between editorial, advertorial, and advertising, should I choose to have it. If I do sponsored or paid posts, they are clearly marked.
When collaborating with marketers and PR professionals, I handle myself professionally and abide by basic journalistic standards.
I always present my honest opinions to the best of my ability.
I own my words. Even if I occasionally have to eat them."
Words to live by.
The residents spent most of the day working on our blogs from the proposals that were approved. The target was around 400 to 600 words in length, with one or two stock photos.
IBM is the #1 vendor for Social Business tools, so it makes sense for us to use our own stuff to facilitate the submission process. The residents submit their blog posts to IBM Connections as an activity in the Cloud Social Media Residency community. All of the resources we used and all of the presentations we saw are here in the community.
As an incentive, prizes were given out to those who submitted the most posts by end of the day.
We were given certificates for completing the class, and a "Redbooks Thought Leader" emblem to put on our blog.
Ryan Boyles took a group photo! If it seems that the photo is slightly askew, it is to make me look taller. Yes, I could have used GIMP to fix the orientation, but why bother? I look tall! Woo hoo! I will have to remember this technique for future group photos.
ITSO Cloud Social Media Residency, Oct 2013. Photo by Ryan Boyles.
Lastly, I would like to thank Vasfi, Tamikia, Hillary, Caroline, Ric, Jane, LeeAnne, Tina, Karen, Michael, Shelbee, Farzad, Stewart, Arun, Eric, Chris, Hans, Odilon, Mohsin, Wolfgang and the rest of the ITSO team for a wonderful job organizing this week!
The gondolier propelled the boat with an oar, stopping a few times to belt out beautiful Italian songs.
Truly impressed, I asked the gondolier how long was the training for this job. "Six weeks!" he answered. Wow! Where can I learn to sing like that in six weeks?
He clarified. No, the Venetian hotel hires competent singers, and then spends six weeks to teach them to row the gondola. Duh!
I asked Vasfi Gucer, our ITSO project leader for this residency, why there were so many Cloud topics on the agenda for this social media training. He explained it was just as important to emphasize "why" people need to be passionate about Cloud, in addition to the "what" and "how" of blogging.
This reminded me of this quote from fellow author Hugh MacLeod. I highly recommend his series of books.
"Blogging requires passion and authority. Which leaves out most people."
--- Hugh MacLeod.
Vasfi had invited Cloud experts who already have the authority to blog, and the point of this residency is for the residents to become passionate in sharing their expertise.
Here are some of the people that spoke on Cloud:
Ric Telford, IBM VP of Cloud Services
Ric Telford shared with us IBM's point of view of where the Cloud industry is going. He has been in this job position since 2009, and shared with us the history of how the IBM Cloud business has evolved in the past four years.
Jane Munn, IBM VP Business Line Executive for Cloud hardware
As the Center of Competency on Cloud for all 12 IBM Executive Briefing Centers in my group, I had to report to Jane Munn on a frequent basis. I was pretty candid on those calls about what we should change, and I am glad to see that many of my suggestions have been implemented, or are being considered for 2014.
Michael Fork, IBM Lead Architect for Hosted Private Cloud
Michael Fork gave two great presentations, one on [IBM SoftLayer] Cloud services, and the second on IBM's support of open standards, such as [OpenStack] and Cloud Foundry.
Hans Zai, IBM Cloud Service Line Leader; and Odilon Magroski Goulart Junior, IBM Technical Solution Architect
All the residents had to present in front of the class on their expertise. Hans and Odilon presented their work on [IBM SmartCloud for SAP Applications]. Hans is from Sweden, and Odilon from Brazil, so their perspectives on this were quite interesting.
When IBM renamed LotusLive to [SmartCloud for Social Business], I thought this would be the naming convention for all of our Software-as-a-Service (SaaS) offerings.
But SmartCloud for SAP Applications is a Platform-as-a-Service, providing the SAP environment as a platform, which allows clients to then deploy their customized SAP applications on this platform.
What did I present on for my "Share your expertise" session? IBM System Storage, of course! Storage is a critical part of Cloud!
So, my gentle readers, what topics do you want me to write about that combines Storage and Cloud? Enter your suggestions in the comments below.
"SmartCloud Enterprise Object Storage is switching from 3rd-party Nirvanix to its internal IBM Softlayer. This one involves more in-depth explanation which I will save for another post."
It's time to make good on that promise! Here is a quick diagram to help visualize the agreement (with sincere apologies to [Jessica Hagy]!) but not to scale, of course!
Last month, Nirvanix announced it was shutting down October 15. Here was the exact wording from their website:
For the past seven years, we have worked to deliver cloud storage solutions. We have concluded that we must begin a wind-down of our business and we need your active participation to achieve the best outcome.
We are dedicating the resources we can to assisting our customers in either returning their data or transitioning their data to alternative providers who provide similar services including IBM SoftLayer, Amazon S3, Google Storage or Microsoft Azure.
We have an agreement with IBM, and a team from IBM is ready to help you. In addition, we have established a higher speed connection with some companies to increase the rate of data transfer from Nirvanix to their servers.
We are working hard to have resources available through October 15 to assist you with the transition process, and have set up a rapid response team that can be reached at (619) 764-5650 [press 2 for customer support during normal business hours] or (888) 791-0365 after business hours, or contact email@example.com.
Please check back to this web page periodically for status updates.
We thank you for your support and patience.
The Nirvanix team
UPDATE ON NIRVANIX
On October 1, 2013, Nirvanix voluntarily sought Chapter 11 bankruptcy protection in order to pursue all alternatives to maximize value for its creditors while continuing its efforts to provide the best possible transition for customers.
In response, IBM put out this press release:
"In light of reports that Nirvanix has decided to soon cease operations, IBM is moving quickly to help clients of our Nirvanix-based Object Storage offering to move their data to other solutions such as the robust and highly scalable IBM SoftLayer Object Storage or IBM's persistent storage solution."
To understand why this is a big deal, consider the difference between Cloud Computing and Cloud Storage. Cloud Computing is like buying gasoline at your favorite gas station. If the station is closed, you can just drive a few blocks to another gas station. The ease with which customers can switch from one Cloud Compute provider to another is part of the appeal, forcing Cloud Compute providers to be extremely efficient at what they do to offer the lowest price.
Cloud Storage is completely different, more like a safety-deposit box at the bank, or a storage unit to hold all of your boxes of tax receipts. If you have a small amount stored away in a safety-deposit box, this is probably just a minor inconvenience: you can take out the contents and store them at home, or find another bank and open a new safety-deposit box.
However, if you have a lot stored in a storage unit, it may be more difficult.
For example, I am in the process of remodeling my home, so I have moved a lot of my stuff to a 400 cubic-foot storage unit during the process. There were a variety of storage units within a few miles of my home: some were fully air-conditioned and offered 24x7 access, while others were not air-conditioned or only allowed access during business hours. It took me several weekends to box everything up and move it to the storage unit. My car only holds 12-14 boxes at a time, so many trips were involved.
If the Storage Unit company told me that they were closing down, and that I would have to move all of these boxes to another facility, I would have to hire moving professionals to do all the work. This is in effect what companies need to do with their data. They must take the data off Nirvanix systems, and either store it in-house, or find another cloud storage provider.
IBM offers three options:
IBM [SoftLayer Object Storage] offering which is an OpenStack Swift-based Object Storage solution. IBM's SoftLayer object based storage solution provides a robust, highly scalable solution, with the ability to retrieve and leverage data the way you want to, and grow when you need. You can choose to store your objects in Dallas, Texas (USA), Amsterdam (Europe), and/or Singapore (Asia).
SCE persistent storage solution, where you can manage storage resources by attaching storage to an instance during the instance creation process.
An alternate storage solution of your choice. Yes, IBM will help you move your data to Amazon, Google, Microsoft, etc. While technically competitors, IBM also has strategic partnerships in place with each to facilitate the movement.
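For the curious, the first option speaks the standard OpenStack Swift HTTP API: you PUT an object into a container under your account's storage URL, passing an authentication token in a header. Here is a minimal Python sketch that builds (but does not send) such a request; the endpoint hostname and token are hypothetical placeholders, not actual SoftLayer credentials:

```python
import urllib.request

# Hypothetical Swift storage URL and token -- real values come back from
# the authentication step with your provider.
STORAGE_URL = "https://dal05.objectstorage.example.com/v1/AUTH_myaccount"
AUTH_TOKEN = "AUTH_tk_example"

def build_upload_request(container: str, obj: str, data: bytes) -> urllib.request.Request:
    """Build (but do not send) the Swift PUT request that stores an object."""
    url = f"{STORAGE_URL}/{container}/{obj}"
    req = urllib.request.Request(url, data=data, method="PUT")
    req.add_header("X-Auth-Token", AUTH_TOKEN)          # Swift authentication header
    req.add_header("Content-Type", "application/octet-stream")
    return req

req = build_upload_request("backups", "tax-receipts-2013.tar", b"...")
print(req.get_method(), req.full_url)
```

The point of the sketch is that moving to or from any Swift-based provider is just HTTP, which is part of what makes the migration assistance described above tractable.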
These options are not just for IBM's SmartCloud Enterprise Object Storage clients. Nirvanix has named IBM the savior for all of its other non-IBM customers as well. Why IBM? Well, IBM is one of the most recognized names in the IT industry. Not just one of the biggest Cloud Service providers, IBM also has an army of professionals in its Global Services division to help.
Well, it's Tuesday again, and you know what that means? Announcements!
Today, IBM's announcements are designed to change the economics of big data analytics, cloud, mobile and social media.
[Software Defined Environments] require [Software Defined Storage], combining storage virtualization with open, extensible, industry-led interfaces. The IBM SmartCloud Virtual Storage Center (VSC) and IBM Storwize Family are the market leaders in storage virtualization. SmartCloud VSC, Storwize Family, and XIV support the industry-led OpenStack interfaces.
Here are some of the announcements today:
IBM Storwize® Family
The [SAN Volume Controller] was first introduced 10 years ago, in 2003. Today, clients enjoy these storage virtualization capabilities across a variety of offerings, known collectively as the [IBM Storwize Family].
IBM adds a new member to the Storwize Family. In addition to SAN Volume Controller, Storwize V7000, Storwize V7000 Unified, Flex System V7000, Storwize V3700, and Storwize V3500, IBM is announcing the [IBM Storwize V5000]. Here's a quick side-by-side comparison:
Scalability: Maximum configuration
- Storwize V7000: Four control enclosures clustered together, 36 expansion enclosures, 960 drives, 64GB cache
- Storwize V5000: Two control enclosures clustered together, 12 expansion enclosures, 336 drives, 32GB cache
- Storwize V3700: One control enclosure, 4 expansion enclosures, 120 drives, 8GB cache upgradeable to 16GB, optional Turbo performance

Connectivity
- Storwize V7000: 8Gbps FCP and 1GbE iSCSI standard; optional 10GbE iSCSI/FCoE. Can upgrade to Storwize V7000 Unified by adding NAS File Modules to add support for CIFS, NFS, HTTPS, SCP and FTP protocols
- Storwize V5000: 1GbE iSCSI, 6Gbps SAS, 8Gbps FCP and 10GbE iSCSI/FCoE standard
- Storwize V3700: 1GbE iSCSI, 6Gbps SAS standard; optional 8Gbps FCP and 10GbE iSCSI/FCoE

Storage virtualization/Data Migration
- Storwize V7000: Internal virtualization and Data Migration standard; optional external virtualization
- Storwize V5000: Internal virtualization and Data Migration standard; optional external virtualization
- Storwize V3700: Internal virtualization and Data Migration standard (external devices can be attached to ingest data only)

Remote Mirror
- All three: optional Metro Mirror, Global Mirror, and Global Mirror with Change Volumes

Sub-LUN Automated Tiering
- Storwize V7000: Easy Tier standard
- Storwize V5000: optional Easy Tier
- Storwize V3700: optional Easy Tier

VMware and OpenStack integration
- All three: VMware VAAI, VASA, vCenter plug-in, and OpenStack Cinder APIs standard
Storwize V7000, V5000 and V3700 now support larger 800GB SSD drives. Previously, they only supported SSD drives up to 400GB.
VMware 5.5 and VASA support. VMware ships every release with built-in support for all members of the IBM Storwize Family, but it bears repeating here just in case you were interested. IBM is a leading reseller of VMware, so it makes sense for IBM's storage devices to support everything that VMware customers could possibly want in terms of VMware integration. IBM SmartCloud VSC, Storwize Family, and XIV Storage System are no exception!
New IP-based replication driving lower costs for replication. Previously, Metro Mirror, Global Mirror and Global Mirror with Change Volumes were FCP-based, and many clients bought extra equipment to run FCP packets over long-distance IP (known as FCIP). Now, clients can replicate across distance using native IP-based connections, without FCIP routers.
In my blog posts covering [Edge 2013 - Day 3 Solution Center], I mentioned that IBM has certified Bridgeworks' SANSlide 150SVCV7K unit that provides a Riverbed-like WAN Optimization for long-distance replication. Now, IBM has fully integrated Bridgeworks' SANSlide network optimization technology directly into Storwize Family!
All members of the Storwize Family will support 1GbE remote disk replication, and this will be extended to 10GbE support at a later date.
The [Storwize V3700] is now offered in 48-volt Direct Current (DC) models with [NEBS/ETSI compliance] for telecommunications companies that require it, and now supports 4TB drives.
When we introduced [IBM SmartCloud Storage Access] in February, it was to offer self-service, automated policy-based provisioning for file storage on the SONAS and Storwize V7000 Unified. Today, we add self-service, automated policy-based provisioning for block storage. The first products to be supported are SmartCloud VSC, the entire Storwize Family, and XIV Storage Systems. In addition to the web portal, the Storage Cloud Integration API enables 3rd party ISV applications to support SmartCloud Storage Access.
Storage admins will no longer need to be bothered with tedious provisioning requests, freeing up more time for them to work on more strategic, transformational projects.
[IBM SmartCloud Virtual Storage Center] was introduced last year, combining SAN Volume Controller, Tivoli Storage Productivity Center, Tivoli FlashCopy Manager and the Storage Analytics Engine into a single license. The initial offering provided cross-platform "Tiered Storage Optimization" that recommended which LUNs should be moved from one disk array to another to manage performance vs. cost. Today, IBM is first to market with an automated version, moving LUNs automatically from one disk array to another.
[SmartCloud Enterprise Object Storage] is switching from 3rd-party Nirvanix to its internal IBM Softlayer. This one involves more in-depth explanation which I will save for another post.
IBM XIV Storage System
As part of the [due diligence] team for IBM to acquire the XIV company back in 2007, I am glad to see how this system has evolved since then. I have certainly [blogged quite a bit on XIV] over the years.
Earlier this year, IBM introduced Hyper-Scale Mobility which allows the storage admin to move LUNs non-disruptively from one XIV frame to another. Today, Hyper-Scale Cross-system Consistency Groups allows you to have snapshots of collections of volumes across multiple XIV frames, up to 3PB of capacity snapped at the same instant in time.
The current supported releases of OpenStack are Folsom and Grizzly, and the newest release is Havana. XIV now offers OpenStack Cinder interfaces at the Havana level.
XIV now offers a RESTful API for monitoring and provisioning. [REST] is a de-facto standard in WEB services and cloud implementations. XIV's RESTful API is a programmatic management interface that follows REST principles:
Resources are identified by global identifiers (URIs)
Data is sent as JSON/XML over HTTP
Manipulations of resources are done by HTTP methods (GET, PUT, POST, DELETE)
The interface is Stateless and Hypertext driven
The interface is universally supported, programming language and platform agnostic. For monitoring, the following GET example could show the list of volumes on a particular XIV storage system:
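As a sketch of the idea (the URI and JSON field names here are illustrative placeholders, not the documented XIV schema), a GET against a volumes resource would return JSON that any client, in any language, can parse. In Python:

```python
import json

# Hypothetical response body for GET https://xiv01.example.com/api/volumes --
# the URI and field names are illustrative, not the official XIV REST schema.
response_body = """
{
  "volumes": [
    {"name": "vol1", "size_gb": 17, "pool": "pool_a"},
    {"name": "vol2", "size_gb": 34, "pool": "pool_a"}
  ]
}
"""

volumes = json.loads(response_body)["volumes"]
for v in volumes:
    print(f'{v["name"]}: {v["size_gb"]} GB in pool {v["pool"]}')
```

Because the payload is plain JSON over HTTP, no XIV-specific client library is needed, which is exactly the platform-agnostic appeal of REST mentioned above.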
For provisioning, the following PUT example could create "vol1" on that XIV storage system.
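A hedged sketch of that PUT, again with a hypothetical host, URI and payload fields (the real XIV REST reference defines the exact resource paths), built here with Python's standard library but not actually sent:

```python
import json
import urllib.request

def build_create_volume_request(host: str, name: str, size_gb: int, pool: str) -> urllib.request.Request:
    """Build (but do not send) a PUT request that would create a volume."""
    # Hypothetical resource path -- illustrative only.
    url = f"https://{host}/api/volumes/{name}"
    body = json.dumps({"size_gb": size_gb, "pool": pool}).encode()
    req = urllib.request.Request(url, data=body, method="PUT")
    req.add_header("Content-Type", "application/json")
    return req

req = build_create_volume_request("xiv01.example.com", "vol1", 17, "pool_a")
print(req.get_method(), req.full_url)
```

The same four HTTP verbs listed above (GET, PUT, POST, DELETE) cover the whole monitoring and provisioning lifecycle, which is what makes the interface stateless and scriptable.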
IBM SmartCloud Storage Access to allow self-service provisioning (see the SmartCloud section above).
Data-at-Rest encryption, using Self-Encrypting Drives (SED). XIV encrypts the data, with the encryption keys managed by IBM's Security Key Lifecycle Manager (SKLM) or Tivoli Key Lifecycle Manager (TKLM). If you have an XIV already, you may already have SED drives ready to use! The XIV will also encrypt the data on the SSD drives used for persistent read-cache.
Other new and enhanced offerings
For our mainframe clients, the Virtualization Engine TS7700 now supports 60 percent more capacity, and can now support 8Gbps FICON attachment.
N series N3000, N6000 and N7000 support new disk drive types and sizes, as well as Data ONTAP 8.2 Cluster-Mode. You can now lash together up to 16 N series systems into a SONAS-like single system image.
Cisco MDS 9710 Multilayer Director for IBM® System Networking is a new 16 Gbps SAN director with robust security to support multi-tenancy cloud configurations.
Whew! That is a lot of things to discuss in one post. Since they were all related, I did not want to split it up into parts.
Wrapping up my coverage of the [IBM Edge 2013] conference, I have some photos of people I ran into at the Solutions Center.
Leslie Hattig and Lisa Stone, both account managers for [MarkIII Systems], an IBM Business Partner located in Houston, TX. These ladies are inseparable BFFs, I have never seen one without the other! I first met them at the [Storage Symposium in Chicago] back in 2009.
Stacy Tabor was our Community Manager for the [Storage Community]. This community covers IT Storage challenges, hot topics, architecture and solutions. You'll find industry news, videos, blog discussion threads on timely topics, exclusive analyst white papers and experts opinions. I am a frequent contributor, myself, and thank Stacy for her past service. She helped run a "Social Media Hour" at Edge for all the bloggers like me to get to meet each other.
I could not resist getting a picture with this Las Vegas [Cirque du Soleil] dancer. This was an invitation-only event, sponsored by IBM Business Partners, that I was invited to during the Social Media Hour. (See, it pays to be social!) I think the visual effects of the flag she was waving turned out really well in the picture! And yes, in case you are wondering, that is my favorite grape-flavored beverage (GFB) in my left hand. Posing for this picture was quite the balancing act, but then I am also a certified yoga instructor, so I was able to manage!
Tanaz Sowdagar is an IBM Storage Rep for our Business Development Team. This includes finding other companies to OEM our technology and re-brand it under their own names. I have worked with Tanaz for many years, helping answer questions that potential OEM partners have about our products and technologies for this purpose.
This was Michelle, my Conference Room Monitor. Each room had one, scanning the bar-codes on attendee badges to count the number of people in each session, and supporting anything the speaker needed, like getting the A/V guy to come help set up the laptop projector.
Since this was Friday, the last day of the conference, I decided to dress casually, consistent with many companies' [Casual Friday] dress code policies. I am wearing the "IBM Edge Rocks" tee-shirt given out at the concert and Solutions Center the first few nights.
Getting this shot right took several takes, as the man I handed my camera to had apparently never seen a digital camera before and did not know how to focus.
Finally, leaving Las Vegas, I sat next to Mrs. Joey Clark, wife of "Bulldog" Clark of the Utah band [Blammity Blam]. She also sometimes plays violin with the band. She is a newly-wed, and I am not sure if Joey is her name or her husband's name. (Joey, if you are out there, and want me to correctly identify you, please write a comment in the section below.)
What I have learned however, is that if a beautiful girl is sitting next to me on the plane, she will either talk to me the entire flight, implying that she is single, or mention within the first 30 seconds of conversation that she is married. Sadly for me, it was the latter.
(We were both flying on to Dallas, TX, whereupon she was going to visit her parents in Florida, and I was on my way to Sao Paulo, Brazil to get stuck there amongst the protesters in what is now called the [V for Vinegar movement], but I will save that for another blog post!)
Well, that wraps up my coverage of Edge 2013. I am sorry it took so many months to cover all the material, but I did not want to have it go uncovered much longer.
Next year's [Edge 2014] is expected to be bigger and better. It will be in Las Vegas again, but this time at the Venetian Hotel, May 19-23, 2014. I plan to be there!