Tony Pearson is a Master Inventor and Senior IT Architect for the IBM Storage product line at the
IBM Executive Briefing Center in Tucson Arizona, and featured contributor
to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is
author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
My books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
As you can imagine, I get a lot of email from around the world. This one, from a loyal reader from overseas, was particularly interesting. Normally, I would direct them to read the fantastic manual [RTFM], but decided instead to go ahead and tackle it here in my blog.
I have followed your blog for several years; it has served as a reference and a source of training in my professional career, and I want to thank you.
I am writing because my company has acquired a new IBM Storwize V7000 Gen2, with 16 FC ports (8 per controller node), to replace a Gen1, along with an 8-port FC FlashSystem 900. The idea is to virtualize part of the FlashSystem 900 behind the V7000, and to assign the rest directly to the hosts. After much reading of forums and storage Redbooks, I still have no clear picture of how the SAN should be cabled, or how the zoning should be done, to carry out this installation. I would appreciate it if you could write about this subject, since SAN zoning and cabling seem to be so controversial, and, if possible, clarify my scenario.
I will tackle this in three steps.
First, let's attach "Server 1" and the FlashSystem 900 to the SAN fabric. IBM Spectrum Virtualize can handle one, two or even four separate fabrics. Let's assume you have a dual-port Host Bus Adapter (HBA) in server 1, and two redundant fabrics. We will connect each server port to each FCP switch. Likewise, we will connect each FCP switch to the FlashSystem 900, carve up "Volume 1", and create SAN "Zone A1" and "Zone A2", which identify "Server 1" as the initiator, and "FlashSystem 900" as the target. This is all basic stuff.
"All Storwize V7000 Gen2 nodes in the Storwize V7000 Gen2 clustered system are connected
to the same SANs, and they present volumes to the hosts. These volumes are created from
storage pools that are composed of mDisks presented by the disk subsystems.
The fabric must have three distinct zones:
Storwize V7000 Gen2 cluster system zones: Create one cluster zone per fabric, and include any port per node that is designated for intra-cluster traffic. No more than four ports per node should be allocated to intra-cluster traffic.
Host zones: Create a host zone for each server host bus adapter (HBA) port accessing Storwize V7000 Gen2.
Storwize V7000 Gen2 storage zones: Create one Storwize V7000 Gen2 storage zone for each storage system that is virtualized by the Storwize V7000 Gen2. Some storage control systems need two separate zones (one per controller) so that they do not 'see' each other."
Second, we connect the Storwize V7000 Gen2 to the FCP switches. You don't need to connect all of the ports, but I recommend connecting each controller node to each FCP switch, requiring four cables. Add more connections for added performance bandwidth.
Carve up "Volume 2", which will be referred to as a "managed disk" (mDisk for short), and create a "storage pool", formerly known as a "managed disk group", which is why you often see MDG in naming conventions and examples. Storage pools can contain one or more managed disks, and you can add more dynamically as needed.
The "storage zone" indicates the Storwize V7000 Gen2 as the initiator, and the FlashSystem 900 as the target. If you want to increase the performance bandwidth, consider more cables between the FCP switches and the FlashSystem 900. We create "Zone B1" and "Zone B2". I recommend a separate "storage zone" for each additional storage system that you choose to attach to the Storwize V7000 Gen2.
The "cluster zone" connects all of the Storwize V7000 Gen2 node ports together for node-to-node (intra-cluster) communication. Storwize V7000 Gen2 ports can serve as both initiators and targets dynamically. For example, when you write to one node, the node then copies the cache block over to the second node, so there are two copies stored safely on separate nodes. Since we have two fabrics, we create "Zone C1" and "Zone C2".
Third, we connect "Server 2" to the FCP switches, same as we did with "Server 1". We create "Volume 3", which is a "virtual disk", or vDisk for short, from the storage pool containing Volume 2. The "host zone" indicates Server 2 as the initiator, and Storwize V7000 Gen2 as the target. We create "Zone D1" and "Zone D2". I recommend putting each additional server in its own set of host zones.
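The three steps above can be sketched as a simple zoning plan. Everything below (port names and zone membership) is purely illustrative; real zones would use the actual port WWPNs from your fabric:

```python
# Hypothetical sketch of the zoning plan described above, modeling each zone
# as an (initiator, target) pairing on one fabric. All port names are made
# up for illustration; real zones use actual WWPNs.

ZONES = {
    # Server 1 direct to FlashSystem 900 (host zones, one per fabric)
    "Zone_A1": {"fabric": "A", "initiator": "server1_p1",  "target": "flash900_p1"},
    "Zone_A2": {"fabric": "B", "initiator": "server1_p2",  "target": "flash900_p2"},
    # Storwize V7000 Gen2 virtualizing FlashSystem 900 (storage zones)
    "Zone_B1": {"fabric": "A", "initiator": "v7000_node1", "target": "flash900_p3"},
    "Zone_B2": {"fabric": "B", "initiator": "v7000_node2", "target": "flash900_p4"},
    # Node-to-node intra-cluster communication (cluster zones)
    "Zone_C1": {"fabric": "A", "initiator": "v7000_node1", "target": "v7000_node2"},
    "Zone_C2": {"fabric": "B", "initiator": "v7000_node2", "target": "v7000_node1"},
    # Server 2 to the virtualized vDisk (host zones)
    "Zone_D1": {"fabric": "A", "initiator": "server2_p1",  "target": "v7000_node1"},
    "Zone_D2": {"fabric": "B", "initiator": "server2_p2",  "target": "v7000_node2"},
}

def zones_on_fabric(fabric):
    """Return the zone names configured on one fabric."""
    return sorted(n for n, z in ZONES.items() if z["fabric"] == fabric)

# Each fabric carries one zone of each type, so either fabric alone
# keeps every path alive if the other fabric fails.
assert len(zones_on_fabric("A")) == len(zones_on_fabric("B")) == 4
```

The point of the data layout is that losing an entire switch takes out only one fabric's worth of zones, leaving a complete set of paths on the surviving fabric.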
In theory, you could have a server connected to both Volume 1 and Volume 3. For example, a Windows server would have a "C:" drive connected directly to FlashSystem 900 for high-speed performance, and have a "D:" drive on Storwize V7000 Gen2 to contain data. The Storwize V7000 Gen2 introduces 60 to 100 microseconds of added latency, but provides added value such as FlashCopy, Thin Provisioning, and Real-time compression.
Of course, there are unique situations that might require special configurations, depending on the servers, operating systems, host bus adapters, FCP switches, and storage systems involved.
In the 2004 comedy ["A Day Without a Mexican"], the director envisions how disruptive life would be in California if all the Mexicans suddenly disappeared. The point is that sometimes you take things in the background for granted.
I was reminded of this when I saw Mark Underwood's blog post [Mainframe: Still Not Crazy After All These Years]. The article reminds us how critical IBM z Systems mainframes (and related storage like the IBM DS8880 disk systems) are in our lives. Here's an excerpt:
"Warren Buffett's Berkshire Hathaway started buying up IBM stock in 2011 and bought still more of IBM later. Despite its disappointing short-term valuation, Berkshire Hathaway is standing by its IBM investment, which is one of Berkshire's top four plays. ... To make this case, some statistics may be needed:
The z13 can withstand an 8.0 earthquake.
z Systems enjoy the highest standardized security certification (FIPS 140-2, highest level 4 of 4).
23 of the world's top 25 retailers use a mainframe.
92 of the top 100 banks are mainframe users.
All 10 of the top 10 insurers have commitments in mainframe technologies.
Around 80 percent of all corporate data is managed by mainframes.
The z13 can process 2.5 billion transactions daily (that's 100 [Cyber Mondays], as IBM's Mark Anzani, VP of z Systems Strategy, Resilience and Ecosystems, observed)."
... In fact, and notwithstanding perceptions to the contrary, the mainframe's center-stage position in large corporations around the world has not budged. That's the conclusion of an industry survey sponsored by Syncsort Inc. and conducted in 2015 by Enterprise Systems Media, a publisher of magazines for IT managers and technical professionals. Seven out of 10 respondents (IT planners, architects and managers at global enterprises with $1 billion or more in annual revenues) ranked the use of the mainframe for large-scale transaction processing as very important."
What would a comparable film depicting "A Day without a Mainframe" be like? I would imagine it somewhere between a disaster movie and an end-of-the-world zombie horror movie like [28 Days Later]. I would gladly take a million dollars to write the screenplay!
(FCC Disclosure: I work for IBM and am a filmmaker as well. Earlier in my career, I was chief architect of IBM's Data Facility Storage Management Subsystem (DFSMS) which manages around 80 percent of the world's corporate data. This blog post can be considered a "paid celebrity endorsement" for IBM's z13 System mainframes and DS8880 Disk Systems. I have personal experience with both and highly recommend them. I am neither a Mexican nor resident of California, but work regularly with both in my job responsibilities. Like Warren Buffett, I also own stock in both IBM and Berkshire Hathaway companies. I had no involvement in the making of any of the major motion pictures mentioned in this blog post, have no financial interest in their distribution, and have not been provided any compensation for mentioning them in this blog post. They are all great movies worth watching!)
What do you think the movie would be like? Enter your comments below!
(Actually, the [XIV Model 314] was announced on Nov 10, 2015 last year, but announcements made in November and December are often overlooked between distractions like holidays and year-end processing. Today's announcement was to eliminate the "not available in some countries" restriction. The last time I mentioned on this blog that a product was not available in some countries, I had tons of questions of "why". Hopefully, waiting until a product is available in all countries eliminates that concern.)
What does the XIV model 314 offer? IBM doubled the processors, up to 180 cores, and doubled the DRAM cache, up to 1440 GB. Both of these changes were done to improve the Real-time compression capability.
To reduce test effort cycle time, IBM simplified the configuration options:
Instead of ranging from 6 to 15 modules, the model 314 is limited to 9-15 modules.
The drive sizes are reduced to just 4TB and 6TB capacities.
If you want a Solid-State drive (SSD) for cache boost, only the 800GB option is available.
Through a combination of thin provisioning and compression, you can define up to 2 PB of soft capacity per rack.
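As a rough illustration of what that soft-capacity claim implies (the usable-capacity figure below is my assumption for a full rack, not an official specification), the over-commit needed from thin provisioning plus compression works out to roughly 4x:

```python
# Back-of-the-envelope arithmetic for the "2 PB soft capacity per rack"
# claim. The usable figure is an assumption for illustration only.

usable_tb = 480.0        # assumed usable physical capacity per rack
soft_tb = 2.0 * 1000     # 2 PB soft (provisioned) capacity limit

# Effective over-commit that thin provisioning plus Real-time
# Compression must deliver together to reach the soft limit:
overcommit = soft_tb / usable_tb
```

With a typical 2:1 compression ratio, the remaining roughly 2x would come from thin provisioning, i.e. volumes defined but not yet fully written.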
Firmware v11.6.1 reduces the minimum volume size for compression from 103 GB to 51 GB. Perpetual Spectrum Accelerate firmware licenses can be used with the XIV Model 314.
From left to right: Melinda Jensen, Bill Terry, Lee Olguin, Kris Keller, Tony Pearson, and Kristy Knight.
The storage, cloud and analytics team celebrated with cake and party hats. None of us "birthday boys" eat chocolate, so this year we chose a new flavor: Strawberry Cream! It was delicious.
It was a good time to reflect on our success and accomplishments. In 2015, I helped close over $270 million USD in revenues for IBM, meaning that I helped close over a million [per day on the job].
The IT industry went through a lot of changes also. Hewlett-Packard [split into two smaller pieces]. Dell started [EMC's fade to non-existence]. Cisco and IBM joined forces to create VersaStack, a converged system that combines the most popular x86 servers with the industry's best storage. Analysts recognized IBM's leadership in today's [Cognitive Era].
My friends over at Appcessories sent me an awesome infographic on the Internet of Things. If you happen to receive any gifts this holiday related to any of these categories, mention them in the comments below!
The State of Internet of Things in 6 Visuals – By the team at Appcessories
Last Friday, I helped students learn about Science, Technology, Engineering and Math (STEM). This was the annual [2015 Arizona STEM Adventure] event in Tucson, Arizona. This year, Pima Community College Northwest Campus provided the venue.
The event hosted more than 900 students, ranging from fourth to eighth graders. Buses collected them from 31 schools across seven cities and towns in the Tucson area. Home-schooled, private-schooled and charter-schooled children participated as well.
As I arrived, students lined up to ride this "hover chair". A lawn-blower motor floated a chair attached to a platform. A blue tarp represented water. Volunteers would pull the hover chair across the tarp, giving the kids a fun ride. I wanted to ride it myself, but it was not engineered for my body weight!
Students chose among the most interesting of 50 exhibits. IBM led two of these exhibits.
First, we had the [Bike Wheel Gyroscope]. The students would stand on a rotating swivel platform, holding a spinning bicycle wheel. When the student tipped the wheel left or right, the student's body would rotate on the platform!
Second, we had Share with Storyboarding. This is the one I volunteered for. IMHO, the best part of STEM is the Arts and Design aspect needed to make products usable. Perhaps we should rename STEM to STEAM to add "A" for Arts and Design.
We held six 30-minute sessions with each group of students. Our team lead, Brenton Elmore, IBM Design Principal, explained what storyboards are, and then gave the students five topics to choose from:
Adopting homeless pets
Improving communication with teachers
A short cartoon
An idea for a mobile phone app
An idea for a new video game
Children paired up in two-person teams based on their topic interest. Why teams? Many creative collaborations involve the strengths of different teammates. For example, an author and an illustrator work together to create a comic or children's book. Broadway musicals often have a writer and a composer.
Each team spent 10 minutes drawing a six-panel storyboard on [Post-it notes], stuck to a single sheet of paper. The team would then write underneath each panel the narrative of what was occurring.
Brenton taped five or six of these to the wall to share with the rest of the class. Each team would then explain to the other students what they drew, and the narrative to go with it.
When there were an odd number of students, one of us volunteers paired up with a student. Shown here is Marilynn Franco, IBM Manager, helping young Bailey in explaining their storyboard. I helped young Lili with her storyboard about a new mobile phone app idea she had.
Well it's Tuesday again, and you know what that means? IBM Announcements!
(FCC Disclosure: This official launch also includes October 6 announcements. In any case, the usual disclaimer applies: I currently work for IBM, and this blog post can be considered a "paid celebrity endorsement" of the IBM products mentioned below.)
IBM announced various updates to its Spectrum Storage product line. Here is a quick recap.
IBM Spectrum Virtualize 7.6
Spectrum Virtualize is the new name of the "storage hypervisor" code that resides in IBM SAN Volume Controller (SVC) and Storwize family products. When you buy an SVC, you will license Spectrum Virtualize software on it. It is NOT available separately as software-only that you can install on any other hardware. There are three major improvements:
Software-based Data-at-Rest Encryption
Earlier this year, IBM delivered data-at-rest encryption for the Storwize V7000 and V7000 Unified. This week, IBM extends this support to the rest of the Spectrum Virtualize family.
Since this feature is based on the Intel processor that supports the Advanced Encryption Standard New Instructions (AES-NI), it applies only to the newer hardware: SAN Volume Controller 2145-DH8, the Storwize V7000 Gen2, FlashSystem V9000, and VersaStack converged systems that contain these. You can run Spectrum Virtualize v7.6 on older hardware models, but the encryption feature will be disabled.
Basically, by taking advantage of AES-NI commands, IBM can now offer data-at-rest encryption on any virtualized flash or disk arrays, eliminating the need for special "Self-Encrypting Drives", or SED.
The encryption keys are kept on USB memory sticks, which you can either leave in the machine or stash away in a vault or safe somewhere.
The other improvement is distributed RAID. Distributed RAID has been hugely popular on IBM XIV products, and has since found its way into the DCS3700, DCS3860 and Elastic Storage Server models.
With this new enhancement, storage admins can select "Distributed RAID-5" or "Distributed RAID-6" as alternate choices to traditional RAID ranks.
Why use it? All the drives are now active, eliminating idle spare drives that sit collecting dust and cobwebs waiting for an opportunity to spin up, only to become a terrible bottleneck when finally used for a rebuild. Since all drives participate in the reads and writes, the rebuild rate is an order of magnitude (5 to 10x) faster!
For those clients nervous about large 8TB drives and the number of days it would take to perform a traditional RAID rebuild, this should calm all of your fears.
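A back-of-the-envelope model shows why spreading the rebuild writes helps. All the numbers below (drive size, write rate, number of participating drives) are assumptions for illustration only:

```python
# Toy rebuild-time model with assumed numbers; real rebuild times depend
# on drive type, array load, and RAID geometry.

drive_tb = 8.0        # capacity to reconstruct
write_mb_s = 100.0    # assumed sustained write rate of one drive
participants = 10     # drives sharing the rebuild writes in distributed RAID

def rebuild_hours(tb, mb_per_s):
    """Hours to write `tb` terabytes at `mb_per_s` megabytes per second."""
    return tb * 1_000_000 / mb_per_s / 3600

# Traditional RAID: a single spare drive is the write bottleneck.
traditional = rebuild_hours(drive_tb, write_mb_s)
# Distributed RAID: the same writes are spread across many drives.
distributed = rebuild_hours(drive_tb, write_mb_s * participants)
```

In this toy model the traditional rebuild takes the better part of a day, while the distributed rebuild finishes in a couple of hours, consistent with the 5-10x figure above.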
This is one of those line-items that we have told clients was "just around the corner" and "coming soon, watch this space", and finally it is available. For clients using Stretched Cluster or HyperSwap across two buildings, best practice suggests keeping the quorum disk in a third building. This often meant having to dedicate a single 2U disk system in a closet somewhere, with expensive Fibre Channel cables connecting to the other two buildings.
To address this, IBM now allows the quorum disk to be based on Internet Protocol (the IP portion of TCP/IP), which can be any bare-metal or virtual machine that is LAN or WAN attached. The "quorum disk" is just a little Java program. It can also run on any cloud service provider, such as IBM SoftLayer, to which both buildings have connectivity.
A minor improvement worth mentioning is that the IBM "Comprestimator" tool that estimates the capacity savings of Real-time Compression is now integrated into Spectrum Virtualize v7.6 command line interface (CLI), allowing you to run the tool on demand, as needed, on any virtual volume.
IBM Spectrum Scale v4.2
IBM plans to offer all of its solutions in any of three flavors: software-only that you can deploy on your own server hardware, pre-built system appliances, and cloud services on IBM SoftLayer, IBM Cloud Managed Services or third-party cloud providers. Spectrum Scale is the software-only flavor, and Elastic Storage Server and Storwize V7000 Unified are pre-built systems based on that software.
File and Object access
IBM published a "Redbook" on how to implement OpenStack Swift and Amazon S3 interfaces to an existing Spectrum Scale deployment. IBM supported it, but it was basically a do-it-yourself (DIY) implementation. This has now been resolved, with full integration of OpenStack Swift and Amazon S3 object-protocol interfaces.
(For those unfamiliar with "Object storage", think of it like valet parking for your data. Before working for IBM, I was previously employed as a valet attendant, so I feel qualified to make this analogy.
If you park your car in a 10-story high parking structure, you have to remember where you parked to go find the car again. With valet parking, you hand over the keys to the valet attendant, the car gets parked, and you get a claim stub that you then use to get your car back. In the meantime, you don't know where your car is parked, and you don't care either!
Storing files in volume-level or file-level storage is like that 10-story high parking structure. You have to remember where you put it, which LUN or which sub-directory. With object storage, the system provides a "claim stub" in the form of a Uniform Resource Identifier, or URI, and simple HTTP commands like GET and POST can be used to upload and download the content.)
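To make the analogy concrete, here is a toy object store in Python. It is purely illustrative, not any IBM or OpenStack API, and the "toy://" URI scheme is made up:

```python
# Toy object store illustrating the "valet parking" analogy: you hand over
# the data (the car), get back a claim stub (the URI), and never learn
# where the bytes physically live.

import hashlib

class ToyObjectStore:
    def __init__(self):
        self._slots = {}          # the hidden "parking structure"

    def put(self, data: bytes) -> str:
        """Store the object; return a claim stub (URI)."""
        stub = "toy://objects/" + hashlib.sha256(data).hexdigest()[:12]
        self._slots[stub] = data  # where it lands is the store's business
        return stub

    def get(self, stub: str) -> bytes:
        """Redeem the claim stub for the object."""
        return self._slots[stub]

store = ToyObjectStore()
uri = store.put(b"quarterly-report contents")
assert store.get(uri) == b"quarterly-report contents"
```

The caller keeps only the URI; there is no LUN or sub-directory to remember, which is exactly the valet-parking trade.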
Policy-driven Compression and Quality of Service (QoS)
If you want to differentiate the levels of service provided for files and objects stored in your infrastructure, look no further. A simple SQL-like language is used to set up policies that are invoked when needed.
Hadoop Connector for File and Objects
The IBM Hadoop Connector allows Hadoop and Spark analytics applications to treat Spectrum Scale as a 100 percent compatible alternative to the Hadoop Distributed File System (HDFS). Previously, this was only available for files, but now it has been extended to include objects as well.
Advanced Graphical User Interface (GUI)
Based on the award-winning GUI that has been used for IBM XIV, SVC, Storwize and various other members of the IBM System Storage family, IBM announces an HTML5-based web-browser GUI for configuring and managing Spectrum Scale and Elastic Storage Server (ESS).
Storwize V7000 Unified
The "file modules" that run IBM Spectrum Scale will get updated to R1.6 level, which supports SMB 3.0 and NFS 4.0 protocols. SMB support will now include both internal and externally-virtualized storage. You will also be able to use Active File Management to migrate to other Spectrum Scale implementations.
IBM Spectrum Control
As the former chief architect of IBM Tivoli Storage Productivity Center v1, I have been a big fan of the advancements and evolution of Spectrum Control. IBM offers three levels. The first level is "Basic Edition", entitled at no additional charge for IBM storage hardware clients. The second level is "Standard Edition" which offers configuration, provisioning and performance monitoring. The third level is "Advanced Edition", which includes advanced storage analytics, file-level reporting, storage tiering and data placement optimization.
You can imagine my skepticism when I was told that Spectrum Control was going to be enhanced to support Spectrum Scale. What could it offer? IBM Spectrum Scale already has built-in storage tiering and data placement optimization!
It turns out that effective "management tools" were the #1 requirement clients cited for implementing and deploying Spectrum Scale. Since 1998, back when it was called General Parallel File System (GPFS), the target market was High Performance Computing (HPC) users familiar with Command Line Interfaces (CLI).
But IBM wants to broaden the reach of IBM Spectrum Scale to financial services, health care and life sciences, government and education, and a variety of other industries. These clients won't tolerate being limited to CLI interfaces.
For clients with multiple Spectrum Scale clusters, Spectrum Control can offer the following:
Visibility across the capacity utilization (file systems, pools, file sets, quotas) and cluster health across all Spectrum Scale clusters in the data center
Ability to specify alerts which are applied across all Spectrum Scale clusters, for things like relative or absolute free space in a file system, or inodes used, nodes going down, etc.
Understand the cross-cluster relationships established by remote cluster mounts, and seamlessly navigate between them
If external SAN storage is used, Spectrum Control shows the correlation between Spectrum Scale Network Shared Disks (NSD) and their corresponding SAN volumes, again with the ability to navigate between them; also it can provide performance monitoring for the volumes backing the NSD
Ability to monitor file capacity usage in the context of applications, by adding Spectrum Scale "file set containers" to application groups defined in Spectrum Control
Compare file system activity across Spectrum Scale clusters, with the ability to drill into file system and node performance charts
Support for object storage on Spectrum Scale, determine which object-enabled clusters are closest to running out of free space
While the basic built-in GUI is great for smaller deployments, if you have a dozen or more Spectrum Scale clusters, or have Spectrum Scale clusters intermixed with traditional block-level and NAS storage devices, then Spectrum Control is for you!
It used to take weeks to deploy the original versions of Tivoli Storage Productivity Center. Now, Spectrum Control is offered in the cloud, and you can deploy it in as little as 30 minutes.
Want to check it out? You can explore Spectrum Control Storage Insights cloud service as a [Live Demo], or [Start your free trial]! The reporting capabilities of Spectrum Scale are identical between the on-premise version of Spectrum Control, and this cloud service offering.
Here's a great quote from a leading IT industry analyst:
"In multi-petabyte, multivendor installations, overall storage costs of ownership for use of IBM Spectrum Storage solutions averaged 73 percent less than EMC, and 61 percent less than Hitachi equivalents" -- Brian Jeffery, Managing Director, International Technology Group, Naples, FL
As IBM continues its transition from a hardware-oriented company founded over a century ago, manufacturing meat scales and cheese slicers, to one more focused on higher value-add software and services, the Spectrum Storage software family will play a critical role in this transformation!
Continuing my coverage of the IBM Systems Technical University in Orlando, here are the sessions that I presented or attended on Days 4 (Thursday).
Technology Trends in IBM Storage
Jack Arnold, IBM Client Technical Architect, provided an entertaining session on various technology trends in the industry. For example, what is the fastest growing storage medium for 2015? Answer: [Vinyl LP] records, which have seen a resurgence recently, growing at over 40 percent!
IBM Spectrum Scale and Elastic Storage Server offerings
Tony Pearson provided an architectural overview of both Spectrum Scale software, as well as the Elastic Storage Server pre-built system appliance.
IBM Spectrum Scale for File and Object storage
Tony Pearson explained the differences between file and object-level storage, and how IBM Spectrum Scale can provide both access methods in a single infrastructure.
IBM Storage Integration with OpenStack
IBM Spectrum Virtualize IP Replication 101
Andrea Sipka, IBM Software Developer for SVC/Storwize Copy Services from the UK Hursley lab, presented the implementation details of IP-based replication using the built-in WAN Acceleration that IBM licensed from Bridgeworks SANslide.
Storage Meet the Experts
Mo McCullough hosted the last session of Thursday with a "Meet the Experts" Q&A panel. Tony Pearson, Brian Sherman, Clod Barrera, John Wilkinson, Mike Griese and Jim Blue were among the storage experts fielding questions. Tony Pearson provided a quick overview of the LTO-7 and TS4500 tape library announcements made earlier in the week.
Most IBM conferences are 4.5 days long, which means that there are typically two or three sessions on Friday morning. Unfortunately, the two sessions I was planning to attend on Friday were both cancelled, so Day 4 was the end of my week for this conference.
Continuing my coverage of the IBM Systems Technical University in Orlando, here are the sessions that I presented or attended on Day 3 (Wednesday).
What is Big Data? Architectures and Use Cases
Tony Pearson explained what Big Data analytics are, and IBM's various products to support this, including BigInsights, BigSQL and Spectrum Scale with the Hadoop Connector.
Why use IBM Spectrum Virtualize for High Availability
John Wilkinson, IBM Storage Software Engineer from the UK Hursley lab, presented the latest enhancements to Spectrum Virtualize-based products, such as SVC and Storwize V7000, related to Stretch Cluster and HyperSwap functions for High Availability.
IBM Systems Hybrid Cloud Strategy, POV and Showcase
Dave Willoughby, IBM z System Hardware Architect for Systems Cloud Emerging Technologies, provided a high-level "Point-of-View" for Hybrid Cloud, and why IBM is focused on helping clients transition from traditional IT infrastructures.
Data Footprint Reduction - Understanding IBM Storage Efficiency Options
Tony Pearson presented an overview of Thin Provisioning, Space-efficient snapshots, Data deduplication and Real-time Compression features.
IBM Spectrum Virtualize - Understanding SVC, Storwize and FlashSystem V9000
Tony Pearson provided an overview of SAN Volume Controller, the Storwize family of products and FlashSystem V9000, all of which are based on Spectrum Virtualize software.
The day ended with a trip to Universal Studios. Dinner on the City Walk offered entertainment with Dueling Pianos. This was then followed by a trip to Hogsmeade, the Harry Potter themed portion of the resort.
Continuing my coverage of the IBM Systems Technical University in Orlando, here are the sessions that I presented or attended on Day 2 (Tuesday).
Andrew Greenfield, IBM Global XIV Storage and Networking Client Technical Specialist, presented IBM's future plans for XIV and FlashSystem products. This was a special NDA session.
Eric Aquaronne, IBM Systems and Cloud Business Development lead, explained what OpenStack was, and why IBM is so heavily invested in its success. OpenStack is cloud management software that can be used to manage both on-premise and off-premise environments, including compute, storage and networking resources.
Software Defined Storage - Why? What? How?
Tony Pearson presented an overview of Software Defined Environments and how storage fits into this.
Suspiciously, there was a lot of overlap with Brian Sherman's presentation on Day 1. As Charles Caleb Colton would say, "Imitation is the sincerest form of flattery."
Making Sense of IBM Cloud Offerings
Jay Kruemcke, IBM Cloud Program Executive Client Collaboration Market Management Offering Manager, gave a high-level overview of IBM's various Cloud offerings from SoftLayer to Managed Cloud Services.
The Pendulum Swings Back - Understanding Converged and Hyperconverged environments
Tony Pearson presented IBM's involvement with Converged Systems like VersaStack and Hyperconverged systems with Spectrum Accelerate and Spectrum Scale software.
Next Generation Storage Tiering: Less Management, Lower Cost and Increased Performance
Tony Pearson presented Easy Tier, Storage Analytics Engine in Spectrum Control Advanced Edition, and Spectrum Scale tiering across flash, disk and tape media.
The second day ended with a "Networking" Reception in the Solution Center, serving food and my favorite grape-flavored beverages.