Modified by TonyPearson
Can you believe it has been a year already since IBM announced VersaStack?
In my May 2012 blog post, [EMC Strikes Back], I poked fun at the fact that Cisco had two "significant others": EMC and NetApp.
Cisco originally partnered with EMC to create a converged system called Vblock which combined Cisco UCS servers and switches with EMC storage. The partnership between VMware, Cisco and EMC was dubbed Virtual Computing Environment (VCE).
However, Cisco then partnered with NetApp to create Flexpod, a converged system that combined Cisco UCS servers and switches with NetApp storage. Many of my clients felt that Flexpod was an improvement over Vblock.
A lot has happened since then. In 2014, Cisco [drastically reduced its investment in VCE]. Last year, Dell spent $67 billion to effectively take EMC out of the storage business. While this was a huge birthday present for IBM, not everyone is happy to see EMC fade away. Whitney Garcia has a great article titled [Crying at the Dell-EMC wedding: Why VCE customers should consider alternatives].
Before VersaStack, IBM had its own converged system, PureSystems, which combined IBM POWER and x86 servers with IBM storage. The x86 server portion of this business was sold off to Lenovo, but IBM continues to sell POWER-only and blended x86-and-POWER PureFlex systems, as well as PureApplication and PureData systems.
The [VersaStack] collaboration between IBM and Cisco offers an alternative to Vblock and Flexpod converged systems. Cisco is a leader in x86 blades and networking switches, and IBM is #1 in Flash and Software Defined Storage, including Storage Virtualization. VersaStack gives you the best of both worlds!
VersaStack has Cisco Validated Designs for use with IBM's Spectrum Virtualize products:
- FlashSystem V9000
- Storwize V7000
- Storwize V7000 Unified
- Storwize V5000
This week, on February 11, 2016, at 12pm EST, IBM and Cisco are hosting a webinar on VersaStack. Join us for the one-year anniversary of VersaStack in a discussion with IBM, Cisco and VersaStack customers.
The speakers will be discussing VersaStack progress to date and the value VersaStack brings to client workloads. Topics of discussion will include how VersaStack can lower TCO and administrative overhead, reduce downtime, improve resource utilization, and allow for business innovation. The speakers include:
- Jonathan Cox, Medicat, Director, Technology Services
- Susan Martens, IBM, Director, VersaStack Sales, North America
- Kent Hixson, Cisco, Sales Business Development Manager
Here is the [Registration Link] to participate. Hope you can make it!
technorati tags: IBM, Cisco, EMC, VCE, VMware, Vblock, NetApp, Flexpod, VersaStack, #VersaStack, POWER, x86, Lenovo, PureSystems, PureFlex, PureApplication, PureData, Whitney Garcia, Jonathan Cox, Susan Martens, Kent Hixson, FlashSystem V9000, Storwize V7000, Storwize V7000 Unified, Storwize V5000, Medicat
Later this month, I will be attending the [InterConnect Conference] in Las Vegas, Feb 21-25, 2016. This is IBM's premier Cloud & Mobile conference for the year.
Fellow blogger Stuart Thomson has a great post titled [Storage & infrastructure @ InterConnect 2016: The choices are all yours] which provides some interesting statistics:
- More than 500 client success stories
- Over 2,000 technical sessions scheduled
- 25,000 expected attendees
Wow! That can seem overwhelming. While the conference spans multiple hotels on the Strip, I personally will be focusing my time at the [Mandalay Bay resort]. My session will be held at the Solutions Expo on Wednesday at 1:45pm. Here are the details:
- YSS-1841 IBM Cloud Storage Options
This session will cover private and public cloud storage options, including flash, disk and tape, to address the different types of cloud storage requirements. It will also explain the use of Active File Management for local space management and global access to files, and support for file sync-and-share.
Program: Core Curriculum
Topic: Systems Hardware
Sub-topic: Storage Systems & Software
To help attendees plan their week, InterConnect has a [Session Preview Tool]. I have already found over 40 sessions related to Storage that I am interested in attending!
Need to register? Here is the [Registration Link].
I will be there all week, so if you see me, stop and say "Hello!"
technorati tags: IBM, Stuart Thomson, IBM Cloud, IBM Mobile, Cloud Storage, YSS-1841, InterConnect, Las Vegas, Mandalay Bay
As you can imagine, I get a lot of email from around the world. This one, from a loyal reader from overseas, was particularly interesting. Normally, I would direct them to read the fantastic manual [RTFM], but decided instead to go ahead and tackle it here in my blog.
I have followed your blog for several years; it has served as a reference and a source of training for me in my professional career, and I want to thank you.
I am writing because my company has acquired a new IBM Storwize V7000 Gen2, with 16 FC ports (8 per controller node), to replace a Gen1, along with an 8-port FC FlashSystem 900. The idea is to virtualize part of the FlashSystem 900 storage behind the V7000, and assign the rest directly to the hosts. After much reading of forums and storage Redbooks, I still have no clear idea how the SAN should be cabled, or how the zoning should be done, to carry out this installation. I would appreciate it if you could write about this subject, as controversial as SAN zoning and cabling seem to be, and if possible clarify my scenario.
I will tackle this in three steps.
First, let's attach "Server 1" and the FlashSystem 900 to the SAN fabric. IBM Spectrum Virtualize can handle one, two or even four separate fabrics. Let's assume you have a dual-port Host Bus Adapter (HBA) in server 1, and two redundant fabrics. We will connect each server port to each FCP switch. Likewise, we will connect each FCP switch to the FlashSystem 900, carve up "Volume 1", and create SAN "Zone A1" and "Zone A2", which identify "Server 1" as the initiator, and "FlashSystem 900" as the target. This is all basic stuff.
For those who want to follow along, I suggest you review the full implementation guidance in the IBM Redbook [Implementing the IBM Storwize V7000 Gen2]. Here is an excerpt:
"All Storwize V7000 Gen2 nodes in the Storwize V7000 Gen2 clustered system are connected
to the same SANs, and they present volumes to the hosts. These volumes are created from
storage pools that are composed of mDisks presented by the disk subsystems.
The fabric must have three distinct zones:
- Storwize V7000 Gen2 cluster system zones
Create one cluster zone per fabric, and include any port per node that is designated for
intra-cluster traffic. No more than four ports per node should be allocated to intra-cluster traffic.
- Host zones
Create a host zone for each server host bus adapter (HBA) port accessing the Storwize V7000 Gen2.
- Storage zone
Create one Storwize V7000 Gen2 storage zone for each storage system that is
virtualized by the Storwize V7000 Gen2. Some storage control systems need two
separate zones (one per controller) so that they do not 'see' each other."
Second, we connect the Storwize V7000 Gen2 to the FCP switches. You don't need to connect all of the ports, but I recommend that you connect each controller node to each FCP switch, requiring four cables. Add more connections for additional performance bandwidth.
Carve up "Volume 2"; this will be referred to as a "managed disk", or mDisk for short. Then create a "storage pool", formerly known as a "managed disk group", which is why you often see MDG in naming conventions and examples. Storage pools can have one or more managed disks, and you can add more dynamically as needed.
The "storage zone" indicates the Storwize V7000 Gen2 as the initiator, and the FlashSystem 900 as the target. If you want to increase the performance bandwidth, consider more cables between the FCP switches and the FlashSystem 900. We create "Zone B1" and "Zone B2". I recommend a separate "storage zone" for each additional storage system that you choose to attach to the Storwize V7000 Gen2.
The "cluster zone" connects all of the Storwize V7000 Gen2 node ports together for node-to-node (intra-cluster) communication. Storwize V7000 Gen2 ports can serve as both initiators and targets dynamically. For example, when you write to one node, that node copies the cache block over to the second node, so there are two copies stored safely on separate nodes. Since we have two fabrics, we create "Zone C1" and "Zone C2".
Third, we connect "Server 2" to the FCP switches, same as we did with "Server 1". We create "Volume 3", a "virtual disk", or vDisk for short, from the storage pool containing Volume 2. The "host zone" indicates Server 2 as the initiator, and the Storwize V7000 Gen2 as the target. We create "Zone D1" and "Zone D2". I recommend putting each additional server in its own set of host zones.
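To keep all of the zones straight, here is a minimal sketch of the complete zoning plan described above, expressed as a Python data structure. All of the port names are made-up placeholders, not values from a real configuration; in practice each member would be a WWPN or switch-port alias.

```python
# Illustrative zoning plan for the three steps above.
# Port names are hypothetical placeholders, not real WWPNs or aliases.
zoning = {
    "Fabric-1": {
        "Zone_A1": ["server1_hba_p1", "flash900_p1"],                      # host -> FlashSystem 900 (direct)
        "Zone_B1": ["v7000_node1_p1", "v7000_node2_p1", "flash900_p2"],    # storage zone (V7000 virtualizing Flash)
        "Zone_C1": ["v7000_node1_p1", "v7000_node2_p1"],                   # cluster zone (node-to-node)
        "Zone_D1": ["server2_hba_p1", "v7000_node1_p1", "v7000_node2_p1"], # host zone (Server 2 -> V7000)
    },
    "Fabric-2": {
        "Zone_A2": ["server1_hba_p2", "flash900_p3"],
        "Zone_B2": ["v7000_node1_p2", "v7000_node2_p2", "flash900_p4"],
        "Zone_C2": ["v7000_node1_p2", "v7000_node2_p2"],
        "Zone_D2": ["server2_hba_p2", "v7000_node1_p2", "v7000_node2_p2"],
    },
}

def zones_for(port):
    """Return every (fabric, zone) pair a given port participates in."""
    return [(fabric, zone)
            for fabric, zones in zoning.items()
            for zone, members in zones.items()
            if port in members]

# Server 2 appears only in its host zones, one per fabric.
print(zones_for("server2_hba_p1"))  # [('Fabric-1', 'Zone_D1')]
```

Notice the symmetry: every zone exists once per fabric, so losing an entire switch still leaves every initiator one path to its targets.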
In theory, you could have a server connected to both Volume 1 and Volume 3. For example, a Windows server would have a "C:" drive connected directly to FlashSystem 900 for high-speed performance, and have a "D:" drive on Storwize V7000 Gen2 to contain data. The Storwize V7000 Gen2 introduces 60 to 100 microseconds of added latency, but provides added value such as FlashCopy, Thin Provisioning, and Real-time compression.
Of course, there are unique situations that might require special configurations, depending on the servers, operating systems, host bus adapters, FCP switches, and storage systems involved.
In the 2004 comedy ["A Day Without a Mexican"], the director envisions how disruptive life would be in California if all the Mexicans suddenly disappeared. The point is that sometimes you take things in the background for granted.
I was reminded of this when I saw Mark Underwood's blog post [Mainframe: Still Not Crazy After All These Years]. The article reminds us how critical IBM z Systems mainframes (and related storage like the IBM DS8880 disk systems) are in our lives. Here's an excerpt:
"Warren Buffett's Berkshire Hathaway started buying up IBM stock in 2011 and bought still more of IBM later. Despite its disappointing short-term valuation, Berkshire Hathaway is standing by its IBM investment, which is one of Berkshire's top four plays. ... To make this case, some statistics may be needed:
- The z13 can withstand an 8.0 earthquake.
- z Systems enjoy the highest standardized security certification (FIPS 140-2, highest level 4 of 4).
- 23 of the world's top 25 retailers use a mainframe.
- 92 of the top 100 banks are mainframe users.
- All 10 of the top 10 insurers have commitments in mainframe technologies.
- Around 80 percent of all corporate data is managed by mainframes.
- The z13 can process 2.5 billion transactions daily (that's 100 [Cyber Mondays], as IBM's Mark Anzani, VP of z Systems Strategy, Resilience and Ecosystems, observed)."
"... In fact, and notwithstanding perceptions to the contrary, the mainframe's center-stage position in large corporations around the world has not budged. That's the conclusion of an industry survey sponsored by Syncsort Inc. and conducted in 2015 by Enterprise Systems Media, a publisher of magazines for IT managers and technical professionals. Seven out of 10 respondents (IT planners, architects and managers at global enterprises with $1 billion or more in annual revenues) ranked the use of the mainframe for large-scale transaction processing as very important."
What would a comparable film depicting "A Day without a Mainframe" be like? I would imagine it somewhere between a disaster movie and an end-of-the-world zombie horror movie like [28 Days Later]. I would gladly take a million dollars to write the screenplay!
(FCC Disclosure: I work for IBM and am a filmmaker as well. Earlier in my career, I was chief architect of IBM's Data Facility Storage Management Subsystem (DFSMS) which manages around 80 percent of the world's corporate data. This blog post can be considered a "paid celebrity endorsement" for IBM's z13 System mainframes and DS8880 Disk Systems. I have personal experience with both and highly recommend them. I am neither a Mexican nor resident of California, but work regularly with both in my job responsibilities. Like Warren Buffett, I also own stock in both IBM and Berkshire Hathaway companies. I had no involvement in the making of any of the major motion pictures mentioned in this blog post, have no financial interest in their distribution, and have not been provided any compensation for mentioning them in this blog post. They are all great movies worth watching!)
What do you think the movie would be like? Enter your comments below!
technorati tags: Mexican, California, Mark Underwood, Warren Buffett, Berkshire Hathaway, earthquake, Cyber Monday, Mark Anzani, SyncSort, Enterprise Systems Media, John Cusack
Happy New Year!
Well, it's Tuesday again, and you know what that means? IBM Announcements!
This week, the new model 314 is [now available in all countries] in which IBM does business.
(Actually, the [XIV Model 314] was announced back on November 10, 2015, but announcements made in November and December are often overlooked amid distractions like holidays and year-end processing. Today's announcement was to eliminate the "not available in some countries" restriction. The last time I mentioned on this blog that a product was not available in some countries, I got tons of questions asking "why". Hopefully, waiting until a product is available in all countries eliminates that concern.)
What does the XIV model 314 offer? IBM doubled the processors, up to 180 cores, and doubled the DRAM cache, up to 1440 GB. Both of these changes were done to improve the Real-time compression capability.
To reduce test effort cycle time, IBM simplified the configuration options:
- Instead of ranging from 6 to 15 modules, the model 314 is limited to 9-15 modules.
- The drive sizes are reduced to just 4TB and 6TB capacities.
- If you want a Solid-State drive (SSD) for cache boost, only the 800GB option is available.
Through a combination of thin provisioning and compression, you can define up to 2 PB of soft capacity per rack.
The firmware v11.6.1 reduces the minimum volume size for compression from 103GB to 51GB. A perpetual license for Spectrum Accelerate firmware can be used with the XIV Model 314.
Happy Holidays everyone!
Every December, the "birthday boys" -- Bill, Kris and I -- celebrate our birthdays. For me, it is the big five-oh. According to a recent Harris poll, it is [America's favorite age!] For some people, [fifty is the new thirty]!
From left to right: Melinda Jensen, Bill Terry, Lee Olguin, Kris Keller, Tony Pearson, and Kristy Knight.
The storage, cloud and analytics team celebrated with cake and party hats. None of us "birthday boys" eat chocolate, so this year we chose a new flavor: Strawberry Cream! It was delicious.
It was a good time to reflect on our success and accomplishments. In 2015, I helped close over $270 million USD in revenues for IBM, meaning that I helped close over a million [per day on the job].
The IT industry went through a lot of changes also. Hewlett-Packard [split into two smaller pieces]. Dell started [EMC's fade to non-existence]. Cisco and IBM joined forces to create VersaStack, a converged system that combines the most popular x86 servers with the industry's best storage. Analysts recognized IBM's leadership in today's [Cognitive Era].
Looking forward to an exciting 2016!
My friends over at Appcessories sent me an awesome infographic on the Internet of Things. If you happen to receive any gifts this holiday related to any of these categories, mention them in the comments below!
The State of Internet of Things in 6 Visuals – By the team at Appcessories
Enjoy your time off with friends and family!
Last Friday, I helped students learn about Science, Technology, Engineering and Math (STEM). This was the annual [2015 Arizona STEM Adventure] event in Tucson, Arizona. This year, Pima Community College Northwest Campus provided the venue.
The event hosted more than 900 students, ranging from fourth to eighth graders. Buses collected them from 31 schools across seven cities and towns in the Tucson area. Home-schooled, private-schooled and charter-schooled children participated as well.
I was just one of 130 volunteers. IBM, [Raytheon], [Pima Community College], [Agents of STEM], [SARSEF], [StemAZing], [Office of Pima County School Superintendent], [UA Stem Learning Center], and other individuals volunteered their time to make this happen.
As I arrived, students lined up to ride this "hover chair". A leaf-blower motor floated a chair attached to a platform. A blue tarp represented water. Volunteers would pull the hover chair across the tarp, giving the kids a fun ride. I wanted to ride it myself, but it was not engineered for my body weight!
Students chose among the most interesting of 50 exhibits. IBM led two of these exhibits.
First, we had the [Bike Wheel Gyroscope]. The students would stand on a rotating swivel platform, holding a spinning bicycle wheel. When the student tipped the wheel left or right, the student's body would rotate on the platform!
Second, we had Share with Storyboarding. This is the one I volunteered for. IMHO, the best part of STEM is the Arts and Design aspect needed to make products usable. Perhaps we should rename STEM to STEAM to add "A" for Arts and Design.
We held six 30-minute sessions with each group of students. Our team lead, Brenton Elmore, IBM Design Principal, explained what storyboards are, and then gave the students five topics to choose from:
- Adopting homeless pets
- Improving communication with teachers
- A short cartoon
- An idea for a mobile phone app
- An idea for a new video game
Children paired up in two-person teams based on their topic interest. Why teams? Many creative collaborations combine the strengths of different teammates. For example, an author and an illustrator work together to create a comic or children's book. Broadway musicals often pair a writer with a composer.
Each team spent 10 minutes to draw a six-panel storyboard on [Post-it notes]. These would be stuck to a single sheet of paper. The team then would write underneath each panel the narrative of what was occurring.
Brenton taped five or six of these to the wall to share with the rest of the class. Each team would then explain to the other students what they drew, and the narrative to go with it.
When there were an odd number of students, one of us volunteers paired up with a student. Shown here is Marilynn Franco, IBM Manager, helping young Bailey in explaining their storyboard. I helped young Lili with her storyboard about a new mobile phone app idea she had.
Storyboards are an essential part of IBM's [Design Thinking]. We use them in a variety of ways, from designing business strategies and product enhancements, to creating videos about the [IBM Tucson Executive Briefing Center]!
When I make presentations to clients at briefings or conferences, I use 36 slides per hour. Each PowerPoint slide serves like a storyboard panel, and I provide the narrative on each one.
Special thanks go to Kathy Carlisle, IBM Tucson Site Operations Manager, and Mike Hernandez, IBM Corporate Citizenship and Corporate Affairs Manager, for setting this up!
To learn more, see [STEM Adventure Shows Students Science Up Close] by Mariana Dale, and [1,000 students visit STEM fair at Pima college] by Yoohyun Jung.
technorati tags: IBM, STEM, Raytheon, Pima Community College, SARSEF, Brenton Elmore, Marilynn Franco, Kathy Carlisle, Mike Hernandez, bike wheel gyroscope, storyboarding, Mariana Dale, Yoohyun Jung
Well it's Tuesday again, and you know what that means? IBM Announcements!
(FCC Disclosure: This official launch also includes October 6 announcements. In any case, the usual disclaimer applies: I currently work for IBM, and this blog post can be considered a "paid celebrity endorsement" of the IBM products mentioned below.)
IBM announced various updates to its Spectrum Storage product line. Here is a quick recap.
- IBM Spectrum Virtualize 7.6
Spectrum Virtualize is the new name of the "storage hypervisor" code that resides in IBM SAN Volume Controller (SVC) and Storwize family products. When you buy an SVC, you will license Spectrum Virtualize software on it. It is NOT available separately as software-only that you can install on any other hardware. There are three major improvements:
- Software-based Data-at-Rest Encryption
Earlier this year, IBM delivered data-at-rest encryption for the Storwize V7000 and V7000 Unified. This week, IBM extends this support to the rest of the Spectrum Virtualize family.
Since this feature is based on the Intel processor that supports the Advanced Encryption Standard New Instructions (AES-NI), it applies only to the newer hardware: SAN Volume Controller 2145-DH8, the Storwize V7000 Gen2, FlashSystem V9000, and VersaStack converged systems that contain these. You can run Spectrum Virtualize v7.6 on older hardware models, but the encryption feature will be disabled.
Basically, by taking advantage of the AES-NI instructions, IBM can now offer data-at-rest encryption on any virtualized flash or disk array, eliminating the need for special "Self-Encrypting Drives" (SED).
The encryption keys are kept on USB memory sticks that you can either leave in the machine, or stash away in a vault or safe somewhere.
- Distributed RAID
The second major improvement is distributed RAID. Distributed RAID has been hugely popular on IBM XIV products, and has since found its way into the DCS3700, DCS3860 and Elastic Storage Server models.
With this new enhancement, storage admins can select "Distributed RAID-5" or "Distributed RAID-6" as alternate choices to traditional RAID ranks.
Why use it? All the drives are now active, eliminating the idle spare drives that sit collecting dust and cobwebs waiting for an opportunity to spin up, and that become a terrible bottleneck when they finally are used for a rebuild. Since all drives participate in reads and writes, the rebuild rate is an order of magnitude (5 to 10x) faster!
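A back-of-envelope calculation shows why spreading the rebuild helps so much. The drive throughput and array size below are illustrative assumptions, not measured figures; the point is simply that rebuild time shrinks in proportion to the number of drives absorbing the rebuild writes.

```python
# Rough comparison: traditional RAID (one spare drive absorbs the entire
# rebuild) vs distributed RAID (spare space spread across many active drives).
# All capacity and throughput numbers are illustrative assumptions.

def rebuild_hours(capacity_tb, writers, mb_per_sec_per_drive):
    """Hours to reconstruct capacity_tb when `writers` drives share the rebuild writes."""
    total_mb = capacity_tb * 1_000_000
    return total_mb / (writers * mb_per_sec_per_drive) / 3600

# Traditional RAID: the lone dedicated spare is the only write target.
traditional = rebuild_hours(capacity_tb=8, writers=1, mb_per_sec_per_drive=100)

# Distributed RAID: spare capacity spread across, say, 40 active drives.
distributed = rebuild_hours(capacity_tb=8, writers=40, mb_per_sec_per_drive=100)

print(round(traditional, 1), round(distributed, 1))  # roughly 22.2 vs 0.6 hours
```

Real rebuild rates depend on workload, stripe layout, and drive type, but the scaling argument is the same.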
For those clients nervous about large 8TB drives and the number of days it would take to perform a traditional RAID rebuild, this should calm all of your fears.
- IP-based Quorum
This is one of those line items that we have told clients was "just around the corner" and "coming soon, watch this space", and finally it is available. For clients using Stretched Cluster or HyperSwap across two buildings, best practice suggests keeping the quorum disk in a third building. This often meant having to dedicate a single 2U disk system in a closet somewhere, with expensive Fibre Channel cables connecting to the other two buildings.
To address this, IBM now allows the quorum disk to be based on Internet Protocol (the IP portion of TCP/IP), which can be any bare-metal or virtual machine that is LAN- or WAN-attached. The "quorum disk" is just a little Java program. It can also run on any cloud service provider, such as IBM SoftLayer, to which both buildings have connectivity.
A minor improvement worth mentioning: the IBM "Comprestimator" tool, which estimates the capacity savings of Real-time Compression, is now integrated into the Spectrum Virtualize v7.6 command line interface (CLI), allowing you to run the tool on demand, as needed, against any virtual volume.
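For example, assuming the integrated Comprestimator follows the command names documented for v7.6 (treat the exact names and volume ID below as illustrative, not authoritative), an on-demand estimate from the CLI looks something like this:

```shell
# Kick off compression analysis of volume (vdisk) ID 0...
analyzevdisk 0
# ...then list the estimated savings once the analysis completes
lsvdiskanalysis 0
```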
- IBM Spectrum Scale v4.2
IBM plans to offer all of its solutions in any of three flavors: software-only that you can deploy on your own server hardware, pre-built system appliances, and cloud services on IBM SoftLayer, IBM Cloud Managed Services or third-party cloud providers. Spectrum Scale is the software-only flavor, and Elastic Storage Server and Storwize V7000 Unified are pre-built systems based on that software.
- File and Object access
IBM had published a "Redbook" on how to implement OpenStack Swift and Amazon S3 interfaces on an existing Spectrum Scale deployment. IBM supported it, but it was basically a Do-it-Yourself (DIY) implementation. This has now been resolved, with full integration of the OpenStack Swift and Amazon S3 object-protocol interfaces.
(For those unfamiliar with "Object storage", think of it like valet parking for your data. Before working for IBM, I was employed as a valet attendant, so I feel qualified to make this analogy.
If you park your car in a 10-story high parking structure, you have to remember where you parked to go find the car again. With valet parking, you hand over the keys to the valet attendant, the car gets parked, and you get a claim stub that you then use to get your car back. In the meantime, you don't know where your car is parked, and you don't care either!
Storing files in volume-level or file-level storage is like that 10-story high parking structure. You have to remember where you put it: which LUN, or which sub-directory. With object storage, the system provides a "claim stub" in the form of a Uniform Resource Identifier, or URI, and simple HTTP commands like GET and POST can be used to upload and download the content.)
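The valet analogy can be sketched in a few lines of Python. This toy in-memory store is purely illustrative (the endpoint URL and class are hypothetical, not any real Swift or S3 API): PUT hands back a claim stub in the form of a URI, GET redeems it, and the caller never learns where the bytes actually live.

```python
import uuid

class ToyObjectStore:
    """Toy illustration of the 'valet parking' model of object storage."""

    def __init__(self, endpoint="http://objects.example.com/v1"):
        self.endpoint = endpoint   # hypothetical endpoint, for illustration only
        self._slots = {}           # the hidden "parking structure"

    def put(self, data: bytes) -> str:
        stub = uuid.uuid4().hex            # generate the claim stub
        self._slots[stub] = data           # placement is the store's problem
        return f"{self.endpoint}/{stub}"   # the URI the client keeps

    def get(self, uri: str) -> bytes:
        stub = uri.rsplit("/", 1)[-1]      # redeem the claim stub
        return self._slots[stub]

store = ToyObjectStore()
uri = store.put(b"vacation-photo.jpg contents")
assert store.get(uri) == b"vacation-photo.jpg contents"
```

A real deployment would speak Swift or S3 HTTP verbs over the network, but the contract is the same: you keep the URI, the system keeps the placement.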
- Policy-driven Compression and Quality of Service (QoS)
If you want to differentiate the levels of service for the files and objects stored in your infrastructure, look no further. A simple SQL-like language is used to set up policies that are invoked when needed.
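As a rough illustration of that SQL-like language, here is a sketch using the long-standing GPFS placement and migration rule syntax; the pool names are made up, and the exact clauses for the new compression and QoS controls in v4.2 may differ, so treat this as an example of the style rather than a definitive policy.

```sql
/* Place new database files on the fast pool, everything else on the data pool */
RULE 'hot'     SET POOL 'ssdpool'  WHERE UPPER(NAME) LIKE '%.DB'
RULE 'default' SET POOL 'datapool'

/* When the fast pool passes 90% full, migrate files down until it reaches 70% */
RULE 'cool' MIGRATE FROM POOL 'ssdpool' THRESHOLD(90,70) TO POOL 'datapool'
```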
- Hadoop Connector for File and Objects
The IBM Hadoop Connector allows Hadoop and Spark analytics applications to treat Spectrum Scale as a 100 percent compatible alternative to the Hadoop Distributed File System (HDFS). Previously, this was only available for files, but now it has been extended to include objects as well.
- Advanced Graphical User Interface (GUI)
Based on the award-winning GUI that has been used for IBM XIV, SVC, Storwize and various other members of the IBM System Storage family, IBM announces an HTML5-based web-browser GUI for configuring and managing Spectrum Scale and Elastic Storage Server (ESS).
- Storwize V7000 Unified
The "file modules" that run IBM Spectrum Scale will get updated to R1.6 level, which supports SMB 3.0 and NFS 4.0 protocols. SMB support will now include both internal and externally-virtualized storage. You will also be able to use Active File Management to migrate to other Spectrum Scale implementations.
- IBM Spectrum Control
As the former chief architect of IBM Tivoli Storage Productivity Center v1, I have been a big fan of the advancements and evolution of Spectrum Control. IBM offers three levels. The first level is "Basic Edition", entitled at no additional charge for IBM storage hardware clients. The second level is "Standard Edition" which offers configuration, provisioning and performance monitoring. The third level is "Advanced Edition", which includes advanced storage analytics, file-level reporting, storage tiering and data placement optimization.
You can imagine my skepticism when I was told that Spectrum Control was going to be enhanced to support Spectrum Scale. What could it offer? IBM Spectrum Scale already has built-in storage tiering and data placement optimization!
It turns out that effective "management tools" were the #1 requirement clients said was needed before they would implement and deploy Spectrum Scale. Since 1998, back when it was called General Parallel File System, or GPFS, the target market was High Performance Computing (HPC), where people are familiar with Command Line Interfaces (CLI).
But IBM wants to broaden the reach of IBM Spectrum Scale to financial services, health care and life sciences, government and education, and a variety of other industries. Those clients won't tolerate being limited to CLI interfaces.
For clients with multiple Spectrum Scale clusters, Spectrum Control can offer the following:
- Visibility across the capacity utilization (file systems, pools, file sets, quotas) and cluster health across all Spectrum Scale clusters in the data center
- Ability to specify alerts which are applied across all Spectrum Scale clusters, for things like relative or absolute free space in a file system, or inodes used, nodes going down, etc.
- Understand the cross-cluster relationships established by remote cluster mounts, and seamlessly navigate between them
- If external SAN storage is used, Spectrum Control shows the correlation between Spectrum Scale Network Shared Disks (NSD) and their corresponding SAN volumes, again with the ability to navigate between them; also it can provide performance monitoring for the volumes backing the NSD
- Ability to monitor file capacity usage in the context of applications, by adding Spectrum Scale "file set containers" to application groups defined in Spectrum Control
- Compare file system activity across Spectrum Scale clusters, with the ability to drill into file system and node performance charts
- Support for object storage on Spectrum Scale, determining which object-enabled clusters are closest to running out of free space
While the basic built-in GUI is great for smaller deployments, if you have a dozen or more Spectrum Scale clusters, or have Spectrum Scale clusters intermixed with traditional block-level and NAS storage devices, then Spectrum Control is for you!
It used to take weeks to deploy the original versions of Tivoli Storage Productivity Center, but Spectrum Control is now also offered in the cloud, and you can deploy it in as little as 30 minutes.
Want to check it out? You can explore the Spectrum Control Storage Insights cloud service as a [Live Demo], or [Start your free trial]! The reporting capabilities for Spectrum Scale are identical between the on-premises version of Spectrum Control and this cloud service offering.
Here's a great quote from a leading IT industry analyst:
"In multi-petabyte, multivendor installations, overall storage costs of ownership for use of IBM Spectrum Storage solutions averaged 73 percent less than EMC, and 61 percent less than Hitachi equivalents" -- Brian Jeffery, Managing Director, International Technology Group, Naples, FL
As IBM continues its transition from a hardware-oriented company founded over a century ago, manufacturing meat scales and cheese slicers, to one more focused on higher value-add software and services, the Spectrum Storage software family will play a critical role in this transformation!
technorati tags: IBM, Spectrum Virtualize, data-at-rest, encryption, SVC, Storwize, Storwize V7000, FlashSystem V9000, VersaStack, storage hypervisor, distributed RAID, RAID-5, RAID-6, Spectrum Scale, Elastic Storage Server, OpenStack, OpenStack Swift, Amazon S3, HTTP, Compression, Quality of Service, QoS, Hadoop, Spark, Hadoop Connector, HDFS, GUI, XIV, DCS3700, DCS3860, Spectrum Control, Tivoli Storage, Productivity Center, TPC, CLI, NAS, Storage Insights, SoftLayer, IBM Cloud Managed Services
Continuing my coverage of the IBM Systems Technical University in Orlando, here are the sessions that I presented or attended on Day 4 (Thursday).
- Technology Trends in IBM Storage
Jack Arnold, IBM Client Technical Architect, provided an entertaining session on various technology trends in the industry. For example: what was the fastest growing storage medium for 2015? Answer: [Vinyl LP] records, which have seen a resurgence recently, growing at over 40 percent!
- IBM Spectrum Scale and Elastic Storage Server offerings
Tony Pearson provided an architectural overview of both Spectrum Scale software, as well as the Elastic Storage Server pre-built system appliance.
- IBM Spectrum Scale for File and Object storage
Tony Pearson explained the differences between file and object-level storage, and how IBM Spectrum Scale can provide both access methods in a single infrastructure.
- IBM Storage Integration with OpenStack
- IBM Spectrum Virtualize IP Replication 101
Andrea Sipka, IBM Software Developer for SVC/Storwize Copy Services from the UK Hursley lab, presented the implementation details of IP-based replication using the built-in WAN Acceleration that IBM licensed from Bridgeworks SANslide.
- Storage Meet the Experts
Mo McCullough hosted the last session of Thursday with a "Meet the Experts" Q&A panel. Tony Pearson, Brian Sherman, Clod Barrera, John Wilkinson, Mike Griese and Jim Blue were among the storage experts fielding questions. Tony Pearson provided a quick overview of the LTO-7 and TS4500 tape library announcements made earlier in the week.
Most IBM conferences are 4.5 days long, which means that there are typically two or three sessions on Friday morning. Unfortunately, the two sessions I was planning to attend on Friday were both cancelled, so Day 4 was the end of my week for this conference.
technorati tags: IBM, #ibmtechu, Jack Arnold, Andrea Sipka, Mo McCullough, Vinyl LP, Spectrum Scale, Elastic Storage Server, ESS, IP Replication, SVC, Storwize V7000, LTO-7, TS4500, Spectrum Virtualize, Mike Griese, Jim Blue
Continuing my coverage of the IBM Systems Technical University in Orlando, here are the sessions that I presented or attended on Day 3 (Wednesday).
- What is Big Data? Architectures and Use Cases
Tony Pearson explained what Big Data analytics are, and IBM's various products to support this, including BigInsights, BigSQL and Spectrum Scale with the Hadoop Connector.
- Why use IBM Spectrum Virtualize for High Availability
John Wilkinson, IBM Storage Software Engineer from the UK Hursley lab, presented the latest enhancements to Spectrum Virtualize-based products, such as SVC and Storwize V7000, related to Stretch Cluster and HyperSwap functions for High Availability.
- IBM Systems Hybrid Cloud Strategy, POV and Showcase
Dave Willoughby, IBM z System Hardware Architect for Systems Cloud Emerging Technologies, provided a high-level "Point-of-View" for Hybrid Cloud, and explained why IBM is focused on helping clients transition from traditional IT infrastructures to hybrid cloud environments.
- Data Footprint Reduction - Understanding IBM Storage Efficiency Options
Tony Pearson presented an overview of Thin Provisioning, Space-efficient snapshots, Data deduplication and Real-time Compression features.
- IBM Spectrum Virtualize - Understanding SVC, Storwize and FlashSystem V9000
Tony Pearson provided an overview of SAN Volume Controller, the Storwize family of products and FlashSystem V9000, all of which are based on Spectrum Virtualize software.
The day ended with a trip to Universal Studios. Dinner on the City Walk offered entertainment with Dueling Pianos. This was then followed by a trip to Hogsmeade, the Harry Potter themed portion of the resort.
technorati tags: IBM, #ibmtechu, big data, analytics, BigInsights, BigSQl, Spectrum Scale, Hadoop, John Wilkinson, SVC, Storwize, Stretch Cluster, HyperSwap, Dave Willoughby, Thin Provisioning, Space-Efficient Snapshot, Deduplication, Real-time Compression, Spectrum Virtualize, FlashSystem V9000
Continuing my coverage of the IBM Systems Technical University in Orlando, here are the sessions that I presented or attended on Day 2 (Tuesday).
- Storage Futures
Andrew Greenfield, IBM Global XIV Storage and Networking Client Technical Specialist, presented IBM's future plans for XIV and FlashSystem products. This was a special NDA session.
- Demystify OpenStack
Eric Aquaronne, IBM Systems and Cloud Business Development lead, explained what OpenStack is, and why IBM is so heavily invested in its success. OpenStack is cloud management software that can be used to manage both on-premise and off-premise environments, including compute, storage and networking resources.
- Software Defined Storage - Why? What? How?
Tony Pearson presented an overview of Software Defined Environments and how storage fits into this.
Suspiciously, there was a lot of overlap with Brian Sherman's presentation on Day 1. As Charles Caleb Colton would say, "Imitation is the sincerest form of flattery."
- Making Sense of IBM Cloud Offerings
Jay Kruemcke, IBM Cloud Program Executive Client Collaboration Market Management Offering Manager, gave a high-level overview of IBM's various Cloud offerings from SoftLayer to Managed Cloud Services.
- The Pendulum Swings Back - Understanding Converged and Hyperconverged environments
Tony Pearson presented IBM's involvement with Converged Systems like VersaStack and Hyperconverged systems with Spectrum Accelerate and Spectrum Scale software.
- Next Generation Storage Tiering: Less Management, Lower Cost and Increased Performance
Tony Pearson presented Easy Tier, Storage Analytics Engine in Spectrum Control Advanced Edition, and Spectrum Scale tiering across flash, disk and tape media.
The second day ended with a "Networking" Reception in the Solution Center, serving food and my favorite grape-flavored beverages.
technorati tags: IBM, #ibmtechu, Andrew Greenfield, Eric Aquaronne, Jay Kruemcke, XIV, FlashSystem, OpenStack, SDS, Software Defined Storage, IBM Cloud, SoftLayer, Cloud Managed Services, Converged Systems, hyperconverged, VersaStack, Spectrum Accelerate, Spectrum Scale, Easy Tier, Storage Analytics Engine, Spectrum Control
Modified by TonyPearson
Continuing my coverage of the IBM Systems Technical University in Orlando, here are the sessions that I presented or attended on Day 1 (Monday).
- Storage Keynote Session
This was a three-part kick-off keynote session. Mo McCullough, IBM Systems Lab Services and Training, coordinated the storage track of this event and provided some details on how to use the website portal and smartphone app.
Clod Barrera, IBM Distinguished Engineer and Chief Technical Strategist for Storage, presented the future of the storage industry, including trends in storage media technologies, data plane and control plane level enhancements, and broader system-wide considerations.
Tony Pearson, IBM Master Inventor and Senior Software Engineer, wrapped up the session with an overview of IBM's Smarter Storage strategy.
- IBM Software Defined Storage Overview, Concepts and IBM SDS Family
Brian Sherman, IBM Distinguished Engineer and Client Technical Specialist for Advanced Technical Skills in the Americas, provided an overview of Software Defined Environments and how storage fits in that view, especially IBM's Spectrum Storage family.
- IBM Cloud Storage Options
Tony Pearson presented on IBM's various Cloud Storage options.
While my original focus was on-premise storage solutions for use by Data Centers and Cloud Service providers, there was a lot of interest in IBM's storage available from SoftLayer and other Cloud providers. During this week, IBM announced its acquisition of CleverSafe, which I had not incorporated into the deck.
- What's New in IBM Spectrum Protect v7.1.3
Tricia Jiang, IBM Technical Enablement Specialist for IBM Spectrum Storage, presented the latest release of IBM Spectrum Protect. That's an inside joke: this is the first release under the Spectrum Protect name, but since it was based on IBM Tivoli Storage Manager (TSM) v7.1.2, it was easier just to continue the same numbering scheme.
The main features of v7.1.3 are the new in-line dedupe capability, the new "deduplication container" concept, and support for backing up to object storage either on-premise or in the cloud.
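To give a feel for the in-line dedupe idea, here is a toy Python sketch (my own illustration, not Spectrum Protect's actual implementation): incoming chunks are hashed as they arrive, and only previously unseen chunks are written to the container, while duplicates become cheap references.

```python
import hashlib

def deduplicate(chunks):
    """Toy in-line dedupe: store each unique chunk once, keyed by its hash."""
    store = {}   # plays the role of the "deduplication container"
    refs = []    # per-chunk references into the store
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:
            store[digest] = chunk   # unique data is written only once
        refs.append(digest)         # duplicates are just another reference
    return store, refs

# Four chunks arrive, but only two are unique, so only two are stored
store, refs = deduplicate([b"alpha", b"beta", b"alpha", b"alpha"])
```

Because the hashing happens before data lands on disk, the duplicate chunks never consume backend capacity, which is the "in-line" part of the feature.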
- IBM Spectrum Scale v4.1 Overview
Glen Corneau, IBM Client Technical Specialist for Power Systems, presented the latest features of IBM Spectrum Scale, formerly known as IBM General Parallel File System (GPFS). It was interesting to hear this from a Power Systems perspective, as IBM Spectrum Scale supports both AIX and Linux on POWER.
The day ended with a Welcome Reception at the IBM Solution Center that had various z System, Power System and System Storage solutions, as well as solutions from various IBM Business Partners and other third parties.
technorati tags: IBM, #ibmtechu, Clod Barrera, Brian Sherman, Mo McCullough, Tricia Jiang, Glen Corneau, Smarter Storage, Cloud Storage, Spectrum Storage, Spectrum Protect, Spectrum Scale, SDS, Software Defined Storage, AIX, Linux POWER, TSM, GPFS
Modified by TonyPearson
Oh my, it is Tuesday again, and you know what that means? IBM Announcements!
This week, IBM announced its latest storage arrays in its IBM System Storage DS8000 series: the DS8880 models. Similar to the "Business Class" vs. "Enterprise Class" distinctions of the DS8870, IBM announced two new models, the DS8884 and the DS8886.
All of the new DS8880 models are based on the latest IBM POWER8 processors, and are noticeably thinner! The frames are now a standard 19 inches wide, fitting nicely into standard IBM racks alongside most other 19-inch rack equipment.
The DC-UPS that used to be on the side are now at the bottom of each frame, taking up 8U of space. The High Performance Flash Enclosures (HPFE) that formerly were stored vertically above the DC-UPS will be stored horizontally with the rest of the HDD and SSD drives.
- DS8884 model
- The DS8884 will have 6-core controllers, up to 256 GB Cache, 64 ports that can negotiate between 16Gbps and 8Gbps, up to 240 drives in a single-rack configuration or 768 drives in a three-frame configuration, and up to 120 flash cards in HPFEs. The performance of this one is equal to or better than existing DS8870 systems.
- DS8886 model
- The DS8886 will have 8-core, 16-core and 24-core controllers, offering up to three times the performance of the previous DS8870 models, with up to 2 TB of Cache, 128 ports, up to 1,536 drives across five frames, and up to 240 flash cards in HPFEs.
Field model conversion from DS8870 to DS8886 is available for existing clients with DS8870 Enterprise Configurations. This will let clients move their existing HDD, SSD, HPFE and Host Adapters over to the new DS8880 models.
In previous DS8000 models, clients would have one Hardware Management Console (HMC) inside the array, and an optional second HMC workstation somewhere else for high availability. While the second one was optional, it was always considered best practice to have it for redundancy's sake. In the new DS8880 models, you can have both HMCs in the array, and the Keyboard/Video/Monitor (KVM) can switch between the two.
The new I/O enclosure pairs are four times faster, supporting six Device Adapters and two HPFE connections over a PCIe Gen 3 network, the fastest available in the industry.
Lastly, IBM simplified the licensing of software features into three bundles, based on the total TB capacity of Fixed Block (FB) LUNs and Count-Key-Data (CKD) volumes:
- Base function License: Logical Configuration support for FB, Operating Environment License, Thin Provisioning, Easy Tier® automated sub-volume tiering, and I/O Priority Manager.
- Copy Services License: FlashCopy®, Metro Mirror, Global Mirror, Metro/Global Mirror, z/Global Mirror (XRC), z/Global Mirror Resync, and Multi-Target PPRC.
- z-Synergy Service License: Parallel Access Volumes (PAV), HyperPAV, FICON® attachment, High performance FICON (zHPF), and IBM z/OS® Distributed Data Backup (zDDB).
IBM also provided a "Product preview", announcing plans for a third member of the DS8880 family in 2016 that will be flash-optimized to provide an all-flash, higher performance storage system model.
To learn more, read the [IBM Press Release] and [Function authorizations].
technorati tags: IBM, DS8000, DS8870, DS8880, DS8884, DS8886, HPFE, HDD, SSD, HMC, KVM, FB, CKD, Easy Tier, FlashCopy, FICON, zHPF, zDDB, all-flash
It's Tuesday, and you know what that means? IBM Announcements! This week I am in beautiful Orlando, Florida for the [IBM Systems Technical University] conference.
This week, IBM announced its latest tape offerings for the seventh generation of Linear Tape Open (LTO-7), providing huge gains in performance and capacity.
For capacity, the new LTO-7 cartridges can hold up to 6TB native capacity, or 15TB effective capacity with the 2.5:1 compression that is typical for many data types. That is 2.4x larger than the 2.5TB cartridges available with LTO-6. Performance is also nearly doubled, with a native throughput of 315 MB/sec, or roughly 780 MB/sec effective throughput with 2.5:1 compression. The LTO consortium, of which IBM is a founding member, has published the roadmap for future LTO generations: LTO-8, LTO-9 and LTO-10.
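The arithmetic behind those figures is straightforward; here is a quick back-of-the-envelope check in Python, assuming the consortium's 2.5:1 compression ratio for typical data:

```python
NATIVE_CAPACITY_TB = 6.0        # LTO-7 native cartridge capacity
LTO6_CAPACITY_TB = 2.5          # previous generation, for comparison
COMPRESSION_RATIO = 2.5         # assumed typical-data compression
NATIVE_THROUGHPUT_MBS = 315.0   # LTO-7 native drive throughput

effective_capacity_tb = NATIVE_CAPACITY_TB * COMPRESSION_RATIO        # 15 TB effective
capacity_growth = NATIVE_CAPACITY_TB / LTO6_CAPACITY_TB               # 2.4x over LTO-6
effective_throughput_mbs = NATIVE_THROUGHPUT_MBS * COMPRESSION_RATIO  # 787.5, quoted as ~780 MB/sec
```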
IBM will offer both half-height and full-height LTO-7 tape drives. All the features you love from LTO-6, like WORM, partitioning and encryption, carry forward. These drives will be supported on a variety of operating systems, including Linux on z Systems mainframes and the IBM i platform on POWER Systems.
The Linear Tape File System (LTFS) can be used to treat LTO-7 cartridges in much the same way as Compact Discs or USB memory sticks, allowing one person to create content on an LTO-7 tape cartridge, and pass that cartridge to the next employee, or to another company. LTFS is also the basis for IBM Spectrum Archive, which allows tape data to be part of a global namespace with IBM Spectrum Scale.
LTO-7 will be supported on the TS2900 auto-loader, as well as all of IBM's tape libraries: TS3100, TS3200, TS3310, TS3500 and TS4500. You can connect up to 15 TS3500 tape libraries together with shuttle connectors, with up to 2,700 drives serving 300,000 cartridges, for a maximum capacity of 1.8 Exabytes of data in a single system environment.
In addition to LTO-7 support, the IBM TS4500 tape library was also enhanced. You can now grow it up to 18 frames, and have up to 128 drives serving 23,170 cartridges, for a maximum capacity of 139 PB of data. You can now also intermix LTO and 3592 frames in the same TS4500 tape library.
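Both library maximums line up with the 6TB native cartridge capacity; a quick sanity check:

```python
TB_PER_CARTRIDGE = 6  # LTO-7 native capacity

# 15 shuttle-connected TS3500 libraries
shuttle_tb = 300_000 * TB_PER_CARTRIDGE
shuttle_eb = shuttle_tb / 1_000_000   # 1.8 Exabytes (decimal units)

# single 18-frame TS4500
ts4500_tb = 23_170 * TB_PER_CARTRIDGE
ts4500_pb = ts4500_tb / 1_000         # ~139 Petabytes
```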
For compatibility, LTO-7 drives can read existing LTO-5 and LTO-6 tape cartridges, and can write to LTO-6 media, to help clients with the transition.
technorati tags: IBM, #ibmtechu, LTO, LTO-7, TS2900, TS2270, TS1070, TS3100, TS3200, TS3500, TS3310, TS4500
Modified by TonyPearson
This week I am in beautiful Orlando, Florida for the [IBM Systems Technical University] conference.
Amy Hirst, IBM Director, z Systems, Power, & Storage Technical Training, kicked off the general session.
Dr. Seshadri "Sesha" Subbanna, IBM Corporate Innovation and Technology Evaluation, asked the audience what capability is needed to drive business growth. A recent poll indicated that the ability for businesses to innovate was the number one response.
The IT industry has had its own version of growth. Consider that the Apollo 11 [Guidance Computer] used to land a man on the moon had just 4KB of RAM, and 36KB of ROM. A typical smartphone has 62,000,000 times as much.
The Apollo missions drove and motivated integrated-circuit technology, but soon, maybe in the next 10 years, Dr. Subbanna feels that silicon may run its course. Today, both POWER8 and z13 servers are based on 22nm. IBM has projected possible reductions to 17nm, 13nm, 10nm, and finally 7nm. That's it: going smaller than 7nm may not be possible without hitting atomic-scale issues.
The City of Rio de Janeiro, Brazil is a good example. In 2010, heavy rains resulted in flooding and landslides that killed over 110 residents. To prevent such high death rates in the future, IBM helped the city government deploy predictive analytics and forecasting that allow "rain simulations" to see how well the city can handle different situations.
IBM is already looking for a more holistic view of systems, and new technologies like cognitive computing. New 3D technology allows various chip technologies to be stacked as layers on a single chip. For example, you could have compute on the bottom layer, flash non-volatile storage in middle layers, and networking at the top layer. Connecting the layers is merely a matter of drilling holes and filling them with metal.
The idea that compute is the center of the universe, with a mainframe server surrounded by input and output "peripheral" storage devices, is giving way to a more storage-centric model, where central storage repositories (or data lakes) are accessed by "peripheral" smartphones, tablets and a variety of servers. For example, the IBM DB2 Analytics Accelerator acts as a storage-centric appliance to which IBM z Systems mainframes can connect, send data, and offload complex database queries, getting results up to 2000x faster.
In another client example, IBM helped a bank in China to determine optimal placement of bank branches, based on public information of average salary levels of each neighborhood.
CPU processors are also getting help from co-processor accelerators like GPUs (Graphical Processing Units) and FPGAs (Field Programmable Gate Arrays). Comparing a single IBM POWER8 server that is CAPI-attached to an IBM FlashSystem to a stack of x86 servers with internal SSD, the POWER8 solution consumes 12x less rack space, consumes 12x less electricity, and reduces per-user costs from $24/user for x86 down to $7.50/user on POWER8.
Social media, mobile phones and the Internet of Things (IoT) generate a lot of data. If you then factor in the "context multiplier effect" of all the links, connections and cross-references, you quickly see that data is growing at incredible rates.
Another issue is the difficulty of identifying application inter-dependencies. Forecasting disruptive anomalies can be quite difficult. In one example, administrators received warning messages 65 minutes before a major outage, but they did not respond in time because they were unable to understand the full implications.
Cognitive computing is different from the tabulating and programming paradigms of prior decades. It is focused on Natural Language Processing, citing evidence to support its responses, and the ability to learn and improve from experience. The IBM Watson group is working with Memorial Sloan Kettering to help oncology doctors with cancer patients.
In an interesting demo, the IBM Watson computer analyzed thousands of "TED Talk" videos, and was able to respond to search queries by playing the 30-second video clip that most closely addressed the search topic.
Cognitive computing is also looking at "Neuro-Synaptic" chips that work very much like the neurons and synapses in the brain. I have seen some of this work already at the IBM Almaden Research Center in California.
The general session ended with a Q&A panel with Dr. Subbanna, Frank De Gilio, and Bill Starke.
technorati tags: IBM, #ibmtechu, Seshadri Subbanna, Frank DeGilio, Bill Starke, Apollo 11, Apollo Guidance Computer, IoT, context multiplier effect, Rio Brazil, weather prediction, GPU, FPGA, POWER8, cognitive computing, TED talk, Watson
This week I am in beautiful Orlando, Florida for the [Systems Technical University].
Here are the sessions I will be speaking at:
|Monday||10:15am||Opening Session - Storage|
|01:45pm||IBM's Cloud Storage Options|
|05:30pm||Solution Center Reception|
|Tuesday||11:30am||Software Defined Storage - Why? What? How?|
|03:15pm||The Pendulum Swings Back - Understanding Converged and Hyperconverged Environments|
|04:30pm||New Generation of Storage Tiering: Less Management, Lower Cost, and Increased Performance|
|05:30pm||Solution Center Reception|
|Wednesday||09:00am||What is Big Data? Architectures and Use Cases|
|01:45pm||Data Footprint Reduction - Understanding IBM Storage Efficiency Options|
|03:15pm||IBM Spectrum Virtualize - SVC, Storwize and FlashSystem V9000|
|Thursday||10:15am||IBM Spectrum Scale and Elastic Storage Server|
|01:45pm||IBM Spectrum Scale for File and Object storage|
|01:45pm||IBM Storage Integration with OpenStack|
|05:30pm||Storage! Meet the Experts|
|Friday||10:15am||IBM Spectrum Virtualize - SVC, Storwize and FlashSystem V9000|
It looks like a busy week!
technorati tags: IBM, Systems, STU, Orlando, Conference
Modified by TonyPearson
This post was originally written as a guest post for VMware for the VMworld 2015 conference. Read the full blog post [IBM Storage and the Beauty and Benefits of VVol]. The following is an excerpt:
Back in 2012, I had mentioned that VMware was cooking up an exciting new feature called VVol, short for VMware vSphere Virtual Volume.
Officially, the VVol concept was still just a "technology preview" in 2012, to be fleshed out over the next few years through extensive collaboration between VMware and all the major players: IBM, HP, Dell, NetApp and EMC.
In 2013 and 2014, IBM attended VMworld with live demonstrations of VVol support. VMware vSphere v6 was not yet available, but we assured attendees that when it was, IBM would be one of the first vendors with support!
When vSphere v6 was finally made available earlier this year, [only four vendors support VVols on Day 1 of vSphere 6 GA]! Keeping true to its promises, IBM was indeed one of them.
To understand why VVol is such a game-changer, you have to understand a major problem with VMware version 4 and version 5, namely their Virtual Machine File System, or [VMFS].
Here is a picture to help illustrate:
On the left, we see that a VMFS datastore is a set of LUNs from the storage admin perspective, and a set of VMDK and related files from the vCenter admin perspective.
If there was a storage-related problem, such as bandwidth performance or latency, how would the two admins communicate to perform troubleshooting? For many disk systems, it is not obvious which VMDK file sits on which LUN.
There are also a variety of hardware capabilities that work at the LUN level, such as snapshots, clones or remote distance mirroring, and these would apply to all the VMDK files in the datastore across the set of LUNs, which may not be what you want.
There are two ways to address this in vSphere v4 and v5:
- The first method is to have fewer VMDK files per datastore. By defining smaller datastores with just a few VMs associated with each, you can then have a closer mapping of VMDK files to datastore LUNs. Unfortunately, VMware ESXi has a limit of 256 datastores that can be attached, so this method has its own limitations.
- The other method is "Raw Device Mapping" (RDM), which allows Virtual Machines to be attached to specific LUNs. Some of the earlier restrictions and limitations for RDMs have been relaxed over the releases, but your disk system still needs to expose the SCSI identifiers of each LUN to make this work, and additional setup is required if you plan to cluster two or more systems together, such as for a Microsoft Cluster Server (MSCS).
On the right side of the picture, using VMware v6, vCenter admins can now allocate VVols, which are mapped to specific "VVol Storage Containers" on specific storage systems. The storage admin knows exactly which VVol is in which container, so they can now communicate and collaborate on troubleshooting!
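To make that contrast concrete, here is a toy Python model (my own illustration, not any real VMware or IBM API) of why a LUN-level snapshot is coarser than a per-VM VVol snapshot:

```python
# VMFS view: one datastore LUN backs many VMDK files
vmfs_datastore = {
    "LUN-1": ["vm1.vmdk", "vm2.vmdk", "vm3.vmdk"],
    "LUN-2": ["vm4.vmdk", "vm5.vmdk"],
}

def vmfs_snapshot_scope(datastore, target_vmdk):
    """A hardware snapshot at the LUN level captures every VMDK on that LUN."""
    for vmdks in datastore.values():
        if target_vmdk in vmdks:
            return vmdks        # the whole LUN's contents, not just the target
    return []

# VVol view: each VM volume maps 1:1 to a VVol in a storage container
vvol_container = {"vm1": "vvol-1", "vm2": "vvol-2", "vm3": "vvol-3"}

def vvol_snapshot_scope(container, target_vm):
    """A hardware snapshot of a VVol captures exactly one VM's volume."""
    return [container[target_vm]]
```

Snapshotting vm1 under VMFS drags vm2 and vm3 along because they share LUN-1; under VVol, only vvol-1 is touched.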
The vSphere ESXi host communicates to storage arrays via a new "virtual LUN id" called a "Protocol Endpoint". This is to allow FCP, iSCSI and FCoE traffic to flow correctly through SAN or LAN switches. For NFS, the Protocol Endpoint represents a "virtual mount point", so that traffic can be routed through LAN switches correctly.
Storage Policies can help determine which attributes or characteristics you want for your VVol. For example, you may want your VVol to be on a storage container that supports snapshots at the hardware level. The vCenter server can be aware of which storage arrays, and which storage containers in those arrays, through the VMware API for Storage Awareness, or VASA.
Different storage manufacturers can implement their VASA provider in different ways. IBM has opted to have a single VASA provider for all of its supported devices, so as to provide a consistent client experience. When you purchase any VVol-supported storage system from IBM, you are entitled to download the IBM VASA provider at no additional charge!
Initially, the IBM VASA provider will focus on IBM XIV Storage System, an ideal platform for your VVol needs. The XIV is a grid-based storage system, utilizing unique algorithms that give optimal data placement for every LUN or VVol created, and virtually guarantees there will be no hot spots. The XIV provides an impressive selection of Enterprise-class features, including snapshot, mirroring, thin provisioning, real-time compression, data-at-rest encryption, performance monitoring, multi-tenancy and data migration capabilities.
With the XIV 11.6 firmware level, you can define up to 12,000 VVols across one or more storage containers in a single XIV system. For more details, see IBM Redbook [Enabling VMware Virtual Volumes with IBM XIV Storage System].
Let me give some real world examples from Paul Braren, an IBM XIV and FlashSystem Storage Technical Advisor from Connecticut, who has been working directly with clients over the past five years:
"Many of my customers have clearly said they really want the ability to have a granular snapshot that grabs a moment in time of just one VM, rather than all the VMs that happen to be on the same LUN. They also want to delete VMs, and have the storage array automatically present that newly available space. Even better, with VVol, these SAN related tasks appear to be executed nearly instantly, leaving behind those legacy shared VMFS datastore limitations and overhead.
The same benefits of VVol are evident when cloning or deploying VMs. Imagine being able to create a Windows Server VM with a 400GB thick-provisioned drive in under 20 seconds. Well, you don't have to imagine it! I recorded video of this actually happening over at IBM's European Storage Competence Center, featured in this 8-minute video: [IBM XIV Storage System and VMware vSphere Virtual Volumes (VVol). An ideal combination!]"
-- Paul Braren
In addition to XIV, all of IBM's Spectrum Virtualize products also support VVols, including SAN Volume Controller, the Storwize family (including the Storwize in VersaStack), and FlashSystem V9000.
I am not in San Francisco this week for VMworld, but lots of my IBM colleagues are, so please, stop by the IBM booth and tell them I sent you!
Next week, I will return to Istanbul, Turkey to present at the [IBM Systems Technical Symposium], June 1-3 at the Hilton Bomonti hotel.
(Frequent readers of my blog may remember that I had been to Istanbul for a similar conference last year. I arrived a day earlier to do some sightseeing, which I documented in my April 2014 blog post [Arrived Safely to Istanbul].)
Like the IBM Edge conference in Las Vegas earlier this month, this conference will not just be for Storage, but will also include z Systems and POWER Systems content. Here are the sessions I will be presenting:
|Monday||11:30||Software Defined Storage: IBM Vision and Strategy|
|14:45||Software Defined Storage: Technical Overview|
|Tuesday||11:30||IBM's Cloud Storage Options|
|16:00||What is Big Data? Architectures and Practical use Cases|
|Wednesday||10:15||IBM Spectrum Storage Integration with OpenStack|
|14:45||New Generation of Storage Tiering: Less Management, Lower Costs and Increased Performance|
If you are attending next week in Istanbul, I will see you there!
technorati tags: IBM, Systems Technical Symposium, Istanbul Turkey, Software Defined Storage, Cloud Storage, Big Data, Spectrum Storage, OpenStack, Storage Tiering
The [IBM Edge2015 conference] is the premier conference covering Infrastructure Innovations for IBM System Storage, as well as sessions about z Systems and POWER Systems from our IBM Enterprise conference.
Here is my quick recap of my fifth and final day, Friday, May 15, 2015.
IBM Spectrum Storage™ Integration with OpenStack
At the Systems Technical University in Prague last month, I had submitted "IBM Spectrum Storage overview", while another speaker submitted "Storage Integration with OpenStack" and somehow the two topics got merged into a single title "IBM Spectrum Storage Integration with OpenStack" through perhaps some cut-and-paste error.
It turns out, it was a [chocolate-and-peanut-butter] situation! Combining the two topics worked out well.
I first had to explain the basics of OpenStack, how OpenStack manages pools of compute, storage and network resources. Then I explained specific details on Cinder, Swift and Manila interfaces. Finally, having laid the groundwork and reviewed the basics, I was able to explain how IBM's various storage offerings support these OpenStack interfaces.
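As an example of what that integration looks like in practice, attaching a Spectrum Virtualize (SVC/Storwize) back end to Cinder is typically a matter of a few lines in `cinder.conf`. Treat this as an illustrative sketch: the driver path and option names vary by OpenStack release, and the address, credentials and pool name below are placeholders.

```ini
[DEFAULT]
enabled_backends = ibm-storwize

[ibm-storwize]
# iSCSI driver for SVC/Storwize; a Fibre Channel variant also exists
volume_driver = cinder.volume.drivers.ibm.storwize_svc.storwize_svc_iscsi.StorwizeSVCISCSIDriver
san_ip = 192.0.2.10                        # management IP (example address)
san_login = openstack                      # example credentials
san_password = changeme
storwize_svc_volpool_name = openstack_pool # storage pool for Cinder volumes
volume_backend_name = ibm-storwize
```

Once the back end is registered, Cinder volume creation, snapshots and cloning are dispatched to the array through this driver.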
The feedback from the audience was that this should have been presented earlier in the week! Attendees mentioned that other presentations merely assumed the audience was already familiar with OpenStack concepts and terminology, which obviously was not the case.
Storwize V7000 Unified with Spectrum Scale (formerly Elastic Storage)
Cameron McAllister, IBM Systems Architect for Spectrum Scale, presented an overview of how Storwize V7000 Unified can interconnect with IBM Spectrum Scale deployments. The secret is a feature in both called Active File Management (AFM).
Shankar Balasubramanian, IBM Senior Technical Staff Member for Active File Management, went into details on how to set up Active File Management for a variety of use cases. For example, you could have Storwize V7000 Unified boxes in Remote Office/Branch Office (ROBO) locations replicating data to a centralized Spectrum Scale datacenter.
This week was a great conference! I received great feedback overall from many attendees about all the quality presentations they enjoyed this week.
Next year, Edge will be held October 10-14, 2016. Save the date! Mark your calendars now!
technorati tags: IBM, #ibmedge, Edge2015, System Storage, IBM Expert Network, SlideShare, OpenStack, OpenStack Cinder, OpenStack Manila, OpenStack Swift, Cameron McAllister, Shankar Balasubramanian, Spectrum Scale, Elastic Storage, Storwize V7000 Unified