Next week, I will be attending the [InterConnect Conference] in Las Vegas, Feb 21-25, 2016. This is IBM's premier Cloud & Mobile conference for the year.
(For those not attending in person, you can watch live streams of the event at [IBMGO InterConnect channel].)
With over 2,000 technical sessions and 500 client testimonials, the event can be intimidating. For those of you attending this conference for the first time, I have some advice:
Here's my first cut at my schedule. Maybe this will help you organize your own.
If you use Twitter, follow @IBMInterConnect, @IBMSystems and @IBMStorage for updates, and my own tweets @az990tony. If you take a photo at the event, tag it with #ileadIT to enter the social-photo contest!
I will be there all week! Contact me if you want to get together.
technorati tags: IBM, InterConnect, #IBMInterConnect, IoT, Internet of Things, Blockchain, Cloud, Storlets, University of Chicago, Prudential, Spectrum Control, Storage Insights, Software Defined Storage, Solution EXPO, Manulife, Weather Company, IBM and Box, Cloud-based Tape, Cleversafe, Object-based storage, Web-Scale, DS8880, Aspera, SoftLayer, OpenStack, Hybrid Cloud, Converged Infrastructure, Hyperconvergence, Anvia, Jabil, FlashSystem, Boeing, Ubuntu, Scale-out Linux, POWER8, Elton John, Cybersecurity, Jeopardy, zOS, Catalogic, DevOps, Spectrum Virtualize
Well, it's Tuesday again, and you know what that means? IBM Announcements!
This week, IBM announces the second generation of Storwize V5000 flash and disk storage systems. There is the all-flash V5000F configuration, as well as the hybrid V5000, which supports a variety of flash and spinning disk drives.
There are three models: the Storwize V5010, V5020 and V5030.
To learn more, read the [Storwize V5000 Gen2 announcement letter].
technorati tags: IBM, Storwize, Spectrum Virtualize, Storwize V5000F, Storwize V5000, Storwize V5010, Storwize V5020, Storwize V5030, Thin Provisioning, FlashCopy, Easy Tier, Remote Mirroring, Metro Mirror, Global Mirror, iSCSI, Fibre Channel, SAS, FCoE, Encryption, Intel AES-NI, SED, Real-time Compression
Can you believe it has been a year already since IBM announced VersaStack?
In my May 2012 blog post, [EMC Strikes Back], I poked fun at the fact that Cisco had two competing converged-system partnerships.
Cisco originally partnered with EMC to create a converged system called Vblock which combined Cisco UCS servers and switches with EMC storage. The partnership between VMware, Cisco and EMC was dubbed Virtual Computing Environment (VCE).
However, Cisco then partnered with NetApp to create Flexpod, a converged system that combined Cisco UCS servers and switches with NetApp storage. Many of my clients felt that Flexpod was an improvement over Vblock.
A lot has happened since then. In 2014, Cisco [drastically reduced its investment in VCE]. Last year, Dell spent $67 billion to effectively take EMC out of the storage business. While this was a huge birthday present for IBM, not everyone is happy to see EMC fade away. Whitney Garcia has a great article titled [Crying at the Dell-EMC wedding: Why VCE customers should consider alternatives].
Before VersaStack, IBM had its own converged system, PureSystems, which combined IBM POWER and x86 servers with IBM storage. The x86 server portion of this business was sold off to Lenovo, but IBM continues to sell POWER-only and blended x86-and-POWER PureFlex systems, as well as PureApplication and PureData systems.
The [VersaStack] collaboration between IBM and Cisco offers an alternative to Vblock and Flexpod converged systems. Cisco is a leader in x86 blades and networking switches, and IBM is #1 in Flash and Software Defined Storage, including Storage Virtualization. VersaStack gives you the best of both worlds!
The VersaStack has Cisco Validated Designs for use with IBM's Spectrum Virtualize products:
Later this month, I will be attending the [InterConnect Conference] in Las Vegas, Feb 21-25, 2016. This is IBM's premier Cloud & Mobile conference for the year.
Fellow blogger Stuart Thomson has a great post titled [Storage & infrastructure @ InterConnect 2016: The choices are all yours] which provides some interesting statistics:
Wow! That can seem overwhelming. While the conference spans multiple hotels on the Strip, I personally will be focusing my time at the [Mandalay Bay resort]. My session will be held at the Solutions Expo on Wednesday at 1:45pm. Here are the details:
To help attendees plan their week, InterConnect has a [Session Preview Tool]. I have already found over 40 sessions related to Storage that I am interested in attending!
Need to register? Here is the [Registration Link].
I will be there all week, so if you see me, stop and say "Hello!"
As you can imagine, I get a lot of email from around the world. This one, from a loyal reader from overseas, was particularly interesting. Normally, I would direct them to read the fantastic manual [RTFM], but decided instead to go ahead and tackle it here in my blog.
I will walk through this in three steps.
First, let's attach "Server 1" and the FlashSystem 900 to the SAN fabric. IBM Spectrum Virtualize can handle one, two or even four separate fabrics. Let's assume you have a dual-port Host Bus Adapter (HBA) in Server 1, and two redundant fabrics. We will connect one server port to each FCP switch. Likewise, we will connect each FCP switch to the FlashSystem 900, carve up "Volume 1", and create SAN "Zone A1" and "Zone A2", which identify "Server 1" as the initiator and "FlashSystem 900" as the target. This is all basic stuff.
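For those who like to see the actual commands, here is a minimal sketch of what "Zone A1" might look like on a Brocade fabric. The WWPNs, alias names and configuration name are all hypothetical placeholders, so substitute your own; "Zone A2" would be built the same way on the second fabric.

```
# Aliases for the Server 1 HBA port and FlashSystem 900 port on fabric A
# (WWPNs shown are made up -- substitute your own)
alicreate "Server1_P0", "10:00:00:05:1e:11:22:33"
alicreate "FS900_P0", "50:05:07:60:5e:83:00:01"

# Zone A1 pairs the initiator (Server 1) with the target (FlashSystem 900)
zonecreate "Zone_A1", "Server1_P0; FS900_P0"

# Add the zone to the fabric configuration and activate it
cfgadd "FabricA_cfg", "Zone_A1"
cfgenable "FabricA_cfg"
```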
For those who want to follow along, I suggest you review the full implementation guidance in the IBM Redbook [Implementing the IBM Storwize V7000 Gen2]. Here is an excerpt:
"All Storwize V7000 Gen2 nodes in the Storwize V7000 Gen2 clustered system are connected to the same SANs, and they present volumes to the hosts. These volumes are created from storage pools that are composed of mDisks presented by the disk subsystems. The fabric must have three distinct zones:
Second, we connect the Storwize V7000 Gen2 to the FCP switches. You don't need to connect all of the ports, but I recommend that you connect each controller node to each FCP switch, requiring four cables. Add more connections for added performance bandwidth.
Carve up "Volume 2" on the FlashSystem 900 and present it to the Storwize V7000 Gen2, where it will be referred to as a "managed disk", or mDisk for short. From it, create a "storage pool", formerly known as a "managed disk group", which is why you often see MDG in the naming conventions and examples. Storage pools can have one or more managed disks, and you can add more dynamically as needed.
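If you prefer the command line over the GUI, here is a minimal sketch of this step in the Spectrum Virtualize CLI. The pool name and extent size are my own example choices, and the mdisk name will depend on what your system discovers.

```
# Discover the newly presented FlashSystem 900 volume as a managed disk
svctask detectmdisk
svcinfo lsmdisk                  # confirm the new mdisk appears, e.g. mdisk1

# Create a storage pool (managed disk group) with a 1 GB extent size
svctask mkmdiskgrp -name Pool_FS900 -ext 1024

# Add the managed disk to the pool
svctask addmdisk -mdisk mdisk1 Pool_FS900
```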
The "storage zone" indicates the Storwize V7000 Gen2 as the initiator, and the FlashSystem 900 as target. If you want to increase the performance bandwidth, consider more cables between the FCP switches and the FlashSystem 900. We create "Zone B1" and "Zone B2". I recommend a separate "storage zones" for each additional storage system that you choose to attach to the Storwize V7000 Gen2.
The "cluster zone" that connects all of the Storwize V7000 Gen2 node ports together for node-to-node (intra-cluster) communication. Storwize V7000 Gen2 ports can serve as both initiators and targets dynamically. For example, when you write to one node, the node then copies the cache block over to the second node so there are two copies stored safely on separate nodes. Since we have two fabrics, we create "Zone C1" and "Zone C2".
Third, we connect "Server 2" to the FCP switches, same as we did with "Server 1". We create "Volume 3", which is a "virtual disk", or vDisk for short, from the storage pool containing Volume 2. The "host zone" identifies Server 2 as the initiator, and the Storwize V7000 Gen2 as the target. We create "Zone D1" and "Zone D2". I recommend putting each additional server in its own set of host zones.
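And here is a minimal CLI sketch of that third step, with a made-up WWPN for Server 2 and an example volume size; the pool name carries over from the sketch in step two.

```
# Define Server 2 as a host object by its HBA WWPN (hypothetical value)
svctask mkhost -name Server2 -fcwwpn 100000051E445566

# Create Volume 3, a virtual disk (vDisk), from the storage pool
svctask mkvdisk -name Volume3 -mdiskgrp Pool_FS900 -iogrp 0 -size 500 -unit gb

# Map the new volume to Server 2
svctask mkvdiskhostmap -host Server2 Volume3
```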
In theory, you could have a server connected to both Volume 1 and Volume 3. For example, a Windows server would have a "C:" drive connected directly to FlashSystem 900 for high-speed performance, and have a "D:" drive on Storwize V7000 Gen2 to contain data. The Storwize V7000 Gen2 introduces 60 to 100 microseconds of added latency, but provides added value such as FlashCopy, Thin Provisioning, and Real-time compression.
Of course, there are unique situations that might require special configurations, depending on the servers, operating systems, host bus adapters, FCP switches, and storage systems involved.