It's Tuesday, and you know what that means? IBM Announcements! This week I am in beautiful Orlando, Florida for the [IBM Systems Technical University] conference.
This week, IBM announced its latest tape offerings for the seventh generation of Linear Tape Open (LTO-7), providing huge gains in performance and capacity.
For capacity, the new LTO-7 cartridges can hold up to 6TB native capacity, or 15TB effective capacity with the 2.5x compression typical for many workloads. That is 2.4x larger than the 2.5TB cartridges available with LTO-6. Performance is also nearly doubled, with a native throughput of 315 MB/sec, or 780 MB/sec effective throughput with 2.5x compression. The LTO consortium, of which IBM is a founding member, has published a roadmap extending through LTO-8, LTO-9 and LTO-10.
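If you want to see how those numbers hang together, here is a quick sketch of the arithmetic (the 2.5x ratio is the typical compression assumed above; the announcement rounds 787.5 down to 780):

```python
COMPRESSION = 2.5  # typical compression ratio cited for LTO-7

def effective(native, ratio=COMPRESSION):
    """Effective (compressed) figure from a native capacity or throughput."""
    return native * ratio

lto6_native_tb, lto7_native_tb = 2.5, 6.0

print(effective(lto7_native_tb))        # 15.0 TB effective capacity
print(lto7_native_tb / lto6_native_tb)  # 2.4x capacity growth over LTO-6
print(effective(315))                   # 787.5 MB/s, rounded to 780 above
```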
IBM will offer both half-height and full-height LTO-7 tape drives. All the features you love from LTO-6, such as WORM, partitioning and encryption, carry forward. These drives will be supported on a variety of distributed operating systems, including Linux on z Systems mainframes and IBM i on POWER Systems.
The Linear Tape File System (LTFS) can be used to treat LTO-7 cartridges in much the same way as Compact Discs or USB memory sticks, allowing one person to create content on an LTO-7 tape cartridge, and pass that cartridge to the next employee, or to another company. LTFS is also the basis for IBM Spectrum Archive, which allows tape data to be part of a global namespace with IBM Spectrum Scale.
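Because LTFS exposes the cartridge as an ordinary mount point, standard file APIs work unchanged. Here is a minimal sketch of that "pass it along" workflow; the mount point (/ltfs) and file name are hypothetical, and a temporary directory stands in for the mounted cartridge:

```python
import pathlib
import shutil
import tempfile

# A real LTFS cartridge would be mounted somewhere like /ltfs; here a
# temporary directory stands in for it so the sketch runs anywhere.
cartridge = pathlib.Path(tempfile.mkdtemp())

# One person creates content on the "cartridge" with plain file I/O.
(cartridge / "broadcast.mp4").write_bytes(b"raw video content")

# Handing the cartridge to a colleague: they read it back the same way,
# with no tape-specific software beyond the LTFS driver itself.
received = (cartridge / "broadcast.mp4").read_bytes()
print(received == b"raw video content")  # True

shutil.rmtree(cartridge)
```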
LTO-7 will be supported on the TS2900 auto-loader, as well as all of IBM's tape libraries: TS3100, TS3200, TS3310, TS3500 and TS4500. You can connect up to 15 TS3500 tape libraries together with shuttle connectors, for a maximum of 2,700 drives serving 300,000 cartridges, or up to 1.8 Exabytes of data in a single system environment.
In addition to LTO-7 support, the IBM TS4500 tape library was also enhanced. You can now grow it to 18 frames, with up to 128 drives serving 23,170 cartridges, for a maximum capacity of 139 PB of data. You can also now intermix LTO and 3592 frames in the same TS4500 tape library.
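Those maximum capacities follow directly from the cartridge counts and the 6TB native capacity per LTO-7 cartridge; a quick back-of-the-envelope check:

```python
LTO7_NATIVE_TB = 6  # native capacity per LTO-7 cartridge

# TS3500 shuttle complex: 15 libraries, 300,000 cartridges total
ts3500_tb = 300_000 * LTO7_NATIVE_TB
print(ts3500_tb / 1_000_000)  # 1.8 Exabytes

# Enhanced TS4500: 23,170 cartridges in up to 18 frames
ts4500_tb = 23_170 * LTO7_NATIVE_TB
print(round(ts4500_tb / 1_000))  # 139 PB
```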
For compatibility, LTO-7 drives can read existing LTO-5 and LTO-6 tape cartridges, and can write to LTO-6 media, to help clients with the transition.
Amy Hirst, IBM Director, z Systems, Power, & Storage Technical Training, kicked off the general session.
Dr. Seshadri "Sesha" Subbanna, IBM Corporate Innovation and Technology Evaluation, asked the audience what capability is needed to drive business growth. A recent poll indicated that the ability to innovate was the number one response.
The IT industry has had its own version of growth. Consider that the Apollo 11 [Guidance Computer] used to land a man on the moon had just 4KB of RAM and 36KB of ROM. A typical smartphone has 62,000,000 times as much.
The Apollo missions drove and motivated integrated-circuit technology, but soon, perhaps within the next 10 years, Dr. Subbanna feels that silicon may run its course. Today, both POWER8 and z13 servers are based on 22nm lithography. IBM has projected possible reductions to 17nm, 13nm, 10nm, and finally 7nm. That may be it: going smaller than 7nm may not be possible without hitting atomic-scale issues.
The City of Rio de Janeiro, Brazil is a good example. In 2010, heavy rains resulted in flooding and landslides that killed over 110 residents. To prevent such high death tolls in the future, IBM helped the city government deploy predictive analytics and forecasting that allow "rain simulations" to see how well the city can handle different scenarios.
IBM is already looking at a more holistic view of systems, and at new technologies like cognitive computing. New 3D technology allows various chip technologies to be stacked as layers on a single chip. For example, you could have compute on the bottom layer, non-volatile flash storage in the middle layers, and networking on the top layer. Connecting the layers is merely a matter of drilling holes and filling them with metal.
The idea that compute is the center of the universe, with a mainframe server surrounded by input and output "peripheral" storage devices, is giving way to a more storage-centric model, where central storage repositories (or data lakes) are accessed by "peripheral" smartphones, tablets and a variety of servers. For example, the IBM DB2 Acceleration Appliance acts as a storage-centric model: IBM z Systems mainframes connect to it, send data in, offload complex database queries, and get the results up to 2000x faster.
In another client example, IBM helped a bank in China to determine optimal placement of bank branches, based on public information of average salary levels of each neighborhood.
CPU processors are also getting help from co-processor accelerators like GPUs (Graphics Processing Units) and FPGAs (Field-Programmable Gate Arrays). Comparing a single IBM POWER8 server CAPI-attached to an IBM FlashSystem against a stack of x86 servers with internal SSDs, the POWER8 solution occupies 12x less rack space, consumes 12x less electricity, and reduces per-user costs from $24/user on x86 down to $7.50/user on POWER8.
Social media, mobile phones and the Internet of Things (IoT) already generate a lot of data. If you then factor in the "context multiplier effect" of all the links, connections and cross-references, you quickly see that data is growing at incredible rates.
Another issue is the difficulty of identifying application inter-dependencies. Forecasting disruptive anomalies can be quite difficult. In one example, administrators received warning messages 65 minutes before a major outage, but did not respond in time because they were unable to understand the full implications.
Cognitive computing is different from the tabulating and programming paradigms of prior decades. It focuses on Natural Language Processing, citing evidence to support its responses, and the ability to learn and improve from experience. The IBM Watson group is working with Memorial Sloan Kettering to help oncology doctors with cancer patients.
In an interesting demo, an IBM Watson system analyzed thousands of "TED Talk" videos, and was able to respond to search queries by playing the 30-second video clip that most closely addressed the search topic.
Cognitive computing is also looking at "Neuro-Synaptic" chips that work very much like the neurons and synapses in the brain. I have seen some of this work already at the IBM Almaden Research Center in California.
The general session ended with a Q&A panel with Dr. Subbanna, Frank De Gilio, and Bill Starke.
technorati tags: IBM, #ibmtechu, Seshadri Subbanna, Frank DeGilio, Bill Starke, Apollo 11, Apollo Guidance Computer, IoT, context multiplier effect, Rio Brazil, weather prediction, GPU, FPGA, POWER8, cognitive computing, TED talk, Watson
This week I am in beautiful Orlando, Florida for the [Systems Technical University].
Here are the sessions I will be speaking at:
It looks like a busy week!
This post was originally written as a guest post for VMware for the VMworld 2015 conference. Read the full blog post [IBM Storage and the Beauty and Benefits of VVol]. The following is an excerpt:
Back in 2012, I had mentioned that VMware was cooking up an exciting new feature called VVol, short for VMware vSphere Virtual Volume.
Officially, the VVol concept was still just a "technology preview" in 2012, to be fleshed out over the next few years through extensive collaboration between VMware and all the major players: IBM, HP, Dell, NetApp and EMC.
In 2013 and 2014, IBM attended VMworld with live demonstrations of VVol support. VMware vSphere v6 was not yet available, but we assured attendees that when it was, IBM would be one of the first vendors with support!
When vSphere v6 was finally made available earlier this year, [only four vendors support VVols on Day 1 of vSphere 6 GA]! Keeping true to its promises, IBM was indeed one of them.
To understand why VVol is such a game-changer, you have to understand a major problem with VMware version 4 and version 5, namely their Virtual Machine File System, or [VMFS].
Here is a picture to help illustrate:
On the left, we see that a VMFS datastore is a set of LUNs from the storage admin's perspective, and a set of VMDK and related files from the vCenter admin's perspective.
If there was a storage-related problem, such as bandwidth performance or latency, how would the two admins communicate to perform troubleshooting? For many disk systems, it is not obvious which VMDK file sits on which LUN.
There are also a variety of hardware capabilities that work at the LUN level, such as snapshots, clones or remote distance mirroring, and this would apply to all the VMDK files in the data store across the set of LUNs, which may not be what you want.
There are two ways to address this in vSphere v4 and v5:
On the right side of the picture, using VMware v6, vCenter admins can now allocate VVols, which are mapped to specific "VVol Storage Containers" on specific storage systems. The storage admin knows exactly which VVol is in which container, so they can now communicate and collaborate on troubleshooting!
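The difference between the two mappings can be sketched as a toy data model. This is purely illustrative; all of the names (LUNs, VVols, arrays, containers) are hypothetical, not actual vSphere objects:

```python
# vSphere v4/v5: a VMFS datastore pools several LUNs, and VMFS decides
# internally where each VMDK's blocks land. Neither admin can see the
# per-VMDK placement, which is exactly the troubleshooting gap.
vmfs_datastore = {
    "luns": ["lun_01", "lun_02", "lun_03"],
    "vmdks": ["web01.vmdk", "db01.vmdk"],  # no per-VMDK LUN mapping exposed
}

# vSphere v6: each VVol maps to a specific storage container on a specific
# array, so both admins can point at the same object when troubleshooting.
vvols = {
    "web01-data": {"array": "XIV-7", "container": "gold-snapshots"},
    "db01-data":  {"array": "XIV-7", "container": "silver-mirrored"},
}

def locate(vvol_name):
    """Answer the question the two admins could not answer under VMFS."""
    v = vvols[vvol_name]
    return f"{v['array']}/{v['container']}"

print(locate("db01-data"))  # XIV-7/silver-mirrored
```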
The vSphere ESXi host communicates to storage arrays via a new "virtual LUN id" called a "Protocol Endpoint". This is to allow FCP, iSCSI and FCoE traffic to flow correctly through SAN or LAN switches. For NFS, the Protocol Endpoint represents a "virtual mount point", so that traffic can be routed through LAN switches correctly.
Storage Policies can help determine which attributes or characteristics you want for your VVol. For example, you may want your VVol to be on a storage container that supports snapshots at the hardware level. The vCenter server learns which capabilities each storage array, and each storage container within it, supports through the vSphere API for Storage Awareness, or VASA.
Different storage manufacturers can implement their VASA providers in different ways. IBM has opted to have a single VASA provider for all of its supported devices, so as to provide a consistent client experience. When you purchase any VVol-supported storage system from IBM, you are entitled to download the IBM VASA provider at no additional charge!
Initially, the IBM VASA provider will focus on IBM XIV Storage System, an ideal platform for your VVol needs. The XIV is a grid-based storage system, utilizing unique algorithms that give optimal data placement for every LUN or VVol created, and virtually guarantees there will be no hot spots. The XIV provides an impressive selection of Enterprise-class features, including snapshot, mirroring, thin provisioning, real-time compression, data-at-rest encryption, performance monitoring, multi-tenancy and data migration capabilities.
With the XIV 11.6 firmware level, you can define up to 12,000 VVols across one or more storage containers in a single XIV system. For more details, see IBM Redbook [Enabling VMware Virtual Volumes with IBM XIV Storage System].
Let me give some real world examples from Paul Braren, an IBM XIV and FlashSystem Storage Technical Advisor from Connecticut, who has been working directly with clients over the past five years:
In addition to XIV, all of IBM's Spectrum Virtualize products also support VVols, including SAN Volume Controller, the Storwize family (including the Storwize in VersaStack), and FlashSystem V9000.
I am not in San Francisco this week for VMworld, but lots of my IBM colleagues are, so please, stop by the IBM booth and tell them I sent you!
Next week, I will return to Istanbul, Turkey to present at the [IBM Systems Technical Symposium], June 1-3 at the Hilton Bomonti hotel.
(Frequent readers of my blog may remember that I had been to Istanbul for a similar conference last year. I arrived a day earlier to do some sightseeing, which I documented in my April 2014 blog post [Arrived Safely to Istanbul].)
Like the IBM Edge conference in Las Vegas earlier this month, this conference will not just be for Storage, but will also include z Systems and POWER Systems content. Here are the sessions I will be presenting:
If you are attending next week in Istanbul, I will see you there!