Continuing my coverage of the IBM Systems Technical University in Orlando, here are the sessions that I presented or attended on Day 4 (Thursday).
- Technology Trends in IBM Storage
Jack Arnold, IBM Client Technical Architect, provided an entertaining session on various technology trends in the industry. For example: what is the fastest growing storage medium for 2015? Answer: [Vinyl LP] records, which have seen a resurgence recently, growing at over 40 percent!
- IBM Spectrum Scale and Elastic Storage Server offerings
Tony Pearson provided an architectural overview of both the Spectrum Scale software and the Elastic Storage Server pre-built system appliance.
- IBM Spectrum Scale for File and Object storage
Tony Pearson explained the differences between file and object-level storage, and how IBM Spectrum Scale can provide both access methods in a single infrastructure.
- IBM Storage Integration with OpenStack
- IBM Spectrum Virtualize IP Replication 101
Andrea Sipka, IBM Software Developer for SVC/Storwize Copy Services from the UK Hursley lab, presented the implementation details of IP-based replication using the built-in WAN Acceleration that IBM licensed from Bridgeworks SANslide.
- Storage Meet the Experts
Mo McCullough hosted the last session of Thursday with a "Meet the Experts" Q&A panel. Tony Pearson, Brian Sherman, Clod Barrera, John Wilkinson, Mike Griese and Jim Blue were among the storage experts fielding questions. Tony Pearson provided a quick overview of the LTO-7 and TS4500 tape library announcements made earlier in the week.
Most IBM conferences are 4.5 days long, which means that there are typically two or three sessions on Friday morning. Unfortunately, the two sessions I was planning to attend on Friday were both cancelled, so Day 4 was the end of my week for this conference.
technorati tags: IBM, #ibmtechu, Jack Arnold, Andrea Sipka, Mo McCullough, Vinyl LP, Spectrum Scale, Elastic Storage Server, ESS, IP Replication, SVC, Storwize V7000, LTO-7, TS4500, Spectrum Virtualize, Mike Griese, Jim Blue
Continuing my coverage of the IBM Systems Technical University in Orlando, here are the sessions that I presented or attended on Day 3 (Wednesday).
- What is Big Data? Architectures and Use Cases
Tony Pearson explained what Big Data analytics are, and IBM's various products to support this, including BigInsights, BigSQL and Spectrum Scale with the Hadoop Connector.
- Why use IBM Spectrum Virtualize for High Availability
John Wilkinson, IBM Storage Software Engineer from the UK Hursley lab, presented the latest enhancements to Spectrum Virtualize-based products, such as SVC and Storwize V7000, related to Stretch Cluster and HyperSwap functions for High Availability.
- IBM Systems Hybrid Cloud Strategy, POV and Showcase
Dave Willoughby, IBM z System Hardware Architect for Systems Cloud Emerging Technologies, provided a high-level "Point-of-View" for Hybrid Cloud, and why IBM is focused on helping clients transition from traditional IT infrastructures.
- Data Footprint Reduction - Understanding IBM Storage Efficiency Options
Tony Pearson presented an overview of Thin Provisioning, Space-efficient snapshots, Data deduplication and Real-time Compression features.
- IBM Spectrum Virtualize - Understanding SVC, Storwize and FlashSystem V9000
Tony Pearson provided an overview of SAN Volume Controller, the Storwize family of products and FlashSystem V9000, all of which are based on Spectrum Virtualize software.
The day ended with a trip to Universal Studios. Dinner on the City Walk offered entertainment with Dueling Pianos. This was then followed by a trip to Hogsmeade, the Harry Potter themed portion of the resort.
technorati tags: IBM, #ibmtechu, big data, analytics, BigInsights, BigSQL, Spectrum Scale, Hadoop, John Wilkinson, SVC, Storwize, Stretch Cluster, HyperSwap, Dave Willoughby, Thin Provisioning, Space-Efficient Snapshot, Deduplication, Real-time Compression, Spectrum Virtualize, FlashSystem V9000
Continuing my coverage of the IBM Systems Technical University in Orlando, here are the sessions that I presented or attended on Day 2 (Tuesday).
- Storage Futures
Andrew Greenfield, IBM Global XIV Storage and Networking Client Technical Specialist, presented IBM's future plans for XIV and FlashSystem products. This was a special NDA session.
- Demystify OpenStack
Eric Aquaronne, IBM Systems and Cloud Business Development lead, explained what OpenStack is, and why IBM is so heavily invested in its success. OpenStack is cloud management software that can be used to manage both on-premise and off-premise environments, including compute, storage and networking resources.
- Software Defined Storage - Why? What? How?
Tony Pearson presented an overview of Software Defined Environments and how storage fits into this.
Suspiciously, there was a lot of overlap with Brian Sherman's presentation on Day 1. As Charles Caleb Colton would say, "Imitation is the sincerest form of flattery."
- Making Sense of IBM Cloud Offerings
Jay Kruemcke, IBM Cloud Program Executive Client Collaboration Market Management Offering Manager, gave a high-level overview of IBM's various Cloud offerings from SoftLayer to Managed Cloud Services.
- The Pendulum Swings Back - Understanding Converged and Hyperconverged environments
Tony Pearson presented IBM's involvement with Converged Systems like VersaStack and Hyperconverged systems with Spectrum Accelerate and Spectrum Scale software.
- Next Generation Storage Tiering: Less Management, Lower Cost and Increased Performance
Tony Pearson presented Easy Tier, Storage Analytics Engine in Spectrum Control Advanced Edition, and Spectrum Scale tiering across flash, disk and tape media.
The second day ended with a "Networking" Reception in the Solution Center, serving food and my favorite grape-flavored beverages.
technorati tags: IBM, #ibmtechu, Andrew Greenfield, Eric Aquaronne, Jay Kruemcke, XIV, FlashSystem, OpenStack, SDS, Software Defined Storage, IBM Cloud, SoftLayer, Cloud Managed Services, Converged Systems, hyperconverged, VersaStack, Spectrum Accelerate, Spectrum Scale, Easy Tier, Storage Analytics Engine, Spectrum Control
Modified by TonyPearson
Continuing my coverage of the IBM Systems Technical University in Orlando, here are the sessions that I presented or attended on Day 1 (Monday).
- Storage Keynote Session
This was a three-part kick-off keynote session. Mo McCullough, IBM Systems Lab Services and Training, coordinated the storage track of this event and provided some details on how to use the website portal and smartphone app.
Clod Barrera, IBM Distinguished Engineer and Chief Technical Strategist for Storage, presented the future of the storage industry, including trends in storage media technologies, data plane and control plane level enhancements, and broader system-wide considerations.
Tony Pearson, IBM Master Inventor and Senior Software Engineer, wrapped up the session with an overview of IBM's Smarter Storage strategy.
- IBM Software Defined Storage Overview, Concepts and IBM SDS Family
Brian Sherman, IBM Distinguished Engineer and Client Technical Specialist for Advanced Technical Skills in the Americas, provided an overview of Software Defined Environments and how storage fits in that view, especially IBM's Spectrum Storage family.
- IBM Cloud Storage Options
Tony Pearson presented on IBM's various Cloud Storage options.
While my original focus was on-premise storage solutions for use by Data Centers and Cloud Service providers, there was a lot of interest in IBM's storage available from SoftLayer and other Cloud providers. During this week, IBM announced its acquisition of CleverSafe, which I had not incorporated into the deck.
- What's New in IBM Spectrum Protect v7.1.3
Tricia Jiang, IBM Technical Enablement Specialist for IBM Spectrum Storage, presented the latest release of IBM Spectrum Protect. That's an inside joke--this is actually the first release under the Spectrum Protect name, but since it was based on IBM Tivoli Storage Manager (TSM) v7.1.2, it was easier just to continue the same numbering scheme.
The main features of v7.1.3 are the new in-line dedupe capability, the new "deduplication containers" concept, and support for backing up to object storage, either on-premise or in the cloud.
- IBM Spectrum Scale v4.1 Overview
Glen Corneau, IBM Client Technical Specialist for Power Systems, presented the latest features of IBM Spectrum Scale, formerly known as IBM General Parallel File System (GPFS). It was interesting to hear this from a Power Systems perspective, as IBM Spectrum Scale supports both AIX and Linux on POWER.
The day ended with a Welcome Reception at the IBM Solution Center that had various z System, Power System and System Storage solutions, as well as solutions from various IBM Business Partners and other third parties.
technorati tags: IBM, #ibmtechu, Clod Barrera, Brian Sherman, Mo McCullough, Tricia Jiang, Glen Corneau, Smarter Storage, Cloud Storage, Spectrum Storage, Spectrum Protect, Spectrum Scale, SDS, Software Defined Storage, AIX, Linux POWER, TSM, GPFS
Oh my, it is Tuesday again, and you know what that means? IBM Announcements!
This week, IBM announced its latest storage arrays in its IBM System Storage DS8000 series: the DS8880 models. Similar to the "Business Class" vs. "Enterprise Class" distinctions of the DS8870, IBM announced two new models, the DS8884 and the DS8886.
All of the new DS8880 models are based on the latest IBM POWER8 processors, and are noticeably thinner! The frames are now a standard 19 inches wide, fitting nicely into standard IBM racks alongside most other standard 19-inch rack equipment.
The DC-UPS that used to be on the side are now at the bottom of each frame, taking up 8U of space. The High Performance Flash Enclosures (HPFE) that formerly were stored vertically above the DC-UPS will be stored horizontally with the rest of the HDD and SSD drives.
- DS8884 model
- The DS8884 will have 6-core controllers, up to 256 GB of cache, 64 ports that can negotiate between 16Gbps and 8Gbps, up to 240 drives in a single-rack configuration or 768 drives in a three-frame configuration, and up to 120 flash cards in HPFEs. The performance of this model is equal to or better than that of existing DS8870 systems.
- DS8886 model
- The DS8886 will have 8-core, 16-core and 24-core controllers, offering up to three times the performance of the previous DS8870 models, with up to 2 TB of cache, 128 ports, up to 1,536 drives across five frames, and up to 240 flash cards in HPFEs.
Field model conversion from DS8870 to DS8886 is available for existing clients with DS8870 Enterprise Configurations. This will let clients move their existing HDD, SSD, HPFE and Host Adapters over to the new DS8880 models.
In previous DS8000 models, clients would have one Hardware Management Console (HMC) inside the array, and an optional second HMC workstation somewhere else for high availability. While the second one was optional, it was always considered best practice to have it for redundancy's sake. In the new DS8880 models, you can have both HMCs in the array, and the Keyboard/Video/Mouse (KVM) can switch between the two.
The new I/O enclosure pairs are four times faster, supporting six Device Adapters and two HPFE connections over a PCIe Gen 3 network, the fastest available in the industry.
Lastly, IBM simplified the licensing of software features into three bundles, based on TB total capacity of Fixed Block (FB) LUNs and Count-Key-Data (CKD) volumes:
- Base function License: Logical Configuration support for FB, Operating Environment License, Thin Provisioning, Easy Tier® automated sub-volume tiering, and I/O Priority Manager.
- Copy Services License: FlashCopy®, Metro Mirror, Global Mirror, Metro/Global Mirror, z/Global Mirror (XRC), z/Global Mirror Resync, and Multi-Target PPRC.
- z-Synergy Service License: Parallel Access Volumes (PAV), HyperPAV, FICON® attachment, High performance FICON (zHPF), and IBM z/OS® Distributed Data Backup (zDDB).
IBM also provided a "Product preview", announcing plans for a third member of the DS8880 family in 2016 that will be flash-optimized to provide an all-flash, higher performance storage system model.
To learn more, read the [IBM Press Release] and [Function authorizations].
technorati tags: IBM, DS8000, DS8870, DS8880, DS8884, DS8886, HPFE, HDD, SSD, HMC, KVM, FB, CKD, Easy Tier, FlashCopy, FICON, zHPF, zDDB, all-flash
It's Tuesday, and you know what that means? IBM Announcements! This week I am in beautiful Orlando, Florida for the [IBM Systems Technical University] conference.
This week, IBM announced its latest tape offerings for the seventh generation of Linear Tape Open (LTO-7), providing huge gains in performance and capacity.
For capacity, the new LTO-7 cartridges can hold up to 6TB native capacity, or 15TB effective capacity with the 2.5:1 compression that is typical for many data types. That is 2.4x larger than the 2.5TB cartridges available with LTO-6. Performance has also nearly doubled, with a native throughput of 315 MB/sec, or an effective 780 MB/sec with 2.5:1 compression. The LTO consortium, of which IBM is a founding member, has published the roadmap for future generations: LTO-8, LTO-9 and LTO-10.
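The effective figures above are simple multiples of the native numbers. Here is a quick back-of-envelope check, assuming the consortium's 2.5:1 compression ratio (real-world ratios vary with the data being written):

```python
# LTO-7 headline numbers, computed from the native specs.
NATIVE_CAPACITY_TB = 6.0       # LTO-7 native cartridge capacity
LTO6_CAPACITY_TB = 2.5         # LTO-6 native cartridge capacity
COMPRESSION_RATIO = 2.5        # assumed typical compression (varies in practice)
NATIVE_THROUGHPUT_MBS = 315.0  # native drive throughput

effective_capacity_tb = NATIVE_CAPACITY_TB * COMPRESSION_RATIO        # 15.0 TB
capacity_growth = NATIVE_CAPACITY_TB / LTO6_CAPACITY_TB               # 2.4x over LTO-6
effective_throughput_mbs = NATIVE_THROUGHPUT_MBS * COMPRESSION_RATIO  # 787.5, quoted as ~780

print(effective_capacity_tb, capacity_growth, effective_throughput_mbs)
```

Note that the 780 MB/sec figure in the announcement is simply the 787.5 MB/sec product rounded down for marketing purposes.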
IBM will offer both half-height and full-height LTO-7 tape drives. All the features you love from LTO-6 like WORM, partitioning and Encryption carry forward. These drives will be supported on a variety of distributed operating systems, including Linux on z System mainframes, and the IBM i platform on POWER Systems.
The Linear Tape File System (LTFS) can be used to treat LTO-7 cartridges in much the same way as Compact Discs or USB memory sticks, allowing one person to create content on an LTO-7 tape cartridge, and pass that cartridge to the next employee, or to another company. LTFS is also the basis for IBM Spectrum Archive, which allows tape data to be part of a global namespace with IBM Spectrum Scale.
LTO-7 will be supported on the TS2900 auto-loader, as well as all of IBM's tape libraries: TS3100, TS3200, TS3310, TS3500 and TS4500. You can connect up to 15 TS3500 tape libraries together with shuttle connectors, for a maximum of 2,700 drives serving 300,000 cartridges, or up to 1.8 Exabytes of data in a single system environment.
In addition to LTO-7 support, the IBM TS4500 tape library was also enhanced. You can now grow it up to 18 frames, and have up to 128 drives serving 23,170 cartridges, for a maximum capacity of 139 PB of data. You can now also intermix LTO and 3592 frames in the same TS4500 tape library.
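As a sanity check, the library maximums quoted above follow directly from LTO-7's 6TB native cartridge capacity (before compression):

```python
# Native (uncompressed) capacity math for the library maximums.
TB_PER_CARTRIDGE = 6  # LTO-7 native cartridge capacity

# TS3500 shuttle complex: up to 15 libraries, 300,000 cartridges
ts3500_eb = 300_000 * TB_PER_CARTRIDGE / 1_000_000  # 1.8 EB

# TS4500: up to 18 frames, 23,170 cartridges
ts4500_pb = 23_170 * TB_PER_CARTRIDGE / 1_000       # ~139 PB

print(ts3500_eb, ts4500_pb)
```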
For compatibility, LTO-7 drives can read existing LTO-5 and LTO-6 tape cartridges, and can write to LTO-6 media, to help clients with the transition.
technorati tags: IBM, #ibmtechu, LTO, LTO-7, TS2900, TS2270, TS1070, TS3100, TS3200, TS3500, TS3310, TS4500
This week I am in beautiful Orlando, Florida for the [IBM Systems Technical University] conference.
Amy Hirst, IBM Director, z Systems, Power, & Storage Technical Training, kicked off the general session.
Dr. Seshadri "Sesha" Subbanna, IBM Corporate Innovation and Technology Evaluation, asked the audience what capability is needed to drive business growth. A recent poll indicated that the ability for businesses to innovate was the number one response.
The IT industry has had its own version of growth. Consider that the Apollo 11 [Guidance Computer] used to land a man on the moon had just 4KB of RAM and 36KB of ROM. A typical smartphone has 62,000,000 times as much.
The Apollo missions led and motivated Integrated-Circuit technology, but soon, maybe in the next 10 years, Dr. Subbanna feels that silicon may run its course. Today, both POWER8 and z13 servers are based on 22nm lithography. IBM has projected possible reductions to 17nm, 13nm, 10nm, and finally 7nm. That's it: going smaller than 7nm may not be possible without hitting atomic-scale issues.
The City of Rio de Janeiro, Brazil is a good example. In 2010, heavy rains resulted in flooding and landslides that killed over 110 residents. To prevent such a high death toll in the future, IBM helped the city government deploy predictive analytics and forecasting that allow "rain simulations" to see how well the city can handle different scenarios.
IBM is already looking for a more holistic view of systems, and new technologies like cognitive computing. New 3D technology allows various chip technologies to be stacked as layers on a single chip. For example, you could have compute on the bottom layer, flash non-volatile storage in middle layers, and networking at the top layer. Connecting the layers is merely a matter of drilling holes and filling them with metal.
The idea that compute is the center of the universe, with a mainframe server surrounded by input and output "peripheral" storage devices, is giving way to a more storage-centric model, where central storage repositories (or data lakes) are accessed by "peripheral" smartphones, tablets and a variety of servers. For example, the IBM DB2 Analytics Accelerator acts as a storage-centric appliance: IBM z System mainframes send data into it, offload complex database queries, and get the results up to 2000x faster.
In another client example, IBM helped a bank in China to determine optimal placement of bank branches, based on public information of average salary levels of each neighborhood.
CPU processors are also getting help from co-processor accelerators like GPUs (Graphics Processing Units) and FPGAs (Field Programmable Gate Arrays). Comparing a single IBM POWER8 server that is CAPI-attached to an IBM FlashSystem against a stack of x86 servers with internal SSD, the POWER8 solution occupies 12x less rack space, consumes 12x less electricity, and reduces per-user costs from $24/user on x86 down to $7.50/user on POWER8.
Social media, mobile phones and the Internet of Things (IoT) already generate a lot of data. If you then factor in the "context multiplier effect" of all the links, connections and cross-references, you quickly see that data is growing at incredible rates.
Another issue is the difficulty of identifying application inter-dependencies. Forecasting disruptive anomalies can be quite difficult. In one example, administrators received warning messages 65 minutes before a major outage, but they did not respond in time because they were unable to understand the full implications.
Cognitive computing is different from the tabulating and programming paradigms of prior decades. It is focused on Natural Language Processing, citing evidence to support its responses, and the ability to learn and improve from experience. The IBM Watson group is working with Memorial Sloan Kettering to help oncology doctors treat cancer patients.
In an interesting demo, the IBM Watson computer analyzed thousands of "TED Talk" videos, and was able to respond to search queries by playing the 30-second video clip that most closely addressed the search topic.
Cognitive computing is also looking at "Neuro-Synaptic" chips that work very much like the neurons and synapses in the brain. I have seen some of this work already at the IBM Almaden Research Center in California.
The general session ended with a Q&A panel with Dr. Subbanna, Frank De Gilio, and Bill Starke.
technorati tags: IBM, #ibmtechu, Seshadri Subbanna, Frank DeGilio, Bill Starke, Apollo 11, Apollo Guidance Computer, IoT, context multiplier effect, Rio Brazil, weather prediction, GPU, FPGA, POWER8, cognitive computing, TED talk, Watson
This week I am in beautiful Orlando, Florida for the [Systems Technical University].
Here are the sessions I will be speaking at:
|Monday||10:15am||Opening Session - Storage|
|01:45pm||IBM's Cloud Storage Options|
|05:30pm||Solution Center Reception|
|Tuesday||11:30am||Software Defined Storage - Why? What? How?|
|03:15pm||The Pendulum Swings Back - Understanding Converged and Hyperconverged Environments|
|04:30pm||New Generation of Storage Tiering: Less Management, Lower Cost, and Increased Performance|
|05:30pm||Solution Center Reception|
|Wednesday||09:00am||What is Big Data? Architectures and Use Cases|
|01:45pm||Data Footprint Reduction - Understanding IBM Storage Efficiency Options|
|03:15pm||IBM Spectrum Virtualize - SVC, Storwize and FlashSystem V9000|
|Thursday||10:15am||IBM Spectrum Scale and Elastic Storage Server|
|01:45pm||IBM Spectrum Scale for File and Object storage|
|01:45pm||IBM Storage Integration with OpenStack|
|05:30pm||Storage! Meet the Experts|
|Friday||10:15am||IBM Spectrum Virtualize - SVC, Storwize and FlashSystem V9000|
It looks like a busy week!
technorati tags: IBM, Systems, STU, Orlando, Conference
This post was originally written as a guest post for VMware for the VMworld 2015 conference. Read the full blog post [IBM Storage and the Beauty and Benefits of VVol]. The following is an excerpt:
Back in 2012, I had mentioned that VMware was cooking up an exciting new feature called VVol, short for VMware vSphere Virtual Volume.
Officially, the VVol concept was still just a "technology preview" in 2012, to be fleshed out over the next few years through extensive collaboration between VMware and all the major players: IBM, HP, Dell, NetApp and EMC.
In 2013 and 2014, IBM attended VMworld with live demonstrations of VVol support. VMware vSphere v6 was not yet available, but when it was, we assured them, IBM would be one of the first vendors with support!
When vSphere v6 was finally made available earlier this year, [only four vendors support VVols on Day 1 of vSphere 6 GA]! Keeping true to its promises, IBM was indeed one of them.
To understand why VVol is such a game-changer, you have to understand a major problem with VMware version 4 and version 5, namely their Virtual Machine File System, or [VMFS].
Here is a picture to help illustrate:
On the left, we see that a VMFS datastore is a set of LUNs from the storage admin's perspective, and a set of VMDK and related files from the vCenter admin's perspective.
If there was a storage-related problem, such as bandwidth performance or latency, how would the two admins communicate to perform troubleshooting? For many disk systems, it is not obvious which VMDK file sits on which LUN.
There are also a variety of hardware capabilities that work at the LUN level, such as snapshots, clones or remote distance mirroring, and these apply to all the VMDK files in the datastore across the set of LUNs, which may not be what you want.
There are two ways to address this in vSphere v4 and v5:
- The first method is to have fewer VMDK files per datastore. By defining smaller datastores with just a few VMs associated with each, you can then have a closer mapping of VMDK files to datastore LUNs. Unfortunately, VMware ESXi limits each host to 256 attached datastores, so this method only goes so far.
- The other method around this is "Raw Device Mapping" (RDM) which allowed Virtual Machines to be attached to specific LUNs. Some of the earlier restrictions and limitations for RDMs have since been relaxed over the releases, but your disk system still needs to expose the SCSI identifiers of each LUN to make this work, and additional setup is required if you plan to cluster two or more systems together, such as for a Microsoft Cluster Server (MSCS).
On the right side of the picture, using VMware v6, vCenter admins can now allocate VVols, which are mapped to specific "VVol Storage Containers" on specific storage systems. The storage admin knows exactly which VVol is in which container, so they can now communicate and collaborate on troubleshooting!
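To make the contrast concrete, here is a tiny illustrative sketch (all names are hypothetical, not a real vSphere API): with VMFS, the VMDK-to-LUN mapping is internal to the file system, while with VVols each virtual disk is its own array-side object, so either admin can trace a hot volume straight back to a VM.

```python
# VMFS: a datastore stripes many VMDKs across a shared set of LUNs.
# Which blocks of which VMDK land on which LUN is internal to VMFS,
# so a hot LUN cannot easily be traced to a single VM.
vmfs_datastore = {
    "luns": ["LUN_10", "LUN_11", "LUN_12"],
    "vmdks": ["vm_a.vmdk", "vm_b.vmdk", "vm_c.vmdk", "vm_d.vmdk"],
}

# VVol: each virtual disk is its own object in a storage container,
# so both admins see the same per-VM unit of management.
vvol_container = {
    "vm_a_disk0": "vvol-0001",
    "vm_b_disk0": "vvol-0002",
}

def owner_of(vvol_id, container):
    """The storage admin can answer 'whose volume is hot?' directly."""
    for vm_disk, vid in container.items():
        if vid == vvol_id:
            return vm_disk
    return None

print(owner_of("vvol-0002", vvol_container))  # vm_b_disk0
```

The same 1:1 mapping is what lets hardware features like snapshots or replication be applied per virtual disk, rather than to every VMDK sharing the datastore.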
The vSphere ESXi host communicates to storage arrays via a new "virtual LUN id" called a "Protocol Endpoint". This is to allow FCP, iSCSI and FCoE traffic to flow correctly through SAN or LAN switches. For NFS, the Protocol Endpoint represents a "virtual mount point", so that traffic can be routed through LAN switches correctly.
Storage Policies can help determine which attributes or characteristics you want for your VVol. For example, you may want your VVol to be on a storage container that supports snapshots at the hardware level. The vCenter server can be aware of which storage arrays, and which storage containers in those arrays, through the VMware API for Storage Awareness, or VASA.
Different storage manufacturers can implement their VASA provider in different ways. IBM has opted to have a single VASA provider for all of its supported devices, so as to provide a consistent client experience. When you purchase any VVol-supported storage system from IBM, you are entitled to download the IBM VASA provider at no additional charge!
Initially, the IBM VASA provider will focus on the IBM XIV Storage System, an ideal platform for your VVol needs. The XIV is a grid-based storage system, utilizing unique algorithms that give optimal data placement for every LUN or VVol created, and virtually guarantee there will be no hot spots. The XIV provides an impressive selection of Enterprise-class features, including snapshot, mirroring, thin provisioning, real-time compression, data-at-rest encryption, performance monitoring, multi-tenancy and data migration capabilities.
With the XIV 11.6 firmware level, you can define up to 12,000 VVols across one or more storage containers in a single XIV system. For more details, see IBM Redbook [Enabling VMware Virtual Volumes with IBM XIV Storage System].
Let me give some real world examples from Paul Braren, an IBM XIV and FlashSystem Storage Technical Advisor from Connecticut, who has been working directly with clients over the past five years:
"Many of my customers have clearly said they really want the ability to have a granular snapshot that grabs a moment in time of just one VM, rather than all the VMs that happen to be on the same LUN. They also want to delete VMs, and have the storage array automatically present that newly available space. Even better, with VVol, these SAN related tasks appear to be executed nearly instantly, leaving behind those legacy shared VMFS datastore limitations and overhead.
The same benefits of VVol are evident when cloning or deploying VMs. Imagine being able to create a Windows Server VM with a 400GB thick-provisioned drive in under 20 seconds. Well, you don't have to imagine it! I recorded video of this actually happening over at IBM's European Storage Competence Center, featured in this 8-minute video: [IBM XIV Storage System and VMware vSphere Virtual Volumes (VVol). An ideal combination!]"
-- Paul Braren
In addition to XIV, all of IBM's Spectrum Virtualize products also support VVols, including SAN Volume Controller, the Storwize family (including the Storwize in VersaStack), and FlashSystem V9000.
I am not in San Francisco this week for VMworld, but lots of my IBM colleagues are, so please, stop by the IBM booth and tell them I sent you!
Next week, I will return to Istanbul, Turkey to present at the [IBM Systems Technical Symposium], June 1-3 at the Hilton Bomonti hotel.
(Frequent readers of my blog may remember that I had been to Istanbul for a similar conference last year. I arrived a day earlier to do some sightseeing, which I documented in my April 2014 blog post [Arrived Safely to Istanbul].)
Like the IBM Edge conference in Las Vegas earlier this month, this conference will not just cover Storage, but will also include z Systems and POWER Systems content. Here are the sessions I will be presenting:
|Monday||11:30||Software Defined Storage: IBM Vision and Strategy|
|14:45||Software Defined Storage: Technical Overview|
|Tuesday||11:30||IBM's Cloud Storage Options|
|16:00||What is Big Data? Architectures and Practical use Cases|
|Wednesday||10:15||IBM Spectrum Storage Integration with OpenStack|
|14:45||New Generation of Storage Tiering: Less Management, Lower Costs and Increased Performance|
If you are attending next week in Istanbul, I will see you there!
technorati tags: IBM, Systems Technical Symposium, Istanbul Turkey, Software Defined Storage, Cloud Storage, Big Data, Spectrum Storage, OpenStack, Storage Tiering