Tony Pearson is a Master Inventor and Senior IT Architect for the IBM Storage product line at the
IBM Systems Client Experience Center in Tucson, Arizona, and a featured contributor
to IBM's developerWorks. In 2016, Tony celebrates his 30th anniversary with IBM Storage. He is
author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson)
My books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Jim is an IBM Fellow for IBM Systems and Technology Group. There are only 73 IBM Fellows currently working for IBM, and this is the highest honor IBM can bestow on an employee. He has been working with IBM since 1968.
He is tasked with predicting the future of IT and helping drive strategic direction for IBM. Cost pressures, requirements for growth, accelerating innovation, and changing business needs all influence this direction.
IBM's approach is to integrate four different "IT building blocks":
Scale-up Systems, like the IBM System Storage DS8000 and TS3500 Tape Library
Resource Pools, such as IBM Storage Pools formed from managed disks by IBM SAN Volume Controller (SVC)
Integrated stacks and appliances: integrated software and hardware stacks, ranging from the Storwize V7000 to full-rack systems like the IBM Smart Analytics Server or CloudBurst.
Mobility of workloads and resources requires unified end-to-end service management. Fortunately, IBM is the #1 provider of IT Service Management solutions.
Jim addressed three myths:
Myth 1: IT Infrastructures will be homogenous.
Jim feels that innovations are happening too rapidly for this to ever happen, and that homogeneity is not a desirable end-goal anyway. Instead, finding the right balance of the IT building blocks might be a better approach.
Myth 2: All of your problems can be solved by replacing everything with product X.
Jim feels that the days of "rip-and-replace" are fading away. As IBM Executive Steve Mills said, "It isn't about the next new thing, but how well new things integrate with established applications and processes."
Myth 3: All IT will move to the Cloud model.
Jim feels a substantial portion of IT will move to the Cloud, but not all of it. There will always be exceptions where the old traditional ways of doing things might be appropriate. Clouds are just one of the many building blocks to choose from.
Jim's focus lately has been finding new ways to take advantage of virtualization concepts. Server, storage and network virtualization are helping address these challenges through four key methods:
Sharing - virtualization that allows a single resource to be used by multiple users. For example, hypervisors allow several guest VM operating systems to share common hardware on a single physical server.
Aggregation - virtualization that allows multiple resources to be managed as a single pool. For example, SAN Volume Controller can virtualize the storage of multiple disk arrays and create a single storage pool (see the sketch after this list).
Emulation - virtualization that allows one set of resources to look and feel like a different set of resources. Some hypervisors can emulate different kinds of CPU processors, for example.
Insulation - virtualization that hides complexity from the end-user application or other higher levels of infrastructure, making it easier to change the underlying managed resources. For example, both SONAS and SAN Volume Controller allow disk capacity to be removed and replaced without disruption to the application.
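To make the aggregation concept concrete, here is a toy Python sketch of a storage pool that presents several managed disks as a single pool of capacity, in the spirit of what SAN Volume Controller does. All class and method names here are illustrative, not actual SVC interfaces.

```python
# Toy illustration of storage aggregation: many small "managed disks"
# presented as one large pool, in the spirit of SAN Volume Controller.
# All names are illustrative; this is not an actual SVC interface.

class StoragePool:
    def __init__(self, mdisk_capacities_gb):
        # Aggregate all managed disks into a single pool of capacity.
        self.capacity_gb = sum(mdisk_capacities_gb)
        self.allocated_gb = 0

    def create_volume(self, size_gb):
        # Carve a virtual volume out of the pool; callers never see
        # which physical disk array the capacity actually lives on.
        if self.allocated_gb + size_gb > self.capacity_gb:
            raise ValueError("pool exhausted")
        self.allocated_gb += size_gb
        return {"size_gb": size_gb,
                "pool_free_gb": self.capacity_gb - self.allocated_gb}

# Three disk arrays of different sizes appear as one 7 TB pool.
pool = StoragePool([2048, 2048, 3072])
vol = pool.create_volume(500)
print(vol)  # {'size_gb': 500, 'pool_free_gb': 6668}
```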
In today's economy, IT transformation costs must be low enough to yield near-term benefits. The long-term benefits are real, but near-term benefits are needed for projects to get started.
What sets IBM apart from the pack? Here was Jim's list:
100 Years of Innovation, including being the U.S. Patent leader for the last 18 years in a row
IBM's huge investment in IBM Research, with labs all over the globe
Leadership products in a broad portfolio
Workload-optimized designs with integration from middleware all the way down to underlying hardware
Comprehensive management software for IBM and non-IBM equipment
Clod is an IBM Distinguished Engineer and Chief Technical Strategist for IBM System Storage. His presentation focused on trends and directions in the IT storage industry. Clod started by describing five workload categories.
To address these unique workload categories, IBM will offer workload-optimized systems. The four design drivers for these are performance, efficiency, scalability, and integration. For example, to address performance, companies can adopt Solid-State Drives (SSD). Unfortunately, these are 20 times more expensive than spinning disk on a dollar-per-GB basis, and the complexity involved in deciding what data to place on SSD can be daunting. IBM solved this with an elegant solution called IBM System Storage Easy Tier, which provides automated data tiering for the IBM DS8000, SAN Volume Controller (SVC) and Storwize V7000.
For scalability, IBM has adopted scale-out architectures, as seen in the XIV, SVC, and SONAS. SONAS is based on the highly scalable IBM General Parallel File System (GPFS). File systems are like wine: they get better with age. GPFS was introduced 15 years ago, and is more mature than many of the other "scalable file systems" from our competition.
Areal Density advancements on Hard Disk Drives (HDD) are slowing down. During the 1990s, the IT industry enjoyed 60 to 100 percent annual improvement in areal density (bits per square inch). In the 2000s, this dropped to 25 to 40 percent, as engineers are starting to hit various physical limitations.
Storage efficiency features like compression have been around for a while, but are being deployed in new ways. For example, IBM invented the WAN compression needed for mainframe HASP, and WAN compression became an industry standard. Then IBM introduced compression on tape, and now compression on tape is an industry standard, too. ProtecTIER and Information Archive combine compression with data deduplication to store backups and archive copies. Lastly, IBM now offers compression on primary data, through the IBM Real-Time Compression appliance.
For the rest of this decade, IBM predicts that tape will continue to enjoy (at least) 10 times lower dollar-per-GB than the least expensive spinning disk. Disk and Tape share common technologies, so all of the R&D investment for these products apply to both types of storage media.
For integration, IBM is leading the effort to help companies converge their SAN and LAN networks. By 2015, Clod predicts that there will be more FCoE purchased than FCP. IBM is also driving integration between hypervisors and storage virtualization. For example, IBM already supports VMware API for Array Integration (VAAI) in various storage products, including XIV, SVC and Storwize V7000.
Lastly, Clod could not finish a presentation without mentioning Cloud Computing. Cloud storage is expected to grow 32 percent CAGR from year 2010 to 2015. Roughly 10 percent of all servers and storage will be in some type of cloud by 2015.
As is often the case, I am torn between getting short posts out in a timely manner versus taking more time to improve their length and quality but posting them much later. To achieve this balance, I will spread the blog posts out in consumable amounts over the next week or two.
Continuing my coverage of the [IBM System Storage Technical University 2011], I participated in the storage free-for-all, a long-time tradition that started at the SHARE user group conference and carried forward to other IT conferences. The free-for-all is a Q&A panel of experts, where anyone can ask any question. These sessions are sometimes called "Birds of a Feather" (BOF). Last year, we had two: one focused on Tivoli Storage software, and the second covered storage hardware. This year, we again had two: one for System x called "Ask the eXperts", and one for System Storage called "Storage Free-for-All". This post covers the latter.
(Disclaimer: Do not shoot the messenger! We had a dozen or more experts on the panel, representing System Storage hardware, Tivoli Storage software, and Storage services. I took notes, trying to capture the essence of the questions, and the answers given by the various IBM experts. I have spelled out acronyms and provided links to relevant materials. The answers from individual IBMers may not reflect the official position of IBM management. Where appropriate, my own commentary will be in italics.)
You are in the wrong session! Go to "Ask the eXperts" session next door!
The TSM GUI sucks! Are there any plans to improve it?
Yes, we are aware that products like IBM XIV have raised the bar for what people expect from graphical user interfaces. We have plans to improve the TSM GUI. IBM's new GUI for the SAN Volume Controller and Storwize V7000 has been well-received, and will be used as a template for the GUIs of other storage hardware and software products. The GUI uses the latest HTML5, Dojo widgets and AJAX technologies, eliminating Java dependencies on the client browser.
Can we run the TSM Admin GUI from a non-Windows host?
IBM has plans to offer this. Most likely, this will be browser-based, so that any OS with a modern browser can be used.
As hard disk drives grow larger in capacity, RAID-5 becomes less viable. What is IBM doing to address this?
IBM is aware of this problem. IBM offers RAID-DP on the IBM N series, RAID-X on the IBM XIV, and RAID-6 on its other disk systems.
TPC licensing is outrageous! What is IBM going to do about it?
How widely has Easy Tier been adopted?
About 25 percent of DS8000 disk systems have SSD installed. Now that IBM DS8000 Easy Tier supports "any two" tiers, roughly 50 percent of DS8000 systems now have Easy Tier activated. We have no adoption figures yet for Easy Tier on SVC or Storwize V7000.
We have an 8-node SVC cluster, should we put 8 SSD drives into a single node-pair, or spread them out?
We recommend putting a separate Solid-State Drive in each SVC node, with RAID-1 between nodes of a node-pair. By separating the SSD across I/O groups, you can reduce node-to-node traffic.
How well has SVC 6.2 been adopted?
The inventory call-home data is not yet available. The only SVC hardware model that does not support this level of software was the 2145-4F2 introduced in 2003. Every other model since then can be updated to this level.
Will IBM offer 600GB FDE drives for the IBM DS8700?
Currently, IBM offers 300GB and 450GB 15K RPM drives with the Full-Disk Encryption (FDE) capability for the DS8700, and 450GB and 600GB 10K RPM drives with FDE for the IBM DS8800. IBM is working with its disk suppliers to offer FDE on other disk capacities, and on SSD and NL-SAS drives as well, so that all can be used with IBM Easy Tier.
Is there a reason for the feature lag between the Easy Tier capabilities of the DS8000, and that of the SVC/Storwize V7000?
We have one team for Easy Tier, so they implement it first on DS8000, then port it over to SVC/Storwize V7000.
Does it even make sense to have separate storage tiers, especially when you factor in the cost of SVC and TPC to make it manageable?
It depends! We understand this is a trade-off between cost and complexity. Most data centers have three or more storage tiers already, so products like SVC can help simplify interoperability.
Are there best practices for combining SVC with DS8000? Can we share one DS8000 system across two or more SVC clusters?
Yes, you can share one DS8000 across multiple SVC clusters. The DS8000 has auto-restripe, so consider having two big extent pools. SVC assigns each managed disk a queue depth between 3 and 60, so aim to have up to 60 managed disks on your DS8000 assigned to SVC; the more managed disks, the better.
The IBM System Storage Interoperability Center (SSIC) site does not seem to be designed well for the SAN Volume Controller.
Yes, we are aware of that. It was designed based on traditional Hardware Compatibility Lists (HCL), but storage virtualization presents unique challenges.
How does the 24-hour learning period work for IBM Easy Tier? We have batch processing that runs from 2am to 8am on Sundays.
You can have Easy Tier monitor across this batch job window, and turn Easy Tier management between tiers on and off as needed.
Now that NetApp has acquired LSI, is the DS3000 still viable?
Yes, IBM has a strong OEM relationship with both NetApp and LSI, and this continues after the acquisition.
If we have managed disks from a DS8000 multi-rank extent pool assigned to multiple SVC clusters, won't this affect performance?
Yes, possibly. Keep managed disks in separate extent pools if this is a big concern. A PERL script is available to re-balance SVC striped volumes as needed after these changes.
Is the IBM [TPC Reporter] a replacement for IBM Tivoli Storage Productivity Center?
No, it is software, available at no additional charge, that provides additional reporting to those who have already licensed Tivoli Storage Productivity Center 4.1 and above. It will be updated as needed when new versions of Productivity Center are released.
We are experiencing lots of stability issues with SDD, SDD-PCM and SDD-DSM multipathing drivers. Are these getting the development attention they deserve?
IBM's direction is to shift toward native OS-based multipathing drivers.
Is anyone actually thinking of deploying public cloud storage in the near-term?
A few hands in the audience were raised.
None of the IBM storage devices seem to have [REST API]. Cloud storage providers are demanding this. What are IBM plans?
IBM plans to offer REST on SONAS. IBM uses SONAS internally for its own cloud storage offerings.
If you ask a DB2 specialist, an AIX specialist, and a System Storage specialist, on how to configure System p and System Storage for optimal performance, you get three different answers. Are there any IBMers who are cross-functional that can help?
Yes, for example, Earl Jew is an IBM Field Technical Support Specialist (FTSS) for both System p and Storage, and can help you with that.
Both Oracle and Microsoft recommend RAID-10 for their applications.
Don't listen to them. Feel free to use RAID-5, RAID-6 or RAID-X instead.
Resizing SVC source volumes forces ongoing FlashCopy or Metro Mirror relationships to be stopped. Does IBM plan to address this?
Currently, you have to stop, resize both source and target, then start the relationship again. Consider getting IBM Tivoli Storage Productivity Center for Replication (TPC-R).
IBM continues to support this for existing clients. For new deployments, IBM offers SONAS and the Information Archive (IA).
When will I be able to move SVC volumes between I/O groups?
You can today, but it is disruptive to the operating system. IBM is investigating making this less disruptive.
Will XIV ever support the mainframe?
It does already, with support for both Linux and z/VM today. For VSE support, use SVC with XIV. For those with the new zBX extension, XIV storage can be used with all of the POWER and x86-based operating systems supported. IBM has no plans to offer direct FICON attachment for z/OS or z/TPF.
Not a question - Kudos to the TSM and ProtecTIER team in supporting native IP-based replication!
When will IBM offer POWER-based models of the XIV, SVC and other storage devices?
IBM's decision to use industry-standard x86 technology has proven quite successful. However, IBM revisits this decision every few years, and the last review again determined that a change was not worth making. A POWER-based model might not beat the price/performance of the current x86 models, and maintaining two separate code bases would hinder development of new innovations.
We have both System i and System z, what is IBM doing to address the fact that PowerHA and GDPS are different?
IBM TPC-R has a service offering extension to support "IBM i" environments. GDPS plans to support multi-platform environments as well.
This was a great interactive session. I am glad everyone stayed late Thursday evening to participate in this discussion.
Well, it's Tuesday again, and you know what that means... IBM announcements!
Last week, IBM had a big storage launch of various products, with the June 4 announcements at the IBM Edge 2012 conference. I provided highlights in my post [IBM Edge Announcements]. As promised, here are the rest of the announcements.
SONAS v1.3.2 adds support for management by the newly announced IBM Tivoli Storage Productivity Center v5.1 release. Also, IBM now officially supports "Gateway configurations" that have the storage nodes connected to XIV or Storwize V7000 disk systems. These gateway configurations offer new flexible choices and options for our ever-expanding set of clients.
ProtecTIER appliances and gateways
The IBM ProtecTIER line of data deduplication appliances and gateways adds CIFS file system support. Rather than using OST or a VTL interface, you now have CIFS as a new option for host attachment. Also, IBM introduces the new TS7620 Express model, with options for 5.4TB and 11TB of capacity, replacing the previous entry-level TS7610.
LTFS Storage Manager
The Linear Tape File System (LTFS) stores files on tape cartridges in a manner that allows a cartridge to be mounted as a file system, much like a USB memory stick. The new LTFS Storage Manager software lets you manage a collection of files across a set of cartridges: moving files from one cartridge to another, consolidating valid data onto fewer cartridges, and removing files no longer needed. This is sometimes referred to as "lifecycle management".
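Because LTFS presents the cartridge as an ordinary file system, standard file APIs work against the mount point. Here is a minimal Python sketch, assuming the LTFS driver has already mounted a cartridge; the /ltfs mount point is a hypothetical example.

```python
import shutil
from pathlib import Path

# Assumes the LTFS driver has already mounted a cartridge at /ltfs
# (a hypothetical mount point). Files on tape then behave much like
# files on a USB memory stick.
tape = Path("/ltfs")

# Copy a file onto the cartridge with ordinary file operations.
(tape / "archive").mkdir(exist_ok=True)
shutil.copy("/home/projects/report.pdf", tape / "archive" / "report.pdf")

# List the cartridge contents, just like any other directory tree.
for entry in tape.rglob("*"):
    size = entry.stat().st_size if entry.is_file() else "<dir>"
    print(entry, size)
```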
Tape System Library Manager
When IBM first introduced the "shuttle" that allowed up to fifteen TS3500 tape libraries to be connected together into a single system, only HPSS customers could take advantage of this. Software was required to coordinate the movement of cartridges from one library to another. The new IBM Tape System Library Manager now offers an alternative to HPSS for coordinating this activity.
DS8000 v6.3 microcode
IBM now offers 400GB solid-state drives. IBM's market-leading support for Full Disk Encryption (FDE) is now extended to cover all drive speeds, from the slowest 7200RPM NL-SAS drives up to the fastest solid-state drives. IBM Easy Tier extends its super-easy implementation to work across all three of these tiers, including configurations with encrypted drives.
IBM now offers implementation services for IBM XIV Gen3 storage system, and the N series models 3220 and 3240.
This week I am on the road visiting various clients. Next week, Moscow, Russia for the "Edge Comes to You" event!
Continuing my post-week coverage of the [Data Center 2010 conference], Wednesday afternoon included a mix of sessions that covered storage and servers.
Enabling 5x Storage Efficiency
Steve Kenniston, who joined IBM through the recent acquisition of Storwize Inc., presented IBM's new Real-Time Compression appliance. There are two appliances: one handles 1GbE networks, and the other supports mixed 1GbE/10GbE connectivity. Files are compressed in real time with no impact to performance; in some cases, performance can even improve because less data is written to the back-end NAS devices. The appliance is not limited to IBM's N series and NetApp, but is vendor-agnostic, and IBM is qualifying the solution with other NAS devices in the market. Compression of up to 80 percent provides 5x storage efficiency.
Townhall - Storage
The townhall was a Q&A session to ask the analysts their thoughts on Storage. Here I will present the answer from the analyst, and then my own commentary.
Are there any gotchas deploying Automated Storage Tiering?
Analyst: you need to fully understand your workload before investing any money into expensive Solid-State Drives (SSD).
Commentary: IBM offers Easy Tier for the IBM DS8000, SAN Volume Controller, and Storwize V7000 disk systems. Before you buy any SSD, these systems can measure workload activity, and IBM's Storage Tier Advisory Tool (STAT) can help identify how much SSD will benefit each workload. If you don't have these specific storage devices, IBM Tivoli Storage Productivity Center for Disk can help measure disk performance to determine if SSD is cost-justified.
Wouldn't it be simpler to just have separate storage arrays for different performance levels?
Analyst: No, because that would complicate BC/DR planning, as many storage devices do not coordinate consistency group processing from one array to another.
Commentary: IBM DS8000, SAN Volume Controller and Storwize V7000 disk systems support consistency groups across storage arrays, for those customers that want to take advantage of lower cost disk tiers on separate lower cost storage devices.
Can storage virtualization play a role in private cloud deployments?
Analyst: Yes, by definition, but today's storage virtualization products don't work with public cloud storage providers. None of the major public cloud providers use storage virtualization.
Commentary: IBM uses storage virtualization for its public cloud offerings, but the question was about private cloud deployments. The IBM CloudBurst integrated private cloud stack supports the IBM SAN Volume Controller, which makes it easy for storage to be provisioned in the self-service catalog.
Can you suggest one thing we can do Monday when we get back to the office?
Analyst: Create a team to develop a storage strategy and plan, based on input from your end-users.
Commentary: Put IBM on your short list for your next disk, tape or storage software purchase decision. Visit [ibm.com/storage] to re-discover all of IBM's storage offerings.
What is the future of Fibre Channel?
Analyst 1: Fibre Channel is still growing, will go from 8Gbps to 16Gbps, the transition to Ethernet is slow, so FC will remain the dominant protocol through year 2014.
Analyst 2: Fibre Channel will still be around, but NAS, iSCSI and FCoE are all growing at a faster pace. Fibre Channel will only be dominant in the largest of data centers.
Commentary: Ask a vague question, get a vague answer. Fibre Channel will still be around for the next five years.
However, SAN administrators might want to investigate Ethernet-based approaches like NAS, iSCSI and FCoE where appropriate, and start beefing up their Ethernet skills.
Will Linux become the Next UNIX?
Linux in your datacenter is inevitable. In the past, Linux was limited to x86 architectures, while UNIX operating systems ran on specialized CPU architectures: IBM AIX on POWER7, Solaris on SPARC, HP-UX on PA-RISC and Itanium, and IBM z/OS on System z, to name a few. But today, Linux runs on many of these other CPU chipsets as well.
Two common workloads, Web/App serving and DBMS, are shifting from UNIX to Linux. Linux Reliability, Availability and Serviceability (RAS) is approaching the levels of UNIX. Linux has been a mixed blessing for UNIX vendors, with x86 server margins shrinking, but the high-margin UNIX market has shrunk 25 percent in the past three years.
UNIX vendors must make the "mainframe argument" that their flavor of UNIX is more resilient than any OS that runs on Intel or AMD x86 chipsets. In 2008, Sun Solaris was the #1 UNIX; today, it is IBM AIX, with 40 percent marketshare. Meanwhile, HP has focused on extending its Windows/x86 lead through a partnership with Microsoft.
The analyst asks "Are the three UNIX vendors in it for the long haul, or are they planning graceful exits?" The four options for each vendor are:
Milk it as it declines
Accelerate the decline by focusing elsewhere
Impede the market to protect margins
Re-energize UNIX base through added value
Here is the analyst's view on each UNIX vendor.
IBM AIX now owns 40 percent marketshare of the UNIX market. While the POWER7 chipset supports multiple operating systems, IBM has not been able to get an ecosystem to adopt Linux-on-POWER. The "Other" includes z/OS, IBM i, and other x86-based OS.
HP has multi-OS Itanium from Intel, but is moving to Multi-OS blades instead. Their "x86 plus HP-UX" strategy is a two-pronged attack against IBM AIX and z/OS. Intel Nehalem chipset is approaching the RAS of Itanium, making the "mainframe argument" more difficult for HP-UX.
Before Oracle acquired Sun Microsystems, Oracle was focused on Linux as a UNIX replacement. After the acquisition, they now claim to support Linux and Solaris equally. They are now focused on trying to protect their rapidly declining install base by keeping IBM and HP out. They will work hard to differentiate Solaris as having "secret sauce" that is not in Linux. They will continue to compete head-on against Red Hat Linux.
An interactive poll of the audience indicated that the most strategic Linux/UNIX platform over the next five years was Red Hat Linux. This beat out AIX, Solaris and HP-UX, as well as all of the other distributions of Linux.
The rooms emptied quickly after the last session, as everyone wanted to get to the "Hospitality Suites".
Continuing my coverage of the 30th annual [Data Center Conference]. Here is a recap of the Monday afternoon sessions:
IBM Watson and your Data Center
Steve Sams, IBM VP of Site and Facilities Services, cleverly used IBM Watson as a way to explain how analytics can be used to help manage your data center. Sadly, most of the people at my table missed the connection between IBM Watson and analytics. How does answering a single trivia question in under three seconds relate to the ongoing operations of a data center? If you were similarly confused, take a peek at my series of IBM Watson blog posts.
Cloud Computing
The analyst who presented this topic was probably the fastest-speaking Texan I have met. He covered various aspects of Cloud Computing that people need to consider. Why hasn't Cloud taken off sooner? The analyst feels that Cloud Computing wasn't ready for us, and we weren't ready for Cloud Computing. The fundamentals of Cloud Computing have not changed, but we as a society have. Now that many end users are comfortable consuming public cloud resources, from Facebook to Twitter to Gmail, they are beginning to ask for something similar from their corporate IT. He highlighted several considerations:
Legal issues - see this hour-long video, [Cloud Law & Order], which discusses legal issues related to Cloud Computing.
Employee staffing - IT employees need to be re-tooled and re-trained to start thinking of IT as an internal service provider.
Hybrid Cloud - rather than struggle choosing between private and public cloud methodologies, consider a combination of both.
University of Rochester Medical Center (URMC) Cracks Code on Data Growth
Oftentimes, the hour is split: 30 minutes of the sponsor talking about various products, followed by 30 minutes of the client giving a user experience. Instead, I decided to let the client speak for 45 minutes, and then I moderated the Q&A for the remaining 15 minutes. This revised format seemed to be well-received!
University of Rochester is in New York, about 60 miles east of Buffalo, and 90 miles from Toronto across Lake Ontario. Six years ago, Rick Haverty joined URMC as the Director of Infrastructure services, managing 130 of the 300 IT personnel at the Medical Center. I met Rick back in May, when he presented at the IBM [Storage Innovation Executive Summit] in New York City.
URMC has DS8000, DS5000, XIV, SONAS, Storwize V7000 and is in the process of deploying Storwize V7000 Unified. He presented how he has used these for continuous operations and high availability, while controlling storage growth and costs.
The Q&A was lively, focusing on how his team manages 1PB of disk storage with just four storage administrators, his choice of a "Vendor Neutral Archive" (VNA), and his experiences with integration.
This was a great afternoon, and I was glad to get all my speaking gigs done early in the week. I would like to thank Rick Haverty of URMC for doing a great job presenting this afternoon!
This week I am in Orlando, Florida for the IBM Edge conference. Thursday evening after all the other sessions, we had a Free-for-All, a Q&A panel across all storage topics, moderated by Scott Drummond. The conference officially ends at noon tomorrow, but for many, this is the last session, as people fly out Friday morning. Here are the questions and the panel responses during the session.
When will IBM unify their storage management between Mainframe z/OS and the distributed systems platforms?
IBM offers a Change and Configuration Management Database (CCMDB) for this purpose, with appropriate collectors for z/OS and distributed systems, but it has not sold well.
When will IBM devices have RESTful interfaces?
Both IBM Systems Director and IBM Tivoli Storage Productivity Center (TPC) offer RESTful APIs. IBM Systems Director can manage z/VM and Linux on System z, as well as Power Systems and x86-based distributed systems. Back in October 2008, IBM's Project Zero introduced RESTful interfaces for PHP and Groovy software running in WebSphere sMash environments, but we have not heard much about it since.
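As a rough illustration of what a RESTful storage management call looks like, here is a sketch in Python using the requests library. The host, port, path, credentials, and response fields are all illustrative assumptions, not documented TPC endpoints.

```python
import requests

# Hypothetical example of querying a storage management REST API.
# The host, port, path, and response fields are illustrative
# assumptions, not documented TPC endpoints.
BASE_URL = "https://tpc.example.com:9569/srm/REST/api/v1"

response = requests.get(
    BASE_URL + "/StorageSystems",
    auth=("admin", "password"),  # placeholder credentials
    verify=False,                # lab setting only; verify certificates in production
)
response.raise_for_status()

# Print a summary line for each storage system returned.
for system in response.json():
    print(system.get("Name"), "-", system.get("Type"))
```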
Will IBM TPC support NPIV on Power Systems?
TPC 5.1 has toleration support for this, showing the first port connection discovered, but not all connections, and we expect to retrofit this toleration to TPC 4.2.2 Fixpack 2. Hopefully, we will have full support in a future release.
We would like TPC for Replication to run on Linux for System z. We do not run z/OS at the disaster recovery site location.
Submit an IBM Request for Enhancement [RFE] for this. We have TPC for Replication on z/OS, as well as the distributed systems version that runs on Windows, Linux and AIX.
We have enhancements we would like to see for XIV and SONAS also, can we use the RFE process for this also?
Yes, submit the requirements for our review.
We heard the Statement of Direction that there would be storage integrated into the PureSystems. What exactly does that mean?
The PureSystems family of expert-integrated systems is based on a new chassis that has a front part, a midplane, and a back part. All IBM System Storage products that support x86 and Power Systems can work with PureSystems. However, IBM does not yet offer storage that fits in the front part of the PureFlex chassis; the Statement of Direction indicates that we intend to offer that option. Until then, the IBM Storwize V7000 is the storage of choice, as it can be put into the PureSystems rack, but outside the individual chassis.
We see some features like Real-Time Compression being put into the SAN Volume Controller (SVC), and other features put into the back-end devices. How are we supposed to make sense of this?
IBM's new pilot program, the SmartCloud Virtual Storage Center, brings these all together. In general, we have design teams of system architects that determine which features go in which products, and prioritize accordingly.
We heard the IBM Executives during the opening session indicate that IBM's strategy involves supporting Big Data, but I haven't seen any storage that supports native Hadoop interfaces. Did I miss something?
First, I want to emphasize that Big Data is more than just MapReduce workloads. IBM offers Streams and BigInsights software to handle text, as well as Business Intelligence and Data Warehouse solutions for structured data. IBM's General Parallel File System (GPFS) has a Shared-Nothing-Cluster (SNC) mode with Hadoop interfaces that runs twice as fast as Hadoop's native HDFS file system. The storage products we recommend for Big Data are the SONAS and the DCS3700 disk systems, as both are optimized for the sequential workloads Big Data represents.
Every time we upgrade our SVC, we review the SDDPCM multi-pathing support list and see that we need to bring our back-end DS8000 microcode up to recommended levels. Can we get a list of combinations that work from other customers?
The advantage of storage hypervisors like SVC is that we can separate the multi-pathing driver from the back-end managed disk systems. You only need the SDDPCM to support the SVC, not the back-end devices. For the most part, SVC has not dropped support for any level of previously supported OS or multi-pathing software.
On SVC, when we migrate volumes (vDisks) from one storage pool to another, we would like to throttle this process during FlashCopy.
Yes, we have had several requests like this, which is why we now recommend using Volume Mirroring to perform migrations. In fact, the GUI wizard uses Volume Mirroring by default when migrations are performed. As for throttling, IBM has implemented the "I/O Priority Manager" that offers Quality of Service classes for the DS8000 and XIV Gen3, and might consider porting this to other products in our portfolio.
Sizing systems is an art. I just need to know if the DS8000 is running hot. Can we have the equivalent of "red lines" for our disk systems similar to automobile engines?
Storage Optimizer was added to TPC 4.2 to help in this area, identifying heat-maps for IBM DS8000, DS6000, DS5000, DS4000, SVC and Storwize V7000. We recommend you look at the performance violation reports.
How can we evaluate the characteristics of our workloads?
Yes, TPC can do this.
When we are replacing non-IBM storage with IBM, we don't have good tools to evaluate the non-IBM equipment. What is IBM doing for this?
IBM's Disk Magic modeling tool can take inputs from a variety of sources, including iostat from the servers themselves. You can also install a 90-day trial of TPC to help with this.
We really like EMC's "Grab" program, does IBM have one also?
Updating the Host Attachment Kit (HAK) for AIX is quite painful for the SVC. We prefer the method employed for the XIV.
Thanks for the feedback.
For SVC, we need to correlate disk with VMware and VIOS. Can we get vSCSI information on VIOS?
TPC 5.1 has this support, and we believe it has been retrofitted to TPC 4.2.2 Fixpack 2, coming out this month.
Currently, with SVC, when volumes are part of a Global Mirror (GM) session, we need to cancel GM, expand the source volume, expand the target volume, then restart GM. We would like this to be fully automated and non-disruptive.
Sounds like a great requirement to submit for the RFE process.
Can we get an RSS feed for the RFE community?
Yes, you can subscribe to it. You can also set up "Watch Lists".
Thanks to all of the IBM experts on the panel for their participation at this event!
Did IBM XIV force EMC's hand to announce VMAXe? Let's take a stroll down memory lane.
In 2008, IBM XIV showed the world that it could ship a Tier-1, high-end, enterprise-class system using commodity parts. Technically, prior to its acquisition by IBM, the XIV team had boxes out in production since 2005. EMC incorrectly argued this announcement meant the death of the IBM DS8000. Just because EMC was unable to figure out how to have more than one high-end disk product, doesn't mean IBM or other storage vendors were equally challenged. Both IBM XIV and DS8000 are Tier-1, high-end, enterprise-class storage systems, as are the IBM N series N7900 and the IBM Scale-Out Network Attached Storage (SONAS).
In April 2009, EMC followed IBM's lead with their own V-Max system, based on Symmetrix Engenuity code, but on commodity x86 processors. Nobody at EMC suggested that the V-Max meant the death of their other Symmetrix box, the DMX-4, which means that EMC proved to themselves that a storage vendor could offer multiple high-end disk systems. Hitachi Data Systems (HDS) would later offer the VSP, which also includes some commodity hardware as well.
In July 2009, analysts at International Technology Group published their TCO findings that IBM XIV was 63 percent less expensive than EMC V-Max, in a whitepaper titled [Cost/Benefit Case for IBM XIV Storage System: Comparing Costs for IBM XIV and EMC V-Max Systems]. Not surprisingly, EMC cried foul, arguing that since the V-Max had barely gotten started in the field, it was too soon to compare newly minted EMC gear with a mature product like XIV that had been in production accounts for several years. Big companies like to wait for "Generation 1" of any new product to mature a bit before they purchase.
To compete against IBM XIV's very low TCO, EMC was forced to either deeply discount their Symmetrix, or counter-offer with lower-cost CLARiiON, their midrange disk offering. An ex-EMCer that now works for IBM on the XIV sales team put it in EMC terms -- "the IBM XIV provides a Symmetrix-like product at CLARiiON-like prices."
(Note: Somewhere in 2010, EMC dropped the hyphen, changing the name from V-Max to VMAX. I didn't see this formally announced anywhere, but it seems that the new spelling is the officially correct usage. A common marketing rule is that you should only rename failed products, so perhaps dropping the hyphen was EMC's way of preventing people from searching older reviews of the V-Max product.)
This month, IBM introduced the IBM XIV Gen3 model 114. The analysts at ITG updated their analysis, as there are now more customers that have either or both products, allowing a more thorough comparison. Their latest whitepaper, titled [Cost/Benefit Case for IBM XIV Systems: Comparing Cost Structures for IBM XIV and EMC VMAX Systems], shows that IBM maintains its substantial cost savings advantage, representing 69 percent less Total Cost of Ownership (TCO) than EMC, on average, over the course of three years.
In response, EMC announced its new VMAXe, following the naming convention EMC established for VNX and VNXe. Customers cannot upgrade VNXe to VNX, nor VMAXe to VMAX, so at least EMC was consistent in that regard. Like the IBM XIV and XIV Gen3, the new EMC VMAXe eliminated "unnecessary distractions" like CKD volumes and FICON attachment needed for the IBM z/OS operating system on IBM System z mainframes. Fellow blogger Barry Burke from EMC explains everything about the VMAXe in his blog post [a big thing in a small package].
So, you have to wonder, did IBM XIV force EMC's hand into offering this new VMAXe storage unit? Surely, EMC sales reps will continue to lead with the more profitable DMX-4 or VMAX, and then only offer the VMAXe when the prospective customer mentions that the IBM XIV Gen3 is 69 percent less expensive. I haven't seen any list or street prices for the VMAXe yet, but I suspect it is less expensive than VMAX, on a dollar-per-GB basis, so that EMC will not have to discount it as much to compete against IBM.
Now an avid reader of my blog has brought something to my attention. Apparently, EMC has been showing customers a presentation, [Accelerating Storage Transformation with VMAX and VPLEX], with false and misleading claims comparing the performance of the IBM DS8000, HDS VSP, and EMC VMAX 40K disk systems.
(FTC Disclosure: This would be a good time to remind my readers that I work for IBM and own IBM stock. I do not endorse any of the EMC or HDS products mentioned in this post, and have no financial affiliation or investments directly with either EMC nor HDS. I am basing my information solely on the presentation posted on the internet and other sources publicly available, and not on any misrepresentations from EMC speakers at the various conferences where these charts might have been shown.)
The problem with misinformation is that it is not always obvious. The EMC presentation is quite pretty and professional-looking. It is the typical slick, attention-getting, low-content, over-simplified marketing puffery you have come to expect from EMC. There are two slides in particular that I have issue with.
This first graphic implies that IBM and HDS are nearly tied in performance, but that EMC VMAX 40K has nearly triple that bandwidth. Overall the slide has very little detail. That makes it difficult to determine what exactly is being claimed and whether a fair comparison is being made.
The title claims that VMAX 40K is "#1 in High Bandwidth Apps". Only three disk systems are shown so the claim appears to be relative to only the three systems. The wording "High Bandwidth Apps" is confusing considering the cited numbers are for disk systems and no application is identified. By comparison, IBM SONAS can drive up to 105 GB/sec sequential bandwidth, nearly double what EMC claims for its VMAX 40K, so EMC is certainly not even close to #1.
Is the workload random or sequential? That is not easy to determine. The use of "GB/s" along with the large block size of 128KB implies the I/O workload is sequential, which is great for some workloads like high performance computing, technical computing and video broadcasts. Random workloads, on the other hand, are usually measured in I/Os per second (IOPS) with a block size ranging 4KB to 64KB. (I am assuming the 128K blocks refers to 128KB block size, and not reading the same block of cache 128,000 times.)
The slide states "Maximum Sustainable RRH Bandwidth 128K Blocks". The acronym "RRH" is not defined; but I suspect this refers to "random read hits". For random workloads, 100 percent random read hits from cache represents one corner of the infamous "four corners" test. Real-world workloads have a mix of reads and writes, and a mix of cache hits and cache misses. It is also unclear whether the hits are from standard data cache or from internal buffers in adapters (perhaps accessing the same blocks repeatedly) or something else. So is this really for a random workload, or a sequential workload?
(The term "Hitachi Math" was coined by an EMC blogger precisely to slam Hitachi Data Systems for their blatant use of four-corners results, claiming that spouting ridiculously large, but equally unrealistic, 100 percent random read hit results don't provide any useful information. I agree. There are much better industry-standard benchmarks available, such as SPC-1 for random workloads, SPC-2 for sequential workloads, and even benchmarks for specific applications, that represent real-world IT environments. To shame HDS for their use of four-corners results, only for EMC themselves to use similar figures in their own presentation is truly hypocritical of them!)
The IBM system is identified as "DS8000". DS8000 is a generic family name that applies to multiple generations of systems first introduced in 2004. The specific model is not identified, but that is critical information. Is this a first generation DS8100, or the latest DS8800, or something in between?
The slide says "Full System Configs", but that is not defined and configuration details are not identified. Configuration details, also critical information in assessing system performance capabilities, are not specified. If the EMC box costs seven times more than IBM or HDS, would you really buy it to get 3x more performance? Is the EMC packed with the maximum amount of SSD? Were there any SSD in the IBM or HDS boxes to match?
The source of the claimed IBM DS8000 performance numbers is not identified. Did they run their own tests? While I cannot tell, the VMAX may have been configured with 64 Fibre Channel 8Gbps host connections. In that case each channel is theoretically capable of supporting about 800 MB/s at 100% channel utilization. Multiplying 64 x 800MB/s = 51.2GB/s, so did EMC just do the performance comparison on the back of a napkin, assuming there are no other bottlenecks in the system? Even then, I would not round up 51.2 to 52!
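For readers who want to check that arithmetic themselves, here is the back-of-napkin math in a few lines of Python, along with what such a bandwidth figure would mean in IOPS terms at a 128KB block size. The port count and per-port throughput are my assumptions from above, not published EMC configuration details.

```python
# Back-of-napkin check of the claimed bandwidth figure.
ports = 64            # assumed number of 8 Gbps FC host connections
per_port_mb_s = 800   # rough payload ceiling of one 8 Gbps port, MB/s

peak_gb_s = ports * per_port_mb_s / 1000
print(f"Theoretical ceiling: {peak_gb_s:.1f} GB/s")  # 51.2 GB/s, not 52

# The same bandwidth expressed as IOPS at a 128KB block size.
block_kb = 128
iops = peak_gb_s * 1_000_000 / block_kb
print(f"Equivalent IOPS at 128KB blocks: {iops:,.0f}")  # 400,000
```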
Response times were not identified. For random I/Os, response time is a very important metric. It is possible that the Symmetrix was operating with some resources at 100% utilization to get the highest GB/s result, but that would likely make I/O response times unacceptable for real-world random I/O workloads.
IBM and HDS have both published Storage Performance Council [SPC] industry-standard performance benchmarks. EMC has not published any SPC benchmarks for VMAX systems. If EMC is interested in providing customers with audited, detailed performance information along with detailed configuration information, all based on benchmarks designed to represent real-world workloads, EMC can always publish SPC benchmark results as IBM and other vendors have done. In past blog fights, EMC resorts to the excuse that SPC isn't perfect, but can they really argue that vague and unrealistic claims cited in its presentation are better?
The second graphic is so absurd, you would think it came directly from Larry Ellison at an Oracle OpenWorld keynote session. EMC is comparing a VMAX 40K paired with an EMC VFCache host-side flash memory cache card against IBM and HDS disk systems configured without any host-side flash memory cache. The comparison is clearly apples-to-oranges. Other disk system configuration details are also omitted.
FAST VP is EMC's name for its sub-volume drive tiering feature, comparable to IBM Easy Tier and Hitachi's Dynamic Tiering. The graph implies that IBM and HDS can only achieve a modest incremental improvement from their sub-volume tiering. I beg to differ. I have seen various cases where a small amount of SSD on the IBM DS8000 series can drastically improve performance, by 200 to 400 percent.
The "DBClassify" shown on the graph is a tool run as part of an EMC professional services offering called Database Performance Tiering Assessment, makes recommendations for storing various database objects on different drive tiers based on object usage and importance. Do you really need to pay for professional services? With IBM Easy Tier, you just turn it on, and it works. No analysis required, no tools, no professional services, and no additional charge!
VFCache is an optional product from EMC that currently has no integration whatsoever with VMAX. A fair comparison would have included a host-side flash memory cache (from any vendor) when the IBM or HDS storage system was configured. Or leave it out altogether and just focus on the sub-volume tiering comparison.
Keep in mind that EMC's VFCache supports only selected x86-based hosts. IBM has published a [Statement of Direction] indicating that it will offer host-side flash memory cache for Power Systems running AIX and Linux, integrated with DS8000 Easy Tier.
I feel EMC's claims about IBM DS8000 performance are vague and misleading. EMC appears to lack the kind of technical marketing integrity that IBM strives to attain.
Since EMC is not able or willing to publish fair and meaningful performance comparisons, it is up to me to set the record straight and point out EMC's failings in this matter.
Reminder: It's not too late to register for my webcast "Solving the Storage Capacity Crisis" on Tuesday, September 25. See my blog post [Upcoming events in September] to register!
For the past three decades, IBM has offered security solutions to protect against unauthorized access. Let's take a look at three different approaches available today for the encryption of data.
Approach 1: Server-based
Server-based encryption has been around for a while. It can be implemented in the operating system itself, such as z/OS on the System z mainframe platform, or in an application, such as IBM Tivoli Storage Manager for backup and archive.
While this approach has the advantage that you can selectively encrypt individual files, data sets, or columns in databases, it has several drawbacks. First, you consume server resources to perform the encryption. Second, as I mention in the video above, if you only encrypt selected data, the data you forget to encrypt, or choose not to, may result in exposure. Third, you have to manage your encryption keys on a server-by-server basis. Fourth, you need encryption capability in the operating system or application. And fifth, encrypting the data first undermines any storage or network compression capability down the line.
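That fifth drawback is easy to demonstrate: well-encrypted output is statistically indistinguishable from random data, so a downstream compressor can do nothing with it. A quick sketch using only the Python standard library, with random bytes standing in for ciphertext:

```python
import os
import zlib

# Repetitive plaintext, typical of logs and database pages.
plaintext = b"customer_record: name=SMITH, balance=1000;\n" * 1000

# Random bytes stand in for ciphertext: good encryption output is
# statistically indistinguishable from random data.
ciphertext_like = os.urandom(len(plaintext))

print("plaintext  compresses to", len(zlib.compress(plaintext)), "bytes")
print("ciphertext compresses to", len(zlib.compress(ciphertext_like)), "bytes")
# The plaintext shrinks dramatically; the random stream does not shrink
# at all, which is why you should compress before you encrypt.
```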
Approach 2: Network-based
Network-based solutions perform the encryption between the server and the storage device. Last year, when I was in Auckland, New Zealand, I covered the IBM SAN32B-E4 switch in my presentation [Understanding IBM's Storage Encryption Options]. This switch receives data from the server, encrypts it, and sends it on down to the storage device.
This has several advantages over the server-based approach. First, the encryption work is offloaded from the server to the switch. Second, you can encrypt all the files on a volume. You select which volumes get encrypted, so there is still the risk that you encrypt some volumes and not others, and accidentally expose your data. Third, the SAN32B-E4 can centralize encryption key management with IBM Tivoli Key Lifecycle Manager (TKLM). This approach is also operating system and application agnostic. However, network-based encryption has the same problem of undermining any storage device compression capability, and often has a limit on the amount of data bandwidth it can process. The SAN32B-E4 can handle 48 GB/sec, with a turbo-mode option to double this to 96 GB/sec.
Approach 3: Device-based
Device-based solutions perform the encryption at the storage device itself. Back in 2006, IBM was the first to introduce this method on its [TS1120 tape drive]. Later, it was offered on Linear Tape Open (LTO-4) drives. IBM was also first to introduce Full Disk Encryption (FDE) on its IBM System Storage DS8000. See my blog post [1Q09 Disk Announcements] for details.
As with the network-based approach, the device-based method offloads server resources, allows you to encrypt all the files on each volume, centrally manages all of your keys with TKLM, and is agnostic to the operating system and application used. The device can compress the data first, then encrypt, resulting in fewer tape cartridges or less disk capacity consumed. IBM's device-based approach also scales nicely: an encryption chip is placed in each tape drive or disk drive, so no matter how many drives you have, you will have all the encryption horsepower you need.
Not all device-based solutions use an encryption chip per drive. Some of our competitors encrypt in the controller instead, which operates much like the network-based approach. As more and more disk drives are added to such a storage system, the controller may become overwhelmed performing the encryption.
The need for security grows every year. Enterprise systems are security-ready to protect your most mission-critical application data.
Well, it's Tuesday again, and you know what that means! IBM Announcements!
Today, IBM announced its latest IBM Tivoli Key Lifecycle Manager (TKLM) 2.0 version. Here's a quick recap:
Centralized Key Management
TKLM centralizes and simplifies encryption key management, covering the full lifecycle of creation, storage, rotation, and protection of encryption keys, and serving keys through industry standards. TKLM can manage the encryption keys for LTO-4, LTO-5, TS1120 and TS1130 tape drives enabled for encryption, as well as DS8000 and DS5000 disk systems using Full Disk Encryption (FDE) disk drives.
Partitioning of Access Control for Multitenancy
Access control and partitioning of the key-serving functions, including end-to-end authentication of encryption clients and secure exchange of encryption keys, such that groups of devices have different sets of encryption keys with different administrators. This enables [multitenancy], or multilayer security of a shared infrastructure, using encryption as an enforcement mechanism for access control. As Information Technology shifts from on-premises to the cloud, multitenancy will become increasingly important.
Support for KMIP 1.0 Standard
Support for the new key management standard, Key Management Interoperability Protocol (KMIP), released through the Organization for the Advancement of Structured Information Standards [OASIS]. This new standard enables encryption key management for a wide variety of devices and endpoints. See the [22-page KMIP whitepaper] for more information.
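To give a flavor of what KMIP interoperability means in practice, here is a sketch using the open-source PyKMIP library to request an AES key from a KMIP-compliant key server. The hostname, port, and certificate paths are placeholders, and I am not claiming this exact client works against TKLM out of the box; it simply shows the kind of vendor-neutral client the standard enables.

```python
from kmip.pie.client import ProxyKmipClient
from kmip.core import enums

# Connect to a KMIP-compliant key manager. Hostname, port, and
# certificate paths are placeholders for your own environment.
client = ProxyKmipClient(
    hostname="keymanager.example.com",
    port=5696,
    cert="/etc/pki/tls/client-cert.pem",
    key="/etc/pki/tls/client-key.pem",
    ca="/etc/pki/tls/ca-cert.pem",
)

with client:
    # Ask the server to create and store a 256-bit AES key, then
    # fetch it back by its unique identifier.
    uid = client.create(enums.CryptographicAlgorithm.AES, 256)
    key = client.get(uid)
    print("Created key", uid, "of", len(key.value) * 8, "bits")
```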
As much as I like to poke fun at Oracle, with hundreds of their Sun/StorageTek clients switching over to IBM tape solutions every quarter, I have to give them kudos for working cooperatively with IBM to come up with this KMIP standard that we can both support.
Support for non-IBM devices from Emulex, Brocade and LSI
Support for IBM self-encrypting storage offerings, as well as IT components from suppliers that support KMIP, including a number of non-IBM devices announced by business partners such as Emulex, Brocade, and LSI. KMIP support permits you to deploy Tivoli Key Lifecycle Manager without having to worry about being locked into a proprietary key management solution. If you are a client with multiple "Encryption Key Management" software packages, now is a good time to consolidate onto IBM TKLM.
Role-Based Access Control for Administrators
Role-based access control allows multiple administrators with different roles and permissions to be defined, helping increase the security of sensitive key management operations and providing better separation of duties. For example, that new-hire college kid might get a read-only authorization level, so that he can generate reports and pack the right tapes into cardboard boxes. Meanwhile, the storage admin who has been running tape operations for the past ten years might get full access. The advantage of role-based authorization is that in large organizations, you can assign people to their appropriate roles, and you can designate primary and secondary roles in case one person has to provide backup while the other is out of town.