This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson )
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years at IBM, Tony is a frequent traveler, speaking to clients at events throughout the world.
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. Lloyd has held numerous senior technical roles during his 19-plus years at IBM. Most recently, he has been leading efforts across the Communication/CSI market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, Lloyd supported industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist during his time in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private, and he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this high-value solution, part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter @ldean0558 and on LinkedIn as Lloyd Dean.
Tony Pearson's books are available on Lulu.com! Order your copies today!
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
I have arrived safely in San Francisco, and was able to check in at the hotel, pick up my registration badge for Oracle OpenWorld 2011, and attend the first keynote session. This is the largest Oracle OpenWorld event to date, with over 45,000 attendees from 117 different countries. There are 520,000 square feet of exhibition floor and over 2,400 educational sessions. The conference is spread across the different buildings of the Moscone Center, as well as nearby hotels. On average, attendees will walk seven miles during the week.
Larry Ellison was the keynote speaker for this first kick-off session. He focused almost exclusively on server and storage hardware. He feels that business is all about moving data, not doing integer math.
At the beginning of 2011, Oracle had sold only about 1,000 Exadata systems, but they have a sales target to sell an additional 3,000 Exadata boxes by year end.
The Exadata offers up to 10x columnar compression, and has 10x faster bandwidth (40 Gbps InfiniBand versus 4 Gbps FCP). If you have a 100TB database, it would take up only 10TB of disk with this approach. He claims that the 90TB of disk you don't have to buy can then be spent on more DRAM and/or Flash SSD.
(Realistically, since SSD is 15x more expensive than spinning disk, you can only purchase about 6TB of Flash for the 90TB you save on disk!)
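The arithmetic behind that parenthetical can be checked in a few lines. The 15x price multiple is the post's own rough figure, not a quoted market price:

```python
# Worked arithmetic for the compression/flash claim above.
# The 15x SSD-vs-disk price ratio is the post's rough assumption.
db_size_tb = 100
compression_ratio = 10                          # 10x columnar compression claimed
compressed_tb = db_size_tb / compression_ratio  # 10 TB actually stored
disk_saved_tb = db_size_tb - compressed_tb      # 90 TB of disk not purchased

ssd_price_multiple = 15
# Spending the saved disk budget on flash buys only saved/15 TB of SSD:
flash_affordable_tb = disk_saved_tb / ssd_price_multiple

print(compressed_tb)        # 10.0
print(flash_affordable_tb)  # 6.0
```

So the 90TB of disk savings translates into roughly 6TB of flash, matching the parenthetical above.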
Larry claims the design point for Exadata and Exalogic was to offer a system more powerful than IBM's fastest P795 computer, but cheaper than commodity x86 hardware. His secret is to "Parallel everything" for faster performance, with no single points of failure (SPOF). Exadata offers up to 10-50x faster query, and 4-10x faster OLTP. To keep costs low, Exadata uses all commodity hardware except the InfiniBand. He cited various customer examples:
A company replaced 36 Teradata systems with 3 Exadata systems; the application ran 8x faster.
Banco de Chile: 9x faster than its previous system
Deutsche Post: 60x faster
Sogeti: 60x faster backups
French bank BNP Paribas: 17x faster, with no change to applications
Procter & Gamble: 18x faster
Merck: 5x faster
Turkcell: 250TB compressed to 25TB, 10x faster
The problem was that in each example, the comparison was against the customer's previous system, which varied and could have been an older Sun system, or an old system from HP, IBM or Dell. Perhaps it was a Freudian slip, but Larry mistakenly said to "Paralyze" your applications, when he probably meant "Parallelize".
Of Oracle's 380,000 customers, 70 percent have SPARC/Solaris and/or Linux. Last week, Oracle announced the new SPARC-T4, which Larry claimed was 5x faster than the previous SPARC-T3. Larry feels that for the first time ever, a non-IBM CPU can challenge the long-standing reign of the IBM POWER series processor. Larry admitted that the IBM POWER7 chip actually did some tasks faster than the SPARC-T4, so his work is not yet done, but they plan to offer a new SPARC-T5 next year that will be 2x better than the SPARC-T4.
Larry compared the I/O bandwidth of servers based on the SPARC-T4 to POWER7-based servers, and found that the SPARC-T4 has double the I/O bandwidth, at about one quarter the cost of a mainframe. IBM offers both: POWER7-based servers for CPU-intensive workloads, and System z (S/390)-based systems for I/O-intensive workloads. Larry feels that even though the POWER7 is superior to the SPARC-T4 for mathematical calculations, all business applications are focused on I/O bandwidth to move data, not computations.
Larry claims the new SPARC-T4 can do 1.2 million IOPS. He uses 40 Gbps InfiniBand instead of traditional SAN-attached FCP solutions.
A new "box" called Exalytics combines their commodity hardware platform with a heuristic adaptive in-memory cache, their latest "me-too" solution that compares with what IBM already offers in [IBM SolidDB]. In fact, their me-too is not even internally developed, but rather the result of acquiring a company called "TimesTen". I thought it was interesting that the only piece of Oracle software mentioned during Larry's 90-minute speech was this piece of acquired technology. The new Exalytics product runs in a small rack and can grow, analyzing relational data, non-relational OLAP, and unstructured documents. The result is what Larry called "the Speed of Light".
He also mentioned that Bob Shimp would kick off the Cloud discussion later in the week. Larry himself has gone from dismissing Cloud as a stupid, over-marketed term that nobody had deployed over the past few years, to a complete believer, claiming that over 20 live demos will be given this year on Cloud.
Perhaps the funniest quote was his motivation for using InfiniBand as the interconnect:
"Ethernet was invented by Xerox when I was a child."
-- Larry Ellison
Here are some sessions that IBM is featuring on Monday. Note the first two are Solution Spotlight sessions at the IBM Booth #1111 where I will be most of the time.
IBM Cloud Computing Solutions for Oracle
10/03/11, 10:30 a.m. – 11:00 a.m., Solution Spotlight, Booth #1111 Moscone South
Presenter: Chuck Calio,Technical Strategist, IBM Systems & Technology Group
IBM is recognized in the IT industry as one of the "Big 6" cloud providers, along with Amazon, Google, Microsoft, Salesforce and Yahoo. This session will highlight how IBM Cloud offerings apply to Oracle applications.
Lowering Cost and Increasing Efficiency in Your Long-Term Support of Oracle EPM and BI
10/03/11, 3:00 p.m. – 3:30 p.m., Solution Spotlight, Booth #1111 Moscone South
Presenter: Matthew Angelstad, IBM Global Business Solutions - Oracle EPM (Hyperion) Practice Lead
In 2007, Oracle acquired Hyperion, a leading provider of performance management software. This session will show how IBM helps Oracle clients unify Enterprise Performance Management (EPM) and Business Intelligence (BI) in a cost-effective manner, supporting a broad range of strategic, financial and operational management processes.
Application Strategy: Charting the Course for Maximum Business Value
10/03/11, 3:30 p.m. – 4:30 p.m., OpenWorld session #39061
Presenter: Mike Marchildon, IBM
The industry is undergoing a shift from single Enterprise Resource Planning (ERP) applications to second-generation platforms containing diverse yet interdependent systems. This shift presents opportunities and challenges for both IT and the business.
Next week, October 2-6, I am in San Francisco to support the IBM exhibition booth at the [Oracle OpenWorld 2011] conference. IBM is a Grand Level Sponsor for this event. IBM and Oracle have been partners since 1986, and IBM is a [Diamond Level Partner in the Oracle OpenNetwork], the highest level available. I will be joined by dozens of other subject matter experts from various parts of IBM. Here is my schedule:
5:30pm - 7:00pm
Keynote session, Moscone North, Hall D
7:15pm - 10:30pm
IBM Team Dinner
8:00am - 9:15am
Keynote session, Moscone North, Hall D
9:45am - 4:30pm
IBM Booth #1111, Moscone South
5:00pm - 7:00pm
JD Edwards Customer Appreciation Event
8:00am - 9:15am
Keynote session, Moscone North, Hall D
9:45am - 6:00pm
IBM Booth #1111, Moscone South
7:00pm - 9:30pm
Titan Award Gala, SF City Hall
8:00am - 9:15am
Keynote session, Moscone North, Hall D
9:45am - 4:00pm
IBM Booth #1111, Moscone South
I won't have my laptop at the IBM booth, so if you need to reach me, send me an SMS text message to my cell phone, or send me a tweet on my Twitter account: [@az990tony]
IBM will also have experts in the following areas throughout the week:
Intel: Booth #711 at Moscone South
Java One: Booth #5608 at the Hilton San Francisco, Continental Ballroom
JD Edwards Pavilion: Booth HSJ-002 at the Westin St. Francis Hotel
Netezza, a newly acquired IBM company: Booth #3723 at Moscone West
I arrive Sunday afternoon. If you arrive Sunday, here are some things IBM is featuring:
Network with Other Quest IBM Customers on PeopleSoft
10/02/11, 10:00 a.m. – 11:00 a.m., OpenWorld session #29020
Presenter: Steve Johnston, IBM
Discuss topics of interest with your peers in this special interest group meeting for IBM customers using Oracle's PeopleSoft Enterprise applications.
Network with Other Quest IBM Customers on JD Edwards
10/02/11, 11:15 a.m. – 12:15 p.m., OpenWorld session #29001
Presenter: Steve Johnston, IBM
Discuss topics of interest with your peers in this special interest group meeting for IBM customers using Oracle's JD Edwards EnterpriseOne or JD Edwards World applications.
IOUG: Oracle Business Intelligence Enterprise Edition/Oracle Business Intelligence Applications (27380)
10/02/11, 12:15 p.m. – 1:15 p.m., OpenWorld session #27380
Presenters: Shyam Nath, IBM; Florian Schouten, Oracle
This session looks at Oracle Business Intelligence Enterprise Edition (OBIEE) and Oracle Business Intelligence Applications solutions. Hear what's new in OBIEE Release 18.104.22.168 and how that affects Oracle BI Applications implementations. Learn how mobile BI support in OBIEE adds new meaning to pervasive BI.
IOUG: Oracle Exadata Customer Panel
10/02/11, 1:30 p.m. – 2:30 p.m., OpenWorld session #27261
Presenters: Shyam Nath, IBM; Vinod Haval, Bank of America
This moderated panel discussion includes Oracle Exadata customers, Oracle product managers, and implementers, who will share their real-world implementation experiences and how they overcame challenges in the process.
Managing Your Oracle Applications in Today's Economy: Ask the Experts
10/02/11, 1:30 p.m. – 3:30 p.m., OpenWorld session #29280
Presenter: Frances Wells, IBM
Attend a panel discussion of your peers as they discuss how effective data management strategies have helped them reduce costs, streamline test and development projects, and improve Oracle application performance while increasing IT efficiencies.
Download IBM's mobile app for Oracle OpenWorld and receive a Starbucks gift card! (While supplies last!)
Visit [myIBMmobile.com] and get the IBM mobile app—your guide to navigating IBM events at Oracle OpenWorld 2011.
Optimized for mobile devices—tablet friendly.
Uncover the best award-winning restaurants in San Francisco with the free Zagat guide to local restaurants
Easily navigate the show floor and the city with special
Stay on schedule with a helpful list of all IBM sessions
Learn more about the IBM/Oracle relationship
Find Starbucks locations close to Moscone Center
Of course, IBM is going all out on the social media side as well:
Every September, IBM Tucson spends a Wednesday or Saturday helping out local non-profit charities. The event is organized by the local United Way. My first one was packing boxes of food for the [Community Food Bank of Southern Arizona] on September 12, 2001, the day after the [tragic events in New York and Washington DC]. The mindless activity of putting a bottle, bag or can into one box after another helped us cope with the shock and awe that week.
So, it seemed fitting on the 10th anniversary of that event to go back to the Community Food Bank and help pack boxes of food. The facility received nearly $200,000 in donations in response to the [shooting of US Congresswoman Gabrielle Giffords]. Her husband, astronaut Mark Kelly, suggested that donations go in part to the Tucson Community Food Bank, and with the money they were able to expand operations, dedicating a portion as the [Gabrielle Giffords Family Assistance Center] to bring together food handouts with the [Supplemental Nutrition Assistance Program] for food stamps and the Women, Infants, and Children (WIC) program. One-stop assistance!
This year, nearly 500 Tucson IBMers volunteered to complete 22 projects at 17 nonprofit agencies. We were not alone; we were joined by volunteers from Bank of America, Texas Instruments, Tucson Medical Center, Geico Insurance, University of Arizona, Cox Cable TV, Desert Diamond Casinos, The Westin La Paloma Resort and Spa, the Arizona Lottery, Community Partnership of Southern Arizona (CPSA), Pizza Hut, Arizona Daily Star, 94.9 MixFM Radio, BizTucson, and News 4 Tucson (our local NBC affiliate).
In a bit of competition, our team, Team B, of 14 IBMers, competed against another team, Team A, of 20 people. Despite having fewer people, we were able to pack 746 boxes, representing 20,000 pounds of food, beating out Team A which only packed 18,000 pounds. (I have chosen not to identify anyone on Team A, no need to rub their noses in it. This was all for a good cause.)
Each box contained cereal, canned evaporated milk, canned vegetables and fruits, fruit juice, rice, and dry beans. My job on the assembly line was to put two half-gallon jugs of grape juice in the box and move it down the line.
What lessons can a team of people learn from an activity like this?
When you put a bunch of efficiency experts from IBM on a task, they will self-organize and self-manage for optimum performance, just as we do in our regular day jobs.
No matter what you plan in advance, individual personalities and strengths surface, encouraging minor adjustments to process and procedures to be more efficient.
In an assembly line process, where each person has to wait for the person before them to finish their assigned task, it becomes obvious who is not pulling their fair share of the work. In this manner, everyone holds everyone else accountable for their output.
This was a great day for a good cause. The Community Food Bank qualifies for the Arizona [Working Poor Tax Credit] program. For every dollar the Community Food Bank receives, they can give 10 dollars of food to someone in need.
Special thanks to Greg Kishi for being our team leader for this event, and to Carol Tribble for taking these photographs.
Last week, fellow IBMer Ron Riffe started his three-part series on the Storage Hypervisor. I discussed Part I already in my previous post [Storage Hypervisor Integration with VMware]. We wrapped up the week with a Live Chat with over 30 IT managers, industry analysts, independent bloggers, and IBM storage experts.
"The idea of shopping from a catalog isn’t new and the cost efficiency it offers to the supplier isn’t new either. Public storage cloud service providers seized on the catalog idea quickly as both a means of providing a clear description of available services to their clients, and of controlling costs. Here’s the idea… I can go to a public cloud storage provider like Amazon S3, Nirvanix, Google Storage for Developers, or any of a host of other providers, give them my credit card, and get some storage capacity. Now, the “kind” of storage capacity I get depends on the service level I choose from their catalog.
Most of today’s private IT environments represent the complete other end of the pendulum swing – total customization. Every application owner, every business unit, every department wants to have complete flexibility to customize their storage services in any way they want. This expectation is one of the reasons so many private IT environments have such a heavy mix of tier-1 storage. Since there is no structure around the kind of requests that are coming in, the only way to be prepared is to have a disk array that could service anything that shows up. Not very efficient… There has to be a middle ground.
Private storage clouds are a little different. Administrators we talk to aren’t generally ready to let all their application owners and departments have the freedom to provision new storage on their own without any control. In most cases, new capacity requests still need to stop off at the IT administration group. But once the request gets there, life for the IT administrator is sweet!
Here comes the request from an application owner for 500GB of new “Database” capacity (one of the options available in the storage service catalog) to be attached to some server. After appropriate approvals, the administrator can simply enter the three important pieces of information (type of storage = “Database”, quantity = 500GB, name of the system authorized to access the storage) and click the “Go” button (in TPC SE it’s actually a “Run now” button) to automatically provision and attach the storage. No more complicated checklists or time-consuming manual procedures."
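As a rough sketch, a catalog-driven request like the one described boils down to three inputs resolved against a service catalog. The class, catalog entries and attribute names below are hypothetical illustrations, not the actual TPC SE interface:

```python
# Minimal sketch of catalog-driven provisioning: three inputs resolved against
# a storage service catalog. All names here are illustrative, not a real API.
from dataclasses import dataclass

@dataclass
class ProvisionRequest:
    service_class: str   # entry from the storage service catalog, e.g. "Database"
    capacity_gb: int
    host: str            # system authorized to access the new volume

# Hypothetical catalog: each service class pins down the storage attributes,
# so requesters pick a class instead of custom-specifying everything.
CATALOG = {
    "Database": {"tier": 1, "thin_provisioned": True, "raid": "RAID10"},
    "Archive":  {"tier": 3, "thin_provisioned": True, "raid": "RAID6"},
}

def provision(req: ProvisionRequest) -> dict:
    """Resolve the catalog entry and return the volume attributes to create."""
    spec = CATALOG[req.service_class]   # unknown classes raise KeyError
    return {"host": req.host, "size_gb": req.capacity_gb, **spec}

vol = provision(ProvisionRequest("Database", 500, "appserver01"))
print(vol["tier"])   # 1
```

The point of the catalog is visible in the code: the requester supplies only class, quantity, and host, and everything else is decided once, up front, by the catalog entry.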
A storage hypervisor increases the utilization of storage resources, and optimizes what is most scarce in your environment. For Linux, UNIX and Windows servers, you typically see utilization rates of 20 to 35 percent, and this can be raised to 55 to 80 percent with a storage hypervisor. But what is most scarce in your environment? Time! In a competitive world, it is not big animals eating smaller ones as much as fast ones eating the slow.
Want faster time-to-market? A storage hypervisor can help reduce the time it takes to provision storage, from weeks down to minutes. If your business needs to react quickly to changes in the marketplace, you certainly don't want your IT infrastructure to slow you down like a boat anchor.
Want more time with your friends and family? A storage hypervisor can migrate data non-disruptively, during the week, during the day, during normal operating hours, instead of scheduling down-time on evenings and weekends. As companies adopt a 24-by-7 approach to operations, there are fewer and fewer opportunities in the year for scheduled outages. Some companies get stuck paying maintenance after their warranty expires, because they were not able to move the data off in time.
Want to take advantage of the new solid-state drives? Most admins don't have time to figure out which applications, workloads or indexes would benefit most from this new technology. Let your storage hypervisor's automated tiering do this for you! In fact, a storage hypervisor can gather enough performance and usage statistics to determine the characteristics of your workload in advance, so that you can predict whether solid-state drives are right for you, and how much benefit you would get from them.
Want more time spent on strategic projects? A storage hypervisor allows any server to connect to any storage. This eliminates the time wasted determining when and how, and lets you focus on the what and why of your more strategic, transformational projects.
If this all sounds familiar, it is similar to the benefits one gets from a server hypervisor: better utilization of CPU resources, optimized management and administration time, and the agility and flexibility to deploy new technologies and decommission older ones.
"Server virtualization is a fairly easy concept to understand: Add a layer of software that allows processing capability to work across multiple operating environments. It drives both efficiency and performance because it puts to good use resources that would otherwise sit idle.
Storage virtualization is a different animal. It doesn't free up capacity that you didn't know you had. Rather, it allows existing storage resources to be combined and reconfigured to more closely match shifting data requirements. It's a subtle distinction, but one that makes a lot of difference between what many enterprises expect to gain from the technology and what it actually delivers."
Jon Toigo on his DrunkenData blog brings back the sanity with his post [Once More Into the Fray]. Here is an excerpt:
"What enables me to turn off certain value-add functionality is that it is smarter and more efficient to do these functions at a storage hypervisor layer, where services can be deployed and made available to all disk, not to just one stand bearing a vendor’s three letter acronym on its bezel. Doesn’t that make sense?
I think of an abstraction layer. We abstract away software components from commodity hardware components so that we can be more flexible in the delivery of services provided by software rather than isolating their functionality on specific hardware boxes. The latter creates islands of functionality, increasing the number of widgets that must be managed and requiring the constant inflation of the labor force required to manage an ever expanding kit. This is true for servers, for networks and for storage.
Can we please get past the BS discussion of what qualifies as a hypervisor in some guy’s opinion and instead focus on how we are going to deal with the reality of cutting budgets by 20% while increasing service levels by 10%. That, my friends, is the real challenge of our times."
Did you miss out on last Friday's Live Chat? We are doing it again this Friday, covering parts I and II of Ron's posts, so please join the conversation! The virtual dialogue on this topic will continue in another [Live Chat] on September 30, 2011 from 12 noon to 1pm Eastern Time.
Over on the Tivoli Storage Blog, there is an exchange over the concept of a "Storage Hypervisor". This started with fellow IBMer Ron Riffe's blog post [Enabling Private IT for Storage Cloud -- Part I], with a promise to provide parts 2 and 3 in the next few weeks. Here's an excerpt:
"Storage resources are virtualized. Do you remember back when applications ran on machines that really were physical servers (all that “physical” stuff that kept everything in one place and slowed all your processes down)? Most folks are rapidly putting those days behind them.
In August, Gartner published a paper [Use Heterogeneous Storage Virtualization as a Bridge to the Cloud] that observed “Heterogeneous storage virtualization devices can consolidate a diverse storage infrastructure around a common access, management and provisioning point, and offer a bridge from traditional storage infrastructures to a private cloud storage environment” (there’s that “cloud” language). So, if I’m going to use a storage hypervisor as a first step toward cloud enabling my private storage environment, what differences should I expect? (good question, we get that one all the time!)
The basic idea behind hypervisors (server or storage) is that they allow you to gather up physical resources into a pool, and then consume virtual slices of that pool until it’s all gone (this is how you get the really high utilization). The kicker comes from being able to non-disruptively move those slices around. In the case of a storage hypervisor, you can move a slice (or virtual volume) from tier to tier, from vendor to vendor, and now, from site to site all while the applications are online and accessing the data. This opens up all kinds of use cases that have been described as “cloud”. One of the coolest is inter-site application migration.
A good storage hypervisor helps you be smart.
Application owners come to you for storage capacity because you’re responsible for the storage at your company. In the old days, if they requested 500GB of capacity, you allocated 500GB off of some tier-1 physical array – and there it sat. But then you discovered storage hypervisors! Now you tell that application owner he has 500GB of capacity… What he really has is a 500GB virtual volume that is thin provisioned, compressed, and backed by lower-tier disks. When he has a few data blocks that get really hot, the storage hypervisor dynamically moves just those blocks to higher tier storage like SSD’s. His virtual disk can be accessed anywhere across vendors, tiers and even datacenters. And in the background you have changed the vendor storage he is actually sitting on twice because you found a better supplier. But he doesn’t know any of this because he only sees the 500GB virtual volume you gave him. It’s 'in the cloud'."
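The pooling, thin-provisioning and hot-block ideas in these excerpts can be sketched as a toy model. The class names and block-level granularity below are illustrative assumptions, far simpler than what a real storage hypervisor such as SVC actually does:

```python
# Toy model of the excerpts above: physical capacity from several arrays is
# gathered into one pool, virtual volumes are sliced out of it, physical
# blocks are consumed only on write (thin provisioning), and hot blocks are
# promoted to a faster tier. Illustrative only; real hypervisors track
# extents across tiers with far more machinery.
class ThinVolume:
    def __init__(self, virtual_gb: int):
        self.virtual_gb = virtual_gb     # what the application owner "sees"
        self.blocks = {}                 # logical block -> tier ("hdd" or "ssd")

    def write(self, block: int):
        self.blocks.setdefault(block, "hdd")   # first write lands on lower tier

    def promote(self, hot_blocks):
        for b in hot_blocks:                   # move only the hot blocks to SSD
            if b in self.blocks:
                self.blocks[b] = "ssd"

class StoragePool:
    def __init__(self):
        self.capacity_gb = 0
        self.volumes = {}

    def add_array(self, size_gb: int):         # any vendor's array joins the pool
        self.capacity_gb += size_gb

    def create_volume(self, name: str, virtual_gb: int) -> ThinVolume:
        vol = ThinVolume(virtual_gb)
        self.volumes[name] = vol
        return vol

pool = StoragePool()
pool.add_array(2000)                    # vendor A
pool.add_array(3000)                    # vendor B
vol = pool.create_volume("db01", 500)   # owner is told he has 500 GB
for b in range(10):
    vol.write(b)                        # only 10 blocks physically consumed
vol.promote([2, 3])                     # two hot blocks move to SSD
print(sum(t == "ssd" for t in vol.blocks.values()))   # 2
```

The application owner sees only the 500GB virtual volume; which arrays, vendors, or tiers actually hold his blocks is entirely the pool's business, which is the "in the cloud" effect the excerpt describes.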
"Let’s start with a quick walk down memory lane. Do you remember what your data protection environment looked like before virtualization? There was a server with an operating system and an application… and that thing had a backup agent on it to capture backup copies and send them someplace (most likely over an IP network) for safe keeping. It worked, but it took a lot of time to deploy and maintain all the agents, a lot of bandwidth to transmit the data, and a lot of disk or tapes to store it all. The topic of data protection has modernized quite a bit since then.
Fast forward to today. Modernization has come from three different sources – the server hypervisor, the storage hypervisor and the unified recovery manager. The end result is a data protection environment that captures all the data it needs in one coordinated snapshot action, efficiently stores those snapshots, and provides for recovery of just about any slice of data you could want. It’s quite the beautiful thing."
At this point, you might scratch your head and ask, "Does this storage hypervisor exist, or is this just a theoretical exercise?" The answer, of course, is "Yes, it does exist!" Just like VMware offers vSphere and vCenter, IBM offers block-level disk virtualization through the SAN Volume Controller (SVC) and Storwize V7000 products, with full management support from Tivoli Storage Productivity Center Standard Edition.
SVC has supported every release of VMware since version 2.5. IBM is the leading reseller of VMware, so it makes sense for IBM and VMware development teams to collaborate and make sure all the products run smoothly together. SVC presents volumes that can be formatted with the VMFS file system to hold your VMDK files, accessible via the FCP protocol. IBM and VMware have some key synergies:
Management integration with Tivoli Storage Productivity Center and VMware vCenter plug-in
VAAI support: Hardware-assisted locking, hardware-assisted zeroing, and hardware-assisted copying. Some of the competitors, like EMC VPLEX, don't have this!
Space-efficient FlashCopy. Let's say you need 250 VM images, all running a particular level of Windows. A boot volume of 20GB each would consume 5000GB (5 TB) of capacity. Instead, create a Golden Master volume. Then, take 249 copies with space-efficient FlashCopy, which only consumes space for the modified portions of the new volumes. For each copy, make the necessary changes like unique hostname and IP address, changing only a few blocks of data each. The end result? 250 unique VM boot volumes in less than 25GB of space, a 200:1 reduction!
Support for VMware's Site Recovery Manager using SVC's Metro Mirror or Global Mirror features for remote-distance replication.
Data center federation. SVC allows you to seamlessly do vMotion from one datacenter to another using its "stretched cluster" capability. Basically, SVC makes a single image of the volume available to both locations, and stores two physical copies, one in each location. You can lose either datacenter and still have uninterrupted access to your data. VMware's HA or Fault Tolerance features can kick in, same as usual.
But unlike tools that work only with VMware, IBM's storage hypervisor works with a variety of server virtualization technologies, including Microsoft Hyper-V, Xen, OracleVM, Linux KVM, PowerVM, z/VM and PR/SM. This is important, as a recent poll on the Hot Aisle blog indicates that [44 percent run 2 or more server hypervisors]!
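The space-efficient FlashCopy arithmetic from the list above works out as follows. The roughly 20 MB of changed data per clone is an illustrative assumption standing in for the "few blocks of data" the example mentions:

```python
# Space-efficient FlashCopy arithmetic for the 250-VM example above.
# The per-clone change of ~0.02 GB (about 20 MB) is an assumed figure for
# the hostname/IP edits; the post says only "a few blocks" change.
vm_count = 250
boot_volume_gb = 20

full_copies_gb = vm_count * boot_volume_gb            # 5000 GB without FlashCopy

golden_master_gb = boot_volume_gb                     # one full copy
changed_gb_per_clone = 0.02
clones_gb = (vm_count - 1) * changed_gb_per_clone     # 249 space-efficient copies
space_efficient_gb = golden_master_gb + clones_gb     # just under 25 GB total

print(full_copies_gb)                                 # 5000
print(round(space_efficient_gb, 1))                   # 25.0
print(round(full_copies_gb / space_efficient_gb))     # 200
```

Under that assumption, the 250 boot volumes fit in about 25GB instead of 5TB, the 200:1 reduction claimed in the list.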
Join the conversation! The virtual dialogue on this topic will continue in a [live group chat] this Friday, September 23, 2011 from 12 noon to 1pm EDT. Join me and about 20 other top storage bloggers, key industry analysts and IBM Storage subject matter experts to discuss storage hypervisors and get questions answered about improving your private storage environment.