This blog is for the open exchange of ideas relating to IBM Systems, storage and storage networking hardware, software and services.
(Short URL for this blog: ibm.co/Pearson)
Tony Pearson is a Master Inventor, Senior IT Architect and Event Content Manager for [IBM Systems Technical University] events. With over 30 years at IBM Systems, Tony is a frequent traveler, speaking to clients at events throughout the world.
Tony is author of the Inside System Storage series of books, available on Lulu.com! Order your copies today!
Lloyd Dean is an IBM Senior Certified Executive IT Architect in Infrastructure Architecture. He has held numerous senior technical roles during his 19-plus years at IBM. Most recently, Lloyd has been leading efforts across the Communication/CSI market as a senior Storage Solution Architect/CTS covering the Kansas City territory. In prior years, he supported industry accounts as a Storage Solution Architect, and before that as a Storage Software Solutions specialist in the ATS organization.
Lloyd currently supports North America storage sales teams in his Storage Software Solution Architecture SME role on the Washington Systems Center team. His current focus is IBM Cloud Private; he will be delivering and supporting sessions at Think 2019 and Storage Technical University on the value of IBM storage in this high-value solution, part of the IBM Cloud strategy. Lloyd maintains Subject Matter Expert status across the IBM Spectrum Storage software solutions. You can follow Lloyd on Twitter at @ldean0558 and on LinkedIn as Lloyd Dean.
"The postings on this site solely reflect the personal views of each author and do not necessarily represent the views, positions, strategies or opinions of IBM or IBM management."
(c) Copyright Tony Pearson and IBM Corporation.
All postings are written by Tony Pearson unless noted otherwise.
Tony Pearson is employed by IBM. Mentions of IBM products, solutions or services might be deemed as "paid endorsements" or "celebrity endorsements" by the US Federal Trade Commission.
Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.
Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.
Tony Pearson receives part of the revenue proceeds from sales of the books he has authored, listed in the side panel.
Tony Pearson is not a medical doctor, and this blog does not reference any IBM product or service that is intended for use in the diagnosis, treatment, cure, prevention or monitoring of a disease or medical condition, unless otherwise specified on individual posts.
Wednesday afternoon at the [Oracle OpenWorld 2011] conference started with another keynote session.
In a last-minute substitution, Oracle OpenWorld rescheduled Salesforce.com CEO Marc Benioff's keynote from Wednesday to a Thursday 8:00am general session, to make room for S. D. Shibulal, CEO of InfoSys.
(Forbes Magazine considers InfoSys the #15 [most innovative company]. To give this some context, Salesforce.com and Amazon.com are #1 and #2, Google and Apple are in the top 10, and Oracle is #77.)
S.D. started out saying that "Today is October 4, a very important day in history!" This was October 6, so everybody was a bit confused, checking their watches and tablets to confirm what day it was. He was referring to the first trans-pacific flight, which took place on October 4 exactly 80 years earlier; the pilots were awarded medals and accolades for this tremendous achievement. Today, trans-pacific flights happen every day, and nobody raises an eyebrow. I looked this up: the first trans-pacific flight happened [June 9, 1928] from California to Australia, with stops in Hawaii and Fiji, but [Clyde Edward Pangborn] is remembered for his October 4, 1931 flight from Japan to Wenatchee, Washington, the first non-stop trans-pacific flight. His point, however, is that innovation has a lot of "firsts" that people don't notice until things are commonplace.
If you look at the 1991 list of Fortune 500 companies, only 25 percent are still in operation today (IBM is one of them!). The rest failed to stay relevant, or to scale as needed for market transitions. He gave examples of travel agencies and the Encyclopaedia Britannica, which failed to adapt in the face of [disintermediation]. Success in today's marketplace requires three things:
Predicting and sensing tomorrow's demand
Influencing tomorrow's demand
Fulfilling tomorrow's demand
Ming Tsai, Managing Director and Chief Client Officer of InfoSys, asked a series of questions:
"Is Market Research dead?" In 2010, over 4 EB of data were generated. Marketeers do not need to conduct surveys to generate more data, they are drowning in data that is all around them. 80 percent of profits come from 20 percent of your clients.
"Who controls the message?" This was perhaps a tip of the hat to ousted Marc Benioff of Salesforce.com for being able to organize his own last-minute keynote at the St. Regis hotel using twitter and other social media. I was not there, but apparently the place was packed with a line around the building to hear Marc talk.
"Is Product Development backwards? This is referring to the standard waterfall approach of designing a product, shipping a product, with the first interaction with the end customer being the last stage. In new "agile" development models, customers are engaged up front, with highly iterative deployments, to ensure that the final product meets customer requirements.
He showed an InfoSys product called "Social Edge" that performs "sentiment analysis". Users can co-create their own user experience.
Paul Gottsegen, InfoSys, explained how "Mobility" is challenging existing business models. Their latest product, "mConnect", links web applications to any mobile device. This includes healthcare monitoring, cash for the 50 percent of the world that is "unbanked", and even cable TV on an iPad. He brought on stage Bill Tucker, VP of IT at Nordstrom, a retailer of fine clothing.
(I have a collection of Nordstrom jackets that I have bought in San Francisco's Union Square over the years. Every time I fly from Tucson to San Francisco, especially in the summer, it is freezing cold, and I need to buy a jacket. This time I was prepared, and brought several of my jackets with me.)
Bill explained that people are not comparing their end-user experience at Nordstrom with direct competitors like Macy's, but rather with all of their other end-user experiences, like those at a Starbucks coffee shop or the Apple store. This raises the bar in customer expectations. Nordstrom has been forced to make drastic improvements to keep up with these expectations.
Prasad Thirkutam, InfoSys, asked if supply chains are agile and adaptive enough. He mentioned that 40 percent of Flash memory and 60 percent of circuit boards are made in Japan, which was recently hit by an earthquake and tsunami. He explained the "Demand-to-Deliver" solution from InfoSys, which provides multi-level inventory, identifying safety stock levels based on various analytics. This reduces waste by 7 percent, and shortens the cash cycle from 60 days to 40 days.
Prasad introduced Vin Melvin, CIO of Arrow Electronics. His focus is to get data "correct", to increase end-to-end speed in handling order changes and cancellations, and to optimize and re-balance the supply chain as needed.
(FTC Disclosure: Arrow is a distributor of IBM equipment. I have worked with Arrow many years.)
Wednesday morning at the [Oracle OpenWorld 2011] conference started with another keynote session. This time, Safra Catz, CFO and President of Oracle, introduced John Chambers, CEO of Cisco.
John says Cisco is helping to "empower the customer through market transitions." This includes helping customers decide how to deploy new technology, choosing between integrated stacks and interoperable components, scaling the business with a flat IT budget, and how/when to decide on moving to the cloud.
(FTC Disclosure: IBM resells Cisco switches and directors and are considered a partner in this sense. If you are going to buy Cisco switches and directors, please consider buying them through IBM.)
The information economy is transitioning to a networked one. Access to information is not as important as access to expertise. Processes and procedures are not as important as communities and relationships. The old-style command-and-control management is giving way to collaboration. He showed a chart of the evolution from routed/bridged networks to packet/mobile and video, and another tracing the evolution from mainframes and mini-computers, to client/server and Web, to virtualization in the Cloud. He also suggested that Google's acquisition of Motorola was indicative of the "Death of the PC".
High Tech companies must re-invent themselves to stay relevant. Here were Cisco's five "Foundational Priorities":
Leadership in the Core. This refers to Cisco's core business of high-end Ethernet and Fibre Channel directors.
Collaboration. The original promise of networking computers together was to bring people together as well. He feels that "Collaboration" will take off in the 2010s.
Data Center/Virtualization/Cloud. Cisco is now in the business of selling computers. They are now #2 in North America for x86 server sales, and #3 globally. In this regard, they are a direct competitor to both IBM and Oracle at this conference. John wants to create "borderless" networks between private and public clouds. He claims that Cisco has signed 8,228 Cisco UCS customers over the past 18 months. This was a slam at Oracle, which hasn't sold half that many new systems in the same time period.
Video. John indicated that every product in the Cisco family is video-enabled, from the Cius tablet, to WebEx, to TelePresence, to all of his switches and directors. In theory, the "Flip" video cam that Cisco dropped in their latest round of layoffs would have also been counted in that category. John envisions video taking over as the predominant communication mechanism. Back in 2006, at Oracle OpenWorld, John showed a chart indicating that people would transition from passive TV-watchers to active video producers. Here we are five years later, and while 24 hours' worth of video are uploaded to YouTube every minute, most people are still TV-watchers.
Architectures for Business Transformation. He elaborated on this to refer to issues like reliability, security, and products that are designed to work together. Business and Government leaders are focused on their business, not technology.
He gave a demo of Cisco UCS. This is a 4U collection of server blades, with up to 384GB of DRAM using 8GB DIMMs, or 192GB using much-cheaper 4GB DIMMs. There are two switches with eight 10GbE ports each, for a total of 160 Gbps, which can carry both Ethernet and FCoE traffic. The UCS System Manager is similar to IBM's Unified Resource Manager in that it manages the entire box. A "service profile" has 40 to 50 BIOS settings that can be applied to give each x86 blade a specific personality. You can re-provision blades by changing their service profiles as needed.
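Those memory and bandwidth figures are easy to sanity-check. Here is a quick back-of-the-envelope calculation (my own arithmetic, assuming 48 DIMM slots per chassis, which is what both memory figures imply):

```python
# Sanity check of the UCS numbers quoted in the demo.
dimm_slots = 48                               # assumption: 48 slots per chassis
print(dimm_slots * 8, "GB with 8GB DIMMs")    # 384 GB
print(dimm_slots * 4, "GB with 4GB DIMMs")    # 192 GB

switches, ports_per_switch, gbps_per_port = 2, 8, 10
print(switches * ports_per_switch * gbps_per_port, "Gbps aggregate")  # 160 Gbps
```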
The next demo was really cool. They took video that involved people talking, and had it "machine transcribed" so that you can read the words being said in the video. Type in a word like "tolerances" in the search engine, and the video advances exactly to the spot where that word is uttered.
The next demo after that involved a special camera for monitoring High-Occupancy Vehicle (HOV) lanes in traffic. In an example used in London, UK, the camera can see inside the car and confirm there are enough people to justify HOV usage; if not, it scans the license plate and charges the owner of the vehicle a fine. (In a sense, "Big Data" analytics combined with Cisco's vision of ubiquitous video equals [Big Brother].)
In another slam against Oracle, John actually backed up his claims with published benchmarks. He wrapped up his talk with: "If I have done my job well, then you will all leave this room a bit uncomfortable." Not surprisingly, John didn't mention either the vBlock relationship with EMC, or the FlexPod relationship with NetApp.
Tuesday morning at the [Oracle OpenWorld 2011] conference started with another keynote session. This time, Michael Dell, founder, chairman and CEO of Dell, Inc., presented. Over the past nine years, he feels "the line between business and IT is going away." Michael claims that "Dell is no longer a PC company", and instead is focusing on data center solutions and services to be more like IBM.
John Fowler, Executive VP for Oracle Hardware, claims that Oracle has a single team for hardware development. The SPARC-T4 is their newest chip, with 8 cores and 64 dynamic threads, running at 3.0 GHz. It has on-chip 10GbE ethernet, PCIe, DDR3 Memory controllers and Crypto features. For storage, Oracle now offers four different offerings:
Exadata (as Database storage)
ZFS Storage Array (NAS)
Pillar Axiom (block-level I/O)
StorageTek tape
Edward Screven, Chief Corporate Architect at Oracle, indicated that the new Oracle Linux kernel allows for zero downtime patches, meaning that you can update the OS while applications are running without a reboot. The OracleVM (based on open-source XEN) supports both x86 and SPARC-based server hosts. On x86, it can run Linux, Solaris and Windows guests. On SPARC, it can run Linux and Solaris guests.
Juan Loaiza, Oracle Senior VP, explained the Exadata. It has 168 disk drives and 56 PCIe Flash Cards, connected via 40Gbps InfiniBand. The Exadata keeps all data on spinning disk, with "warm data" cached on Flash, and "hot data" cached in DRAM. This is similar to IBM's Easy Tier feature on the DS8000, SVC and Storwize V7000.
Brad Cameron, Senior Director, explained Exalogic, which pre-dates Oracle's acquisition of Sun Microsystems. The idea was to build an x86 machine for running Java applications on Oracle WebLogic. The Exalogic can connect via Infiniband to an Exadata to access database information, and to 10GbE ethernet for the rest of the servers and clients. Whether you get the quarter, half or full-rack system, you get 40TB of NAS storage.
Ganesh Ramamurthy, Oracle VP of Hardware Engineering, presented the SPARC Supercluster. This combines the storage cells from Exadata, the compute nodes from Exalogic, shared NAS storage using the ZFS file system, and Solaris 11 with OracleVM. Taking a cue from IBM's zEnterprise Unified Resource Manager, Oracle is offering centralized management for all the layers in the SPARC Supercluster stack. The SPARC Supercluster is intended as a general-purpose machine, and can be used to run non-Oracle applications like SAP. From a storage perspective, he claims that the storage in the SPARC Supercluster is 2.5x better than EMC VMAX, which basically puts it comparable to IBM XIV pricing.
For my readers in San Francisco attending Oracle OpenWorld, here are some sessions that IBM is featuring on Wednesday. Note the first two are Solution Spotlight sessions at the IBM Booth #1111 where I will be most of the time.
Data Management Best Practices for Oracle Applications
Oracle RAC and Cloud: Tips from IBM Global Business Services
10/05/11, 10:00 a.m. -- 11:00 a.m., OpenWorld session #15733
Presenters: David Simpson, IBM; Nalin Sahoo, Oracle
In this session, gain valuable insight into high-availability systems leveraging Oracle Database 11g Release 2 and Oracle Real Application Clusters (Oracle RAC). Hear best practices and lessons learned with these Oracle technologies as well as how IBM utilizes cloud infrastructure with Oracle Clusterware and server pools.
In the Heat of the Oracle Fusion Decision-Making Process: What's Your Next Move?
10/05/11, 10:00 a.m. -- 11:00 a.m., OpenWorld session #9423
Presenter: Esther Parker, IBM
This session discusses how companies can embrace Oracle Fusion so they can meet their business objectives today and in the future.
Monday morning of the [Oracle OpenWorld 2011] conference had Joe Tucci, CEO of EMC, present the keynote. Joe indicated that I.T. stands for "Industry in Transition". He had a chart that showed the history of IT, from the mainframe and mini-computer, to the PC and client/server era, and now to the Cloud era. He called these "waves of disruption". The catalysts for change are a "Budget Dilemma", an "Information Deluge" and "Cyber Security". The keynote was very similar to what EMC presented at the [VMworld] conference earlier this summer.
"We have failed our customers. Over the past 10 years, they spend 73% to maintain their existing systems, and only 27% for new."
--- Joe Tucci, EMC
While many people equate "EMC" and "Failure", I believe Joe was referring not just to his own company, but to most of the other IT vendors as well. Analysts predict that from January 1, 2010 to December 31, 2019, the world of stored data will grow from 0.8 ZB to 35.2 ZB, a 44x increase. During that same time, IT staff is expected to grow only 50 percent. A staggering 90 percent of this data will be unstructured (non-database) content. Meanwhile, the average company gets cyber-attacked 300 times per week.
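For context, here is the compound annual growth rate implied by those two decade-long projections (a quick sketch of the arithmetic, nothing more):

```python
# Annual growth rates implied by the decade-long projections above.
years = 10
data_growth = 44 ** (1 / years) - 1       # 44x data growth over ten years
staff_growth = 1.5 ** (1 / years) - 1     # 50% staff growth over ten years
print(f"data:  {data_growth:.1%} per year")   # roughly 46% per year
print(f"staff: {staff_growth:.1%} per year")  # roughly 4% per year
```

In other words, data is projected to grow more than ten times faster, per year, than the staff available to manage it.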
The answer is Cloud Computing. A few years ago, EMC was trying to get people to go the "private cloud" route instead of "public cloud"; they now have a more realistic "hybrid cloud" approach, similar to IBM's. Of the clients that EMC works with, 35 percent are implementing some form of cloud, and another 30 percent are planning to. The tenets of Hybrid Cloud are "Efficiency", "Control" and "Choice", which together equal "Agility".
Joe also mentioned that there is now a new "layering" for IT. Instead of storage, switches and servers, we have a cloud platform of shared resources, mobile devices like smartphones and tablets, and management.
Joe feels there is a massive opportunity where Cloud meets Big Data. A cute video showed a driver wearing a motorcycle helmet (so you can't see his face) get into an under-powered car with "VNXe" on the license plate. He punches "Cloud and Big Data" into the GPS navigation system, and starts out on city streets. Then the car transforms into an under-utilized family sedan, "VNX", on a highway in the middle of the desert, and then into an over-priced sports car labeled "VMAX" as it climbs into the mountains surrounded by fog. The video borrowed the "CARS" theme from the videos IBM developed for its 2008 launch of the "Information Infrastructure" initiative.
EMC's Pat Gelsinger (CTO) and fellow blogger Chad Sakac did some demos of VMware vCenter. They called VMware vSphere "the Datacenter-wide OS", indicating that EMC storage has 75 points of integration with their "partner" (VMware is majority-owned by EMC, so I am not sure if partner is the right term). If you don't count Itanium, SPARC, POWER and IBM System z architectures, VMware enjoys over 80 percent market share for server virtualization.
(Full disclosure: IBM is the leading reseller of VMware.)
Pat claims that 40 percent of Oracle Apps at EMC run on VMware. For the longest time, Oracle refused to support its apps on VMware, but it relaxed this restrictive policy back in 2009. Today, nearly 25 percent of Oracle Apps run virtualized. EMC claims that it can support 5 million VMs on a single VMAX, and can generate 1 million IOPS from a single VMware ESX host.
Chad did a demo of vFabric, which allows a vCenter plug-in to spin up database instances of OracleDB, MySQL, Hadoop, PostgreSQL, and GreenPlum (GreenPlum is EMC's version of open-source PostgreSQL).
Chad showed that VMware vMotion could move workloads from servers without solid-state storage to servers that are flash-enabled. Lightweight workloads can be moved from DAS-enabled servers to compute-enabled storage devices like EMC's Isilon. (EMC acquired Isilon to offer a me-too version of IBM's Scale-Out NAS [SONAS] product.) EMC announced its first "Solid-State on a PCIe card" from its Project Lightning initiative. These are 320 GB capacity, so they sound like a me-too version of the [Fusion-io IOdrive] cards that IBM has had available for quite some time now.
Next, Pat and Chad talked about Big Data. The world is transforming from a manual scale-up model to an automated scale-out architecture, moving from "islands" to "pools". They used a cute example of car insurance: business analytics reviewed a safe driver's record, including the driver's Facebook and Twitter activity, and gave him a discount, then reviewed the bad driving habits of another driver and raised that driver's rates.
EMC announced their "GreenPlum Analytics Platform" (GAP?). I often tell people that if you want to predict what EMC will announce next, just look at what IBM announced 18 months ago. This new platform sounds like their me-too version of IBM's [Smart Analytics System].
After EMC, Judith Sim from Oracle introduced Ed Lee, the Mayor of San Francisco, which had just been named the "Greenest city in North America". He thanked the audience for contributing an estimated $100 million USD to his local economy. He was also happy that, by eliminating paper-based handouts and conference materials, the audience saved an estimated 1,636 trees.
Mark Hurd, formerly CEO of HP, and now president of Oracle, gave some highlights of 2011, and what Oracle's strategy is going forward. He said that Oracle plans to provide complete stacks, complete choice, and have each component of the stack be best-of-breed. In 2011, Oracle introduced the new MySQL 5.5 database, the Java 7 programming language, and the Solaris 11 operating system with the ZFS file system. Oracle spent $4 Billion in R&D, and gained 20 percent growth in software licenses, giving them 33 percent growth for fiscal year 2011. Oracle acquired Larry Ellison's [Pillar Data] storage company, and also launched a [Database Appliance].
Thomas Kurian, another Oracle executive, finished the keynote session. He started with yet another chart showing the historical transition from Mainframe to Tablet. He indicated that leading-edge OracleDB and their Fusion middleware combined with industry standard hardware provides 5-30x faster queries, 4-10x less disk space, and simplifies the data center footprint. Their Exadata provides what he likes to call "Hierarchical Storage Management" between DRAM, Flash Solid-State, and spinning disk.
(Note: I started my career at IBM in 1986 working on a product called DFHSM, the Data Facility Hierarchical Storage Manager! It is now a vibrant component of DFSMS, part of IBM's z/OS mainframe operating system.)
Finally, Oracle announced their "Exadata Storage Expansion Rack". Many people realized that the Exadata was under-provisioned for storage, which explains why they have only sold a few thousand of them, so perhaps this new announcement is to address that deficiency.
If you are attending Oracle OpenWorld, here are sessions for Tuesday that IBM is featuring. Note the first two are Solution Spotlight sessions at the IBM Booth #1111 where I will be most of the time.
Securing Heterogeneous Database Infrastructures: A Comprehensive Approach
10/04/11, 9:45 a.m. -- 10:15 a.m., Solution Spotlight, Booth #1111 Moscone South
Presenter: Al Cooley, Director, IBM InfoSphere Guardium
IBM Business Analytics for Oracle Solutions
10/04/11, 2:15 p.m. -- 2:45 p.m., Solution Spotlight, Booth #1111 Moscone South
Presenter: John Strazdins, ERP Strategy Executive
Consolidated Global View of Your Customer with One Global Billing System
10/04/11, 3:30 p.m. -- 4:30 p.m., OpenWorld session #23650
Presenter: John Waterman, IBM
Enterprise billing system technologies are emerging to assist with global customer views and other challenges banks struggle with today. In this session, Citi discusses its challenges and successes in implementing a global billing system.
Upgrading Your Siebel CRM with Reduced Risk and Lowered Cost: Customer Successes
10/04/11, 3:30 p.m. -- 4:30 p.m., OpenWorld session #18222
Presenters: Arnaud Wingelaar, IBM; Geetha Sundaram; Agnes Zhang, Oracle
Hear customer success stories about upgrading Siebel CRM. Learn best practices on upgrading with lowered cost, or achieving a high-availability upgrade with zero downtime and reduced risk.
Next week, October 2-6, I am in San Francisco to support the IBM exhibition booth at the [Oracle OpenWorld 2011] conference. IBM is a Grand Level Sponsor for this event. IBM and Oracle have been partners since 1986, and IBM is a [Diamond Level Partner in the Oracle PartnerNetwork], the highest level available. I will be joined by dozens of other subject matter experts from various parts of IBM. Here is my schedule:
Sunday
5:30pm - 7:00pm: Keynote session, Moscone North, Hall D
7:15pm - 10:30pm: IBM Team Dinner
Monday
8:00am - 9:15am: Keynote session, Moscone North, Hall D
9:45am - 4:30pm: IBM Booth #1111, Moscone South
5:00pm - 7:00pm: JD Edwards Customer Appreciation Event
Tuesday
8:00am - 9:15am: Keynote session, Moscone North, Hall D
9:45am - 6:00pm: IBM Booth #1111, Moscone South
7:00pm - 9:30pm: Titan Award Gala, SF City Hall
Wednesday
8:00am - 9:15am: Keynote session, Moscone North, Hall D
9:45am - 4:00pm: IBM Booth #1111, Moscone South
I won't have my laptop at the IBM booth, so if you need to reach me, send me an SMS text message to my cell phone, or send me a tweet on my Twitter account: [@az990tony]
IBM will also have experts in the following areas throughout the week:
Intel: Booth #711 at Moscone South
Java One: Booth #5608 at the Hilton San Francisco, Continental Ballroom
JD Edwards Pavilion: Booth HSJ-002 at the Westin St. Francis Hotel
Netezza, a newly acquired IBM company: Booth #3723 at Moscone West
I arrive Sunday afternoon. If you arrive Sunday, here are some things IBM is featuring:
Network with Other Quest IBM Customers on PeopleSoft
10/02/11, 10:00 a.m. – 11:00 a.m., OpenWorld session #29020
Presenter: Steve Johnston, IBM
Discuss topics of interest with your peers in this special interest group meeting for IBM customers using Oracle's PeopleSoft Enterprise applications.
Network with Other Quest IBM Customers on JD Edwards
10/02/11, 11:15 a.m. – 12:15 p.m., OpenWorld session #29001
Presenter: Steve Johnston, IBM
Discuss topics of interest with your peers in this special interest group meeting for IBM customers using Oracle's JD Edwards EnterpriseOne or JD Edwards World applications.
IOUG: Oracle Business Intelligence Enterprise Edition/Oracle Business Intelligence Applications (27380)
10/02/11, 12:15 p.m. – 1:15 p.m., OpenWorld session #27380
Presenters: Shyam Nath, IBM; Florian Schouten, Oracle
This session looks at Oracle Business Intelligence Enterprise Edition (OBIEE) and Oracle Business Intelligence Applications solutions. Hear what's new in OBIEE Release 11.1.1.5 and how that affects Oracle BI Applications implementations. Learn how mobile BI support in OBIEE adds new meaning to pervasive BI.
IOUG: Oracle Exadata Customer Panel
10/02/11, 1:30 p.m. – 2:30 p.m., OpenWorld session #27261
Presenters: Shyam Nath, IBM; Vinod Haval, Bank of America
This moderated panel discussion includes Oracle Exadata customers, Oracle product managers, and implementers who will share their real-world implementation experiences and how they overcame challenges in the process.
Managing Your Oracle Applications in Today's Economy: Ask the Experts
10/02/11, 1:30 p.m. – 3:30 p.m., OpenWorld session #29280
Presenter: Frances Wells, IBM
Attend a panel discussion of your peers as they discuss how effective data management strategies have helped them reduce costs, streamline test and development projects, and improve Oracle application performance while increasing IT efficiencies.
Download IBM's mobile app for Oracle OpenWorld and receive a Starbucks gift card! (While supplies last!)
Visit [myIBMmobile.com] and get the IBM mobile app, your guide to navigating IBM events at Oracle OpenWorld 2011. Optimized for mobile devices and tablet friendly, the app lets you:
Uncover the best award-winning restaurants in San Francisco with the free Zagat guide to local restaurants
Easily navigate the show floor and the city with special tools
Stay on schedule with a helpful list of all IBM sessions
Learn more about the IBM/Oracle relationship
Find Starbucks locations close to Moscone Center
Of course, IBM is going all out on the social media side as well:
Every September, IBM Tucson spends a Wednesday or Saturday helping out local non-profit charities. The event is organized by the local United Way. My first one was packing boxes of food for the [Community Food Bank of Southern Arizona] on September 12, 2001, the day after the [tragic events in New York and Washington DC]. The mindless activity of putting a bottle, bag or can into one box after another helped us cope with the shock and awe of that week.
So, it seemed fitting on the 10th anniversary of that event to go back to the Community Food Bank and help pack boxes of food. The facility received nearly $200,000 in donations in response to the [shooting of US Congresswoman Gabrielle Giffords]. Her husband, astronaut Mark Kelly, suggested that donations go in part to the Tucson Community Food Bank, and with the money they were able to expand operations, dedicating a portion as the [Gabrielle Giffords Family Assistance Center] to bring together food handouts with the [Supplemental Nutrition Assistance Program] (SNAP) for food stamps and the Women, Infants, and Children (WIC) program. One-stop assistance!
This year, nearly 500 Tucson IBMers turned out to complete 22 projects at 17 nonprofit agencies. We were not alone; we were joined by volunteers from Bank of America, Texas Instruments, Tucson Medical Center, Geico Insurance, University of Arizona, Cox Cable TV, Desert Diamond Casinos, The Westin La Paloma Resort and Spa, the Arizona Lottery, Community Partnership of Southern Arizona (CPSA), Pizza Hut, Arizona Daily Star, 94.9 MixFM Radio, BizTucson, and News 4 Tucson (our local NBC affiliate).
In a bit of competition, our team, Team B, with 14 IBMers, competed against Team A, with 20 people. Despite having fewer people, we packed 746 boxes, representing 20,000 pounds of food, beating Team A, which packed only 18,000 pounds. (I have chosen not to identify anyone on Team A; no need to rub their noses in it. This was all for a good cause.)
Each box contained cereal, canned evaporated milk, canned vegetables and fruits, fruit juice, rice, and dry beans. My job on the assembly line was to put two half-gallon jugs of grape juice in the box and move it down the line.
What lessons can a team of people learn from an activity like this?
When you put a bunch of efficiency experts from IBM on a task, they will self-organize and self-manage for optimum performance, just as we do in our regular day jobs.
No matter what you plan in advance, individual personalities and strengths surface, prompting minor adjustments to processes and procedures to make them more efficient.
In an assembly line process, where each person has to wait for the person before them to finish their assigned task, it becomes obvious who is not pulling their fair share of the work. In this manner, everyone holds everyone else accountable for their output.
This was a great day for a good cause. The Community Food Bank qualifies for the Arizona [Working Poor Tax Credit] program. For every dollar the Community Food Bank receives, they can give 10 dollars of food to someone in need.
Special thanks to Greg Kishi for being our team leader for this event, and to Carol Tribble for taking these photographs.
Last week, fellow IBMer Ron Riffe started his three-part series on the Storage Hypervisor. I discussed Part I already in my previous post [Storage Hypervisor Integration with VMware]. We wrapped up the week with a Live Chat with over 30 IT managers, industry analysts, independent bloggers, and IBM storage experts.
"The idea of shopping from a catalog isn’t new and the cost efficiency it offers to the supplier isn’t new either. Public storage cloud service providers seized on the catalog idea quickly as both a means of providing a clear description of available services to their clients, and of controlling costs. Here’s the idea… I can go to a public cloud storage provider like Amazon S3, Nirvanix, Google Storage for Developers, or any of a host of other providers, give them my credit card, and get some storage capacity. Now, the “kind” of storage capacity I get depends on the service level I choose from their catalog.
Most of today’s private IT environments represent the complete other end of the pendulum swing – total customization. Every application owner, every business unit, every department wants to have complete flexibility to customize their storage services in any way they want. This expectation is one of the reasons so many private IT environments have such a heavy mix of tier-1 storage. Since there is no structure around the kind of requests that are coming in, the only way to be prepared is to have a disk array that could service anything that shows up. Not very efficient… There has to be a middle ground.
Private storage clouds are a little different. Administrators we talk to aren’t generally ready to let all their application owners and departments have the freedom to provision new storage on their own without any control. In most cases, new capacity requests still need to stop off at the IT administration group. But once the request gets there, life for the IT administrator is sweet!
Here comes a request from an application owner for 500GB of new "Database" capacity (one of the options available in the storage service catalog) to be attached to some server. After appropriate approvals, the administrator simply enters the three important pieces of information (type of storage = "Database", quantity = 500GB, name of the system authorized to access the storage) and clicks the "Go" button (in TPC SE it's actually a "Run now" button) to automatically provision and attach the storage. No more complicated checklists or time-consuming manual procedures."
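To make the idea concrete, here is a minimal sketch of what such a catalog-driven request boils down to. This is not the actual TPC SE interface; the catalog entries and function names are my own invention for illustration:

```python
# A toy service catalog: each service class implies tier, RAID level,
# thin provisioning and so on, so the requester never specifies them.
SERVICE_CATALOG = {
    "Database":  {"tier": "tier-1", "raid": "RAID-10", "thin": False},
    "FileShare": {"tier": "tier-2", "raid": "RAID-5",  "thin": True},
}

def provision(storage_type: str, size_gb: int, host: str) -> dict:
    """The 'Run now' button, in effect: expand three inputs into a full spec."""
    spec = dict(SERVICE_CATALOG[storage_type])
    spec.update({"capacity_gb": size_gb, "authorized_host": host})
    return spec  # a real tool would now create, zone, map and attach the volume

print(provision("Database", 500, "appserver01"))
```

The point is how little the requester has to specify: everything else is implied by the service class definition in the catalog.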
A storage hypervisor increases the utilization of storage resources, and optimizes what is most scarce in your environment. For Linux, UNIX and Windows servers, you typically see utilization rates of 20 to 35 percent; a storage hypervisor can raise this to 55 to 80 percent. But what is most scarce in your environment? Time! In a competitive world, it is not so much the big animals eating the smaller ones as the fast ones eating the slow.
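A quick illustration of what those utilization numbers mean in raw capacity (illustrative arithmetic only, using figures from the ranges above):

```python
# Raw capacity needed to hold 100 TB of actual data at two
# utilization rates from the ranges quoted above.
data_tb = 100
for utilization in (0.25, 0.70):
    print(f"at {utilization:.0%} utilization: {data_tb / utilization:.0f} TB raw")
# at 25% utilization: 400 TB raw
# at 70% utilization: 143 TB raw
```

More than half the raw capacity you would otherwise buy simply disappears from the shopping list.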
Want faster time-to-market? A storage hypervisor can help reduce the time it takes to provision storage, from weeks down to minutes. If your business needs to react quickly to changes in the marketplace, you certainly don't want your IT infrastructure to slow you down like a boat anchor.
Want more time with your friends and family? A storage hypervisor can migrate data non-disruptively, during the week, during the day, during normal operating hours, instead of scheduling down-time on evenings and weekends. As companies adopt a 24-by-7 approach to operations, there are fewer and fewer opportunities in the year for scheduled outages. Some companies get stuck paying maintenance after their warranty expires because they were not able to move the data off in time.
Want to take advantage of the new Solid-State Drives? Most admins don't have time to figure out which applications, workloads or indexes would benefit most from this new technology. Let your storage hypervisor's automated tiering do this for you! In fact, a storage hypervisor can gather enough performance and usage statistics to determine the characteristics of your workload in advance, so that you can predict whether solid-state drives are right for you, and how much benefit you would get from them.
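Here is a sketch of the kind of skew analysis such a tool performs. The workload data is made up; a real storage hypervisor would use the performance statistics it collects:

```python
# How much of the total I/O would land on SSD if only the hottest
# fraction of extents were promoted? Made-up, Zipf-like workload.
def ssd_hit_rate(io_per_extent, ssd_fraction):
    """Fraction of total I/O captured by the hottest extents."""
    hot_first = sorted(io_per_extent, reverse=True)
    ssd_slots = max(1, int(len(hot_first) * ssd_fraction))
    return sum(hot_first[:ssd_slots]) / sum(hot_first)

workload = [1000 // rank for rank in range(1, 1001)]   # skewed access pattern
print(f"{ssd_hit_rate(workload, 0.05):.0%} of I/O from 5% of capacity on SSD")
# prints roughly 60% for this skewed workload
```

If most of the I/O concentrates in a small fraction of the capacity, a thin layer of SSD pays off; if the workload is uniform, it probably won't.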
Want more time for strategic projects? A storage hypervisor allows any server to connect to any storage. This eliminates the time wasted determining when and how, and lets you focus on the what and why of your more strategic, transformational projects.
If this all sounds familiar, these are similar to the benefits one gets from a server hypervisor: better utilization of CPU resources, optimized management and administration time, and the agility and flexibility to phase new technologies in and decommission older ones out.
"Server virtualization is a fairly easy concept to understand: Add a layer of software that allows processing capability to work across multiple operating environments. It drives both efficiency and performance because it puts to good use resources that would otherwise sit idle.
Storage virtualization is a different animal. It doesn't free up capacity that you didn't know you had. Rather, it allows existing storage resources to be combined and reconfigured to more closely match shifting data requirements. It's a subtle distinction, but one that makes a lot of difference between what many enterprises expect to gain from the technology and what it actually delivers."
Jon Toigo on his DrunkenData blog brings back the sanity with his post [Once More Into the Fray]. Here is an excerpt:
"What enables me to turn off certain value-add functionality is that it is smarter and more efficient to do these functions at a storage hypervisor layer, where services can be deployed and made available to all disk, not to just one stand bearing a vendor’s three letter acronym on its bezel. Doesn’t that make sense?
I think of an abstraction layer. We abstract away software components from commodity hardware components so that we can be more flexible in the delivery of services provided by software rather than isolating their functionality on specific hardware boxes. The latter creates islands of functionality, increasing the number of widgets that must be managed and requiring the constant inflation of the labor force required to manage an ever expanding kit. This is true for servers, for networks and for storage.
Can we please get past the BS discussion of what qualifies as a hypervisor in some guy’s opinion and instead focus on how we are going to deal with the reality of cutting budgets by 20% while increasing service levels by 10%. That, my friends, is the real challenge of our times."
Did you miss out on last Friday's Live Chat? We are doing it again this Friday, covering parts I and II of Ron's posts, so please join the conversation! The virtual dialogue on this topic will continue in another [Live Chat] on September 30, 2011 from 12 noon to 1pm Eastern Time.
Over on the Tivoli Storage Blog, there is an exchange over the concept of a "Storage Hypervisor". This started with fellow IBMer Ron Riffe's blog post [Enabling Private IT for Storage Cloud -- Part I], with a promise to provide parts 2 and 3 in the next few weeks. Here's an excerpt:
"Storage resources are virtualized. Do you remember back when applications ran on machines that really were physical servers (all that “physical” stuff that kept everything in one place and slowed all your processes down)? Most folks are rapidly putting those days behind them.
In August, Gartner published a paper [Use Heterogeneous Storage Virtualization as a Bridge to the Cloud] that observed “Heterogeneous storage virtualization devices can consolidate a diverse storage infrastructure around a common access, management and provisioning point, and offer a bridge from traditional storage infrastructures to a private cloud storage environment” (there’s that “cloud” language). So, if I’m going to use a storage hypervisor as a first step toward cloud enabling my private storage environment, what differences should I expect? (good question, we get that one all the time!)
The basic idea behind hypervisors (server or storage) is that they allow you to gather up physical resources into a pool, and then consume virtual slices of that pool until it’s all gone (this is how you get the really high utilization). The kicker comes from being able to non-disruptively move those slices around. In the case of a storage hypervisor, you can move a slice (or virtual volume) from tier to tier, from vendor to vendor, and now, from site to site all while the applications are online and accessing the data. This opens up all kinds of use cases that have been described as “cloud”. One of the coolest is inter-site application migration.
A good storage hypervisor helps you be smart.
Application owners come to you for storage capacity because you’re responsible for the storage at your company. In the old days, if they requested 500GB of capacity, you allocated 500GB off of some tier-1 physical array – and there it sat. But then you discovered storage hypervisors! Now you tell that application owner he has 500GB of capacity… What he really has is a 500GB virtual volume that is thin provisioned, compressed, and backed by lower-tier disks. When he has a few data blocks that get really hot, the storage hypervisor dynamically moves just those blocks to higher tier storage like SSD’s. His virtual disk can be accessed anywhere across vendors, tiers and even datacenters. And in the background you have changed the vendor storage he is actually sitting on twice because you found a better supplier. But he doesn’t know any of this because he only sees the 500GB virtual volume you gave him. It’s 'in the cloud'."
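Ron's 500GB example boils down to a simple separation: the volume the application owner sees is fixed, while everything behind it can change. Here is a toy model (class and attribute names are mine, not any product's API):

```python
# Toy model of a thin-provisioned virtual volume under a storage hypervisor.
class VirtualVolume:
    def __init__(self, presented_gb, backing_tier):
        self.presented_gb = presented_gb   # what the application owner sees
        self.backing_tier = backing_tier   # what actually stores the data
        self.allocated_gb = 0              # thin provisioning: grows on writes

    def write(self, gb):
        self.allocated_gb = min(self.presented_gb, self.allocated_gb + gb)

    def migrate(self, new_tier):
        self.backing_tier = new_tier       # non-disruptive: identity unchanged

vol = VirtualVolume(500, "nearline-SAS")
vol.write(42)                              # only 42GB physically consumed so far
vol.migrate("SSD")                         # the owner never notices the move
print(vol.presented_gb, vol.allocated_gb, vol.backing_tier)
```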
"Let’s start with a quick walk down memory lane. Do you remember what your data protection environment looked like before virtualization? There was a server with an operating system and an application… and that thing had a backup agent on it to capture backup copies and send them someplace (most likely over an IP network) for safe keeping. It worked, but it took a lot of time to deploy and maintain all the agents, a lot of bandwidth to transmit the data, and a lot of disk or tapes to store it all. The topic of data protection has modernized quite a bit since then.
Fast forward to today. Modernization has come from three different sources – the server hypervisor, the storage hypervisor and the unified recovery manager. The end result is a data protection environment that captures all the data it needs in one coordinated snapshot action, efficiently stores those snapshots, and provides for recovery of just about any slice of data you could want. It’s quite the beautiful thing."
At this point, you might scratch your head and ask "Does this Storage Hypervisor exist, or is this just a theoretical exercise?" The answer, of course, is "Yes, it does exist!" Just as VMware offers vSphere and vCenter, IBM offers block-level disk virtualization through the SAN Volume Controller (SVC) and Storwize V7000 products, with full management support from Tivoli Storage Productivity Center Standard Edition.
SVC has supported every release of VMware since version 2.5. IBM is the leading reseller of VMware, so it makes sense for IBM and VMware development to collaborate and make sure all the products run smoothly together. SVC presents volumes that can be formatted with the VMFS file system to hold your VMDK files, accessible via the FCP protocol. IBM and VMware have some key synergies:
Management integration with Tivoli Storage Productivity Center and VMware vCenter plug-in
VAAI support: Hardware-assisted locking, hardware-assisted zeroing, and hardware-assisted copying. Some of the competitors, like EMC VPLEX, don't have this!
Space-efficient FlashCopy. Let's say you need 250 VM images, all running a particular level of Windows. At 20GB per boot volume, they would consume 5000GB (5TB) of capacity. Instead, create a Golden Master volume, then take 249 copies with space-efficient FlashCopy, which only consumes space for the modified portions of the new volumes. For each copy, make the necessary changes, such as a unique hostname and IP address, changing only a few blocks of data each. The end result? 250 unique VM boot volumes in less than 25GB of space, a 200:1 reduction! (See the sketch after this list.)
Support for VMware's Site Recovery Manager using SVC's Metro Mirror or Global Mirror features for remote-distance replication.
Data center federation. SVC allows you to seamlessly do vMotion from one datacenter to another using its "stretched cluster" capability. Basically, SVC makes a single image of the volume available to both locations, and stores two physical copies, one in each location. You can lose either datacenter and still have uninterrupted access to your data. VMware's HA or Fault Tolerance features can kick in, same as usual.
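Here is the arithmetic behind the 200:1 claim in the FlashCopy item above, assuming (my figure, purely for illustration) roughly 20MB of changed blocks per clone for the hostname and IP customization:

```python
# Space math for 250 space-efficient FlashCopy boot volumes.
vms, boot_gb = 250, 20
full_copies_gb = vms * boot_gb                       # 5000 GB without FlashCopy
delta_gb = 0.02                                      # ~20MB changed per clone (assumed)
space_efficient_gb = boot_gb + (vms - 1) * delta_gb  # golden master + deltas
print(f"{full_copies_gb} GB vs {space_efficient_gb:.0f} GB, "
      f"about {full_copies_gb / space_efficient_gb:.0f}:1")
# 5000 GB vs 25 GB, about 200:1
```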
But unlike tools that work only with VMware, IBM's storage hypervisor works with a variety of server virtualization technologies, including Microsoft Hyper-V, Xen, OracleVM, Linux KVM, PowerVM, z/VM and PR/SM. This is important, as a recent poll on the Hot Aisle blog indicates that [44 percent run 2 or more server hypervisors]!
Join the conversation! The virtual dialogue on this topic will continue in a [live group chat] this Friday, September 23, 2011 from 12 noon to 1pm EDT. Join me and about 20 other top storage bloggers, key industry analysts and IBM Storage subject matter experts to discuss storage hypervisors and get questions answered about improving your private storage environment.
Last February, IBM introduced Watson on the Jeopardy! game show. These three shows were re-aired this week in the United States. I wrote a series of blog posts back then:
The last post in that series, on how to build your own Watson, Jr., has gotten over 69,000 hits! While several people told me they plan to build their own, I have not heard back from anyone yet, so perhaps it is taking longer than expected.
IBM and WellPoint announced this week that they will be [putting Watson to work] in healthcare. [WellPoint] is one of the largest health benefits companies in the United States, with over 70 million people served through its affiliate plans and various subsidiaries. I am one of the development lab advocates for WellPoint, and have been proud to work with the account team to help WellPoint achieve its goals.
This marks the first commercial deployment of IBM Watson. This is a joint effort. IBM will develop the base IBM Watson for healthcare platform, and Wellpoint will then develop healthcare-specific solutions to run on this platform. Watson's ability to analyze the meaning and context of human language, and quickly process vast amounts of information to suggest options targeted to a patient's circumstances, can assist decision makers, such as physicians and nurses, in identifying the most likely diagnosis and treatment options for their patients.
Is this going to put doctors out of business? No. Physicians find it challenging to read and understand hundreds or thousands of pages of text and put them into practice. IBM Watson, on the other hand, can scan through hundreds of millions of pages in just a few seconds to help answer a question or provide recommendations. Together, doctors armed with access to IBM Watson will be able to improve the quality and effectiveness of medical care.
From an insurance point of view, improving the quality of care will help reduce medical mistakes and malpractice lawsuits. This is a win-win for everyone except ambulance-chasing lawyers!
Last week, US President Barack Obama declared September 2011 as "National Preparedness Month". Here is an excerpt of the press release:
Whenever our Nation has been challenged, the American people have responded with faith, courage, and strength. This year, natural disasters have tested our response ability across all levels of government. Our thoughts and prayers are with those whose lives have been impacted by recent storms, and we will continue to stand with them in their time of need. This September also marks the 10th anniversary of the tragic events of September 11, 2001, which united our country both in our shared grief and in our determination to prevent future generations from experiencing similar devastation. Our Nation has weathered many hardships, but we have always pulled together as one Nation to help our neighbors prepare for, respond to, and recover from these extraordinary challenges.
In April of this year, a devastating series of tornadoes challenged our resilience and tested our resolve. In the weeks that followed, people from all walks of life throughout the Midwest and the South joined together to help affected towns recover and rebuild. In Joplin, Missouri, pickup trucks became ambulances, doors served as stretchers, and a university transformed itself into a hospital. Local businesses contributed by using trucks to ship donations, or by rushing food to those in need. Disability community leaders worked side-by-side with emergency managers to ensure that survivors with disabilities were fully included in relief and recovery efforts. These stories reveal what we can accomplish through readiness and collaboration, and underscore that in America, no problem is too hard and no challenge is too great.
Preparedness is a shared responsibility, and my Administration is dedicated to implementing a "whole community" approach to disaster response. This requires collaboration at all levels of government, and with America's private and nonprofit sectors. Individuals also play a vital role in securing our country. The National Preparedness Month Coalition gives everyone the chance to join together and share information across the United States. Americans can also support volunteer programs through www.Serve.gov, or find tools to prepare for any emergency by visiting the Federal Emergency Management Agency's Ready Campaign website at [www.Ready.gov] or [www.Listo.gov].
In the last few days, we have been tested once again by Hurricane Irene. While affected communities in many States rebuild, we remember that preparedness is essential. Although we cannot always know when and where a disaster will hit, we can ensure we are ready to respond. Together, we can equip our families and communities to be resilient through times of hardship and to respond to adversity in the same way America always has -- by picking ourselves up and continuing the task of keeping our country strong and safe.
NOW, THEREFORE, I, BARACK OBAMA, President of the United States of America, by virtue of the authority vested in me by the Constitution and the laws of the United States, do hereby proclaim September 2011 as National Preparedness Month. I encourage all Americans to recognize the importance of preparedness and observe this month by working together to enhance our national security, resilience, and readiness.
IBM has several webinars to help you prepare for upcoming disasters.
Today, September 8, at 4pm EDT, IBM is hosting a [CloudChat on Business Resilience] that will focus on resiliency and continuity in the cloud, a timely topic considering the recent weather events on the East Coast of the U.S. This chat will include Richard Cocchiara, IBM Distinguished Engineer and CTO, IBM Business Continuity and Resiliency Services (@RichCocchiara1) and Patrick Corcoran, Global Business Development, IBM Business Continuity and Resiliency Services (@PatCorcoranIBM).
Don't think you can afford Disaster Recovery planning? Next week, September 13, I will join a few other experts to discuss freeing up much-needed funds from your tight IT budget by being more efficient. The Webinar [Taming Data Growth Made Easy] is part of IBM's "IT Budget Killer" series.
Lastly, on September 21, IBM will host the Webinar [Planning for Disaster Recovery in a Power Environment: Best Practices to Protect Your Data]. This will cover the principal lessons learned from disasters like Hurricane Katrina and the World Trade Center, local and regional considerations for Disaster Recovery planning, planning Recovery Time Objectives (RTOs), and best practices for automation, mirroring and multi-site operational efficiencies. A customer case study from the University of Rochester Medical Center (URMC) will help reinforce the concepts, with a discussion of how a major hospital ensures Business Continuity via Contingency Planning using IBM Power Systems. The speakers include Steve Finnes, World Wide Offering Manager for IBM Power Systems; Vic Peltz, Consulting IT Architect for WW Business Continuance Technical Marketing; and Rick Haverty, Director of IT Infrastructure at the University of Rochester Medical Center (URMC).
Hopefully, you will find these webinars useful and informative!
We had our first "Future of IT Storage" Lunch-and-Learn here in Indianapolis, IN. We held it at [Harry & Izzy's Restaurant], which looks like it has been in business for quite a while, but actually opened only four years ago. It is the sister restaurant of St. Elmo's next door, which has been running since 1902, so it maintains a sense of that heritage, but with a bit more casual atmosphere.
Please note that in the wake of Hurricane Irene, the [Burlington, MA (Boston Area) event] has been postponed, probably to October or November. We have already notified all the people who signed up, but in case you planned just to show up, I wanted to let you know here in this blog.
Special thanks to Karen Harrison and Kerry Ingram for their help in setting up this event! Also a shout-out to Leanna and Amy, our two waitresses who served us today!
Wrapping up my coverage of the [IBM System Storage Technical University 2011], I attended a few sessions on Friday morning. The last session was Glenn Anderson's "IT Game Changers: the IT Professional's Guide to Becoming a Technology Trailblazer." Glenn used to run the Storage University events, but now is the conference manager for the System z mainframe events.
Glenn organized this talk from lessons from the following books:
Glenn suggested that IT professionals should understand the dissatisfaction with IT that is driving companies to switch over to Cloud Computing. IT professionals should adopt a service-oriented approach, realize the full potential of new disruptive technologies, and know when to "jump the curve" to the next generation of technology. For example, IT professionals should lead the movement to Cloud. If you build your own private cloud, or purchase some time for instances on a public cloud, you will be in a better position to be the "trusted advisor" to IT management.
CIOs should encourage IT to be part of the corporate strategy, but may have to fix the broken IT funding model. The IT department should be a "value center", not a "cost center" as it has traditionally been treated. When treated as a "cost center", IT departments focus only on cost reductions, and not on ways the IT department can help drive revenues, improve customer service, or enhance employee productivity. A well-organized IT department can be a competitive advantage.
Taking a "service-oriented" approach allows IT and Business Process to come together. Often times, IT and business professionals don't communicate well, and this new service-oriented approach can bridge the gap. Service Oriented Architecture [SOA] can help connect existing legacy applications to the new Cloud Computing environment.
IT budgets should consist of two parts: strategic funding for new IT projects, and an operational budget for keeping current applications running. Roughly 45 percent of capital investment in the USA goes toward IT. Too often, the IT department is focused on itself, on technology and reducing costs, and not enough on aligning IT with business transformation. When IT is used in conjunction with a sound business strategy, there can be a significant payoff.
After 550 years, the printing press and printed materials are being pushed from center stage. While other electronic media like radio and television have been around for a while, the internet and digital publishing are constantly available, and represent a shift away from traditional printed materials.
When evaluating new technologies, IT professionals should ask themselves a few questions. Is it easy to use? Does it enable people to connect in new ways? Is it more cost-effective, or tap new sources of revenue? Does it shift power from one player to another? A new intellectual ethic is taking hold. Becoming an IT Game Changer can help stay one step ahead as Cloud Computing and other new IT platforms are adopted.
Continuing my coverage of the [IBM System Storage Technical University 2011], I participated in the storage free-for-all, a long-time tradition that started at the SHARE user group conferences and carried forward to other IT conferences. The free-for-all is a Q&A panel of experts, where anyone can ask any question. These sessions are sometimes called "Birds of a Feather" (BOF). Last year we had two: one focused on Tivoli Storage software, and one covering storage hardware. This year we again had two: one for System x called "Ask the eXperts", and one for System Storage called "Storage Free-for-All". This post covers the latter.
(Disclaimer: Do not shoot the messenger! We had a dozen or more experts on the panel, representing System Storage hardware, Tivoli Storage software, and Storage services. I took notes, trying to capture the essence of the questions, and the answers given by the various IBM experts. I have spelled out acronyms and provided links to relevant materials. The answers from individual IBMers may not reflect the official position of IBM management. Where appropriate, my own commentary will be in italics.)
You are in the wrong session! Go to "Ask the eXperts" session next door!
The TSM GUI sucks! Are there any plans to improve it?
Yes, we are aware that products like IBM XIV have raised the bar for what people expect from graphical user interfaces. We have plans to improve the TSM GUI. IBM's new GUI for the SAN Volume Controller and Storwize V7000 has been well-received, and will be used as a template for the GUIs of other storage hardware and software products. The GUI uses the latest HTML5, Dojo widgets and AJAX technologies, eliminating Java dependencies on the client browser.
Can we run the TSM Admin GUI from a non-Windows host?
IBM has plans to offer this. Most likely, this will be browser-based, so that any OS with a modern browser can be used.
As hard disk drives grow larger in capacity, RAID-5 becomes less viable. What is IBM doing to address this?
IBM is aware of this problem. IBM offers RAID-DP on the IBM N series, RAID-X on the IBM XIV, and RAID-6 on its other disk systems.
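(My own commentary: the math behind this concern is easy to sketch. During a RAID-5 rebuild, every surviving drive must be read end-to-end, so the odds of hitting an unrecoverable read error (URE) grow with drive capacity. Below is a minimal back-of-the-envelope sketch in Python, assuming an illustrative URE rate of 1 error per 10^15 bits read; actual rates vary by drive class, and the 8-drive array is just an example.)

    import math

    # Back-of-the-envelope sketch: probability of hitting at least one
    # unrecoverable read error (URE) while rebuilding a degraded RAID-5 array.
    # Assumes an illustrative URE rate of 1e-15 per bit read; real drives vary.

    def rebuild_ure_probability(drive_tb, drives, ure_per_bit=1e-15):
        """Probability of at least one URE while reading the N-1 surviving drives."""
        bits_read = (drives - 1) * drive_tb * 1e12 * 8   # TB -> bytes -> bits
        return -math.expm1(bits_read * math.log1p(-ure_per_bit))

    for size_tb in (0.3, 1.0, 2.0):
        p = rebuild_ure_probability(size_tb, drives=8)
        print(f"8-drive RAID-5 with {size_tb} TB drives: {p:.1%} URE risk per rebuild")

The same arithmetic is why dual-parity schemes like RAID-6 and distributed schemes like RAID-X can tolerate the error that would break a RAID-5 rebuild.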
TPC licensing is outrageous! What is IBM going to do about it?
About 25 percent of DS8000 disk systems have SSD installed. Now that IBM DS8000 Easy Tier supports "any two" tiers, roughly 50 percent of DS8000 systems have Easy Tier activated. We have no adoption figures yet for Easy Tier on SVC or Storwize V7000.
We have an 8-node SVC cluster, should we put 8 SSD drives into a single node-pair, or spread them out?
We recommend putting a separate solid-state drive in each SVC node, with RAID-1 mirroring between the nodes of a node-pair. By spreading the SSDs across I/O groups, you can reduce node-to-node traffic.
How well has SVC 6.2 been adopted?
The inventory call-home data is not yet available. The only SVC hardware model that does not support this level of software is the 2145-4F2, introduced in 2003. Every model since then can be updated to this level.
Will IBM offer 600GB FDE drives for the IBM DS8700?
Currently, IBM offers 300GB and 450GB 15K RPM drives with the Full-Disk Encryption (FDE) capability for the DS8700, and 450GB and 600GB 10K RPM drives with FDE for the IBM DS8800. IBM is working with its disk suppliers to offer FDE on other disk capacities, and on SSD and NL-SAS drives as well, so that all can be used with IBM Easy Tier.
Is there a reason for the feature lag between the Easy Tier capabilities of the DS8000, and that of the SVC/Storwize V7000?
We have one team for Easy Tier, so they implement it first on DS8000, then port it over to SVC/Storwize V7000.
Does it even make sense to have separate storage tiers, especially when you factor in the cost of SVC and TPC to make it manageable?
It depends! We understand this is a trade-off between cost and complexity. Most data centers have three or more storage tiers already, so products like SVC can help simplify interoperability.
Are there best practices for combining SVC with DS8000? Can we share one DS8000 system across two or more SVC clusters?
Yes, you can share one DS8000 across multiple SVC clusters. The DS8000 has auto-restripe, so consider having two big extent pools. Queue depth ranges from 3 to 60, so aim to have up to 60 managed disks on your DS8000 assigned to SVC; the more managed disks, the better.
The IBM System Storage Interoperation Center (SSIC) site does not seem to be designed well for the SAN Volume Controller.
Yes, we are aware of that. It was designed around traditional Hardware Compatibility Lists (HCL), but storage virtualization presents unique challenges.
How does the 24-hour learning period work for IBM Easy Tier? We have batch processing that runs from 2am to 8am on Sundays.
You can have Easy Tier monitor across this batch job window, and turn Easy Tier management between tiers on and off as needed.
Now that NetApp has acquired LSI's Engenio storage division, is the DS3000 still viable?
Yes, IBM has a strong OEM relationship with both NetApp and LSI, and this continues after the acquisition.
If we have managed disks from a DS8000 multi-rank extent pool assigned to multiple SVC clusters, won't this affect performance?
Yes, possibly. Keep managed disks on separate extent pools if this is a big concern. A Perl script is available to re-balance SVC striped volumes as needed after these changes.
Is the IBM [TPC Reporter] a replacement for IBM Tivoli Storage Productivity Center?
No, it is software, available at no additional charge, that provides additional reporting to those who have already licensed Tivoli Storage Productivity Center 4.1 and above. It will be updated as needed when new versions of Productivity Center are released.
We are experiencing lots of stability issues with SDD, SDD-PCM and SDD-DSM multipathing drivers. Are these getting the development attention they deserve?
IBM's direction is to shift toward native OS-based multipathing drivers.
Is anyone actually thinking of deploying public cloud storage in the near-term?
A few hands in the audience were raised.
None of the IBM storage devices seem to have a [REST API]. Cloud storage providers are demanding this. What are IBM's plans?
IBM plans to offer REST on SONAS. IBM uses SONAS internally for its own cloud storage offerings.
If you ask a DB2 specialist, an AIX specialist, and a System Storage specialist, on how to configure System p and System Storage for optimal performance, you get three different answers. Are there any IBMers who are cross-functional that can help?
Yes, for example, Earl Jew is an IBM Field Technical Support Specialist (FTSS) for both System p and Storage, and can help you with that.
Both Oracle and Microsoft recommend RAID-10 for their applications.
Don't listen to them. Feel free to use RAID-5, RAID-6 or RAID-X instead.
Resizing SVC source volumes forces ongoing FlashCopy or Metro Mirror relationships to be stopped. Does IBM plan to address this?
Currently, you have to stop, resize both source and target, then start the relationship again. Consider getting IBM Tivoli Storage Productivity Center for Replication (TPC-R).
IBM continues to support this for existing clients. For new deployments, IBM offers SONAS and the Information Archive (IA).
When will I be able to move SVC volumes between I/O groups?
You can today, but it is disruptive to the operating system. IBM is investigating making this less disruptive.
Will XIV ever support the mainframe?
It does already, with support for both Linux and z/VM today. For VSE support, use SVC with XIV. For those with the new zBX extension, XIV storage can be used with all of the POWER and x86-based operating systems supported. IBM has no plans to offer direct FICON attachment for z/OS or z/TPF.
Not a question - Kudos to the TSM and ProtecTIER team in supporting native IP-based replication!
Thanks!
When will IBM offer POWER-based models of the XIV, SVC and other storage devices?
IBM's decision to use industry-standard x86 technology has proven quite successful. However, IBM revisits this decision every few years; once again, the most recent review determined that a switch was not worth making. A POWER-based model might not beat the price/performance of the current x86 models, and maintaining two separate code bases would hinder development of new innovations.
We have both System i and System z, what is IBM doing to address the fact that PowerHA and GDPS are different?
IBM TPC-R has a service offering extension to support "IBM i" environments. GDPS plans to support multi-platform environments as well.
This was a great interactive session. I am glad everyone stayed late Thursday evening to participate in this discussion.
IBM Storage Strategy for the Smarter Computing Era
I presented this session on Thursday morning. It is a session I give frequently at the IBM Tucson Executive Briefing Center (EBC). IBM launched the [Smarter Computing initiative at IBM Pulse conference]. My presentation covered the role of storage in Business Analytics, Workload Optimized Systems, and Cloud Computing.
Layer 8: Cloud Computing and the new IT Delivery Model
Ed Batewell, IBM Field Technical Support Specialist, presented this overview on Cloud Computing. The "Layer 8" is a subtle reference to the [7-layer OSI Model] for networking protocols. Ed cited insights from the [2011 IBM Global CIO Survey]. Of the 3,000 companies surveyed, 60 percent plan to use or deploy clouds. In the USA, 70 percent of CIOs have significant plans for cloud within the next 3-5 years. These numbers are double the statistics gleaned from the 2009 Global CIO survey. Cloud is one of IBM's big four initiatives, expected to generate $7 Billion USD in annual revenue by 2015.
IBM is recognized in the industry as one of the "Big 5" cloud vendors (Google, Yahoo, Microsoft, and Amazon round out the rest). As such, IBM has contributed to the industry a set of best practices known as the [Cloud Computing Reference Architecture (36-page document)]. As is typical for IBM, this architecture is end-to-end, covering the three main participants in successful cloud deployments:
Consumers: the people and systems that use cloud computing services
Providers: the people, infrastructure and business operations needed to deliver IT services to consumers
Developers: the people and their development tools that create apps and platforms for cloud computing
IBM is working hard to eliminate all barriers to the adoption of Cloud Computing. [Mirage image management] can patch VM images offline to address "Day 0" viruses. [Hybrid Cloud Integrator] can help integrate new Cloud technologies with legacy applications. [IBM Systems Director VMcontrol] can manage VM images from z/VM on the mainframe, to PowerVM on UNIX servers, to VMware, Microsoft, Xen and KVM on x86 servers. IBM's [Cloud Service Provider Platform (CSP2)] is designed for telecoms to offer Cloud Computing services. IBM CloudBurst is a "cloud in a can": an optimized stack of servers, storage and switches that can be installed in five days and comes in various "tee-shirt sizes" (Small, Medium, Large and Extra Large), depending on how many VMs you want to run.
Ed mentioned that companies building their own traditional IT applications and environments, in an effort to compete against cost-effective clouds, reminded him of Thomas Thwaites' project of building a toaster from scratch. You can watch the [TED video, 11 minutes]:
An interesting project is [Reservoir], in which IBM is working with other industry leaders to develop a way to seamlessly migrate VMs from one location to another, globally, without requiring shared storage, SAN zones or Ethernet subnets. This is similar to how energy companies buy and sell electricity to each other as needed, or the way telecommunications companies allow roaming across each other's networks.
IBM System Networking - Convergence
Jeff Currier, IBM Executive Consultant for the new IBM System Networking group, presented this session on network convergence. Storage is expected to grow 44x, from 0.8 [Zettabytes] in 2009 to 35 Zettabytes by the year 2020, so the role of the network is growing in importance. IBM refers to this converged, lossless Ethernet network as "Converged Enhanced Ethernet" (CEE), while Cisco uses the term "Data Center Ethernet" (DCE), and the rest of the industry uses "Data Center Bridging" (DCB).
To make this happen, we need to replace the Spanning Tree Protocol [STP], which prevents loops in multi-hop network configurations, with a new Layer 2 Multipathing (L2MP) protocol. The two contenders for the title are Shortest Path Bridging (IEEE 802.1aq) and Transparent Interconnect of Lots of Links (IETF TRILL).
All roads lead to Ethernet. While FCoE has not caught on as fast as everyone hoped, iSCSI has benefited from all the enhancements to the Ethernet standard. iSCSI works on both lossy and lossless versions of Ethernet, and seems to be the preferred choice for greenfield deployments at small and medium-sized businesses (SMB). Larger enterprises continue to use Fibre Channel (FCP and FICON), but might use single-hop FCoE from the servers to top-of-rack switches. Both iSCSI and FCoE scale well, but FCoE is considered more efficient.
IBM has a strategy, and is investing heavily in these standards, technologies, and core competencies.
I have been working on Information Lifecycle Management (ILM) since before the phrase was coined. There were several break-out sessions on the third day of the [IBM System Storage Technical University 2011] related to new twists on ILM.
The Intelligent Storage Service Catalog (ISSC) and Smarter ILM
Hans Ammitzboll, Solution Rep for IBM Global Technology Services (GTS), presented an approach to ILM focused on using different storage products for different tiers. Is this new? Not at all! The phrase "Information Lifecycle Management" was originally coined in the early 1990s by StorageTek to help sell automated tape libraries.
Unfortunately, disk-only vendors started using the term ILM to refer to disk-to-disk tiering inside the disk array. Hans feels it does not make sense to put the least expensive penny-per-GB 7200 RPM disk inside the most expensive enterprise-class high-end disk arrays.
IBM GTS manages not only IBM's internal operations, but also the IT operations of hundreds of other clients. To help manage all this storage, they developed software to supplement the reporting, monitoring and movement of data from one tier to another.
The Intelligent Storage Service Catalog (ISSC) can save up to 80 percent of the planning time for managing storage. What did people use before? Hans poked fun at chargeback and showback systems that "offer savings" but don't actually "impose savings". He referred to these as name-and-shame reports, publicly listing the top 10 offenders of storage usage.
His storage pyramid involves a variety of devices: IBM DS8000, SVC and XIV at the high end, midrange disk like the Storwize V7000 in the middle, and blended disk-and-tape solutions like SONAS and the Information Archive (IA) for the lower tiers.
To help people understand these concepts, IBM developed the [Thinking Worlds Player] game.
Managing your Data with Policies on SONAS
Mark Taylor, IBM Advanced Technical Services, presented the policy-driven automation of IBM's Scale-Out NAS (SONAS). A SONAS system can hold 1 to 256 file systems, and each file system is further divided into fileset containers. Think of fileset containers as 'tree branches' of the file system.
SONAS supports policies for file placement, file movement, and file deletion. These are SQL-like statements that are then applied to specific file systems in the SONAS. Input variables include date last modified, date last accessed, file name, file size, fileset container name, user id and group id. You can choose to have the rules be case-sensitive or case-insensitive. The rules support macros. A macro pre-processor can help simplify calculations and other definitions that are used repeatedly.
Each file system in SONAS consists of one or more storage pools. For file systems with multiple pools, file placement policies can determine which pool to place each file. Normally, when a set of files are in a specific sub-directory on other NAS systems, all the files will be on the same type of disk. With SONAS, some files can be placed on 15K RPM drives, and other files on slower 7200 RPM drives. This file virtualization separates the logical grouping of files from the physical placement of them.
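To make this concrete, here is a sketch of what placement rules might look like, written in the GPFS-style SQL-like policy syntax that SONAS policies are based on; the pool names and matching patterns below are hypothetical:

    /* Hypothetical placement rules; pool names 'fast' and 'slow' are made up. */
    RULE 'logs-to-slow'  SET POOL 'slow' WHERE UPPER(NAME) LIKE '%.LOG'
    RULE 'small-to-fast' SET POOL 'fast' WHERE FILE_SIZE < 1048576
    RULE 'default'       SET POOL 'slow'

Rules are evaluated top-down, and the first matching WHERE clause determines the pool, which is how some files in a directory can land on 15K RPM drives while their neighbors land on 7200 RPM drives.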
Once files are placed, other policies can be written to migrate from one disk pool to another, migrate from disk to tape, or delete the file. Migrating from one disk pool to another is done by relocation. The next time the file is accessed, it will be accessed directly from the new pool. When migrating from disk to tape, a stub is left in the directory structure metadata, so that subsequent access will cause the file to be recalled automatically from tape, back to disk. Policies can determine which storage pool files are recalled to when this happens.
Migrating from disk to tape involves sending the data from SONAS to an external storage pool manager, such as an IBM Tivoli Storage Manager (TSM) server connected to a tape library. SONAS supports pre-migration, which allows the data to be copied to tape but left on disk until space needs to be freed up. For example, a policy with THRESHOLD(90,70,50) will kick in when the file system is 90 percent full: files will be migrated (moved) to tape until usage drops to 70 percent, and then files will be pre-migrated (copied) to tape until it reaches 50 percent.
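A corresponding migration policy might look like the following sketch, again in the same SQL-like style; the external pool name and interface script path are hypothetical:

    /* Hypothetical migration policy; 'tape' pool and script path are made up. */
    RULE EXTERNAL POOL 'tape' EXEC '/opt/sonas/hsm-interface'
    RULE 'spill' MIGRATE FROM POOL 'fast' THRESHOLD(90,70,50) TO POOL 'tape'
         WHERE DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME) > 30

Here, files untouched for 30 days start moving to tape once the pool hits 90 percent full, matching the THRESHOLD(90,70,50) behavior described above.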
Policies to delete files can apply to both disk and tape pools. Files deleted on tape remove the stub from the directory structure metadata and notify the external storage pool manager to clean up its records for the tape data.
If this all sounds like a radically new way of managing data, it isn't. Many of these functions are based on IBM's Data Facility Storage Management Subsystem (DFSMS) for the mainframe. In effect, SONAS brings mainframe-class functionality to distributed systems.
Understanding IBM SONAS Use Cases
For many, the concept of a scale-out NAS is new. Stephen Edel, IBM SONAS product offering manager, presented a variety of use cases where SONAS has been successful.
First, let's consider backup. IBM SONAS has built-in support for Tivoli Storage Manager (TSM), and also supports the NDMP industry-standard protocol for use with Symantec NetBackup, CommVault Simpana, and EMC Legato NetWorker. While many NAS solutions support NDMP, IBM SONAS can support up to 128 sessions per interface node, and up to 30 interface nodes, for parallel processing. SONAS has a high-speed file scan to identify files to be backed up, and will pre-fetch small files into cache to speed up the backup process. A SONAS system can support up to 256 file systems, and each file system can be backed up on its own schedule if you like. Different file systems can be backed up to different backup servers.
SONAS also has anti-virus support, with your choice of Symantec or McAfee. An anti-virus scan can be run on demand, as needed, or as files are individually accessed. When a Windows client reads a file, SONAS determines whether it has already been scanned with the most recent anti-virus signatures, and if not, scans it before allowing the file to be read. SONAS will also scan newly created files.
Successful SONAS deployments addressed the following workloads:
content capture including video capture
content distribution
file production/rendering
high performance computing, research and business analytics
"Cheap and Deep" archive
worldwide information exchange and geographically distant collaboration
cloud hosting
SONAS is selling well in Government, Universities, Healthcare, and Media/Entertainment, but is not limited to these industries. It can be used for private cloud deployments and public cloud deployments. Having centralized management for Petabytes of data can be cost-effective either way.
IBM SONAS brings the latest technologies to deliver Smarter ILM for a variety of workloads and use cases.
I am pleased with the turnout for last week's Infoboom webinar on [The Future of Storage]. The 55-minute replay is available on Infoboom, and the slide deck can be downloaded from the [IBM Expert Network].
I mentioned that I was going to Indianapolis and Boston next week to give lectures on this topic. Here are the details:
Indianapolis - September 7, 2011
The Future of Storage with Tony Pearson Luncheon Briefing
Harry & Izzy's
153 South Illinois Street
Indianapolis, IN 46225
Time: 11am to 1:30pm
[Registration Page]
Boston - September 8, 2011
The Future of Storage with Tony Pearson Briefing and Networking Reception
The Capital Grille
10 Wayside Road
Burlington, MA 01803
Time: 4:30pm to 6:30pm
[Registration Page]
I will also be in San Francisco for Oracle OpenWorld (Oct 2-6), Auckland New Zealand (Nov 9-11), and Melbourne Australia (Nov 15-17).
Storage Networking is part of the IBM System Storage team. There were several break-out sessions on the third day at the [IBM System Storage Technical University 2011] related to storage networking.
SAN Best Practices
I always try to catch a session from Jim Blue, who works in our "SAN Central" center of competency team. This session was a long list of useful hints and tips, based on his many years of experience helping clients.
SAN Zoning works by inclusion, limiting the impact of failing devices. The best approach is to zone by individual initiator port. The default policy for your SAN zoning should be "deny".
Ports should be named to identify who, what, where and how.
While many people know not to mix disk and tape devices on the same HBA, Jim also recommends not mixing dissimilar disks, test and production traffic, or FCP and FICON.
The sweet spot is FOUR paths. Too many paths can impact performance.
When making changes to redundant fabrics, make changes to the first fabric, then allow sufficient time before making the same changes to the other fabric.
Use software tools like Tivoli Storage Productivity Center (Standard Edition) to validate all changes to your SAN fabric.
Do not mix 62.5 and 50.0 micron technology.
Use port caps to disable inactive ports. In one amusing anecdote, he mentioned an uncovered port that was hit by sunlight every day, sending error messages that took a while to figure out.
Save your SAN configuration to non-SAN storage for backup.
Consider firmware about two months old to be stable.
Rule of thumb for estimating IOPS: 75-100 IOPS per 7200 RPM drive, 120-150 IOPS per 10K RPM drive, and 150-200 IOPS per 15K RPM drive (see the sketch after this list).
Decide whether your shop does just-in-time or just-in-case provisioning. Just-in-time acquires additional capacity on demand as needed; just-in-case over-provisions to avoid last-minute scrambling.
Avoid oversubscribing your inter-switch links (ISL). Aim for around 7:1 to 10:1 ratio.
Don't go cheap on bandwidth between sites for long-distance replication.
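Jim's IOPS rule of thumb is easy to turn into a quick sizing estimate. Here is a minimal sketch in Python; the per-drive ranges come from the list above, while the example drive mix is made up:

    # Rough back-end IOPS estimate from drive counts, using the per-drive
    # rules of thumb above. The example drive mix below is hypothetical.
    IOPS_RANGES = {
        "7200rpm": (75, 100),
        "10k": (120, 150),
        "15k": (150, 200),
    }

    def estimate_iops(drive_counts):
        """Return (low, high) aggregate IOPS for a {drive_type: count} dict."""
        low = sum(IOPS_RANGES[t][0] * n for t, n in drive_counts.items())
        high = sum(IOPS_RANGES[t][1] * n for t, n in drive_counts.items())
        return low, high

    low, high = estimate_iops({"7200rpm": 48, "10k": 96, "15k": 32})
    print(f"Estimated back-end capability: {low:,} to {high:,} IOPS")

Estimates like this are only a starting point; RAID write penalties and cache behavior can shift the real numbers considerably.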
Next Generation Network Fabrics - Strategy and Innovations
Mike Easterly, IBM Director of Global Field Marketing, presented IBM System Networking strategy, in light of IBM's recent acquisition of Blade Network Technologies (BNT). BNT is used in 350 of the Fortune 500 companies, and is ranked #2 behind Cisco in sales of non-core Ethernet switches (based on number of units sold).
Based on a recent survey, companies are upgrading their Ethernet networks for a variety of reasons:
56 percent for Live Partition Mobility and VMware vMotion
45 percent for integrated compute stacks, like IBM CloudBurst
43 percent for private, public and hybrid cloud computing deployments
40 percent for network convergence
Many companies adopt a three-level approach, with core directors, distribution switches, and then access switches at the edge that connect servers and storage devices. IBM's BNT allows you to flatten the network to lower latency by collapsing the access and distribution levels into one.
IBM's strategy is to focus on BNT for the access/distribution level, and to continue its strategic partnerships for the core level.
IBM BNT provides better price/performance and lower energy consumption. To help with hot-aisle/cold-aisle rack deployments, IBM BNT provides both F and R models. F models have ports on the front, and R models have ports in the rear.
IBM BNT supports virtual fabric and hardware-offload iSCSI traffic, and is future-enabled for FCoE. Support for TRILL (Transparent Interconnect of Lots of Links) and OpenFlow will be delivered through software updates to the switches.
While the Cisco Nexus 1000v is focused on VMware Enterprise Plus, IBM BNT's VMready works with VMware, Hyper-V, Linux KVM, Xen, OracleVM, and PowerVM. This allows a single pane of glass to manage VMready and ESX vSwitches.
In preparation for Converged Enhanced Ethernet (CEE), IBM BNT will provide full 40GbE support sometime next year, and offer switches that support 100GbE uplinks. IBM offers extended length cables, including passive SFP+ DAC at 8.5 meters, and 10Gbase-T Cat7 cables up to 100 meters.
Inter-datacenter Workload Mobility with VMware vSphere and SAN Volume Controller (SVC)
This session was co-presented by Bill Wiegand, IBM Advanced Technical Services, and Rawley Burbridge, IBM VMware and midrange storage consultant. IBM is the leader in storage virtualization with SVC, and is the leading reseller of VMware.
Like MetroCluster on IBM N series, or EMC's VPLEX Metro, the IBM SAN Volume Controller can support a stretched cluster across distance that allows virtual machines to move seamlessly from one datacenter to another. This is a feature IBM introduced with SVC 5.1 back in 2009. This can be used for PowerVM Live Partition Mobility, VMware vMotion, and Hyper-V Quick Migration.
SVC stretched cluster can help with both Disaster Avoidance and Disaster Recovery. For Disaster Avoidance, in anticipation of an outage, VMs can be moved to the secondary datacenter. For Disaster Recovery, additional automation, such as VMware High Availability (HA), is needed to restart the VMs at the secondary datacenter.
The SVC stretched cluster is further improved with a feature called Volume Mirroring (formerly vDisk Mirroring), which creates two physical copies of one logical volume. To the VMware ESX hosts there is only one volume, regardless of which datacenter it is in. The two physical copies can be on any kind of managed disk, as there is no requirement for or dependency on copy services in the back-end storage arrays.
Another recent improvement is the idea of spreading the three quorum disks across three different locations or "failure domains": one in each datacenter, and a third in a separate building, perhaps somewhere in between the other two.
Of course, there are regional disasters that could affect both datacenters. For this reason, SVC stretched-cluster volumes can be replicated to a third location up to 8,000 km away. This can be done with any back-end disk arrays since, again, there is no requirement for copy services from the managed devices. SVC takes care of it all.
Networking is going to be very important for a variety of transformational projects going forward in the next five years.
After the amount of flak Jon Toigo had to endure for not giving advance notice of his upcoming webcast, I thought I had better remind people about my own webinar happening next Tuesday, August 23.
So here's the scoop: next Tuesday I will be presenting [The Future of Storage], August 23, 1pm to 2pm EDT. You can register to attend at the [Infoboom Registration Page]. Infoboom is a social community for business and IT leaders of small and midsize businesses, brought to you by IBM.
But that's not all! After the webinar, I will then travel to various cities for face-to-face lectures. Here are the first two:
September 7 - Indianapolis
September 8 - Boston area
If you are near either of these two locations, contact your local IBM storage specialist or IBM business partner to participate.
Since the [IBM System Storage Technical University 2011] runs concurrently with the System x Technical University, attendees are allowed to mix-and-match. I attended several presentations regarding server virtualization and hypervisors.
Hypervisor Architecture
Matt Archibald is an IT Management Consultant in IBM's Systems Agenda Delivery team. He started with a history of hypervisors, from IBM's early CP/CMS in 1967 through the just-announced VMware vSphere 5.
He explained that there are three types of Hypervisor architectures today:
Type 1 - often referred to as "Bare Metal" runs directly on the server host hardware, and allows different operating system virtual machines to run as guests. IBM's System z [PR/SM] and [PowerVM] as well as the popular VMware ESXi are examples of this type.
Type 2 - often referred to as "Hosted" runs above an existing operating system, and allows different operating system virtual machines to run as guests. The popular [Oracle/Sun VirtualBox] is an example of this type.
OS Containers - runs above an existing operating system base, and allows multiple "guests" that all run the same operating system as the base. This affords some isolation between applications. [Parallels Virtuozzo Containers] is an example of this type.
The dominant architecture is Type 1. For x86, IBM is the number one reseller of VMware. VMware recently announced [vSphere 5], which changes its licensing model from CPU-based to memory-based. For example, a virtual machine with 32 virtual CPUs and 1TB of virtual RAM (vRAM) would cost over $73,000 per year to license with the VMware "Enterprise Plus" software. The only plus side to this new licensing is that the "memory" entitlement transfers to the remote location during Disaster Recovery.
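To see where a number like that comes from, here is an illustrative sketch of the memory-based licensing arithmetic. The 48GB vRAM entitlement and $3,495 list price per "Enterprise Plus" CPU license are my assumptions based on the launch-time terms (VMware later raised the entitlements), so treat the output as approximate:

    import math

    # Illustrative sketch of vSphere 5 memory-based licensing arithmetic.
    # ASSUMPTIONS: ~48 GB vRAM entitlement and ~$3,495 list price per
    # "Enterprise Plus" CPU license (launch-time terms, later revised upward).
    VRAM_ENTITLEMENT_GB = 48
    LICENSE_LIST_PRICE_USD = 3495

    def licenses_needed(total_vram_gb):
        """Licenses required so the pooled vRAM entitlement covers the configured vRAM."""
        return math.ceil(total_vram_gb / VRAM_ENTITLEMENT_GB)

    n = licenses_needed(1024)   # the 1 TB vRAM virtual machine from the example
    print(f"{n} licenses, roughly ${n * LICENSE_LIST_PRICE_USD:,} at list price")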
"Xen is dead." was the way Matt introduced the section discussing Hybrid Type-1 hypervisors like Xen and Hyper-V. These run bare-metal, but require networking and storage I/O to be processed by a single bottleneck partition referred to as "Dom 0". As such, this hybrid approach does not scale well on larger multi-sock host servers. So, his Xen-is-dead message was referring to all Hybrid-based Hypervisors including Hyper-V, not just those based on Xen itself.
The new up-and-comer is "Linux KVM". Last year, in my blog post about [System x KVM solutions], I mentioned the confusion over the KVM acronym, which is used with two different meanings. Many people use KVM to refer to Keyboard-Video-Mouse switches that allow access to multiple machines; IBM has renamed these switches Local Console Managers (LCM) and Global Console Managers (GCM). This year, the System x team has adopted "Linux KVM" to refer to the second meaning, the [Kernel-based Virtual Machine] hypervisor.
Linux KVM is not a product, but an open-source project. As such, it is built into every Linux kernel. Red Hat has created two specific deliverables under the name Red Hat Enterprise Virtualization (RHEV):
RHEV-H, a tiny ESXi-like bare-metal hypervisor that fits in 78MB, making it small enough to run from a USB stick, CD-ROM or memory chip.
RHEV-M, a vCenter-like management software to manage multiple virtual machines across multiple hosts.
Personally, I run RHEL 6.1 with KVM on my IBM laptop as my primary operating system, with a Windows XP guest image to run a few Windows-specific applications.
A complaint about the current RHEV 2.2 release from Linux fans is that RHEV-M requires a Windows server and uses Windows PowerShell for scripting. The next release of RHEV is likely to provide a Linux-based option for the management server.
Of the various hypervisors evaluated, KVM appears poised to offer the best scalability for multi-socket host machines. The next release is expected to support up to 4096 threads, 64TB of RAM, and over 2000 virtual machines. Compare that to VMware vSphere 5, which supports only 160 threads, 2TB of RAM and up to 512 virtual machines.
Linux KVM Overview
Matt also presented a session focused on Linux KVM. While IBM is the leading reseller of VMware for the x86 server platform, it has chosen Linux KVM to run all of its internal x86 Cloud Computing facilities, as it can offer 40 to 80 percent savings in Total Cost of Ownership (TCO).
Linux KVM can run unmodified Windows and Linux guest operating systems with less than 5 percent overhead. Since KVM is built into the Linux kernel, any certification testing of the kernel automatically benefits KVM as well. KVM takes advantage of modern CPU virtualization extensions like Intel VT and AMD-V.
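Since KVM depends on these extensions, a quick way to check whether a Linux host can use it is to look for the corresponding CPU flags. A small sketch (Linux-only; Intel VT shows up as "vmx" and AMD-V as "svm" in /proc/cpuinfo):

    # Check /proc/cpuinfo for the hardware virtualization flags KVM relies on.
    # Intel VT appears as "vmx", AMD-V as "svm". Linux-only sketch.
    def has_kvm_hardware_support(cpuinfo_path="/proc/cpuinfo"):
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags") and {"vmx", "svm"} & set(line.split()):
                    return True
        return False

    print("Hardware virtualization available:", has_kvm_hardware_support())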
For high availability, in the event that a host fails, KVM can restart the guest images on other KVM hosts. RHEV offers a "prioritized restart order", which allows mission-critical images to be started before less important ones.
RHEV also provides "Virtual Desktop Infrastructure", known as VDI. This allows a lightweight client with a browser to access an OS image running on a KVM host. Matt was able to demonstrate this with Firefox browser running on his Android-based Nexus One smartphone.
RHEV also adds features that make it ideal for cloud deployments, including hot-pluggable CPU, network and storage; Service Level Agreement monitoring for CPU, memory and I/O resources; storage live migration to move raw image files while guests are running; and a self-service user portal.
IBM has been doing server virtualization for decades. When I first started at IBM in 1986, I was doing z/OS development and testing on z/VM guest images. Later, around 1999, I started working with the "Linux on z" team, running multiple Linux images under PR/SM and z/VM. While the server virtualization solutions most people are familiar with (VMware, Hyper-V, Xen) have only been around for the last five years or so, IBM has a much deeper understanding and a longer heritage. This helps set IBM apart from the competition when helping clients.