Last week, I presented "An Introduction to Cloud Computing" for two hours to the local Institute of Management Accountants [IMA] for their Continuing Professional Education [CPE]. Since I present IBM's leadership in Cloud Storage offerings, I have had to become an expert in Cloud Computing overall. The audience was a mix of bookkeepers, accountants, auditors, comptrollers, CPAs, and accounting teachers.
I have posted my charts on Slideshare.Net:
Here is a sample of the questions I took during and after my presentation:
If I need to shut down a host machine, do I lose all my virtual machines as well?
No, it is possible to seamlessly move virtual machines from one host to another. If you need to shut down a host machine, move all the VMs to other hosts; then you can shut down the empty host without impacting the business.
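The scheduling idea behind that evacuation step can be sketched in a few lines. This is a toy model, not any particular hypervisor's implementation; the host and VM names are invented, and real products (VMware vMotion, PowerVM Live Partition Mobility) do this with live migration.

```python
# Toy sketch: before powering off a host, move each of its VMs to the
# least-loaded surviving host. Host and VM names are made up.
hosts = {
    "host-a": ["vm1", "vm2"],
    "host-b": ["vm3"],
    "host-c": [],
}

def evacuate(hosts, doomed):
    """Move every VM off `doomed` onto the least-loaded other host."""
    for vm in hosts[doomed][:]:
        target = min((h for h in hosts if h != doomed),
                     key=lambda h: len(hosts[h]))
        hosts[doomed].remove(vm)
        hosts[target].append(vm)
    return hosts

evacuate(hosts, "host-a")
print(hosts["host-a"])  # [] -- host-a is now empty and safe to power off
```

The same rebalancing logic runs in reverse when the host comes back online.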
Does the SaaS provider have to build their own app, can they not buy an app and then rent it out?
Yes, but then they won't have competitive differentiation, and the software developer they buy from will want a big cut of the action. SaaS providers that build their own applications can keep more of the profits for themselves.
How do backups work in cloud computing? Do I have to contact someone at the cloud computing company to find the backup tape?
Large datacenters often keep the most recent backups on disk and older versions on tape in automated tape libraries that can fetch your backup in less than two minutes. Because of this, there is no need to talk to anyone: you can schedule or invoke your own backups, and often perform the recovery yourself using self-service tools.
Last month, my sister tried to rent a car during the week of the Tucson Gem Show, but they were out of the cars she wanted to drive. Could this happen with Cloud Computing?
Not likely. With rental cars, the cars have to be physically in Tucson to rent them; the rental companies could have brought cars down from Phoenix to satisfy demand. With Cloud Computing, everything is accessible over the global network, so you are not limited to the cloud providers nearest you.
Is there a reason why Amazon Web Services (AWS) charges more for a Windows image than a Linux image?
Yes, Amazon and Microsoft have a patent cross-licensing agreement where Amazon pays Microsoft for the privilege of offering Windows-based images on their EC2 cloud infrastructure. It just makes business sense to pass those costs on to the consumer. Linux is a free open source operating system, and is often the better choice.
So if we rent a machine from Amazon, they send it to my accounting office? What exactly am I getting for 12 cents per hour?
No. The computer remains in their datacenter. You get a virtual machine with a 1.2 GHz Intel processor, 1700 MB of RAM, and 160 GB of hard disk space, with a Windows operating system running on it. It is comparable to a machine you could buy at the local BestBuy, but instead of running in the next room, it runs in a datacenter somewhere else in the United States, with electricity and air conditioning provided.
You access it remotely from your desktop or laptop PC.
Why would I ever rent more than one computer?
It depends on your workload. For example, Derek Gottfrid at the New York Times needed to convert 11 million articles from TIFF format to PDF format so that he could put them up on the web. This would have taken him months on a single computer, so he rented 100 computers and got the entire set converted in 24 hours, for a cost of about $240. See the articles [Self-Service, Prorated, Super Computing] and [TimesMachine] for details.
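The arithmetic behind that example is worth spelling out. The per-machine hourly rate below is an assumption chosen to match the roughly $240 total reported; actual EC2 pricing varied by instance type.

```python
# Back-of-the-envelope arithmetic for the New York Times conversion job.
# rate_cents_per_hour is an assumed figure, not a published price list.
machines = 100
hours = 24
rate_cents_per_hour = 10                 # assumed: 10 cents/hour/machine

total_cost = machines * hours * rate_cents_per_hour / 100.0
machine_hours = machines * hours         # 2400 machine-hours of work

print(total_cost)                        # 240.0 dollars
print(machine_hours / 24 / 30.0)         # ~3.3 months on one machine
```

The same 2400 machine-hours cost the same whether you rent one machine for 100 days or 100 machines for one day, which is the whole appeal of pay-per-use computing.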
What about throughput? Won't I need to run cables from my accounting office to this cloud computing data center?
You will need connectivity, most likely from connections provided by your local telephone or cable company, or through the Internet. Certainly, there can be cases where direct privately-owned fiber optic cables, known as "dark fiber", can directly connect consumers to local Cloud service providers, for added security.
What about medical records? Will Cloud Computing help the Healthcare industry?
Yes, hospitals are finding that digitizing their records greatly reduces costs. IBM offers the Grid Medical Archive Solution [GMAS] as a private cloud storage solution to store X-ray images and other electronic medical records on disk and tape, and these records can be accessed from multiple hospitals and clinics, wherever the doctor or patient happens to be.
The advantage of personal computers was individualization, I could put on my own choices of software, and customize my own settings, won't we lose this with Cloud Computing?
Yes, to some extent, but customized software and settings cost companies millions of dollars in help desk calls. Cloud Computing attempts to provide some standardization, reducing the effort needed to support IT operations.
Won't putting all the computers into a big datacenter make them more vulnerable to hackers?
Security is a well-known concern, but this is being addressed with encryption, access control lists, multi-tenancy isolation, and VPN connections.
My daughter has a BlackBerry or iPod or something, and when we mentioned that someone in Phoenix wore a monkey suit to avoid photo-radar speed cameras, she was able to pull up a picture on her little hand-held thing, is this the future?
Yes, mobile phones and other hand-held devices now have internet access to take advantage of Cloud Computing services. People will be able to access the information they need from wherever they happen to be. (You can see the picture here: [Man Dons Mask for Speed-Camera Photos])
IBM offers a variety of Cloud Computing services, as well as customized solutions and integrated systems that can be deployed on-premises behind your corporate firewall. To learn more, go to [ibm.com/cloud].
The second speaker was local celebrity Dan Ryan presenting the financials for the upcoming [Rosemont Copper] mining operations. Copper is needed for emerging markets, such as hybrid vehicles and wind turbines. Copper is a major industry in Arizona.
technorati tags: IBM, Cloud Computing, IMA, CPE, IaaS, PaaS, SaaS, Amazon, AWS, Linux, Eucalyptus, Ubuntu, Rosemont Copper
The last keynote session of the [Oracle OpenWorld 2011] conference was Oracle making a few major announcements.
Steve Miranda, Senior VP for Oracle Applications, explained the new "Fusion 11g Apps" which are now generally available. Basically, they took all the scattered applications they have from acquisitions of PeopleSoft, JD Edwards, Siebel and so on, and re-wrote them to industry-standard Java so that they would all run either on-premise or in the Cloud. The Enterprise Apps come in seven categories: Financials like General Ledger and Payroll; Human Capital Management (HCM) formerly known as Human Resources; Supply Chain Management (SCM); Customer Relationship Management (CRM); Governance Risk and Compliance (GRC); Procurement; and Project/Portfolio Management (PPM). Oracle also has "Industry Apps" for specific verticals.
All of these apps have "embedded BI" (business intelligence), such as dashboards, multi-dimensional calculations, decision support, and real-time optimization. This is intended to help the end-user answer four questions:
- What do you need to do today?
- How do you get it done?
- What do you need to know today?
- Who can help you?
Larry Ellison, Oracle CEO, said that it took six years to rewrite all the Fusion Apps. They used an "agile" development model with over 200 early adopters to ensure that these applications were successful. They were under a "controlled release program", but now that is over and the applications are generally available. Larry indicated that these applications were developed under the concepts of Service Oriented Architecture [SOA], which neither Salesforce.com nor SAP R3 have.
(This made me chuckle. SOA was initially developed by IBM and Microsoft, but is now an industry standard. There is no reason to develop software that isn't SOA.)
Following the IBM model, Oracle has built-in the security at the OS, Database and Middleware layer, rather than in each application. As IBM has understood for several decades, a secure infrastructure is the way to go so that all applications are secure.
With all these Fusion Apps now re-written so that they work on industry-standard Java (J2EE, actually), allowing them to run either on-premise or out on the Cloud, Larry Ellison said "I guess we need a Cloud!" This started his announcement of the "Oracle Public Cloud" [OPC]. OPC has both PaaS and SaaS. The PaaS would offer VM instances with support for database and Java services. The SaaS would be all the Fusion Apps rented on the "as-a-service" model. Rather than force everyone to Oracle 11g, you can run any Oracle database on OPC, and you can run any Java or J2EE application on the OPC.
Your data is portable. Larry is pro-choice, and wants people to be able to move from any cloud to any cloud. Since it's based on industry-standard Java, applications can move seamlessly between OPC, Amazon EC2 and IBM SmartCloud. IBM has been a major force behind [Open Cloud Standards], so it is always good that other major vendors follow suit.
He quoted [someone as saying "Beware of False Clouds"]. This was Salesforce.com CEO Marc Benioff's attack against all "Private Cloud" IT vendors. Larry twisted this to say he agrees: "True Clouds" are based on open industry standards, and "False Clouds" are vendor lock-in. OPC is based on Java, J2EE, XML, BPEL and Ruby on Rails, whereas Salesforce.com is based on proprietary Heroku and APEX. He called Salesforce.com the "Roach Motel of Cloud Computing" ... you can check in, but you can't check out.
OPC plans to offer some "data sources", including a Dun & Bradstreet news feed, Twitter, Facebook and other social networks. It is based on a monthly subscription using a self-service portal. The resources are elastic, with capacity delivered on demand. He claims that Salesforce.com is rate-limited, and cancels long-running jobs if they consume too many resources. Larry said OPC would never do that.
Larry said that there are private-only offerings like SAP R3, and public-only offerings like Salesforce.com, Workday, and Taleo, but Oracle instead has adopted the IBM model of supporting choice between private, public and hybrid clouds.
Larry then attacked "Multi-tenancy", specifically, the idea that SaaS providers often use a single database instance, but then create a column to identify which records belong to which tenants. He said this was state-of-the-art 15 years ago, but is a bad idea now. Too risky. Instead, Larry's OPC has unique database instances for each tenant through virtualization.
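The shared-schema pattern he is criticizing can be sketched in a few lines. This is a hedged illustration only; the table, column, and tenant names are invented, not drawn from any SaaS vendor's actual schema.

```python
# Minimal sketch of shared-schema multi-tenancy: one database, one table,
# with a tenant_id column separating customers. Names are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (tenant_id TEXT, amount REAL)")
conn.executemany("INSERT INTO invoices VALUES (?, ?)",
                 [("acme", 100.0), ("acme", 250.0), ("globex", 75.0)])

# Every query must remember the WHERE clause. Forget it once and one
# tenant sees another tenant's rows, which is the risk being pointed at.
rows = conn.execute(
    "SELECT amount FROM invoices WHERE tenant_id = ? ORDER BY amount",
    ("acme",),
).fetchall()
print(rows)  # [(100.0,), (250.0,)]
```

Giving each tenant a separate database instance, as OPC does through virtualization, trades that ever-present filtering discipline for higher per-tenant resource cost.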
Larry also announced the Oracle Social Network (OSN). This is a corporate-version of Facebook, that supports collaboration and file-sharing, similar to IBM [LotusLive], Google Docs, or Microsoft Office365. All of the Fusion Apps are written to interface directly with the OSN or any of these other social networks through APIs. This includes navigation and integrated social networking. He also indicated that all Fusion Apps run on mobile devices. He showed the SAP R3 GUI, and said it reminded him of "the fins on a 1968 Cadillac!"
Larry said that other CRM SaaS focus on helping sales managers track their employees, but Oracle's CRM helps sellers sell more.
He then gave an example of a mythical sales manager Bob, and his sales employee Julian, selling two Exadata boxes for $4.8 Million USD. A "safe harbor" statement was shown at the beginning of this keynote, to make sure nobody asks to buy Exadata boxes this cheap.
technorati tags: IBM, SmartCloud, LotusLive, Oracle, OpenWorld, Larry Ellison, Marc Benioff, Salesforce.com, Workday, Taleo, SAP, Oracle Public Cloud, Oracle Social Network, SaaS, PaaS
IBM announced that it will offer [three free months of IBM Smart Business Cloud] computing and storage services to government agencies, charitable non-profit organizations, and other organizations involved with reconstruction resulting from the earthquakes and tsunami in Japan and the northern Pacific region.
With traditional communications down, and many data centers incapacitated, Cloud Computing can be a great way to resume operations. According to the announcement, organizations can submit their requests through April 30, and the program will run until July 31, 2011. Options include:
- Virtual machine images running 32-bit and 64-bit versions of Linux or Windows.
- 60 GB to 2 TB of disk storage per instance.
- Options for various IBM middleware (DB2, Informix, Lotus, and WebSphere)
- Rational Application Lifecycle Management and Tivoli Monitoring software
The offer also includes [LotusLive] Software-as-a-Service (SaaS) for email and online collaboration. For more about LotusLive, see this [Red Paper].
technorati tags: IBM, Japan, Earthquake, Tsunami, Smart Business Cloud, Linux, Windows, DB2, Informix, Lotus, WebSphere, LotusLive, SaaS
Continuing my coverage of the Data Center 2010 conference, Monday I attended four keynote sessions.
- Opening Remarks
The first keynote speaker started out with an [English proverb]: Turbulent waters make for skillful mariners.
He covered the state of the global economy and how CIOs should address the challenge. We are on the flat end of an "L-shaped" recovery in the United States. GDP growth is expected to be only 4.7 percent in Latin America, 2.3 percent in North America, and 1.5 percent in Europe. Top growth areas include India at 8.0 percent and China at 8.6 percent, with an average of 4.7 percent growth for the entire Asia Pacific region.
On the technical side, the top technologies that CIOs are pursuing for 2011 are Cloud Computing, Virtualization, Mobility, and Business Intelligence/Analytics. He asked the audience if the "Stack Wars" for integrated systems are hurting or helping innovation in these areas.
Move over "conflict diamonds", companies now need to worry about [conflict minerals].
He proposed an alternative approach called Fabric-Based Infrastructure. In this new model, a shared pool of servers is connected to a shared pool of storage over an any-to-any network. In this approach, IT staff spend all of their time just stocking up the vending machine, allowing end-users to get the resources they need.
- Crucial Trends You Need to Watch
The second speaker covered ten trends to watch, but these were not limited to just technology trends.
- Virtualization is just beginning - even though IBM has had server virtualization since 1967 and storage virtualization since 1974, the speaker felt that adoption of virtualization is still in its infancy. Ten years ago, average CPU utilization for x86 servers was only 5 to 7 percent. Thanks to server virtualization like VMware and Hyper-V, companies have increased this to 25 percent, but many virtualization projects have stalled.
- Big Data is the elephant in the room - storage is expected to grow 800 percent over the next five years.
- Green IT - Datacenters consume 40 to 100 times more energy than the offices they support. Six months ago, Energy Star had announced [standards for datacenters] and energy efficiency initiatives.
- Unified Communications - Voice over IP (VoIP) technologies, collaboration through email and instant messaging, and a focus on mobile smartphones and other devices combine many overlapping areas of communication.
- Staff retention and retraining - According to US Labor statistics, the average worker will have 10 to 14 different jobs by the time they reach 38 years of age. People need to broaden their scope and not be so vertically focused on specific areas.
- Social Networks and Web 2.0 - the keynote speaker feels this is happening, and companies that try to restrict usage at work are fighting an uphill battle. Better to get ready for it and adopt appropriate policies.
- Legacy Migrations - companies are stuck on old technology like Microsoft Windows XP, Internet Explorer 6, and older levels of Office applications. Time is running out, but migration to later releases or alternatives like Red Hat Linux with Firefox browser are not trivial tasks.
- Compute Density - Moore's Law, which says compute capability will double every 18 months, is still going strong. We are now getting more cores per socket, forcing applications to be re-written for parallel processing or to use virtualization technologies.
- Cloud Computing - every session this week will mention Cloud Computing.
- Converged Fabrics - some new approaches are taking shape for datacenter design. Fabric-based infrastructure would benefit from converging SAN and LAN fabrics to allow pools of servers to communicate freely to pools of storage.
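The compute-density trend above has a simple consequence for application code: with clock speeds flat, throughput comes from fanning independent work items out across a pool of workers. A minimal sketch using Python's standard library (the `convert` function is a made-up stand-in for real per-item work):

```python
# With more cores instead of faster cores, split independent work items
# across a pool of workers rather than processing them one at a time.
from concurrent.futures import ThreadPoolExecutor

def convert(article_id):
    # stand-in for real per-item work, e.g. converting one TIFF to PDF
    return ("article-%d" % article_id, "converted")

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(convert, range(8)))  # order is preserved

print(len(results))  # 8 items processed by the worker pool
```

For CPU-bound work in Python, swapping in `ProcessPoolExecutor` spreads the same pattern across actual cores; the calling code does not change.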
He sprinkled fun factoids about our world to keep things entertaining.
- 50 percent of today's 21-year-olds have produced content for the web. 70 percent of four-year-olds have used a computer. The average teenager writes 2,282 text messages on their cell phone per month.
- This year, Google averaged 31 billion searches per month, compared to 2.6 billion searches per month in 2007.
- More video has been uploaded to YouTube in the last two months than the three major US networks (ABC, NBC, CBS) have aired since 1948.
- Wikipedia averages 4300 new articles per day, and now has over 13 million articles.
- This year, Facebook reached 500 million users. If it were a country, it would be ranked third. Twitter would be ranked 7th, with 69 percent of its growth coming from people 32 to 50 years old.
- In 1997, a GB of flash memory cost nearly $8000 to manufacture; today it costs only $1.25.
- The computer in today's cell phone is a million times cheaper, and a thousand times more powerful, than a single computer installed at MIT back in 1965. In 25 years, the compute capacity of today's cell phones could fit inside a blood cell.
See [interview of Ray Kurzweil] on the Singularity for more details.
- The Virtualization Scenario: 2010 to 2015
The third keynote covered virtualization. While server virtualization has helped reduce server costs, as well as power and cooling energy consumption, it has had a negative effect on other areas. Companies that have adopted server virtualization have discovered increased costs for storage, software and test/development efforts.
The result is a gap between expectations and reality. Many virtualization projects have stalled because of a lack of long-term planning. The analysts recommend deploying virtualization in stages: tackle the first third, the so-called "low-hanging fruit", then proceed with the next third, and then wait and evaluate results before completing the last, most difficult third of applications.
Virtualization of storage and of desktop clients are completely different projects from server virtualization and should be handled accordingly.
- Cloud Computing: Riding the Storm Out
The fourth keynote focused on the pros and cons of Cloud Computing. It started by defining the five key attributes of Cloud: self-service, scalable elasticity, a shared pool of resources, metered pay-per-use, and delivery over open standard networking technologies.
In addition to IaaS, PaaS and SaaS classifications, the keynote speaker mentioned a fourth one: Business Process as a Service (BPaaS), such as processing Payroll or printing invoices.
While the debate rages over the relative benefits of private and public cloud approaches, the keynote speaker brought up the opportunities for hybrid and community clouds. In fact, he felt there is a business model for a "cloud broker" that acts as a go-between for companies and cloud service providers.
A poll of the audience found the top concerns inhibiting cloud adoption were security, privacy, regulatory compliance and immaturity. Some 66 percent indicated they plan to spend more on private cloud in 2011, and 20 percent plan to spend more on public cloud options. He suggested six focus areas:
- Test and Development
- Prototyping / Proof-of-Concept efforts
- Web Application serving
- SaaS like email and business analytics
- Department-level applications
- Select workloads that lend themselves to parallelization
The session wrapped up with some stunning results reported by companies: server provisioning accomplished in 3 to 5 minutes instead of 7 to 12 weeks; the cost of email reduced by 70 percent; four-hour batch jobs now completed in 20 minutes; a 50 percent increase in compute capacity on a flat IT budget. With these kinds of results, the speaker suggested that CIOs should at least start experimenting with cloud technologies and begin profiling their workloads and IT services to develop a strategy.
That was just Monday morning, and this is going to be an interesting week!
technorati tags: IBM, GDP, Cloud Computing, virtualization, mobility, BI, CIO, Big Data, Green IT, Google, Twitter, Facebook, IaaS, PaaS, SaaS, BPaaS
It seems everyone is talking about stacks, appliances and clouds.
On StorageBod, fellow blogger Martin Glassborow has a post titled [Pancakes!] He feels that everyone from Hitachi to Oracle is turning into the IT equivalent of the International House of Pancakes [IHOP] offering integrated stacks of software, servers and storage.
Cisco introduced its "Unified Computing System" about a year ago, [reinventing the datacenter with an all-Ethernet approach]. Cisco does not offer its own hypervisor software or storage, so there are two choices. First, Cisco has entered a joint venture, called Acadia, with VMware and EMC, to form the Virtual Computing Environment (VCE) coalition. The resulting stack was named Vblock, which one blogger hyphenated as Vb-lock to raise awareness of the proprietary vendor lock-in nature of this stack. Second, Cisco, VMware and NetApp issued a similar set of [Barney press releases] to announce a viable storage alternative for those not married to EMC.
On StorageMojo, fellow blogger Robin Harris presents [A deep dive into Cisco’s UCS]. Here is an excerpt:
"Only when it makes sense. Oracle/Sun has the better argument: when you know exactly what you want from your database, we’ll sell you an integrated appliance that will do exactly that. And it’s fine if you roll your own.
But those are industry-wide issues. There are UCS/VCE specific issue as well:
- Cost. All the integration work among 3 different companies costs money. They aren’t replacing existing costs – they are adding costs. Without, in theory, charging more.
- Lock-in. UCS/Vblock is, effectively, a mainframe with a network backplane.
- Barriers to entry. Are there any? Cisco flagged hypervisor bypass and large memory support as unique value-add – and neither seems any more than a medium-term advantage.
- BOT? Build, Operate, Transfer. In theory Vblocks are easier and faster to install and manage. But customers are asking that Acadia BOT their new Vblocks. The customer benefit over current integrator practice? Lower BOT costs? Or?
- Price. The 3 most expensive IT vendors banding together?
- Longevity. Industry “partnerships” don’t have a good record of long-term success. Each of these companies has its own competitive stresses and financial imperatives, and while the stars may be aligned today, where will they be in 3 years? Unless Cisco is piloting an eventual takeover."
Similar questions and concerns were raised by others, including Greg Ferro's [Questions I Have About Cisco] and Gestalt IT's [One Year Later: Questioning Cisco UCS].
Fellow blogger Bob Sutor (IBM) has an excellent post titled [Appliances and Linux]. Here is an excerpt:
"In your kitchen you have special appliances that, presumably, do individual things well. Your refrigerator keeps things cold, your oven makes them hot, and your blender purees and liquifies them. There is room in a kitchen for each of these. They work individually but when you are making a meal they each have a role to play in creating the whole.
You could go out and buy the metal, glass, wires, electrical gadgets, and so on that you would need to make each appliance but it is is faster, cheaper, and undoubtably safer to buy them already manufactured. For each device you have a choice of providers and you can pay more for additional features and quality.
In the IT world it is far more common to buy the bits and pieces that make up a final solution. That is, you might separately order the hardware components, the operating system, and the applications, and then have someone put them all together for you. If you have an existing configuration you might add more blades or more storage devices.
You don’t have to do this, however, in every situation. Just from a hardware perspective, you can buy a ready-made machine just waiting for the on switch to be flicked and the software installed. Conversely, you might get a pre-made software image with operating system and applications in place, ready to be provisioned to your choice of hardware. We can get even fancier in that the software image might be deployable onto a virtual machine and so be a ready made solution runnable on a cloud.
Thus in the IT world we can talk about hardware-only appliances, software-only appliances (often called virtual software appliances), and complete hardware and software combinations. The last is most comparable to that refrigerator or oven in your kitchen."
If your company was a restaurant, how many employees would you have on hand to produce your own electricity from gas generators, pump your own water from a well, and assemble your own toasters and blenders from wires and motors? I think this is why companies are re-thinking the way they do their own IT.
Rather than business-as-usual, perhaps a mix of pre-configured appliances, consisting of software, server and storage stacked to meet a specific workload, connected to public cloud utility companies, might be the better approach. By 2013, some analysts feel that as many as 20 percent of companies might not even have a traditional IT datacenter anymore.
Fellow blogger David Salgado (Microsoft) rips into the IT industry for [marketing these "stacks" of components as "private clouds"]. Fellow blogger Mary-Jo Foley (Microsoft) asks ['Private cloud' = just another buzzword for on-premise datacenter?], adding more attention to the confusion over the terms private and public cloud. Here's an excerpt that shows Microsoft's thinking in this area:
- "The private cloud:
- “By employing techniques like virtualization, automated management, and utility-billing models, IT managers can evolve the internal datacenter into a ‘private cloud’ that offers many of the performance, scalability, and cost-saving benefits associated with public clouds. Microsoft provides the foundation for private clouds with infrastructure solutions to match a range of customer sizes, needs and geographies.
- The public cloud:
- “Cloud computing is expanding the traditional web-hosting model to a point where enterprises are able to off-load commodity applications to third-party service providers (hosters) and, in the near future, the Microsoft Azure Services Platform. Using Microsoft infrastructure software and Web-based applications, the public cloud allows companies to move applications between private and public clouds.”
Finally, I saw this from fellow blogger Barry Burke (EMC), aka the Storage Anarchist, titled [a walk through the clouds], which is really a two-part post.
The first part describes a possible future for EMC customers written by EMC employee David Meiri, envisioning a wonderful world with "No more Metas, Hypers, BIN Files...."
The vision is a pleasant one, and not far from reality. While EMC prefers to use the term "private cloud" to refer to both on-premises and off-premises-but-only-your-employees-can-VPN-to-it-and-your-IT-staff-still-manages-it flavors, the overall vision is available today from a variety of Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) providers.
A good analogy for "private cloud" might be a corporate "intranet" that is accessible only within the company's firewall. Intranets let companies post information for employees on internal websites, using the standard HTML and standard web browsers already deployed on most PCs and workstations. Web pages running on an intranet can easily be moved to an external-facing website without much rework or trouble.
The second part has Barry claiming that EMC has made progress towards a "Virtual Storage Server" that might be announced at next month's EMC World conference.
When people hear "Storage Virtualization" most immediately think of the two market leaders, IBM SAN Volume Controller and Hitachi Data Systems (HDS) Universal Storage Platform (USP) products. Those with a tape bent might throw in IBM's TS7000 virtual tape libraries or Oracle/Sun's Virtual Storage Manager (VSM). And those focused on software-only solutions might recall Symantec's Veritas Volume Manager (VxVM), DataCore's SANsymphony, or FalconStor's IPStor products.
But what about EMC's failed attempt at storage virtualization, the Invista? After five years of failing to deliver value, EMC has so far publicized only ONE customer reference account, and I estimate that perhaps only a few dozen actual customers are still running on this platform. Compare that to IBM selling tens of thousands of SAN Volume Controllers, and HDS selling thousands of their various USP-V and USP-VM products, and you quickly realize that EMC has a lot of catching up to do. EMC first delivered Invista about 18 months after the IBM SAN Volume Controller, similar to their introduction of Atmos 18 months after our Scale-Out File Services (SoFS), and their latest CLARiiON-based V-Max coming out 18 months after IBM's XIV storage system.
So what will EMC's Invista follow-on "Virtual Storage Server" product look like? No idea. It might be another five years before you actually hear about a customer using it. But why wait for EMC to get their act together?
IBM offers solutions TODAY that can make life as easy as envisioned here. IBM offers integrated systems sold as ready-to-use appliances, customized "stacks" that can be built to handle particular workloads, residing on-premises or hosted at an IBM facility, and public cloud "as-a-service" offerings on the IBM Cloud.
technorati tags: StorageBod, Martin Glassborow, IHOP, Hitachi, Oracle, Cisco, UCS, Ethernet, VMware, VCE, NetApp, Barney, StorageMojo, Robin Harris, IBM, Bob Sutor, Linux, Appliances, Stacks, Private Cloud, Public Cloud, Cloud Computing, IaaS, PaaS, SaaS, Barry Burke, EMC, Invista