In the cover story this month,
Lee Cleveland, Distinguished Engineer, Power Systems direct attach
storage, and Andy Walls, Distinguished Engineer, chief hardware
architect for DS8000 and solid-state drives (SSDs), sat down to talk
about all of the new storage technologies IBM has been releasing lately.
What I didn’t have room for in the article was a nice summary of the
technologies that can help you improve access, manage growth, protect
data, reduce costs or reduce complexity. Whatever your goals, IBM has an
integrated storage option for every organization.
Here are the quick highlights of the latest storage announcements:
IBM Storwize V7000
New advanced software functions
New easy-to-use, Web-based GUI
RAID and enclosure RAS services and diagnostics
Additional host, controller and ISV interoperability
Integration with IBM Systems Director
Enhancements to Tivoli Storage Productivity Center (TPC), FlashCopy Manager (FCM) and Tivoli Storage Manager (TSM) support
Proven IBM software functionalities
Easy Tier (dynamic HDD/SSD management)
RAID 0, 1, 5, 6, 10
Storage virtualization (local and external disks)
Non-disruptive data migration
Global and Metro Mirror
FlashCopy up to 256 copies of each volume
Thin provisioning
IBM Storwize Rapid Application Storage Solution
Runs on: AIX 7.1-5.3, IBM i 7.1-6.1 (with VIOS), Red Hat and SUSE Linux, z/VSE, Microsoft Windows, Mac OS X
Cisco’s apparently going to try to simplify its sales, services and engineering organizations in the next 120 days
Faced with a nasty loss of credibility, a string of poor financial
results, shrinking market share in its core business, an unwieldy and
alienating bureaucracy blamed for the top executive exodus it has been
experiencing, and a stock price that's plunged into the toilet, Cisco,
once an economic bellwether, is promising to do more than simply kill
off its once-popular Flip video camcorder business and lay off 550
people, an admission that its foray into the consumer segment had largely
failed.
It said in a press release issued Thursday morning that it's moving to
a "streamlined operating model" focused on five areas, not, apparently,
the literally 30 different directions it's been going in, although it did
say, come to think of it, something about "greater focus," so maybe it's
not really cutting back.
These focus areas are, it said, "routing, switching, and services;
collaboration; data center virtualization and cloud; video; and
architectures for business transformation."
Nobody seems to know what that last one is. The Wall Street
Journal criticized Cisco for not being able to explain in plain English
what it's doing, and Barron's complained that it needed a Kremlinologist
to decrypt the jargon in the press release.
Anyway, Cisco's apparently going to try to simplify its sales,
services and engineering organizations in the next 120 days, or by July
31, when its next fiscal year begins. Well, maybe not everything, it
warned, but sales ought to be reorganized by then.
This streamlining seems to mean that:
Field operations will be organized into three geographic regions
for faster decision making and greater accountability: the Americas;
EMEA; and Asia Pacific, Japan and Greater China, all still under sales chief
Robert Lloyd;
Services will follow key customer segments and delivery models, still under its multitasking COO Gary Moore;
Engineering, still reporting to Moore, will now be led by
two-in-a-box Pankaj Patel and Padmasree Warrior. Aside from the
company's five focus areas, there will be a dedicated Emerging Business
Group under Marthin De Beer focused on "select early-phase businesses,"
"with continued focus on integrating the Medianet architecture for video
across the company."
Lastly, it's going to "refine" - but apparently not dismantle - its
hydra-headed, decision-inhibiting Council structure, blamed for
frustrating and running off key talent, down to three "that reinforce
consistent and globally aligned customer focus and speed to market
across major areas of the business: Enterprise, Service Provider and
Emerging Countries. These councils will serve to further strengthen the
connection between strategy and execution across functional groups.
Resource allocation and profitability targets will move to the sales and
engineering leadership teams which will have accountability and direct
responsibility for business results."
It's unclear whether any of this means layoffs.
Cisco piped in a quote credited to Moore, saying: "Cisco is focused on
making a series of changes throughout the next quarter and as we enter
the new fiscal year that will make it easier to work for and with Cisco,
as we focus our portfolio, simplify operations and manage expenses. Our
five company priorities are for a reason - they are the five drivers of
the future of the network, and they define what our customers know
Cisco is uniquely able to provide for their business success. The new
operating model will enable Cisco to execute on the significant market
opportunities of the network and empower our sales, service and
engineering organizations."
Load Balancers Are Dead: Time to Focus on Application Delivery
2 February 2009, Mark Fabbi, Gartner RAS Core Research Note G00164098
When looking at feature requirements in front of and between server tiers, too many organizations think only about load balancing. However, the era of load balancing is long past, and organizations will be better served to focus their attention on improving the delivery of applications.
Overview
This research shifts the attention from basic load-balancing features to application delivery features to aid in the deployment and delivery of applications. Networking organizations are missing significant opportunities to increase application performance and user experience by ignoring this fundamental market shift.
Key Findings
Enterprises are still focused on load balancing.
There is little cooperation between networking and application teams on a holistic approach for application deployment.
Properly deployed application delivery controllers can improve application performance and security, increase the efficiency of data center infrastructure, and assist the deployment of the virtualized data center.
Recommendations
Network architects must shift attention and resources away from Layer 3 packet delivery networks and basic load balancing to application delivery networks.
Enterprises must start building specialized expertise around application delivery.
What You Need to Know
IT organizations that shift to application delivery will improve internal application performance, which will noticeably improve business processes and productivity for key applications. For external-facing applications, end-user experience and satisfaction will improve, positively affecting the ease of doing business with supply chain partners and customers. Despite application delivery technologies being well proven, they have not yet reached a level of deployment that reflects their value to the enterprise, and too many clients do not have the right business and technology requirements on their radar.
Analysis
What's the Issue?
Many organizations are missing out on big opportunities to improve the performance of internal processes and external service interactions by not understanding application delivery technologies. This is very obvious when considering the types of client inquiries we receive on a regular basis. In the majority of cases, clients phrase their questions to ask specifically about load balancing. In some cases, they are replacing aged server load balancers (SLBs), purchased before the advent of the advanced features now available in leading application delivery controllers (ADCs). In other cases, we get calls about application performance challenges, and, after exploring the current infrastructure, we find that these clients have modern, advanced ADCs already installed, but they haven't turned on any of the advanced features and are using the new equipment as if it were a circa-1998 SLB. In both cases, there is a striking lack of understanding of what ADCs can and should bring to the enterprise infrastructure. Organizations that still think of this critically important position in the data center as one that only requires load balancing are missing out on years of valuable innovation and are not taking advantage of the growing list of services that are available to increase application performance and security and to play an active role in the increasing virtualization and automation of server resources. Modern ADCs are the only devices in the data center capable of providing a real-time, pan-application view of application data flows and resource requirements. This insight will continue to drive innovation of new capabilities for distributed and virtualized applications.
Why Did This Happen?
The "blame" for this misunderstanding can be distributed in many ways, though it is largely history that is at fault. SLBs were created to better solve the networking problem of how to distribute requests across a group of servers responsible for delivering a specific Web application. Initially, this was done with simple round-robin DNS, but because of the limitations of this approach, function-specific load-balancing appliances appeared on the market to examine inbound application requests and to map these requests dynamically to available servers. Because this was a networking function, the responsibility landed solely in network operations and, while there were always smaller innovative players, the bulk of the early market ended up in the hands of networking vendors (largely Cisco, Nortel and Foundry [now part of Brocade]). So, a decade ago, the situation basically consisted of networking vendors selling network solutions to network staff. However, innovation continued, and the ADC market became one of the most innovative areas of enterprise networking over the past decade.
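To make the era contrast concrete, here is a minimal Python sketch of what an early SLB essentially did: round-robin distribution of inbound requests plus simple session persistence so a returning client (and its shopping cart) keeps hitting the same back end. The server addresses and client IDs are illustrative assumptions, and real SLBs/ADCs add health checks and far richer policies.

```python
from itertools import cycle

# Illustrative back-end pool; a real SLB/ADC also health-checks members
# and removes failed servers from rotation.
SERVERS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
_round_robin = cycle(SERVERS)

# client_id -> server, so an in-progress session keeps landing on the
# same back end (session persistence, or "sticky" sessions).
_sticky_table = {}

def pick_server(client_id: str) -> str:
    """Assign new clients round-robin; pin returning clients to their server."""
    if client_id in _sticky_table:
        return _sticky_table[client_id]
    server = next(_round_robin)
    _sticky_table[client_id] = server
    return server

if __name__ == "__main__":
    for ip in ["198.51.100.7", "203.0.113.9", "198.51.100.7"]:
        print(ip, "->", pick_server(ip))
```

Everything an advanced ADC adds - SSL offload, response rewriting, outbound optimization - sits on top of, and well beyond, this basic request mapping.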
Initially, this innovation focused on the inbound problem, such as the dynamic recognition of server load or failure and session persistence to ensure that online "shopping carts" weren't lost. Soon, the market started to evolve to look at other problems, such as application and server efficiency. The best example would be the adoption of SSL termination and offload. Finally, the attention turned to outbound traffic, and a series of techniques and features started appearing in the market to improve the performance of the applications being delivered across the network. Innovations migrated from a pure networking focus, to infrastructure efficiencies, to application performance optimization and security, and from a networking product to one that touched networking, server, application and security staff. The networking vendors that were big players when SLB was the focus quickly became laggards in this newly emerging ADC market.
Current Obstacles
As the market shifts toward modern ADCs, some of the blame must rest on the shoulders of the new leaders (vendors such as F5 and Citrix NetScaler). While their products have many advanced capabilities, these vendors often undersell their products and don't do enough to clearly demonstrate their leadership and vision to sway more of the market to adopting the new features. The other challenge for vendors (and users) is that modern ADCs impact many parts of the IT organization. Finally, some blame rests with the IT organization. By maintaining siloed operational functions, it has been difficult to recognize and define requirements that fall between functional areas.
Why We Need More and Why Should Enterprises Care?
Not all new technologies deserve consideration for mainstream deployment. However, in this case, advanced ADCs provide capabilities to help mitigate the challenges of deploying and delivering the complex application environments of today. The past decade saw a mass migration to browser-based enterprise applications targeting business processes and user productivity, as well as increasing adoption of service-oriented architectures (SOAs), Web 2.0 and now cloud computing models. These approaches tend to place increased demand on the infrastructure because of "chatty" and complex protocols. Without features to mitigate latency, to reduce round trips and bandwidth, and to strengthen security, these approaches almost always lead to disappointing performance for enterprise and external users. The modern ADC provides a range of features (see Note 1) to deal with these complex environments. Beyond application performance and security, application delivery controllers can reduce the number of required servers, provide real-time control mechanisms to assist in data center virtualization, and reduce data center power and cooling requirements. ADCs also provide simplified deployment and extensibility, and are now being deployed between the Web server tier and the application or services tier (for SOA) servers. Most ADCs incorporate rule-based extensibility that enables customization of the behavior of the ADC. For example, a rule might enable the ADC to examine the response portion of an e-commerce transaction to strip off all but the last four digits of credit card numbers. Organizations can use these capabilities as a simple, quick alternative to modifying Web applications.
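Rule languages differ by vendor (F5 iRules, NetScaler policies and so on), so the following is only a language-neutral Python sketch of the masking logic such a response-rewrite rule would apply. The regular expression and sample payload are assumptions for illustration, not a production-ready pattern.

```python
import re

# Matches 13- to 16-digit card numbers, optionally separated by spaces or
# dashes. A real rule would be tuned to the application's response format.
_CARD_RE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def mask_card_numbers(response_body: str) -> str:
    """Replace each card number with X's, keeping only the last four digits."""
    def _mask(match):
        digits = re.sub(r"[ -]", "", match.group(0))
        return "X" * (len(digits) - 4) + digits[-4:]
    return _CARD_RE.sub(_mask, response_body)

if __name__ == "__main__":
    body = '{"card": "4111 1111 1111 1111", "amount": "19.99"}'
    print(mask_card_numbers(body))   # the card value becomes XXXXXXXXXXXX1111
```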
Most ADCs incorporate a programmatic interface (open APIs) that allows them to be controlled by external systems, including application servers, data center management and provisioning applications, and network/system management applications. This capability may be used for regular periodic reconfigurations (end-of-month closing) or may even be driven by external events (taking an instance of an application offline for maintenance); a sketch of driving such an interface appears after the attribute list below. In some cases, the application programming interfaces link the ADC to server virtualization systems and data center provisioning frameworks in order to deliver the promise of real-time infrastructure.
What Vendors Provide ADC Solutions Today?
During the past five years, the innovations have largely segmented the market into vendors that understand complex application environments and the subtleties in implementations (examples would be F5, Citrix NetScaler and Radware) and those with more of a focus on static feature sets and networking. "Magic Quadrant for Application Delivery Controllers" provides a more complete analysis and view of the vendors in the market. Vendors that have more-attractive offerings will have most or all of these attributes:
A strong set of advanced platform capabilities
Customizable, extensible platforms and solutions
A vision focused on application delivery networking
Affinity to applications:
Needs to be application-fluent (that is, the vendor needs to "speak the language")
Support organizations need to "talk applications"
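As a concrete illustration of the programmatic-control point above, the sketch below drains a pool member before maintenance and re-enables it afterward through a purely hypothetical REST endpoint. The URL, payload fields and omitted authentication are invented for the example; each vendor's real management API differs.

```python
import json
import urllib.request

# Hypothetical management endpoint -- real ADCs expose their own REST/SOAP
# APIs with different paths, payloads and authentication schemes.
ADC_API = "https://adc.example.com/api/pools/web-tier/members"

def set_member_state(member: str, enabled: bool) -> None:
    """Enable or drain a single pool member via the (hypothetical) ADC API."""
    payload = json.dumps({"member": member, "enabled": enabled}).encode()
    req = urllib.request.Request(
        f"{ADC_API}/{member}",
        data=payload,
        method="PATCH",
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(member, "->", resp.status)

if __name__ == "__main__":
    # The "end-of-month closing" style reconfiguration described earlier:
    # take an application instance out of rotation, then return it to service.
    set_member_state("10.0.0.12", enabled=False)
    # ... perform maintenance on the instance ...
    set_member_state("10.0.0.12", enabled=True)
```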
What Should Enterprises Do About This?
Enterprises must start to move beyond refreshing their load-balancing footprint. The features of advanced ADCs are so compelling for those that make an effort to shift their thinking and organizational boundaries that continuing to invest in plain SLBs wastes time and resources. In most cases, the incremental investment in advanced ADC platforms is easily offset by reduced requirements for servers and bandwidth and by the clear improvements in end-user experience and productivity. In addition, enterprises should:
Use the approach documented in "Five Dimensions of Network Design to Improve Performance and Save Money" to understand user demographics and productivity tools and applications. Also, part of this requirements phase should entail gaining an understanding of any shifts in application architectures and strategies. This approach provides the networking team with much greater insight into broader IT requirements.
Understand what they already have in their installed base. We find, in at least 25% of our interactions, enterprises have already purchased and installed an advanced ADC platform, but are not using it to its potential. In some cases, they already have the software installed, so two to three days of training and some internal discussions can lead to massive improvements.
Start building application delivery expertise. This skill set will be one that bridges the gaps between networking, applications, security and possibly the server team. Organizations can use this function to help extend the career path and interest of high-performing individuals from groups like application performance monitoring or networking operations. Networking staff aspiring to this role must have strong application and personal communication skills to achieve the correct balance. Some organizations will find they have the genesis of these skills scattered across multiple groups. Building them a cohesive home will provide immediate benefits, because organizational barriers will be quickly eliminated.
Start thinking about ADCs as strategic platforms, and move beyond tactical deployments of SLBs. Once organizations think about application delivery as a basic infrastructure asset, the use of these tools and services (and associated benefits) will be more readily achieved.
Note: We have defined a category of advanced ADCs to distinguish their capabilities from basic, more-static function load balancers. These advanced ADCs operate on a per-transaction basis and achieve application fluency. These devices become actively involved in the delivery of the application and provide sophisticated capabilities, including:
Application layer proxy, which is often bidirectional
Brocade is leading the way by helping
organizations around the world build cloud-optimized networks that increase
business agility and profitability. Offering robust, yet flexible network
solutions, Brocade enables organizations to choose the best type of cloud model
for their unique business requirements and objectives. Brocade is introducing
two industry-leading product line advancements tailored to your
customer's existing IT infrastructure.
Leading-edge Developments to the Data Center SAN Environment
Based on years of proven success, Brocade SAN
fabrics provide the most reliable, scalable, high-performance foundation for
private cloud architectures. Brocade continues that leadership with the
industry's first 16 Gbps Fibre Channel SAN solutions:
The Brocade® DCX® 8510 Backbone, the industry's most
powerful SAN backbone for private cloud storage
The Brocade 6510 Switch, the new price/performance leader in enterprise SAN
switches
The Brocade 1860 Fabric Adapter, a new class of adapter that meets all your
customer's Fibre Channel/FCoE/IP connectivity needs in a single device
Brocade Network Advisor, an easy-to-use, unified network management platform
Advancements to the NetIron® MLX Series
Brocade network solutions for service providers
combine high scalability and performance to transform your customer's business
with new revenue-generating cloud services—increasing their overall level of
profitability. Key offerings include:
New 10 GbE, 100 GbE, and advanced management modules for the Brocade MLX
Series of high-performance core routers
Compact Brocade NetIron CER 2000 Series routers, delivering high scalability
and performance at the network edge
Leading-edge enhancements in the Brocade NetIron 5.2 software release for
IPv6 scaling, broader MPLS connectivity, and more
Brocade Network Advisor, an easy-to-use, unified network management platform
World-class professional services and technical support for carrier-class
networks
Backups are a necessity. They’re important in any computing environment, and you would be hard pressed to find anybody who would disagree with the criticality of having backup copies of their data. In the event that primary systems or data sets are unavailable, backups are designed to provide the assurance that significant amounts of work, time or money aren’t lost.
To protect the partners, customers and constituents of organizations from risks associated with potential data loss, the U.S. federal government has established various compliance requirements that must be met and maintained. In addition to general business-compliance requirements, many industries have additional regulations that must be met. Examples include Sarbanes-Oxley Act of 2002 (SOX), Payment Card Industry Data Security Standard (PCI DSS), the U.S. Health Insurance Portability and Accountability Act of 1996 (HIPAA), Gramm-Leach-Bliley Act (GLBA), and the Federal Information Security Management Act (FISMA); it’s easy to see why compliance is often referred to as regulatory alphabet soup (which is not far off from the storage industry, I would add).
Depending on the industry, the mandated data-retention timeframe can vary from as few as seven years to as many as 100 years. At the upper end of that spectrum, a significant amount of infrastructure investment and planning is necessary. Unfortunately, systems complexity becomes a byproduct of trying to solve these challenges, and that complexity evolves over time until it becomes unmanageable.
Just as the specific requirements for these regulations vary, so do the consequences of being non-compliant, which is often discovered during periodic industry audits or following a breach. Failure to meet compliance requirements could result in warnings or fines and, in extreme cases, termination of operations and prison time. The trouble is that compliance testing can be difficult to do, and it can come down to having confidence in whether systems will perform adequately under trial.
Video is really gaining a foothold in how content is delivered, not only in the consumer space but also in the high tech space. Recently I was at SNW where SiliconAngle.TV was broadcasting live from the event. If this is where we are going, I thought it would be good to have my good friends at MediaBoss TV help me with a commercial. All comments welcome.
IBM® System Storage™ N series with Operations Manager software offers
comprehensive monitoring and management for N series enterprise storage
and content delivery environments. Operations Manager is designed to
provide alerts, reports, and configuration tools from a central control
point, helping you keep your storage and content delivery infrastructure
in-line with business requirements for high availability and low total
cost of ownership.
We focus especially on Protection Manager, which is designed as
intuitive backup and replication management software for IBM System
Storage N series unified storage disk-based data protection
environments. The application is designed to support data protection and
help increase productivity with automated setup and policy-based
management.
This IBM Redbooks® publication demonstrates how Operations Manager
manages IBM System Storage N series storage from a single view and
remotely from anywhere. Operations Manager can monitor and configure all
distributed N series storage systems, N series gateways, and data
management services to increase the availability and accessibility of
their stored and cached data. Operations Manager can monitor the
availability and capacity utilization of all its file systems regardless
of where they are physically located. It can also analyze the
performance utilization of its storage and content delivery network. It
is available on Windows®, Linux®, and Solaris™.
Systems combining block and file storage maximize benefits of server
virtualization.
The data center of the future
looks an awful lot like data centers of the past in one important respect:
storage demands. While the trend toward server virtualization and consolidation
is transforming the way data centers are being designed, built and managed,
rampant data growth continues to be a limiting factor.
In its annual “Digital
Universe” study, EMC projects nearly 45-fold data growth by 2020. Data
growth was cited as the No. 1 data center hardware infrastructure challenge in a
recent Gartner survey of representatives from 1,004 large enterprises in eight
countries.
“While all the top data center
hardware infrastructure challenges impact cost to some degree, data growth is
particularly associated with increased costs relative to hardware, software,
associated maintenance, administration and services,” said April Adams, research
director at Gartner. “Given that cost containment remains a key focus for most
organizations, positioning technologies to show that they are tightly linked to
cost containment, in addition to their other benefits, is a promising
approach.”
In order to drive down costs
and reduce operational complexity, organizations virtualizing their data centers
and beginning the journey to the cloud require a storage infrastructure that is
both simple and efficient. Unified storage delivers on both counts.
Unified storage is the
combination of block- and file-based storage in the same system with common
management. These multiprotocol systems can be attached to servers via IP and/or
Fibre Channel.
The Road to Unification
Unified storage is an evolving
technology, but not a new technology. A variety of vendors have taken stabs at
providing block- and file-oriented storage in a single box since the late 1990s.
Some of the earliest attempts involved simply putting two machines together in a
single enclosure and then creating a GUI to handle management of both.
Next came NAS gateways, which
used a NAS box as an entry to SAN storage. In this setup, a NAS box provides
file-based access to applications via a LAN port, and then stores the data on a
block-oriented storage array that can be accessed across the SAN. While this
approach accommodates both block and file protocols, it has some disadvantages.
One of the major problems is that data must be transferred twice — once across
the NAS Ethernet connection and again across the Fibre Channel or IP SAN — which
adds to I/O latency. Another issue is that the management of NAS gateways
continues to be separate from the management of SAN arrays.
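To put a rough number on the double-transfer penalty, here is a back-of-the-envelope Python sketch; the data-set size and link speeds are assumptions chosen only to illustrate the point, not measurements of any product.

```python
# Rough illustration of the extra transfer cost in a NAS-gateway design:
# data crosses the Ethernet LAN to the gateway, then the FC or IP SAN to
# the array, instead of a single hop. All figures are illustrative.

def transfer_seconds(size_gb: float, link_gbps: float) -> float:
    """Time to move size_gb over a link of link_gbps, ignoring protocol overhead."""
    return (size_gb * 8) / link_gbps

size_gb = 100.0                            # hypothetical data set
lan_leg = transfer_seconds(size_gb, 1.0)   # 1 GbE client-to-gateway leg
san_leg = transfer_seconds(size_gb, 4.0)   # 4 Gbps FC gateway-to-array leg

print(f"LAN leg: {lan_leg:.0f} s, SAN leg: {san_leg:.0f} s, "
      f"total: {lan_leg + san_leg:.0f} s")
# A system serving the file protocol directly from the array pays only one
# leg, and per-request latency likewise stacks across both hops.
```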
More recent unified storage
platforms leverage virtualization technology to offer a much deeper integration
of file- and block-based storage. A file system performs I/O to disk blocks
using a common virtualized disk-volume engine. Virtualization allows
administrators to create a seamless pool of unified storage and enables
transparent data movement for tiered storage.
While NetApp introduced unified
storage to the market several years ago, it is now available from most storage
vendors. Many of these solutions include features such as data replication,
incremental snapshots and remote mirroring that contribute to robust business
continuity capabilities.
Aligning Storage with Virtualization
IT organizations face growing
pressure to transform the data center to meet increasing demands for wider
access to information, transactions and services. To a great degree, this means
creating a technology infrastructure composed of virtualized computing and
networking. By breaking the relationship between applications and the IT systems
on which they run, virtualization frees system administrators from providing
specific hardware with static configurations.
However, many organizations
have found that the benefits of virtualization are offset by increased storage
complexity and expense. For example, the creation of hundreds or even thousands
of virtual server image files often leads to massive storage waste. Because each
of these images is typically many gigabytes in size, the total storage required
in virtual environments can be 30 percent more than in an equivalent physical
environment. As a result, virtual machine sprawl increases operational overhead
and compromises storage utilization efficiency and overall business agility.
Unified storage improves
utilization by allowing organizations to consolidate and virtualize storage
across storage protocols, environments and mixed storage platforms. Combinations
of block storage (Fibre Channel or iSCSI) and file storage (NAS systems with
CIFS or NFS) can be managed via a common set of features such as snapshots, thin
provisioning, tiered provisioning, replication, synchronous mirroring and data
migration — all from a single user interface. This shift toward a shared
infrastructure enables organizations to achieve storage utilization rates of 85
percent or more, compared to the sub-50-percent rates in standalone storage
silos.
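A rough calculation shows why those utilization rates translate into real capacity savings; the usable-data figure below is an assumption for illustration, and the utilization rates are simply the ones cited above.

```python
# Illustrative only: raw capacity needed to hold the same usable data at
# the siloed vs. unified utilization rates mentioned above.
usable_tb = 200.0                 # hypothetical data the business stores

siloed_raw = usable_tb / 0.45     # sub-50% utilization in standalone silos
unified_raw = usable_tb / 0.85    # ~85% utilization on a shared, unified pool

print(f"Standalone silos: {siloed_raw:.0f} TB raw")
print(f"Unified pool:     {unified_raw:.0f} TB raw")
print(f"Raw capacity avoided: {siloed_raw - unified_raw:.0f} TB")
```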
“IT managers are looking for
storage solutions that not only deliver immediate value, but also enable
flexibility and growth over time, so that storage can adapt to changes in an
organization's applications, user needs or business demands,” said Mark Peters,
senior analyst at Enterprise Strategy Group. “Storage solutions that are both
virtualized and unified are ideal to address the needs for both storage
flexibility and data growth.”
IBM's Ed Walsh, Director of Storage Efficiency, sits down with Steve Duplessie, founder of ESG, to talk about how IBM Real-time Compression sets the bar for doing storage optimization in NAS. At the end of the day, if you can do compression in real time, without sacrificing performance or the transparency of the implementation, then why wouldn't you, given the savings you can get over traditional compression?
We all know compression is not new, and it is coming as a standard feature in a number of storage systems. The issue is that each of these technologies has a significant impact on performance - both primary storage performance and the performance of all of the back-end operations such as backups, replication and so on.
IBM's Real-time Compression doesn't have any of these limitations - listen to Ed to hear more.
Technology giant IBM on Tuesday said it has emerged as
the top player in the Indian external disk storage systems market for the year
2010.
According to IT research firm IDC, IBM India
has maintained its 2010 leadership with a 26.2 per cent market share (in
revenue terms) and a lead of more than four percentage points over its nearest
competitor.
“While the overall external disk storage
market in India declined by 1.5 per cent in calendar year 2010,
according to IDC, IBM has been able to grow its hold in the country
given the constant innovation and focus on bringing in storage
efficiency,” Sandeep Dutta, Storage, Systems and Technology Group, IBM
India/South Asia, told PTI.
Also, in Q4 2010, IBM
maintained leadership with a 29 per cent market share and a lead of seven
percentage points over its nearest competitor in revenue terms.
During
the year 2010, IBM launched products like IBM Storwize V7000 and IBM
System Storage DS8000, which helped it to strengthen its leadership
position in the market.
During the year, IBM bagged
orders from Kotak, Suzlon, Oswal mills, CEAT, L&T (ECC division),
Indian Farmer and Fertilizer Cooperative Ltd, Solar Semiconductors and
Ratnamani Metals.
WASHINGTON - 01 Mar 2011: IBM (NYSE: IBM) today announced a major expansion of its Institute for Electronic Government (IEG) in Washington, D.C., adding cloud computing and analytics capabilities for public sector organizations around the world.
IBM has moved and expanded the facility in order to meet the growing demand from Government, Health Care and Education leaders who recognize the potential of cloud computing environments and business analytics technologies to improve efficiencies, reduce costs and tackle energy and budget challenges.
According to recent IBM surveys of technology leaders globally, 83 percent of respondents identified business analytics -- the ability to see patterns in vast amounts of data and extract actionable insights -- as a top priority and a way in which they plan to enhance their competitiveness. In addition, an overwhelming majority of respondents -- 91 percent -- expect cloud computing to overtake on-premise computing as the primary IT delivery model by 2015.