Brocade Unveils Vision for the Virtual Enterprise
Brocade Introduces Brocade CloudPlex(TM), an Open, Extensible Architecture for Virtualization and Cloud-Optimized Networks
SAN JOSE, CA -- (MARKET WIRE) -- 05/03/11 -- Brocade (NASDAQ: BRCD) today introduced a new technology architecture that outlines the company's vision and the technology investments it will make to help its customers evolve their data centers and IT resources and migrate them to the "Virtual Enterprise."
Brocade intends to deliver on this vision through the Brocade CloudPlex™ architecture, an open, extensible framework intended to enable customers to build the next generation of distributed and virtualized data centers in a simple, evolutionary way that preserves their ability to dictate all aspects of the migration. What is unique about the Brocade CloudPlex architecture is that it is not only the foundation for integrated compute blocks but also embraces a customer's existing multi-vendor infrastructure, unifying all of their assets into a single compute and storage domain.
Brocade CloudPlex advances the goal of the Brocade One™ strategy, which is designed to help companies transition smoothly to a world where information and applications can reside anywhere by delivering solutions that provide unmatched simplicity, non-stop performance, application optimization and investment protection.
"Virtualization has fundamentally changed the nature of applications by detaching them from their underlying IT infrastructure and introducing a high degree of application mobility across the entire enterprise," said Dave Stevens, chief technology officer at Brocade. "This is the concept of the 'Virtual Enterprise' that we feel unleashes the true potential of cloud computing in all its forms -- private, hybrid and public."
Through the CloudPlex architecture, Brocade will help its customers scale their IT environments from managing hundreds of virtual machines (VMs) in certain classes of servers to tens of thousands of VMs that are distributed and mobilized across their entire enterprise and throughout the cloud. According to Gartner, the expansion of VMs not only improves automation and reduces operational expenses, it is the primary requirement for IT organizations to migrate to cloud architectures.(1)
Gartner advises that, "IT organizations pursuing virtualization should have an overall strategic plan for cloud computing and a roadmap for the future, and should plan proactively. Further, these organizations must focus on management and process change to manage virtual resources, and to manage the speed that virtualization enables, to avoid virtualization sprawl."
The Brocade CloudPlex architecture will define the stages and the components from Brocade and its partners that are required to get to the Virtual Enterprise. The stages comprise three main categories -- fabrics, globalization and open technologies -- with some of these components available today while others are in development or on the roadmap of Brocade's engineering priorities.
The currently available components are:
- Networks comprised of Ethernet fabrics and Fibre Channel fabrics as the flat, fast and simple foundation designed to scale to highly virtualized IT environments;
- Multiprotocol fabric adapters for simplified server I/O consolidation;
- High-performance application delivery products necessary for load balancing network traffic across distributed data centers.
The components on the roadmap are:
- Integrated, tested and validated solution bundles of server, virtualization, networking and storage resources called Brocade Virtual Compute Blocks. As an integral element of the Brocade CloudPlex architecture, Brocade will enable its systems partners and integrators to deliver Virtual Compute Block solutions comprising servers, hypervisors, storage, and cloud-optimized networking in pre-bundled, pre-racked configurations with unified support;
- Powerful and universal fabric and network extension delivered through a new platform capable of supporting a number of IP, SAN and mainframe extension technologies including virtual private LAN services (VPLS), Fibre Channel over IP (FCIP) and FICON;
- An advancement of Brocade Fabric ID technology called "Cloud IDs" that enables simple and secure isolation and mobility of VMs for native multi-tenancy cloud environments;
- An open framework for management, provisioning and integration designed to promote multi-vendor and system-to-system interoperability specifically for cloud environments. This includes Brocade products supporting OpenStack software for storage and compute, and Software-Defined Networking (SDN) capabilities enabled through OpenFlow;
- Unified education, support and services delivered through Brocade and partners to help customers manage this highly distributed "Virtual Enterprise" environment.
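The "Cloud ID" concept on the roadmap (secure tenant isolation that travels with a VM as it moves) can be illustrated with a toy model; the class and method names here are hypothetical, since the announcement gives no implementation details:

```python
# Illustrative sketch of the "Cloud ID" idea: each VM carries a tenant
# identifier, the fabric only permits traffic within a Cloud ID, and the
# identifier moves with the VM when it migrates to another fabric.

class CloudFabric:
    def __init__(self):
        self.vms = {}  # vm_name -> cloud_id

    def attach(self, vm, cloud_id):
        self.vms[vm] = cloud_id

    def can_communicate(self, vm_a, vm_b):
        """Multi-tenancy rule: traffic is allowed only within one Cloud ID."""
        cid = self.vms.get(vm_a)
        return cid is not None and cid == self.vms.get(vm_b)

    def migrate(self, vm, target_fabric):
        """VM mobility: the Cloud ID travels with the VM across fabrics."""
        target_fabric.attach(vm, self.vms.pop(vm))
```

The design point the press release emphasizes is the last method: isolation metadata stays bound to the VM rather than to a physical port, so mobility does not break tenancy.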
Brocade Partner Endorsements
"We are excited to be working with Brocade to develop highly-scalable virtualized computing and storage configurations, providing superior cost-performance solutions today for our customers while at the same time establishing a clear path to cloud IT architectures in the future. Specifically, Brocade switches coupled with Dell PowerEdge servers and EqualLogic or Dell Compellent storage provide the scalability, flexibility and efficiency our customers demand in the virtual era."
-- Dario Zamarian, Vice President and General Manager, Dell Networking
"Fujitsu's global cloud strategy is built on our real experience in working with customers on the delivery of both Services and Infrastructures for Cloud computing across the world. We believe that common processes, holistic management of infrastructure elements and the use of industry standards are fundamentally helping customers to ease the transition and to migrate their largest and most complex IT environments smoothly to join 'any mode' of the cloud consumption of their choosing. Brocade shares these views and has laid out a compelling vision through its CloudPlex architecture that Fujitsu Technology Solutions fully endorses and will support. This architecture provides compelling added value to Fujitsu's Cloud offerings by defined standards and holistic management."
-- Jens-Peter Seick, Senior Vice President, Data Center Systems, Fujitsu
"Hitachi is helping customers deliver IT services through the cloud by using open, standards-based technologies that let them build and scale their virtualized data centers at their own pace. With Brocade's CloudPlex architecture, both Hitachi and Brocade address our mutual customers' IT needs and protect their existing IT investments by migrating their legacy devices to cloud deployments -- preventing cloud from becoming just another IT silo."
-- Sean Moser, Vice President, Storage Software Product Management, Hitachi Data Systems
"Recent advancements in cloud and virtualization are making it possible for enterprises to deploy an intelligent infrastructure that enables workloads to move around the enterprise and around the world in a transparent, fluid way. We believe that enabling flexible application deployment is imperative to mainstream adoption of cloud computing. VMware and Brocade share a common vision of offering customers the ability to accelerate IT by reducing complexity while significantly lowering costs and enabling more flexible, agile services delivery."
-- Parag Patel, Vice President, Global Strategic Alliances, VMware
Unveiling at Brocade Technology Day Summit
Brocade CTO Dave Stevens will discuss the CloudPlex architecture in more detail at the annual Brocade Technology Day Summit, taking place on the company's San Jose campus on May 3 and 4. To participate in the event via a live webcast, please visit Brocade's Facebook page or register for the event at:
Brocade (NASDAQ: BRCD) networking solutions help the world's leading organizations transition smoothly to a world where applications and information reside anywhere. (www.brocade.com)
(1) Source: "The Road Map From Virtualization to Cloud Computing" (Gartner, March 2011)
Brocade, the B-wing symbol, BigIron, DCFM, DCX, Fabric OS, FastIron, IronView, NetIron, SAN Health, ServerIron, TurboIron, and Wingspan are registered trademarks, and Brocade Assurance, Brocade NET Health, Brocade One, Extraordinary Networks, MyBrocade, VCS, and VDX are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned are or may be trademarks or service marks of their respective owners.
© 2011 Brocade Communications Systems, Inc. All Rights Reserved.
Load Balancers Are Dead: Time to Focus on Application Delivery
2 February 2009 | Mark Fabbi | Gartner RAS Core Research Note G00164098
When looking at feature requirements in front of and between server tiers, too many organizations think only about load balancing. However, the era of load balancing is long past, and organizations will be better served to focus their attention on improving the delivery of applications.
Overview
This research shifts the attention from basic load-balancing features to application delivery features to aid in the deployment and delivery of applications. Networking organizations are missing significant opportunities to increase application performance and user experience by ignoring this fundamental market shift.
Key Findings
- Enterprises are still focused on load balancing.
- There is little cooperation between networking and application teams on a holistic approach for application deployment.
- Properly deployed application delivery controllers can improve application performance and security, increase the efficiency of data center infrastructure, and assist the deployment of the virtualized data center.
What You Need to Know
IT organizations that shift to application delivery will improve internal application performance, noticeably improving business processes and productivity for key applications. For external-facing applications, end-user experience and satisfaction will improve, positively affecting the ease of doing business with supply chain partners and customers. Despite application delivery technologies being well proven, they have not yet reached a level of deployment that reflects their value to the enterprise, and too many clients do not have the right business and technology requirements on their radar.
Analysis
What's the Issue?
Many organizations are missing out on big opportunities to improve the performance of internal processes and external service interactions by not understanding application delivery technologies. This is very obvious when considering the types of client inquiries we receive on a regular basis. In the majority of cases, clients phrase their questions to ask specifically about load balancing. In some cases, they are replacing aged server load balancers (SLBs), purchased before the advent of the advanced features now available in leading application delivery controllers (ADCs). In other cases, we get calls about application performance challenges, and, after exploring the current infrastructure, we find that these clients have modern, advanced ADCs already installed, but they haven't turned on any of the advanced features and are using the new equipment as if it were a circa-1998 SLB.
In both cases, there is a striking lack of understanding of what ADCs can and should bring to the enterprise infrastructure. Organizations that still think of this critically important position in the data center as one that only requires load balancing are missing out on years of valuable innovation, and are not taking advantage of the growing list of services available to increase application performance and security and to play an active role in the increasing virtualization and automation of server resources. Modern ADCs are the only devices in the data center capable of providing a real-time, pan-application view of application data flows and resource requirements. This insight will continue to drive innovation of new capabilities for distributed and virtualized applications.
Why Did This Happen?
The "blame" for this misunderstanding can be distributed in many ways, though it is largely history that is at fault. SLBs were created to solve the networking problem of how to distribute requests across a group of servers responsible for delivering a specific Web application. Initially, this was done with simple round-robin DNS, but because of the limitations of this approach, function-specific load-balancing appliances appeared on the market to examine inbound application requests and to map these requests dynamically to available servers. Because this was a networking function, the responsibility landed solely in network operations and, while there were always smaller innovative players, the bulk of the early market ended up in the hands of networking vendors (largely Cisco, Nortel and Foundry [now part of Brocade]). So, a decade ago, the situation basically consisted of networking vendors selling network solutions to network staff.
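The progression described above, from blind round-robin rotation to dynamic mapping based on server health and load, can be sketched in a few lines. This is an illustrative toy, not any vendor's implementation:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Naive round-robin: rotate through servers regardless of health or load."""
    def __init__(self, servers):
        self._cycle = cycle(servers)

    def pick(self):
        return next(self._cycle)

class DynamicBalancer:
    """SLB-style selection: skip servers marked down, prefer the least loaded."""
    def __init__(self, servers):
        self.servers = {s: {"up": True, "active": 0} for s in servers}

    def pick(self):
        live = [(state["active"], name)
                for name, state in self.servers.items() if state["up"]]
        if not live:
            raise RuntimeError("no healthy servers")
        _, server = min(live)          # fewest active connections wins
        self.servers[server]["active"] += 1
        return server

    def release(self, server):
        self.servers[server]["active"] -= 1

    def mark_down(self, server):
        # What round-robin DNS could not do: react to a failed server.
        self.servers[server]["up"] = False
```

The limitation the note alludes to is visible in the first class: round-robin keeps handing requests to a dead or overloaded server, which is exactly what the appliance-based SLBs were built to fix.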
However, innovation continued, and the ADC market became one of the most innovative areas of enterprise networking over the past decade. Initially, this innovation focused on the inbound problem, such as the dynamic recognition of server load or failure and session persistence to ensure that online "shopping carts" weren't lost. Soon, the market started to evolve to look at other problems, such as application and server efficiency; the best example would be the adoption of SSL termination and offload. Finally, the attention turned to outbound traffic, and a series of techniques and features started appearing in the market to improve the performance of the applications being delivered across the network. Innovations migrated from a pure networking focus to infrastructure efficiencies to application performance optimization and security -- from a networking product to one that touches networking, server, application and security staff. The networking vendors that were big players when SLB was the focus quickly became laggards in this newly emerging ADC market.
Current Obstacles
As the market shifts toward modern ADCs, some of the blame must rest on the shoulders of the new leaders (vendors such as F5 and Citrix NetScaler). While their products have many advanced capabilities, these vendors often undersell their products and don't do enough to clearly demonstrate their leadership and vision to sway more of the market to adopting the new features. The other challenge for vendors (and users) is that modern ADCs impact many parts of the IT organization. Finally, some blame rests with the IT organization itself: by maintaining siloed operational functions, it has been difficult to recognize and define requirements that fall between functional areas.
Why We Need More and Why Should Enterprises Care?
Not all new technologies deserve consideration for mainstream deployment.
However, in this case, advanced ADCs provide capabilities that help mitigate the challenges of deploying and delivering today's complex application environments. The past decade saw a mass migration to browser-based enterprise applications targeting business processes and user productivity, as well as increasing adoption of service-oriented architectures (SOAs), Web 2.0 and now cloud computing models. These approaches tend to place increased demand on the infrastructure because of "chatty" and complex protocols. Without features to mitigate latency, to reduce round trips and bandwidth, and to strengthen security, these approaches almost always lead to disappointing performance for enterprise and external users. The modern ADC provides a range of features (see Note 1) to deal with these complex environments. Beyond application performance and security, application delivery controllers can reduce the number of required servers, provide real-time control mechanisms to assist in data center virtualization, and reduce data center power and cooling requirements.
ADCs also provide simplified deployment and extensibility, and are now being deployed between the Web server tier and the application or services (SOA) tier. Most ADCs incorporate rule-based extensibility that enables customization of the behavior of the ADC. For example, a rule might enable the ADC to examine the response portion of an e-commerce transaction and strip off all but the last four digits of credit card numbers. Organizations can use these capabilities as a simple, quick alternative to modifying Web applications. Most ADCs also incorporate a programmatic interface (open APIs) that allows them to be controlled by external systems, including application servers, data center management and provisioning applications, and network/system management applications.
This capability may be used for regular periodic reconfigurations (end-of-month closing) or may even be driven by external events (taking an instance of an application offline for maintenance). In some cases, the application programming interfaces link the ADC to server virtualization systems and data center provisioning frameworks in order to deliver the promise of real-time infrastructure.
What Vendors Provide ADC Solutions Today?
During the past five years, the innovations have largely segmented the market into vendors that understand complex application environments and the subtleties in implementations (examples would be F5, Citrix NetScaler and Radware) and those with more of a focus on static feature sets and networking. "Magic Quadrant for Application Delivery Controllers" provides a more complete analysis and view of the vendors in the market. Vendors that have more-attractive offerings will have most or all of these attributes:
- A strong set of advanced platform capabilities
- Customizable, extensible platforms and solutions
- A vision focused on application delivery networking
- Affinity to applications: the vendor needs to be application-fluent (that is, they need to "speak the language"), and support organizations need to "talk applications"
What Should Enterprises Do About This?
Enterprises must start to move beyond refreshing their load-balancing footprint. The features of advanced ADCs are so compelling for those that make an effort to shift their thinking and organizational boundaries that continuing efforts on SLBs is wasting time and resources. In most cases, the incremental investment in advanced ADC platforms is easily compensated by reduced requirements for servers and bandwidth and the clear improvements in end-user experience and productivity.
- Network architects must shift attention and resources away from Layer 3 packet delivery networks and basic load balancing to application delivery networks.
- Enterprises must start building specialized expertise around application delivery.
In addition, enterprises should:
- Use the approach documented in "Five Dimensions of Network Design to Improve Performance and Save Money" to understand user demographics and productivity tools and applications. Also, part of this requirements phase should entail gaining an understanding of any shifts in application architectures and strategies. This approach provides the networking team with much greater insight into broader IT requirements.
- Understand what they already have in their installed base. We find, in at least 25% of our interactions, enterprises have already purchased and installed an advanced ADC platform, but are not using it to its potential. In some cases, they already have the software installed, so two to three days of training and some internal discussions can lead to massive improvements.
- Start building application delivery expertise. This skill set will be one that bridges the gaps between networking, applications, security and possibly the server. Organizations can use this function to help extend the career path and interest for high-performance individuals from groups like application performance monitoring or networking operations. Networking staff aspiring to this role must have strong application and personal communication skills to achieve the correct balance. Some organizations will find they have the genesis of these skills scattered across multiple groups. Building a cohesive home will provide immediate benefits, because the organization's barriers will be quickly eliminated.
- Start thinking about ADCs as strategic platforms, and move beyond tactical deployments of SLBs. Once organizations think about application delivery as a basic infrastructure asset, the use of these tools and services (and associated benefits) will be more readily achieved.
Note 1: We have defined a category of advanced ADCs to distinguish their capabilities from basic, more-static function load balancers. These advanced ADCs operate on a per-transaction basis and achieve application fluency. These devices become actively involved in the delivery of the application and provide sophisticated capabilities, including:
- Application layer proxy, which is often bidirectional
- Content transformation
- Selective compression
- Selective caching of dynamic content
- HTML or other application protocol optimizations
- Web application firewall
- XML validation and transformation
- Rules and programmatic interfaces
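The rule-based extensibility described earlier (masking all but the last four digits of credit card numbers in an e-commerce response) can be sketched as a simple regex transform. Real ADCs express such rules in their own rule languages, so this Python version is purely illustrative:

```python
import re

# Match a 16-digit card number, with optional space/hyphen separators,
# capturing the last four digits so they can be kept in the output.
CARD_RE = re.compile(r"\b(?:\d[ -]?){12}(\d{4})\b")

def mask_cards(body: str) -> str:
    """ADC-style response rewrite: mask card numbers, keep the last four digits."""
    return CARD_RE.sub(lambda m: "****-****-****-" + m.group(1), body)
```

Applying this at the delivery tier, rather than modifying every Web application, is exactly the "simple, quick alternative" the note describes.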
Optimization of SAP infrastructure to deliver better performance, lower costs and higher energy efficiency
20 Apr 2011:
Today IBM (NYSE: IBM) announced that Audi selected IBM to build a cloud environment for Audi's SAP infrastructure to deliver higher performance, fast and flexible provisioning of SAP applications and capacity, lower infrastructure costs, and above-average energy efficiency, with the ability to scale future SAP applications to an almost unlimited extent.
Audi was facing challenges scaling its IT systems due to the increased use of business-critical applications in areas such as production and logistics, supplier relationship management and human resources, which strained its IT infrastructure's reliability.
In April 2010, Audi signed a contract with IBM to rebuild its existing SAP infrastructure, including consolidation and virtualization of the server hardware, process standardization, opportunities for performance-related billing and much higher operational flexibility. Audi's new SAP infrastructure solution is based on a new generation of high-performance IBM POWER7 servers and IBM database technology (DB2).
"Along with a very high level of reliability and failure safety, the new SAP infrastructure solution, which we will migrate into a private cloud, will substantially lower energy consumption," said Audi's Lorenz Schoberl, head of IT Infrastructure Services. "The DB2 solution's built-in data compression capability will enable us to save time and reduce the costs of storage and archiving."
"We were able to demonstrate that our combination of POWER servers and DB2 will decrease the total cost of ownership over the next four years -- from a business and technology point of view," said Gunter Frohlich, IBM Client Manager for Audi.
The new infrastructure is fully operational and will be managed by IBM in a private cloud environment hosted in Audi's data center.
About IBM Cloud Computing
IBM has helped thousands of clients adopt cloud models and manages millions of cloud-based transactions every day. IBM assists clients in areas as diverse as banking, communications, healthcare and government to build their own clouds or securely tap into IBM cloud-based business and infrastructure services. IBM is unique in bringing together key cloud technologies, deep process knowledge, a broad portfolio of cloud solutions, and a network of global delivery centers. For more information about IBM cloud solutions, visit www.ibm.com/smartcloud. For more about IBM, visit www.ibm.com/de/pressroom.
SAN JOSE, CA -- (MARKET WIRE) -- 03/22/11 -- Brocade® (NASDAQ: BRCD) today announced it is taking a leadership position to help define standards that enable scalability and manageability in hyper-scale cloud infrastructures. Brocade has become an initial member of the Open Networking Foundation (ONF), a non-profit organization dedicated to promoting a new approach to networking called Software-Defined Networking (SDN).
SDN involves several components, one of the most important being standards-based OpenFlow, an emerging standard that gives service providers granular control of their network infrastructures. Brocade will leverage its work in developing OpenFlow across its high-performance service provider portfolio to enable customers to build high-value applications across their networks with greater efficiency and unparalleled simplicity.
Today's service providers and network operators face a number of challenges that require multiple solutions in order to ensure highly efficient and profitable operation. Brocade's goal in working with the Open Networking Foundation is to alleviate the burden of operational complexity for service providers by leveraging OpenFlow to manage and operate their networks.
Brocade has developed an OpenFlow-enabled IP/MPLS router as part of its service provider product portfolio for application verification and interoperability testing with its partners and customers. Brocade plans to make additional OpenFlow strategy and product announcements later this year. Brocade will initially focus its efforts on delivering solutions that enable the scalability and manageability required in hyper-scale cloud infrastructures.
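At its core, OpenFlow separates the control plane (a controller installing match/action entries) from the forwarding plane (a switch consulting its flow table per packet). A toy model of that flow-table behavior follows; the field names are illustrative, not the actual OpenFlow wire format:

```python
# Toy OpenFlow-style flow table: the controller installs prioritized
# match/action entries; the switch picks the highest-priority match,
# and a table miss punts the packet back to the controller.

class FlowTable:
    def __init__(self):
        self.entries = []  # list of (priority, match_dict, action)

    def install(self, priority, match, action):
        """Controller-driven: add an entry; higher priority is consulted first."""
        self.entries.append((priority, match, action))
        self.entries.sort(key=lambda e: -e[0])

    def lookup(self, packet):
        """Forwarding plane: return the action of the first matching entry."""
        for _, match, action in self.entries:
            if all(packet.get(field) == value for field, value in match.items()):
                return action
        return "send_to_controller"  # table miss
```

The granular control mentioned in the text comes from the match dictionaries: an operator can steer, say, only port-80 traffic to a given destination differently from the rest, without touching device-local configuration.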
"Stronger definition of network behavior in software is a growing trend, and open interfaces are going to lead to faster innovation," said Nick McKeown, ONF board member and professor at Stanford University.
"In June 2010, Brocade was one of the first major networking vendors to publicly endorse OpenFlow," said Ken Cheng, vice president, Service Provider Products, Brocade. "Our goal is to leverage OpenFlow to build compelling cloud networking solutions for service providers and network operators worldwide, while lowering the cost associated with operating their networks."
Brocade CTO Named to TechAmerica CLOUD(2) Commission
Commission to Provide Recommendations on Deployment of Cloud Technologies to the United States Federal Government
SAN JOSE, CA -- (MARKET WIRE) -- 04/15/11 -- Brocade (NASDAQ: BRCD) today announced that Dave Stevens, the company's chief technology officer (CTO), has been named a Commissioner on the TechAmerica Foundation's "Leadership Opportunity in U.S. Deployment of the Cloud," known also as CLOUD(2).
The commission's mandate is to deliver recommendations to the U.S. government on ways it can effectively deploy cloud technologies and set specific public policies that will help drive further cloud innovation in both the private and public sectors.
Brocade has direct and highly relevant experience in the challenges and opportunities that the CLOUD(2) Commission is addressing, by virtue of its 15 years of experience building mission-critical data center networks for some of the most demanding IT environments in the world. This experience and expertise has positioned Brocade to address the challenges of moving to more agile, flexible cloud IT models.
The Brocade approach, as defined by its Brocade One™ strategy, is to help its customers migrate smoothly from current networking architectures to a world where information and applications reside and can be accessed anywhere through open, multivendor cloud technologies.
"Brocade is an established leader in building and deploying fabric-based data center architectures, and customers continue to trust their networks to Brocade as they move to highly virtualized and cloud models," said Dave Stevens, chief technology officer at Brocade. "I am honored to serve as a commissioner for CLOUD(2), and I look forward to the opportunity to leverage our experience in this space and to play a key role in advancing the deployment of cloud architectures."
The commission will make recommendations for how government should deploy cloud technologies and address policies that might hinder U.S. leadership of the cloud in the commercial space. Recommendations for government deployment will be presented to Federal Chief Information Officer Vivek Kundra. Commercial-facing recommendations will be shared with Commerce Secretary Gary Locke and Commerce Under Secretary Pat Gallagher.
"The Obama Administration has demonstrated a clear understanding of the need to adopt cloud technologies across the government enterprise," said Dallas Advisory Partners Founder, and TechAmerica Foundation Chairman, David Sanders. "CLOUD(2) represents a broad range of companies, and is well-positioned to provide diverse insight on issues critical to the cloud. These new commissioners will be essential to the continued advancement of U.S. innovation, and we look forward to providing the Administration constructive recommendations that address these critical issues."
The commission is composed of 71 experts in the field, from both the business and academic worlds. Leading the CLOUD(2) commission are co-commissioners Salesforce.com CEO and Chairman Marc Benioff and VCE Chairman and CEO Michael D. Capellas, as well as CSC North American public sector president Jim Scheaffer and Microsoft corporate VP of technology policy and strategy Dan Reed.
Also joining co-chairmen Benioff and Capellas representing academia will be John Mallery of Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory, and Michael R. Nelson, visiting professor of Internet studies in Georgetown University's Communication, Culture and Technology Program.
A full list of commissioners is available at http://www.techamericafoundation.org/cloud-commission-commissioners
To learn more about CLOUD(2), please visit http://www.techamericafoundation.org/cloud-commission
Chapter 14 - Management Platform & Managed Environments
To design a good cloud management platform we need to understand the managed environment. The workloads will include not only applications running on virtual infrastructure but also traditional infrastructure, so we need to design a management platform that can support delivery of traditional services as well as cloud services.
The advantage of using the IBM reference architecture (see the previous chapter) is that we keep the service management cost to a minimum and can manage multiple services (IaaS, PaaS, SaaS, traditional services) through a single management platform (the Common Cloud Management Platform).
The design of the management platform is mainly driven by
what platforms we need to manage as well as the services we have to deliver.
The core components of the management platform are determined by the amount of
service automation expected to be provided by the platform.
The cloud management platform can be thought of as a Service Delivery Platform (SDP), a concept borrowed from the telecommunications industry. The term SDP usually refers to a set of components that provides a service delivery architecture (covering service creation, session control, and protocols) supporting multiple service delivery models.
The core components can in turn be classified into business support system (BSS) components and operational support system (OSS) components. The BSS components cover the customer, subscription, offering and catalog, contract, order, billing, and financial aspects of the platform. The OSS components deal with the back-end aspects of fulfilling a service request, and so include service automation, provisioning, monitoring, and management.
The IBM Tivoli suite of products addresses almost all of the OSS requirements as well as some of the key BSS components. As an architect, the key decision is to identify the capabilities required by the client and create a platform that is extensible, keeping flexibility in mind so that components can be added or removed to support different capabilities. In an established and mature data center, it is highly unlikely that all of these components will be delivered by a single vendor. That is why an architecture built on open standards is critical to building a good management platform.
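The extensibility requirement above can be sketched as a small component registry: the platform core routes requests to whichever BSS/OSS components are currently plugged in, so capabilities can be added or removed without touching the core. All class and method names here are illustrative, not part of any IBM product.

```python
class Component:
    """Base class for a management capability (BSS or OSS)."""
    name = "base"
    domain = "OSS"   # "BSS" or "OSS"

    def handle(self, request: dict) -> str:
        raise NotImplementedError

class Billing(Component):
    """Example BSS capability (hypothetical)."""
    name, domain = "billing", "BSS"
    def handle(self, request):
        return f"billed {request['tenant']}"

class Provisioning(Component):
    """Example OSS capability (hypothetical)."""
    name, domain = "provisioning", "OSS"
    def handle(self, request):
        return f"provisioned {request['resource']}"

class ComponentRegistry:
    """Platform core: routes requests to whichever components are plugged in."""
    def __init__(self):
        self._components = {}

    def register(self, component: Component):
        self._components[component.name] = component

    def unregister(self, name: str):
        self._components.pop(name, None)

    def dispatch(self, name: str, request: dict) -> str:
        if name not in self._components:
            raise KeyError(f"no component registered for '{name}'")
        return self._components[name].handle(request)

registry = ComponentRegistry()
registry.register(Billing())
registry.register(Provisioning())
print(registry.dispatch("provisioning", {"resource": "vm-42"}))  # provisioned vm-42
```

Because each capability sits behind the same small interface, a multi-vendor data center could, in principle, swap one vendor's billing or monitoring component for another's without changing the platform core.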
IBM is leading efforts to drive the adoption of standards by cloud providers, consumers, and tools vendors; its work with The Open Group and the Cloud Standards Customer Council are two examples.
Once we have determined the functional components of our solution, we need to address the non-functional requirements: security, availability, resiliency, performance, scalability, capacity planning, and sizing. These aspects must be determined for the management platform based on the size and heterogeneity of the managed environment. We will discuss them in the next chapter.
Teresa Takai, the Defense Department's chief information officer, says the "paramount" goal of effective security in a cloud computing infrastructure is best achieved using an internal "private" system, though she wouldn't rule out use of commercial providers.
In oral testimony at a hearing of the House Armed Services Subcommittee on Emerging Threats and Capabilities on April 6, Takai said Defense could opt for public cloud services offered by companies such as Google and Microsoft Corp.
In response to questions from Rep. James Langevin, D-R.I., Takai said, "There will be instances where we [can] use commercial cloud providers ... [if] they meet our standards." She did not specify what type of applications Defense would host on a commercial cloud.
Takai added the department plans to tap the Defense Information Systems Agency, which already is providing private cloud services to the Army and email service for 1.4 million personnel. The Army, Takai said, is "looking to move [its] apps to the cloud."
One of her key priorities is to secure the Pentagon's classified networks after masses of data were illicitly siphoned off last fall to the WikiLeaks website, said Takai, who took office last October. In her prepared testimony, she said Defense plans to deploy a public key infrastructure-based identity credential on a hardened smart card for use on the department's Secret classified networks. It is similar to, but stronger than, the technology in the Common Access Card on unclassified networks.
Defense also plans to use a Host-Based Security System to protect classified networks, a tool that "will allow us to know who is on the network" and detect anomalous behavior, Takai told the hearing.
Intel® Cloud Builders Reference Architecture Library
Key challenges and focus areas for IT include enhancing efficiency,
security, resource utilization, flexibility, and simplifying data center
management, among others. Intel works closely with leading systems and
solution providers to deliver proven reference architectures to address
IT challenges. This work is based on IT requirements—from a wide range
of end users—that address challenges in evolving to cloud and next-
generation data centers, including the evolving usage requirements of
the Open Data Center Alliance.
This lab-based experience is embodied in Intel® Cloud Builders
reference architectures. Each reference architecture provides detailed
instructions on how to install and configure a particular cloud software
solution using Intel® Xeon® processor-based servers.
Developed with ecosystem leaders, the following reference architectures relate to building a cloud, or Infrastructure as a Service (IaaS), and to enhancing and optimizing cloud infrastructure with a focus on security, efficiency, and simplifying your cloud environment.
ARMONK, N.Y. - 07 Apr 2011: IBM (NYSE: IBM) has joined more than 45 leading cloud organizations to form the new Cloud Standards Customer Council, which is managed by OMG®. Organizations including Lockheed Martin, Citigroup and North Carolina State University have already joined the Council, which will help advance cloud adoption prioritizing key interoperability issues such as management, reference architectures, hybrid cloud, as well as security and compliance.
The Council will complement vendor-led cloud standards efforts and establish a core set of client-driven requirements to ensure cloud users will have the same freedom of choice, flexibility, and openness they have with traditional IT environments. The Cloud Standards Customer Council is open to all end-user organizations and further enhances customers' abilities to offer both public and private cloud offerings through a standardized platform.
IBM is inviting all of its users to participate in the CSCC and work together in addressing the challenges faced while implementing Cloud Computing. The group will work to lower the barriers for widespread adoption of Cloud Computing by helping to prioritize key Interoperability issues such as cloud management, reference architecture, hybrid clouds, as well as security and compliance.
“To make Open Cloud successful and reflective of real business needs, IBM is asking for client feedback regarding their direction and priorities around cloud standards development,” said Angel Diaz, vice president, IBM Software Standards. “This council is designed to focus on the reality of what provides the greatest cloud computing benefits for clients. Ultimately, this effort is about how organizations can use what they have today and extend their business - using open standards - to get the greatest benefits from cloud.”
In our previous posts on the IT industry's shift to the Cloud Services era, we've provided definitions, market context, user adoption trends, and user views about cloud services benefits and challenges. In this post, we offer our initial forecast of IT cloud services delivery across five major IT product segments that, in aggregate, represent almost two-thirds of enterprise IT spending (excluding PCs). This forecast sizes IT suppliers' opportunity to deliver their own IT offerings to customers via the cloud services model ("opportunity #1", as described in our recent post Framing the Cloud Opportunity for IT Suppliers).
The development of this forecast involved a team of over 30 IDC analysts, led by Robert Mahowald (Business Applications/SaaS), Tim Grieser (Infrastructure Software), Steve Hendrick (Application Development & Deployment Software), Matt Eastwood (Servers) and Rick Villars (Storage), with additional contributions from David Tapper (Outsourcing/Hosted Services) and John Gantz (Global Research).
SAN FRANCISCO, CA, 07 Apr 2011: IBM (NYSE: IBM) today unveiled its next-generation IBM SmartCloud, an enterprise-class, secure cloud specifically created to meet the demands of businesses.
To accelerate the shift from experimentation, development and
assessment to full scale enterprise deployment of cloud, IBM is building
out its existing cloud portfolio with IBM SmartCloud, enterprise cloud
technologies and services offerings for private, public and hybrid
clouds based on IBM hardware, software, services and best practices.
As part of this announcement, IBM is demonstrating a next-generation, enterprise cloud service delivery platform currently piloting with key clients and available later this year. For the first time, enterprise clients will be able to select key characteristics of a public, private and hybrid cloud to match workload requirements, from simple Web infrastructure to complex business processes, along five dimensions:
· Security and isolation
· Availability and performance
· Technology platforms
· Management Support and Deployment
· Payment and Billing
The IBM SmartCloud includes a broad spectrum of secure managed services to run diverse workloads across multiple delivery methods, both public and private. It offers customers choice, with the potential for end-to-end management of service delivery from the server and operating system to the application and process layer.
“The new IBM SmartCloud allows for the best of both worlds – the cost
savings and scalability of a shared cloud environment plus the
security, enterprise capabilities and support services of a private
environment,” said Erich Clementi, senior vice president, IBM Global
Technology Services. “In thousands of cloud engagements, we have discovered that enterprise clients want a choice of cloud deployment models that meet the requirements of their workloads and the demands of their business.”
This level of choice and control translates into capabilities customized to your needs and priorities, whether you’re deploying a simple web application, an ordering logistics system or a complete ERP system.
The new IBM cloud can enable organizations, their employees and
partners, to get what they need, as they need it – from advanced
analytics and business applications to IT infrastructure like virtual
servers and storage or access to tools for testing software code - all
deployed securely across IBM’s global network of cloud data centers.
The IBM SmartCloud has two implementation options: Enterprise and Enterprise +.
- Enterprise – Available immediately, this option expands on the existing IBM Development and Test Cloud, allowing customers to extend internal development and test efforts, reducing application development tasks from days to minutes via automation and rapid provisioning, with over 30% cost reduction versus traditional application environments.
- Enterprise + – To be made available later this year, Enterprise + will complement and expand on the value of Enterprise, offering new capabilities that provide a core set of multi-tenant services to manage virtual server, storage, network and security infrastructure components, including managed operational services.
Cloud computing fundamentals
Summary: A revolution is defined as a change in the way
people think and behave that is both dramatic in nature and broad in scope. By
that definition, cloud computing is indeed a revolution. Cloud computing is
creating a fundamental change in computer architecture, software and tools
development, and of course, in the way we store, distribute and consume
information. The intent of this article is to aid you in assimilating the reality of the revolution, so you can use it for your own profit and well-being.
Last year’s acquisition policy pronouncements are starting to be felt across the U.S. Army, with upticks in cloud computing initiatives, increasing use of fixed-price contracts and adoption of social media.
“Army IT spending will remain stable; the goal is to optimize the IT [spending]. Optimization will be guided by computing trends,” said Gary Winkler, Army program executive officer for enterprise information systems.
He was one of several Army acquisition speakers at the AFCEA Belvoir
Industry Days conference at the National Harbor in Oxon Hill, Md. Winkler also recently announced he is leaving the Army.
Efforts to improve efficiency, realign spending priorities and
streamline a cumbersome acquisition process were launched during the
past year amid a tightening national budget by Defense Secretary Robert
Gates and Ashton Carter, undersecretary of defense for acquisition,
technology and logistics.
Leading the charge in the Army’s efforts to hold down spending and become more efficient are cloud computing initiatives, mobile technologies, data center consolidation and social collaboration.
Winkler said that mobile data traffic is on track to increase 39-fold between 2009 and 2014, and that the social software market is growing 40 percent per year through 2013, trends that are also helping push the Pentagon’s policies further down into day-to-day operations.
The Army also wants to increase its use of firm fixed-price and multiple-source contracts, as directed in Carter’s Better Buying Power initiative, and is looking to maximize broadly scoped contracts that can be used for a variety of missions.
However, there are still plenty of challenges, and there likely will
be more to come. Winkler predicted that force reductions could still lie
ahead for DOD, citing his own experience in the 1980s when, like now,
an insourcing effort was followed by a hiring freeze — which was later
followed by layoffs.
“We can tighten our belts and squeeze a little bit [as directed by
the Pentagon] — but I think it’s going to be more than just a little
bit,” Winkler said.
Still, PEO-EIS has been involved in the development of Better Buying
Power tenets, including helping shape concepts and strategies for
improving tradecraft services, establishing common taxonomy and
reforming IT acquisition — all banner items in Carter’s 23-point
acquisition reform plan released last September.
"Provision public cloud resources or securely extend your internal
virtualized infrastructure into the public cloud with VMware and our
vCloud Powered service providers, the largest ecosystem of cloud
computing partners. Leverage secure hybrid cloud resources with
confidence while providing choice and flexibility, ensuring
interoperability and portability of workloads between cloud environments
with a VMware vCloud infrastructure built on VMware vCloud Director."
"Security often comes up as a big stopping point for cloud computing.
One of the ways around this is to build a private cloud – one that
remains within the corporate firewall and wholly controlled internally.
That was the approach taken by Los Alamos National Laboratory as it
seeks to create an infrastructure on demand (IOD) architecture to
simplify the rollout of new technology projects and to eliminate delays
in storage, server and network provisioning.
Anil Karmel, IT manager at Los Alamos National Lab noted four tenets that played a major role in the private cloud decision:
• green IT
• streamlined operations
• rapid scaleup/down
“As we deploy more virtual servers, we consume far less power and also
reduce electronic waste,” said Karmel. “We estimate eventual savings of
$1.3 million annually due to IOD.”
Server capacity on demand is now achievable in a few clicks. Instead of
30 days to provision a server, it now takes less than 30 minutes.
The organization is utilizing HP c7000 blade enclosures along with HP Virtual Connect Fibre Channel/Flex 10 Ethernet. HP BL460c and BL490c blades are used, each containing multiple quad-core processors.
A NetApp SAN was brought in to add storage capacity, based on the NetApp V-Series with 2 PB of Tier 2 SATA storage. Tier 1 is provided by existing HP arrays.
The cloud itself consists of four elements: a web portal at the front
end; Microsoft SharePoint as the automation engine for cloud workflows,
and also as the integration point for functions such as chargeback;
VMware vCloud Director to manage and operate the cloud; and VMware
vShield to provide security at both the application level and at the
user device level.
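The four-element pipeline described above can be sketched as a chain of stages. The stage functions below are hypothetical stand-ins for the roles played by the portal, the SharePoint workflow engine, vCloud Director, and vShield; none of them reflect actual product APIs.

```python
# Illustrative sketch of the IOD request pipeline: portal submission,
# workflow approval (where chargeback hooks in), cloud orchestration,
# and per-VM security policy. All function names are made up.

def portal_submit(spec: dict) -> dict:
    """Front-end web portal: capture the request."""
    return {"request": spec, "status": "submitted"}

def workflow_approve(ticket: dict) -> dict:
    """Workflow engine: approval step and chargeback integration point."""
    ticket["status"] = "approved"
    return ticket

def orchestrate(ticket: dict) -> dict:
    """Cloud manager: provision the VM for the approved request."""
    ticket["vm"] = f"vm-{ticket['request']['name']}"
    return ticket

def apply_security_policy(ticket: dict) -> dict:
    """Security layer: attach a policy enclave that travels with the VM."""
    ticket["enclave"] = "default"
    return ticket

ticket = apply_security_policy(orchestrate(workflow_approve(
    portal_submit({"name": "web01", "cpus": 2}))))
print(ticket["vm"])  # vm-web01
```

Chaining the stages this way mirrors why provisioning drops from 30 days to 30 minutes: every hand-off that used to be a human ticket queue becomes a function call in an automated workflow.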
“Any virtual environment has to be cost effective, so that means it has to be simple while being aware of any and all changes in real time,” said Karmel.
This is especially important in the security arena. Traditional security
operates at the hardware or software layer. But the addition of a
virtualization layer, said Karmel, provides too many gray areas for such
security tools to operate effectively. Hence security itself is now
being virtualized to eliminate yet another wave of security holes
showing up in the corporate networks.
Using Infrastructure on Demand, the National Lab is creating virtual security enclaves with vShield that prevent one desktop or client from infecting others and keep virtual machines (VMs) out of harm’s way.
Rules are set indicating access rights, as well as security protocols
based on threat detection. Traditional security tools interface with
this virtual security layer to keep servers and devices more protected.
Any time a threat is detected, the offending virtual computer is sent to a remediation area, which has no network connectivity with which to infect other machines.
“This all occurs automatically based on preset policy,” said Karmel. “If
a VM is moved from one host to another, the security policy given to it
moves with it.”
To prevent VM sprawl, VMs are given an expiry date, one year by default, though that can be adjusted. Thirty days before the due date, an email is automatically generated asking the VM owner about renewal.
A similar reminder is sent with 10 days left, and again the day before expiry. As soon as the VM is turned off, the user is informed and asked whether he or she wants it back online. Even then, 29 days later, the user is told that the VM is scheduled for deletion. The next day it is deleted.
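The expiry policy just described (a one-year default, reminders at 30, 10, and 1 days out, and deletion 30 days after power-off) can be sketched as a simple scheduling function. The function name, parameters, and returned action strings are all illustrative, not Los Alamos's actual implementation.

```python
from datetime import date, timedelta
from typing import Optional

REMINDER_DAYS = (30, 10, 1)   # days before expiry that trigger an owner email
GRACE_DAYS = 30               # days a powered-off VM is retained before deletion

def expiry_action(today: date, expiry: date,
                  powered_off_on: Optional[date]) -> str:
    """Decide what the lifecycle engine should do for one VM today."""
    if powered_off_on is not None:
        idle = (today - powered_off_on).days
        if idle >= GRACE_DAYS:
            return "delete"
        if idle == GRACE_DAYS - 1:
            return "notify: scheduled for deletion tomorrow"
        return "retain powered off"
    days_left = (expiry - today).days
    if days_left in REMINDER_DAYS:
        return f"email owner: expires in {days_left} day(s)"
    if days_left <= 0:
        return "power off and notify owner"
    return "no action"

provisioned = date(2011, 5, 1)
expiry = provisioned + timedelta(days=365)  # one-year default expiry
print(expiry_action(expiry - timedelta(days=30), expiry, None))
# email owner: expires in 30 day(s)
```

Run once a day per VM, a rule like this implements the whole lifecycle Karmel describes, including the snapshot-to-tape step, which would hook in at the "delete" action.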
However, a backup is retained for seven years just in case. The NetApp
storage is used to create snapshots of VMs before they are retired to
tape. For now, restores are not automated. But in the next version of
Infrastructure on Demand, users will be able to restore VMs they desire
in a few clicks.
“Lifecycle management of VMs is very important,” said Karmel.
The organization has erected a chargeback structure. Cloud resources are priced according to CPU, RAM and disk, and users can see the total cost before submitting a request for IT resources. Following a request, the line manager must approve and accept the charges to that unit.
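A chargeback quote of this kind is just a rate card applied to the requested resources. The sketch below shows the idea; the rates are made-up placeholders, not Los Alamos's actual pricing.

```python
# Illustrative chargeback calculation: resources priced per CPU, per GB
# of RAM, and per GB of disk. The MONTHLY_RATES values are hypothetical.

MONTHLY_RATES = {"cpu": 25.00, "ram_gb": 5.00, "disk_gb": 0.10}

def monthly_cost(cpus: int, ram_gb: int, disk_gb: int) -> float:
    """Total monthly charge for one requested VM configuration."""
    return round(cpus * MONTHLY_RATES["cpu"]
                 + ram_gb * MONTHLY_RATES["ram_gb"]
                 + disk_gb * MONTHLY_RATES["disk_gb"], 2)

# Shown to the user before submission; the line manager then approves
# the charge against that unit.
quote = monthly_cost(cpus=2, ram_gb=8, disk_gb=100)
print(f"Estimated monthly cost: ${quote:.2f}")  # Estimated monthly cost: $100.00
```

Surfacing the quote before submission is the key design point: it turns IT capacity into a priced good, which discourages over-requesting and gives the approving manager a concrete number to sign off on.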
“You have to build best practices around our workloads,” said Karmel.
Service Level Agreements (SLAs) are set at four nines (99.99 percent). If hardware goes down and Infrastructure on Demand misses the SLA, it does not charge for that resource for that month. In addition, uptime and availability metrics are regularly published so users are fully informed.
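A four-nines target leaves only a few minutes of allowed downtime per month, and the "no charge if the SLA is missed" rule is a simple comparison against measured uptime. The sketch below is illustrative; the function names are not from any real billing system.

```python
# Four nines (99.99%) availability: compute the monthly downtime budget
# and waive the charge for any month in which measured uptime fell short.

SLA = 0.9999  # four nines

def downtime_budget_minutes(days_in_month: int = 30) -> float:
    """Minutes of downtime the SLA permits in one month."""
    return days_in_month * 24 * 60 * (1 - SLA)

def monthly_charge(base_charge: float, measured_uptime: float) -> float:
    """Waive the charge for any month in which the SLA was missed."""
    return 0.0 if measured_uptime < SLA else base_charge

print(round(downtime_budget_minutes(), 1))  # about 4.3 minutes in a 30-day month
print(monthly_charge(100.0, 0.9995))        # 0.0 -- SLA missed, no charge
```

Publishing the measured uptime alongside the charge, as the Lab does, lets users verify the waiver calculation themselves.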
At the moment, separate network, security and virtual server teams are
being maintained to monitor the infrastructure. Over time, this may be
streamlined to one centralized unit."