Management platform sizing means sizing the following components, which provide the functional capabilities:
Service Request Management
Service Monitoring & Service Level Management
Service Usage & Accounting
Sizing will be affected by the non-functional considerations that need to be addressed by each of these components of the management platform. One should review the performance reports and workload pattern/handling capabilities of each of the selected products to validate that the sizing considered can meet the non-functional requirements requested of the solution.
The size of the management platform depends on the size of the managed environment. It is preferable to keep a centralized management environment and scale it as needed when the managed environment grows. This is often not an easy calculation or a simple process; you need to apply careful engineering to plan the capacity for each capability. Apart from the capabilities discussed above, the following key area also needs to be covered:
Service Availability Management
In order to size for all these capabilities, you need answers to some very critical questions. Right-sizing and capacity planning depend on how well the project can answer questions such as:
What operations are expected to be performed with the management platform?
What are the average and peak concurrent administrator workloads?
What is the enterprise network topology?
What is the expected workload for provisioned virtual servers, and how do they map to the physical configuration?
For the provisioned servers: What is the distribution size?
What are the application service level requirements?
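One practical way to use the answers is to capture them in a structured form and check the proposed sizing against the workload limits published in the performance reports of the selected products. Below is a minimal Python sketch; the field names and limit values are hypothetical, not taken from any specific product.

```python
from dataclasses import dataclass

@dataclass
class SizingInputs:
    """Answers to the sizing questionnaire (hypothetical values)."""
    avg_concurrent_admins: int
    peak_concurrent_admins: int
    provisioned_vms: int
    requests_per_admin_per_hour: float

@dataclass
class ProductCapacity:
    """Workload limits taken from the product's performance reports."""
    max_concurrent_admins: int
    max_requests_per_hour: float

def validate_sizing(inputs: SizingInputs, capacity: ProductCapacity) -> list[str]:
    """Return the non-functional requirements the sizing fails to meet."""
    gaps = []
    if inputs.peak_concurrent_admins > capacity.max_concurrent_admins:
        gaps.append("peak administrator concurrency exceeds product limit")
    peak_requests = inputs.peak_concurrent_admins * inputs.requests_per_admin_per_hour
    if peak_requests > capacity.max_requests_per_hour:
        gaps.append("peak request rate exceeds product limit")
    return gaps

# Example: 40 admins at peak, each issuing ~30 requests per hour.
print(validate_sizing(SizingInputs(25, 40, 2000, 30.0),
                      ProductCapacity(max_concurrent_admins=50,
                                      max_requests_per_hour=1000.0)))
```

Anything returned by validate_sizing points at a component that needs to be scaled up or out before the non-functional requirements can be met.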
High Availability (HA) is another important consideration to include in the capacity planning. The management platform has to be designed for HA, with appropriate policies defined.
This IBM® Redpaper™ publication introduces PowerVM™ Active Memory™ Sharing on IBM Power Systems™ based on POWER6® and later processor technology. Active Memory Sharing is a virtualization technology that allows multiple partitions to share a pool of physical memory. This is designed to increase system memory utilization, thereby enabling you to realize a cost benefit by reducing the amount of physical memory required.
The paper provides an overview of Active Memory Sharing, and then demonstrates, in detail, how the technology works and in what scenarios it can be used. It also contains chapters that describe how to configure, manage and migrate to Active Memory Sharing based on hands-on examples.
The paper is targeted both to architects and consultants who need to understand how the technology works to design solutions, and to technical specialists in charge of setting up and managing Active Memory Sharing environments. For performance-related information, see: ftp://ftp.software.ibm.com/common/ssi/sa/wh/n/pow03017usen/POW03017USEN.PDF
Dubuque, Iowa and IBM Combine Analytics, Cloud Computing and Community Engagement to Conserve Water
DUBUQUE, Iowa - 20 May 2011: The City of Dubuque and IBM (NYSE: IBM) today announced that the IBM analytics and cloud computing technology deployed in 2010 by Dubuque as part of its Smarter Sustainable Dubuque research helped reduce water utilization by 6.6 percent and increased leak detection and response eightfold.
The Smarter Sustainable Dubuque Water Pilot Study empowered 151 Dubuque households with information, analysis, insights and social computing around their water consumption for nine weeks. By providing citizens and city officials with an integrated view of water consumption, the Water Pilot resulted in water conservation, an increased leak reporting rate, and behavior changes.
Water savings were measured by comparing the consumption of the 151 pilot households with that of another 152 control-group households with identical smart meters but without access to the analysis and insights provided by the Water Pilot Study for the nine-week duration.
The smart meter system monitored water consumption every 15 minutes, and the readings were collected and communicated to the IBM Research Cloud. Additional data was gathered from sources including weather, demographics, and household characteristics. Using cloud computing, the data was analyzed to trigger notification of potential leaks and anomalies, and helped volunteers understand their consumption in greater detail. Volunteers were only able to view their own consumption habits, while city management could see the aggregate data. All participating homes were volunteers, and the data being collected was anonymous and contained no confidential information.
Participating households were alerted about potential anomalies and leaks and were able to get a better understanding of their consumption patterns and to compare and contrast them anonymously with others in the community. Pilot study participants accessed their personal water usage information through a website portal and participated in online games and competitions aimed at promoting sustainable behavior, enabling them to become fully engaged and informed about their consumption and the impact of the changes they made to it. Participants were able to see their data expressed in dollar savings, gallon savings and carbon reduction.
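The release does not describe the detection logic itself, but the essence of flagging a leak from 15-minute readings can be illustrated in a few lines of Python. The threshold and window below are illustrative assumptions, not details from the pilot.

```python
def detect_possible_leak(readings_gallons: list[float],
                         min_flow: float = 0.05) -> bool:
    """Flag a possible leak when every 15-minute reading in the window
    shows flow above min_flow -- that is, the water never stops running.

    readings_gallons: consumption per 15-minute interval over, say, 24 hours.
    """
    return len(readings_gallons) > 0 and all(r > min_flow for r in readings_gallons)

# A household whose flow never drops to zero overnight is worth a notification.
day_of_readings = [0.3, 0.25, 0.28, 0.3] * 24  # 96 intervals = 24 hours
print(detect_possible_leak(day_of_readings))   # True -> notify the household
```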
A cloud is not a cloud if it is not elastic. The elastic property of the cloud to expand and shrink based on demand is possible only with proper capacity planning. I feel the most difficult exercise while putting together a cloud solution is capacity planning for your cloud. By this, I mean you have to size the managed environment as well as the management platform.
Most of the engagements that I've walked into already have some capacity or infrastructure that they want us to leverage in the cloud. So the comparison becomes difficult if you don't have a standard measuring unit for your infrastructure: for instance, how do you know how a quad-core on an Intel platform compares to a POWER7 core? I found a good explanation in an interesting article.
The answer to this difficult question was to use something called the cloud CPU unit, which is nothing but computing power equal to the processing power of a one-gigahertz CPU. When a user requests two CPUs, for example, they will get the processing power of two 1 GHz CPUs. This means that a system with two CPUs, each with four cores, running at 3 GHz will have the equivalent of 24 CPU units (2 CPUs x 4 cores x 3 GHz = 24 CPU units).
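The arithmetic in the example generalizes directly. Here is a small helper function, my own sketch of the convention described in the article, not code from it:

```python
def cpu_units(sockets: int, cores_per_socket: int, ghz: float) -> float:
    """One CPU unit = the processing power of a 1 GHz core, so a
    system's capacity is sockets x cores per socket x clock speed."""
    return sockets * cores_per_socket * ghz

# The example from the text: 2 CPUs x 4 cores x 3 GHz = 24 CPU units.
print(cpu_units(2, 4, 3.0))  # 24.0

# A user requesting "two CPUs" consumes 2 units (two 1 GHz CPUs' worth).
remaining = cpu_units(2, 4, 3.0) - 2
print(remaining)  # 22.0 units left to allocate
```

Note that the unit deliberately ignores per-core architectural differences between, say, an Intel quad-core and a POWER7 core; that simplification is exactly what makes a single cross-platform measuring stick possible.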
The other dimension of the complexity is determining the resource needs and doing the trending and forecasting. I typically collect the projections from the clients and then put down some critical assumptions to determine how big my cloud should be. Some critical questions that I typically ask:
How many concurrent users and peak users are there, and what percentage of these users needs to be covered?
What type of workloads do they typically run: development, test?
What are the image attributes: memory, CPU, storage, etc.?
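Given answers to these questions, a first-cut estimate of the managed environment follows almost mechanically. A hedged sketch in Python; every number and parameter name below is an illustrative assumption, not client data:

```python
def size_managed_environment(total_users: int,
                             concurrency_pct: float,
                             vms_per_concurrent_user: float,
                             vm_cpu_units: float,
                             vm_mem_gb: float,
                             vm_storage_gb: float,
                             headroom: float = 0.2) -> dict:
    """Translate user projections and image attributes into aggregate
    capacity, with headroom for peaks (a first-cut estimate only)."""
    concurrent = total_users * concurrency_pct
    vms = concurrent * vms_per_concurrent_user * (1 + headroom)
    return {
        "vms": round(vms),
        "cpu_units": round(vms * vm_cpu_units),
        "memory_gb": round(vms * vm_mem_gb),
        "storage_gb": round(vms * vm_storage_gb),
    }

# 500 dev/test users, 40% concurrent, one 2-unit/4 GB/60 GB VM each.
print(size_managed_environment(500, 0.4, 1.0, 2.0, 4.0, 60.0))
```

The 20 percent headroom factor is a placeholder for the "critical assumptions" mentioned above; in practice it would come from the peak-versus-average projections collected from the client.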
An infrastructure planner for cloud made life easy for me: it had a user-friendly interface to take me through these steps and arrive at a sizing for the managed environment. Once we know the managed environment, we can size the management platform. I'll discuss the details of how to plan the managed environment in my next post.
I'll be interested in putting together the top 10 parameters that are critical for sizing the cloud managed and management environments. I look forward to your comments.
In Collaboration With Ixia, Brocade Will Demonstrate the
Performance, Reliability and Advanced Feature-Set of the Industry's
First 100 GbE Terabit-Trunk Router
LAS VEGAS, NV -- (MARKET WIRE) -- 05/09/11 -- INTEROP 2011 -- Brocade (NASDAQ: BRCD) today announced that it will work with Ixia
(NASDAQ: XXIA) to replicate mission-critical service provider
environments and test high-capacity Brocade® Ethernet network solutions
designed to help service providers become cloud-optimized. The
demonstration creates a true-to-life service provider infrastructure
scenario for increasing IPv4/IPv6 routing scalability within the core
Multiprotocol Label Switching (MPLS) network while retaining high
service levels for end customers. The demonstration will be held in the
Brocade booth (# 833) during Interop Las Vegas 2011, at the Mandalay Bay Convention Center.
As service providers evolve to become destinations offering cloud-based
services, rather than just basic data delivery, the performance and
scalability demands on their networks have increased significantly. The
Brocade MLXe Core Router is a 100 Gigabit Ethernet (GbE)-ready solution
that enables service providers and virtualized data centers to support
these demands by efficiently delivering cloud-based services that use
less infrastructure and help reduce expenditures.
In this specific demonstration, Brocade and Ixia
will test the IPv4/IPv6 traffic flows, MPLS and throughput capabilities
of the Brocade MLXe multiservice router over 10 and 100 GbE
connections. By leveraging Ixia's leading test solutions, attendees will be able to view the following:
Ixia IxNetwork application emulating a large-scale Layer 3 virtual
private network (VPN) topology surrounding the Brocade MLXe router
Brocade MLXe router maintaining forwarding information and peering relationships
IxNetwork generating line-rate traffic sourced and destined over K2
100 GbE ports and Xcellon-Flex™ 10 GbE ports to fully load the Brocade
MLXe router, showcasing the forwarding plane performance
IxNetwork's real-time flow statistics and detailed reporting tools
validating the scalability of the Brocade MLXe peering sessions and
control plane scalability
Brocade MLXe forwarding low-latency traffic to all destination routes
Repeatable and scalable testing using Test Composer automation built into IxNetwork
About Brocade Brocade (NASDAQ: BRCD) networking solutions help the world's
leading organizations transition smoothly to a world where applications
and information reside anywhere. (www.brocade.com)
Brocade, the B-wing symbol, BigIron, DCFM, DCX, Fabric OS, FastIron, IronView, NetIron, SAN Health, ServerIron, TurboIron, and Wingspan are registered trademarks, and Brocade Assurance, Brocade NET Health, Brocade One, Extraordinary Networks, MyBrocade, VCS, and VDX are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned are or may be trademarks or service marks of their respective owners.
Faced with a nasty loss of credibility, a string of poor financial results, shrinking market share in its core business, an unwieldy and alienating bureaucracy blamed for the top executive exodus it's been experiencing, and a stock price that's plunged into the toilet, Cisco, once an economic bellwether, is promising to do more than simply kill off its once-popular Flip video camcorder business and lay 550 people off, an admission that its foray into the consumer segment had largely failed.
It said in a press release issued Thursday morning that it's moving to a "streamlined operating model" focused on five areas, not, apparently, the literally 30 different directions it's been going in, although it did say, come to think of it, something about "greater focus," so maybe it's not really cutting back.
These focus areas are, it said, "routing, switching, and services;
collaboration; data center virtualization and cloud; video; and
architectures for business transformation."
Nobody seems to know what that last one is; the Wall Street Journal criticized Cisco for not being able to explain in plain English what it's doing, and Barron's complained that it needed a Kremlinologist to decrypt the jargon in the press release.
Anyway, Cisco's apparently going to try to simplify its sales, services and engineering organizations in the next 120 days, or by July 31, when its next fiscal year begins. Well, maybe not everything, it warned, but sales ought to be reorganized by then.
This streamlining seems to mean that:
Field operations will be organized into three geographic regions for faster decision making and greater accountability: the Americas; EMEA; and Asia Pacific, Japan and Greater China, still under its sales chief;
Services will follow key customer segments and delivery models, still under its multi-tasking COO Gary Moore;
Engineering, still reporting to Moore, will now be led by two-in-a-box Pankaj Patel and Padmasree Warrior, and aside from the company's five focus areas, there will be a dedicated Emerging Business Group under Marthin De Beer focused on "select early-phase businesses" "with continued focus on integrating the Medianet architecture for video across the company."
Lastly, it's going to "refine" (but apparently not dismantle) its hydra-headed, decision-inhibiting Council structure blamed for frustrating and running off key talent, down to three "that reinforce consistent and globally aligned customer focus and speed to market across major areas of the business: Enterprise, Service Provider and Emerging Countries. These councils will serve to further strengthen the connection between strategy and execution across functional groups. Resource allocation and profitability targets will move to the sales and engineering leadership teams which will have accountability and direct responsibility for business results."
It's unclear whether any of this means layoffs.
Cisco piped in a quote credited to Moore saying: "Cisco is focused on
making a series of changes throughout the next quarter and as we enter
the new fiscal year that will make it easier to work for and with Cisco,
as we focus our portfolio, simplify operations and manage expenses. Our
five company priorities are for a reason - they are the five drivers of
the future of the network, and they define what our customers know
Cisco is uniquely able to provide for their business success. The new
operating model will enable Cisco to execute on the significant market
opportunities of the network and empower our sales, service and
Brocade Introduces Brocade CloudPlex(TM), an Open, Extensible Architecture for Virtualization and Cloud-Optimized Networks
SAN JOSE, CA -- (MARKET WIRE) -- 05/03/11 -- Brocade (NASDAQ: BRCD) today introduced a new technology architecture that outlines the company's vision and the technology investments it will make to help its customers evolve their data centers and IT resources and migrate them to the "Virtual Enterprise."
Brocade intends to deliver on this vision through the Brocade CloudPlex™ architecture, an open, extensible framework intended to enable customers to build the next generation of distributed and virtualized data centers in a simple, evolutionary way that preserves their ability to dictate all aspects of the migration. What is unique about the Brocade CloudPlex architecture is that it is not only the foundation for integrated compute blocks but also embraces a customer's existing multi-vendor infrastructure to unify all of their assets into a single compute and storage domain.
Brocade CloudPlex meets the goal of the Brocade One™ strategy, designed to help companies transition smoothly to a world where information and applications can reside anywhere by delivering solutions that provide unmatched simplicity, non-stop performance, application optimization and investment protection.
"Virtualization has fundamentally changed the nature of applications by detaching them from their underlying IT infrastructure and introducing a high degree of application mobility across the entire enterprise," saidDave Stevens, chief technology officer at Brocade. "This is the concept of the 'Virtual Enterprise' that we feel unleashes the true potential of cloud computing in all its forms -- private, hybrid and public."
Through the CloudPlex architecture, Brocade will help its customers scale their IT environments from managing hundreds of virtual machines (VMs) in certain classes of servers to tens of thousands of VMs that are distributed and mobilized across their entire enterprise and throughout the cloud. According to Gartner, the expansion of VMs not only improves automation and reduces operational expenses, it is the primary requirement for IT organizations to migrate to cloud architectures.(1)
Gartner advises that, "IT organizations pursuing virtualization should have an overall strategic plan for cloud computing and a roadmap for the future, and should plan proactively. Further, these organizations must focus on management and process change to manage virtual resources, and to manage the speed that virtualization enables, to avoid virtualization sprawl."
CloudPlex Components The Brocade CloudPlex architecture will define the stages and the components from Brocade and its partners that are required to get to the Virtual Enterprise. The stages comprise three main categories -- fabrics, globalization and open technologies -- with some of these components being available today while others are in development or on the roadmap of Brocade's engineering priorities.
The currently available components are:
Networks comprised of Ethernet fabrics and Fibre Channel fabrics as the flat, fast and simple foundation designed to scale to highly virtualized IT environments;
Multiprotocol fabric adapters for simplified server I/O consolidation;
High-performance application delivery products necessary for load balancing network traffic across distributed data centers;
The components on the roadmap are:
Integrated, tested and validated solution bundles of server, virtualization, networking and storage resources called Brocade Virtual Compute Blocks. An integral element of the Brocade CloudPlex architecture, Brocade will enable its systems partners and integrators to deliver Virtual Compute Block solutions comprising servers, hypervisors, storage, and cloud-optimized networking in pre-bundled, pre-racked configurations with unified support;
Powerful and universal fabric and network extension delivered through a new platform capable of supporting a number of IP, SAN and mainframe extension technologies including virtual private LAN services (VPLS), Fibre Channel over IP (FCIP) and FICON;
An advancement of Brocade Fabric ID technology called "Cloud IDs" that enables simple and secure isolation and mobility of VMs for native multi-tenancy cloud environments;
An open framework for management, provisioning and integration designed to promote multi-vendor and system-to-system interoperability specifically for cloud environments. These include Brocade products supporting OpenStack software for storage, compute and Software-Defined Networking (SDN) capabilities enabled through OpenFlow;
Unified education, support and services delivered through Brocade and partners to help customers manage this highly distributed "Virtual Enterprise" environment.
Brocade Partner Endorsements "We are excited to be working with Brocade to develop highly-scalable virtualized computing and storage configurations, providing superior cost-performance solutions today for our customers while at the same time establishing a clear path to cloud IT architectures in the future. Specifically, Brocade switches coupled with Dell PowerEdge servers and EqualLogic or Dell Compellent storage provide the scalability, flexibility and efficiency our customers demand in the virtual era." --Dario Zamarian, Vice President and General Manager, Dell Networking
"Fujitsu'sglobal cloud strategy is built on our real experience in working with customers on the delivery of both Services and Infrastructures for Cloud computing across the world. We believe that common processes, holistic management of infrastructure elements and the use of industry standards are fundamentally helping customers to ease the transition and to migrate their largest and most complex IT environments smoothly to join 'any mode' of the cloud consumption of their choosing. Brocade shares these views and has laid out a compelling vision through its CloudPlex architecture that Fujitsu Technology Solutions fully endorses and will support. This architecture provides compelling added value toFujitsu'sCloud offerings by defined standards and holistic management." --Jens-Peter Seick, Senior Vice President, Data Center Systems,Fujitsu
"Hitachiis helping customers deliver IT services through the cloud by using open, standards-based technologies that let them build and scale their virtualized data centers at their own pace. With Brocade's CloudPlex architecture, bothHitachiand Brocade address our mutual customers' IT needs and protect their existing IT investments by migrating their legacy devices to cloud deployments -- preventing cloud from becoming just another IT silo." --Sean Moser, Vice President, Storage Software Product Management,Hitachi Data Systems
"Recent advancements in cloud and virtualization are making it possible for enterprises to deploy an intelligent infrastructure that enables workloads to move around the enterprise and around the world in a transparent, fluid way. We believe that enabling flexible application deployment is imperative to mainstream adoption of cloud computing.VMwareand Brocade share a common vision of offering customers the ability to accelerate IT by reducing complexity while significantly lowering costs and enabling more flexible, agile services delivery." --Parag Patel, Vice President, Global Strategic Alliances,VMware
Unveiling at Brocade Technology Day Summit Brocade CTO Dave Stevens will discuss more details about the CloudPlex architecture at the annual Brocade Technology Day Summit taking place on its San Jose campus on May 3 and 4. To participate in the event via a live webcast, please visit the following page on Brocade's Facebook page or simply register for the event at:
(1) Source: "The Road Map From Virtualization to Cloud Computing" (Gartner, March 2011)
Load Balancers Are Dead: Time to Focus on Application Delivery
2 February 2009 | Mark Fabbi | Gartner RAS Core Research Note G00164098
When looking at feature requirements in front of and between server tiers, too many organizations think only about load balancing. However, the era of load balancing is long past, and organizations will be better served by focusing their attention on improving the delivery of applications.
Overview This research shifts the attention from basic load-balancing features to application delivery features to aid in the deployment and delivery of applications. Networking organizations are missing significant opportunities to increase application performance and user experience by ignoring this fundamental market shift.
Enterprises are still focused on load balancing.
There is little cooperation between networking and application teams on a holistic approach for application deployment.
Properly deployed application delivery controllers can improve application performance and security, increase the efficiency of data center infrastructure, and assist the deployment of the virtualized data center.
Network architects must shift attention and resources away from Layer 3 packet delivery networks and basic load balancing to application delivery networks.
Enterprises must start building specialized expertise around application delivery.
What You Need to Know IT organizations that shift to application delivery will improve internal application performance that will noticeably improve business processes and productivity for key applications. For external-facing applications, end-user experience and satisfaction will improve, positively affecting the ease of doing business with supply chain partners and customers. Despite application delivery technologies being well proved, they have not yet reached a level of deployment that reflects their value to the enterprise, and too many clients do not have the right business and technology requirements on their radar.
Analysis What's the Issue? Many organizations are missing out on big opportunities to improve the performance of internal processes and external service interactions by not understanding application delivery technologies. This is very obvious when considering the types of client inquiries we receive on a regular basis. In the majority of cases, clients phrase their questions to ask specifically about load balancing. In some cases, they are replacing aged server load balancers (SLBs), purchased before the advent of the advanced features now available in leading application delivery controllers (ADCs). In other cases, we get calls about application performance challenges, and, after exploring the current infrastructure, we find that these clients have modern, advanced ADCs already installed, but they haven't turned on any of the advanced features and are using new equipment as if it were a circa-1998 SLB. In both cases, there is a striking lack of understanding of what ADCs can and should bring to the enterprise infrastructure. Organizations that still think of this critically important position in the data center as one that only requires load balancing are missing out on years of valuable innovation and are not taking advantage of the growing list of services that are available to increase application performance and security and to play an active role in the increasing virtualization and automation of server resources. Modern ADCs are the only devices in the data center capable of providing a real-time, pan-application view of application data flows and resource requirements. This insight will continue to drive innovation of new capabilities for distributed and virtualized applications.
Why Did This Happen? The "blame" for this misunderstanding can be distributed in many ways, though it is largely history that is at fault. SLBs were created to better solve the networking problem of how to distribute requests across a group of servers responsible for delivering a specific Web application. Initially, this was done with simple round-robin DNS, but because of the limitations of this approach, function-specific load-balancing appliances appeared on the market to examine inbound application requests and to map these requests dynamically to available servers. Because this was a networking function, the responsibility landed solely in network operations and, while there were always smaller innovative players, the bulk of the early market ended up in the hands of networking vendors (largely Cisco, Nortel and Foundry [now part of Brocade]). So, a decade ago, the situation basically consisted of networking vendors selling network solutions to network staff. However, innovation continued, and the ADC market became one of the most innovative areas of enterprise networking over the past decade. Initially, this innovation focused on the inbound problem — such as the dynamic recognition of server load or failure and session persistence to ensure that online "shopping carts" weren't lost. Soon, the market started to evolve to look at other problems, such as application and server efficiency. The best example would be the adoption of SSL termination and offload. Finally, the attention turned to outbound traffic, and a series of techniques and features started appearing in the market to improve the performance of the applications being delivered across the network. Innovations migrated from a pure networking focus to infrastructure efficiencies to application performance optimization and security — from a networking product to one that touched networking, server, applications and security staff. The networking vendors that were big players when SLB was the focus quickly became laggards in this newly emerging ADC market.
Current Obstacles As the market shifts toward modern ADCs, some of the blame must rest on the shoulders of the new leaders (vendors such as F5 and Citrix NetScaler). While their products have many advanced capabilities, these vendors often undersell their products and don't do enough to clearly demonstrate their leadership and vision to sway more of the market to adopting the new features. The other challenge for vendors (and users) is that modern ADCs impact many parts of the IT organization. Finally, some blame rests with the IT organization. By maintaining siloed operational functions, it has been difficult to recognize and define requirements that fall between functional areas.
Why Do We Need More, and Why Should Enterprises Care? Not all new technologies deserve consideration for mainstream deployment. However, in this case, advanced ADCs provide capabilities to help mitigate the challenges of deploying and delivering the complex application environments of today. The past decade saw a mass migration to browser-based enterprise applications targeting business processes and user productivity, as well as increasing adoption of service-oriented architectures (SOAs), Web 2.0 and now cloud computing models. These approaches tend to place increased demand on the infrastructure because of "chatty" and complex protocols. Without features to mitigate latency, to reduce round trips and bandwidth, and to strengthen security, these approaches almost always lead to disappointing performance for enterprise and external users. The modern ADC provides a range of features (see Note 1) to deal with these complex environments. Beyond application performance and security, application delivery controllers can reduce the number of required servers, provide real-time control mechanisms to assist in data center virtualization, and reduce data center power and cooling requirements. ADCs also provide simplified deployment and extensibility and are now being deployed between the Web server tier and the application or services tier (for SOA) servers. Most ADCs incorporate rule-based extensibility that enables customization of the behavior of the ADC. For example, a rule might enable the ADC to examine the response portion of an e-commerce transaction to strip off all but the last four digits of credit card numbers (a sketch of such a rule appears after the list below). Organizations can use these capabilities as a simple, quick alternative to modifying Web applications. Most ADCs incorporate a programmatic interface (open APIs) that allows them to be controlled by external systems, including application servers, data center management, and provisioning applications and network/system management applications. This capability may be used for regular periodic reconfigurations (end-of-month closing) or may even be driven by external events (taking an instance of an application offline for maintenance). In some cases, the application programming interfaces link the ADC to server virtualization systems and data center provisioning frameworks in order to deliver the promise of real-time infrastructure.
What Vendors Provide ADC Solutions Today? During the past five years, the innovations have largely segmented the market into vendors that understand complex application environments and the subtleties in implementations (examples would be F5, Citrix NetScaler and Radware) and those with more of a focus on static feature sets and networking. "Magic Quadrant for Application Delivery Controllers" provides a more complete analysis and view of the vendors in the market. Vendors that have more-attractive offerings will have most or all of these attributes:
A strong set of advanced platform capabilities
Customizable, extensible platforms and solutions
A vision focused on application delivery networking
Affinity to applications:
Needs to be application-fluent (that is, they need to "speak the language")
Support organizations need to "talk applications"
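To make the rule-based extensibility mentioned above concrete, here is the credit-card example (stripping all but the last four digits from an outbound response) expressed as a small response-rewriting rule. This is a generic Python sketch of the idea only; real ADCs express such rules in their own languages (F5 iRules, for example, are written in Tcl), and the regular expression here is deliberately simplistic.

```python
import re

# Matches 13-16 digit card numbers, optionally separated by spaces or dashes.
CARD_RE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def mask_card_numbers(response_body: str) -> str:
    """ADC-style response rule: keep only the last four digits of any
    credit card number found in the outbound response."""
    def mask(match: re.Match) -> str:
        digits = re.sub(r"[ -]", "", match.group(0))
        return "*" * (len(digits) - 4) + digits[-4:]
    return CARD_RE.sub(mask, response_body)

print(mask_card_numbers("Charged card 4111 1111 1111 1111 for $25"))
# Charged card ************1111 for $25
```

The appeal, as the note argues, is that this happens in the delivery path: the Web application itself does not have to be modified.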
What Should Enterprises Do About This?
Enterprises must start to move beyond refreshing their load-balancing footprint. The features of advanced ADCs are so compelling for those that make an effort to shift their thinking and organizational boundaries that continuing efforts on SLBs is wasting time and resources. In most cases, the incremental investment in advanced ADC platforms is easily compensated by reduced requirements for servers and bandwidth and the clear improvements in end-user experience and productivity. In addition, enterprises should:
Use the approach documented in "Five Dimensions of Network Design to Improve Performance and Save Money" to understand user demographics and productivity tools and applications. Also, part of this requirements phase should entail gaining an understanding of any shifts in application architectures and strategies. This approach provides the networking team with much greater insight into broader IT requirements.
Understand what they already have in their installed base. We find, in at least 25% of our interactions, enterprises have already purchased and installed an advanced ADC platform, but are not using it to its potential. In some cases, they already have the software installed, so two to three days of training and some internal discussions can lead to massive improvements.
Start building application delivery expertise. This skill set will be one that bridges the gaps between networking, applications, security and possibly the server. Organizations can use this function to help extend the career path and interest for high-performance individuals from groups like application performance monitoring or networking operations. Networking staff aspiring to this role must have strong application and personal communication skills to achieve the correct balance. Some organizations will find they have the genesis of these skills scattered across multiple groups. Building a cohesive home will provide immediate benefits, because the organization's barriers will be quickly eliminated.
Start thinking about ADCs as strategic platforms, and move beyond tactical deployments of SLBs. Once organizations think about application delivery as a basic infrastructure asset, the use of these tools and services (and associated benefits) will be more readily achieved.
Note: We have defined a category of advanced ADCs to distinguish their capabilities from basic, more-static function load balancers. These advanced ADCs operate on a per-transaction basis and achieve application fluency. These devices become actively involved in the delivery of the application and provide sophisticated capabilities, including:
Application layer proxy, which is often bidirectional
Optimization of SAP Infrastructure to Result in Better Performance, Lower Costs and Higher Energy Efficiency
20 Apr 2011:
Today IBM (NYSE: IBM) announced that Audi selected IBM to build a cloud environment for Audi's SAP infrastructure to deliver higher performance, fast and flexible provisioning of SAP applications and capacities, and lower infrastructure costs, and to deliver above-average energy efficiency with the ability to enlarge future SAP applications to an almost unlimited scale.
Audi was facing challenges scaling its IT systems because of the increased use of business-critical applications in areas such as production and logistics, supplier relationship management and human resources, which challenged its IT infrastructure's reliability.
In April 2010, Audi signed a contract with IBM to rebuild its existing SAP infrastructure, including consolidation and virtualization of the server hardware, process standardization, opportunities for performance-related billing and much higher operational flexibility.
Audi's new SAP infrastructure solution is based on a new generation of high-performance IBM POWER7 servers and IBM database technology (DB2).
"Along with a very high level of reliability and failure safety, the
new SAP Infrastructure solution, which we will migrate into a private
cloud, substantially lowering energy consumption," said Audi's Lorenz
Schoberl, head of IT Infrastructure Services. "The DB2 solution's
built-in data compression capability will enable us to save time and
reduce costs of storage and archiving."
"We were able to demonstrate that our combination of POWER servers
and DB2 will decrease the total cost of ownership over the next four
years -- from a business and technology point of view," said Gunter
Frohlich, IBM Client Manager for Audi.
The new infrastructure is fully operational and will be managed by
IBM in a private cloud environment hosted in Audi's data center.
About IBM Cloud Computing
IBM has helped thousands of clients adopt cloud models and manages
millions of cloud based transactions every day. IBM assists clients in
areas as diverse as banking, communications, healthcare and government
to build their own clouds or securely tap into IBM cloud-based business
and infrastructure services. IBM is unique in bringing together key
cloud technologies, deep process knowledge, a broad portfolio of cloud
solutions, and a network of global delivery centers. For more
information about IBM cloud solutions, visit www.ibm.com/smartcloud
SAN JOSE, CA -- (MARKET WIRE) -- 03/22/11 -- Brocade® (NASDAQ: BRCD) today announced it is taking a leadership position to help define standards to enable scalability and manageability in hyper-scale cloud infrastructures. Brocade has become an initial member of the Open Networking Foundation (ONF), a non-profit organization dedicated to promoting a new approach to networking called Software-Defined Networking (SDN).
SDN involves several components, one of the most important being the standards-based OpenFlow, an emerging standard that gives service providers granular control of their network infrastructures. Brocade will leverage its work in developing OpenFlow across its high-performance service provider portfolio to enable customers to build high-value applications across their networks with greater efficiency and unparalleled simplicity.
Today's service providers and network operators face a number of challenges that require multiple solutions in order to ensure highly efficient and profitable operation. Brocade's goal in working with the Open Networking Foundation is to alleviate the burden of operational complexity for service providers by leveraging OpenFlow to manage and operate their networks.
Brocade has developed an OpenFlow-enabled IP/MPLS router as part of its service provider product portfolio for application verification and interoperability testing with its partners and customers. Brocade plans to make additional OpenFlow strategy and product announcements later this year. Brocade will initially focus its efforts on delivering solutions that enable the scalability and manageability required in hyper-scale cloud infrastructures.
"Stronger definition of network behavior in software is a growing trend, and open interfaces are going to lead to faster innovation," said Nick McKeown, ONF board member and professor at Stanford University.
"In June 2010, Brocade was one of the first major networking vendors to publicly endorse OpenFlow," said Ken Cheng, vice president, Service Provider Products, Brocade. "Our goal is to leverage OpenFlow to build compelling cloud networking solutions for service providers and network operators worldwide, while lowering the cost associated with operating their networks."
Social Media Tags: Brocade, OpenFlow, NetIron, Storage Area Networks, SAN, IP, Fibre Channel, Ethernet, WAN, LAN, Networks, Switch, Router
Brocade CTO Named to TechAmerica CLOUD(2) Commission
Commission to Provide Recommendations on Deployment of Cloud Technologies to the United States Federal Government
SAN JOSE, CA -- (MARKET WIRE) -- 04/15/11 -- Brocade (NASDAQ: BRCD) today announced that Dave Stevens, the company's chief technology officer (CTO) has been named a Commissioner on the TechAmerica Foundation's "Leadership Opportunity in U.S. Deployment of the Cloud," known also as CLOUD(2).
The commission's mandate is to deliver recommendations to the U.S. government on ways it can effectively deploy cloud technologies and set specific public policies that will help drive further cloud innovation in both the private and public sectors.
Brocade has direct and highly relevant experience in the challenges and opportunities that the CLOUD(2) Commission is addressing by virtue of its 15 years of experience building mission-critical data center networks for some of the most demanding IT environments in the world. This experience and expertise have positioned Brocade to address the challenges of moving to more agile, flexible cloud IT models.
The Brocade approach, as defined by its Brocade One™ strategy, is to help its customers migrate smoothly from current networking architectures to a world where information and applications reside and can be accessed anywhere through open, multivendor cloud technologies.
"Brocade is an established leader in building and deploying fabric-based data center architectures, and customers continue to trust their networks to Brocade as they move to highly virtualized and cloud models," said Dave Stevens, chief technology officer at Brocade. "I am honored to serve as a commissioner for CLOUD(2), and I Iook forward to the opportunity to leverage our experience in this space and to play a key role in advancing the deployment of cloud architectures."
The commission will make recommendations for how government should deploy cloud technologies and address policies that might hinder U.S. leadership of the cloud in the commercial space. Recommendations for government deployment will be presented to Federal Chief Information Officer Vivek Kundra. Commercial-facing recommendations will be shared with Commerce Secretary Gary Locke and Commerce Under Secretary Pat Gallagher.
"The Obama Administration has demonstrated a clear understanding of the need to adopt cloud technologies across the government enterprise," said Dallas Advisory Partners Founder, and TechAmerica Foundation Chairman, David Sanders. "CLOUD(2) represents a broad range of companies, and is well-positioned to provide diverse insight on issues critical to the cloud. These new commissioners will be essential to the continued advancement of U.S. innovation, and we look forward to providing the Administration constructive recommendations that address these critical issues."
The commission is composed of 71 experts in the field, from both the business and academic worlds. Leading the CLOUD(2) commission are co-commissioners Salesforce.com CEO and Chairman Marc Benioff and VCE Chairman and CEO Michael D. Capellas, as well as CSC North American public sector president Jim Scheaffer and Microsoft corporate VP of technology policy and strategy Dan Reed.
Also joining co-chairmen Benioff and Capellas representing academia will be John Mallery of Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory, and Michael R. Nelson, visiting professor of Internet studies in Georgetown University's Communication, Culture and Technology Program.
To design a good cloud management platform, we need to understand the managed environment. As we know, the workloads include not only those running on virtual infrastructure but also those on traditional infrastructure. So we need to design a management platform that can support delivery of traditional services as well as cloud services.
The advantage of using the IBM reference architecture (refer to the previous chapter) is that we keep the service management cost to a minimum and are able to manage multiple services (IaaS, PaaS, SaaS, traditional services) through a single management platform (the Common Cloud Management Platform).
The design of the management platform is mainly driven by the platforms we need to manage as well as the services we have to deliver. The core components of the management platform are determined by the amount of service automation the platform is expected to provide.
The cloud management platform can be thought of as a Service Delivery Platform (SDP) as applied in the telecommunications industry. The term Service Delivery Platform usually refers to a set of components that provides a service delivery architecture (such as service creation, session control and protocols) supporting multiple service delivery models.
The core components can again be classified into business support system (BSS) components and operational support system (OSS) components. The business components include ways to manage the customer, subscription, offering & catalog, contract, order, billing, and financial aspects of the platform. The OSS deals with the backend aspects of fulfilling the service request, so it includes components like service automation, provisioning, monitoring and management.
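To illustrate the split: a service request first touches BSS concerns (customer, subscription, order, billing) and is then handed to OSS components for fulfillment (automation, provisioning, monitoring). The following is a minimal illustrative sketch; the class and method names are generic placeholders, not Tivoli product APIs.

```python
class BSS:
    """Business support: customer, subscription, catalog, billing."""
    def accept_order(self, customer: str, offering: str) -> dict:
        # Validate the subscription and record the order for billing.
        print(f"BSS: order for '{offering}' accepted from {customer}")
        return {"customer": customer, "offering": offering}

class OSS:
    """Operational support: automation, provisioning, monitoring."""
    def fulfill(self, order: dict) -> None:
        # Drive service automation to provision and start monitoring.
        print(f"OSS: provisioning '{order['offering']}' "
              f"and enabling monitoring for {order['customer']}")

# An order flows from the business layer to operational fulfillment.
bss, oss = BSS(), OSS()
oss.fulfill(bss.accept_order("Acme Corp", "small-linux-vm"))
```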
The IBM Tivoli suite of products addresses almost all of the OSS requirements as well as some of the key BSS components. As an architect, the key decision is to look at the capabilities required based on the client's needs and create a platform that is extensible. This needs to be done keeping flexibility in mind, which means you have the capability to add and remove components to support different capabilities. In an established and mature data center, it is highly unlikely that all these components are delivered by a single vendor. That's why an architecture built on open standards is critical to the success of building a good management platform.
IBM is leading efforts for the adoption of standards by different cloud providers, consumers and tool vendors. The work being done by IBM with The Open Group and the Cloud Standards Customer Council are examples of this.
Once we have determined the functional components of our solution, we need to address the non-functional requirements. These include aspects like security, availability, resiliency, performance, scalability, capacity planning and sizing. We will need to determine these aspects for the management platform based on the size and heterogeneity of the managed environment. We will discuss these aspects in the next chapter.
Teresa Takai, the Defense Department's chief information officer, says the "paramount" goal of effective security in a cloud computing infrastructure is best achieved using an internal "private" system, though she wouldn't rule out use of commercial providers.
In oral testimony at a hearing of the House Armed Services Subcommittee on Emerging Threats and Capabilities on April 6, Takai said Defense could opt for public cloud services offered by companies such as Google and Microsoft Corp.
In response to questions from Rep. James Langevin, D-R.I., Takai said, "There will be instances where we [can] use commercial cloud providers ... [if] they meet our standards." She did not specify what type of applications Defense would host on a commercial cloud.
Takai added the department plans to tap the Defense Information Systems Agency, which already is providing private cloud services to the Army and email service for 1.4 million personnel. The Army, Takai said, is "looking to move [its] apps to the cloud."
One of her key priorities is to secure the Pentagon's classified networks after masses of data were illicitly siphoned off last fall to the WikiLeaks website, said Takai, who took office last October. In her prepared testimony, she said Defense plans to deploy a public key infrastructure-based identity credential on a hardened smart card for use on the department's Secret classified networks. It is similar to, but stronger than, the technology in the Common Access Card on unclassified networks.
Defense also plans to use a Host-Based Security System to protect classified networks, a tool that "will allow us to know who is on the network" and detect anomalous behavior, Takai told the hearing.