If you haven’t signed up yet, be sure to check out the October cloud computing for developers virtual event. Participants in this two-day event will learn how to leverage the power of the cloud to tackle the toughest business and technical challenges! The event will be packed with real-world examples and live demos of techniques and products – and you’ll see it all without leaving your desk. We're excited to have you there with us, learning new technical skills that prepare us all for a smarter planet.
Here's some of what's planned for the event. Remember that you can ask our team of experts as many questions as you wish about any of our sessions.
IBM technical experts will kick off the event on day 1 with a session on the IBM development and test cloud and you'll see the cloud in action in a live demo. Our experts will discuss use cases and scenarios that will help you as you develop and test in the cloud.
Next we'll discuss a roadmap for how you and IBM can move your application to pattern-based middleware, and why infrastructure-as-a-service alone is not enough to reduce implementation challenges when making the move to software-as-a-service.
Then you will learn how IBM's new Cast Iron Cloud Integration Platform has helped hundreds of customers just like you connect their cloud and on-premises applications in just days with its 'configuration, not coding' approach. You will see an engaging live ERP-to-cloud-CRM demo.
The final day 1 session will demonstrate how to efficiently package middleware and/or applications so that they can be easily deployed into a dynamic "cloudified" IT infrastructure. Topics will include the anatomy of an Open Virtual Appliance (OVA), OVA repositories and lifecycle, single- and multi-image OVAs, and OVF best practices and examples.
That's not all, folks: we have a full set of sessions on day 2 too. Remember, you'll have to register separately for day 2.
We'll start the day off by showing you how solutions such as eXtreme Scale can scale the database layer. You'll also learn how eXtreme Scale and the XC10 appliance support solution-wide HTTP session management and the WebSphere Application Server dynamic cache service for page fragments.
Ever wondered why iSeries may be an ideal platform for cloud computing? The next session will show you how iSeries has been architected for applications that can be delivered in a hosted or SaaS environment, drilling down into the capabilities that make IBM iSeries well suited for SaaS.
I'm sure you will not want to leave before you hear best practices for designing databases for multitenancy and resiliency, which is the topic of the next session. Learn about use cases for AWS and DB2 instances and database schemas, and see a demonstration of setting up HADR in the cloud.
We'll wrap up with a final session examining some technical considerations associated with building a secure application in a cloud environment and then discuss how they can be addressed with IBM products including DataPower, TFIM, TSIEM and TSPM.
We are giving you a choice. Choose the 2-day event best suited to you depending on where you are in the world. Both events will have very similar sessions. Register for the event that is best timed for North American (October 12-13) or European (October 26-27) time zones.
Load Balancers Are Dead: Time to Focus on Application Delivery
2 February 2009, Mark Fabbi, Gartner RAS Core Research Note G00164098
When looking at feature requirements in front of and between server tiers, too many organizations think only about load balancing. However, the era of load balancing is long past, and organizations will be better served to focus their attention on improving the delivery of applications.
Overview
This research shifts the attention from basic load-balancing features to application delivery features to aid in the deployment and delivery of applications. Networking organizations are missing significant opportunities to increase application performance and user experience by ignoring this fundamental market shift.
Enterprises are still focused on load balancing.
There is little cooperation between networking and application teams on a holistic approach for application deployment.
Properly deployed application delivery controllers can improve application performance and security, increase the efficiency of data center infrastructure, and assist the deployment of the virtualized data center.
Network architects must shift attention and resources away from Layer 3 packet delivery networks and basic load balancing to application delivery networks.
Enterprises must start building specialized expertise around application delivery.
What You Need to Know
IT organizations that shift to application delivery will improve internal application performance, which will noticeably improve business processes and productivity for key applications. For external-facing applications, end-user experience and satisfaction will improve, positively affecting the ease of doing business with supply chain partners and customers. Despite application delivery technologies being well proven, they have not yet reached a level of deployment that reflects their value to the enterprise, and too many clients do not have the right business and technology requirements on their radar.
Analysis
What's the Issue?
Many organizations are missing out on big opportunities to improve the performance of internal processes and external service interactions by not understanding application delivery technologies. This is very obvious when considering the types of client inquiries we receive on a regular basis. In the majority of cases, clients phrase their questions to ask specifically about load balancing. In some cases, they are replacing aged server load balancers (SLBs), purchased before the advent of the advanced features now available in leading application delivery controllers (ADCs). In other cases, we get calls about application performance challenges, and, after exploring the current infrastructure, we find that these clients have modern, advanced ADCs already installed, but they haven't turned on any of the advanced features and are using the new equipment as if it were a circa-1998 SLB. In both cases, there is a striking lack of understanding of what ADCs can and should bring to the enterprise infrastructure. Organizations that still think of this critically important position in the data center as one that only requires load balancing are missing out on years of valuable innovation and are not taking advantage of the growing list of services that are available to increase application performance and security and to play an active role in the increasing virtualization and automation of server resources. Modern ADCs are the only devices in the data center capable of providing a real-time, pan-application view of application data flows and resource requirements. This insight will continue to drive innovation of new capabilities for distributed and virtualized applications.
Why Did This Happen?
The "blame" for this misunderstanding can be distributed in many ways, though it is largely history that is at fault. SLBs were created to better solve the networking problem of how to distribute requests across a group of servers responsible for delivering a specific Web application. Initially, this was done with simple round-robin DNS, but because of the limitations of this approach, function-specific load-balancing appliances appeared on the market to examine inbound application requests and to map these requests dynamically to available servers. Because this was a networking function, the responsibility landed solely in network operations and, while there were always smaller innovative players, the bulk of the early market ended up in the hands of networking vendors (largely Cisco, Nortel and Foundry [now part of Brocade]). So, a decade ago, the situation basically consisted of networking vendors selling network solutions to network staff. However, innovation continued, and the ADC market became one of the most innovative areas of enterprise networking over the past decade. Initially, this innovation focused on the inbound problem — such as the dynamic recognition of server load or failure and session persistence to ensure that online "shopping carts" weren't lost. Soon, the market started to evolve to look at other problems, such as application and server efficiency. The best example would be the adoption of SSL termination and offload. Finally, the attention turned to outbound traffic, and a series of techniques and features started appearing in the market to improve the performance of the applications being delivered across the network. Innovations migrated from a pure networking focus to infrastructure efficiencies to application performance optimization and security — from a networking product to one that touched networking, server, applications and security staff. The networking vendors that were big players when SLB was the focus quickly became laggards in this newly emerging ADC market.
Current Obstacles
As the market shifts toward modern ADCs, some of the blame must rest on the shoulders of the new leaders (vendors such as F5 and Citrix NetScaler). While their products have many advanced capabilities, these vendors often undersell their products and don't do enough to clearly demonstrate their leadership and vision to sway more of the market to adopting the new features. The other challenge for vendors (and users) is that modern ADCs impact many parts of the IT organization. Finally, some blame rests with the IT organization. By maintaining siloed operational functions, it has been difficult to recognize and define requirements that fall between functional areas.
Why Do We Need More, and Why Should Enterprises Care?
Not all new technologies deserve consideration for mainstream deployment. However, in this case, advanced ADCs provide capabilities to help mitigate the challenges of deploying and delivering the complex application environments of today. The past decade saw a mass migration to browser-based enterprise applications targeting business processes and user productivity as well as increasing adoption of service-oriented architectures (SOAs), Web 2.0 and now cloud computing models. These approaches tend to place increased demand on the infrastructure, because of "chatty" and complex protocols. Without providing features to mitigate latency, to reduce round trips and bandwidth, and to strengthen security, these approaches almost always lead to disappointing performance for enterprise and external users. The modern ADC provides a range of features (see Note 1) to deal with these complex environments. Beyond application performance and security, application delivery controllers can reduce the number of required servers, provide real-time control mechanisms to assist in data center virtualization, and reduce data center power and cooling requirements. ADCs also provide simplified deployment and extensibility and are now being deployed between the Web server tier and the application or services tier (for SOA) servers. Most ADCs incorporate rule-based extensibility that enables customization of the behavior of the ADC. For example, a rule might enable the ADC to examine the response portion of an e-commerce transaction to strip off all but the last four digits of credit card numbers. Organizations can use these capabilities as a simple, quick alternative to modifying Web applications. Most ADCs incorporate a programmatic interface (open APIs) that allows them to be controlled by external systems, including application servers, data center management and provisioning applications, and network/system management applications. This capability may be used for regular periodic reconfigurations (end-of-month closing) or may even be driven by external events (taking an instance of an application offline for maintenance). In some cases, the application programming interfaces link the ADC to server virtualization systems and data center provisioning frameworks in order to deliver the promise of real-time infrastructure.
What Vendors Provide ADC Solutions Today?
During the past five years, the innovations have largely segmented the market into vendors that understand complex application environments and the subtleties in implementations (examples would be F5, Citrix NetScaler and Radware) and those with more of a focus on static feature sets and networking. "Magic Quadrant for Application Delivery Controllers" provides a more complete analysis and view of the vendors in the market. Vendors that have more-attractive offerings will have most or all of these attributes:
A strong set of advanced platform capabilities
Customizable, extensible platforms and solutions
A vision focused on application delivery networking
Affinity to applications:
Needs to be application-fluent (that is, they need to "speak the language")
Support organizations need to "talk applications"
What Should Enterprises Do About This?
Enterprises must start to move beyond refreshing their load-balancing footprint. The features of advanced ADCs are so compelling for those that make an effort to shift their thinking and organizational boundaries that continuing efforts on SLBs is wasting time and resources. In most cases, the incremental investment in advanced ADC platforms is easily compensated by reduced requirements for servers and bandwidth and the clear improvements in end-user experience and productivity. In addition, enterprises should:
Use the approach documented in "Five Dimensions of Network Design to Improve Performance and Save Money" to understand user demographics and productivity tools and applications. Also, part of this requirements phase should entail gaining an understanding of any shifts in application architectures and strategies. This approach provides the networking team with much greater insight into broader IT requirements.
Understand what they already have in their installed base. We find, in at least 25% of our interactions, enterprises have already purchased and installed an advanced ADC platform, but are not using it to its potential. In some cases, they already have the software installed, so two to three days of training and some internal discussions can lead to massive improvements.
Start building application delivery expertise. This skill set will be one that bridges the gaps between networking, applications, security and possibly the server. Organizations can use this function to help extend the career path and interest for high-performing individuals from groups like application performance monitoring or networking operations. Networking staff aspiring to this role must have strong application and personal communication skills to achieve the correct balance. Some organizations will find they have the genesis of these skills scattered across multiple groups. Building a cohesive home will provide immediate benefits, because organizational barriers will be quickly eliminated.
Start thinking about ADCs as strategic platforms, and move beyond tactical deployments of SLBs. Once organizations think about application delivery as a basic infrastructure asset, the use of these tools and services (and associated benefits) will be more readily achieved.
Note: We have defined a category of advanced ADCs to distinguish their capabilities from basic, more-static function load balancers. These advanced ADCs operate on a per-transaction basis and achieve application fluency. These devices become actively involved in the delivery of the application and provide sophisticated capabilities, including:
Application layer proxy, which is often bidirectional
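To make the rule-based extensibility mentioned earlier concrete (the credit-card example), here is a minimal sketch of the masking logic in Python. Real ADCs express such rules in their own vendor-specific rule languages, so this is only an illustration of the transformation, not any vendor's API.

```python
import re

# Illustrative sketch of an ADC response-rewriting rule: mask all but the
# last four digits of any credit card number found in an outbound response
# body. This is the logic only, not a vendor rule language or API.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){12}(\d{4})\b")

def mask_card_numbers(response_body: str) -> str:
    """Replace each detected card number with XXXX-XXXX-XXXX-<last four>."""
    return CARD_PATTERN.sub(lambda m: "XXXX-XXXX-XXXX-" + m.group(1), response_body)

if __name__ == "__main__":
    body = "Order confirmed. Card on file: 4111 1111 1111 1234."
    print(mask_card_numbers(body))
    # -> Order confirmed. Card on file: XXXX-XXXX-XXXX-1234.
```

Applying such a rule at the ADC, as the note describes, avoids changing the Web application itself.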
Note: This is a (slightly updated) re-post from a personal blog - just my view in the context of IBM's drive to foster open choice and collaboration. Please bear in mind that this is based on my personal thoughts (not an official IBM position) and read the article as it is intended to be - thought provoking ... enjoy!
Having returned from the European Red Hat Partner summit and the VMware vForum where I presented on behalf of IBM, it took me a while to digest the “openness” of it all … so let me share my thoughts retrospectively. The key messages conveyed in both events were (un?)surprisingly similar, considering that we have a major open source software company on one side and a more traditional “business” model on the other.
Being proprietary rocks…! (?) Let’s be straight – one could argue that in an ideal world (for selfish, money-making businesses without ethics) there would be no open source, being proprietary rocks! After all making money by attracting and “retaining” clients (I’m deliberately not saying “locking them in”) is ultimately the goal of every business – and that (the attracting/retaining clients bit) actually applies to VMware in the same way as Redhat (if we don't mix up the ‘open source community’ with Redhat as a business) … Now that would obviously completely ignore the power and dynamics of an open technical community but more importantly that’s not in the interest of the consumer… Public cloud promises to empower the consumer – so they will increasingly be looking for choice … no capital dependency, outsourced, pay per use service operation models enable you (in theory!) to switch providers like I just switched my energy and gas supplier to XXX last week – go to a comparison site, find the best deal and “click” … done (obviously not reality today with cloud).
Public cloud can only exist on open source … ? What both events made crystal clear is that increasingly many “traditional” businesses will be forced to have a foot in both camps in order to balance customer demand for open choice with a business model allowing them to make money and retain customer “affinity” (otherwise we probably wouldn't see URLs like this … http://www.microsoft.com/opensource/).
There was a bold statement by a speaker at the Red Hat summit: “Public cloud can only live on open source!” I was initially inclined to agree but then thought this through again and adjusted it mentally to what I believe to be more appropriate: “public clouds need to live on INTEROPERABLE source”… Open source should of course help to facilitate this, but if I just end up with a bunch of non-intuitive, non-integrated code, with undocumented APIs and outlandish image formats, then the fact that it's open source doesn’t help me at all. So I am not saying that I don’t believe in open source, actually quite the opposite; all I’m saying is that the “open source” stamp on its own is not good enough, and as a consumer of resources (not a developer) I would indeed consider a proprietary solution as long as it is intuitive, cheap, with well-documented APIs and – that is the key – interoperable with other public providers. So it is important to understand the difference between open source and open standards.
The Public cloud is only as good as the “connectors” to it - Key Battle 1: Hybrid Connectors VMware very much provides the majority of today’s x86 virtual enterprise footprint (a good chunk of that on IBM infrastructure, and IBM very closely partnering with VMware). With that, VMware has potentially a critical control point in the private cloud. The public cloud is a completely different story, with over 80% being OSS based and VMware hardly to be seen yet! So especially for VMware it must be of utmost importance to provide a ‘best of breed’ connector between existing vSphere infrastructures and public vCloud Director resources before others provide this linkage to other (non-VMware) public platforms. So I expect a lot of focus on vCloud Connector functionality from VMware (in the same way as on ‘Concero’ from Microsoft). VMware’s strategy therefore is to entice Service Providers to take advantage of the existing vSphere footprint: “Hey look, many of your customers already have VMware, the only thing you need to do is to provide public vCloud Director resources for them to burst out to – we provide the connector, it’s as simple as this!” Now, that might sound great, but the main concern for me (the consumer) simply is how much of a dependency is being created for me by doing this, and how easily can I go and "click" to switch to an Amazon, IBM or Rackspace cloud once I am in that environment ...? So there clearly is a chance to develop a public VMware cloud ecosystem around vCD in this way – but how long before someone else offers seamless alternatives (more than just Amazon’s VM Import)? So will it be enough to only provide linkage to public VMware vCD resources? IMHO absolutely not. I am very curious to see how much VMware will enable connectivity to other public provider platforms going forward … Again, it will be a fine balancing act, but I’m convinced that it won’t be successful otherwise.
In the meantime, keep your eyes peeled and expect the industry to increase focus on enabling hybrid connectors - I obviously can't make any specific forward-looking statements from an IBM perspective. But just take Red Hat as an example: it made clear that CloudForms (their IaaS platform) can indeed manage VMware through their DeltaCloud driver and – while currently positioning CloudForms for private and hybrid – their vision (of course) is for DeltaCloud to be the top-level public layer linking into private (or public) VMware clouds.
Key Battle 2: PaaS Now – here’s another (the real?) battle for Cloud control (or better ‘ecosystem control’) … Who will provide the application platform for these future cloud-based applications? Who will control the ecosystem of future application suites? Who will be the next "Microsoft", you might ask? A lot will come down to control points and the pain of moving. If switching public cloud providers could really be as easy as switching utility providers, switching your application platform (e.g. as an ISV) is rather like moving house! Using open standards is a great value proposition here, and it’s not just the OSS providers who have realised this …
Red Hat recently announced their hosted "OpenShift" PaaS platform, which essentially allows developing and running Java, Ruby, PHP and Python applications and comes in 3 different editions: 1) "Express" (free), which provides a runtime environment for simple Ruby, PHP and Python apps; 2) "Flex", for multi-tiered Java and PHP apps with more options (like mySQL DBs and JBoss middleware); and 3) the "Power" edition for full control, supporting "any application or programming language that can compile on RHEL 4, 5, or 6", which enables you to deploy apps directly on EC2 and (in the near future) to IBM's SmartCloud.
VMware had previously announced their own open "Cloud Foundry" PaaS project; it has incarnations as a fully hosted service (currently in beta), as an open source project (CloudFoundry.org) and as a free single PaaS instance for local development use. An interesting move IMHO, which could help the adoption of this layer for VMware (away from e.g. MS Azure, Google's App Engine or Amazon's Elastic Beanstalk).
So what's IBM doing in this space? IBM has recently announced the IBM Workload Deployer - an evolution of the WebSphere CloudBurst hardware appliance. It essentially stores and secures "WebSphere Application Server Hypervisor Edition images" and, more importantly, workload patterns which can be published into a cloud. These workload patterns (think of them as customizable templates that capture the settings, dependencies and configuration required to deploy applications) enable you to focus on what essentially differentiates PaaS from IaaS ... the application rather than the infrastructure. Dustin Amrhein explains this much better than I do in this little blog. Importantly, all this comes with REST APIs that allow for standards-based integration into existing environments, including Tivoli. If you have only 10 minutes to spare I can only recommend watching this great video from Cloud Jason (there are 3 more) ... I promise you will get a really good idea of what IWD can do!
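Purely as a hypothetical sketch of what driving such a REST API from a script might look like: the endpoint path, payload fields and response shape below are my own assumptions for illustration, not the documented IBM Workload Deployer API.

```python
import requests

# Hypothetical sketch only: the resource path, JSON fields and response field
# are assumptions, NOT the documented IBM Workload Deployer API. It simply
# shows the kind of standards-based REST integration such APIs enable
# (e.g. from Tivoli or a provisioning script).
IWD_HOST = "https://iwd.example.com"   # assumed appliance address
AUTH = ("admin", "password")            # assumed credentials

def deploy_pattern(pattern_name: str, cloud_group: str) -> str:
    """Ask the appliance to deploy a named workload pattern into a cloud group."""
    resp = requests.post(
        f"{IWD_HOST}/resources/deployments",   # assumed resource path
        json={"pattern": pattern_name, "cloudGroup": cloud_group},
        auth=AUTH,
        verify=False,    # self-signed certificates are common on appliances
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("id", "")           # assumed: returns a deployment id

if __name__ == "__main__":
    print("Submitted deployment:", deploy_pattern("WebSphere cluster (dev)", "DevCloud"))
```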
Professional Suicide So, yes, I honestly believe that KVM has a good chance to become the hypervisor of choice for public cloud. However … that is unlikely to be the control point … So which management platform(s) will take that all-important crown …? Will it be an OSS-based one? I don’t want to hazard a guess, there are many … and that is part of the problem; many argue that the open source “communities” will have to overcome a challenge and become a COMMUNITY if they want to succeed. ESX could not be beaten with 7 or 8 different (but weak) flavours of Xen, and that was just a single OSS project splintered by commercial offerings … in the same way the sea of OSS-based cloud controllers with Eucalyptus, OpenStack, CloudStack, Deltacloud and OpenNebula faces focussed (more proprietary) heavyweights like Microsoft, Google and Amazon. The increasing number of OSS management solutions and “open bodies” will also make e.g. VMware less nervous than intended as long as they indirectly compete with each other …
BUT (and it’s a big “but”) I would argue that anyone not strategically looking at these open solutions is at best ignorant or – e.g. if you are a service provider yourself – more likely long-term professionally suicidal … yes, in an ideal world everyone wants ‘today’s best of breed' but more critically you have to maintain your negotiation potential through the ability to switch and if only for that reason alone you need to keep your options open! It will be of the utmost importance to partner with solution providers who share this mind-set and have the capability and strategy to support such a long-term goal and yes, IBM is clearly uniquely positioned to fulfill this role. And while I spoke to many completely different clients at both events, that was a common concern raised by most of them.
Industry endorsement like the recent OVA announcement - with IBM being a major driving force and supporter - will help to give KVM the needed credibility and weight … I am looking forward to seeing these visions translated into tangible solutions.
"Security often comes up as a big stopping point for cloud computing.
One of the ways around this is to build a private cloud – one that
remains within the corporate firewall and wholly controlled internally.
That was the approach taken by Los Alamos National Laboratory as it
seeks to create an infrastructure on demand (IOD) architecture to
simplify the rollout of new technology projects and to eliminate delays
in storage, server and network provisioning.
Anil Karmel, IT manager at Los Alamos National Lab noted four tenets that played a major role in the private cloud decision:
• green IT
• streamlined operations
• rapid scaleup/down
“As we deploy more virtual servers, we consume far less power and also
reduce electronic waste,” said Karmel. “We estimate eventual savings of
$1.3 million annually due to IOD.”
Server capacity on demand is now achievable in a few clicks. Instead of
30 days to provision a server, it now takes less than 30 minutes.
The organization is utilizing HP c7000 blade enclosures along with HP
Virtual Connect Fibre Channel/Flex 10 Ethernet. HP BL460c and BL490c
blades are used, with each blade containing multiple quad-core and
A NetApp SAN was brought in to add storage capacity. This is based on
the NetApp V Series with 2 PB of Tier 2 SATA storage. Tier 1 is
provided by existing HP arrays.
The cloud itself consists of four elements: a web portal at the front
end; Microsoft SharePoint as the automation engine for cloud workflows,
and also as the integration point for functions such as chargeback;
VMware vCloud Director to manage and operate the cloud; and VMware
vShield to provide security at both the application level and at the
user device level.
“Any virtual environment has to be cost effective, so that means it has
to be simple while being aware of any and all changes in real time,” said Karmel.
This is especially important in the security arena. Traditional security
operates at the hardware or software layer. But the addition of a
virtualization layer, said Karmel, provides too many gray areas for such
security tools to operate effectively. Hence security itself is now
being virtualized to eliminate yet another wave of security holes
showing up in the corporate networks.
Using Infrastructure on Demand, the National Lab is creating virtual
security enclaves using vShield that prevent one desktop or client from
infecting others, and keeps virtual machines (VMs) out of harm’s way.
Rules are set indicating access rights, as well as security protocols
based on threat detection. Traditional security tools interface with
this virtual security layer to keep servers and devices more protected.
Any time a threat is detected, the offending virtual computer is sent to
a remediation area, which has no network connectivity with which to infect other systems.
“This all occurs automatically based on preset policy,” said Karmel. “If
a VM is moved from one host to another, the security policy given to it
moves with it.”
To prevent VM sprawl, VMs are given an expiry date. This is one year by
default, though that can be adjusted. Thirty days before the due date, an
email is automatically generated asking the VM owner about renewal.
Another similar email is relayed with 10 days left and then again the
day before expiry. As soon as the VM is turned off, the user is informed
of the fact and asked if he/she wants it back online. Even then, 29
days later, the user is told that the VM is scheduled for deletion. The next
day it is deleted.
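A minimal sketch of the reminder schedule this policy implies; the field names and defaults below are illustrative assumptions, not Los Alamos' actual tooling.

```python
from datetime import date, timedelta

# Sketch of the stated policy: one-year default expiry, renewal reminders at
# 30, 10 and 1 day(s) before expiry, and deletion 30 days after the VM is
# switched off (with a warning on day 29).
DEFAULT_LIFETIME_DAYS = 365
REMINDER_OFFSETS_DAYS = (30, 10, 1)
GRACE_BEFORE_DELETE_DAYS = 30

def expiry_schedule(created: date, lifetime_days: int = DEFAULT_LIFETIME_DAYS) -> dict:
    """Return the expiry date, reminder dates and final deletion date for a VM."""
    expiry = created + timedelta(days=lifetime_days)
    return {
        "expiry": expiry,
        "reminders": [expiry - timedelta(days=d) for d in REMINDER_OFFSETS_DAYS],
        "deletion_warning": expiry + timedelta(days=GRACE_BEFORE_DELETE_DAYS - 1),
        "deletion": expiry + timedelta(days=GRACE_BEFORE_DELETE_DAYS),
    }

if __name__ == "__main__":
    print(expiry_schedule(date(2011, 1, 1)))
```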
However, a backup is retained for seven years just in case. The NetApp
storage is used to create snapshots of VMs before they are retired to
tape. For now, restores are not automated. But in the next version of
Infrastructure on Demand, users will be able to restore VMs they desire
in a few clicks.
“Lifecycle management of VMs is very important,” said Karmel.
The organization has erected a chargeback structure. Cloud resources are
priced according to CPU, RAM and disk. Users can see the total cost
before submitting a request for IT resources. Following a request, the
line manager has to approve and accept the charges to that unit.
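As a rough illustration of how such a quote could be computed before the request goes for approval, here is a minimal sketch; the unit prices are placeholders I made up, not the lab's actual rates.

```python
# Sketch of a chargeback quote priced per CPU, per GB of RAM and per GB of
# disk, shown to the user before the request is submitted. Prices are
# placeholder assumptions.
UNIT_PRICES = {"cpu": 25.00, "ram_gb": 10.00, "disk_gb": 0.50}  # assumed $/month

def monthly_cost(cpus: int, ram_gb: int, disk_gb: int) -> float:
    """Compute the monthly charge for a requested VM configuration."""
    return (cpus * UNIT_PRICES["cpu"]
            + ram_gb * UNIT_PRICES["ram_gb"]
            + disk_gb * UNIT_PRICES["disk_gb"])

if __name__ == "__main__":
    # Example: 2 vCPUs, 8 GB RAM, 100 GB disk
    print(f"Estimated charge: ${monthly_cost(2, 8, 100):.2f}/month")
```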
“You have to build best practices around our workloads,” said Karmel.
Service Level Agreements (SLAs) are set at four 9’s. If some hardware
goes down and Infrastructure on Demand doesn’t meet the SLA, it doesn’t
charge for that resource for that month. In addition, uptime and
availability metrics are regularly published so users are fully informed.
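For context, a quick calculation of what a "four 9's" (99.99%) availability target allows in downtime:

```python
# Four nines = 99.99% availability: roughly 4.3 minutes of allowed downtime
# in a 30-day month, or about 52.6 minutes per year.
availability = 0.9999
minutes_per_month = 30 * 24 * 60    # 43,200
minutes_per_year = 365 * 24 * 60    # 525,600

print(f"Allowed downtime/month: {(1 - availability) * minutes_per_month:.1f} min")
print(f"Allowed downtime/year:  {(1 - availability) * minutes_per_year:.1f} min")
```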
At the moment, separate network, security and virtual server teams are
being maintained to monitor the infrastructure. Over time, this may be
streamlined to one centralized unit."
This IBM® Redpaper™ publication introduces PowerVM™ Active Memory™ Sharing on IBM Power Systems™ based on POWER6® and later processor technology. Active Memory Sharing is a virtualization technology that allows multiple partitions to share a pool of physical memory. This is designed to increase system memory utilization, thereby enabling you to realize a cost benefit by reducing the amount of physical memory required.
The paper provides an overview of Active Memory Sharing, and then demonstrates, in detail, how the technology works and in what scenarios it can be used. It also contains chapters that describe how to configure, manage and migrate to Active Memory Sharing based on hands-on examples.
The paper is targeted to both architects and consultants who need to understand how the technology works to design solutions, and to technical specialists in charge of setting up and managing Active Memory Sharing environments. For performance related information, see: ftp://ftp.software.ibm.com/common/ssi/sa/wh/n/pow03017usen/POW03017USEN.PDF
Brocade Introduces Brocade CloudPlex(TM), an Open, Extensible Architecture for Virtualization and Cloud-Optimized Networks
SAN JOSE, CA -- (MARKET WIRE) -- 05/03/11 -- Brocade (NASDAQ: BRCD) today introduced a new technology architecture that outlines the company's vision and the technology investments it will make to help its customers evolve their data centers and IT resources and migrate them to the "Virtual Enterprise."
Brocade intends to deliver on this vision through the Brocade CloudPlex™ architecture, an open, extensible framework intended to enable customers to build the next generation of distributed and virtualized data centers in a simple, evolutionary way that preserves their ability to dictate all aspects of the migration. What is unique about the Brocade CloudPlex architecture is that it is not only the foundation for integrated compute blocks, but it also embraces a customer's existing multi-vendor infrastructure to unify all of their assets into a single compute and storage domain.
Brocade CloudPlex meets the goal of the Brocade One™ strategy, designed to help companies transition smoothly to a world where information and applications can reside anywhere by delivering solutions that deliver unmatched simplicity, non-stop performance, application optimization and investment protection.
"Virtualization has fundamentally changed the nature of applications by detaching them from their underlying IT infrastructure and introducing a high degree of application mobility across the entire enterprise," saidDave Stevens, chief technology officer at Brocade. "This is the concept of the 'Virtual Enterprise' that we feel unleashes the true potential of cloud computing in all its forms -- private, hybrid and public."
Through the CloudPlex architecture, Brocade will help its customers scale their IT environments from managing hundreds of virtual machines (VMs) in certain classes of servers to tens of thousands of VMs that are distributed and mobilized across their entire enterprise and throughout the cloud. According to Gartner, the expansion of VMs not only improves automation and reduces operational expenses, it is the primary requirement for IT organizations to migrate to cloud architectures.(1)
Gartner advises that, "IT organizations pursuing virtualization should have an overall strategic plan for cloud computing and a roadmap for the future, and should plan proactively. Further, these organizations must focus on management and process change to manage virtual resources, and to manage the speed that virtualization enables, to avoid virtualization sprawl."
CloudPlex Components
The Brocade CloudPlex architecture will define the stages and the components from Brocade and its partners that are required to get to the Virtual Enterprise. The stages comprise three main categories -- fabrics, globalization and open technologies -- with some of these components being available today while others are in development or on the roadmap of Brocade's engineering priorities.
The currently available components are:
Networks comprised of Ethernet fabrics and Fibre Channel fabrics as the flat, fast and simple foundation designed to scale to highly virtualized IT environments;
Multiprotocol fabric adapters for simplified server I/O consolidation;
High-performance application delivery products necessary for load balancing network traffic across distributed data centers;
The components on the roadmap are:
Integrated, tested and validated solution bundles of server, virtualization, networking and storage resources called Brocade Virtual Compute Blocks. An integral element of the Brocade CloudPlex architecture, Brocade will enable its systems partners and integrators to deliver Virtual Compute Block solutions comprising servers, hypervisors, storage, and cloud-optimized networking in pre-bundled, pre-racked configurations with unified support;
Powerful and universal fabric and network extension delivered through a new platform capable of supporting a number of IP, SAN and mainframe extension technologies including virtual private LAN services (VPLS), Fibre Channel over IP (FCIP) and FICON;
An advancement of Brocade Fabric ID technology called "Cloud IDs" that enables simple and secure isolation and mobility of VMs for native multi-tenancy cloud environments;
An open framework for management, provisioning and integration designed to promote multi-vendor and system-to-system interoperability specifically for cloud environments. These include Brocade products supporting OpenStack software for storage, compute and Software-Defined Networking (SDN) capabilities enabled through OpenFlow;
Unified education, support and services delivered through Brocade and partners to help customers manage this highly distributed "Virtual Enterprise" environment.
Brocade Partner Endorsements
"We are excited to be working with Brocade to develop highly-scalable virtualized computing and storage configurations, providing superior cost-performance solutions today for our customers while at the same time establishing a clear path to cloud IT architectures in the future. Specifically, Brocade switches coupled with Dell PowerEdge servers and EqualLogic or Dell Compellent storage provide the scalability, flexibility and efficiency our customers demand in the virtual era." -- Dario Zamarian, Vice President and General Manager, Dell Networking
"Fujitsu'sglobal cloud strategy is built on our real experience in working with customers on the delivery of both Services and Infrastructures for Cloud computing across the world. We believe that common processes, holistic management of infrastructure elements and the use of industry standards are fundamentally helping customers to ease the transition and to migrate their largest and most complex IT environments smoothly to join 'any mode' of the cloud consumption of their choosing. Brocade shares these views and has laid out a compelling vision through its CloudPlex architecture that Fujitsu Technology Solutions fully endorses and will support. This architecture provides compelling added value toFujitsu'sCloud offerings by defined standards and holistic management." --Jens-Peter Seick, Senior Vice President, Data Center Systems,Fujitsu
"Hitachiis helping customers deliver IT services through the cloud by using open, standards-based technologies that let them build and scale their virtualized data centers at their own pace. With Brocade's CloudPlex architecture, bothHitachiand Brocade address our mutual customers' IT needs and protect their existing IT investments by migrating their legacy devices to cloud deployments -- preventing cloud from becoming just another IT silo." --Sean Moser, Vice President, Storage Software Product Management,Hitachi Data Systems
"Recent advancements in cloud and virtualization are making it possible for enterprises to deploy an intelligent infrastructure that enables workloads to move around the enterprise and around the world in a transparent, fluid way. We believe that enabling flexible application deployment is imperative to mainstream adoption of cloud computing.VMwareand Brocade share a common vision of offering customers the ability to accelerate IT by reducing complexity while significantly lowering costs and enabling more flexible, agile services delivery." --Parag Patel, Vice President, Global Strategic Alliances,VMware
Unveiling at Brocade Technology Day Summit
Brocade CTO Dave Stevens will discuss more details about the CloudPlex architecture at the annual Brocade Technology Day Summit taking place on its San Jose campus on May 3 and 4. To participate in the event via a live webcast, please visit the following page on Brocade's Facebook page or simply register for the event at:
About Brocade
Brocade (NASDAQ: BRCD) networking solutions help the world's leading organizations transition smoothly to a world where applications and information reside anywhere. (www.brocade.com)
(1) Source: "The Road Map From Virtualization to Cloud Computing" (Gartner, March 2011)
Brocade, the B-wing symbol, BigIron, DCFM, DCX, Fabric OS, FastIron, IronView, NetIron, SAN Health, ServerIron, TurboIron, and Wingspan are registered trademarks, and Brocade Assurance, Brocade NET Health, Brocade One, Extraordinary Networks, MyBrocade, VCS, and VDX are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned are or may be trademarks or service marks of their respective owners.
08 Dec 2010:
IBM (NYSE: IBM) today
announced the availability of new online software services based on the
same on-premise solutions used by clients today – now delivered as a
monthly subscription offering - that enables better automation and
control of IT Service Desk functions. This new service adds to IBM's
software-as-a-service offerings that help automate a range of IT
services critical to maintaining business operations.
Even small and mid-size companies deal with labor-intensive
services for employees such as resolving IT issues, fixing laptops and
onboarding new hires. Many companies struggle with slow, inefficient
service request handling because at the core their networking,
facilities, application support and IT assets aren't integrated and
typically depend on manual updates. For example, IBM estimates that only
five percent of service and support issues are resolved by
self-service, making automation and integration crucial for service
Dubuque, Iowa and IBM Combine Analytics, Cloud Computing and Community Engagement to Conserve Water
DUBUQUE, Iowa - 20 May 2011: The City of Dubuque and IBM (NYSE: IBM) today announced that the IBM analytics and cloud computing technology deployed in 2010 by Dubuque as part of its Smarter Sustainable Dubuque research helped reduce water utilization by 6.6 percent and increased leak detection and response eightfold.
The Smarter Sustainable Dubuque Water Pilot Study empowered 151 Dubuque households with information, analysis, insights and social computing around their water consumption for nine weeks. By providing citizens and city officials with an integrated view of water consumption, the Water Pilot resulted in water conservation, an increased leak-reporting rate, and behavior changes.
Water savings were measured by comparing the consumption of the 151 pilot households with another 152 control group households with identical smart meters but without the access to the analysis and insights provided by the Water Pilot Study for the nine-week duration.
The smart meter system monitored water consumption every 15 minutes, and the readings were collected and communicated to the IBM Research Cloud. Additional data, including weather, demographics, and household characteristics, was also collected. Using cloud computing, the data was analyzed to trigger notification of potential leaks and anomalies, and helped volunteers understand their consumption in greater detail. Volunteers were only able to view their own consumption habits, while city management could see the aggregate data. All participating homes were volunteers, and the data being collected was anonymous and contained no confidential information.
Participating households were alerted about potential anomalies and leaks and were able to get a better understanding of their consumption patterns and compare and contrast them anonymously with others in the community. Pilot study participants accessed their personal water usage information through a website portal and participated in online games and competitions aimed at promoting sustainable behavior, enabling them to become fully engaged and informed about their consumption and the impact of the changes they made to it. Participants were able to see their data expressed in dollar savings, gallon savings and carbon reduction.
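The announcement does not describe IBM's actual analytics, so purely as an illustrative sketch of my own (not the pilot's algorithm), here is one simple way a 15-minute meter feed can flag a likely leak: continuous overnight flow with no zero-usage intervals.

```python
# Illustrative sketch only, not the IBM Research analytics: flag a household
# if every overnight 15-minute interval shows some water usage, since a
# healthy household normally has at least a few zero-usage intervals at night.
def likely_leak(overnight_readings_gallons: list[float]) -> bool:
    """Return True if every overnight 15-minute interval shows nonzero usage."""
    return len(overnight_readings_gallons) > 0 and all(
        g > 0 for g in overnight_readings_gallons
    )

if __name__ == "__main__":
    leaking = [0.4, 0.5, 0.4, 0.6, 0.5, 0.4]   # steady trickle all night
    healthy = [0.0, 0.0, 1.2, 0.0, 0.0, 0.0]   # occasional use only
    print(likely_leak(leaking), likely_leak(healthy))  # True False
```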
In this post, we offer our initial forecast of IT cloud services delivery across five major IT product segments that, in aggregate, represent almost two-thirds of enterprise IT spending (excluding PCs). This forecast sizes IT suppliers' opportunity to deliver their own IT offerings to customers via the cloud services model ("opportunity #1", as described in our recent post Framing the Cloud Opportunity for IT Suppliers).
In this article learn how to:
Set up a 64-bit Linux instance (a Bronze-level offering) with the Linux Logical Volume Manager (LVM).
Capture a private image and provision it as a new Platinum instance.
Grow the LVM volume and file system to accommodate the new physical volumes.
Configure LVM across physical volumes using Linux LVM-type partitions.
Background on LVM and the test scenario
First, a description of LVM concepts and the test scenario for those who may not be familiar with LVM.
Note: You are about to configure Linux LVM: Here be Dragons. Mind the gap.
The Linux LVM is organized into physical volumes (PVs), volume groups
(VGs), and logical volumes (LVs):
Physical volume: Physical HDDs, physical HDD partitions (such as /dev/vdb1).
Extents: PVs are split into chunks called physical extents (PEs).
Logical extents (LEs) map 1:1 to PEs and are used for the physical-to-logical volume mapping.
Volume group: A virtual disk consisting of aggregated
physical volumes. VGs can be logically partitioned into LVs.
Logical volume: Acts as a virtual
disk partition. After creating a VG you can create LVs in that VG.
They can be used as raw block devices, swap devices, or
for creating a (mountable) file system just like disk partitions.
File system: LVs can be used as raw devices or swap, but are more commonly "formatted"
with a supported file system and mounted to a defined mountpoint. I'll format the LV as an ext3 file system in this scenario.
Partition table: You'll use tools like fdisk, sfdisk, or
cfdisk to manipulate the block device partition table and create Linux LVM (8e) type partitions.
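To make the workflow above concrete, here is a hedged sketch driven from Python purely for illustration; the same steps are normally typed at a shell. The device names (/dev/vdb1, /dev/vdc1) and the volume group and logical volume names are assumptions for the example, not values prescribed by the article.

```python
import os
import subprocess

def run(cmd: list[str]) -> None:
    """Run a command and fail loudly if it returns non-zero."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def create_lvm(device: str, vg: str = "datavg", lv: str = "datalv",
               mountpoint: str = "/data") -> None:
    """Initial setup: PV -> VG -> LV -> ext3 file system -> mount."""
    run(["pvcreate", device])                          # mark the partition as a PV
    run(["vgcreate", vg, device])                      # aggregate PVs into a VG
    run(["lvcreate", "-l", "100%FREE", "-n", lv, vg])  # one LV spanning the VG
    run(["mkfs.ext3", f"/dev/{vg}/{lv}"])              # format the LV
    os.makedirs(mountpoint, exist_ok=True)
    run(["mount", f"/dev/{vg}/{lv}", mountpoint])

def grow_lvm(new_device: str, vg: str = "datavg", lv: str = "datalv") -> None:
    """Growth path: add a new PV, then extend the VG, the LV and the file system."""
    run(["pvcreate", new_device])
    run(["vgextend", vg, new_device])
    run(["lvextend", "-l", "+100%FREE", f"/dev/{vg}/{lv}"])
    run(["resize2fs", f"/dev/{vg}/{lv}"])              # grow the ext3 file system

if __name__ == "__main__":
    create_lvm("/dev/vdb1")   # e.g. an LVM (8e) partition created with fdisk
    grow_lvm("/dev/vdc1")     # a second disk/partition added later
```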
Optimization of SAP infrastructure to deliver better performance, lower costs and higher energy efficiency
20 Apr 2011:
Today IBM (NYSE: IBM)
announced that Audi selected IBM to build a cloud environment for
Audi's SAP infrastructure to deliver higher performance, fast and
flexible provisioning of SAP applications and capacities, lower
infrastructure costs, and to deliver above-average energy efficiency
with the ability to enlarge future SAP applications to an almost unlimited extent.
Audi was facing challenges scaling its IT systems due to the
increased use of business-critical applications in areas such as
production and logistics, supplier relationship management and human
resources, which challenged their IT infrastructure regarding reliability.
In April 2010, Audi signed a contract with IBM to rebuild their
existing SAP infrastructure, including consolidation and virtualization
of the server hardware, process standardization, opportunities for
performance-related billing and a much higher operational flexibility.
Audi's new SAP Infrastructure solution is based on a new generation of
high-performance IBM POWER 7 Servers and IBM database technology (DB2).
"Along with a very high level of reliability and failure safety, the
new SAP Infrastructure solution, which we will migrate into a private
cloud, will substantially lower energy consumption," said Audi's Lorenz
Schoberl, head of IT Infrastructure Services. "The DB2 solution's
built-in data compression capability will enable us to save time and
reduce costs of storage and archiving."
"We were able to demonstrate that our combination of POWER servers
and DB2 will decrease the total cost of ownership over the next four
years -- from a business and technology point of view," said Gunter
Frohlich, IBM Client Manager for Audi.
The new infrastructure is fully operational and will be managed by
IBM in a private cloud environment hosted in Audi's data center.
About IBM Cloud Computing
IBM has helped thousands of clients adopt cloud models and manages
millions of cloud based transactions every day. IBM assists clients in
areas as diverse as banking, communications, healthcare and government
to build their own clouds or securely tap into IBM cloud-based business
and infrastructure services. IBM is unique in bringing together key
cloud technologies, deep process knowledge, a broad portfolio of cloud
solutions, and a network of global delivery centers. For more
information about IBM cloud solutions, visit www.ibm.com/smartcloud
SAN FRANCISCO, CA,
07 Apr 2011:
IBM (NYSE: IBM) today
unveiled its next generation IBM SmartCloud, an enterprise-class, secure
cloud specifically created to meet the demands of businesses.
To accelerate the shift from experimentation, development and
assessment to full scale enterprise deployment of cloud, IBM is building
out its existing cloud portfolio with IBM SmartCloud, enterprise cloud
technologies and services offerings for private, public and hybrid
clouds based on IBM hardware, software, services and best practices.
As part of this announcement, IBM is demonstrating a next-generation,
enterprise cloud service delivery platform currently piloting with key
clients and available later this year. For the first time, enterprise
clients will be able to select key characteristics of a public, private
and hybrid cloud to match workload requirements from simple Web
infrastructure to complex business processes, along five dimensions:
· Security and isolation
· Availability and performance
· Technology platforms
· Management Support and Deployment
· Payment and Billing
The IBM SmartCloud includes a broad spectrum of secure managed
services to run diverse workloads across multiple delivery methods, both
public and private. It includes customer choice with the potential for
end-to-end management of service delivery from the server and operating
system to the application and process layer.
“The new IBM SmartCloud allows for the best of both worlds – the cost
savings and scalability of a shared cloud environment plus the
security, enterprise capabilities and support services of a private
environment,” said Erich Clementi, senior vice president, IBM Global
Technology Services. “In thousands of cloud engagements, we have
discovered that enterprise clients want a choice of cloud deployment
models that meet the requirements of their workloads and the demands of their business.”
This level of choice and control translates into capabilities
customized to your needs and priorities, whether you’re deploying a
simple web application, an ordering logistics system or a complete ERP system.
The new IBM cloud can enable organizations, their employees and
partners, to get what they need, as they need it – from advanced
analytics and business applications to IT infrastructure like virtual
servers and storage or access to tools for testing software code - all
deployed securely across IBM’s global network of cloud data centers.
The IBM SmartCloud has two implementation options: Enterprise and Enterprise +.
- Enterprise – Available today and
expanding on our existing Development and Test Cloud, allowing customers
to expand on internal development and test efforts with a reduction of
application development tasks from days to minutes via automation and
rapid provisioning, with over a 30% reduction in costs versus traditional
application environments. This offering is available immediately.
- Enterprise + -- To be made
available later this year, Enterprise + will complement and expand on
the value of Enterprise, offering brand new capabilities that provide a core
set of multi-tenant services to manage virtual server, storage, network
and security infrastructure components, including managed operational services.
The new software expands IBM's business analytics capabilities by
enabling organizations to develop faster, more precise social media
marketing programs that support their brand's total online presence
through a cloud-based delivery model.
The first product, IBM Coremetrics Social, helps companies analyze
the business impact of their social marketing initiatives, while IBM
Unica Pivotal Veracity Email Optimization Suite analyzes email links
that are shared across social network platforms, enabling marketers to
better capitalize on opportunities across channels.
Today's news follows IBM's recent announcement
of new software and the creation of a new consulting practice dedicated
to the emerging category of "Smarter Commerce," which is focused on
helping companies swiftly adapt to rising customer demands in today's
digitally transformed marketplace. Smarter Commerce includes new cloud
analytics software that enables companies to monitor their brand's
presence in real-time through social media channels to better assess the
effectiveness of new services and product offerings, fine tune
marketing campaigns, and create sales initiatives in real-time.
"IBM's approach to social media analytics is based on the
understanding that people interact with an organization's brand in a
number of ways—including email, social networking sites and company Web
sites—and the true measure of business impact demands a fully integrated
view of the interaction with these resources," said John Squire, chief
strategy officer, IBM Coremetrics. "The new social
media analytics software unveiled today will help marketers develop more
targeted, highly-measurable, and effective social media marketing programs."
IBM Coremetrics Social enables organizations across a wide range of
industries to measure the effectiveness and return on investment (ROI)
of their social marketing initiatives by gaining insight from data
that's publicly available on social media websites.
This Smarter Commerce offering delivers real-time intelligence on the
social media response to a particular brand, or the products, content
and services being offered, and enables clients to make fact-based,
accurate decisions about marketing expenditures. As a result, marketing
teams can easily attribute business impact to social referrals in the
context of other marketing programs.
Using the analytics foundation of the Coremetrics Continuous Optimization Platform™
and its complete suite of marketing optimization applications, IBM
Coremetrics Social provides cross-channel reporting and benchmark
capabilities to track and improve social marketing campaigns. With
social benchmarking, brands can evaluate the effectiveness of their
social initiatives relative to their peer companies, and understand
where they excel, and where there is opportunity for improvement.
It has become routine for social networks to be used as a resource to
broadly share links to special offers made available by companies via
email. Well-known brands can expect to see as much as 38 percent of
their special offer email links shared across social networks. An
average of 28 percent of these links are then 'liked' or commented on.
The new IBM Unica Pivotal Veracity Email Optimization
suite tracks and analyzes email links that are shared across social
network platforms, delivering actionable insights which marketers can
turn into recognizable profit. Unlike other technologies, this new
offering opens the doors for marketers to identify, track, and improve
the perception of their brands across channels. The Social Email
Analytics software tracks all links associated with a marketer's brand
and email, not just the intended links a marketer shares. This approach
better encompasses and reflects the emerging complexities and
ramifications of consumer interactions with brands, starting with email
and ending up in the social realm. With this new software, marketers can
also hone Web pages for social networks and better identify
opportunities across channels.
Last year’s acquisition policy pronouncements are starting to be felt
across the U.S. Army, with upticks in cloud computing initiatives,
increasing use of fixed-price contracts and adoption of social media.
“Army IT spending will remain stable; the goal is to optimize the IT
[spending]. Optimization will be guided by computing trends,” said Gary
Winkler, Army program executive officer for enterprise information systems.
Efforts to improve efficiency, realign spending priorities and
streamline a cumbersome acquisition process were launched during the
past year amid a tightening national budget by Defense Secretary Robert
Gates and Ashton Carter, undersecretary of defense for acquisition,
technology and logistics.
Leading the charge for the Army’s efforts to hold down spending and
become more efficient are cloud computing initiatives, mobile
technologies, data center consolidation and social collaboration.
Winkler said that mobile data traffic is on track to increase by 39
times between 2009 and 2014, and the social software market is showing
40 percent growth per year through 2013 — also contributing to getting
the Pentagon’s policies rolling further down in operations.
The Army also wants to increase use of firm fixed-price and
multiple-source contracts, as directed in Carter’s Better Buying Power
initiative, and is looking to maximize broadly scoped contracts that can be
used for a variety of missions.
However, there are still plenty of challenges, and there likely will
be more to come. Winkler predicted that force reductions could still lie
ahead for DOD, citing his own experience in the 1980s when, like now,
an insourcing effort was followed by a hiring freeze — which was later
followed by layoffs.
“We can tighten our belts and squeeze a little bit [as directed by
the Pentagon] — but I think it’s going to be more than just a little
bit,” Winkler said.
Still, PEO-EIS has been involved in the development of Better Buying
Power tenets, including helping shape concepts and strategies for
improving tradecraft services, establishing common taxonomy and
reforming IT acquisition — all banner items in Carter’s 23-point
acquisition reform plan released last September.