When we talk about the WebSphere Application Server Hypervisor Edition, we often get a lot of questions about whether or not SUSE Linux is the only flavor of the Linux operating system that we support. The short answer to that question is no.
While it is true that we only deliver the WebSphere Application Server Hypervisor Edition with a SUSE Linux operating system, we do support the use of the virtual image packaging with Red Hat Enterprise Linux as the base operating system. The basic process consists of creating a virtual machine disk based on a suitable Red Hat install, altering the OVF file in WebSphere Application Server Hypervisor Edition to reference this virtual disk instead of the SUSE virtual disk, and then packaging a new OVA file that contains all the same WebSphere virtual disks (profiles, binaries, IBM HTTP Server) but swaps out the SUSE virtual disk for the Red Hat virtual disk. We have done this many times in both the lab and the field, and we offer services to users who need help in creating the image.
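If you want a feel for the repackaging step, the sketch below shows the general idea. An OVA is just a tar archive containing the OVF descriptor and the virtual disks, so once you have edited the OVF to reference your Red Hat disk, rebuilding the package amounts to re-tarring the files with the descriptor first. This is only a minimal illustration; the file names are hypothetical, and it does not cover updating disk sizes or manifest checksums in the OVF package.

import tarfile

# Hypothetical file names: the edited descriptor plus the disks we want in the
# new package (the Red Hat OS disk in place of the original SUSE disk).
ovf_descriptor = "WASHVE.ovf"          # already edited to reference rhel-os.vmdk
disks = ["rhel-os.vmdk", "was-binaries.vmdk", "was-profiles.vmdk", "ihs.vmdk"]

# Per the OVF specification, the .ovf descriptor goes first in the archive,
# and an OVA is an uncompressed tar file.
with tarfile.open("WASHVE-RHEL.ova", "w") as ova:
    ova.add(ovf_descriptor, arcname=ovf_descriptor)
    for disk in disks:
        ova.add(disk, arcname=disk)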
Customers often ask if there is any difference in using Red Hat versus SUSE Linux. The answer is, of course, yes and no. The answer is yes in that users must bring their own licenses of Red Hat (SUSE Linux licenses are included in the WebSphere Application Server Hypervisor Edition), and users must support and maintain the Red Hat operating system on their own. However, once the image is built, there is absolutely no difference in the use of that image within WebSphere CloudBurst.
Once built, users upload the image into their WebSphere CloudBurst catalog, and it is available for use in pattern building just like any other image. I mentioned that users are responsible for updating and maintaining the image; fortunately, they can use WebSphere CloudBurst to create those updated images. When patches or updates are ready for the Red Hat operating system, the Extend/Capture facility available for images in WebSphere CloudBurst can be used to create a new custom Red Hat image with the desired fixes. This is all done without ever having to recreate and repackage the image from scratch.
I know seeing is believing, so with respect to the "sameness" of using a Red Hat version of the WebSphere Application Server Hypervisor Edition within WebSphere CloudBurst, I've created a short demo you can watch here. As always, let us know what you think and send any questions our way.
I'm going to take a different approach this week in the blog. Instead of me telling you about some of the features or uses of WebSphere CloudBurst, I thought I would catch up with someone using the product every day, WebSphere Test Architect Robbie Minshall. Robbie is responsible for a team of testers that harnesses a lab of over 2,000 physical machines to put our WebSphere Application Server product through some pretty rigorous testing. Toward the beginning of this year, Robbie's team started to leverage the WebSphere CloudBurst Appliance to create the WebSphere Application Server environments needed for their testing.
Robbie, can you tell us a little bit about what the WebSphere Application Server test efforts entail?
In WebSphere Application Server development and test we have two primary scenarios. The first is making sure that developers have rapid access to code, test cases, and server topologies so that they can write code and test cases and then execute test scenarios on meaningful topologies. The second scenario is an automated daily regression where, in response to a build, we provision a massive number of WebSphere Application Server topologies and execute our automated regression tests.
Previously, we supported these scenarios through the deployment of Tivoli Provisioning Manager for operating system provisioning, some applications for checking out environments, and a lot of automation scripts for the silent install and configuration of WebSphere Application Server cells.
Given those scenarios and the existing solution, what are your motivations for setting up a private cloud using WebSphere CloudBurst Appliance?
We are supporting these scenarios through a pretty complicated combination of technologies: silent WAS install scripts, wsadmin configuration scripts, a custom hardware leasing application, and Tivoli Provisioning Manager for OS provisioning. This solution has worked very well for us, though, as always, we are looking for areas to improve, opportunities to simplify, and ways to reduce our investment in custom automation scripts. Mainly, there were three areas where we wanted to improve our framework: availability, utilization, and management. This is why we started looking to the WebSphere CloudBurst Appliance.
Can you expand a bit on what you are looking for in those three areas?
The first focus area we have is availability of environments. We really wanted to lower the entry requirement for the skills and education necessary to get a development or test environment. Setting up these environments has just been too hard, too time consuming, and too error prone. Using WebSphere CloudBurst we can provide an easy push button solution for developers to get on-demand access to the topologies they need.
The second area we are looking for significant improvements in is hardware utilization. Our budgets are tight, and in our native automation pools we are only using between 6% and 12% of the available physical resources. In order to improve this we looked at leveraging virtualization. WebSphere CloudBurst offers the classic benefits of virtualization with the nice additions of optimized WebSphere Application Server placement and really good topology and pattern management. In our initial experiments we were able to push hardware utilization up to 90% of physical capacity and were consistently leveraging around 70% of our physical capacity.
Finally we are looking to improve and simplify our management of physical resources and automation. We work in a lot of small agile teams and organizational priorities change from iteration to iteration. Not only does WebSphere CloudBurst allow us to maintain a catalog of topologies or patterns for releases but it also allows us to adjust physical resource allocation to teams through the use of sub clouds or cloud groups.
Basically we felt that WebSphere CloudBurst would improve the availability of application environments, enhance automation, and improve hardware utilization all with very low physical and administrative costs.
What were some of the challenges involved with getting a cloud up and running in your test department?
One of our challenges seems like it would be common to many scenarios, especially in today's world. Our budget for new hardware to build out our cloud infrastructure was initially very limited. Most cloud infrastructure designs depict ideal hardware scenarios including SANs, large multicore machines, and private and public networks within a dedicated lab. Quite frankly, we did not have the budget to create this from scratch. It was important for us to demonstrate value and gather data to warrant future investment in dedicated infrastructure. After some performance comparisons we were happily surprised to see that we could leverage our existing mixed hardware within a distributed cloud. The performance of application environments dispensed by WebSphere CloudBurst onto many small existing boxes was very comparable to that of large multicore machines with a SAN. This allows us to leverage existing hardware with minimal investment, all the while demonstrating the value and efficiencies of cloud computing. That data in turn has allowed us to obtain new dedicated hardware to iteratively build up a larger lab specifically for use with WebSphere CloudBurst.
Specifically with WebSphere CloudBurst, are there any tips/hints you would offer users getting started with the appliance?
Sure. First, we quickly realized as we added hypervisors to our WebSphere CloudBurst setup that it was critical to have someone with network knowledge on hand. This is because the hypervisors came from various sections of our lab, and we really needed people with knowledge of how the network operated in those different sections. Once we had the right people, we were able to set up WebSphere CloudBurst and deploy patterns within an hour and a half.
Moving forward we continued to have challenges as we dynamically moved systems between our native hardware pool and our cloud. Occasionally the WebSphere CloudBurst administrator would move a system into the cloud but incorrectly configure the network or storage information. This led to some misconfigured hypervisors polluting our cloud. We overcame this, quite simply and satisfactorily I may add, by creating some simple WebSphere CloudBurst CLI scripts that add the hypervisors, test each one individually by carrying out a small deployment to it, and then move the correctly configured hypervisors into the cloud after verifying success. Misconfigured hypervisors go into a pool for problem determination. This has allowed us to maintain a clean cloud, and we are able to dynamically move our hardware in and out of the cloud to meet our business objectives.
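To make the flow Robbie describes concrete, here is a minimal sketch of that kind of gatekeeping script. The helper functions are stubs standing in for the corresponding WebSphere CloudBurst CLI (Jython) or REST calls, and the host and cloud group names are hypothetical, so treat this as an outline of the logic rather than working CLI code.

# Stubs standing in for the real WebSphere CloudBurst CLI (Jython) or REST calls;
# replace these with your appliance's actual commands.
def add_hypervisor(host):
    print("registering", host)
    return {"host": host}

def deploy_test_pattern(hv):
    print("running a small test deployment on", hv["host"])
    return True  # in reality: deploy a tiny pattern and report success or failure

def move_to_cloud_group(hv, group):
    print("adding", hv["host"], "to cloud group", group)

def quarantine(hv):
    print("moving", hv["host"], "to the problem-determination pool")

candidates = ["hv-lab-01.example.com", "hv-lab-02.example.com"]  # hypothetical hosts

for host in candidates:
    hv = add_hypervisor(host)
    if deploy_test_pattern(hv):          # verified: network and storage are configured correctly
        move_to_cloud_group(hv, "test-cloud")
    else:
        quarantine(hv)                   # misconfigured: keep it out of the clean cloud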
We also use the WebSphere CloudBurst CLI to prime the cloud, so to speak. Before using a given hypervisor in our cloud, we execute scripts that ensure each unique virtual image in our catalog has been deployed to each of our hypervisors at least once. When an image is first deployed to a hypervisor, a cache is created on the hypervisor side of the connection, which means subsequent deployments do not require the entire image to be transferred over the wire. This gives us consistent and fast deployment times once we are using a hypervisor in our cloud.
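The priming pass is just a nested loop over images and hypervisors. Again, the helpers and names below are hypothetical stand-ins for the actual CLI operations; the point is simply that one throwaway deployment per image/hypervisor pair populates the hypervisor-side cache.

# Hypothetical stand-ins for the real CLI calls; names are illustrative only.
def deploy_once(image, hypervisor):
    print("priming", hypervisor, "with", image)
    return "instance-id"  # in reality: deploy the image and wait for it to start

def delete_deployment(instance):
    print("removing", instance)  # tear the throwaway deployment back down

catalog_images = ["WAS HV 7.0 64-bit", "Custom RHEL base"]        # hypothetical image names
hypervisors = ["hv-lab-01.example.com", "hv-lab-02.example.com"]  # hypothetical hosts

for image in catalog_images:
    for hv in hypervisors:
        instance = deploy_once(image, hv)   # the first transfer populates the cache on hv
        delete_deployment(instance)         # later deployments reuse the cached image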
I would assume that like many applications deployed on WebSphere Application Server, your team’s applications have several external dependencies. Some of these dependencies won’t necessarily be in the cloud, so how did you handle this?
You’re right about the external dependencies. Our applications and test cases run on the WebSphere Application Server but are dependent upon many external resources such as databases, LDAP servers, external web services etc. WebSphere CloudBurst allows us to deploy WAS topologies in a very dynamic and configurable way but the 1.0.1 version does not allow us to deploy these external resources in the same manner. This was overcome by using script packages in our patterns. These script packages allow us to associate our test applications with various patterns we have defined. The script package definition also allows us to pass in parameters to the execution of our scripts. We supply these parameter values during deploy time, and these values are used to convey the name or location of various external resources. The scripts that install our applications can access these values and ensure the application is properly integrated with the set of resources not managed by the appliance.
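As a rough illustration of that last point, here is a sketch of a script-package script that picks up deploy-time parameter values and feeds them to an application install step. It assumes the parameters are surfaced to the script as environment variables, as is typical for CloudBurst script packages; the variable names, file paths, and install command are all hypothetical.

import os
import subprocess

# Deploy-time parameters supplied in the pattern; names are hypothetical.
db_host  = os.environ.get("TEST_DB_HOST", "localhost")
ldap_url = os.environ.get("TEST_LDAP_URL", "ldap://localhost:389")

# Record the external resource locations where the test application install
# scripts expect them (a hypothetical properties file).
with open("/tmp/external-resources.properties", "w") as props:
    props.write("db.host=%s\n" % db_host)
    props.write("ldap.url=%s\n" % ldap_url)

# Hand off to a hypothetical wsadmin script that installs and wires up the application.
subprocess.call(["/opt/IBM/WebSphere/AppServer/bin/wsadmin.sh",
                 "-lang", "jython", "-f", "/tmp/installTestApp.py"])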
What is your team looking to do next with WebSphere CloudBurst and their private cloud?
The next challenge on our plate is to keep up with the demand of our expanding cloud and to develop a more dynamic relationship between our native pools and our cloud using the Tivoli Provisioning Manager. These are fun challenges to have and we look forward to sharing our progress.
I'm glad I got to spend some time with Robbie to glean some insight into their work and progress with WebSphere CloudBurst. I hope this information was useful to you. It's always nice to hear about a product from practitioners who can give you hints, tips, gotchas, and other useful information. Be sure to let me know if you have any questions about what Robbie and his team are doing with WebSphere CloudBurst.
The Impact conference disappeared into our collective rearview mirror about as fast as it arrived. It seems like a blur. In a word, the conference was... exhausting! In a few more words, it was informative, exciting, and illuminating. I hope that many of you had a chance to make it out there, and I hope even more of you make it to Impact in 2013.
For those of you familiar with the conference, you know that it is typically a launching ground for new product versions and altogether new products. This year was certainly no different with the launch of the new version of WebSphere Application Server (8.5), the new and improved IBM Business Process Manager and IBM Operational Decision Manager, a new version of WebSphere eXtreme Scale (8.5), and numerous updates across the messaging and connectivity stack. While I encourage you to follow up on all of these important announcements, they are not what I am going to focus on today. Instead, I am going to focus on the new addition to the IBM family that got plenty of attention this year: IBM PureApplication System.
Joe recently touched on this new offering, so I won't get into an exhaustive overview. To put it briefly, IBM PureApplication System is an expert integrated system. What does that mean? First and foremost it means that it is a system -- a whole. It is an integrated platform of hardware and software, optimized and tuned for running transactional web and database workloads. I do not mean that it is a system of software that we pre-install on off-the-shelf hardware. Rather, it is the result of hardware and software engineers across IBM working together to build a system that is expert at what it does. More than just the web application and database software though, IBM PureApplication System also contains pre-installed and pre-configured management software that delivers a soup to nuts (hardware to application) single pane of glass for managing the entire system. I could go on and on, but again that's not my purpose here. I encourage you to check out the new IBM PureSystems web page for more information and some pretty cool videos.
If you take a look at IBM PureApplication System, you will quickly find that the notion of pattern-based deployments (something I have talked about at length on this blog) plays a key role in the new system. In fact, the same virtual system and virtual application pattern constructs that you have come to know in IBM Workload Deployer are front and center in IBM PureApplication System as well. In the new system, you can build custom virtual system and virtual application patterns, deploy them to your cloud, and then manage them over time. If you are familiar with the IBM Workload Deployer user interface, you will likely find yourself immediately at home in the interface of IBM PureApplication System. Given all of that, if you are like many of the users I talked to at Impact and since, you probably have some questions about IBM Workload Deployer and this new system. Most commonly, I get these two questions: "What does this mean for the IBM Workload Deployer product?" and "How do I know when to use IBM PureApplication System versus IBM Workload Deployer?" Let me do my best to address those questions.
In terms of the impact of IBM PureApplication System on the IBM Workload Deployer offering, I can only view it in one way: affirmation. As I said above, IBM PureApplication System puts pattern-based deployment front and center, and further affirms that this kind of approach is crucial to the evolution of application delivery and management. Those of you familiar with IBM Workload Deployer or its predecessor WebSphere CloudBurst know that we have been talking about patterns for years. Rest assured we will continue to talk about patterns and solutions for building, deploying, and managing them. As it stands, we have at least three ways for you to build, deploy, and manage patterns: IBM SmartCloud Application Services, IBM Workload Deployer, and IBM PureApplication System. As you can see, options for consuming patterns have only increased since the initial launch of WebSphere CloudBurst. Furthermore, if you were at Impact, you know that we have a vibrant and vocal community of IBM Workload Deployer users, and I hope to see that community continue to grow! As I see it, the core technology of IBM Workload Deployer is becoming our 'operating system' for cloud platform management.
The question of when to use IBM Workload Deployer versus IBM PureApplication System is a bit more nuanced and not something one can or should try to answer definitively in a blog post. One thing I do suggest, though, is that when evaluating these two technologies, it is important to acknowledge that they have different business value propositions. Sure, they share common core technology in terms of building, deploying, and managing pattern-based environments, but beyond that they diverge a bit. Remember, IBM PureApplication System is, well, a system. It is the hardware, software, and management technology you need to run your middleware application workloads. It is pre-built and pre-integrated to the point that it only requires you to roll it into your datacenter, hook it up to your network, and do some one-time configuration. The aim is to go from receipt of the system to up and running with your first deployment in four hours, and if you were at Impact you saw an amusing video with the chief architect (Jason McGee) that proves this claim.
IBM Workload Deployer is fundamentally different in terms of how you consume it and how it integrates with your infrastructure. Most notably, IBM Workload Deployer does not include optimized hardware (servers, storage, networking) for running your workloads or a single point of management for everything from hardware to applications. To use IBM Workload Deployer you attach it to your network and point it at existing virtualized servers. Simply put, IBM Workload Deployer assumes you have existing, under-utilized hardware that you can get more out of with the intelligent deployment and management approach the appliance delivers. While you do not get the pre-integrated and optimized system of hardware plus software, you do get the flexibility to use your existing infrastructure.
As you can see, there are similarities (patterns) and differences (whole system vs. management system), and the result is a pretty different set of value propositions. The key in evaluating these technologies is that you do so with a crisp understanding of your current needs AND your future plans for growth and evolution. I know this kind of advice is a bit generalized, but I hope the differences I discussed above help you to at least understand the capabilities of the two different offerings. As always, if you have any comments or questions, please reply to the post!
When I talk with WebSphere CloudBurst users, the topic of custom virtual images comes up frequently. In some cases they simply want to customize a shipped IBM Hypervisor Edition image, and in other cases they want to create a completely custom image. Creating a customized version of an IBM Hypervisor Edition image is relatively easy since we give you extend and capture in WebSphere CloudBurst. Creating a completely custom image has historically been a bit tougher, mostly owing to the fact that there was not a standard tool or process for image assembly. I am happy to say that today's publication of the IBM Image Construction and Composition Tool changes all that.
Watch a demo of the IBM Image Construction and Composition Tool
The primary purpose of the Image Construction and Composition Tool is to enable a modular approach to virtual image construction, while taking into account the typical division of responsibilities within an organization. The tool allows the right people within an organization to contribute their specialized knowledge as appropriate to the virtual image creation process. This means OS teams can handle the OS and software teams can handle the appropriate software. A separate image builder can then use both OS and software components to meet the needs of users within the organization. Best of all, the image builder does not need intimate knowledge of how to install or configure any of the components in the image. They simply need to know which OS and software components to use.
When using the Image Construction and Composition Tool, you start by defining the base operating system you wish to use for your images. You can do this by importing an existing virtual image with an OS already installed, providing an ISO for the OS, or pointing to a base OS image on the IBM Cloud. The bottom line is that you have the necessary flexibility to start with your certified or 'golden' operating system build. Once you have the base OS image defined in the Image Construction and Composition Tool, you can start defining custom software for use in the images you will compose.
In the tool, bundles represent the software you wish to install within a virtual image. The definition of a bundle contains two major parts: installation and configuration. The installation component of a bundle tells the Image Construction and Composition Tool how to install your software into the virtual image. You provide a script or set of scripts that install the necessary components into your image, and you direct the tool to call these scripts. These tasks run once, during the initial creation of the virtual image, allowing you to bake large binaries, the results of long-running installation tasks, and other necessary content directly into your image.
The configuration section of a bundle defines actions that configure the software installed into the image. As with the installation tasks, you provide a script or set of scripts for the configuration tasks. Unlike installation tasks, which run exactly once, configuration scripts become part of the image's activation framework and, as such, run during each image deployment. Using the tool, you can define input parameters for configuration scripts and optionally expose them so that users can provide values for the parameters at image deploy time. Configuration tasks are important in providing the flexibility that allows users to leverage a single virtual image for a number of different deployment scenarios.
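To make the install/configure split concrete, here is a minimal sketch of the two scripts a simple bundle might carry. Everything specific in it is hypothetical (the software being installed, the paths, and the ACME_PORT parameter name), and how the tool packages the scripts and passes parameter values depends on your bundle definition; the sketch only illustrates the division of labor, with the first script running once during synchronization and the second running at every deployment.

# install.py -- runs once, when the tool synchronizes the image (hypothetical example).
import os
import subprocess

# Unpack a (hypothetical) product archive into the image; large binaries are
# baked into the image here so they are not transferred at deploy time.
os.makedirs("/opt/acme", exist_ok=True)
subprocess.call(["tar", "-xzf", "/tmp/acme-server.tar.gz", "-C", "/opt/acme"])

# configure.py -- added to the activation framework, so it runs on every deployment.
import os

# A deploy-time parameter exposed by the bundle definition (hypothetical name).
port = os.environ.get("ACME_PORT", "8080")

# Write the instance-specific configuration each time the image is deployed.
os.makedirs("/opt/acme/conf", exist_ok=True)
with open("/opt/acme/conf/server.conf", "w") as conf:
    conf.write("listen_port=%s\n" % port)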
Once you have your base OS image and one or more bundles defined in the Image Construction and Composition Tool, you can compose a virtual image. To compose a virtual image, you extend the base OS image and add any number of bundles into the new image. A base OS image plus a set of bundles defines a unique image.
After you define the image you want to construct, you initiate a synchronize action in the Image Construction and Composition Tool. When you start the synchronize action, the tool first creates a virtual machine in either a VMware or IBM Cloud environment (based on how you configured the tool). Next, the installation tasks of each bundle you included in the virtual image run to install the required software. Finally, the tool copies the configuration scripts from each bundle into the virtual machine and adds them to the image’s activation framework. This ensures the automatic invocation of all configuration scripts during subsequent image deployments.
Once the image is in the synchronized state, you can capture it. Capturing the image results in the creation of a virtual image based on the state of the synchronized virtual machine. The tool also automates the generation of metadata that becomes part of the virtual image package. When the capture of the virtual image completes, you can export it from the Image Construction and Composition Tool and deploy it using WebSphere CloudBurst, Tivoli Provisioning Manager, or the IBM Cloud.
I am excited for users to get their hands on the Image Construction and Composition Tool. I believe it represents the first big step in helping users to design and construct more sustainable virtual images. Did I mention it is completely free to download and use? Visit the Image Construction and Composition Tool website for more details and a download link. I look forward to your comments and feedback.
One of the things I haven't written about much here is how the WebSphere CloudBurst Appliance integrates with other IBM software solutions. One of those interesting integration scenarios, and one I think is particularly useful for developers, involves Rational Build Forge.
Very simply put, Rational Build Forge is an adaptive execution framework that allows users to define completely automated workflows for just about any purpose. These workflows are represented as projects that contain a discrete number of steps. When looking at Rational Build Forge through the software assembly prism, the offering allows users to fully automate and govern the process of building, assembling, and delivering software into an application environment.
Now, on to the integration of WebSphere CloudBurst and Rational Build Forge. Users can build custom patterns in WebSphere CloudBurst that include a special script package (which I'll eventually provide a link to from here). This script package provides the glue between the deployment process in WebSphere CloudBurst and Rational Build Forge. When deploying a WebSphere CloudBurst pattern that contains this script package, users provide the name of a Rational Build Forge project as well as information about the Rational Build Forge server on which the project is defined.
Once the necessary information is supplied, the deployment process gets underway. Toward the end of the deployment, like all other scripts included in patterns, the special Rational Build Forge script is invoked. This results in the project specified during deployment being executed on the virtual machine created by WebSphere CloudBurst.
Because the Rational Build Forge project executes on a virtual machine set up by WebSphere CloudBurst, the individual steps of the project can very easily access the WebSphere Application Server environment. Thus, the Rational Build Forge project could contain steps to build, package, and deploy an application into the WebSphere Application Server cell. The result is a fully automated process that includes everything from standing up the application environment to delivering applications into that environment.
I put together a short demonstration of this integration, and you can take a look at it here. As always, please let us know if you have any questions or comments. Your feedback is much appreciated!
Lately, I have run into multiple situations where an IBM Workload Deployer user has been trying to decide exactly how they want to create their customized images for the cloud. Essentially, they have been trying to decide whether to use the native extend and capture capabilities of IBM Workload Deployer, or to pursue the use of the Image Construction and Composition Tool (also included with the appliance). The conversations have been interesting and challenging, but more importantly, they have been a reminder that constructing enterprise-ready environments for the cloud does not happen by magic. It takes thought, deliberate planning, sustainable design, and the tools to carry everything out.
The tools part we have covered. I have every confidence, bolstered by user experience after user experience, that IBM Workload Deployer and associated tools (like the Image Construction and Composition Tool) equip you to build highly customized, cloud-based application environments. In this post, I want to focus in on the thought process that goes into how you decide to build your customized environment. Specifically, I would like to talk about important points to consider as you try to understand whether to use the native extend and capture capabilities of IBM Workload Deployer or the Image Construction and Composition Tool.
To be clear from the outset, I am not trying to provide a decision flowchart in this post. For all intents and purposes, that would be next to impossible. Instead, I want to pose to you some important questions that you should ask of yourself, along with the reasons why I believe those queries to be important. Keeping in mind that this is not an all-inclusive list, here it goes:
Question: Are the customizations that you want to make congruent with an IBM-supplied image?
Reason: One of the first decisions you should make is whether or not you can start with an IBM-supplied image as the base for your customization. You need to know what middleware elements (type and version) make up your environment and what operating system should host that environment (version and distribution). You can match that information against the list of content that IBM supplies. If there is a match, you should start by looking at extend and capture to customize that image to meet your needs. If there is no direct match, you may be looking at the Image Construction and Composition Tool.
Question: Does your custom content supplement middleware content supplied in an IBM image?
Reason: If you simply need to add additional components that supplement software already in an IBM image, I believe it is best to first examine the use of extend and capture. Whether these components are IBM software or not is irrelevant as the extend and capture functionality does not care.
Question: How configurable do you want to make the custom content in your image?
Reason: If you are adding content into the image, you need to think about just how configurable you need it to be. When you use extend and capture, you add the content to an existing image in a manner that ends up being largely opaque to IBM Workload Deployer. To configure that content, you need script packages, and you need to make sure they are part of every pattern you create based on the image. Alternatively, if you use the Image Construction and Composition Tool, you can embed configuration behavior in the image's activation engine, and you can expose deploy-time parameters without needing to include script packages in every single pattern. As an example, if you need to add a monitoring agent into your environment, you would likely do this via extend and capture and end up with a pretty simple script package to configure that agent during deployment. If, however, you need to create an image with a custom database, you would likely favor the Image Construction and Composition Tool, as you could embed common deploy-time configuration parameters directly in the image. For a database, there are likely to be many more deploy-time configuration parameters that you want to expose than for a simpler monitoring agent.
Question: Is your main focus on making operating system changes?
Reason: If your primary focus is on making operating system changes AND the answer to the first question is that your target content aligns well with IBM-supplied images, then extend and capture is where you want to start. Of course, you need to make sure that you can make all necessary changes to the OS with extend and capture, but I will say that this capability is not very restrictive at all.
Admittedly, this is a short list, but I believe it is a good starting point for how you decide upon one approach versus the other. Also, I would be remiss not to point out that these tools are absolutely not mutually exclusive. Many users I work with use a combination of the two approaches. In fact, there are some use cases that call for both tools. Start by creating a completely custom image in the Image Construction and Composition Tool, and then subject that image to the extend and capture process in IBM Workload Deployer to customize it for a particular purpose, team, project, etc. I hope you find this helpful, and I welcome your feedback or thoughts!
When many people think of cloud computing, they immediately think of virtualization, and of virtual machines in particular. This is completely natural and not at all surprising. After all, one of the core underlying technologies necessary for cloud computing is virtualization. However, it is important not to confuse one element of cloud computing with the entire thing - and this can sometimes happen. Many people have begun to leverage virtual machines in their on-premise environments and sometimes begin to call this their private cloud. While virtualization is a substantial step forward and helps get you started down the necessary path of standardization and automation that is essential in a cloud - it is not in and of itself "a cloud".
The National Institute of Standards and Technology has published its definition of cloud computing. This is a very complete and yet concise definition that includes not only the essential characteristics of a cloud solution but also the service models (IaaS, PaaS, SaaS) and deployment models (public, private, hybrid, community). It is a great way to get a perspective on cloud and can be useful when considering the solutions of various vendors.
Let me summarize the essential elements of cloud from this definition here:
on-demand self-service
broad network access
resource pooling
rapid elasticity
measured service
So, this is interesting. Not only is this much more than just virtualization - but virtualization isn't even mentioned in the list explicitly. Not to worry - virtualization is of course important and is included under the resource pooling topic. I would assert that virtualization is also necessary to facilitate the kind of on-demand, self-service, elastically scaling resources that are leveraged in a cloud. What is crystal clear from this definition is that there is a lot more to a cloud solution than just virtual images and some hypervisor infrastructure upon which to run them. Somebody must provide the necessary on-demand, self-service capabilities, the network access to those services, the management of the resource pools, true elasticity for running systems, the measuring of services, and so forth. IBM Workload Deployer provides just such capabilities for the on-premise cloud, allowing you to efficiently deploy patterns built for virtual systems and virtual applications, with deep knowledge of the middleware being provisioned used to optimize these solutions. Furthermore, Workload Deployer provides complete lifecycle management: pattern creation, deployment and provisioning, applying maintenance, resource and license management in the on-premise cloud, elastic scalability, and eventually returning resources to the on-premise cloud to be reused. Workload Deployer is a complete solution not only for server virtualization but, of course, for cloud computing.
However, virtualization doesn't have to stop with just virtual machines. It is a general principle that can be applied to more than just servers. At its core, virtualization is really about providing a level of abstraction between some real resources and the consumers of those resources. This is a natural fit when we think of server virtualization and virtual machines. However, there are also substantial benefits to be gained by adopting a similar abstraction between the middleware and the applications themselves - sometimes referred to as application virtualization.
By application virtualization I mean providing the capabilities to abstract the application from the underlying infrastructure such that it can be elastic, participate in health management policies, and move with agility across the pool of application infrastructure resources. This type of application virtualization is built into our Virtual Application patterns (hence the name) in Workload Deployer and is surfaced in solutions via policies (such as scaling and routing) and high availability functions built into the Web Application pattern type. For Virtual Applications these features are fully integrated and optimized, as are all elements of Virtual Applications. However, similar features have also been available for WebSphere Application Server deployments in Virtual System patterns via a special extension.
WebSphere Virtual Enterprise provides application virtualization for traditional WebSphere ND solutions, and this same capability is delivered for Virtual System pattern deployments of WebSphere Application Server by way of the Intelligent Management Pack. Leveraging the capabilities of Workload Deployer with Virtual Systems lets you gain the benefits of server virtualization: reduced hardware, rapid and consistent deployment of entire systems, dynamically adjusted resource consumption, and much more. Leveraging the capabilities of the Intelligent Management Pack adds the ability to manage service level agreements with elastic scaling and health management, lower operational costs, and improved application management. These two solutions together provide a powerful combination to improve the management and resiliency of your enterprise applications.
If you would like to learn more about application virtualization using the Intelligent Management Pack in conjunction with Virtual System patterns in IWD, then please join Keith Smith and me tomorrow for a webcast on this very topic. Keith is the lead architect on our WebSphere Virtual Enterprise and Intelligent Management Pack products and brings a wealth of experience in this space. So don't miss this opportunity - register here.
When IBM Workload Deployer v3.0 rolled around, the appliance introduced the concept of shared services. These were services that a cloud administrator could launch into the cloud infrastructure defined to IBM Workload Deployer, and use to serve a number of different application deployments. There were, and continue to be, two main shared services: a proxy service and a cache service. The shared proxy service does pretty much what you may guess. It provides request routing capabilities across multiple different instances of multiple different applications, thereby providing a centralized resource that encapsulates this basic need in an application environment. You can probably also guess what the caching service does. It caches things! Specifically, in IBM Workload Deployer v3.0 it provided in-memory caching of HTTP sessions, thus ensuring high availability of data stored in those sessions.
Undoubtedly, the ability to make HTTP session data fault tolerant is extremely critical in any application environment, cloud-based environments included. However, the applicability of a shared cache service is much further reaching, and in IBM Workload Deployer v3.1, we are starting to open this service up to your applications. What does this mean to you? Quite simply it now means that you can access this cache directly from your application code. If you are familiar with WebSphere eXtreme Scale or the DataPower XC10 Caching Appliance, then you know exactly what I mean. You can use the WebSphere eXtreme Scale ObjectGrid API to insert, read, update, and delete entries that exist in the in-memory cache. The underlying cache technology is based on the same code that powers WebSphere eXtreme Scale and DataPower XC10, so you can be sure that your cache is scalable, fault tolerant, responsive, and otherwise able to meet the needs of your application.
As I hope you find to be the case with many IBM Workload Deployer capabilities, this is a superbly simple capability to leverage. When you deploy virtual application patterns based on the IBM Workload Deployer Pattern for Web Applications, the capability is simply there. The underlying runtime that is serving your application is automatically augmented with the capabilities necessary so that your applications can connect to and utilize the deployed caching service. It is also worth pointing out that you can utilize the caching capabilities provided by this shared service for applications and application infrastructure that you deploy via virtual system patterns as well. You can either choose to augment the WebSphere Application Server environment with the XC10 Feature Pack (a deploy-time option for virtual system patterns built on WebSphere Application Server Hypervisor Edition v8), or you can configure WebSphere Application Server as you always would when integrating with a WebSphere eXtreme Scale environment or a DataPower XC10 Appliance.
What's the real benefit to all of this you ask? Well, when you use the shared caching service, you get the benefits of a distributed, in-memory, extremely scalable cache without having to deal with too much setup or administration. You simply tell IBM Workload Deployer how many resources you want to dedicate to your cache, and deploy the shared service. IBM Workload Deployer takes care of the details, including scaling in and out the cache to meet the needs of the system. On top of all of this, there is also an option to configure 'Next to the Cloud' caching. If you currently own DataPower XC10 appliances, you can make those available to virtual application pattern deployments (this was already possible with virtual system patterns) by simply providing details of the location of the appliance collective in question.
Put simply, setting up, administering, and utilizing an object caching service for your applications has never been easier. Check it out and let us know what you think!
Typically we spend most of the real estate on this blog talking about cloud computing and specifically, IBM Workload Deployer. However, I am hoping that this week you permit me to take a bit of a detour to discuss a very important new announcement. Last week, IBM announced the early availability of the WebSphere Application Server v8.5 Alpha. In all fairness, your response may be 'You guys always have WAS Alphas. Why should I care about this one?' I have two words for you: Liberty Profile.
Based on my own experience in the IBM labs and my conversations with numerous enterprise developers, I think I understand many of the requirements for an efficient development environment. Developers need tools and runtimes that are lightweight, easy to install, simple to configure, and fast to recycle or otherwise update. Enhancements in WebSphere Application Server v8.0 took many of these concerns head on with features such as directory-based install and drastically improved server startup times. The new v8.5 Alpha, and specifically the Liberty Profile, extends this developer focus even further.
The Liberty Profile is a lightweight, fast, and easy to use application runtime that you can download for free by visiting the WASdev community site. The design of the runtime is best described as fit-for-purpose and you configure it by selectively enabling and disabling features based on application need. For example, you may enable the servlet, JPA, and JSP features, or you may decide you only need to enable the servlet feature of the runtime. It is completely up to you! In addition to this innovative new runtime, the WebSphere Application Server v8.5 Alpha also includes free tools for Eclipse. These tools make it simple to create Liberty Profile server instances, start server instances, stop server instances, install applications, and remove applications. In fact, you can do all of this and even download and install the WebSphere Application Server v8.5 Alpha without ever leaving your Eclipse workspace! Check out the demonstration below to see an example of installing and using the new Alpha.
I really hope that you will participate in the new WebSphere Application Server v8.5 Alpha. The setup process that includes both tools and runtime will take just a few minutes of your time, and leaves but a small footprint on your machine (the Liberty Profile of the WAS v8.5 Alpha is only ~50 MB unzipped). In the meantime, you can find more information about the Alpha on the WASdev site or in the new Information Center. Finally, don't forget to join in on the conversation on the WASdev forum!
If you were to compare the deployment mechanics for virtual application patterns and virtual system patterns, you would notice differences in the way IBM Workload Deployer establishes these environments in your cloud. In both cases the end result is a virtualized environment with which you can work, but the construction of these environments varies. For the most part, you need to understand the virtual application pattern deployment process when creating custom patterns of that type, and you need to understand the virtual system pattern deployment process when creating custom patterns of that type. However, the way in which IBM Workload Deployer deploys virtual application patterns may have an effect on how you build custom virtual system patterns.
When deploying virtual application patterns, IBM Workload Deployer does not use traditional IBM Hypervisor Edition images to initially create the virtual machines for your deployment. Instead, the appliance deploys a virtual image that contains only a hardened operating system environment. After the virtual machine initializes, the appliance triggers the installation, configuration, and integration of software and applications that make up the requested virtual application pattern. This is a bit more of a bottom-up, modular approach as compared to the virtual system pattern deployment process which involves the use of IBM Hypervisor Edition images. Neither is necessarily better than the other (after all they both result in customized deployments that happen in mere minutes), but they are different.
Okay, so I promised that the way in which the appliance deploys virtual application patterns had something to do with virtual system pattern customization techniques, but what exactly? It goes back to the beginning of virtual application pattern deployment and the base virtual image deployed by IBM Workload Deployer. When you deploy virtual application patterns, you never directly interact with this image. However, the image comes pre-loaded on the appliance and appears in the catalog right next to the IBM Hypervisor Edition images. This is important because it means you can use this base OS image in the creation of your custom virtual system patterns as well!
The current version of the base image contains a 64-bit Red Hat Enterprise Linux operating system and a single part that you can use in your virtual system patterns. Further, we place no restrictions on how you use or customize this image. You can even subject this image to the extend and capture process in IBM Workload Deployer. In this way, you can install any software content you want into the image (provided it runs on the OS of course), use the image in a pattern, and deploy that software via the appliance. Since you can use the image to build a virtual system pattern, you can include any configuration scripts that you require. Again, we do not inhibit the way in which you customize the image, nor do we constrain the way you use it in a virtual system pattern. It is entirely up to you.
Personally, I think this base image opens up a new set of possibilities for you, our users. Over the course of WebSphere CloudBurst and now IBM Workload Deployer, we got a lot of feedback requesting a base OS image that allowed this kind of flexibility. Well, it is here now, and I cannot wait to see how everyone starts using it!
When one uses IBM Workload Deployer (previously WebSphere CloudBurst) to deploy a virtual system pattern, they benefit from a completely automated deployment process. The automation includes the creation and placement of virtual machines, injection of IP addresses, initiation of internal processes, and invocation of included scripts. Most of these processes are straightforward and require little more than a brief overview. However, the placement of virtual machines stands out, and its inner workings are the subject of quite a few questions when I discuss the appliance. With that in mind, I thought I would provide a little more information on how the placement algorithm in IBM Workload Deployer works.
The placement subsystem in IBM Workload Deployer considers three primary elements: compute resources, availability, and license optimization. Compute resource availability is the gating factor for placement. That means IBM Workload Deployer first looks at the available CPU, memory, and storage resources in the collection of hypervisors making up the cloud group(s) you are targeting for deployment. If a particular hypervisor cannot provide enough resources based on the amount you requested for your deployment, it is automatically taken out of the eligible hosts pool. It is important to note that IBM Workload Deployer will overcommit CPU, and it will overcommit storage if you direct it to do so. It will not overcommit memory, because that could severely degrade the performance of the applications running in the virtual machines.
After choosing the pool of hypervisors that are capable of hosting the virtual machines in your deployment from a compute resource perspective, the appliance then considers high availability. To better understand this particular placement stage, let's consider an example. Say you are deploying a pattern based on WebSphere Application Server Hypervisor Edition and it contains two custom node parts. It is conceivable, and in fact likely, that these two custom node parts will host members of the same cluster, and thus the two nodes will support the same applications. As such, IBM Workload Deployer will attempt to place the two custom nodes on different physical machines in order to prevent a single point of failure. Of course, this depends on having two hypervisors with enough resources (CPU, memory, storage) to host the virtual machines, but the appliance makes that determination in the first placement stage.
After considering compute resources and high availability, IBM Workload Deployer moves to the last stage of placement: license optimization. In this stage, the placement subsystem attempts to place the virtual machines on hypervisors in a way that minimizes the licensing cost to you. The appliance can do this because it is aware of IBM virtualization licensing rules and takes those into account during this stage (if you aren't familiar with virtualization licensing rules and you are curious, ask your sales representative to explain them some time). During this stage, it will not violate any resource overcommit directives or rules in place, nor will it compromise system availability, but it will seek to minimize costs within these parameters.
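For intuition only, here is a toy sketch of the first two stages described above: filter out hypervisors that cannot satisfy the request (memory is never overcommitted), then spread parts of the same deployment across different hosts. This is not IBM's actual placement code; it ignores CPU and storage overcommit ratios and the license-optimization stage entirely, and is just meant to make the filtering-then-spreading idea concrete.

def eligible(hypervisors, mem_needed_gb):
    # Stage 1: capacity filter. Memory is the hard constraint here because it is
    # never overcommitted; real placement also weighs CPU and storage.
    return [hv for hv in hypervisors if hv["free_mem_gb"] >= mem_needed_gb]

def place(parts, hypervisors, mem_per_part_gb=4):
    placement = {}
    for part in parts:
        candidates = eligible(hypervisors, mem_per_part_gb)
        if not candidates:
            raise RuntimeError("no hypervisor can host " + part)
        # Stage 2: availability. Prefer a host not already running a part of
        # this deployment, so cluster members avoid a single point of failure.
        fresh = [hv for hv in candidates if hv["name"] not in placement.values()]
        chosen = (fresh or candidates)[0]
        chosen["free_mem_gb"] -= mem_per_part_gb
        placement[part] = chosen["name"]
    return placement

hosts = [{"name": "hv1", "free_mem_gb": 8}, {"name": "hv2", "free_mem_gb": 8}]
print(place(["dmgr", "custom_node_1", "custom_node_2"], hosts))
# The two custom nodes land on different hosts when capacity allows.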
At this point, I should make something clear that may already have occurred to you. You can override most of these placement rules by creating a cloud group containing only one hypervisor. In this case, IBM Workload Deployer will put all virtual machines on the single hypervisor until it runs out of compute resource (memory is likely to be the constraining factor). I would not suggest that you do this unless you have a good reason or you are in a simple pilot phase, but I do like to point out the art of the possible!
While this was not incredibly deep from a technical perspective, I do hope it provided a few helpful details on what goes on during the placement stages of deployment. If you have any questions, do let me know.
I hate sitting on secrets. I always have. I understand that sometimes it's in the best interest of everyone (and your job) to keep tight lips, but that does not make it any more fun. Inevitably, the run-up to our annual Impact conference means everyone in the lab is doing their fair share of secret keeping -- just waiting for announce time. For a lot of us, that day ended Tuesday with the announcement of the IBM Workload Deployer v3.0.
Now, you may be wondering, 'I have never heard of this. Why is it version 3.0??' Well, IBM Workload Deployer is a sort of evolution of the WebSphere CloudBurst Appliance, which was previously at version 2.0. This is good news for all of our current WebSphere CloudBurst users because all of the functionality (plus new bits of course) that they have been using in WebSphere CloudBurst are present in IBM Workload Deployer. You can use and customize our IBM Hypervisor Edition images in IBM Workload Deployer. You can build and deploy custom patterns that contain custom scripts in order to create highly customized IBM middleware environments. So, what's the big deal here? Two words: workload patterns.
Workload patterns represent a new cloud deployment model and are an evolution of the traditional topology patterns you may have seen with WebSphere CloudBurst Appliance (I am a little torn between evolution and revolution, but that's splitting hairs). Fundamentally, workload patterns raise the level of abstraction one notch higher than topology patterns and put the focus on the application. That means, when you use a workload pattern the focus is on the application instead of the application infrastructure. Perhaps an example would be helpful to illustrate how a workload pattern may work in IBM Workload Deployer.
Let's consider the use of a workload pattern that was part of the recent announcement, the IBM Workload Deployer Pattern for Web Applications v1.0. Just how might something like this work? It's simple really. You upload your application (maybe a WAR or EAR file), upload a database schema file (if you want to deploy a database with the solution), upload an LDIF file (if you want to set up an LDAP server in the deployment to configure application security), attach policies that describe application requirements (autonomic scaling behavior, availability guidelines, etc.), and hit the deploy button. IBM Workload Deployer handles setting up the necessary application middleware, installing and configuring applications, and then managing the resultant runtime in accordance with the policies you defined. In short, workload patterns provide a completely application-centric approach to deploying environments to the cloud.
Now, if you are a middleware administrator, application developer, or just a keen observer, you probably have picked up on the fact that in order to deliver something as consumable and easy to use as what I described above, one must make a certain number of assumptions. You are right. Workload patterns encapsulate the installation, configuration, and integration of middleware, as well as the installation and configuration of applications that run on that middleware. Most of this is completely hidden from you, the user. This means you have less control over configuration and integration, but you also have significantly reduced labor and increased freedom/agility. You can concentrate on the development of the application and its components and let IBM Workload Deployer create and manage the infrastructure that services that application.
Having shown and lobbied a bit for the benefits of workload patterns, I also completely understand that sometimes you just need more control. That is not a problem in IBM Workload Deployer because as I said before, you can still create custom patterns, with custom scripts based on custom IBM Hypervisor Edition images. The bottom line is that the IBM Workload Deployer offers choice and flexibility. If your application profile meshes well with a workload pattern, by all means use it. If you need more control over configuration or more highly customized environments, look into IBM Hypervisor Edition images and topology patterns. They are both present in IBM Workload Deployer, and the choice is yours.
If you happen to be coming to IBM Impact next week, there will be much more information about IBM Workload Deployer. I encourage you to drop by our sessions, ask questions, and take the opportunity to meet some of our IBM lab experts. Hope to see you in Las Vegas!
The concepts that govern users and user groups in WebSphere CloudBurst are fairly basic, but I get asked about them enough that I believe they warrant a short discussion. First things first, you can define users in WebSphere CloudBurst and optionally define user groups to assemble users into logical collections. For both users and user groups, you can assign roles that define the actions a particular user or group of users can take using the appliance.
All of that is straightforward, but it can get a bit tricky once we start considering the effects of user permissions when managing at the user group level. The basic premise is that when a user belongs to a group or groups, the user's effective permissions are the union of the permissions of all the groups to which they belong. While that is easy to say, and maybe even to understand, I feel like an example always helps.
Consider that we have a single user WCAGuy that belongs to the PatternAuthors, ContentCreators, and CloudAdmins groups. The permissions for those groups are as follows:
PatternAuthors: Users in this group have permission to create and deploy patterns
ContentCreators: Users in this group have permission to create catalog content as well as create and deploy patterns
CloudAdmins: Users in this group have permission to administer the cloud, create catalog content, and create and deploy patterns
Naturally then, it follows that the WCAGuy user can administer the cloud, create catalog content, create patterns, and deploy patterns. So then, what happens if we remove the WCAGuy user from the CloudAdmins user group? Well, as you may expect, there is an update to the user's permissions. The WCAGuy user can no longer administer the cloud, but they can still create catalog content, create patterns, and deploy patterns (owing to their membership in the other two groups). Similarly, if we next removed the WCAGuy user from the ContentCreators group, then the user would retain only the permission to create and deploy patterns.
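To make the union rule concrete, here is a small toy model in Java. It is only an illustration of the "effective permissions are the sum of group permissions" idea, not WebSphere CloudBurst code; the enum values and group names simply mirror the example above, and the model only covers group membership (individually assigned user permissions, discussed next, are a separate matter).

    import java.util.Arrays;
    import java.util.EnumSet;
    import java.util.List;
    import java.util.Set;

    // Toy model only: illustrates union-of-group-permissions, not appliance code.
    public class EffectivePermissions {

        enum Permission { CREATE_PATTERNS, DEPLOY_PATTERNS, CREATE_CATALOG_CONTENT, ADMINISTER_CLOUD }

        // A user's effective permissions are the union of the permissions
        // granted by every group the user belongs to.
        static Set<Permission> effectivePermissions(List<Set<Permission>> groups) {
            Set<Permission> effective = EnumSet.noneOf(Permission.class);
            for (Set<Permission> group : groups) {
                effective.addAll(group);
            }
            return effective;
        }

        public static void main(String[] args) {
            Set<Permission> patternAuthors = EnumSet.of(
                    Permission.CREATE_PATTERNS, Permission.DEPLOY_PATTERNS);
            Set<Permission> contentCreators = EnumSet.of(
                    Permission.CREATE_CATALOG_CONTENT, Permission.CREATE_PATTERNS, Permission.DEPLOY_PATTERNS);
            Set<Permission> cloudAdmins = EnumSet.of(
                    Permission.ADMINISTER_CLOUD, Permission.CREATE_CATALOG_CONTENT,
                    Permission.CREATE_PATTERNS, Permission.DEPLOY_PATTERNS);

            // WCAGuy belongs to all three groups, so he holds every permission.
            System.out.println(effectivePermissions(Arrays.asList(patternAuthors, contentCreators, cloudAdmins)));

            // Remove WCAGuy from CloudAdmins: he loses cloud administration but keeps the rest.
            System.out.println(effectivePermissions(Arrays.asList(patternAuthors, contentCreators)));
        }
    }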
Just one more thing: let's talk about what happens when I remove a user from a group and they no longer belong to any groups. Consider that I created the WCAGuy user with the permission to create catalog content as well as create and deploy patterns. Next, I added the user to the CloudAdmins group, meaning the user now has the permission to administer the cloud. I promptly decide that the user has no business with those permissions, so I remove the user from the CloudAdmins group. What happens? The user retains the permission set of the last group to which they belonged. In this case, that means the WCAGuy user retains cloud administration rights. I have to update the user's permission set if I want to take that right away; it will not automatically disappear upon removing the user from the CloudAdmins group.
I hope this helps clear up any ambiguity you may have had concerning users, user groups, and permission sets in WebSphere CloudBurst.
As a final preview of this week's building block sessions in the Enabling cloud computing with WebSphere campaign, I caught up with WebSphere DataPower architect Tim Smith. Tim is delivering a podcast that introduces and explains the new Application Optimization capabilities in the WebSphere DataPower line of products. Here is what Tim had to say:
Me: I speak with quite a few customers about the WebSphere CloudBurst Appliance, and for once I'm happy to be the one asking this question. Why do we deliver WebSphere DataPower in the appliance form factor?
Tim: DataPower has become a dominant player in the DMZ and in the ESB. Much of the reason is that this is a purpose-built hardware appliance. There are many things that our customers like about this appliance package. First, it has security as part of its DNA. The basis for securing connections applies throughout the network, whether in a DMZ or in an ESB. The physical box provides tamper-resistant protection. Another reason is availability: there is no spinning media, there are dual power supplies, and there is a focus on failover support.
In both the DMZ and the ESB, there has been a proliferation of products. The main reason for the proliferation is that customers want to remove as many decisions from the general-purpose server as possible, and let servers do what they do best: process application requests. The devices that have been proliferating make more decisions on the request. They do deep packet processing and routing. They also may transform the request into an entirely different request. So, there is an abundance of "pre-processing" decisions and operations made. With DataPower, many functions are integrated into the single hardware platform, giving you a smaller box count. There is no need to purchase and maintain several platforms, their OS and software versions, compatibility lists, and so on. With a single hardware box that does so many things, we can greatly reduce the total cost of ownership for our users.
The DataPower appliance is a blend of hardware and firmware, well provisioned with hardware assists that help with compiling, parsing, and many of the other intensive packet processing operations. To summarize, you get an extremely flexible and adaptable product that reduces total cost while increasing performance.
Me: A theme that comes up in cloud computing over and over is consolidation. Can you speak to the consolidation offered by WebSphere DataPower appliances with respect to the self-balancing capabilities?
Tim: Yes. My answer to the prior question was a long-winded way of describing DataPower's ability to consolidate many features into a single platform. Self-balancing is an example. As DataPower became more popular, larger installations required multiple DataPower appliances in a tier of platforms. A common architecture was to place a load balancer or IP sprayer in front of the tier to distribute the traffic evenly among the tier of DataPower appliances. An IP sprayer is an example of another platform that needs to be added to the environment. It is another box that must be purchased, managed, and maintained. Self-balancing is a feature that was added to DataPower to eliminate the need for an IP sprayer. The way it works is that one of the DataPower appliances in the tier owns the Virtual IP (VIP) Address. It receives all of the traffic, and then distributes it to each of the other DataPower appliances in the tier. If the DataPower appliance that owns the VIP address goes down, one of the others is elected and it takes over. The result is one less product required to support the same level of functionality.
Me: For much of the past, cloud computing mostly focused on virtualization and management of resources at the raw compute level (servers, storage, networking, etc.). While there is definitely ongoing focus here, we start to see it moving up the stack towards applications, and part of that effort includes more evolved application load distribution. With that in mind, how can WebSphere DataPower help users more effectively distribute requests to their applications?
Tim: If a front end appliance or gateway device can dynamically learn information about its environment, specifically the back end, it will be able to make better decisions on how and where to route the request. This is one of the tasks that the Application Optimization feature addresses. Information from the back end can of course be manually configured, but the real value in cloud computing is dynamically adapting when new server resources are brought on line or are taken off line. In the 3.8.0 release, we implemented something called Intelligent Load Distribution (ILD). Intelligent load distribution focuses on continually learning the topology of a back end, updating DataPower's load balancers with that information, and distributing the load based on the updates. In addition to the topology, ILD learns the weights associated with each server. These weights can continually and automatically change as traffic patterns change. The result is load balancing to the back end that sends the optimal amount of load to each server.
Another traffic distribution aspect incorporated into ILD is session affinity. When a server application needs to receive every request from a given client, session affinity is used to route the requests to the same server. In some sense, session affinity overrides the load balancing algorithm. The session affinity support works with any type of back end server, but with a WebSphere back end, all session affinity information is automatically configured.
Me: Continuing on the theme of application intelligence, what is this new Application Routing option in WebSphere DataPower?
Tim: ILD focused on learning the topology of the network and making better decisions based on an ever changing cloud topology. Application Routing does something similar by learning which applications are running on each server. Once a request is handed to DataPower's load balancer, the request is classified as to the application that it is targeted for. Then the request is load balanced amongst the servers that are running that application. The information to perform application routing is dynamically learned and changes as applications are added or removed.
WebSphere has invested substantially in managing the life cycle of an application. Changing from one edition of an application to the next sounds like an easy task, but it can be very difficult to perform this type of maintenance on a production environment. The DataPower appliance supports life cycle management by working with the WebSphere back end to provide group and atomic edition rollout. The rollout feature allows traffic to be gracefully diverted from servers that are being taken offline and reloaded with the new application edition. This rollout can be done while leaving the other applications on the server unaffected. This support makes edition rollout a very simple task for the system administrator.
Next up on our sneak preview of the building block sessions for the Enabling cloud computing with WebSphere campaign is the Dynamic Infrastructure Services block. One portion of that block is a discussion about some of the technical capabilities of WebSphere Virtual Enterprise given by Nitin Gaur. Nitin is a Consulting IT Specialist within WebSphere, and an all-around WebSphere guru. I caught up with him to ask a few questions about his upcoming podcast.
Me: When people think cloud computing, one of the core concepts is 'on demand'. They want just enough resource at just the right time. In that sense, can you tell me a little about the On-Demand Router (ODR) in WebSphere Virtual Enterprise (WVE)? What is it and what core functions does it provide?
Nitin: So, first allow me to take a step back. In my view, cloud computing is a new consumption and delivery model, nudged along by consumer demand and the continual growth of internet services. I classify any cloud computing platform as one that exhibits the following six key characteristics:
Standards based delivery
Usage based equitable chargeback
I thus deliberately use the term platform in the context of a cloud computing environment that facilitates flexibility, robustness, and agility: a systemic approach to providing a stage for hosting applications without concern for the availability or provisioning of the underlying resources. Since hardware and software virtualization offer significant cost and resource management advantages, it is not rare to see virtualized platforms as core building blocks of any cloud platform. Such virtualization technologies provide an elastic infrastructure service. In this respect, WVE provides application server virtualization, which enables an elastic, business-policy-driven application infrastructure.
Now back to the On-Demand Router. The ODR is the autonomic engine that drives the activity enabling the elastic infrastructure discussed above. The ODR operates in a highly dynamic WVE environment, so it is imperative for the ODR to be aware of any changes in the environment such as newly deployed applications, the addition of new application servers, and any planned or unplanned server outages. It achieves this awareness by continuously interacting with WVE's fluid and dynamic feedback mechanism.
Me: Autonomic capabilities seem to be a core part of WebSphere Virtual Enterprise. To that end, can you tell us a little about the autonomic capabilities provided by dynamic clusters in WVE?
Nitin: Dynamic application placement is a defining capability of WVE that directly contributes to WVE's ability to provide a dynamic, virtualized, and goal-oriented environment for workload management and continuous availability. The dynamic application management capability maximizes the efficient use of hardware resources by allocating resources appropriately per application based on fluctuating demands in the enterprise infrastructure. It determines which servers to stop and start in a dynamic server cluster in order to meet current demand for applications, and it does this in the context of a set of administrator-defined policies that uphold the enterprise’s service level agreements (SLAs) for its application infrastructure. The dynamic application placement framework must balance resource availability against health policies, service policies, and the importance levels assigned to applications.
Dynamic server clusters are key to WVE’s ability to dynamically adjust the application environment according to server load, and they provide the basis for a virtualized server runtime environment. The big difference between a dynamic cluster in WVE and a static cluster in WebSphere Application Server is that dynamic clusters grow and shrink as needed to meet current demand by starting and stopping members of the cluster. Although dynamic clusters and static clusters can co-exist in a cell, dynamic application placement can only work with dynamic clusters. To prevent unchecked growth, each dynamic cluster has a mechanism that you use to define a boundary for that cluster’s growth. The boundary is both quantitative (based on criteria that define the minimum and maximum number of application servers that can run in the cluster simultaneously) and locational (based on criteria that confine the growth of the dynamic cluster to a defined set of nodes).
Me: I know you have been around the country, and for that matter globe, helping our users to adopt and implement WebSphere Virtual Enterprise. Tell us about one of your favorite customer stories.
Nitin: So I would cite an example of one of the leaders in the entertainment industry (and my favorite customer); let's call them Company X (since I cannot cite the name). The core of the company's application infrastructure is the Sales App Infrastructure (SAI), consisting of more than 10 enterprise applications. To keep up with demand, Company X was required to procure more hardware and software to support the core systems. This strategy resulted in a large infrastructure footprint with low hardware utilization. The increase in hardware footprint became difficult to manage and required additional resources. The large footprint of the company's deployment put them in reaction mode rather than a posture of proactive monitoring. Some application servers rendered themselves unavailable and required the team to restart them every 24 hours. From a cost standpoint, it cost the company the same amount of money to request a virtual platform as it would to purchase a new physical server. This led to significantly underutilized hardware throughout the enterprise. WVE was brought in to Company X to help better manage their WebSphere Application Server footprint. Dynamic clusters, application health policies, and application editioning features helped the company to better utilize hardware, reduce hardware expenditures, increase visibility into their applications, and improve availability of their applications.
In addition to helping with the existing environment, WVE helped Company X to roll out a new project with applications that required continuous availability to worldwide users. The team made use of policy-based workload management to ensure performance and availability levels of these new applications met their business needs. In addition, the company was able to reduce the amount of WebSphere Application Server licenses and physical servers required for this new deployment. In sum, WebSphere Virtual Enterprise saves the company significant time, money, and management effort.
Yesterday, we kicked off a WebSphere in the Clouds campaign designed to connect you with IBMers that can help you to leverage WebSphere solutions to build clouds. The campaign consists of webcasts, podcasts, live Q&A sessions, and online JAMs. You can listen to replays and sign up for upcoming events by visiting the Global WebSphere Community website.
Next week, the campaign delivers a series of podcasts that discuss the WebSphere technologies that form the building blocks of clouds. These podcasts will discuss both the business and technical aspects of these solutions, and they will cover topics like application infrastructure in the cloud, policy-based workload management using application virtualization, hybrid cloud integration, and more. Over the past few days, I had the opportunity to catch up with the various presenters of these podcasts to ask them a few questions about their solutions. These interviews provide a nice sneak peek at what is coming in the podcasts, and I will be posting them here in the coming days.
To kick things off, I'm posting a video interview with Marc Haberkorn. Marc is the WebSphere Product Manager for WebSphere CloudBurst, WebSphere Application Server Hypervisor Edition, and WebSphere Virtual Enterprise. My colleague, Ryan Boyles, caught up with Marc and got his thoughts on how these solutions enable virtualization and automation for your cloud environments. Enjoy!
When it comes to building and using WebSphere CloudBurst patterns, people always ask me if I have any best practices. It turns out, I do. In fact, I have a singular piece of advice that wraps it all up: build WebSphere CloudBurst patterns such that, once deployed, there is no after-the-fact, manual configuration for the running environment. That means building the pattern so that it not only contains all the nodes necessary for your application environment, but also all the configuration necessary for the environment.
Put like this, most everyone I talk to agrees with me. However, they quickly recognize that, absent this really cool integration with Rational Automation Framework for WebSphere, this means they will be writing scripts for many configuration actions and including them in patterns in the form of script packages. For users not familiar with configuration scripting for our WebSphere products, this can be a daunting proposition. But... it shouldn't be!
Recently, I put together a short presentation that lays out an iterative approach for developing script packages for WebSphere CloudBurst. Specifically, the presentation focuses on developing configuration script packages for the WebSphere Application Server (though the general concepts apply to all Hypervisor Edition products equally). I believe this method is useful for anyone, from novice users to WebSphere scripting gurus. The basic process goes something like this:
Identify: Identify the target WebSphere Application Server topology and configuration for your application environment.
Deploy: Build a WebSphere CloudBurst pattern that matches your desired topology and deploy it to your cloud.
Develop and Test: Develop and test your configuration script. Not a WebSphere Application Server scripting ninja? No worries. Use the Command Assistance feature in the WebSphere Application Server v7 administration console. This feature shows you the wsadmin commands that match the actions you take manually in the console, which affords a lower barrier to entry for those not familiar with wsadmin.
Package: Package up the resulting scripts into a script package along with metadata that describes the package.
Modify and redeploy: Load the new script package into your appliance, add it to your pattern, and then redeploy. Upon deployment completion, verify the scripts produce the desired result.
The presentation provides detail on the above steps and walks through an example scenario for this process. I am embedding it below, and I hope it proves useful. As always, feel free to send in any questions or comments.
A while back I had a four part FAQ series inspired by questions arising from customer visits discussing the first release of WebSphere CloudBurst. With the recent release of WebSphere CloudBurst 2.0, I think it is a good time to revisit an FAQ series with an entirely new set of questions.
For the first part of the series, I want to address a question we get all the time now: "What is the difference between WebSphere CloudBurst and WebSphere Virtual Enterprise?" This question was always fairly common, but now even more so because the new Intelligent Management Pack option for WebSphere Application Server Hypervisor Edition allows you to deploy WebSphere Virtual Enterprise cells using WebSphere CloudBurst.
Fundamentally, the relationship between the WebSphere CloudBurst Appliance and WebSphere Virtual Enterprise is a complementary one. WebSphere CloudBurst provides a means to create your application environments, deploy them into a shared cloud environment, and then manage them over time. In this respect, the appliance focuses on bringing cloud-like capabilities to the application infrastructure layer of your application environments. WebSphere CloudBurst does give you management capabilities for your running, virtualized application environments (i.e. applying maintenance and fixes), but for the most part those capabilities do not extend into the application runtime environment.
Now, you may ask why WebSphere CloudBurst does not extend its reach into the application runtime. The answer is simple: We already have a solution that does just that, WebSphere Virtual Enterprise. WebSphere Virtual Enterprise provides capabilities that allow you to dynamically and autonomically manage your application runtime. You can use WebSphere Virtual Enterprise to not only assign performance goals to your applications, but also to declare the importance of a given application meeting its goals relative to other applications in your organization. This enables the dynamic management of your applications and their resources such that your applications perform according to their goals and relative importance to your business. Simply put, you get an elastic runtime at the application layer of your application environments.
As I said, WebSphere CloudBurst and WebSphere Virtual Enterprise are complementary solutions. Both enable notions of cloud computing, but at different layers of your application environments. WebSphere CloudBurst homes in on the application infrastructure components, while WebSphere Virtual Enterprise zeros in on the applications running in those environments. The new Intelligent Management Pack for WebSphere Application Server Hypervisor Edition means that WebSphere CloudBurst can now dispense WebSphere Virtual Enterprise environments into your on-premise cloud. That means you can take advantage of these complementary solutions from a single and simple management plane.
I hope this helps to clear things up. As always, questions and comments are welcome!
May is almost here, and that means that IBM IMPACT is right around the corner. Just like in years past, IMPACT 2010 will be a great chance to get valuable education and insight into IBM WebSphere software and software from across the IBM software family. If you want to hear how IBM software is leading the march toward a smarter planet, register now.
IMPACT 2010 will be a great chance to hear the WebSphere cloud computing story. There will be multiple sessions on the WebSphere CloudBurst Appliance. These include customer-led sessions, internal adoption stories, overviews, and much more. I'll be there running a hands-on lab and delivering a session that discusses integration between WebSphere CloudBurst and IBM Rational tools. Of course, there is more to WebSphere and cloud computing than WebSphere CloudBurst. We have several other sessions that will detail all of IBM WebSphere's work in the cloud.
If you are interested, I put together a short video discussing some of the sessions on tap for WebSphere and cloud computing at IMPACT 2010. I'd also encourage you to check out the social media site for IBM IMPACT 2010. On that site, you will find tweets, videos, and blogs about the conference. Don't forget to sign up, and I hope to see you in Las Vegas!
-- Dustin Amrhein
The reason I suggest the application proxy approach is twofold. First, it affords you the ability to have custom interactions with the REST API. For instance, you may insert logic into the server-side proxy code that returns only a subset of the JSON data contained in the response from the appliance. Alternatively, in an effort to reduce chattiness on the client side, you may join JSON data from multiple different REST requests to the appliance to fulfill a single client request. You may even decide to represent the data in an altogether different format than JSON. All of these options and many more are available to you if you implement an application-based proxy to the REST API.
The second reason I suggest the application approach is that it is easier, and seemingly safer, not to deal with user passwords on the client side. If you set up your application proxy, you can configure it to retrieve the appropriate password from a secure location (like an encoded file) based on information passed along in the request. This means the password information is only present in the request (in encoded form, of course) from the application proxy to the WebSphere CloudBurst Appliance.
The good news about the application-based proxy approach is that it is simple to put in place. I composed one using the open source Apache Wink project. The Apache Wink project is an open source implementation of the JAX-RS specification (and then some), and it enables you to develop POJOs that are in turn exposed in a RESTful manner. In my case, I had a single resource POJO:
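A simplified sketch of that POJO looks something like the following. The WCAResource class and its getResources method are as described; the JAX-RS path expression and query parameter names shown here are illustrative choices rather than the exact ones in my code.

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.QueryParam;
    import javax.ws.rs.core.MediaType;

    // Sketch of the resource POJO; annotations and parameter names are illustrative.
    @Path("/resources")
    public class WCAResource {

        // Handles GET requests whose path falls under /resources/* and
        // forwards them to the WebSphere CloudBurst Appliance.
        @GET
        @Path("{subpath:.*}")
        @Produces(MediaType.APPLICATION_JSON)
        public String getResources(@QueryParam("hostname") String cloudburstHost,
                                   @QueryParam("username") String username,
                                   @PathParam("subpath") String path) {
            // Delegate to the private getResource helper, shown a bit further down.
            return getResource(cloudburstHost, username, path);
        }

        // ... the getResource helper is declared below ...
    }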
The Apache Wink runtime routes any HTTP GET request whose path is like /resources/* to the getResources method in the WCAResource class. This method takes information from the query string (the host name of the target WebSphere CloudBurst Appliance and the requesting WebSphere CloudBurst username), along with the HTTP path information, and sends it on to the getResource method, declared as follows:
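Again, treat this as a sketch rather than the exact code: the URL layout is illustrative, and lookupPassword is a hypothetical helper standing in for whatever mechanism you use to retrieve the encoded password from a secure location.

    // Lives inside WCAResource. Requires org.apache.wink.client.RestClient and
    // org.apache.wink.client.Resource, plus javax.xml.bind.DatatypeConverter.
    private String getResource(String cloudburstHost, String username, String path) {
        // Construct the URL of the corresponding WebSphere CloudBurst REST API call
        // (the exact path layout here is illustrative).
        String url = "https://" + cloudburstHost + "/resources/" + path;

        // Retrieve the user's encoded password from a secure store keyed by username.
        // lookupPassword is a hypothetical helper standing in for that logic.
        String password = lookupPassword(username);
        String credentials = DatatypeConverter.printBase64Binary(
                (username + ":" + password).getBytes());

        // Construct an Apache Wink Resource and send the request to the appliance,
        // supplying a basic authorization header over SSL.
        Resource resource = new RestClient().resource(url);
        return resource.header("Authorization", "Basic " + credentials)
                       .accept("application/json")
                       .get(String.class);
    }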
The getResource method above uses the WebSphere CloudBurst host name and the request path to construct the URL for the corresponding WebSphere CloudBurst REST API call. Next, it constructs an Apache Wink Resource object and sends the REST request along to the WebSphere CloudBurst Appliance. How do we authenticate this request? We use the WebSphere CloudBurst username (sent as a query string parameter) to retrieve the appropriate encoded password information. Once we have that, we construct the necessary header for basic authorization over SSL.
The application-based proxy shown here is simply a pass-through. It does not manipulate the data returned from the WebSphere CloudBurst REST API, nor does it map a single client-side call to multiple REST requests. However, it would be simple enough to extend it to do any of those things. If you have any questions about the code here, please let me know. I'd be happy to share more of the code, or talk about how and where to extend it.
The WebSphere Application Server Hypervisor Edition virtual image is made up of four different virtual disks. One of those disks contains pre-created and pre-configured WebSphere Application Server profiles. When the image is activated (either through WebSphere CloudBurst or in a standalone fashion), all of the profiles not being used are deleted, leaving only the intended WebSphere profile type.
Since the profiles are pre-created, this implies that certain information must be updated after the image is activated to reflect things that change with each node that is created. Among other things, the cell name, node name, and host name of the WAS profile configuration are usually updated during the image activation process. Nearly every time I talk to WAS administrators about WebSphere CloudBurst and WebSphere Application Server Hypervisor Edition they are intrigued by this particular configuration update and almost always ask "How do you do it?" (Dustin's note: Since the command to rename the cell is not officially documented, I have removed it from this post. I'm sorry, but it is for your own good!)
Most of the time this question pops up because users are attempting to, with a narrower focus than WAS Hypervisor Edition, freeze-dry certain WAS configurations in their organization. However, no matter how they do that (virtual images, zipped-up configuration files, etc.), they too need to update things like the cell, node, and host names when attempting to reuse the configuration. Many have gone down the route of trying to identify all of the different XML files they need to change in order to update this information, but this is untenable and in fact unnecessary.
If you need to update the node or host name, forget manually updating XML files. Instead, use these three wsadmin commands:
The commands can be run from a standalone node or from a deployment manager node. They are pretty straightforward, and if you need more information about them, just take a look in the WebSphere Application Server Information Center. I hope this is helpful information, and stay away from those XML files!
One of the most powerful features of WebSphere CloudBurst is the ability to take one of the WebSphere Application Server Hypervisor Edition virtual images that ship with the appliance and extend it to produce a custom virtual image. This allows users to begin creating customized environments from the bottom up, starting with the operating system. To put it in better context, let's take a look at a couple of scenarios where this feature comes in quite handy.
First off, a very common need for our customers is the ability to continually monitor their application environments. For instance, you may install Tivoli monitoring agents on all of your machines hosting WebSphere Application Server processes and configure those agents to report back to a management server. This is a great case for image extension in WebSphere CloudBurst.
In this scenario, you would start by extending an existing WebSphere Application Server Hypervisor Edition image. WebSphere CloudBurst creates a running virtual machine based off of the selected image, and you log into that virtual machine and install the Tivoli monitoring agents. Once the installation is done, you capture the virtual image back into the WebSphere CloudBurst catalog and use the new image to build a custom pattern. The last step is to include a script package on this custom pattern that, upon deployment, will configure the installed monitoring agents to report back to your desired management server.
Another use case is likely to be of interest to you if you are using WebSphere Virtual Enterprise (or something similar), and you could benefit from the same ease of provisioning for those environments that WebSphere CloudBurst brings to WebSphere Application Server environments. You can use the same customization combination above (image extension and custom scripts) to enable WebSphere CloudBurst to essentially dispense WebSphere Virtual Enterprise cells.
Again, this scenario starts off by extending a WebSphere Application Server Hypervisor Edition virtual image. Once the virtual machine for the extension is created by WebSphere CloudBurst, you log in and install the WebSphere Virtual Enterprise product. After the installation is done, you capture the image and store it in the catalog. Next, you build a custom pattern based off of this image and include script packages that, upon deployment, augment the various parts in the pattern from WebSphere Application Server profiles to WebSphere Virtual Enterprise profiles. (You may wonder why you wouldn't just create the WebSphere Virtual Enterprise profiles during the image extension process. This is because during image extension, you cannot make changes to the virtual disk that contains the WebSphere Application Server profiles. Any changes made to the profiles will be wiped out during the capture process.)
There are countless more scenarios for creating custom virtual images in WebSphere CloudBurst. To name a few, you may want to install JDBC drivers that are common to almost all of your application environments, install required anti-virus software, or just make operating system configuration changes. All of these things can be accomplished through the image extension and capture process. Look for an article coming out soon that will discuss and explain, in much greater detail than I provided here, the process of installing and configuring Tivoli monitoring agents in environments dispensed by WebSphere CloudBurst. In the meantime, if you have any questions or comments, drop us a line here or check out our forum.
Over the past several months, industry focus on cloud computing seems to have only intensified. Within IBM, and for the purposes of this blog within WebSphere, there have been several announcements and offerings that indicate our commitment to and belief in the cloud computing approach.
To further highlight WebSphere's focus and offerings in the cloud computing realm, we are embarking on a "WebSphere in the Clouds" campaign during the months of September and October. Our intent is to virtually deliver information about our cloud strategy and offerings directly from the experts to you, our WebSphere users.
The event will be kicked off by WebSphere's Director of Product Management, Kareem Yusuf, on September 23rd from 9-10 EDT. Kareem will talk about cloud computing in the enterprise, and its unique relationship to SOA thoughts and principles. In addition, he'll give an overview of what WebSphere has been doing in the cloud computing space. This will be followed by sessions from technical experts that detail WebSphere offerings in both the public and private clouds, as well as sessions that discuss enablers of application and application infrastructure elasticity.
To find out more about the "WebSphere in the Clouds" campaign, you can check out the main announcement page. To sign up for the series of virtual events visit the registration page. We hope you will join us for the series of webcasts to learn all about WebSphere's work in the clouds.
To continue with the series of blog posts regarding WebSphere CloudBurst FAQs, I want to take a look at one aspect of the deployment process.
When you leverage WebSphere CloudBurst to push patterns (complete WebSphere Application Server configurations) into a private cloud, the appliance provides an advanced placement algorithm to determine exactly where the resulting WebSphere virtual systems will reside. It attempts to match the needs of the pattern to the correct set of hypervisors that have been defined. WebSphere CloudBurst considers things like storage, CPU, memory, and high availability characteristics when placing the pattern, and this is all done by the appliance without you having to intervene at all.
This is certainly nice in that it absolves you from having to make such placement decisions. Having said this though, you may be thinking of a question that comes up quite often:
If WebSphere CloudBurst controls the placement of the pattern, how can I make sure that certain deployments end up on certain servers (hypervisors)?
Considering what I just told you above, it may not seem possible to control which machines end up hosting your virtual systems, since the appliance takes care of that placement for you. However, the organized use of WebSphere CloudBurst cloud groups allows you to take advantage of the intelligent placement provided by the appliance while retaining a level of control over which machines end up hosting particular deployments.
In WebSphere CloudBurst, all patterns are deployed to cloud groups. A cloud group is a collection of hypervisors that have been defined within the appliance. The basic deployment mapping is depicted in the image below:
As seen above, you can create a cloud group for any purpose (dev, test, QA, production, etc.), including any hypervisors that you desire as long as a given hypervisor only belongs to a single cloud group. When you are ready to deploy a pattern, you simply select the cloud group you want to deploy to:
By selecting a cloud group for deployment, you are implicitly selecting the physical machines that will host your deployment. The cloud group could consist of anywhere from one to N hypervisors, so you are afforded the ability to restrict the location of your virtual systems as necessary.
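If it helps to see the idea in code, here is a small toy model of that relationship. It is purely illustrative and is not the appliance's placement algorithm; it simply shows that once you pick a cloud group, placement is confined to that group's hypervisors (here, the toy picks whichever member has the most free memory).

    import java.util.Comparator;
    import java.util.List;

    // Toy model only: not the WebSphere CloudBurst placement algorithm.
    public class CloudGroupPlacement {

        record Hypervisor(String name, int freeMemoryMb) {}
        record CloudGroup(String name, List<Hypervisor> hypervisors) {}

        // Placement is confined to the hypervisors in the selected cloud group;
        // this toy version simply picks the one with the most free memory.
        static Hypervisor place(CloudGroup group) {
            return group.hypervisors().stream()
                    .max(Comparator.comparingInt(Hypervisor::freeMemoryMb))
                    .orElseThrow(() -> new IllegalStateException("empty cloud group"));
        }

        public static void main(String[] args) {
            CloudGroup test = new CloudGroup("Test", List.of(
                    new Hypervisor("hypervisor-a", 8192),
                    new Hypervisor("hypervisor-b", 16384)));

            // Deploying to the "Test" cloud group can only land on one of its members.
            System.out.println("Placed on: " + place(test).name());
        }
    }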
I hope this helped explain a little bit about cloud groups in WebSphere CloudBurst. If you're looking for more information about WebSphere CloudBurst cloud groups, I'd also suggest you watch this video on our YouTube channel.