When we talk about the WebSphere Application Server Hypervisor Edition, we often get questions about whether SUSE Linux is the only flavor of the Linux operating system that we support. The short answer to that question is no.
While it is true that we only deliver the WebSphere Application Server Hypervisor Edition with a SUSE Linux operating system, we will support the use of the virtual image packaging with Red Hat Enterprise Linux as the base operating system. The basic process consists of creating a virtual machine disk based on a suitable Red Hat install, altering the OVF file in WebSphere Application Server Hypervisor Edition to reference this virtual disk instead of the SUSE virtual disk, and then packaging a new OVA file that contains all the same WebSphere virtual disks (profiles, binaries, IBM HTTP Server) but swaps out the SUSE virtual disk for the Red Hat virtual disk. We have done this many times in both the lab and the field, and we offer services to users who need help creating the image.
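For the curious, the final packaging step is mechanically simple because an OVA file is just a tar archive whose first member is the OVF descriptor. Here is a minimal sketch of that step, with entirely hypothetical file names:

```python
import tarfile

# Hypothetical file names throughout. An OVA is a tar archive whose
# first member must be the OVF descriptor, so the descriptor is added
# before any of the disks.
with tarfile.open('was-he-rhel.ova', 'w') as ova:
    ova.add('descriptor.ovf')    # edited to reference the Red Hat disk
    ova.add('rhel-base.vmdk')    # the Red Hat disk that replaces the SUSE disk
    for disk in ('was-binaries.vmdk', 'was-profiles.vmdk', 'ihs.vmdk'):
        ova.add(disk)            # the unchanged WebSphere virtual disks
```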
Customers often ask if there is any difference in using Red Hat versus SUSE Linux. The answer is, of course, yes and no. The answer is yes in that users must bring their own licenses of Red Hat (SUSE Linux licenses are included in the WebSphere Application Server Hypervisor Edition), and users must support and maintain the Red Hat operating system on their own. However, once the image is built, there is absolutely no difference in the use of that image within WebSphere CloudBurst.
Once built, users upload the image into their WebSphere CloudBurst catalog, and it is available for use in pattern building just like any other image. I mentioned that users are responsible for updating and maintaining the image; fortunately, they can use WebSphere CloudBurst to create those updated images. When patches or updates are ready for the Red Hat operating system, the Extend/Capture facility available for images in WebSphere CloudBurst can be used to create a new custom Red Hat image with your desired fixes. This is all done without ever having to worry about manually recreating and repackaging the image.
I know seeing is believing, so with respect to the "sameness" of using a Red Hat version of the WebSphere Application Server Hypervisor Edition within WebSphere CloudBurst, I've created a short demo you can watch here. As always, let us know what you think and send any questions our way.
I’m going to take a different approach this week in the blog. Instead of telling you about some of the features or uses of WebSphere CloudBurst, I thought I would catch up with someone using the product every day: WebSphere Test Architect Robbie Minshall. Robbie is responsible for a team of testers that harnesses a lab of over 2,000 physical machines to put our WebSphere Application Server product through some pretty rigorous testing. Toward the beginning of this year, Robbie’s team started to leverage the WebSphere CloudBurst Appliance to create the WebSphere Application Server environments needed for their testing.
Robbie, can you tell us a little bit about what the WebSphere Application Server test efforts entail?
In WebSphere Application Server development and test, we have two primary scenarios. The first is making sure that developers have rapid access to code, test cases, and server topologies so that they can write code and test cases and then execute test scenarios on meaningful topologies. The second is an automated daily regression where, in response to a build, we provision a massive number of WebSphere Application Server topologies and execute our automated regression tests.
Previously, we supported these scenarios through the deployment of Tivoli Provisioning Manager for operating system provisioning, some applications for checking out environments, and a lot of automation scripts for the silent installation and configuration of WebSphere Application Server cells.
Given those scenarios and the existing solution, what were your motivations for setting up a private cloud using the WebSphere CloudBurst Appliance?
We are supporting these scenarios through a pretty complicated combination of technologies, including silent WAS install scripts, wsadmin configuration scripts, a custom hardware leasing application, and Tivoli Provisioning Manager for OS provisioning. This solution has worked well for us, though as always we are looking for areas to improve, opportunities to simplify, and ways to reduce our investment in custom automation scripts. Mainly, there were three areas where we wanted to improve our framework: availability, utilization, and management. This is why we started looking at the WebSphere CloudBurst Appliance.
Can you expand a bit on what you are looking for in those three areas?
The first focus area is availability of environments. We really wanted to lower the skills and education required to get a development or test environment. Setting up these environments has just been too hard, too time consuming, and too error prone. Using WebSphere CloudBurst, we can provide an easy, push-button way for developers to get on-demand access to the topologies they need.
The second area where we are looking for significant improvements is hardware utilization. Our budgets are tight, and in our native automation pools we are only using between 6 and 12 percent of the available physical resources. To improve this, we looked at leveraging virtualization. WebSphere CloudBurst offers the classic benefits of virtualization with the nice additions of optimized WebSphere Application Server placement and really good topology and pattern management. In our initial experiments we were able to push hardware utilization up to 90% of physical capacity, and we consistently leveraged around 70%.
Finally, we are looking to improve and simplify our management of physical resources and automation. We work in a lot of small agile teams, and organizational priorities change from iteration to iteration. Not only does WebSphere CloudBurst allow us to maintain a catalog of topologies, or patterns, for releases, but it also allows us to adjust physical resource allocation to teams through the use of sub-clouds or cloud groups.
Basically we felt that WebSphere CloudBurst would improve the availability of application environments, enhance automation, and improve hardware utilization all with very low physical and administrative costs.
What were some of the challenges involved with getting a cloud up and running in your test department?
One of our challenges seems like it would be common to many scenarios, especially in today’s world: our budget for new hardware to build out our cloud infrastructure was initially very limited. Most cloud infrastructure designs depict ideal hardware scenarios, including SANs, large multicore machines, and private and public networks within a dedicated lab. Quite frankly, we did not have the budget to create this from scratch. It was important for us to demonstrate value and gather data to warrant future investment in dedicated infrastructure. After some performance comparisons, we were pleasantly surprised to find that we could leverage our existing mixed hardware within a distributed cloud. The performance of application environments dispensed by WebSphere CloudBurst onto many small existing boxes was very comparable to that of large multicore machines backed by a SAN. This allowed us to leverage existing hardware with minimal investment, all while demonstrating the value and efficiencies of cloud computing. That data in turn has allowed us to obtain new dedicated hardware and iteratively build up a larger lab specifically for use with WebSphere CloudBurst.
Specifically with WebSphere CloudBurst, are there any tips/hints you would offer users getting started with the appliance?
Sure. First, we quickly realized as we added hypervisors to our WebSphere CloudBurst setup that it was critical to have someone with network knowledge on hand. This is because the hypervisors came from various sections of our lab, and we really needed people who knew how the network operated in those different sections. Once we had the right people, we were able to set up WebSphere CloudBurst and deploy patterns within an hour and a half.
Moving forward, we continued to have challenges as we dynamically moved systems between our native hardware pool and our cloud. Occasionally the WebSphere CloudBurst administrator would move a system into the cloud but incorrectly configure the network or storage information. This led to misconfigured hypervisors polluting our cloud. We overcame this, quite simply and satisfactorily I may add, by creating some simple WebSphere CloudBurst CLI scripts that add the hypervisors, test each one individually by carrying out a small deployment to it, and then, after verifying success, move the correctly configured hypervisors into the cloud. Misconfigured hypervisors go into a pool for problem determination. This has allowed us to maintain a clean cloud, and we are able to dynamically move our hardware in and out of the cloud to meet our business objectives.
We also use the WebSphere CloudBurst CLI to prime the cloud, so to speak. Before using a given hypervisor in our cloud, we execute scripts that ensure each unique virtual image in our catalog has been deployed to each of our hypervisors at least once. When an image is first deployed to a hypervisor, a cache is created on the hypervisor side of the connection, so subsequent deployments do not require the entire image to be transferred over the wire. This gives us consistent and fast deployment times once a hypervisor is in our cloud.
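To make that workflow concrete, here is a rough sketch of what such scripts might look like. The WebSphere CloudBurst CLI is Jython-based, but the object model and helper calls below are purely illustrative assumptions, not the team's actual scripts:

```python
# Illustrative sketch only: the deploy/wait/delete calls and the pool
# objects below are assumptions, not the actual CloudBurst CLI API.

def verify_hypervisor(hypervisor, smoke_test_pattern):
    """Prove a hypervisor's network and storage configuration by
    carrying out a small deployment to it before trusting it."""
    deployment = smoke_test_pattern.deploy(target=hypervisor)  # hypothetical
    healthy = deployment.wait_for_success()                    # hypothetical
    deployment.delete()
    return healthy

def prime_hypervisor(hypervisor, catalog_images):
    """Deploy every unique catalog image once so the hypervisor-side
    cache is populated and later deployments skip the big transfer."""
    for image in catalog_images:
        deployment = image.deploy(target=hypervisor)           # hypothetical
        deployment.wait_for_success()
        deployment.delete()

def absorb_hypervisors(candidates, smoke_test_pattern, catalog_images,
                       cloud_group, quarantine_pool):
    """Move verified, primed hypervisors into the clean cloud; park
    misconfigured ones in a pool for problem determination."""
    for hv in candidates:
        if verify_hypervisor(hv, smoke_test_pattern):
            prime_hypervisor(hv, catalog_images)
            cloud_group.add(hv)
        else:
            quarantine_pool.add(hv)
```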
I would assume that like many applications deployed on WebSphere Application Server, your team’s applications have several external dependencies. Some of these dependencies won’t necessarily be in the cloud, so how did you handle this?
You’re right about the external dependencies. Our applications and test cases run on WebSphere Application Server but depend on many external resources such as databases, LDAP servers, external web services, and so on. WebSphere CloudBurst allows us to deploy WAS topologies in a very dynamic and configurable way, but the 1.0.1 version does not allow us to deploy these external resources in the same manner. We overcame this by using script packages in our patterns. These script packages allow us to associate our test applications with the various patterns we have defined. The script package definition also allows us to pass parameters to the execution of our scripts. We supply these parameter values at deploy time, and the values convey the names or locations of various external resources. The scripts that install our applications can access these values and ensure the application is properly integrated with the set of resources not managed by the appliance.
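For a sense of how this works mechanically: the key/value pairs defined on a script package are exported into the script's environment at execution time, so an install script can read the deploy-time values and pass them along. The parameter names, paths, and wsadmin hand-off below are hypothetical:

```python
#!/usr/bin/env python
# Hypothetical install script from a script package. DB_HOST, LDAP_URL,
# and APP_EAR are illustrative parameter names; WebSphere CloudBurst
# exports script package key/value pairs as environment variables
# before running the script.
import os
import subprocess

db_host = os.environ.get('DB_HOST', 'localhost')
ldap_url = os.environ.get('LDAP_URL', '')
app_ear = os.environ.get('APP_EAR', '/tmp/app/myapp.ear')

# Hand the deploy-time values to a wsadmin script (hypothetical path)
# that installs the application and wires it to the external resources.
subprocess.call([
    '/opt/IBM/WebSphere/AppServer/bin/wsadmin.sh',
    '-lang', 'jython',
    '-f', '/tmp/app/configure_app.py',
    db_host, ldap_url, app_ear,
])
```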
What is your team looking to do next with WebSphere CloudBurst and your private cloud?
The next challenge on our plate is to keep up with the demand of our expanding cloud and to develop a more dynamic relationship between our native pools and our cloud using Tivoli Provisioning Manager. These are fun challenges to have, and we look forward to sharing our progress.
I'm glad I got to spend some time with Robbie to glean some insight into their work and progress with WebSphere CloudBurst. I hope this information was useful to you. It's always nice to hear about a product from practitioners who can give you hints, tips, gotchas, and other useful information. Be sure to let me know if you have any questions about what Robbie and his team are doing with WebSphere CloudBurst.
The Impact conference disappeared into our collective rearview mirror about as fast as it arrived. It seems like a blur. In a word, the conference was... exhausting! In a few more words, it was informative, exciting, and illuminating. I hope that many of you had a chance to make it out there, and I hope more of you make it to Impact in 2013.
For those of you familiar with the conference, you know that it is typically a launching ground for new product versions and altogether new products. This year was certainly no different with the launch of the new version of WebSphere Application Server (8.5), the new and improved IBM Business Process Manager and IBM Operational Decision Manager, a new version of WebSphere eXtreme Scale (8.5), and numerous updates across the messaging and connectivity stack. While I encourage you to follow up on all of these important announcements, they are not what I am going to focus on today. Instead, I am going to focus on the new addition to the IBM family that got plenty of attention this year: IBM PureApplication System.
Joe recently touched on this new offering, so I won't get into an exhaustive overview. To put it briefly, IBM PureApplication System is an expert integrated system. What does that mean? First and foremost, it means that it is a system -- a whole. It is an integrated platform of hardware and software, optimized and tuned for running transactional web and database workloads. I do not mean that it is a system of software that we pre-install on off-the-shelf hardware. Rather, it is the result of hardware and software engineers across IBM working together to build a system that is expert at what it does. More than just the web application and database software though, IBM PureApplication System also contains pre-installed and pre-configured management software that delivers a soup-to-nuts (hardware to application) single pane of glass for managing the entire system. I could go on and on, but again, that's not my purpose here. I encourage you to check out the new IBM PureSystems web page for more information and some pretty cool videos.
Those of you who take a look at IBM PureApplication System will quickly find that the notion of pattern-based deployments (something I have talked about at length on this blog) plays a key role in the new system. In fact, the same virtual system and virtual application pattern constructs that you have come to know in IBM Workload Deployer are front and center in IBM PureApplication System as well. In the new system, you can build custom virtual system and virtual application patterns, deploy them to your cloud, and then manage them over time. If you are familiar with the IBM Workload Deployer user interface, you will likely find yourself immediately familiar with the interface of IBM PureApplication System. Given all of that, if you are like many of the users I talked to at Impact and since, you probably have some questions about IBM Workload Deployer and this new system. Most commonly, I get these two questions: "What does this mean for the IBM Workload Deployer product?" and "How do I know when to use IBM PureApplication System versus IBM Workload Deployer?" Let me do my best to address those questions.
In terms of the impact of the IBM PureApplication System on the IBM Workload Deployer offering, I can only view it in one way: affirmation. As I said above, IBM PureApplication System puts pattern-based deployment front and center, and further affirms that this kind of approach is crucial to the evolution of application delivery and management. Those of you familiar with IBM Workload Deployer or its predecessor WebSphere CloudBurst know that we have been talking about patterns for years. Rest assured we will continue to talk about patterns and solutions for building, deploying, and managing them. As it stands, we have at least three ways for you to build, deploy, and manage patterns: IBM SmartCloud Application Services, IBM Workload Deployer, and IBM PureApplication System. As you can see, options for consuming patterns have only increased since the initial launch of WebSphere CloudBurst. Furthermore, if you were at Impact, you know that we have a vibrant and vocal community of IBM Workload Deployer users, and I hope to see that community continue to grow! As I see it, the core technology of IBM Workload Deployer is becoming our 'operating system' for cloud platform management.
The question of when to use IBM Workload Deployer versus IBM PureApplication System is one whose answer is a bit more nuanced and not something one can or should try to definitively answer in a blog post. One thing I do suggest, though, is that when evaluating these two technologies, it is important to acknowledge that they have different business value propositions. Sure, they share common core technology in terms of building, deploying, and managing pattern-based environments, but beyond that they diverge a bit. Remember, IBM PureApplication System is, well, a system. It is the hardware, software, and management technology you need to run your middleware application workloads. It is pre-built and pre-integrated to the point that it only requires you to roll it into your datacenter, hook it up to your network, and do some one-time configuration. The aim is to go from receipt of the system to up and running with your first deployment in four hours, and if you were at Impact you saw an amusing video with the chief architect (Jason McGee) that proves this claim.
IBM Workload Deployer is fundamentally different in terms of how you consume it and how it integrates with your infrastructure. Most notably, IBM Workload Deployer does not include optimized hardware (servers, storage, networking) for running your workloads or a single point of management for everything from hardware to applications. To use IBM Workload Deployer you attach it to your network and point it at existing virtualized servers. Simply put, IBM Workload Deployer assumes you have existing, under-utilized hardware that you can get more out of with the intelligent deployment and management approach the appliance delivers. While you do not get the pre-integrated and optimized system of hardware plus software, you do get the flexibility to use your existing infrastructure.
As you can see, there are similarities (patterns) and differences (whole system vs. management system), and the result is a pretty different set of value propositions. The key in evaluating these technologies is that you do so with a crisp understanding of your current needs AND your future plans for growth and evolution. I know this kind of advice is a bit generalized, but I hope the differences I discussed above help you to at least understand the capabilities of the two different offerings. As always, if you have any comments or questions, please reply to the post!
When I talk with WebSphere CloudBurst users, the topic of custom virtual images comes up frequently. In some cases they simply want to customize a shipped IBM Hypervisor Edition, and in other cases they want to create a completely custom image. Creating a customized version of an IBM Hypervisor Edition is relatively easy since we give you extend and capture in WebSphere CloudBurst. Creating a completely custom image has historically been a bit tougher, mostly owing to the fact that there was no standard tool or process for image assembly. I am happy to say that today's publication of the IBM Image Construction and Composition Tool changes all that.
Watch a demo of the IBM Image Construction and Composition Tool
The primary purpose of the Image Construction and Composition Tool is to enable a modular approach to virtual image construction, while taking into account the typical division of responsibilities within an organization. The tool allows the right people within an organization to contribute their specialized knowledge as appropriate to the virtual image creation process. This means OS teams can handle the OS and software teams can handle the appropriate software. A separate image builder can then use both OS and software components to meet the needs of users within the organization. Best of all, the image builder does not need intimate knowledge of how to install or configure any of the components in the image. They simply need to know which OS and software components to use.
When using the Image Construction and Composition Tool, you start by defining the base operating system you wish to use for your images. You can do this by importing an existing virtual image with an OS already installed, providing an ISO for the OS, or pointing to a base OS image on the IBM Cloud. The bottom line is that you have the necessary flexibility to start with your certified or ‘golden’ operating system build. Once you have the base OS image defined in the Image Construction and Composition Tool, you can start defining custom software for use in the images you will compose.
In the tool, bundles represent the software you wish to install within a virtual image. The definition of a bundle contains two major parts: Installation and Configuration. The installation component of a bundle tells the Image Construction and Composition Tool how to install your software into the virtual image. You provide a script or set of scripts that install the necessary components into your image, and you direct the tool to call these scripts. These tasks run once during the initial creation of the virtual image, thus allowing you to capture large binaries, long-running installation tasks, or other necessary actions directly into your image.
The configuration section of a bundle defines actions that configure the software installed into the image. As with the installation tasks, you provide a script or set of scripts for configuration tasks. Unlike installation tasks, which run exactly once, configuration scripts become part of the image’s activation framework and, as such, run during each image deployment. Using the tool, you can define input parameters for configuration scripts and optionally expose them so that users can provide values at image deploy time. Configuration tasks are important in providing the flexibility that allows users to leverage a single virtual image for a number of different deployment scenarios.
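As a purely illustrative example, a bundle for a hypothetical monitoring agent might pair a run-once installation script with a run-on-every-deployment configuration script along these lines (how the tool delivers parameter values to a configuration script is assumed here to be an environment variable):

```python
#!/usr/bin/env python
# install_agent.py -- hypothetical bundle installation script. It runs
# exactly once, while the tool synchronizes the image, so the agent
# binaries end up baked into the captured virtual image.
import subprocess

subprocess.check_call(['rpm', '-i', '/tmp/bundle/monitor-agent.rpm'])
```

```python
#!/usr/bin/env python
# configure_agent.py -- hypothetical bundle configuration script. It is
# registered with the image's activation framework, so it runs on every
# deployment; MONITOR_SERVER stands in for a deploy-time parameter.
import os

server = os.environ.get('MONITOR_SERVER', 'monitor.example.com')
with open('/etc/monitor-agent.conf', 'w') as conf:
    conf.write('server=%s\n' % server)
```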
Once you have your base OS image and one or more bundles defined in the Image Construction and Composition Tool, you can compose a virtual image. To compose a virtual image, you extend the base OS image and add any number of bundles into the new image. A base OS image plus a set of bundles defines a unique image.
After you define the image you want to construct, you initiate a synchronize action in the Image Construction and Composition Tool. When you start the synchronize action, the tool first creates a virtual machine in either a VMware or IBM Cloud environment (based on how you configured the tool). Next, the installation tasks of each bundle you included in the virtual image run to install the required software. Finally, the tool copies the configuration scripts from each bundle into the virtual machine and adds them to the image’s activation framework. This ensures the automatic invocation of all configuration scripts during subsequent image deployments.
Once the image is in the synchronized state, you can capture it. Capturing the image results in the creation of a virtual image based on the state of the synchronized virtual machine. The tool also automates the generation of metadata that becomes part of the virtual image package. When the capture of the virtual image completes, you can export it from the Image Construction and Composition Tool and deploy it using WebSphere CloudBurst, Tivoli Provisioning Manager, or the IBM Cloud.
I am excited for users to get their hands on the Image Construction and Composition Tool. I believe it represents the first big step in helping users to design and construct more sustainable virtual images. Did I mention it is completely free to download and use? Visit the Image Construction and Composition Tool website for more details and a download link. I look forward to your comments and feedback.
One of the things I haven't written about much here is how the WebSphere CloudBurst Appliance integrates with other IBM software solutions. One of those interesting integration scenarios, and one I think is particularly useful for developers, involves Rational Build Forge.
Very simply put, Rational Build Forge is an adaptive execution framework that allows users to define completely automated workflows for just about any purpose. These workflows are represented as projects that contain a discrete number of steps. When looking at Rational Build Forge through the software assembly prism, the offering allows users to fully automate and govern the process of building, assembling, and delivering software into an application environment.
Now, on to the integration of WebSphere CloudBurst and Rational Build Forge. Users can build custom patterns in WebSphere CloudBurst that include a special script package (which I'll eventually provide a link to from here). This script package provides the glue between the deployment process in WebSphere CloudBurst and Rational Build Forge. When deploying a WebSphere CloudBurst pattern that contains this script package, users provide the name of a Rational Build Forge project as well as information about the Rational Build Forge server on which the project is defined.
Once the necessary information is supplied, the deployment process gets underway. Toward the end of the deployment, like all other scripts included in patterns, the special Rational Build Forge script is invoked. This results in the project specified during deployment being executed on the virtual machine created by WebSphere CloudBurst.
Because the Rational Build Forge project executes on a virtual machine set up by WebSphere CloudBurst, the individual steps of the project can very easily access the WebSphere Application Server environment. Thus, the Rational Build Forge project could very easily contain steps to build, package, and deploy an application into the WebSphere Application Server cell. The result is a fully automated process that includes everything from standing up the application environment to delivering applications into that environment.
I put together a short demonstration of this integration, and you can take a look at it here. As always, please let us know if you have any questions or comments. Your feedback is much appreciated!
When IBM Workload Deployer v3.0 rolled around, the appliance introduced the concept of shared services. These were services that a cloud administrator could launch into the cloud infrastructure defined to IBM Workload Deployer and use to serve a number of different application deployments. There were, and continue to be, two main shared services: a proxy service and a cache service. The shared proxy service does pretty much what you might guess: it provides request routing across multiple instances of multiple applications, thereby providing a centralized resource that encapsulates this basic need in an application environment. You can probably also guess what the caching service does. It caches things! Specifically, in IBM Workload Deployer v3.0 it provided in-memory caching of HTTP sessions, thus ensuring high availability of the data stored in those sessions.
Undoubtedly, the ability to make HTTP session data fault tolerant is critical in any application environment, cloud-based environments included. However, the applicability of a shared cache service reaches much further, and in IBM Workload Deployer v3.1 we are starting to open this service up to your applications. What does this mean to you? Quite simply, it now means that you can access this cache directly from your application code. If you are familiar with WebSphere eXtreme Scale or the DataPower XC10 Caching Appliance, then you know exactly what I mean. You can use the WebSphere eXtreme Scale ObjectGrid API to insert, read, update, and delete entries in the in-memory cache. The underlying cache technology is based on the same code that powers WebSphere eXtreme Scale and the DataPower XC10, so you can be sure that your cache is scalable, fault tolerant, responsive, and otherwise able to meet the needs of your application.
As I hope you find to be the case with many IBM Workload Deployer capabilities, this is a superbly simple capability to leverage. When you deploy virtual application patterns based on the IBM Workload Deployer Pattern for Web Applications, the capability is simply there. The underlying runtime that is serving your application is automatically augmented with the capabilities necessary so that your applications can connect to and utilize the deployed caching service. It is also worth pointing out that you can utilize the caching capabilities provided by this shared service for applications and application infrastructure that you deploy via virtual system patterns as well. You can either choose to augment the WebSphere Application Server environment with the XC10 Feature Pack (a deploy-time option for virtual system patterns built on WebSphere Application Server Hypervisor Edition v8), or you can configure WebSphere Application Server as you always would when integrating with a WebSphere eXtreme Scale environment or a DataPower XC10 Appliance.
What's the real benefit of all this, you ask? Well, when you use the shared caching service, you get the benefits of a distributed, in-memory, extremely scalable cache without having to deal with much setup or administration. You simply tell IBM Workload Deployer how many resources you want to dedicate to your cache and deploy the shared service. IBM Workload Deployer takes care of the details, including scaling the cache in and out to meet the needs of the system. On top of all of this, there is also an option to configure 'next to the cloud' caching. If you currently own DataPower XC10 appliances, you can make those available to virtual application pattern deployments (this was already possible with virtual system patterns) by simply providing details of the location of the appliance collective in question.
Put simply, setting up, administering, and utilizing an object caching service for your applications has never been easier. Check it out and let us know what you think!
When one uses IBM Workload Deployer (previously WebSphere CloudBurst) to deploy a virtual system pattern, they benefit from a completely automated deployment process. The automation includes the creation and placement of virtual machines, injection of IP addresses, initiation of internal processes, and invocation of included scripts. Most of these processes are straightforward and require little more than a brief overview. However, the placement of virtual machines stands out, and its inner workings are the subject of quite a few questions when I discuss the appliance. With that in mind, I thought I would provide a little more information on how the placement algorithm in IBM Workload Deployer works.
The placement subsystem in IBM Workload Deployer considers three primary elements: compute resources, high availability, and license optimization. Available compute resource is the gating factor for placement. That means IBM Workload Deployer first looks at the available CPU, memory, and storage in the collection of hypervisors making up the cloud group(s) you are targeting for deployment. If a particular hypervisor cannot provide enough resource for the amount you requested, it is automatically taken out of the eligible host pool. It is important to note that IBM Workload Deployer will overcommit CPU, and it will overcommit storage if you direct it to do so. It will not overcommit memory, because that could severely degrade the performance of the applications running in the virtual machines.
After choosing the pool of hypervisors that can host your deployment's virtual machines from a compute resource perspective, the appliance then considers high availability. To better understand this placement stage, let's consider an example. Suppose you are deploying a pattern based on WebSphere Application Server Hypervisor Edition that contains two custom node parts. It is conceivable, and in fact likely, that these two custom nodes will host members of the same cluster and thus support the same applications. As such, IBM Workload Deployer will attempt to place the two custom nodes on different physical machines in order to prevent a single point of failure. Of course, this depends on having two hypervisors with enough resource (CPU, memory, storage) to host the virtual machines, but the appliance makes that determination in the first placement stage.
After considering compute resource and high availability, IBM Workload Deployer moves to the last stage of placement: license optimization. In this stage, the placement subsystem attempts to place the virtual machines on hypervisors in a way that minimizes the licensing cost to you. The appliance can do this because it is aware of IBM virtualization licensing rules and takes those into account during this stage (if you aren't familiar with virtualization licensing rules and you are curious, ask your sales representative to explain sometime). During this stage, it will not violate any resource overcommit directives or rules in place, nor will it compromise system availability, but it will seek to minimize costs within these parameters.
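Schematically, and only schematically (this sketch is illustrative, not the appliance's actual code), the three stages compose like a filter pipeline:

```python
# A schematic sketch of the three placement stages described above.
# All names are illustrative; the appliance's actual algorithm is
# internal and considerably more sophisticated.

def eligible(hypervisor, request):
    """Stage 1: compute-resource gating. CPU (and, if you direct it,
    storage) may be overcommitted; memory never is."""
    return (hypervisor.free_memory >= request.memory and
            hypervisor.free_cpu_with_overcommit >= request.cpu and
            hypervisor.free_storage_with_overcommit >= request.storage)

def place(vms, hypervisors, request):
    pool = [h for h in hypervisors if eligible(h, request)]
    placement = {}
    for vm in vms:
        # Stage 2: high availability. Prefer hosts that are not already
        # running this VM's cluster peers, avoiding a single point of failure.
        candidates = [h for h in pool if vm.cluster not in h.hosted_clusters]
        if not candidates:
            candidates = pool
        # Stage 3: license optimization. Among the remaining candidates,
        # choose the host that minimizes license cost.
        chosen = min(candidates, key=lambda h: h.license_cost(vm))
        chosen.hosted_clusters.add(vm.cluster)  # remember for anti-affinity
        placement[vm] = chosen
    return placement
```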
At this point, I should make something clear that may already have occurred to you. You can override most of these placement rules by creating a cloud group containing only one hypervisor. In this case, IBM Workload Deployer will put all virtual machines on the single hypervisor until it runs out of compute resource (memory is likely to be the constraining factor). I would not suggest that you do this unless you have a good reason or you are in a simple pilot phase, but I do like to point out the art of the possible!
While not incredibly deep from a technical perspective, I hope this provided a few helpful details on what goes on during the placement stages of deployment. If you have any questions, do let me know.
I hate sitting on secrets. I always have. I understand that sometimes it's in the best interest of everyone (and your job) to keep tight lips, but that does not make it any more fun. Inevitably, the run-up to our annual Impact conference means everyone in the lab is doing their fair share of secret keeping -- just waiting for announce time. For a lot of us, that wait ended Tuesday with the announcement of IBM Workload Deployer v3.0.
Now, you may be wondering, 'I have never heard of this. Why is it version 3.0?' Well, IBM Workload Deployer is a sort of evolution of the WebSphere CloudBurst Appliance, which was previously at version 2.0. This is good news for all of our current WebSphere CloudBurst users, because all of the functionality (plus new bits, of course) that they have been using in WebSphere CloudBurst is present in IBM Workload Deployer. You can use and customize our IBM Hypervisor Edition images in IBM Workload Deployer. You can build and deploy custom patterns that contain custom scripts in order to create highly customized IBM middleware environments. So, what's the big deal here? Two words: workload patterns.
Workload patterns represent a new cloud deployment model and are an evolution of the traditional topology patterns you may have seen with WebSphere CloudBurst Appliance (I am a little torn between evolution and revolution, but that's splitting hairs). Fundamentally, workload patterns raise the level of abstraction one notch higher than topology patterns and put the focus on the application. That means, when you use a workload pattern the focus is on the application instead of the application infrastructure. Perhaps an example would be helpful to illustrate how a workload pattern may work in IBM Workload Deployer.
Let's consider the use of a workload pattern that was part of the recent announcement, the IBM Workload Deployer Pattern for Web Applications v1.0. Just how might something like this work? It's simple really. You upload your application (maybe a WAR or EAR file), upload a database schema file (if you want to deploy a database with the solution), upload an LDIF file (if you want to set up an LDAP in the deployment to configure application security), attach policies that describe application requirements (autonomic scaling behavior, availability guidelines, etc.), and hit the deploy button. IBM Workload Deployer handles setting up the necessary application middleware, installing and configuring applications, and then managing the resultant runtime in accordance with the policies you defined. In short, workload patterns provide a completely application centric approach to deploying environments to the cloud.
Now, if you are a middleware administrator, application developer, or just a keen observer, you probably have picked up on the fact that in order to deliver something as consumable and easy to use as what I described above, one must make a certain number of assumptions. You are right. Workload patterns encapsulate the installation, configuration, and integration of middleware, as well as the installation and configuration of applications that run on that middleware. Most of this is completely hidden from you, the user. This means you have less control over configuration and integration, but you also have significantly reduced labor and increased freedom/agility. You can concentrate on the development of the application and its components and let IBM Workload Deployer create and manage the infrastructure that services that application.
Having shown off workload patterns and lobbied a bit for their benefits, I also completely understand that sometimes you just need more control. That is not a problem in IBM Workload Deployer because, as I said before, you can still create custom patterns with custom scripts based on custom IBM Hypervisor Edition images. The bottom line is that IBM Workload Deployer offers choice and flexibility. If your application profile meshes well with a workload pattern, by all means use it. If you need more control over configuration or more highly customized environments, look into IBM Hypervisor Edition images and topology patterns. Both are present in IBM Workload Deployer, and the choice is yours.
If you happen to be coming to IBM Impact next week, there will be much more information about IBM Workload Deployer. I encourage you to drop by our sessions, ask questions, and take the opportunity to meet some of our IBM lab experts. Hope to see you in Las Vegas!
The concepts that govern users and user groups in WebSphere CloudBurst are fairly basic, but I get asked about them enough that I believe they warrant a short discussion. First things first, you can define users in WebSphere CloudBurst and optionally define user groups to assemble users into logical collections. For both users and user groups, you can assign roles that define the actions a particular user or group of users can take using the appliance.
All of that is straightforward, but it can get a bit tricky once we start considering the effects of user permissions when managing at the user group level. The basic premise is that when a user belongs to a group or groups, the user's effective permissions are the sum of the permissions of all the groups to which they belong. While that is easy to say, and maybe even to understand, I feel like an example always helps.
Consider that we have a single user WCAGuy that belongs to the PatternAuthors, ContentCreators, and CloudAdmins groups. The permissions for those groups are as follows:
PatternAuthors: Users in this group have permission to create and deploy patterns
ContentCreators: Users in this group have permission to create catalog content as well as create and deploy patterns
CloudAdmins: Users in this group have permission to administer the cloud, create catalog content, and create and deploy patterns
Naturally then, it follows that the WCAGuy user can administer the cloud, create catalog content, create patterns, and deploy patterns. So then, what happens if we remove the WCAGuy user from the CloudAdmins user group? Well, as you may expect, there is an update to the user's permissions. The WCAGuy user can no longer administer the cloud, but they can still create catalog content, create patterns, and deploy patterns (owing to their membership in the other two groups). Similarly, if we next removed the WCAGuy user from the ContentCreators group, the user would retain only the permission to create and deploy patterns.
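The membership rule is easy to express in code. This tiny sketch (illustrative only, using the example groups above) captures the union semantics:

```python
# Illustrative only: a user's effective permissions are the union of
# the permissions granted by every group the user belongs to.
group_permissions = {
    'PatternAuthors':  set(['create patterns', 'deploy patterns']),
    'ContentCreators': set(['create catalog content', 'create patterns',
                            'deploy patterns']),
    'CloudAdmins':     set(['administer cloud', 'create catalog content',
                            'create patterns', 'deploy patterns']),
}

def effective_permissions(user_groups):
    perms = set()
    for group in user_groups:
        perms |= group_permissions[group]
    return perms

print(effective_permissions(['PatternAuthors', 'ContentCreators', 'CloudAdmins']))
print(effective_permissions(['PatternAuthors', 'ContentCreators']))  # cloud administration drops away
```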
Just one more thing: let's talk about what happens when I remove a user from a group and they no longer belong to any groups. Consider that I created the WCAGuy user with the permission to create catalog content as well as create and deploy patterns. Next, I added the user to the CloudAdmins group, meaning the user now has the permission to administer the cloud. I promptly decide that the user has no business with those permissions, so I remove the user from the CloudAdmins group. What happens? The user retains the permission set of the last group to which they belonged. In this case, that means the WCAGuy user retains cloud administration rights. I have to update the user's permission set if I want to take that right away; it will not automatically disappear upon removing the user from the CloudAdmins group.
I hope this helps clear up any ambiguity you may have had concerning users, user groups, and permission sets in WebSphere CloudBurst.
As a final preview of this week's building block sessions in the Enabling cloud computing with WebSphere campaign, I caught up with WebSphere DataPower architect Tim Smith. Tim is delivering a podcast that introduces and explains the new Application Optimization capabilities in the WebSphere DataPower line of products. Here is what Tim had to say:
Me: I speak with quite a few customers about the WebSphere CloudBurst Appliance, and for once I'm happy to be the one asking this question. Why do we deliver WebSphere DataPower in the appliance form factor?
Tim: DataPower has become a dominant player in the DMZ and in the ESB. Much of the reason is that this is a purpose-built hardware appliance. There are many things that our customers like about this appliance package. First, it has security as part of its DNA. The basis for securing connections applies throughout the network, whether in a DMZ or in an ESB. The physical box provides tamper-resistant protection. Another reason is availability: there is no spinning media, there are dual power supplies, and there is a focus on failover support.
In both the DMZ and the ESB, there has been a proliferation of products. The main reason for the proliferation is that customers want to remove as many decisions from the general purpose server as possible, and let servers do what they do best: process application requests. The devices that have been proliferating make more decisions on the request. They do deep packet processing and routing. They also may transform the request into an entirely different request. So, there is an abundance of "pre-processing" decisions and operations being made. With DataPower, many functions are integrated into a single hardware platform, giving you a smaller box count. There is no need to purchase and maintain several platforms, their OS and software versions, compatibility lists, and so on. With a single hardware box that does so many things, we can greatly reduce the total cost of ownership for our users.
The DataPower appliance is a blend of hardware and firmware, well provisioned with hardware assists that help compile, parse, and accelerate many of the intensive packet processing tasks. To summarize, you get an extremely flexible and adaptable product that reduces total cost while increasing performance.
Me: A theme that comes up in cloud computing over and over is consolidation. Can you speak to the consolidation offered by WebSphere DataPower appliances with respect to the self-balancing capabilities?
Tim: Yes. My answer to the prior question was a long-winded way of describing DataPower's ability to consolidate many features into a single platform. Self-balancing is an example. As DataPower became more popular, larger installations required multiple DataPower appliances in a tier of platforms. A common architecture was to place a load balancer or IP sprayer in front of the tier to distribute the traffic evenly among the DataPower appliances. An IP sprayer is an example of another platform that needs to be added to the environment: another box that must be purchased, managed, and maintained. Self-balancing is a feature that was added to DataPower to eliminate the need for an IP sprayer. The way it works is that one of the DataPower appliances in the tier owns the virtual IP (VIP) address. It receives all of the traffic and distributes it to the other DataPower appliances in the tier. If the appliance that owns the VIP address goes down, one of the others is elected and takes over. The result is one less product required to support the same level of functionality.
Me: For much of the past, cloud computing mostly focused on virtualization and management of resources at the raw compute level (servers, storage, networking, etc.). While there is definitely ongoing focus here, we start to see it moving up the stack towards applications, and part of that effort includes more evolved application load distribution. With that in mind, how can WebSphere DataPower help users more effectively distribute requests to their applications?
Tim: If a front end appliance or gateway device can dynamically learn information about its environment, specifically the back end, it will be able to make better decisions on how and where to route the request. This is one of the tasks that the Application Optimization feature addresses. Information from the back end can, of course, be manually configured, but the real value in cloud computing is dynamically adapting when new server resources are brought online or taken offline. In the 3.8.0 release, we implemented something called Intelligent Load Distribution (ILD). Intelligent load distribution focuses on continually learning the topology of a back end, updating DataPower's load balancers with that information, and distributing the load based on the updates. In addition to the topology, ILD learns the weights associated with each server. These weights can continually and automatically change as traffic patterns change. The result is load balancing to the back end that sends the optimal amount of load to each server.
Another traffic distribution aspect incorporated into ILD is session affinity. When a server application needs to receive every request from a given client, session affinity is used to route the requests to the same server. In some sense, session affinity overrides the load balancing algorithm. The session affinity support works with any type of back end server, but with a WebSphere back end, all session affinity information is automatically configured.
Me: Continuing on the theme of application intelligence, what is this new Application Routing option in WebSphere DataPower?
Tim: ILD focuses on learning the topology of the network and making better decisions based on an ever-changing cloud topology. Application Routing does something similar by learning which applications are running on each server. Once a request is handed to DataPower's load balancer, it is classified according to the application it targets. The request is then load balanced among the servers that are running that application. The information needed to perform application routing is dynamically learned and changes as applications are added or removed.
WebSphere has invested substantially in managing the life cycle of an application. Changing from one edition of an application to the next sounds like an easy task, but it can be very difficult to perform this type of maintenance on a production environment. The DataPower appliance supports life cycle management by working with the WebSphere back end to provide group and atomic edition rollout. The rollout feature allows traffic to be gracefully diverted from servers that are being taken offline and reloaded with the new application edition. This rollout can be done while leaving the other applications on the server unaffected. This support makes edition rollout a very simple task for the system administrator.
Next up on our sneak preview of the building block sessions for the Enabling cloud computing with WebSphere campaign is the Dynamic Infrastructure Services block. One portion of that block is a discussion about some of the technical capabilities of WebSphere Virtual Enterprise given by Nitin Gaur. Nitin is a Consulting IT Specialist within WebSphere, and an all-around WebSphere guru. I caught up with him to ask a few questions about his upcoming podcast.
Me: When people think cloud computing, one of the core concepts is 'on demand'. They want just enough resource at just the right time. In that sense, can you tell me a little about the On-Demand Router (ODR) in WebSphere Virtual Enterprise (WVE)? What is it and what core functions does it provide?
Nitin: So, first allow me to take a step back. In my view, cloud computing is a new consumption and delivery model nudged by consumer demand and continual growth in Internet services. In my classification, any cloud computing platform exhibits six key characteristics, among them:
Standards-based delivery
Usage-based, equitable chargeback
I thus deliberately use the term platform for a cloud computing environment that facilitates flexibility, robustness, and agility: a systemic approach to providing a stage for hosting applications without concern for the availability or provisioning of underlying resources. Since hardware and software virtualization offer significant cost and resource management advantages, it is not rare to see virtualized platforms as core building blocks of any cloud platform. Such virtualization technologies provide an elastic infrastructure service. In this respect, WVE provides application server virtualization, which enables an elastic, business-policy-driven application infrastructure.
Now back to the On-Demand Router. The ODR is the autonomic engine that drives the activity enabling the elastic infrastructure discussed above. The ODR operates in a highly dynamic WVE environment, so it is imperative for the ODR to be aware of any changes in the environment such as newly deployed applications, the addition of new application servers, and any planned or unplanned server outages. It achieves this awareness by continuously interacting with WVE's fluid and dynamic feedback mechanism.
Me: Autonomic capabilities seem to be a core part of WebSphere Virtual Enterprise. To that end, can you tell us a little about the autonomic capabilities provided by dynamic clusters in WVE?
Nitin: Dynamic application placement is a defining capability of WVE that directly contributes to WVE's ability to provide a dynamic, virtualized, and goal-oriented environment for workload management and continuous availability. The dynamic application management capability maximizes the efficient use of hardware resources by allocating resources appropriately per application based on fluctuating demands in the enterprise infrastructure. It determines which servers to stop and start in a dynamic server cluster in order to meet current demand for applications, and it does this in the context of a set of administrator-defined policies that uphold the enterprise’s service level agreements (SLAs) for its application infrastructure. The dynamic application placement framework must balance resource availability against health policies, service policies, and the importance levels assigned to applications.
Dynamic server clusters are key to WVE’s ability to dynamically adjust the application environment according to server load, and they provide the basis for a virtualized server runtime environment. The big difference between a dynamic cluster in WVE and a static cluster in WebSphere Application Server is that dynamic clusters grow and shrink as needed to meet current demand by starting and stopping members of the cluster. Although dynamic clusters and static clusters can co-exist in a cell, dynamic application placement can only work with dynamic clusters. To prevent unchecked growth, each dynamic cluster has a mechanism that you use to define a boundary for that cluster’s growth. The boundary is both quantitative (based on criteria that define the minimum and maximum number of application servers that can run in the cluster simultaneously) and locational (based on criteria that confine the growth of the dynamic cluster to a defined set of nodes).
Me: I know you have been around the country, and for that matter globe, helping our users to adopt and implement WebSphere Virtual Enterprise. Tell us about one of your favorite customer stories.
Nitin: I would cite the example of one of the leaders in the entertainment industry (and my favorite customer); let's call them Company X, since I cannot share the name. The core of the company's application infrastructure is the Sales App Infrastructure (SAI), consisting of more than 10 enterprise applications. To keep up with demand, Company X had to keep procuring more hardware and software to support the core systems. This strategy resulted in a large infrastructure footprint with low hardware utilization. The growing hardware footprint became difficult to manage and required additional resources. The large footprint of the company's deployment put them in reaction mode rather than a posture of proactive monitoring. Some application servers rendered themselves unavailable and required the team to restart them every 24 hours. From a cost standpoint, it cost the company the same amount of money to request a virtual platform as it would to purchase a new physical server, which led to significantly underutilized hardware throughout the enterprise. WVE was brought in to help Company X better manage their WebSphere Application Server footprint. Dynamic clusters, application health policies, and application editioning features helped the company better utilize hardware, reduce hardware expenditures, increase visibility into their applications, and improve application availability.
In addition to helping with the existing environment, WVE helped Company X roll out a new project with applications that required continuous availability for worldwide users. The team made use of policy-based workload management to ensure the performance and availability levels of these new applications met their business needs. In addition, the company was able to reduce the number of WebSphere Application Server licenses and physical servers required for this new deployment. In sum, WebSphere Virtual Enterprise saves the company significant time, money, and management effort.
Yesterday, we kicked off a WebSphere in the Clouds campaign designed to connect you with IBMers who can help you leverage WebSphere solutions to build clouds. The campaign consists of webcasts, podcasts, live Q&A sessions, and online JAMs. You can listen to replays and sign up for upcoming events by visiting the Global WebSphere Community website.
Next week, the campaign delivers a series of podcasts that discuss the WebSphere technologies that form the building blocks of clouds. These podcasts will discuss both the business and technical aspects of these solutions, covering topics like application infrastructure in the cloud, policy-based workload management using application virtualization, hybrid cloud integration, and more. Over the past few days, I had the opportunity to catch up with the presenters of these podcasts to ask them a few questions about their solutions. These interviews provide a nice sneak peek at what is coming, and I will be posting them here in the coming days.
To kick things off, I'm posting a video interview with Marc Haberkorn. Marc is the WebSphere Product Manager for WebSphere CloudBurst, WebSphere Application Server Hypervisor Edition, and WebSphere Virtual Enterprise. My colleague, Ryan Boyles, caught up with Marc and got his thoughts on how these solutions enable virtualization and automation for your cloud environments. Enjoy!
When it comes to building and using WebSphere CloudBurst patterns, people always ask me if I have any best practices. It turns out, I do. In fact, I have a singular piece of advice that wraps it all up: build WebSphere CloudBurst patterns so that, once deployed, there is no after-the-fact manual configuration of the running environment. That means build the pattern so that it not only contains all the nodes necessary for your application environment, but also all the configuration necessary for the environment.
Put like this, almost everyone I talk to agrees with me. However, they quickly recognize that, absent this really cool integration with Rational Automation Framework for WebSphere, they will be writing scripts for many configuration actions and including them in patterns in the form of script packages. For users not familiar with configuration scripting for our WebSphere products, this can be a daunting proposition. But... it shouldn't be!
Recently, I put together a short presentation that lays out an iterative approach for developing script packages for WebSphere CloudBurst. Specifically, the presentation focuses on developing configuration script packages for the WebSphere Application Server (though the general concepts apply to all Hypervisor Edition products equally). I believe this method is useful for anyone, from novice users to WebSphere scripting gurus. The basic process goes something like this:
Identify: Identify the target WebSphere Application Server topology and configuration for your application environment.
Deploy: Build a WebSphere CloudBurst pattern that matches your desired topology and deploy it to your cloud.
Develop and Test: Develop and test your configuration script. Not a WebSphere Application Server scripting ninja? No worries. Use the Command Assistance feature in the WebSphere Application Server v7 administration console. This feature shows you the wsadmin commands that match the actions you take manually in the console, which lowers the barrier to entry for those not familiar with wsadmin. (A minimal script sketch follows this list.)
Package: Package up the resulting scripts into a script package along with metadata that describes the package.
Modify and redeploy: Load the new script package into your appliance, add it to your pattern, and then redeploy. Upon deployment completion, verify the scripts produce the desired result.
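To make the Develop and Test step concrete, here is a minimal sketch of the kind of wsadmin (Jython) script a configuration script package might contain. The cell, node, server names, and heap values are hypothetical placeholders; Command Assistance will show you the exact commands for your own configuration actions.

```python
# Minimal wsadmin (Jython) sketch: raise the heap sizes for a server's JVM.
# The cell, node, and server names are hypothetical placeholders.
server = AdminConfig.getid('/Cell:myCell/Node:myNode/Server:server1/')
jvm = AdminConfig.list('JavaVirtualMachine', server)

# Set initial and maximum heap sizes (values in MB).
AdminConfig.modify(jvm, [['initialHeapSize', '512'], ['maximumHeapSize', '1024']])

# Persist the configuration change.
AdminConfig.save()
```

The script package's executable command would then invoke a script like this with something along the lines of wsadmin.sh -lang jython -f tune_jvm.py.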
The presentation provides detail on the above steps and walks through an example scenario for this process. I am embedding it below, and I hope it proves useful. As always, feel free to send in any questions or comments.
A while back I wrote a four-part FAQ series inspired by questions arising from customer visits discussing the first release of WebSphere CloudBurst. With the recent release of WebSphere CloudBurst 2.0, I think it is a good time to revisit the FAQ series with an entirely new set of questions.
For the first part of the series, I want to address a question we get all the time now: "What is the difference between WebSphere CloudBurst and WebSphere Virtual Enterprise?" This question was always fairly common, but now even more so because the new Intelligent Management Pack option for WebSphere Application Server Hypervisor Edition allows you to deploy WebSphere Virtual Enterprise cells using WebSphere CloudBurst.
Fundamentally, the relationship between the WebSphere CloudBurst Appliance and WebSphere Virtual Enterprise is a complementary one. WebSphere CloudBurst provides a means to create your application environments, deploy them into a shared cloud environment, and then manage them over time. In this respect, the appliance focuses on bringing cloud-like capabilities to the application infrastructure layer of your application environments. WebSphere CloudBurst does give you management capabilities for your running, virtualized application environments (e.g., applying maintenance and fixes), but for the most part those capabilities do not extend into the application runtime environment.
Now, you may ask why WebSphere CloudBurst does not extend its reach into the application runtime. The answer is simple: We already have a solution that does just that, WebSphere Virtual Enterprise. WebSphere Virtual Enterprise provides capabilities that allow you to dynamically and autonomically manage your application runtime. You can use WebSphere Virtual Enterprise to not only assign performance goals to your applications, but also to declare the importance of a given application meeting its goals relative to other applications in your organization. This enables the dynamic management of your applications and their resources such that your applications perform according to their goals and relative importance to your business. Simply put, you get an elastic runtime at the application layer of your application environments.
As I said, WebSphere CloudBurst and WebSphere Virtual Enterprise are complementary solutions. Both enable notions of cloud computing, but at different layers of your application environments. WebSphere CloudBurst homes in on the application infrastructure components, while WebSphere Virtual Enterprise zeros in on the applications running in those environments. The new Intelligent Management Pack for WebSphere Application Server Hypervisor Edition means that WebSphere CloudBurst can now dispense WebSphere Virtual Enterprise environments into your on-premises cloud. That means you can take advantage of these complementary solutions from a single, simple management plane.
I hope this helps to clear things up. As always, questions and comments are welcome!
May is almost here and that means that IBM IMPACT is right around the corner. Just like years past, IMPACT 2010 will be a great chance to get valuable education and insight into IBM WebSphere software and software from across the IBM software family. If you want to hear how IBM software is leading the march toward a smarter planet, register now.
IMPACT 2010 will be a great chance to hear the WebSphere cloud computing story. There will be multiple sessions on the WebSphere CloudBurst Appliance. These include customer-led sessions, internal adoption stories, overviews, and much more. I'll be there running a hands-on lab and delivering a session that discusses integration between WebSphere CloudBurst and IBM Rational tools. Of course, there is more to WebSphere and cloud computing than WebSphere CloudBurst. We have several other sessions that will detail all of IBM WebSphere's work in the cloud.
If you are interested, I put together a short video discussing some of the sessions on tap for WebSphere and cloud computing at IMPACT 2010. I'd also encourage you to check out the social media site for IBM IMPACT 2010. On that site, you will find tweets, videos, and blogs about the conference. Don't forget to sign up, and I hope to see you in Las Vegas!
-- Dustin Amrhein
The reason I suggest the application proxy approach is twofold. First, it affords you the ability to have custom interactions with the REST API. For instance, you may insert logic into the server-side proxy code that returns only a subset of the JSON data contained in the response from the appliance. Alternatively, in an effort to reduce chattiness on the client side, you may join JSON data from multiple REST requests to the appliance to fulfill a single client request. You may even decide to represent the data in an altogether different format than JSON. All of these options and many more are available to you if you implement an application-based proxy to the REST API.
The second reason I suggest the application approach is that it is easier, and seemingly safer, not to deal with user passwords on the client side. If you set up your application proxy, you can configure it to retrieve the appropriate password from a secure location (like an encoded file) based on information passed along in the request. This means the password information is only present in the request (in encoded form, of course) from the application proxy to the WebSphere CloudBurst Appliance.
The good news about the application-based proxy approach is that it is simple to put in place. I composed one using the open source Apache Wink project. The Apache Wink project is an open source implementation of the JAX-RS specification (and then some), and it enables you to develop POJOs that are in turn exposed in a RESTful manner. In my case, I had a single resource POJO:
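A minimal version of that resource class looks something like this; the class and method names come from the discussion below, while the query parameter names and path template are illustrative assumptions.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Response;

// JAX-RS resource POJO; the Apache Wink runtime dispatches GET requests
// under /resources/* to this class.
@Path("/resources/{subpath: .*}")
public class WCAResource {

    @GET
    public Response getResources(@QueryParam("hostname") String wcaHost,
                                 @QueryParam("username") String wcaUser,
                                 @PathParam("subpath") String subpath) {
        // Forward the target appliance host, the requesting user, and the
        // request path to the proxy logic (getResource is shown below).
        return getResource(wcaHost, wcaUser, "/resources/" + subpath);
    }
}
```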
The Apache Wink runtime routes any HTTP GET request whose path matches /resources/* to the getResources method of the WCAResource class. This method gathers information from the query string (the host name of the target WebSphere CloudBurst Appliance and the requesting WebSphere CloudBurst username) along with the HTTP path information and passes it on to the getResource method, declared as follows:
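A sketch of that method, as a fragment of the same WCAResource class and using the Apache Wink client API, looks something like this. The PasswordStore helper is a hypothetical stand-in for whatever secure password lookup you implement.

```java
import org.apache.wink.client.ClientResponse;
import org.apache.wink.client.Resource;
import org.apache.wink.client.RestClient;

private Response getResource(String wcaHost, String wcaUser, String path) {
    // Construct the URL of the corresponding WebSphere CloudBurst REST API call.
    String url = "https://" + wcaHost + path;

    // Retrieve the user's password from a secure store keyed by username
    // (hypothetical helper; e.g., backed by an encoded file).
    String password = PasswordStore.passwordFor(wcaUser);

    // Build the basic authorization header; the request itself travels over SSL.
    String credentials = javax.xml.bind.DatatypeConverter
            .printBase64Binary((wcaUser + ":" + password).getBytes());

    // Send the request on to the appliance using the Wink client API.
    RestClient client = new RestClient();
    Resource resource = client.resource(url);
    ClientResponse wcaResponse = resource
            .header("Authorization", "Basic " + credentials)
            .accept("application/json")
            .get();

    // This proxy is a pass-through: hand the appliance's response to the caller.
    return Response.status(wcaResponse.getStatusCode())
                   .entity(wcaResponse.getEntity(String.class))
                   .build();
}
```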
The getResource method above uses the WebSphere CloudBurst host name and the request path to construct the URL for the corresponding WebSphere CloudBurst REST API call. Next, it constructs an Apache Wink Resource object and sends the REST request along to the WebSphere CloudBurst Appliance. How do we authenticate this request? We use the WebSphere CloudBurst username (sent as a query string parameter) to retrieve the appropriate encoded password information. Once we have that, we construct the necessary header for basic authorization over SSL.
The application-based proxy shown here is simply a pass-through. It does not manipulate the data returned from the WebSphere CloudBurst REST API, nor does it map a single client-side call to multiple REST requests. However, it would be simple enough to extend it to do any of those things. If you have any questions about the code here, please let me know. I'd be happy to share more of the code, or talk about how and where to extend it.
The WebSphere Application Server Hypervisor Edition virtual image is made up of four different virtual disks. One of those disks contains pre-created and pre-configured WebSphere Application Server profiles. When the image is activated (either through WebSphere CloudBurst or in a standalone fashion), all of the profiles not being used are deleted, leaving only the intended WebSphere profile type.
Since the profiles are pre-created, certain information must be updated after the image is activated to reflect things that change with each node that is created. Among other things, the cell name, node name, and host name of the WAS profile configuration are usually updated during the image activation process. Nearly every time I talk to WAS administrators about WebSphere CloudBurst and WebSphere Application Server Hypervisor Edition, they are intrigued by this particular configuration update and almost always ask, "How do you do it?" (Dustin's note: Since the command to rename the cell is not officially documented, I have removed it from this post. I'm sorry, but it is for your own good!)
Most of the time this question pops up because users are attempting to freeze-dry certain WAS configurations in their organization, albeit with a narrower focus than WAS Hypervisor Edition. However, no matter how they do that (virtual images, zipped-up configuration files, etc.), they too need to update things like the cell, node, and host names when attempting to reuse the configuration. Many have gone down the route of trying to identify all of the different XML files they need to change in order to update this information, but this is untenable and in fact unnecessary.
If you need to update the node or host name, forget manually updating XML files. Instead, use these three wsadmin commands:
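As a hedged illustration for the host name case (I won't try to reproduce the exact commands, and per the note above, the cell rename command stays out of this post), the documented AdminTask command can be driven like this; the node and host names are placeholders.

```python
# wsadmin (Jython) sketch: update the host name recorded for a node.
# 'myNode' and 'newhost.example.com' are placeholder values.
AdminTask.changeHostName('[-nodeName myNode -hostName newhost.example.com]')

# Persist the change to the configuration repository.
AdminConfig.save()

# In a deployment manager cell, push the change out to the running nodes.
AdminNodeManagement.syncActiveNodes()
```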
The commands can be run from a standalone node or from a deployment manager node. They are pretty straightforward, and if you need more information about them, just take a look in the WebSphere Application Server Information Center. I hope this is helpful information, and stay away from those XML files!
One of the most powerful features of WebSphere CloudBurst is the ability to take one of the WebSphere Application Server Hypervisor Edition virtual images that are shipped with the appliance and extend it to produce a custom virtual image. This allows users to begin creating customized environments from the bottom up, starting with the operating system. To put it in better context, let's take a look at a couple of scenarios where this feature comes in quite handy.
First off, a very common need for our customers is the ability to continually monitor their application environments. For instance, you may install Tivoli monitoring agents on all of your machines hosting WebSphere Application Server processes and configure those agents to report back to a management server. This is a great case for image extension in WebSphere CloudBurst.
In this scenario, you would start by extending an existing WebSphere Application Server Hypervisor Edition image. WebSphere CloudBurst creates a running virtual machine based off of the selected image, and you log into that virtual machine and install the Tivoli monitoring agents. Once the installation is done, you capture the virtual image back into the WebSphere CloudBurst catalog and use the new image to build a custom pattern. The last step is to include a script package on this custom pattern that, upon deployment, will configure the installed monitoring agents to report back to your desired management server.
Another use case will likely be of interest if you are using WebSphere Virtual Enterprise (or something similar) and could benefit from the same ease of provisioning for those environments that WebSphere CloudBurst brings to WebSphere Application Server environments. You can use the same customization combination described above (image extension and custom scripts) to enable WebSphere CloudBurst to essentially dispense WebSphere Virtual Enterprise cells.
Again, this scenario starts off by extending a WebSphere Application Server Hypervisor Edition virtual image. Once the virtual machine for the extension is created by WebSphere CloudBurst, you log in and install the WebSphere Virtual Enterprise product. After the installation is done, you capture the image and store it in the catalog. Next, you build a custom pattern based off of this image and include script packages that, upon deployment, augment the various parts in the pattern from WebSphere Application Server profiles to WebSphere Virtual Enterprise profiles. (You may wonder why you wouldn't just create the WebSphere Virtual Enterprise profiles during the image extension process. This is because during image extension, you cannot make changes to the virtual disk that contains the WebSphere Application Server profiles. Any changes made to the profiles will be wiped out during the capture process.)
There are countless more scenarios for creating custom virtual images in WebSphere CloudBurst. To name a few, you may want to install JDBC drivers that are common to almost all of your application environments, install required anti-virus software, or just make operating system configuration changes. All of these things can be accomplished through the image extension and capture process. Look for an article coming out soon that will discuss and explain, in much greater detail than I provided here, the process of installing and configuring Tivoli monitoring agents in environments dispensed by WebSphere CloudBurst. In the meantime, if you have any questions or comments, drop us a line here or check out our forum.
Over the past several months, industry focus on cloud computing seems to have only intensified. Within IBM, and specifically within WebSphere for the purposes of this blog, there have been several announcements and offerings that demonstrate our commitment to and belief in the cloud computing approach.
To further highlight WebSphere's focus and offerings in the cloud computing realm, we are embarking on a "WebSphere in the Clouds" campaign during the months of September and October. Our intent is to virtually deliver information about our cloud strategy and offerings directly from the experts to you, our WebSphere users.
The event will be kicked off by WebSphere's Director of Product Management, Kareem Yusuf, on September 23rd from 9-10 EDT. Kareem will talk about cloud computing in the enterprise and its unique relationship to SOA thinking and principles. In addition, he'll give an overview of what WebSphere has been doing in the cloud computing space. This will be followed by sessions from technical experts that detail WebSphere offerings in both public and private clouds, as well as sessions that discuss enablers of application and application infrastructure elasticity.
To find out more about the "WebSphere in the Clouds" campaign, you can check out the main announcement page. To sign up for the series of virtual events visit the registration page. We hope you will join us for the series of webcasts to learn all about WebSphere's work in the clouds.
To continue with the series of blog posts regarding WebSphere CloudBurst FAQs, I want to take a look at one aspect of the deployment process.
When you leverage WebSphere CloudBurst to push patterns (complete WebSphere Application Server configurations) into a private cloud, the appliance provides an advanced placement algorithm to determine exactly where the resulting WebSphere virtual systems will reside. It attempts to match the needs of the pattern to the correct set of hypervisors that have been defined. WebSphere CloudBurst considers things like storage, CPU, memory, and high availability characteristics when placing the pattern, and this is all done by the appliance without you having to intervene at all.
This is certainly nice in that it absolves you from having to make such placement decisions. Having said that, you may be thinking of a question that comes up quite often:
If WebSphere CloudBurst controls the placement of the pattern, how can I make sure that certain deployments end up on certain servers (hypervisors)?
Considering what I just told you above, it may not seem that it's possible to control what machines end up hosting your virtual system since the appliance takes care of that placement for you. However, the organized use of WebSphere CloudBurst cloud groups allows you to take advantage of the intelligent placement provided by the appliance while retaining a level of control over which machines end up hosting particular deployments.
In WebSphere CloudBurst all patterns are deployed to cloud groups. Cloud groups are a collection of hypervisors that have been defined within the appliance. The basic deployment mapping is depicted in the image below:
As seen above, you can create a cloud group for any purpose (dev, test, QA, production, etc.), including any hypervisors that you desire as long as a given hypervisor only belongs to a single cloud group. When you are ready to deploy a pattern, you simply select the cloud group you want to deploy to:
By selecting a cloud group for deployment, you are implicitly selecting the physical machines that will host your deployment. The cloud group could consist of anywhere from one to N hypervisors, so you are afforded the ability to restrict the location of your virtual systems as necessary.
I hope this helped explain a little bit about cloud groups in WebSphere CloudBurst. If you're looking for more information about WebSphere CloudBurst cloud groups, I'd also suggest you watch this video on our YouTube channel.
In the previous post, Dustin shared a great video demonstrating the value of the IBM Image Construction and Composition Tool that is now delivered with IBM Workload Deployer V3.1. This is certainly one of the key new features of IBM Workload Deployer V3.1. However, there are also a number of other compelling enhancements and features that we would like to communicate.
I created the attached video to highlight some of the features included in the new Workload Deployer release. The video uses the web console to walk through some of the features and capabilities, giving a brief introduction to each one. Without going into a lot of depth, I think it gives a nice overview. This may be especially helpful if you already have Workload Deployer v3.0 and want to see the value you will get when you upgrade to Workload Deployer v3.1. Check it out.
We believe that these new features make IBM Workload Deployer V3.1 an even better solution for your private cloud needs. Please let us know what you think.
Lately Joe and I have been pretty vocal about bringing up the new IBM Image Construction and Composition Tool capabilities in IBM Workload Deployer v3.1. While writing about such new capabilities is always good, I think seeing is believing. In that light, I hope you will take a look at the recent demo I put together that shows how to use the Image Construction and Composition Tool with IBM Workload Deployer v3.1!
In a recent post, Joe Bohn detailed some of the new capabilities and enhancements that come along with the recently delivered IBM Workload Deployer v3.1. To be sure, there are many valuable new features such as PowerVM support for virtual application patterns, the Plugin Developer Kit, WebSphere Application Server Hypervisor Edition v8, and more. Each of these topics probably merits its own post, but today I want to talk about something I did not mention above. Specifically, I want to talk about the announcements regarding the IBM Image Construction and Composition Tool (ICCT) and what they mean for IBM Workload Deployer users.
You may have read an earlier post that I wrote about the ICCT, but allow me a brief overview here. In short, the ICCT enables the construction of custom virtual images for use in IBM Workload Deployer. You use the tool to create virtual images, much like IBM Hypervisor Edition images, and then you can use those custom images (containing whatever content you need) to create your own custom virtual system patterns. The key point about the custom images you create with the ICCT is that they are dynamically configurable. That is, the tool helps you to create the images in such a way that you can defer configuration until deploy time rather than burning such configuration directly into an image. For those of you familiar with virtual image creation, you know this type of 'intelligent construction' is a huge step towards keeping image inventory at a reasonable level.
Okay, enough of a general overview for now. Let's talk about the two new items of note regarding IBM Workload Deployer v3.1 and the ICCT. The first thing you should know is that starting in IBM Workload Deployer v3.1, the ICCT is shipped with the appliance. This means that you do not need to go anywhere else in order to get your hands on the tool to start creating your custom images. You simply log into IBM Workload Deployer and click the download link on the appliance's welcome panel (shown in image below).
Getting your hands on the tool is one piece of the puzzle, but using it is quite another. While the ICCT has been available as an alphaWorks project for some time, there has never been official support for the tool. That changes with IBM Workload Deployer v3.1. The ICCT is now a generally available product from IBM, which means it is fully and officially supported. Further, the images you create using the tool are also officially supported for use as building blocks of your IBM Workload Deployer virtual system patterns. If you have been using the ICCT for some time but have been hesitant to expand its use because of the lack of a formal support statement, you should now feel free to charge forward!
I hope this helps clear up exactly what the new Image Construction and Composition Tool announcements that were part of IBM Workload Deployer v3.1 actually mean. I cannot wait to hear about how you all are putting the ICCT to use with IBM Workload Deployer. Finally, don't forget to send us any questions, comments, or other feedback that you may have regarding this or any other new feature in IBM Workload Deployer v3.1!
If you follow this blog often, you know that from time to time I like to post frequently asked questions. Well, it's been a while since I have done that, and since then I have added some new questions to my list -- along with some regulars. Take a look below, and if I don't answer your question feel free to leave a comment!
Can IBM Workload Deployer deploy software that is not IBM software? Yes. You can use one of the included images as a springboard and customize it with your own software via extend and capture. Additionally, you can use the IBM Image Construction and Composition Tool (I'm getting ahead of myself here) to create your own custom images from the ground up and use those within IBM Workload Deployer.
Can I use VMotion for the systems I deploy with IBM Workload Deployer? Yes. IBM Workload Deployer has tolerated the use of VMotion since the WebSphere CloudBurst days (see the Additional Considerations section on this page for more information). IBM Workload Deployer v3 introduced the notion of virtual machine mobility initiated directly from the appliance. This capability takes advantage of VMotion in the case of VMware-based cloud environments.
Can IBM Workload Deployer deploy just a base operating system? Yes. IBM Workload Deployer v3 introduced a base operating system image that contains 64-bit Red Hat Enterprise Linux. Internally, IBM Workload Deployer uses this as the foundation on top of which virtual application patterns are deployed. You can use it to deploy virtual machines containing just the base OS, or you can customize it to deploy software of your choosing. (As an aside, IBM Workload Deployer v3.1 will include a base operating system image for AIX.)
Can I automate the process of calling/using IBM Workload Deployer? Yes. IBM Workload Deployer is built to fit a specific need -- creating and managing a cloud of middleware and middleware-based workloads. In that light, it would be a shortcoming if IBM Workload Deployer did not fit well into more holistic or enterprise-wide cloud management systems. The REST API and CLI allow you to automate the use of IBM Workload Deployer, thereby allowing it to be mashed up into other processes. (A short sketch of a scripted REST call follows this FAQ list.)
Can I group two appliances together for high availability? Yes. IBM Workload Deployer v3.1 introduces the ability to configure appliances in a master/slave setup. You can connect two appliances, allow them to share a floating IP address, and be confident that data is continuously replicated between the two. If one appliance fails, the other appliance picks up the floating IP ensuring continuous service.
Are images created using the Image Construction and Composition Tool supported for use within IBM Workload Deployer? Yes. Part of the new IBM Workload Deployer 3.1 announcement was a statement of support for using images created by the Image Construction and Composition Tool as a component of your virtual system patterns. This is a very important enhancement as it allows you to extend the set of content deployed by IBM Workload Deployer while being sure that you are operating within the boundaries of intended use.
Can I use IBM Workload Deployer to provision to public clouds? No... and yes. If you install an IBM Workload Deployer appliance in your datacenter, you cannot use it to deploy to a public cloud environment. However, you may have recently heard about the IBM SmartCloud Application Services portfolio. IBM has announced that the pattern-based provisioning that one gets with IBM Workload Deployer will also be available as part of this portfolio. This means that you will be able to build and deploy patterns using a service hosted on the IBM SmartCloud. Further, your deployed systems will run on the IBM SmartCloud. Check out this demo for more information.
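As promised in the automation FAQ above, here is a minimal sketch of driving the appliance's REST API from a script. The host, credentials, resource path, and response shape are illustrative placeholders rather than documented specifics.

```python
# Minimal sketch: an authenticated GET against an IBM Workload Deployer
# REST resource. Host, credentials, and path are placeholders.
import base64
import json
import urllib2

HOST = "iwd.example.com"
USER = "cbadmin"
PASSWORD = "password"

request = urllib2.Request("https://%s/resources/patterns" % HOST)
request.add_header("Authorization",
                   "Basic " + base64.b64encode("%s:%s" % (USER, PASSWORD)))
request.add_header("Accept", "application/json")

# The appliance answers with JSON; print the name of each returned item.
response = urllib2.urlopen(request)
for item in json.loads(response.read()):
    print item.get("name")
```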
** IBM Workload Deployer 3.1 firmware is available on 11/18.
One of the things that often comes up at some point in IBM Workload Deployer conversations is the notion of self-service access. Specifically, users want to know what the appliance provides that enables them to allow various teams in their organization to directly deploy the middleware environments they need. In other words, they want to use IBM Workload Deployer to tear down the traditional barriers that exist between those that request the environment and those that fulfill said request. Now, as we begin to elaborate on this notion, it becomes quickly apparent that in order to effectively enable self-service, IBM Workload Deployer must deliver a few things.
First, IBM Workload Deployer must provide the means to define users with various levels of access. Second, IBM Workload Deployer must provide the means to define resource access at a fine-grained level to different users and groups of users. Check and check. The appliance has been doing this since the beginning of WebSphere CloudBurst. Without those two things, the conversation of self-service access would end pretty quickly. However, there is a final capability that is equally important: IBM Workload Deployer must deliver a means to limit resource consumption at a fine-grained level.
In IBM Workload Deployer there are a couple of ways to achieve this. First, you could define multiple cloud groups and allow access to those groups in a way that maps directly to resource entitlements. While that may work in some situations, others call for even more granularity. You may want to allow multiple different users or groups to access a cloud group, but you may want to allow different consumption limits for each of these groups. In this situation, you can take advantage of environment profiles and a new option when defining users of IBM Workload Deployer.
Consider the case where you have a group of developers and you want to limit their consumption of memory in the cloud. You start by defining your development users, and for each one you select Environment Profile Only as the value of the Deployment Options field.
By selecting the above value for the deployment options of a user, you restrict that user to only deploying via an environment profile as opposed to general cloud group deployments. After defining all of your development users, you may choose to organize them into a user group for easier management. At that point, you can define environment profiles and determine which ones your developers should have access to using the Access granted to field of the profile.
Within the environment profile, you can define resource consumption limits for compute resources and software licenses. For instance, you can define a limit on the amount of virtual memory consumed by all deployments using the profile. It is important to note that the limit is cumulative across ALL deployments that use the profile.
Now that all of the controls are in place, consider the deployment process for one of your development users. They pick a virtual system pattern, click the deploy icon and begin to configure the pattern for deployment. In the Choose Environment section of the deployment dialog, your development user will only be able to select the Choose profile option for deployment. Further, they will only be able to deploy using the environment profiles to which they have access.
After the deployment completes, a look at the Environment limits section in the profile shows the current usage totals.
Now suppose another development user, or even the same one, comes along and attempts to deploy another virtual system pattern even though the profile limits have already been reached. The user can initiate the deployment, but it will fail almost immediately because it would exceed the consumption limits if allowed to proceed.
The same kind of enforcement occurs regardless of the resource limit type. You can use this approach to limit the consumption of CPU, virtual memory, storage, or software licenses among the various different users or groups of users you define in IBM Workload Deployer. If you combine fine-grained resource consumption limits with varying permissions and fine-grained access, I think you are on the road to truly enabling self-service in the enterprise.
A couple of weeks ago, I dropped by the Intel Developer Forum to present a session and listen in on a few others. As always in these types of shows, I learned quite a bit. Most strikingly though, I was reminded of something that is probably quite obvious to many of you: Consumer interest in cloud computing will not be letting up any time soon.
Based on this, and some of the other things I heard at the show, I decided to catch up with fellow IBMer Marc Haberkorn. Marc is an IBM Product Manager and is responsible for IBM Workload Deployer amongst other things. I asked him about IBM Workload Deployer, the competition, and cloud in general. Check out what Marc had to say below:
Me: IBM Workload Deployer is one among a growing wave of cloud management solutions. How do you differentiate its focus and business value versus the myriad of other solutions out there?
Marc: To sum it up, we offer a combination of depth and breadth. IWD delivers both workload aware management and general purpose management. Workload aware management differentiates IWD from its competition, as it can deliver more value for the set of products for which it has context. There is a set of actions that workload aware management tools can do that is normally left to the user by general purpose management tools. This list includes configuring a middleware server to know its hostname/IP address, configuring multiple middleware servers to know of one another, arranging clusters, applying maintenance, and handling elasticity. By handling more of these activities in the automated flow, there are fewer chances for manual errors and inconsistencies to enter a managed environment.
That said, without infinite resources or time, it's impossible to deliver this context-aware management for everything under the sun. As such, in order to allow IWD to deliver differentiated value AND allow it to handle a customer's entire environment, we offer a mix of workload-aware management and general purpose management.
Me: VMware is a good example of a company active in the cloud space, and they seem to keep a consistent pace of new product delivery. What do you think of their product development focus?
Marc: I think VMware has built a very compelling set of capabilities in the virtualization space. I think the main difference between VMware's suite and IBM Workload Deployer is the perspective from which the environments are managed. VMware puts the administrator in the position of thinking about infrastructure from the ground up. The administrator is thinking about virtual images, hypervisors, and scripts. In IBM Workload Deployer, we think about things from the perspective of the app, because that's ultimately what the business cares about. By providing a declarative model through which an application can be instantiated and managed, we feel we deliver a deeper value proposition to clients through workload-aware management.
Me: The 'one tool to do it all' approach is a popular, if hard to achieve, goal. What is your advice to users when it comes to choosing between breadth and depth for cloud management solutions?
Marc: The advantages of a "one tool to do it all" approach are many: less integration, more uniformity, less complexity. As such, customers will always prefer a single tool when possible. This is why IBM Workload Deployer has focused on not only providing differentiated, deeper value for common use cases but also providing a way to handle everything else. As such, my advice to users is not to choose between breadth and depth - use IBM Workload Deployer, which offers both.
Me: To close, I'm curious to know where you think we are heading in the cloud market. What do you think users will be most readily adopting over the next one to two years? Where does the cloud industry need the most innovation?
Marc: I think most users are currently looking at the broad picture of cloud computing, and have been adopting primarily in the private cloud realm. There are several reasons for this. One reason is that many customers have a large set of hardware resources that amount to sunk cost which needs to be leveraged. Another is data security concerns in off-premises clouds, and still another is the human factor of comfort, which has taken time to develop around off-premises cloud models. However, businesses have become increasingly comfortable with various sources of outsourcing in recent years, especially in mission critical areas involving very sensitive data. Just look at IBM's Strategic Outsourcing business, which handles entire IT operations for many large businesses. I think that trend will continue (and really, has already begun to) in the area of cloud computing, and will lead to more public and ultimately hybrid cloud computing adoption. In order to get to hybrid cloud computing, I see much of the focus and innovation being associated with data security, workload portability (across private and public, in a seamless fashion), and license transferability between private and public. When this space reaches fruition, clients will be able to enjoy true elastic economics in a computing model that allows a mixture of owning and renting compute resources and software licenses.
Script packages are an integral part of virtual system patterns in IBM Workload Deployer. By attaching script packages to your patterns, you provide customizations particular to your unique cloud-based middleware environments. Customizations provided by script packages might include installing applications, creating application resources, integrating with external enterprise systems, and much more. The bottom line is, if you are creating virtual system patterns, you will almost certainly be creating script packages.
Largely, the act of creating a script package is independent of IBM Workload Deployer. The appliance does not dictate a particular scripting language, so all you need to do is make sure you can invoke your logic in the operating system environment. Your script package may be a wsadmin script, shell script, Java program, Perl script, and on and on. After you create the actual contents of your script package, you will then load that asset into the IBM Workload Deployer catalog.
Once loaded into the catalog, you define several attributes of your script package, including the executable command, command arguments, variables, execution time, and more. The process for defining these attributes is trivial using the intuitive UI in IBM Workload Deployer, but I wanted to take a little time to remind you of a technique I recommend to all users defining script packages. You can actually package a JSON file within the script package that defines all of the script's attributes. The format of the file is simple, and I am including an example below:
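Here is an illustrative cbscript.json; the values are placeholders, but the fields map to the attributes just mentioned (the "location" field is the unzip location, and the "keys" entries define variables):

```json
[
   {
      "name": "Sample configuration script",
      "version": "1.0.0",
      "description": "Illustrative script package definition; all values are placeholders",
      "command": "/bin/sh /tmp/sample/configure.sh",
      "commandargs": "$WAS_CELL_NAME",
      "location": "/tmp/sample",
      "log": "/tmp/sample/logs",
      "timeout": "0",
      "keys": [
         {
            "scriptkey": "WAS_CELL_NAME",
            "scriptvalue": ""
         }
      ]
   }
]
```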
The example above is modeled on one from a script package in our samples gallery, and it shows the basics of which you need to be aware. Notice that in the JSON file you can provide a name, description, unzip location, executable command, command arguments, variables, and more. You only need to ensure that the file is named cbscript.json and that you include it at the root of the script package archive. Once you have done that, you load the script package archive into the catalog, refresh the script package details, and voila -- all the attribute definitions appear!
You may ask why I recommend this since it could seem like an unnecessary step. My answer is that you have to define these attributes anyway, so you might as well capture them once in the file. Doing so ensures that if the same script needs to be reloaded, or if you need to move it to another appliance, its definition will be exactly the same (and presumably correct). I use this approach for all of my work and for all of the samples I contribute to our gallery, and it saves me a lot of the misplaced effort that can result from typos. If you are out there creating script packages, try adopting this approach. I'm pretty sure you will be happy you did!
We've begun to seed this location with all sorts of helpful information on IBM Workload Deployer. Check it out and you will find links to a "getting started" section, articles, demos, redbooks, whitepapers, pointers to various blogs where authors write about private clouds or IBM Workload Deployer (yep, this blog is included), links to product documentation and education assistant, upcoming events, and more included in the wiki. We're still populating this location with content and we're looking for input on how to improve things ... so please provide your feedback and check back often to see how it evolves.
The content provided in the community is open and visible to everyone immediately. However, there is even more value if you create an id (or use your existing developerWorks id) to become a member of the community. Members can participate in the many collaborative elements that the community provides. This includes the ability to open discussions and collaborate on the forum, post blog entries in the IBM Workload Deployer community blog, or even share content that you have created which may be of interest to others.
There is even a specific section in the community focused on the Plugin Developer's Kit that Dustin mentioned in the previous post on extensibility (see the IBM Workload Deployer PDK wiki page).
So please visit this new IBM Workload Deployer community and send us your feedback so that we can improve and grow this into a valuable resource. Ultimately, we want this to be a place where we can help each other be successful using IBM Workload Deployer. We also want to learn from your experiences with IBM Workload Deployer so that we can continue to make improvements and optimizations in the appliance, with the goal of improving your private cloud experience and making your business more agile and efficient.
Customization capabilities have been very important to the design of IBM Workload Deployer going back to the beginning with WebSphere CloudBurst. Having the ability to quickly spin up environments in a cloud really does little good if those environments are not customized according to your needs. If you look at the virtual system pattern capability, it is why we always had the notion of custom images, custom patterns, and custom scripts. We give you a strong foundation, and you tweak it here and there to create what you want.
Customization is not a concept unique to virtual system patterns. The virtual application model in IBM Workload Deployer supports many different mechanisms for you to tailor your cloud-based environments. You can start with the virtual application pattern types that we ship and use any components in those patterns to build a custom environment. The patterns you build can include your own configuration (within the set of configurable parameters) and include policies that you need for your environment. In looking at just the IBM Workload Deployer Pattern for Web Applications and the IBM Workload Deployer Pattern for Databases, there are quite a number of scenarios you can support with your cloud. However, what happens when you want to go a little further and color outside the lines of what we provide?
At some point you may have heard or read that the entire virtual application pattern model resides on a pluggable architecture. In effect, this means that everything about a virtual application pattern type, from the elements that show up when building a pattern to the management interface you interact with after deployment, is customizable. The fundamental unit of customization for a virtual application pattern type is a plugin. Plugins provide the know-how in terms of installing, configuring, integrating, and managing the application types supported by a given pattern. Plugins also provide metadata that control what users see as they build and manage these patterns. In short, plugins are the source of truth for virtual application patterns!
If you looked in IBM Workload Deployer, you would find the collection of plugins that support the virtual application pattern types shipped with the offering. While that is interesting, you should also know that you can supply your own plugins. That's right. You can develop a plugin, and load it directly into the appliance. This allows you to do two very important things. First, you can extend the virtual application pattern types that come with IBM Workload Deployer with any kind of functionality you deem important. This may be additional monitoring, integration with external systems, or any number of other extensions. Second, you can create new virtual application pattern types that support your desired workloads. You can support the workloads with the software of your choosing so long as you can supply the necessary know-how in your plugins. In either case, you contribute the plugin, and your customized components become first class members of the IBM Workload Deployer landscape.
Okay, so I admit that this is not necessarily news. We have supported user-contributed plugins since the release of IBM Workload Deployer. However, there is something new that significantly lowers the barrier to entry in the custom plugin game. Early last week, IBM announced the IBM Workload Plugin Development Kit. This kit provides a set of tools and samples designed to make the construction and packaging of custom plugins a simple process. In my opinion, this reiterates our commitment to an extensible, application-centric cloud approach, and it represents a huge step forward in the industry as a whole. Be sure to check this out, and don't be shy with the comments and feedback!
As Joe mentioned in his last post, virtual application patterns are all the rage in IBM Workload Deployer. The high degree of abstraction provided by these patterns means users can remove tedious, time consuming tasks like middleware installation, configuration, and integration from their field of view. As a consequence, users can build and deploy application environments in unprecedented time, thus freeing up more time to focus on the actual application.
This is obviously important because building and deploying application environments are crucial, traditionally time-consuming activities. However, what happens after you build and deploy the application? You manage it, that's what! Joe brought up the fact that IBM Workload Deployer makes this easier too by delivering an integrated management portal through which you can manage and monitor your application environments. Now, this probably already sounds valuable, but what really puts it over the top is that the management portal exposes an interface that is workload aware. But what does that mean?
To get an idea of what that means, consider the case that you use the shipped virtual application pattern to build a simple application environment with a web application and database. You deploy it with IBM Workload Deployer, and your application is up and ready. Now you want to start checking things out. You start by opening the management portal directly from the appliance, and you see both the application and database components listed in the view:
After looking at basic machine statistics such as network activity and memory usage, you can move on to a more workload-centric view. For instance, you can examine statistics particular to a web application, such as request counts and service response times:
You may also decide that you want to alter certain aspects of your deployed environment. As an example, you could update your deployed application or change certain configuration data in the deployed environment:
It is important to note that you have a management interface for each of the components in your environment. That means that from the same management interface, you can manage and monitor the database you deployed as part of your environment. For example, at different intervals you may want to back up your database. You can do this directly from the management portal provided by IBM Workload Deployer:
Lest you think that you can only manage and monitor, this unique management interface is also a one-stop shop for all of your troubleshooting needs. From the centralized portal, you can view log and trace data for each component:
Virtual application patterns are an attempt to encapsulate each phase of your application's lifecycle, from creation to deployment to management. In this regard, I hope the above provides a taste of some of the management capabilities provided by virtual application patterns. It truly is the tip of the iceberg!
In a post not long ago, I mentioned new enhancements to virtual system patterns in IBM Workload Deployer. A prominent part of those enhancements was a set of updates to pattern construction that allow you to order virtual machine startup, order script package invocation, and include add-ons that provide system-level configuration options. Recently I uploaded a demonstration to YouTube that highlights some of these new capabilities. Specifically, it provides a brief look at the ordering and add-on enhancements.
I hope you take a look, and even more importantly, I hope to see some feedback. If you have something you would like to see captured in a demo, let me know and I'll work it to the top of a long and continually growing list!
When it comes to IBM Workload Deployer, I have no illusions concerning our competitors. They are out there, and they are constantly on the attack. Their dubious claims aside, I know this because I still get asked quite frequently to explain the benefits of IBM Workload Deployer versus some other general purpose cloud provisioning and management solution. So, while I have done that many times in various forums, I figured it was time to address the subject here on the blog.
When comparing IBM Workload Deployer to the other available solutions, I honestly feel comfortable saying we have no direct competition. I know you believe me to be biased, and rightly so, but let me explain why I think the competition is much more perception than reality. To do this, I want to focus on the patterns-based approach that IBM Workload Deployer takes to cloud provisioning and management.
Let's start with virtual system patterns in IBM Workload Deployer. Virtual system patterns allow you to build and deploy completely configured and integrated middleware environments as a single unit. These patterns build on top of our special IBM Hypervisor Edition images that bottle up the installation and quite a bit of the configuration of the underlying middleware products. Further, when using virtual system patterns, IBM Workload Deployer manages and automates the orchestration of the integration tasks that need to happen to setup a meaningful middleware environment. For instance, when deploying WebSphere Application Server you do not need to do anything on your end to deploy a clustered, highly available environment. When deploying WebSphere Process Server in this manner, you do not need to take any administrative actions to produce a golden topology. You just deploy patterns and the images, patterns, and appliance take care of the rest. Of course, you can add your own customizations and tweaks in the pattern, but we take care of the common administrative actions that would otherwise require your care.
I am not sure of a better way to say it, so I will be blunt: when deploying products delivered in IBM Hypervisor Edition form, no other solution compares to the virtual system pattern capability offered by IBM Workload Deployer. It is not even close. Can you provision products like WebSphere Application Server or WebSphere Portal using other cloud provisioning tools? Sure, but you should be aware that you will be writing and maintaining your own installation, configuration, and integration scripts. It is also likely that you will end up developing a custom interface through which deployers request your services (something not necessary when using the slick IBM Workload Deployer UI). All of this takes time, resources, and money. More importantly, this is not differentiating work, and it distracts from the real end goal: serving up applications. IBM Workload Deployer can deliver this operational capability right out of the box, and it can do so in a way that costs less than custom developed and maintained solutions.
When considering IBM Workload Deployer versus the competition, it is also important to consider the new virtual application pattern capability delivered in version 3.0. The virtual application pattern capability is a testament to IBM's thought leadership in and commitment to cloud computing for middleware application environments. Virtual application patterns take a bold step forward in raising the level of abstraction beyond the middleware environment and up to the most important resource in enterprise environments: the application. With a virtual application pattern, you simply provide your application and specify both functional and non-functional requirements for that application. When ready, you deploy that pattern, and IBM Workload Deployer sets up the necessary middleware infrastructure and deploys the provided application. Moreover, the appliance will monitor and autonomically manage the environment (i.e. scale it up and down) based on the policies you specify. Quite simply, this is a deployment and management capability our competition cannot match.
There is more to consider than just patterns though. The appliance makes it really simple to apply maintenance and upgrades to environments running in your cloud. It can autonomically manage your deployed environments (through policies in virtual application patterns and the Intelligent Management Pack for virtual system patterns), and it effectively abstracts the underlying infrastructure of your cloud environment. This abstraction is the reason IBM Workload Deployer can deploy your environments to PowerVM, z/VM, and VMware environments. It also makes it easy to deploy the same environment to multiple different underlying platforms, thus accommodating typical platform changes that happen as an application moves from development to production. The best part of all is that the deployer's experience is the same regardless of the underlying infrastructure since the appliance hides any platform idiosyncrasies.
The bottom line is that the appliance is purpose built to deploy and manage middleware and middleware application environments in a cloud, and as such, delivers immense out-of-the-box and ongoing value in this context. I should also point out that the design of the appliance acknowledges its purposeful nature. The CLI and REST API interfaces allow you to integrate the appliance into the operations of those general purpose provisioning solutions. In this way, IBM Workload Deployer acts as a middleware accelerator for your cloud computing efforts. This means that if you do have a general purpose solution, IBM Workload Deployer can still provide considerable value and let you avoid developing a considerable subsystem dedicated to deployment and management of middleware in the cloud. We believe in this type of integration, and have in fact built it into our own IBM solutions.
I could go on and on differentiating IBM Workload Deployer from the competition, but I hope my comments above give you a good context on why I think the appliance is in a league of its own. Of course, I always appreciate comments and feedback, so don't be shy!
If you are reading this blog then I am pretty sure that you are interested in the agility that can be achieved by rapidly provisioning middleware systems and standing up virtual applications in a private cloud environment. However there are other aspects of agility that you should also consider. One such aspect is the ability to build applications that can be easily maintained, updated, and extended. This is where OSGi technology comes into the picture.
If you have been working with the IBM Workload Deployer (or watching some IBM Workload Deployer demos) you may have noticed a category of components in the virtual application builder called OSGi Components.
Maybe you already know all about OSGi applications and the value they bring to an enterprise. Or, perhaps you noticed this and decided that you would search for some more information on this odd acronym and just what an OSGi application is all about.
In a nutshell, OSGi technology is a way to define dynamic modules for Java. It provides a standard way to encapsulate components (called bundles) with metadata that defines versioned package dependencies, service dependencies, packages exported, services exported, and so on: basically everything you need to know about a bundle so that it can be connected with other bundles to support a particular solution. These bundles can then be grouped into applications and dynamically wired to fulfill the necessary dependencies at runtime. The OSGi framework provides all of the capability needed to manage the dependencies and resolve any problems.
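For a flavor of that metadata, here is a minimal, purely illustrative bundle manifest (the bundle and package names are made up):

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.billing
Bundle-Version: 1.2.0
Bundle-Name: Example Billing Bundle
Export-Package: com.example.billing.api;version="1.2.0"
Import-Package: com.example.accounts;version="[2.0,3.0)",
 org.osgi.framework;version="[1.5,2.0)"
```

The Export-Package and Import-Package headers are what let the framework wire this bundle to others that provide or consume the named packages, within the declared version ranges.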
Those who leverage OSGi technology benefit from improved time-to-market and reduced development costs. The loose coupling provided by the OSGi framework reduces maintenance costs and facilitates the dynamic delivery of components in a running system. Of course there's a lot more to it than just that ... involving portability across different environments, achieving the appropriate level of isolation or sharing within an environment, and integrating with the many different technologies and patterns already available today. I don't think I know enough about OSGi to do it justice here. But fortunately for me (and you) there are several experts who can make it all clear.
One such expert is Graham Charters, and there is a great opportunity to hear him introduce this topic and also participate in a dialogue about the concepts and what they mean for your business. Graham will be leading a Global WebSphere Community Lab Chat on Wednesday of this week (July 20th) entitled "How can OSGi make your enterprise more agile?". Graham is the IBM technical lead in the OSGi Alliance Enterprise Expert Group and an active participant in the open source community implementing many of these standards. So register now for this free session and learn how OSGi can make your enterprise even more agile.
A few weeks ago, I had a conversation with a current WebSphere customer about the potential value they could derive from the use of IBM Workload Deployer. Right away, this customer saw value in the consistency that a patterns-based approach could afford them. It was clear that patterns eliminate the uncertainty that can make its way into even the best-planned deployment processes. Initially though, the customer questioned the value of being able to do fast deployments because, in their words, "We don't deploy WebSphere environments that often." So, we continued our discussion, and then they asked an important question that I encourage all of our users to ask: "Why don't we deploy our WebSphere environments more frequently?"
It is interesting to talk with our WebSphere users who have a long history with our products. Oftentimes, they have been taking a shared approach to WebSphere installations for many, many years. They develop innovative approaches and isolation schemes that allow them to carve up a single WebSphere installation (cell) amongst multiple different application teams. This allows them to avoid having to set up a cell for each application deployment and saves them the associated time. However, having talked to many different users taking this approach, I can tell you it is not without its challenges.
As was the case with the customer I mentioned above, users typically make trade-offs when electing for larger, shared cells. As an example, if you have multiple different application teams with different types of applications using a single cell, applying fixes and upgrades to that cell can be a lot more complex. After all, you now have to coordinate plans across a number of different teams and find a window that fits all of their needs. For the same reason, trying out incremental function via our feature packs is much more arduous in these types of cells. Additionally, administrative controls become more complex since teams with varying needs all require administrative access. Admittedly, this gets simpler with the newer fine-grained security models in WebSphere Application Server v7 and v8, but it still requires organizational discipline and process.
At this point I should be clear that I am not denigrating the shared cell approach. It can work well, and we have many facilities built into the WebSphere Application Server product to support that model. However, if you are using this approach and you find yourself stumbling too much for your own liking, then I would strongly suggest that you explore the patterns-based approach of IBM Workload Deployer. By deploying patterns that represent your WebSphere cells using IBM Workload Deployer, you can quickly and consistently setup multiple WebSphere Application Server cells to support the varying needs of your application teams. You will still avoid spending an inordinate amount of time installing and configuring cells as that is an automated part of pattern deployment, and your application teams will still get the resources they need. Further, this can liberate your application teams in terms of how they apply maintenance, install upgrades, and absorb new function in the form of feature packs.
I am not suggesting a complete pendulum swing in your approach to how you manage multiple application environments. There is definitely a happy medium in terms of how many cells you end up with. After all, you do not want to trade one set of problems for the problem of managing far too many different cells. However, I do think that decomposing monolithic, multi-purpose cells into smaller, more purposeful cells can be beneficial. In the course of thinking about this different approach, you may come to the same conclusion that the customer I mentioned above did: IBM Workload Deployer's rapid deployment capabilities are indeed valuable if you take a slightly different view of current processes.
In my opinion, declarative deployment models are key to the entire notion of Platform as a Service (PaaS). That is, users should concern themselves with what they want, but not necessarily how to get it. The PaaS system should be able to interpret the requirements a user declares and automatically convert them into a running system. In this respect, I think the new virtual application pattern in IBM Workload Deployer, and more specifically its policies, takes a giant leap toward a more declarative deployment model.
In IBM Workload Deployer, policies allow you to 'decorate' your virtual application pattern with functional and non-functional requirements. In other words, they provide a vehicle for you to tell the system what qualities of service you expect for your application environment. To put a little context around this discussion, let's examine the policies available in the virtual application pattern for web applications. Specifically, let's look at the four policy types you can attach to Enterprise Application, Web Application, and OSGi Application components in this pattern (a sketch of how these policies might be expressed follows the descriptions below):
Scaling policy: When it comes to cloud, the first thing many folks think about is autonomic elasticity. Applications should scale up and down based on criteria defined by the user. Well, that is exactly what the scaling policy lets you do. You simply attach this policy to your application component, and then specify properties that define when to scale. First, you choose a scaling trigger from a list that includes application response time, CPU usage, JDBC connection wait time, and JDBC connection pool usage. After choosing your trigger, you decide the minimum and maximum number of application instances for your deployment, and then you choose the minimum number of seconds to wait for an add or remove action. At this point, you can deploy your application and IBM Workload Deployer will monitor the environment, automatically triggering scaling actions as needed.
JVM policy: I would be willing to bet that nearly all of you tune the JVM environment into which you deploy your applications. The JVM policy allows you to take two common tuning actions, setting the JVM heap sizes and passing in JVM arguments, and it also lets you attach a debugger to the Java process (especially useful in development and test phases). You can also use the policy to enable verbose garbage collection (invaluable for understanding heap usage patterns for your application) and select the bit level (32 or 64) for your application. Again, all you have to do is attach the policy and specify the properties. IBM Workload Deployer will take care of the required configuration updates.
Routing policy: The routing policy provides a simple way to specify virtual hostnames and allowable protocols (HTTP or HTTPS) for your application. Attach the policy, provide the virtual hostname you want to use, select the desired protocols, and that's it! Remember, once you set the virtual hostname you will need to update your name server to map the hostname to the appropriate IP address.
Log policy: During the development and test phase, it is likely that you will want to enable certain trace strings in the application runtime. The log policy allows you to provide trace strings for your application, and it makes sure that the appropriate configuration updates occur in the deployed environment.
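As promised above, here is a purely illustrative sketch of the kinds of properties these four policies carry, expressed as a Python dictionary. The key names are assumptions chosen for readability, not the product's actual virtual application model schema:

```python
# Illustrative only: the properties described above for each policy
# type, grouped in one structure. Key names are assumptions, not the
# actual IBM Workload Deployer schema.
web_app_policies = {
    "scaling": {
        "trigger": "cpu_usage",     # or response time, JDBC pool usage, ...
        "min_instances": 2,
        "max_instances": 10,
        "min_settle_seconds": 120,  # minimum wait between scaling actions
    },
    "jvm": {
        "min_heap_mb": 512,
        "max_heap_mb": 1024,
        "generic_args": "-Dcom.example.debug=false",
        "verbose_gc": True,         # invaluable for heap usage analysis
        "bitness": 64,              # 32 or 64
        "enable_debug": False,      # attach a debugger in dev/test
    },
    "routing": {
        "virtual_host": "orders.example.com",
        "protocols": ["HTTPS"],     # HTTP and/or HTTPS
    },
    "log": {
        "trace_spec": "com.example.orders.*=fine",
    },
}
```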
While this is not an exhaustive explanation of each of the policies above, I hope it gives you a basic idea of what they are and how to use them. To me, declarative deployment models are going to be a crucial part of making PaaS successful, so I am really excited about the notion of policies in IBM Workload Deployer. What do you think?
We've been talking a lot about IBM Workload Deployer V3 and we will continue to highlight different aspects of the capabilities it provides in the coming weeks. As we've already mentioned, IBM Workload Deployer V3 is not just another release of the IBM WebSphere CloudBurst Appliance. While it builds on WebSphere CloudBurst's success, and supports and improves upon all of its original capabilities, Workload Deployer provides new application-centric computing capabilities for your private cloud, and brings you higher utilization, improved ease of use, and more rapid application deployment.
I just wanted to point out a great opportunity for anybody considering leveraging IBM Workload Deployer v3 to deploy database workloads. On June 29th, Rav Ahuja, a Senior Product Manager for Data Management at IBM, will be hosting a webcast entitled "Easily Deploying Private Clouds for Database Workloads". He will be joined by Chris Gruber (Product Manager, Database as a Service), Leon Katsnelson (Program Director, IM Cloud Computing Center of Competence), and Sal Vella (Vice President, Database Development and Warehousing) in this panel discussion.
As many of you already know, IBM Workload Deployer v3 comes pre-loaded with DB2 images and patterns that are configured to rapidly provision standardized database servers for any number of purposes. The servers can be deployed in standalone configurations or as part of a complete virtual system including web components with the database components. These servers can also be configured for high availability scenarios. This panel discussion will cover all of these scenarios and more.
You can read more about the webcast in this blog post by Rav Ahuja.
If you want further details about how to build and rapidly deploy databases in a private cloud, be sure to attend this free webinar on June 29th.
Among the major features of the new virtual application pattern in IBM Workload Deployer is the notion of elasticity. That is, as your application needs more resources, it gets them. When your application can meet its SLAs with fewer resources, the environment shrinks. With this kind of pattern, you enable elasticity by specifying a policy and defining the scaling trigger (e.g. CPU usage, application response times, or database response times). What may have been a bit lost in some of these new announcements regarding IBM Workload Deployer is the fact that you can now leverage this core feature of cloud, elasticity, in your virtual system patterns as well.
If you have read this blog in the past, you probably already know that the Intelligent Management Pack is an option for virtual system patterns built using WebSphere Application Server Hypervisor Edition. When you enable the Intelligent Management Pack option, you are essentially building and deploying WebSphere Virtual Enterprise (WVE) environments. For those of you not familiar with WVE, the best way to describe it is that it provides you with application and application infrastructure virtualization capabilities. Of its many capabilities, one most germane to our discussion today is the ability for users to attach SLAs to applications and then have WVE automatically prioritize requests and manage resources in order to meet those SLAs. Inherent in this capability is the ability to dynamically start and stop application server processes (JVMs) as required. In other words, WVE provides JVM elasticity.
The fact that WVE provides JVM elasticity is nothing new. Further, IBM Workload Deployer started providing virtual machine (VM) elasticity in previous versions (when it was WebSphere CloudBurst). With this feature, you could add or remove VMs to an already deployed virtual system using dynamic virtual machine operations provided by the appliance. The catch was that the VM elasticity was a manual action and you could not link this elasticity to the same SLAs tied to your applications. Well, thanks to a new feature in WebSphere Virtual Enterprise and easy integration provided by the Intelligent Management Pack, this is no longer the case.
Starting in IBM Workload Deployer 3.0, you can take advantage of a new WVE feature called Elasticity Mode when using the Intelligent Management Pack. Elasticity mode is not unique to IBM Workload Deployer, but rather a new concept in the base WVE product. It allows you to define how WVE should grow and shrink the set of nodes used by application server resources. Like the basic JVM elasticity capability in WVE, these node elasticity actions trigger based on the SLAs tied to your applications. Consider the case where you are using elasticity mode and your application is not currently meeting its SLA. If WVE does not think it can start any more application server instances on the current set of nodes, it will grow the set of nodes per your elasticity configuration. Conversely, if WVE detects that it can meet SLAs with fewer nodes, it will shrink the resources per your elasticity configuration.
In IBM Workload Deployer, using elasticity mode becomes even easier. All you need to do is use the Intelligent Management Pack and enable the elasticity mode option in your virtual system patterns. When you do this, you get automatic integration between IBM Workload Deployer and the deployed WVE environment. What does that mean? It means that if WVE detects it needs more nodes, it will automatically call back into IBM Workload Deployer and request that the appliance provision a new VM that will serve as a node for application server processes. It also means that if WVE detects it could meet SLAs with fewer resources, it will call into IBM Workload Deployer and ask it to remove a node. All of this happens without any user scripting. All you have to do is enable this option in your patterns and configure SLAs appropriate for your applications.
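For the curious, here is a schematic sketch of the control loop just described. Every name in it is illustrative; the actual logic lives inside WVE and the appliance, with no user scripting involved:

```python
# Schematic sketch of the elasticity-mode behavior described above;
# none of these names are actual WVE or IBM Workload Deployer APIs.
def elasticity_tick(wve, deployer, deployment):
    if not wve.is_meeting_slas(deployment):
        if wve.can_add_jvms_on_current_nodes(deployment):
            # Classic WVE JVM elasticity: start another app server.
            wve.start_application_server_instance(deployment)
        else:
            # Grow: WVE calls back into the appliance to provision a
            # new VM to serve as a node for application server processes.
            deployer.add_node(deployment)
    elif wve.can_meet_slas_with_fewer_nodes(deployment):
        # Shrink: release a node back to the cloud.
        deployer.remove_node(deployment)
```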
To me, this exciting new feature brings out the best of elasticity capabilities in both IBM Workload Deployer and WebSphere Virtual Enterprise. The result is a single management plane that gives you both VM and JVM elasticity for your cloud-based application environments. Best of all, elasticity actions map directly to SLAs for your applications. After all, when it comes to cloud, it's the application that really matters!
As I have mentioned before, IBM Workload Deployer v3.0 introduces choices in pattern-based deployment models. One of those models, virtual system patterns, is a carryover from the WebSphere CloudBurst Appliance. When you use virtual system patterns in IBM Workload Deployer, you can take advantage of all of the techniques you put to use in WebSphere CloudBurst. This is certainly good news for current WebSphere CloudBurst users, but it goes a bit further. Instead of simply maintaining the status quo with virtual system patterns, which would have been reasonable considering the introduction of virtual application patterns, we chose to continue expanding your customization options for this pattern deployment model. In particular, I want to discuss three new features in IBM Workload Deployer that may help you better construct and manage virtual system patterns.
The first new feature is one that I have been eagerly awaiting. In the new version of the appliance, we provide you with the ability to specify part and script package ordering in your pattern. This means that, within the virtual system pattern editor, you can tell IBM Workload Deployer in which order to start the virtual machines in your pattern, and you can specify in which order to invoke the script packages within the pattern during deployment. This eliminates the need for special script invocation orchestration logic in your pattern (I had customers resorting to a semaphore-like approach using a shared file system), and it allows you to be more declarative about the virtual machine bring-up process. There are constraints, specifically with part ordering. Some images impose an implied part start-up order that you cannot change. For instance, deployment manager parts in the WebSphere Application Server Hypervisor Edition image must start before custom node parts. The good news is the pattern editor will not allow you to specify a part start-up order that violates these constraints. The image below shows an example of the ordering view in the virtual system pattern editor.
Another new feature that may influence the way you build virtual system patterns is the introduction of Add-Ons. You can think of Add-Ons as special script packages that you can include in your virtual system pattern that perform system-level configuration actions. Specifically, you can include add-ons in your virtual system pattern to add an operating system user, add a virtual disk, or add a NIC during the deployment process. You include Add-Ons in your pattern by simply dragging and dropping them onto a part in your pattern, just as you do with script packages today. The difference between script packages and Add-Ons is that IBM Workload Deployer will ensure the invocation of all Add-Ons before any other scripts run during deployment. We include default Add-On implementations for adding a user, disk, and NIC.
The last new feature I want to talk about today has more to do with how you manage or govern the deployment of virtual system patterns. In WebSphere CloudBurst 2.0, we introduced the idea of Environment Profiles as a way to extend your customization reach into the deployment process. Initially, these profiles gave you the ability to directly assign IP addresses to virtual machines in your deployment, declaratively specify virtual machine naming formats, and easily split a single pattern deployment across multiple cloud groups. In IBM Workload Deployer, you will be able to use these same profiles to set resource consumption limits for pattern deployments. In particular, you will be able to set cumulative limits for virtual CPU, memory, storage, and software licenses used by deployments tied to a specific profile, thereby giving you finer-grained control over cloud resource consumption. The picture below shows the new resource limit aspects of environment profiles.
Virtual system patterns are key in the deployment model choices for IBM Workload Deployer. Not only did we carry the concept over from WebSphere CloudBurst to IBM Workload Deployer, but we made it even better. Expect this trend to continue!
More and more, I am getting questions about how to bring existing WebSphere environments into IBM Workload Deployer. While "bringing in an environment" can mean any number of things, let's take it to mean that a user wants to import their existing WebSphere cells, applications, and configuration into IBM Workload Deployer as a pattern they can subsequently deploy. There may not be a big red easy button in the appliance that lets you point to an existing environment and import it, but there are a couple of techniques one can employ. I have covered both techniques before, but since I'm getting the question with increasing frequency, I felt it was time for a recap.
The first option is to use a combination of IBM Workload Deployer and Rational Automation Framework for WebSphere (RAFW). This is a use case I have spoken about numerous times at conferences and in blog posts and articles. In fact, you can read a little about it here. In this scenario, RAFW provides excellent capabilities to point at an existing cell and import everything about it. This includes WebSphere configuration, applications, shared libraries, and more. Once the cell is imported as a RAFW project, you can use the IBM Workload Deployer integration script package provided by RAFW to replay that configuration on top of deployments created by the appliance.
The second option is something I talk about a little less frequently. This option revolves around the use of a sample script (provided for free in our samples gallery) that you can run against existing WebSphere cells. The invocation of this script produces IBM Workload Deployer script packages that you can use in patterns to apply the configuration of the target cell to your new cloud-based deployments. Under the covers, the utility script and resultant script packages use backupConfig and restoreConfig, respectively. The script packages do update the cell, node, and host names during the restoreConfig execution (which happens automatically during pattern deployment). Beyond that, the use of the script is subject to the same limitations and rules that apply to the backupConfig and restoreConfig commands. You can read more about this capability, watch it in action, and download it for free.
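As a rough sketch of what happens under the covers, here is the backup-then-restore flow expressed in Python. The paths and profile locations are hypothetical, and the real sample also handles the cell, node, and host name updates for you:

```python
# Sketch of the capture/replay idea behind the sample script packages.
# backupConfig.sh and restoreConfig.sh are standard WebSphere commands;
# the paths below are hypothetical.
import subprocess

DMGR_BIN = "/opt/IBM/WebSphere/AppServer/profiles/Dmgr01/bin"

# On the existing cell: archive the full WebSphere configuration
# without stopping the running servers.
subprocess.run([f"{DMGR_BIN}/backupConfig.sh", "/tmp/cellConfig.zip",
                "-nostop"], check=True)

# On the cloud-deployed cell: replay the archived configuration.
subprocess.run([f"{DMGR_BIN}/restoreConfig.sh", "/tmp/cellConfig.zip"],
               check=True)
```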
I hope this is all useful information for those of you looking for ways to import existing environments into IBM Workload Deployer as patterns. If you have any questions, please let me know!
WebSphere configuration management practices are a common topic of conversation that comes up when I am talking with users about IBM Workload Deployer (formerly WebSphere CloudBurst). This conversation can take so many different avenues that it is hard to capture all of them in a short blog post. So, for the sake of this post, let's consider two facets of WebSphere configuration management. The first facet is addressing the need to consistently arrive at the same configuration across multiple deployments of a given WebSphere environment. The second facet involves managing the configuration of a deployed environment over time to protect against configuration drift. What is the best way to tackle these two challenges? Well, it comes down to picking the right tool for the job.
When it comes to ensuring consistency of initial WebSphere configuration from deployment to deployment, there is really no better means than the patterns-based deployments enabled by IBM Workload Deployer. Whether you are using a virtual system or virtual application pattern, the bottom line is that you are representing your middleware application environments as a single, directly deployable unit. When you need to stand that environment up, you simply deploy the pattern. The deployment encapsulates the installation, configuration, and integration of the environment, and your applications if you so choose. The benefit of this approach is that once you get your pattern nailed down, you can be confident that the initial configuration of your environments is extremely consistent from deploy to deploy. Basically, no more bad deployments because someone forgot to run configuration step 33 out of 100!
Because we talk about the benefits of consistency provided by our IBM Workload Deployer patterns, users often ask what IBM Workload Deployer does in terms of configuration governance for deployed environments. In other words, they ask how IBM Workload Deployer helps them to track configuration changes or compare the configuration of a deployed environment to a known good one. The honest answer is that this is a bit beyond the functional domain of the appliance. While IBM Workload Deployer does allow you to manage the deployed environment (apply fixes, update deployed applications, snapshot, etc.), it does not layer some of the common configuration governance concerns on top of that. However, there is a good reason why the appliance does not focus on that. It's because Rational Automation Framework for WebSphere does!
If you find yourself wanting to actively track configuration changes, periodically (and automatically, at specified intervals) compare configurations to a 'golden' baseline, import the configuration of a known good environment, or apply common configuration across a number of cells, then the capabilities of RAFW would likely be of interest to you. It can do all this and give you an incredible toolbox of out-of-the-box application deployment and configuration capabilities for WebSphere environments. In my mind, for those that spend a good deal of time dealing with WebSphere configuration, whether it be deploying applications, configuring containers, or debugging inadvertent changes, an examination of RAFW functionality is a must.
Now it is time for a bit of disclaimer/clarification. I am not suggesting that you pick one or the other when it comes to IBM Workload Deployer and RAFW. In fact, there are many scenarios where 1+1=3 with these two solutions, and I have written about it many, many times (including this article). That said, I think it is important to highlight the relative strengths of each product, so that it is easier to map them back to your pain points. In all honesty, many of the users I talk with have challenges in getting the initial configuration right AND managing it over time. That kind of problem calls for the integrated IBM Workload Deployer/RAFW solution.
Of course, technology only gets you so far when it comes to these kinds of problems. It would be disingenuous of me to suggest otherwise. It has always been and will continue to be important to establish clear and rigorous processes around the way you deploy, manage, and change environments. This just gives you an idea of some of the tools you can leverage to aid in the implementation of those processes.
One of the fundamental tenets of IBM Workload Deployer is a choice of cloud deployment models. Starting in v3.0, users will be able to deploy to the cloud using virtual appliances (OVA files), virtual system patterns, or virtual application patterns. The ability to provision plain virtual appliances is a way to rapidly bring your own images, as they currently exist, into the provisioning realm of the appliance. As such, I think the use cases and basis for deciding to use this deployment model are fairly evident. However, when comparing the two patterns-based approaches, virtual system patterns and virtual application patterns, the decision requires a bit more scrutiny.
Our pattern approach is a good thing for you, the user. Basically, when we refer to patterns in the context of cloud, we are referring to the encapsulation of installation, configuration, and integration activities that make deploying and managing environments in a cloud much easier. Regardless of what kind of pattern you end up using, you benefit from treating a potentially complex middleware infrastructure environment or middleware application as a single atomic unit throughout its lifecycle (creation, deployment, and management). In turn, you benefit from decreased costs (administrative and operational) and increased agility via rapid, meaningful deployments of your environments. That said, it is imperative to understand the differences between virtual system and virtual application patterns, and more importantly, what those differences mean to you. Let's start by considering the admittedly simple 'Cloud Tradeoff' continuum below.
In the above graph, the X-axis represents the degree to which you have customization control over the resultant environment. The degree of control gets lower as we move from left to right. The left Y-axis represents total cost of ownership (TCO), which decreases as we move up the axis. The right Y-axis represents time to value, which similarly decreases as we go up the axis. Naturally, enterprises want to move up the Y-axis, but, and it can be quite a big but, they are sometimes hesitant to relinquish much control (move to the right on the X-axis) in order to do so. In that light, I think it helps to explore our two patterns-based approaches a bit more.
The most important thing to understand about this continuum is that the X-axis really represents the customization control ability from the point of view of the deployer and consumer of the environment. An example is probably the best way to explain. Let's consider a fairly simple web service application that we want to deploy to the cloud. If we were to use a virtual system pattern to achieve this, we would probably start by using parts from the WebSphere Application Server Hypervisor Edition image to lay out our topology. We may have a deployment manager, two custom nodes, and a web server. After establishing the topology, we would add custom script packages to install the web service application and then configure any resources the application depends on. Users that wanted to deploy the virtual system pattern would access it, provide configuration details such as the WAS cell name, node names, virtual resource allocation, and custom script parameters, and then deploy. Once deployed, users could access the environment and middleware infrastructure as they always have. That means they could run administrative scripts, access the administrative console provided by the deployed middleware software, and do anything else one would normally do. The difference in using virtual system patterns is not necessarily the operational model for deployed environments (though IBM Workload Deployer makes some things, like patching environments, much easier). Instead, the difference is primarily in the delivery model for these environments. A sketch of this example pattern follows below.
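Here is a purely illustrative sketch of the virtual system pattern just described. The part types mirror the WebSphere Application Server Hypervisor Edition parts named above, while the script package names and structure are invented for readability:

```python
# Illustrative only: the web service virtual system pattern described
# above, expressed as data. Not the product's internal representation.
web_service_pattern = {
    "name": "Web service cell",
    "parts": [
        {"type": "Deployment Manager", "count": 1},
        {"type": "Custom Node", "count": 2},
        {"type": "Web Server", "count": 1},
    ],
    "script_packages": [
        # Installs the web service application into the cell.
        {"name": "install_web_service_app", "target": "Deployment Manager"},
        # Configures resources (e.g. data sources) the application needs.
        {"name": "configure_app_resources", "target": "Deployment Manager"},
    ],
}
```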
Using a virtual application pattern to support the same web service application results in a markedly different experience from both a deployment and management standpoint. In using this approach, a user would start by selecting a suitable virtual application pattern based on the application type. This may be one shipped by IBM, such as the IBM Workload Deployer Pattern for Web Applications, or it may be one created by the user through the extensibility mechanisms built into the appliance. After selecting the appropriate pattern, a user would supply the web service application, define functional and non-functional requirements for the application via policies, and then deploy. The virtual application pattern and IBM Workload Deployer provide the knowledge necessary to install, configure, and integrate the middleware infrastructure and the application itself. Once deployed, a user manages the resultant application environment through a radically simplified lens provided by IBM Workload Deployer. It provides monitoring and ongoing management of the environment in a context appropriate for the application. This means that there are typically no administrative consoles (as in the case of the virtual application pattern IBM ships), and users can only alter well-defined facets of the environment. It is a substantial shift in the mindset of deploying and managing middleware applications.
Okay, with that explanation in the bag, let's revisit the diagram I inserted above. I hope it's clear that, all things being equal, virtual application patterns indeed provide the lowest TCO and shortest time to value because of the degree to which they encapsulate the steps involved in setting up complex middleware application environments. So, let's get back to my assertion that the customization control continuum really applies to the deployer and consumer. Why do I say that? It's simple. In the case of either the virtual system pattern or the virtual application pattern, the pattern composer has quite a bit of liberty in how they construct things. Sure, we enable you right out of the chute by shipping pre-built, pre-configured IBM Hypervisor Edition images, as well as pre-built virtual system and virtual application patterns. The key, though, is that IBM Workload Deployer's design and architecture also enable you to build your own patterns, be they the virtual system or virtual application type. With anywhere from a little to a lot of work, you can build virtual system and virtual application patterns tailored to your use cases and needs.
At this point, you may be saying, "Well now you have really confused things! How am I supposed to decide what kind of patterns-based approach fits my needs?" I have some advice in that regard. First, map your needs to things that we enable with the assets you get right out of the box with IBM Workload Deployer. If your application fits into the functional scope of one of the virtual application patterns that we ship, use it. If you can support the application by using IBM Hypervisor Edition images, virtual system patterns, and custom scripts, do it. In this way, you benefit most from the value offered by IBM Workload Deployer. However, if you find that you cannot use any of the assets we provide right out of the box (e.g. you want to deploy your environment on software not offered in IBM Hypervisor Edition form or in a virtual application pattern), then ask yourself one simple question: "What do I want my user's experience to be?"
In this sense, I primarily mean a user to be a deployer or consumer of your patterns. You need to decide whether you favor the middleware infrastructure centric approach afforded by virtual system patterns, or if you prefer the application centric approach proffered by virtual application patterns. There is no way to answer this generically for all potential IBM Workload Deployer users. Instead, you have to look at your use case, understand what's available to help you accomplish that use case, and finally, decide on what you want your user's experience to be. I hope this helps!
Application-centric cloud computing is the main thrust behind the new capabilities of IBM Workload Deployer v3.0. But what does that really mean? After all, application-centricity is really just a concept. Granted, it is an important concept, but it is fairly meaningless until it is put into action or implemented. IBM Workload Deployer does just that with its new Virtual Application Patterns (VAPs).
VAPs are the embodiment of the workload pattern approach I briefly discussed in an overview post a few weeks back. The idea with a VAP is to give the user an interface through which they can provide their application, specify dependencies, declare functional and non-functional requirements, and then deploy. Of course application middleware is a part of the overall solution, but IBM Workload Deployer has the smarts to build, configure, and integrate the necessary infrastructure in order to support the user's application. This is completely hidden from the user, so they are liberated to focus on the application and its requirements.
If we scratch a bit further beneath the surface of a VAP, we see that these patterns contain three primary pieces: components, links, and policies. These pieces are fundamental to understanding how virtual application patterns work. Let's start with the building blocks of VAPs, components. Put simply, components represent the different resources and functionality profiles that make up your application environment. As an example, the IBM Workload Deployer Pattern for Web Applications is a VAP that contains components for an EAR file, WAR file, message queue, and any number of other typical requirements for a web application. The components will certainly vary based on the workload type (e.g. the components included in a web application VAP would be different from those included in a batch application VAP), but they are the foundation of any VAP.
From the ground up, the next logical element we come to in the VAP is a link. A link is a way to declare a dependency or integration point between two components. As an example, consider a VAP with a WAR file component and a database component. You might draw a link between the WAR component and the database component to indicate that your web application uses or otherwise depends on the database. IBM Workload Deployer interprets this link, and takes it as a directive to configure the integration between the two components as a part of deployment. In this case, that may mean configuring a data source in the application's container. This is just a simple example, and an application may have any number of links between components.
Finally, we come to the policy element within the VAP. A policy is a way for a user to specify functional and non-functional requirements for their application environment. Users attach policies to the VAP, or to components in their VAP, and IBM Workload Deployer interprets and enforces those policies. In the context of a web application, one example of a policy could be a scaling policy. The scaling policy might indicate scaling requirements for the application, including minimum application instances, maximum application instances, and conditions that trigger scaling activities. IBM Workload Deployer would use the information in a scaling policy within a VAP to appropriately manage the deployed, running environment. Other examples of policies include a JVM policy that provides configuration directives for the Java virtual machines in your application environment and a logging policy that defines logging configuration options. In any case, the policy element allows VAP builders to influence the configuration and management of the application environment.
In the example VAP below you can see the use of components (Enterprise Application, Database, User Registry, Messaging Service), links (blue lines between components), and policies (Scaling Policy, JVM Policy):
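To reinforce the structure, here is a purely illustrative sketch of that example expressed as data. The key names are assumptions chosen for readability, not the product's actual model schema:

```python
# Illustrative only: components, links, and policies from the example
# VAP above. Not the actual IBM Workload Deployer model format.
virtual_application = {
    "components": [
        {"id": "app", "type": "Enterprise Application", "archive": "orders.ear"},
        {"id": "db", "type": "Database", "name": "ORDERSDB"},
        {"id": "registry", "type": "User Registry"},
        {"id": "queue", "type": "Messaging Service"},
    ],
    "links": [
        # Each link is a directive to configure an integration point.
        {"from": "app", "to": "db"},        # e.g. create a data source
        {"from": "app", "to": "registry"},  # e.g. wire in security
        {"from": "app", "to": "queue"},     # e.g. define connection factories
    ],
    "policies": [
        {"on": "app", "type": "Scaling Policy",
         "min_instances": 2, "max_instances": 8},
        {"on": "app", "type": "JVM Policy", "max_heap_mb": 1024},
    ],
}
```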
In total, when I look at a VAP a particular word sticks out to me: declarative. VAPs really enable declarative, application-centric cloud computing. What do I mean? By declarative, I mean you are telling IBM Workload Deployer what you want, but not necessarily how you want it done. It is the job of IBM Workload Deployer to take care of the how. This shift in approach to application environments enables the potential for significant savings, and more importantly to me, lays the foundation for a more agile, flexible approach to deploying and managing application environments.
There will be more in the weeks and months to come on IBM Workload Deployer, so stay tuned. I also want to put a plug in for a new blog from Jason McGee. For those that do not know Jason, he is an IBM Distinguished Engineer, and the lead architect behind IBM Workload Deployer. Be sure to check out his blog for insights on this new offering, as well as for all things cloud.
Jason McGee will be leading the second GWC Lab Chat this week on Wednesday, 4/20. The very timely topic is related to recent announcements from IBM regarding the IBM Workload Deployer (see previous posts). Entitled "Application-Centric Cloud Computing", the discussion will focus on the concept of deploying and managing your application workloads in a shared, self-managed environment rather than manually creating and managing the application middleware topologies. It places the focus on the application rather than the infrastructure. This concept promises to deliver greater simplicity, elasticity, and density, among other things. It can position your business to react more quickly and efficiently to the increasing demands of your customers and free you from managing all of the details.
Many of you may have already heard Jason speak last week at IMPACT 2011 in the cloud mini main tent or perhaps at any number of other sessions that Jason was involved in. Jason is the key architect behind IBM's WebSphere cloud activities. Obviously, Jason understands the cloud space very well and has a clear view of the evolution into Application-Centric Cloud Computing. This GWC Lab Chat will provide the opportunity to get your questions answered and share your perspective on this technology.
Jason will provide a brief introduction to the concepts and ideas and then lead an open discussion. Put it on your calendar and plan to attend, and please bring your questions and comments to help foster a rich discussion. We want to hear from you.
If you haven't registered yet, it is not too late; learn more and register here. It is easy to register and there is no cost. This is a very timely event and a great way to dig a little more deeply into concepts you first heard at IMPACT, or perhaps hear them for the first time. Don't miss it!
IBM Impact 2011 was a wildly busy week! Customer meetings, entertaining keynotes, informative sessions, and hands-on labs packed the 6 days with more than enough action. I spent a lot of the week presenting sessions and conducting labs for the newly announced IBM Workload Deployer. As one would expect with any new announcement, we got tons of questions about IBM Workload Deployer. While I cannot capture all the questions and their answers here, I will try to cover some of the more prevalent ones below.
Question: What happened to WebSphere CloudBurst?
Answer: The short answer is that it simply went through a rename. WebSphere CloudBurst became IBM Workload Deployer v3.0. The version number, 3.0, acknowledges that this is an evolution of what we started with WebSphere CloudBurst, which was at version 2.0. Why remove WebSphere from the name? Because the offering is capable of deploying and managing more than just WebSphere software, the broader IBM branding is more accurate.
Question: What is new in IBM Workload Deployer?
Answer: While there are many new features that I will be talking about over the coming months, the most prominent new facet is the introduction of workload patterns (also referred to as virtual application patterns). As opposed to topology patterns (traditionally referred to simply as patterns in the WebSphere CloudBurst product), workload patterns raise the level of abstraction to the application level. Instead of focusing on application infrastructure and its configuration as you do with topology patterns, workload patterns allow you to focus on the application and its requirements. When using workload patterns, you provide the application, attach policies that specify functional and non-functional requirements, and deploy. IBM Workload Deployer handles deploying and integrating the middleware infrastructure necessary to support the application, and it automatically deploys your application on top of that middleware. In addition, IBM Workload Deployer manages the application runtime in accordance with the policies that you specify in order to provide capabilities such as runtime elasticity.
Question: If I am a current WebSphere CloudBurst user, what does this mean for me?
Answer: Not to worry. You will be able to use all of your WebSphere CloudBurst assets (patterns, scripts, images) in the new IBM Workload Deployer. All of the capabilities previously in WebSphere CloudBurst are present in IBM Workload Deployer (terminology may vary slightly; for instance, topology pattern instead of just pattern). Additionally, we continue to expand on the functionality that you are familiar with from WebSphere CloudBurst. This includes updates for Environment Profiles, new IBM Hypervisor Edition images, new pattern building capabilities, and more. Stay tuned for more information about these new features and for information on how you can move your WebSphere CloudBurst resources to the new IBM Workload Deployer.
Question: How do I choose between using workload and topology patterns?
Answer: There are a number of factors that will lead you to using either workload patterns, topology patterns, or both. The primary decision point will be how much control you really need (not want). When using workload patterns, you sacrifice some customization control over the configuration, integration, and administration of the middleware application environment since the workload pattern and management model abstracts away the 'guts' of the system. Everything about the workload pattern is application-centric. On the other hand, topology patterns give you intimate control over the configuration, integration, and administration of the middleware application environment. As a general rule of thumb, if your application requirements match the capabilities of a workload pattern, that is the way to go as it can greatly reduce complexity and cost associated with deployment and management. If a workload pattern does not meet the needs of your application, topology patterns can still greatly reduce cost and complexity and you can tailor them to fit almost any need. Beyond generalities, there is no hard and fast rule for choosing one over the other. It comes down to understanding your application environment and its needs.
Question: Is IBM Workload Deployer an appliance like WebSphere CloudBurst?
Answer: Yes, it is still an appliance, but an updated one! The new appliance is 2U, and it provides more storage, processing power, and memory. It is still just as easy to set up; it is just slightly bigger.
Well, that is all for now, but I will be back many times over the coming months with more information. In the meantime, if you have any questions, please leave them in a comment below.
One of the key benefits of WebSphere CloudBurst adoption is rapid (seriously fast) deployments of middleware application environments. Our users are leveraging the appliance to bring up enterprise-class middleware environments in mere minutes. If you know a little bit about WebSphere CloudBurst, that claim may be a little surprising considering the appliance dispenses large virtual images over the network to a farm of hypervisors. You may ask how the appliance can achieve such rapid deployments in light of the mere physics involved in transferring large amounts of data over a network. The simple answer is caching, of course!
WebSphere CloudBurst creates a cache for each unique virtual image on the datastores associated with the hypervisors in your cloud. On subsequent deployments of the same virtual image to the same datastore, WebSphere CloudBurst does not need to transfer the image over the wire. It simply uses the virtual disks that are in the cache on the datastore. In the context of the virtual image cache, the deployment process goes something like this (a simplified sketch of the logic follows the steps below):
1. WebSphere CloudBurst identifies the images necessary to deploy the pattern selected by the user.
2. WebSphere CloudBurst identifies the hypervisors and associated datastores that will host the virtual machines created during deployment.
3. WebSphere CloudBurst checks the selected datastores to see if they already have caches for the images it will be deploying. From here, one of two things happens:
   - WebSphere CloudBurst detects that there is no cache on the datastore and transfers the images over to the hypervisor, thereby creating the cache on the underlying datastore.
   - WebSphere CloudBurst detects that there is a cache on the selected datastore and uses that cache in lieu of transferring the disk over the wire.
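Here is a minimal, self-contained sketch of that cache-or-transfer decision. The class and method names are illustrative, not product internals:

```python
# Sketch of the per-datastore image cache logic described above.
class Datastore:
    def __init__(self, name):
        self.name = name
        self.cache = {}  # image id -> cached virtual disks

    def disks_for(self, image_id, transfer_over_wire):
        if image_id not in self.cache:
            # Cache miss: push the image over the network once, priming
            # the cache for every later deployment to this datastore.
            self.cache[image_id] = transfer_over_wire(image_id)
        # Cache hit (or freshly primed): reuse the local virtual disks
        # instead of re-transferring the image.
        return self.cache[image_id]

ds = Datastore("esx01-datastore1")
disks = ds.disks_for("was-he-70", lambda img: [f"{img}-disk1.vmdk"])
```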
The process may sound complicated, but it is completely hidden from you, the user. You do not need to know how the cache works since WebSphere CloudBurst handles all of these interactions. So, why am I telling you all of this then? As a WebSphere CloudBurst user, it is good to be aware of the cache for two main reasons. First, you need to account for the storage space the cache needs when doing capacity planning for your WebSphere CloudBurst cloud. Second, anytime you upload a new image or create one through extend and capture, I would strongly suggest you prime the cache for the new image. You can do this by simply deploying a pattern built on the image to each unique hypervisor/datastore combination in your environment. This may require a temporary rearrangement of cloud groups, but it is a simple process, and it guarantees rapid deployments for all users of the new image.
I hope this sheds a little light on a subject we do not discuss too often. As always, if you have any questions, do not hesitate to let me know!
I hardly ever have a conversation about WebSphere CloudBurst, or generally cloud computing for application middleware, without the topic of databases coming up. Databases are such an important piece of nearly every application middleware environment that users want to be sure that whatever they do for their application servers, they can also do for the databases on which their applications rely. That is why the capability to deploy DB2 from WebSphere CloudBurst has been around for nearly as long as the capability to deploy WebSphere Application Server.
Even though DB2 deployment capability has been around for a while, there are still some common misconceptions regarding the offering. First, I have talked to a fair number of users who are under the impression that we only offer a trial version of DB2 for deployment via WebSphere CloudBurst. While that was true for the first few months of the offering, it is no longer the case. For several months now, a fully supported, 64-bit, production-ready DB2 image has been available for use in WebSphere CloudBurst. If you were waiting for a DB2 image that you could go live with, wait no longer!
The other misconception, or rather, point of confusion, arises from the fact that the DB2 image for WebSphere CloudBurst is not, by name, a Hypervisor Edition image. I can assure you that is in name only. The DB2 image looks like and behaves like any other IBM Hypervisor Edition image once you load it into the appliance. You can use it to build and deploy patterns in the same way you use other images in WebSphere CloudBurst. You may just have trouble finding it if you search for 'DB2 Hypervisor Edition' as opposed to 'DB2 Server for WebSphere CloudBurst Appliance.'
Instead of going into further detail, I want to refer you to a blog entry from a fellow IBMer, Leon Katsnelson. Leon is a program director for DB2 and is responsible for the team that develops and delivers the DB2 image for WebSphere CloudBurst. In his most recent post, he provides a nice overview of the image and gives good information for those looking to use DB2 and WebSphere CloudBurst (there is also a bit on cloud computing at the beginning that I think is spot on). Check out Leon's post, and let us know what you think!
In last week's post, I put the spotlight on various aspects of bundles in the Image Construction and Composition Tool. I finished with a look at a WebSphere CloudBurst virtual image created from the bundle. However, you do not just magically go from a bundle to an image that you can use in WebSphere CloudBurst, Tivoli Provisioning Manager, or on the IBM Cloud. Today, I want to show you how to go from a bundle to a custom virtual image using the IBM Image Construction and Composition Tool.
Once you have defined at least one bundle and one base operating system image, you are ready to compose a custom image. We already talked about creating a bundle, but the base operating system image is a new topic. You can define a base image either by starting from an ISO and kickstart configuration files, or by importing an existing Open Virtual Appliance (OVA) image that contains your operating system of choice. Once you have that base image imported or defined in the Image Construction and Composition Tool, you can extend it to create a custom image on top of the base OS.
After creating your extended image, you can add bundles that represent the software you want to install in your custom image. Simply click on the Software tab of the new virtual image. Click the add icon, and select the bundle that you want to add. You can add as many bundles as you would like to your custom image.
After you add a bundle, it will show up in the Planned list of software for the image. Click on it to display its details on the right side of the screen. You will notice General, Install, and Configuration sections for the bundle. In the Install section, you will find a list of the installation parameters you defined for the bundle. You can provide values for these parameters at this time.
If you click on the Configuration section, you will see all of the configuration parameters you specified for the bundle. You can provide default values, and you can specify whether or not these should be configurable by deployers of your custom image. If you mark them as configurable, users will be able to provide values for the parameters at image deploy time, regardless of whether they provision the image using WebSphere CloudBurst, Tivoli Provisioning Manager, or the IBM Cloud. The sketch below illustrates the install-time versus deploy-time distinction.
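Here is a purely illustrative sketch of that distinction between install-time and deploy-time parameters. The field names are assumptions for readability, not the tool's actual format:

```python
# Illustrative only: install parameters are fixed when the image is
# synchronized and captured; configuration parameters marked as
# configurable can be overridden by deployers at deploy time.
bundle_software_entry = {
    "bundle": "example-web-server",
    "install_parameters": {
        "install_root": "/opt/example-web-server",  # fixed at build time
    },
    "configuration_parameters": {
        # Configurable, so deployers can override these from WebSphere
        # CloudBurst, Tivoli Provisioning Manager, or the IBM Cloud.
        "http_port": {"default": 8080, "configurable": True},
        "admin_user": {"default": "admin", "configurable": True},
    },
}
```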
After you add the necessary bundles and specify installation and configuration data, you can save the image. Upon saving, the image status changes from Synchronized to Out of Sync.
Now you are ready to synchronize the image. To do this, simply click the synchronize icon. This will result in the creation of a virtual machine in the cloud environment (VMware or IBM Cloud) you defined in the selected cloud provider. The Image Construction and Composition Tool will then invoke the appropriate installation tasks (per the bundles you included in the image) within the running virtual machine. It will also copy over any configuration scripts you defined in the bundle.
After a while, the synchronization process completes, and the image returns to the Synchronized state. At this point, you are ready to capture the image by clicking the capture icon. This results in the creation of an OVA virtual image with your customizations. When the capture process completes, the image status changes to Deployable.
Once the image is in the deployable state, it is nearly ready to use. If you are using the IBM Cloud as your cloud provider, you can simply mark the image complete by clicking the complete icon. At this point, the image will show up in your private catalog on the IBM Cloud and it is ready to use. If you are using VMware as the cloud provider, you need to export the image. Click the export icon and provide information about an SCP-enabled server to which you want to export the image. Ideally, this location is directly reachable by the WebSphere CloudBurst or Tivoli Provisioning Manager environment into which you will import the image.
You can monitor the export status in a separate window by clicking on a link shown after clicking the OK button in the dialog above. When the export finishes, you are ready to import your new custom virtual image into WebSphere CloudBurst or Tivoli Provisioning Manager.
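To recap the flow from the last few paragraphs, here is the image lifecycle sketched as a simple transition table. The state names follow the tool's UI as described above; the representation itself is just illustrative:

```python
# Recap of the image lifecycle walked through in this post.
TRANSITIONS = {
    ("Synchronized", "add bundles and save"): "Out of Sync",
    ("Out of Sync", "synchronize"): "Synchronized",
    ("Synchronized", "capture"): "Deployable",
    ("Deployable", "complete (IBM Cloud) or export (VMware)"): "Ready to use",
}

for (state, action), next_state in TRANSITIONS.items():
    print(f"{state} --[{action}]--> {next_state}")
```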
I hope the last three posts have given you a better idea of what the new IBM Image Construction and Composition Tool is all about. There will definitely be more to come about this tool in the near future, but in the meantime, if you have any questions or comments, please reach out to me. Until then, good luck and full speed ahead on your custom image compositions!
In previous posts, I have discussed the integration capability between WebSphere CloudBurst and Tivoli Service Automation Manager (TSAM). Most recently, I discussed this in the context of integrating WebSphere CloudBurst and IBM CloudBurst. Today, I am happy to announce the publication of an article I co-wrote with Marcin Malawski from TSAM development on the subject of this integration.
If you are a WebSphere user interested in a holistic approach in building out a private cloud, I strongly recommend that you check the article out. If you are currently an IBM CloudBurst, IBM Service Delivery Manager, or Tivoli Service Automation Manager user and you provision a significant number of WebSphere environments, I strongly recommend that you check the article out. In fact, regardless of your current situation, do me a favor and check the article out!
As always, I look forward to feedback and comments. Good, bad, or indifferent. You can leave your comments here or on the article page. I look forward to hearing from you!
Though I feel like we've come a long way in clearing up some of the initial confusion surrounding IBM CloudBurst and WebSphere CloudBurst, I still get quite a few basic questions about the solutions. The two most common questions are 'Are they different products?' and 'Can/should I use them together?'. I put together a really brief overview that answers these questions and talks about the basics of the combined solution. I hope it provides a good introduction!