When we talk about the WebSphere Application Server Hypervisor Edition, we often get a lot of questions about whether or not SUSE Linux is the only flavor of the Linux operating system that we support. The short answer to that question is no.
While it is true that we only deliver the WebSphere Application Server Hypervisor Edition with a SUSE Linux operating system, we support the use of the virtual image packaging with Red Hat Enterprise Linux as the base operating system. The basic process consists of creating a virtual machine disk based on a suitable Red Hat install, altering the OVF file in the WebSphere Application Server Hypervisor Edition to reference this virtual disk instead of the SUSE virtual disk, and then packaging a new OVA file that contains all the same WebSphere virtual disks (profiles, binaries, IBM HTTP Server) but swaps out the SUSE virtual disk for the Red Hat virtual disk. We have done this many times in both the lab and the field, and we offer services to users who need help creating the image.
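To make the repackaging step concrete, here is a minimal Python sketch of the disk swap, relying only on the fact that an OVA is a tar archive containing an OVF descriptor, a manifest of checksums, and the virtual disks. The file names, and the assumption that a simple string substitution in the OVF is sufficient, are illustrative rather than the exact Hypervisor Edition packaging.

```python
import hashlib
import pathlib
import tarfile

def repack_ova(ova, old_disk, new_disk, out):
    """Unpack an OVA (a tar of OVF descriptor, manifest, and disks), point
    the OVF at a replacement OS disk, regenerate checksums, and repack."""
    work = pathlib.Path(out).parent / "ova-work"
    work.mkdir(exist_ok=True)
    with tarfile.open(ova) as t:
        t.extractall(work)

    # Drop in the replacement OS disk and remove the original one
    new_disk = pathlib.Path(new_disk)
    (work / new_disk.name).write_bytes(new_disk.read_bytes())
    (work / old_disk).unlink()

    # Point the OVF descriptor at the new disk
    ovf = next(work.glob("*.ovf"))
    ovf.write_text(ovf.read_text().replace(old_disk, new_disk.name))

    # Regenerate the manifest so the checksums match the new contents
    files = [ovf] + sorted(work.glob("*.vmdk"))
    mf = work / (ovf.stem + ".mf")
    mf.write_text("".join(
        f"SHA1({f.name})= {hashlib.sha1(f.read_bytes()).hexdigest()}\n"
        for f in files))

    # Repack; the OVF descriptor must be the first entry in the archive
    with tarfile.open(out, "w") as t:
        t.add(ovf, arcname=ovf.name)
        t.add(mf, arcname=mf.name)
        for f in sorted(work.glob("*.vmdk")):
            t.add(f, arcname=f.name)
```

In practice you would hand this the shipped Hypervisor Edition OVA, the name of the SUSE disk inside it, and the path to your Red Hat disk.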
Customers often ask if there is any difference in using Red Hat versus SUSE Linux. The answer is, of course, yes and no. The answer is yes in that users must bring their own licenses of Red Hat (SUSE Linux licenses are included in the WebSphere Application Server Hypervisor Edition), and users must support and maintain the Red Hat operating system on their own. However, once the image is built, there is absolutely no difference in the use of that image within WebSphere CloudBurst.
Once built, users upload the image into their WebSphere CloudBurst catalog, and it is available for use in pattern building just like any other image. I mentioned that users are responsible for updating and maintaining the image; conveniently, they can use WebSphere CloudBurst to create those updated images. When patches or updates are ready for the Red Hat operating system, the Extend/Capture facility available for images in WebSphere CloudBurst can be used to create a new custom Red Hat image with your desired fixes, all without ever having to manually recreate and repackage the image.
I know seeing is believing, so with respect to the "sameness" of using a Red Hat version of the WebSphere Application Server Hypervisor Edition within WebSphere CloudBurst, I've created a short demo you can watch here. As always, let us know what you think and send any questions our way.
I’m going to take a different approach this week in the blog. Instead of me telling you about some of the features or uses of WebSphere CloudBurst, I thought I would catch up with someone using the product every day, WebSphere Test Architect Robbie Minshall. Robbie is responsible for a team of testers that harnesses a lab of over 2,000 physical machines to put our WebSphere Application Server product through some pretty rigorous testing. Toward the beginning of this year, Robbie’s team started to leverage the WebSphere CloudBurst Appliance to create the WebSphere Application Server environments needed for their testing.
Robbie, can you tell us a little bit about what the WebSphere Application Server test efforts entail?
In WebSphere Application Server development and test we have two primary scenarios. The first is making sure that developers have rapid access to code, test cases and server topologies so that they can write code, test cases and then execute test scenarios on meaningful topologies. The second scenario is an automated daily regression where in response to a build, we provision a massive amount of WebSphere Application Server topologies and execute our automated regression tests.
Previously, we supported these scenarios through the deployment of Tivoli Provisioning Manager for operating system provisioning, some applications for checking out environments, and a lot of automation scripts for the silent installation and configuration of WebSphere Application Server cells.
Given those scenarios and the existing solution, what are your motivations for setting up a private cloud using WebSphere CloudBurst Appliance?
We are supporting these scenarios through a pretty complicated combination of technologies: silent WebSphere Application Server install scripts, wsadmin configuration scripts, a custom hardware leasing application, and Tivoli Provisioning Manager for OS provisioning. This solution is working very well for us, though as always we are looking for areas to improve, opportunities to simplify, and ways to reduce our dependency on our custom automation scripts. Mainly, there were three areas where we wanted to improve our framework: availability, utilization, and management. This is why we started looking to the WebSphere CloudBurst Appliance.
Can you expand a bit on what you are looking for in those three areas?
The first focus area we have is availability of environments. We really wanted to lower the entry requirement for the skills and education necessary to get a development or test environment. Setting up these environments has just been too hard, too time consuming, and too error prone. Using WebSphere CloudBurst we can provide an easy push button solution for developers to get on-demand access to the topologies they need.
The second area we are looking for significant improvements in is hardware utilization. Our budgets are tight, and in our native automation pools we are only using between 6-12% of the available physical resources. In order to improve this, we looked at leveraging virtualization. WebSphere CloudBurst offers the classic benefits of virtualization with the nice additions of optimized WebSphere Application Server placement and really good topology and pattern management. In our initial experiments we were able to push hardware utilization up to 90% of physical capacity, and we consistently leveraged around 70% of our physical capacity.
Finally we are looking to improve and simplify our management of physical resources and automation. We work in a lot of small agile teams and organizational priorities change from iteration to iteration. Not only does WebSphere CloudBurst allow us to maintain a catalog of topologies or patterns for releases but it also allows us to adjust physical resource allocation to teams through the use of sub clouds or cloud groups.
Basically we felt that WebSphere CloudBurst would improve the availability of application environments, enhance automation, and improve hardware utilization all with very low physical and administrative costs.
What were some of the challenges involved with getting a cloud up and running in your test department?
One of our challenges seems like it would be common to many scenarios, especially in today’s world. Our budget for new hardware to build out our cloud infrastructure was initially very limited. Most cloud infrastructure designs depict very ideal hardware scenarios including SANs, large multicore machines, and private and public networks within a dedicated lab. Quite frankly we did not have the budget to create this from scratch. It was important for us to demonstrate value and data to warrant future investment in dedicated infrastructure. After some performance comparisons we were very happily surprised to see that we could leverage our existing mixed hardware within a distributed cloud. The performance of application environments dispensed by WebSphere CloudBurst on many small existing boxes in comparison to large multicore machines with a SAN was very comparable. This allows us to leverage existing hardware, with minimal investment all the while demonstrating the value and efficiencies of cloud computing. That data in turn has allowed us to obtain new dedicated hardware to iteratively build up a larger lab specifically for use with WebSphere CloudBurst.
Specifically with WebSphere CloudBurst, are there any tips/hints you would offer users getting started with the appliance?
Sure. First, we quickly realized as we added hypervisors to our WebSphere CloudBurst setup that it was critical to have someone with network knowledge on hand. This is because the hypervisors came from various sections of our lab, and we really needed people who knew how the network operated in those different sections. Once we had the right people, we were able to set up WebSphere CloudBurst and deploy patterns within an hour and a half.
Moving forward, we continued to have challenges as we dynamically moved systems between our native hardware pool and our cloud. Occasionally the WebSphere CloudBurst administrator would move a system into the cloud but incorrectly configure its network or storage information, and these misconfigured hypervisors polluted our cloud. We overcame this, quite simply and satisfactorily I may add, with some simple WebSphere CloudBurst CLI scripts that add the hypervisors, test each one individually by carrying out a small deployment to it, and then move only the correctly configured hypervisors into the cloud after verifying success. Misconfigured hypervisors go into a pool for problem determination. This has allowed us to maintain a clean cloud, and we are able to dynamically move our hardware in and out of the cloud to meet our business objectives.
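The gating flow Robbie describes can be sketched as follows. The real scripts drive the WebSphere CloudBurst CLI (a Jython shell); here the appliance operations are injected as plain callables with hypothetical names, so only the quarantine logic itself is shown.

```python
def vet_hypervisors(candidates, add, test_deploy, move_to_cloud, quarantine):
    """Add each hypervisor, verify it with a small test deployment, and move
    only the ones that pass into the cloud; failures go to a separate pool
    for problem determination."""
    healthy, suspect = [], []
    for hv in candidates:
        add(hv)                      # register the hypervisor with the appliance
        if test_deploy(hv):          # small throwaway deployment as a probe
            move_to_cloud(hv)
            healthy.append(hv)
        else:
            quarantine(hv)           # park it for problem determination
            suspect.append(hv)
    return healthy, suspect
```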
We also use the WebSphere CloudBurst CLI to prime the cloud, so to speak. Before using a given hypervisor in our cloud, we execute scripts that ensure each unique virtual image in our catalog has been deployed to each of our hypervisors at least once. When an image is first deployed to a hypervisor, a cache is created on the hypervisor side of the connection, meaning subsequent deployments do not require the entire image to be transferred over the wire. This gives us consistent, fast deployment times once a hypervisor is in use in our cloud.
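The priming pass is just a nested loop over the catalog and the hypervisor pool; again, the deploy operation is injected as a callable rather than the real CLI call, and the names are illustrative.

```python
def prime_cloud(images, hypervisors, deploy):
    """Deploy every unique catalog image to every hypervisor once, so the
    hypervisor-side image cache is warm before the hypervisor joins the
    cloud. The first deploy pays the full transfer cost; later deploys of
    the same image to the same hypervisor hit the cache."""
    for hv in hypervisors:
        for image in images:
            deploy(image, hv)
```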
I would assume that like many applications deployed on WebSphere Application Server, your team’s applications have several external dependencies. Some of these dependencies won’t necessarily be in the cloud, so how did you handle this?
You’re right about the external dependencies. Our applications and test cases run on WebSphere Application Server but depend on many external resources such as databases, LDAP servers, external web services, and so on. WebSphere CloudBurst allows us to deploy WebSphere Application Server topologies in a very dynamic and configurable way, but the 1.0.1 version does not allow us to deploy these external resources in the same manner. We overcame this by using script packages in our patterns. These script packages allow us to associate our test applications with the various patterns we have defined. The script package definition also allows us to pass parameters into the execution of our scripts. We supply these parameter values at deploy time, and the values convey the names or locations of the various external resources. The scripts that install our applications can access these values and ensure the application is properly integrated with the set of resources not managed by the appliance.
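As an illustration, the fragment below sketches how such an install script might pick up those deploy-time values, assuming the script package parameters arrive as environment variables; the variable names and defaults here are hypothetical, not fixed appliance conventions.

```python
import os

def external_resources(env=os.environ):
    """Collect the external endpoints supplied at deploy time, falling back
    to lab defaults, so the install step can wire the application to the
    resources the appliance does not manage."""
    return {
        "db_host":  env.get("DB_HOST", "db.lab.example.com"),
        "db_port":  int(env.get("DB_PORT", "50000")),
        "ldap_url": env.get("LDAP_URL", "ldap://ldap.lab.example.com:389"),
    }
```

A real script package would then feed these values to wsadmin to define data sources, federated repositories, and similar configuration in the newly deployed cell.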
What is your team looking to do next with WebSphere CloudBurst and their private cloud?
The next challenge on our plate is to keep up with the demand of our expanding cloud and to develop a more dynamic relationship between our native pools and our cloud using the Tivoli Provisioning Manager. These are fun challenges to have and we look forward to sharing our progress.
I'm glad I got to spend some time with Robbie to glean some insight into their work and progress with WebSphere CloudBurst. I hope this information was useful to you. It's always nice to hear about a product from practitioners who can give you hints, tips, gotchas, and other useful information. Be sure to let me know if you have any questions about what Robbie and his team are doing with WebSphere CloudBurst.
The Impact conference disappeared into our collective rearview mirror about as fast as it arrived. It seems like a blur. In a word, the conference was... exhausting! In a few more words, it was informative, exciting, and illuminating. I hope that many of you had a chance to make it out there, and I hope more of you make it to Impact in 2013.
For those of you familiar with the conference, you know that it is typically a launching ground for new product versions and altogether new products. This year was certainly no different with the launch of the new version of WebSphere Application Server (8.5), the new and improved IBM Business Process Manager and IBM Operational Decision Manager, a new version of WebSphere eXtreme Scale (8.5), and numerous updates across the messaging and connectivity stack. While I encourage you to follow up on all of these important announcements, they are not what I am going to focus on today. Instead, I am going to focus on the new addition to the IBM family that got plenty of attention this year: IBM PureApplication System.
Joe recently touched on this new offering, so I won't get into an exhaustive overview. To put it briefly, IBM PureApplication System is an expert integrated system. What does that mean? First and foremost it means that it is a system -- a whole. It is an integrated platform of hardware and software, optimized and tuned for running transactional web and database workloads. I do not mean that it is a system of software that we pre-install on off-the-shelf hardware. Rather, it is the result of hardware and software engineers across IBM working together to build a system that is expert at what it does. More than just the web application and database software though, IBM PureApplication System also contains pre-installed and pre-configured management software that delivers a soup to nuts (hardware to application) single pane of glass for managing the entire system. I could go on and on, but again that's not my purpose here. I encourage you to check out the new IBM PureSystems web page for more information and some pretty cool videos.
For those of you who take a look at IBM PureApplication System, you will quickly find that the notion of pattern-based deployments (something I have talked about at length on this blog) plays a key role in the new system. In fact, the same virtual system and virtual application pattern constructs that you have come to know in IBM Workload Deployer are front and center in IBM PureApplication System as well. In the new system, you can build custom virtual system and virtual application patterns, deploy them to your cloud, and then manage them over time. If you are familiar with the IBM Workload Deployer user interface, you will likely find the interface of IBM PureApplication System immediately familiar. Given all of that, if you are like many of the users I talked to at Impact and since, you probably have some questions about IBM Workload Deployer and this new system. Most commonly, I get these two questions: "What does this mean for the IBM Workload Deployer product?" and "How do I know when to use IBM PureApplication System versus IBM Workload Deployer?" Let me do my best to address those questions.
In terms of the impact of the IBM PureApplication System on the IBM Workload Deployer offering, I can only view it in one way: affirmation. As I said above, IBM PureApplication System puts the mode of pattern-based deployments front and center, and further affirms that this kind of approach is crucial to the evolution of application delivery and management. Those of you familiar with IBM Workload Deployer or its predecessor WebSphere CloudBurst know that we have been talking about patterns for years. Rest assured we will continue to talk about patterns and solutions for building, deploying, and managing them. As it stands, we have at least three ways for you to build, deploy, and manage patterns: IBM SmartCloud Application Services, IBM Workload Deployer, and IBM PureApplication System. As you can see, options for consuming patterns have only increased since the initial launch of WebSphere CloudBurst. Furthermore, if you were at Impact, you know that we have a vibrant and vocal community of IBM Workload Deployer users, and I hope to see that community continue to grow! As I see it, the core technology of IBM Workload Deployer is becoming our 'operating system' for cloud platform management.
The question of when to use IBM Workload Deployer versus IBM PureApplication System is one whose answer is a bit more nuanced and not something one can or should try to definitively answer in a blog post. One thing I do suggest, though, is that when evaluating these two technologies, it is important to acknowledge that they have different business value propositions. Sure, they share common core technology in terms of building, deploying, and managing pattern-based environments, but beyond that they diverge a bit. Remember, IBM PureApplication System is, well, a system. It is the hardware, software, and management technology you need to run your middleware application workloads. It is pre-built and pre-integrated to the point that it only requires you to roll it into your datacenter, hook it up to your network, and do some one-time configuration. The aim is to go from receipt of the system to up and running with your first deployment in four hours, and if you were at Impact you saw an amusing video with the chief architect (Jason McGee) that proves this claim.
IBM Workload Deployer is fundamentally different in terms of how you consume it and how it integrates with your infrastructure. Most notably, IBM Workload Deployer does not include optimized hardware (servers, storage, networking) for running your workloads or a single point of management for everything from hardware to applications. To use IBM Workload Deployer you attach it to your network and point it at existing virtualized servers. Simply put, IBM Workload Deployer assumes you have existing, under-utilized hardware that you can get more out of with the intelligent deployment and management approach the appliance delivers. While you do not get the pre-integrated and optimized system of hardware plus software, you do get the flexibility to use your existing infrastructure.
As you can see, there are similarities (patterns) and differences (whole system vs. management system), and the result is a pretty different set of value propositions. The key in evaluating these technologies is that you do so with a crisp understanding of your current needs AND your future plans for growth and evolution. I know this kind of advice is a bit generalized, but I hope the differences I discussed above help you to at least understand the capabilities of the two different offerings. As always, if you have any comments or questions, please reply to the post!
When I talk with WebSphere CloudBurst users, the topic of custom virtual images comes up frequently. In some cases they simply want to customize a shipped IBM Hypervisor Edition, and in other cases they want to create a completely custom image. Creating a customized version of an IBM Hypervisor Edition is relatively easy since we give you extend and capture in WebSphere CloudBurst. Creating a completely custom image has historically been a bit tougher, mostly owing to the fact that there was not a standard tool or process for image assembly. I am happy to say that today's publication of the IBM Image Construction and Composition Tool changes all that.
Watch a demo of the IBM Image Construction and Composition Tool
The primary purpose of the Image Construction and Composition Tool is to enable a modular approach to virtual image construction, while taking into account the typical division of responsibilities within an organization. The tool allows the right people within an organization to contribute their specialized knowledge as appropriate to the virtual image creation process. This means OS teams can handle the OS and software teams can handle the appropriate software. A separate image builder can then use both OS and software components to meet the needs of users within the organization. Best of all, the image builder does not need intimate knowledge of how to install or configure any of the components in the image. They simply need to know which OS and software components to use.
When using the Image Construction and Composition Tool, you start by defining the base operating system you wish to use for your images. You can do this by importing an existing virtual image with an OS already installed, providing an ISO for the OS, or pointing to a base OS image on the IBM Cloud. The bottom line is that you have necessary flexibility to start with your certified or ‘golden’ operating system build. Once you have the base OS image defined in the Image Construction and Composition Tool, you can start defining custom software for use in the images you will compose.
In the tool, bundles represent the software you wish to install within a virtual image. The definition of a bundle contains two major parts: Installation and Configuration. The installation component of a bundle tells the Image Construction and Composition Tool how to install your software into the virtual image. You provide a script or set of scripts that install the necessary components into your image, and you direct the tool to call these scripts. These tasks run once during the initial creation of the virtual image, thus allowing you to capture large binaries, long-running installation tasks, or other necessary actions directly into your image.
The configuration section of a bundle defines actions that configure the software installed into the image. Like with the installation tasks, you provide a script or set of scripts for configuration tasks. Unlike installation tasks that run exactly once, configuration scripts become part of the image’s activation framework and as such, run during each image deployment. Using the tool, you can define input parameters for configuration scripts and optionally expose them so that users can provide values for the parameters at image deploy-time. Configuration tasks are important in providing flexibility that allows users to leverage a single virtual image for a number of different deployment scenarios.
Once you have your base OS image and one or more bundles defined in the Image Construction and Composition Tool, you can compose a virtual image. To compose a virtual image, you extend the base OS image and add any number of bundles into the new image. A base OS image plus a set of bundles defines a unique image.
After you define the image you want to construct, you initiate a synchronize action in the Image Construction and Composition Tool. When you start the synchronize action, the tool first creates a virtual machine in either a VMware or IBM Cloud environment (based on how you configured the tool). Next, the installation tasks of each bundle you included in the virtual image run to install the required software. Finally, the tool copies the configuration scripts from each bundle into the virtual machine and adds them to the image’s activation framework. This ensures the automatic invocation of all configuration scripts during subsequent image deployments.
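A toy model can make the lifecycle split concrete: install scripts run once at synchronize time, while configuration scripts are captured into the image's activation framework and run on every deployment. The bundle structure and function names below are illustrative, not the Image Construction and Composition Tool's actual metadata format, and the create-VM step is elided.

```python
def synchronize(bundles, run):
    """Run each bundle's install scripts exactly once (as happens during
    synchronize) and return the configuration scripts that get captured
    into the image's activation framework."""
    for bundle in bundles:
        for script in bundle["install"]:
            run(script, {})                          # one-time install step
    return [b["configure"] for b in bundles]         # baked into the image

def deploy(activation_scripts, run, overrides=None):
    """On every deployment, run each captured configuration script with its
    default parameters, overridden by any deploy-time values the user gives."""
    for conf in activation_scripts:
        params = {**conf["parameters"], **(overrides or {})}
        run(conf["script"], params)

# A hypothetical bundle: one install action, one parameterized config action
bundle = {
    "install": ["install_agent.sh"],                 # runs once, at sync
    "configure": {"script": "configure_agent.sh",    # runs on every deploy
                  "parameters": {"AGENT_PORT": "9999"}},
}
```

The point of the split is visible in the model: the expensive install happens once and is captured into the disk image, while the cheap, parameterized configuration is replayed for each deployment.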
Once the image is in the synchronized state, you can capture it. Capturing the image results in the creation of a virtual image based on the state of the synchronized virtual machine. The tool also automates the generation of metadata that becomes part of the virtual image package. When the capture of the virtual image completes, you can export it from the Image Construction and Composition Tool and deploy it using WebSphere CloudBurst, Tivoli Provisioning Manager, or the IBM Cloud.
I am excited for users to get their hands on the Image Construction and Composition Tool. I believe it represents the first big step in helping users to design and construct more sustainable virtual images. Did I mention it is completely free to download and use? Visit the Image Construction and Composition Tool website for more details and a download link. I look forward to your comments and feedback.
One of the things I haven't written about much here is how the WebSphere CloudBurst Appliance integrates with other IBM software solutions. One of those interesting integration scenarios, and one I think is particularly useful for developers, involves Rational Build Forge.
Very simply put, Rational Build Forge is an adaptive execution framework that allows users to define completely automated workflows for just about any purpose. These workflows are represented as projects that contain a discrete number of steps. When looking at Rational Build Forge through the software assembly prism, the offering allows users to fully automate and govern the process of building, assembling, and delivering software into an application environment.
Now, on to the integration of WebSphere CloudBurst and Rational Build Forge. Users can build custom patterns in WebSphere CloudBurst that include a special script package (which I'll eventually provide a link to from here). This script package provides the glue between the deployment process in WebSphere CloudBurst and Rational Build Forge. When deploying a WebSphere CloudBurst pattern that contains this script package, users provide the name of a Rational Build Forge project as well as information about the Rational Build Forge server on which the project is defined.
Once the necessary information is supplied, the deployment process gets underway. Toward the end of the deployment, like all other scripts included in patterns, the special Rational Build Forge script is invoked. This results in the project specified during deployment being executed on the virtual machine created by WebSphere CloudBurst.
Because the Rational Build Forge project executes on a virtual machine setup by WebSphere CloudBurst, the individual steps of the project can very easily access the WebSphere Application Server environment. Thus, the Rational Build Forge project could very easily contain steps to build, package, and deploy an application into the WebSphere Application Server cell. The result is a fully automated process that includes everything from standing up the application environment to delivering applications into that environment.
I put together a short demonstration of this integration, and you can take a look at it here. As always, please let us know if you have any questions or comments. Your feedback is much appreciated!
Lately, I have run into multiple situations where an IBM Workload Deployer user has been trying to decide exactly how they want to create their customized images for the cloud. Essentially, they have been trying to decide whether to use the native extend and capture capabilities of IBM Workload Deployer, or to pursue the use of the Image Construction and Composition Tool (also included with the appliance). The conversations have been interesting and challenging, but more importantly, they have been a reminder that constructing enterprise-ready environments for the cloud does not happen by magic. It takes thought, deliberate planning, sustainable design, and the tools to carry everything out.
The tools part we have covered. I have every confidence, bolstered by user experience after user experience, that IBM Workload Deployer and associated tools (like the Image Construction and Composition Tool) equip you to build highly customized, cloud-based application environments. In this post, I want to focus in on the thought process that goes into how you decide to build your customized environment. Specifically, I would like to talk about important points to consider as you try to understand whether to use the native extend and capture capabilities of IBM Workload Deployer or the Image Construction and Composition Tool.
To be clear from the outset, I am not trying to provide a decision flowchart in this post. For all intents and purposes, that would be next to impossible. Instead, I want to pose to you some important questions that you should ask of yourself, along with the reasons why I believe those queries to be important. Keeping in mind that this is not an all-inclusive list, here it goes:
Question: Are the customizations that you want to make congruent with an IBM-supplied image?
Reason: One of the first decisions you should make is whether or not you can start with an IBM-supplied image as the base for your customization. You need to know what middleware elements (type and version) make up your environment and what operating system should host that environment (version and distribution). You can match that information against the list of content that IBM supplies. If there is a match, you should start by looking at extend and capture to customize that image to meet your needs. If there is no direct match, you may be looking at the Image Construction and Composition Tool.
Question: Does your custom content supplement middleware content supplied in an IBM image?
Reason: If you simply need to add additional components that supplement software already in an IBM image, I believe it is best to first examine the use of extend and capture. Whether these components are IBM software or not is irrelevant as the extend and capture functionality does not care.
Question: How configurable do you want to make the custom content in your image?
Reason: If you are adding content into the image, you need to think about just how configurable you need it to be. When you use extend and capture, you add the content to an existing image in a manner that pretty well ends up being opaque to IBM Workload Deployer. To configure that content, you need to have script packages and make sure they are part of every pattern you create based on the image. Alternatively, if you use the Image Construction and Composition Tool, you can embed configuration behavior in the image's activation engine, and you can expose deploy-time parameters without needing to include script packages in every single pattern. As an example, if you need to add a monitoring agent into your environment, you would likely do this via extend and capture and end up with a pretty simple script package to configure that agent during deployment. If however, you need to create an image with a custom database, you would likely favor the Image Construction and Composition Tool as you could embed common deploy-time configuration parameters directly in the image. For a database, there are likely to be many more deploy-time configuration parameters that you want to expose as compared to a more simple monitoring agent.
Question: Is your main focus on making operating system changes?
Reason: If your primary focus is on making operating system changes AND the answer to the first question is that your target content aligns well with IBM-supplied images, then extend and capture is where you want to start. Of course, you need to make sure that you can make all necessary changes to the OS with extend and capture, but I will say that this capability is not very restrictive at all.
Admittedly, this is a short list, but I believe it is a good starting point for how you decide upon one approach versus the other. Also, I would be remiss not to point out that these tools are absolutely not mutually exclusive. Many users I work with use a combination of the two approaches. In fact, there are some use cases that call for both tools. Start by creating a completely custom image in the Image Construction and Composition Tool, and then subject that image to the extend and capture process in IBM Workload Deployer to customize it for a particular purpose, team, project, etc. I hope you find this helpful, and I welcome your feedback or thoughts!
When many people think of cloud computing, they immediately think of virtualization, and of virtual machines in particular. This is completely natural and not at all surprising. After all, virtualization is one of the core technologies underlying cloud computing. However, it is important not to confuse one element of cloud computing with the entire thing, and this can sometimes happen. Many people have begun to leverage virtual machines in their on-premise environment and sometimes begin to call this their private cloud. While virtualization is a substantial step forward and helps get you started down the necessary path of standardization and automation that is essential in a cloud, it is not in and of itself "a cloud".
The National Institute of Standards and Technology has published its definition of cloud computing. This is a very complete and yet concise definition that includes not only the essential characteristics of a cloud solution but also the service models (IaaS, PaaS, SaaS) and deployment models (public, private, hybrid, community). It is a great way to get a perspective on cloud and can be useful when considering the solutions of various vendors.
Let me summarize the essential elements of cloud from this definition here:
on-demand self-service
broad network access
resource pooling
rapid elasticity
measured service
So, this is interesting. Not only is this much more than just virtualization - but virtualization isn't even mentioned in the list explicitly. Not to worry - virtualization is of course important and is included under the resource pooling topic. I would assert that virtualization is also necessary to facilitate the type of on-demand, self-service, elastically scaling resources that are leveraged in a cloud. What is crystal clear from this definition is that there is a lot more to a cloud solution than just virtual images and some hypervisor infrastructure upon which to run them. Somebody must provide the necessary on-demand/self-service capabilities, the network access to these services, the management of the resource pools, enabling true elasticity for running systems, measuring services and so forth. IBM Workload Deployer provides just such capabilities for the on-premise cloud allowing you to efficiently deploy patterns built for virtual systems and virtual applications with deep knowledge of the middleware that is being provisioned to optimize these solutions. Furthermore, Workload Deployer provides the complete lifecycle management from pattern creation, to deployment and provisioning, applying maintenance, resource and license management in the on-premise cloud, elastic scalability, and eventually returning resources to the on-premise cloud to be reused. Workload Deployer is a complete solution for not only server virtualization but of course for cloud computing.
However, virtualization doesn't have to stop with just virtual machines. It is a general principle that can be applied to more than just servers. At its core, virtualization is really about providing a level of abstraction between some real resources and the consumers of those resources. This is a natural fit when we think of server virtualization and virtual machines. However, there are also substantial benefits to be gained by adopting a similar abstraction between the middleware and the applications themselves - sometimes referred to as application virtualization.
By application virtualization I mean providing the capabilities to abstract the application from the underlying infrastructure such that it can be elastic, participate in health management policies, and provide agility across the pool of application infrastructure resources. This type of application virtualization is built into our Virtual Application pattern (hence the name) in Workload Deployer and surfaced in solutions via policies (such as scaling and routing), and high availability functions built into the Web Application pattern type. For Virtual Applications these features are fully integrated and optimized functions as are all elements of Virtual Applications. However, similar features have also been available for WebSphere Application Deployments in Virtual System patterns with a special extension.
WebSphere Virtual Enterprise provides application virtualization for traditional WebSphere ND solutions and this same feature is delivered for Virtual System pattern deployments of WebSphere Application Server by use of the Intelligent Management Pack. Leveraging the capabilities of Workload Deployer with Virtual Systems lets you gain the benefits of server virtualization and to reduce hardware, provide rapid and consistent deployment of entire systems, dynamically adjust resource consumption, and much more. Leveraging the capabilities of the Intelligent Management Pack provides the ability to manage service level agreements with elastic scaling and health management, lower operational costs, and provide for improved application management. These two solutions together provide a powerful combination to improve the management and resiliency of your enterprise applications.
If you would like to learn more about application virtualization using the Intelligent Management Pack in conjunction with Virtual System Patterns in IWD, then please join Keith Smith and me tomorrow for a webcast on this very topic. Keith is the lead architect on our WebSphere Virtual Enterprise and Intelligent Management Pack products and brings a wealth of experience in this space. So don't miss this opportunity - register here.
When IBM Workload Deployer v3.0 rolled around, the appliance introduced the concept of shared services. These were services that a cloud administrator could launch into the cloud infrastructure defined to IBM Workload Deployer, and use to serve a number of different application deployments. There were, and continue to be, two main shared services: a proxy service and a cache service. The shared proxy service does pretty much what you may guess. It provides request routing capabilities across multiple different instances of multiple different applications, thereby providing a centralized resource that encapsulates this basic need in an application environment. You can probably also guess what the caching service does. It caches things! Specifically, in IBM Workload Deployer v3.0 it provided in-memory caching of HTTP sessions, thus ensuring high availability of data stored in those sessions.
Undoubtedly, the ability to make HTTP session data fault tolerant is extremely critical in any application environment, cloud-based environments included. However, the applicability of a shared cache service is much further reaching, and in IBM Workload Deployer v3.1, we are starting to open this service up to your applications. What does this mean to you? Quite simply it now means that you can access this cache directly from your application code. If you are familiar with WebSphere eXtreme Scale or the DataPower XC10 Caching Appliance, then you know exactly what I mean. You can use the WebSphere eXtreme Scale ObjectGrid API to insert, read, update, and delete entries that exist in the in-memory cache. The underlying cache technology is based on the same code that powers WebSphere eXtreme Scale and DataPower XC10, so you can be sure that your cache is scalable, fault tolerant, responsive, and otherwise able to meet the needs of your application.
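To give a feel for the CRUD semantics involved, here is a minimal Python sketch that mimics ObjectMap-style behavior (a create-only insert and an update that requires an existing entry). The real API is the Java-based WebSphere eXtreme Scale ObjectGrid API; the class and key names below are hypothetical, so treat this purely as an illustration of the semantics, not the product interface:

```python
class CacheMap:
    """Toy in-memory map illustrating ObjectMap-style insert/get/update/remove
    semantics. Hypothetical names; not the actual eXtreme Scale API."""

    def __init__(self):
        self._data = {}

    def insert(self, key, value):
        # insert is create-only: it fails if the key already exists
        if key in self._data:
            raise KeyError(f"duplicate key: {key!r}")
        self._data[key] = value

    def get(self, key):
        # returns None when the entry is absent
        return self._data.get(key)

    def update(self, key, value):
        # update requires an existing entry
        if key not in self._data:
            raise KeyError(f"no such key: {key!r}")
        self._data[key] = value

    def remove(self, key):
        self._data.pop(key, None)


session = CacheMap()
session.insert("user:42", {"cart": []})
session.update("user:42", {"cart": ["book"]})
print(session.get("user:42"))   # {'cart': ['book']}
session.remove("user:42")
print(session.get("user:42"))   # None
```

In the real service, the map lives in the shared, replicated cache rather than in local process memory, which is what makes the stored entries fault tolerant.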
As I hope you find to be the case with many IBM Workload Deployer capabilities, this is a superbly simple capability to leverage. When you deploy virtual application patterns based on the IBM Workload Deployer Pattern for Web Applications, the capability is simply there. The underlying runtime that is serving your application is automatically augmented with the capabilities necessary so that your applications can connect to and utilize the deployed caching service. It is also worth pointing out that you can utilize the caching capabilities provided by this shared service for applications and application infrastructure that you deploy via virtual system patterns as well. You can either choose to augment the WebSphere Application Server environment with the XC10 Feature Pack (a deploy-time option for virtual system patterns built on WebSphere Application Server Hypervisor Edition v8), or you can configure WebSphere Application Server as you always would when integrating with a WebSphere eXtreme Scale environment or a DataPower XC10 Appliance.
What's the real benefit to all of this you ask? Well, when you use the shared caching service, you get the benefits of a distributed, in-memory, extremely scalable cache without having to deal with too much setup or administration. You simply tell IBM Workload Deployer how many resources you want to dedicate to your cache, and deploy the shared service. IBM Workload Deployer takes care of the details, including scaling in and out the cache to meet the needs of the system. On top of all of this, there is also an option to configure 'Next to the Cloud' caching. If you currently own DataPower XC10 appliances, you can make those available to virtual application pattern deployments (this was already possible with virtual system patterns) by simply providing details of the location of the appliance collective in question.
Put simply, setting up, administering, and utilizing an object caching service for your applications has never been easier. Check it out and let us know what you think!
Typically we spend most of the real estate on this blog talking about cloud computing and specifically, IBM Workload Deployer. However, I am hoping that this week you permit me to take a bit of a detour to discuss a very important new announcement. Last week, IBM announced the early availability of the WebSphere Application Server v8.5 Alpha. In all fairness, your response may be 'You guys always have WAS Alphas. Why should I care about this one?' I have two words for you: Liberty Profile.
Based on my own experience in the IBM labs and my conversations with numerous enterprise developers out there, I think I understand many of the needs to create an efficient development environment. Developers need tools and runtimes that are lightweight, easy to install, simple to configure, and fast to recycle or otherwise update. Enhancements in our WebSphere Application Server v8.0 took many of these concerns head on with features such as directory-based install and drastically improved server startup times. The new v8.5 Alpha, and specifically the Liberty Profile, extend this developer focus even further.
The Liberty Profile is a lightweight, fast, and easy to use application runtime that you can download for free by visiting the WASdev community site. The design of the runtime is best described as fit-for-purpose and you configure it by selectively enabling and disabling features based on application need. For example, you may enable the servlet, JPA, and JSP features, or you may decide you only need to enable the servlet feature of the runtime. It is completely up to you! In addition to this innovative new runtime, the WebSphere Application Server v8.5 Alpha also includes free tools for Eclipse. These tools make it simple to create Liberty Profile server instances, start server instances, stop server instances, install applications, and remove applications. In fact, you can do all of this and even download and install the WebSphere Application Server v8.5 Alpha without ever leaving your Eclipse workspace! Check out the demonstration below to see an example of installing and using the new Alpha.
I really hope that you will participate in the new WebSphere Application Server v8.5 Alpha. The setup process that includes both tools and runtime will take just a few minutes of your time, and leaves but a small footprint on your machine (the Liberty Profile of the WAS v8.5 Alpha is only ~50 MB unzipped). In the meantime, you can find more information about the Alpha on the WASdev site or in the new Information Center. Finally, don't forget to join in on the conversation on the WASdev forum!
If you were to compare the deployment mechanics for virtual application patterns and virtual system patterns, you would notice differences in the way IBM Workload Deployer establishes these environments in your cloud. In both cases the end result is a virtualized environment with which you can work, but the construction of these environments varies. For the most part, you need to understand the virtual application pattern deployment process when creating custom patterns of that type, and you need to understand the virtual system pattern deployment process when creating custom patterns of that type. However, the way in which IBM Workload Deployer deploys virtual application patterns may have an effect on how you build custom virtual system patterns.
When deploying virtual application patterns, IBM Workload Deployer does not use traditional IBM Hypervisor Edition images to initially create the virtual machines for your deployment. Instead, the appliance deploys a virtual image that contains only a hardened operating system environment. After the virtual machine initializes, the appliance triggers the installation, configuration, and integration of software and applications that make up the requested virtual application pattern. This is a bit more of a bottom-up, modular approach as compared to the virtual system pattern deployment process which involves the use of IBM Hypervisor Edition images. Neither is necessarily better than the other (after all they both result in customized deployments that happen in mere minutes), but they are different.
Okay, so I promised that the way in which the appliance deploys virtual application patterns had something to do with virtual system pattern customization techniques, but what exactly? It goes back to the beginning of virtual application pattern deployment and the base virtual image deployed by IBM Workload Deployer. When you deploy virtual application patterns, you never directly interact with this image. However, the image comes pre-loaded on the appliance and appears in the catalog right next to the IBM Hypervisor Edition images. This is important because it means you can use this base OS image in the creation of your custom virtual system patterns as well!
The current version of the base image contains a 64-bit Red Hat Enterprise Linux operating system and a single part that you can use in your virtual system patterns. Further, we place no restrictions on how you use or customize this image. You can even subject this image to the extend and capture process in IBM Workload Deployer. In this way, you can install any software content you want into the image (provided it runs on the OS of course), use the image in a pattern, and deploy that software via the appliance. Since you can use the image to build a virtual system pattern, you can include any configuration scripts that you require. Again, we do not inhibit the way in which you customize the image, nor do we constrain the way you use it in a virtual system pattern. It is entirely up to you.
Personally, I think this base image opens up a new set of possibilities for you, our users. Over the course of WebSphere CloudBurst and now IBM Workload Deployer, we got a lot of feedback requesting a base OS image that allowed this kind of flexibility. Well, it is here now, and I cannot wait to see how everyone starts using it!
When one uses IBM Workload Deployer (previously WebSphere CloudBurst) to deploy a virtual system pattern, they benefit from a completely automated deployment process. The automation includes the creation and placement of virtual machines, injection of IP addresses, initiation of internal processes, and invocation of included scripts. Most of these processes are straightforward and require little more than a brief overview. However, the placement of virtual machines stands out, and its inner workings are the subject of quite a few questions when I discuss the appliance. With that in mind, I thought I would provide a little more information on how the placement algorithm in IBM Workload Deployer works.
The placement subsystem in IBM Workload Deployer considers three primary elements: compute resource, availability, and license optimization. Compute resource availability is the gating factor for placement. That means that IBM Workload Deployer first looks at the available CPU, memory, and storage resource in the collection of hypervisors making up the cloud group(s) you are targeting for deployment. If a particular hypervisor cannot provide enough resource based on the amount you requested for your deployment, then it is automatically taken out of the eligible hosts pool. It is important to note that IBM Workload Deployer will overcommit CPU, and it will overcommit storage if you direct it to do so. It will not overcommit memory because that could severely degrade the performance of the application(s) running in the virtual machines.
After choosing the pool of hypervisors that are capable of hosting the virtual machines in your deployment from a compute resource perspective, the appliance then considers high availability. To better understand this particular placement stage, let's consider an example. Consider you are deploying a pattern based on WebSphere Application Server Hypervisor Edition and it contains two custom node parts. It is conceivable, and in fact likely, that these two custom node parts will host members of the same cluster, and thus the two nodes will support the same applications. As such, IBM Workload Deployer will attempt to place the two custom nodes on different physical machines in order to prevent a single point of failure. Of course, this depends on having two hypervisors with enough resource (CPU, memory, storage) to host the virtual machines, but the appliance makes that decision in the first placement stage.
After considering compute resource and high availability, IBM Workload Deployer moves to the last stage of placement: license optimization. In this stage, the placement subsystem attempts to place the virtual machines on hypervisors in a way that minimizes the licensing cost to you. The appliance can do this because it is aware of IBM virtualization licensing rules and takes those into account during this stage (if you aren't familiar with virtualization licensing rules and you are curious, ask your sales representative to explain sometime). During this stage, it will not violate any resource overcommit directives or rules in place, nor will it compromise system availability, but it will seek to minimize costs within these parameters.
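To make the three stages concrete, here is a toy Python sketch of a placement pass in that spirit: filter hosts by compute resources (with CPU and storage overcommit allowed but never memory), spread members of the same cluster across hosts, then pick the cheapest remaining host. The data structures, overcommit factors, and cost model are all my own illustrative assumptions, not the appliance's actual algorithm:

```python
def place(vms, hypervisors, cpu_overcommit=2.0, storage_overcommit=1.0):
    """Illustrative three-stage placement: resources, availability, license cost."""
    assignments = {}
    for vm in vms:
        # Stage 1: eligible hosts must satisfy the resource request.
        # CPU/storage may be overcommitted; memory never is.
        eligible = [
            h for h in hypervisors
            if h["free_cpu"] * cpu_overcommit >= vm["cpu"]
            and h["free_mem"] >= vm["mem"]
            and h["free_disk"] * storage_overcommit >= vm["disk"]
        ]
        # Stage 2: prefer hosts not already running a VM from the same
        # cluster, to avoid a single point of failure.
        spread = [h for h in eligible
                  if vm["cluster"] not in h["clusters"]] or eligible
        # Stage 3: among the remaining hosts, minimize license cost.
        host = min(spread, key=lambda h: h["license_cost"])
        host["free_cpu"] -= vm["cpu"]
        host["free_mem"] -= vm["mem"]
        host["free_disk"] -= vm["disk"]
        host["clusters"].add(vm["cluster"])
        assignments[vm["name"]] = host["name"]
    return assignments


hosts = [
    {"name": "hv1", "free_cpu": 8, "free_mem": 32, "free_disk": 200,
     "license_cost": 1, "clusters": set()},
    {"name": "hv2", "free_cpu": 8, "free_mem": 32, "free_disk": 200,
     "license_cost": 2, "clusters": set()},
]
vms = [{"name": "node1", "cpu": 2, "mem": 4, "disk": 20, "cluster": "c1"},
       {"name": "node2", "cpu": 2, "mem": 4, "disk": 20, "cluster": "c1"}]
print(place(vms, hosts))  # {'node1': 'hv1', 'node2': 'hv2'}
```

Note how the second cluster member lands on the pricier hv2: availability is weighed before license cost, just as described above.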
At this point, I should make something clear that may already have occurred to you. You can override most of these placement rules by creating a cloud group containing only one hypervisor. In this case, IBM Workload Deployer will put all virtual machines on the single hypervisor until it runs out of compute resource (memory is likely to be the constraining factor). I would not suggest that you do this unless you have a good reason or you are in a simple pilot phase, but I do like to point out the art of the possible!
While not incredibly deep from a technical perspective, I do hope that this provided a few helpful details on what goes on during the placement stages of deployment. If you have any questions, do let me know.
I hate sitting on secrets. I always have. I understand that sometimes it's in the best interest of everyone (and your job) to keep tight lips, but that does not make it any more fun. Inevitably, the run-up to our annual Impact conference means everyone in the lab is doing their fair share of secret keeping -- just waiting for announce time. For a lot of us, that day ended Tuesday with the announcement of the IBM Workload Deployer v3.0.
Now, you may be wondering, 'I have never heard of this. Why is it version 3.0?' Well, IBM Workload Deployer is a sort of evolution of the WebSphere CloudBurst Appliance, which was previously at version 2.0. This is good news for all of our current WebSphere CloudBurst users because all of the functionality (plus new bits of course) that they have been using in WebSphere CloudBurst is present in IBM Workload Deployer. You can use and customize our IBM Hypervisor Edition images in IBM Workload Deployer. You can build and deploy custom patterns that contain custom scripts in order to create highly customized IBM middleware environments. So, what's the big deal here? Two words: workload patterns.
Workload patterns represent a new cloud deployment model and are an evolution of the traditional topology patterns you may have seen with WebSphere CloudBurst Appliance (I am a little torn between evolution and revolution, but that's splitting hairs). Fundamentally, workload patterns raise the level of abstraction one notch higher than topology patterns and put the focus on the application. That means, when you use a workload pattern the focus is on the application instead of the application infrastructure. Perhaps an example would be helpful to illustrate how a workload pattern may work in IBM Workload Deployer.
Let's consider the use of a workload pattern that was part of the recent announcement, the IBM Workload Deployer Pattern for Web Applications v1.0. Just how might something like this work? It's simple really. You upload your application (maybe a WAR or EAR file), upload a database schema file (if you want to deploy a database with the solution), upload an LDIF file (if you want to set up an LDAP in the deployment to configure application security), attach policies that describe application requirements (autonomic scaling behavior, availability guidelines, etc.), and hit the deploy button. IBM Workload Deployer handles setting up the necessary application middleware, installing and configuring applications, and then managing the resultant runtime in accordance with the policies you defined. In short, workload patterns provide a completely application-centric approach to deploying environments to the cloud.
Now, if you are a middleware administrator, application developer, or just a keen observer, you probably have picked up on the fact that in order to deliver something as consumable and easy to use as what I described above, one must make a certain number of assumptions. You are right. Workload patterns encapsulate the installation, configuration, and integration of middleware, as well as the installation and configuration of applications that run on that middleware. Most of this is completely hidden from you, the user. This means you have less control over configuration and integration, but you also have significantly reduced labor and increased freedom/agility. You can concentrate on the development of the application and its components and let IBM Workload Deployer create and manage the infrastructure that services that application.
Having shown and lobbied a bit for the benefits of workload patterns, I also completely understand that sometimes you just need more control. That is not a problem in IBM Workload Deployer because as I said before, you can still create custom patterns, with custom scripts based on custom IBM Hypervisor Edition images. The bottom line is that the IBM Workload Deployer offers choice and flexibility. If your application profile meshes well with a workload pattern, by all means use it. If you need more control over configuration or more highly customized environments, look into IBM Hypervisor Edition images and topology patterns. They are both present in IBM Workload Deployer, and the choice is yours.
If you happen to be coming to IBM Impact next week, there will be much more information about IBM Workload Deployer. I encourage you to drop by our sessions, ask questions, and take the opportunity to meet some of our IBM lab experts. Hope to see you in Las Vegas!
The concepts that govern users and user groups in WebSphere CloudBurst are fairly basic, but I get asked about them enough that I believe they warrant a short discussion. First things first, you can define users in WebSphere CloudBurst and optionally define user groups to assemble users into logical collections. For both users and user groups, you can assign roles that define the actions a particular user or group of users can take using the appliance.
All of that is straightforward, but it can get a bit tricky once we start considering the effects of user permissions when managing at the user group level. The basic premise is that when a user belongs to a group or groups, the user's effective permissions are the union of the permissions of all of the groups to which they belong. While that is easy to say, and maybe even to understand, I feel like an example always helps.
Consider that we have a single user WCAGuy that belongs to the PatternAuthors, ContentCreators, and CloudAdmins groups. The permissions for those groups are as follows:
PatternAuthors: Users in this group have permission to create and deploy patterns
ContentCreators: Users in this group have permission to create catalog content as well as create and deploy patterns
CloudAdmins: Users in this group have permission to administer the cloud, create catalog content, and create and deploy patterns
Naturally then, it follows that the WCAGuy user can administer the cloud, create catalog content, create patterns, and deploy patterns. So then, what happens if we remove the WCAGuy user from the CloudAdmins user group? Well, as you may expect, there is an update to the user's permissions. The WCAGuy user can no longer administer the cloud, but they can still create catalog content, create patterns, and deploy patterns (owing to their membership in the other two groups). Similarly, if we next removed the WCAGuy user from the ContentCreators group, then the user would retain only the permission to create and deploy patterns.
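The union rule at work in this example can be sketched in a few lines of Python. The group names come from the example above; the permission labels are my own illustrative strings, not appliance identifiers:

```python
# Each group maps to the set of permissions it grants (illustrative labels).
GROUPS = {
    "PatternAuthors":  {"create_patterns", "deploy_patterns"},
    "ContentCreators": {"create_catalog_content", "create_patterns",
                        "deploy_patterns"},
    "CloudAdmins":     {"administer_cloud", "create_catalog_content",
                        "create_patterns", "deploy_patterns"},
}


def effective_permissions(memberships):
    """A user's effective permissions are the union of the permissions
    of every group the user belongs to."""
    perms = set()
    for group in memberships:
        perms |= GROUPS[group]
    return perms


wcaguy = ["PatternAuthors", "ContentCreators", "CloudAdmins"]
print(sorted(effective_permissions(wcaguy)))
# Remove CloudAdmins: cloud administration goes away, the rest remains.
wcaguy.remove("CloudAdmins")
assert "administer_cloud" not in effective_permissions(wcaguy)
assert "create_catalog_content" in effective_permissions(wcaguy)
```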
Just one more thing: let's talk about what happens when I remove a user from a group and they no longer belong to any groups. Consider that I created the WCAGuy user with the permission to create catalog content as well as create and deploy patterns. Next, I added the user to the CloudAdmins group, meaning the user now has the permission to administer the cloud. I promptly decide that the user has no business with those permissions, so I remove the user from the CloudAdmins group. What happens? The user retains the permission set of the last group to which they belonged. In this case, that means the WCAGuy user retains cloud administration rights. I have to update the user's permission set if I want to take that right away; it will not automatically disappear upon removing them from the CloudAdmins group.
I hope this helps clear up any ambiguity you may have had concerning users, user groups, and permission sets in WebSphere CloudBurst.
As a final preview of this week's building block sessions in the Enabling cloud computing with WebSphere campaign, I caught up with WebSphere DataPower architect Tim Smith. Tim is delivering a podcast that introduces and explains the new Application Optimization capabilities in the WebSphere DataPower line of products. Here is what Tim had to say:
Me: I speak with quite a few customers about the WebSphere CloudBurst Appliance, and for once I'm happy to be the one asking this question. Why do we deliver WebSphere DataPower in the appliance form factor?
Tim: DataPower has become a dominant player in the DMZ and in the ESB. Much of the reason is that this is a purpose-built hardware appliance. There are many things that our customers like about this appliance package. First, it has security as part of its DNA. Its approach to securing connections applies throughout the network, whether in a DMZ or in an ESB. The physical box provides tamper-resistant protection. Another reason is availability -- there is no spinning media, there are dual power supplies, and there is a focus on failover support.
In both the DMZ and the ESB, there has been a proliferation of products. The main reason for the proliferation is that customers want to remove as many decisions from the general purpose server as possible, and let servers do what they do best, process application requests. The devices that have been proliferating make more decisions on the request. They do deep packet processing and routing. They also may transform the request into an entirely different request. So, there are an abundance of "pre-processing" decisions and operations made. With DataPower, many functions are integrated into the single hardware platform, giving you a smaller box count. No need to purchase and maintain several platforms, their OS and software versions, compatibility lists, etc. With a single hardware box that does so many things, we can greatly reduce the total cost of ownership for our users.
The DataPower appliance is a blend of hardware and firmware, well provisioned with hardware assists for compiling, parsing, and many of the other intensive packet processing tasks. To summarize, you get an extremely flexible and adaptable product that reduces total cost while increasing performance.
Me: A theme that comes up in cloud computing over and over is consolidation. Can you speak to the consolidation offered by WebSphere DataPower appliances with respect to the self-balancing capabilities?
Tim: Yes. My answer to the prior question was a long-winded way of describing DataPower's ability to consolidate many features into a single platform. Self-balancing is an example. As DataPower became more popular, larger installations required multiple DataPower appliances in a tier of platforms. A common architecture was to place a load balancer or IP sprayer in front of the tier to distribute the traffic evenly among the tier of DataPower appliances. An IP sprayer is an example of another platform that needs to be added to the environment. It is another box that must be purchased, managed, and maintained. Self-balancing is a feature that was added to DataPower to eliminate the need for an IP sprayer. The way it works is that one of the DataPower appliances in the tier owns the Virtual IP (VIP) Address. It receives all of the traffic, and then distributes it to each of the other DataPower appliances in the tier. If the DataPower appliance that owns the VIP address goes down, one of the others is elected and it takes over. The result is one less product required to support the same level of functionality.
Me: For much of the past, cloud computing mostly focused on virtualization and management of resources at the raw compute level (servers, storage, networking, etc.). While there is definitely ongoing focus here, we start to see it moving up the stack towards applications, and part of that effort includes more evolved application load distribution. With that in mind, how can WebSphere DataPower help users more effectively distribute requests to their applications?
Tim: If a front end appliance or gateway device can dynamically learn information about its environment, specifically the back end, it will be able to make better decisions on how and where to route the request. This is one of the tasks that the Application Optimization feature addresses. Information from the back end can of course be manually configured, but the real value in cloud computing is dynamically adapting when new server resources are brought on line or are taken off line. In the 3.8.0 release, we implemented something called Intelligent Load Distribution (ILD). Intelligent load distribution focuses on continually learning the topology of a back end, updating DataPower's load balancers with that information, and distributing the load based on the updates. In addition to the topology, ILD learns the weights associated with each server. These weights can continually and automatically change as traffic patterns change. The result is load balancing to the back end that sends the optimal amount of load to each server.
Another traffic distribution aspect incorporated into ILD is session affinity. When a server application needs to receive every request from a given client, session affinity is used to route the requests to the same server. In some sense, session affinity overrides the load balancing algorithm. The session affinity support works with any type of back end server, but with a WebSphere back end, all session affinity information is automatically configured.
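As a rough illustration of how weighted distribution and session affinity interact, here is a small Python sketch. The server names, weights, and the expansion-based round robin are my own simplifications for illustration, not DataPower's implementation:

```python
import itertools


def make_router(weights):
    """Sketch of weighted distribution with a session-affinity override.

    `weights` maps server name -> integer weight. A session is pinned
    to whichever server first handled it; affinity then overrides the
    balancing algorithm, as described above.
    """
    # Simple weighted round robin: repeat each server `weight` times.
    ring = itertools.cycle(
        [server for server, w in weights.items() for _ in range(w)])
    affinity = {}

    def route(session_id=None):
        if session_id is not None and session_id in affinity:
            return affinity[session_id]      # affinity wins over balancing
        server = next(ring)
        if session_id is not None:
            affinity[session_id] = server    # pin this session
        return server

    return route


route = make_router({"app1": 2, "app2": 1})
first = route("sess-9")
# Every later request for the same session lands on the same server.
assert all(route("sess-9") == first for _ in range(5))
```

In ILD itself, the weights are not static like this; they are learned and continually adjusted as traffic patterns change, and the topology updates as servers come and go.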
Me: Continuing on the theme of application intelligence, what is this new Application Routing option in WebSphere DataPower?
Tim: ILD focused on learning the topology of the network and making better decisions based on an ever changing cloud topology. Application Routing does something similar by learning which applications are running on each server. Once a request is handed to DataPower's load balancer, the request is classified as to the application that it is targeted for. Then the request is load balanced amongst the servers that are running that application. The information to perform application routing is dynamically learned and changes as applications are added or removed.
WebSphere has invested substantially in managing the life cycle of an application. Changing from one edition of an application to the next sounds like an easy task, but it can be very difficult to perform this type of maintenance on a production environment. The DataPower appliance supports life cycle management by working with the WebSphere back end to provide group and atomic edition rollout. The rollout feature allows traffic to be gracefully diverted from servers that are being taken offline and reloaded with the new application edition. This rollout can be done while leaving the other applications on the server unaffected. This support makes edition rollout a very simple task for the system administrator.
Next up on our sneak preview of the building block sessions for the Enabling cloud computing with WebSphere campaign is the Dynamic Infrastructure Services block. One portion of that block is a discussion about some of the technical capabilities of WebSphere Virtual Enterprise given by Nitin Gaur. Nitin is a Consulting IT Specialist within WebSphere, and an all-around WebSphere guru. I caught up with him to ask a few questions about his upcoming podcast.
Me: When people think cloud computing, one of the core concepts is 'on demand'. They want just enough resource at just the right time. In that sense, can you tell me a little about the On-Demand Router (ODR) in WebSphere Virtual Enterprise (WVE)? What is it and what core functions does it provide?
Nitin: So, first allow me to take a step back. In my view, cloud computing is a new consumption and delivery model nudged by consumer demand and continual growth in internet services. I would classify any cloud computing platform as one that exhibits the following six key characteristics:
Standards based delivery
Usage based equitable chargeback
Thus, I deliberately use the term "platform" for a cloud computing environment that provides flexibility, robustness, and agility: a systemic approach to offering a stage for hosting applications without concern for the availability or provisioning of the underlying resources. Since hardware and software virtualization offer significant cost and resource management advantages, it is common to see virtualized platforms as core building blocks of any cloud platform. Such virtualization technologies provide an elastic infrastructure service. In this respect, WVE provides application server virtualization, which enables an elastic, business-policy-driven application infrastructure.
Now back to the On-Demand Router. The ODR is the autonomic engine that drives the activity enabling the elastic infrastructure discussed above. The ODR operates in a highly dynamic WVE environment, so it is imperative for the ODR to be aware of any changes in the environment such as newly deployed applications, the addition of new application servers, and any planned or unplanned server outages. It achieves this awareness by continuously interacting with WVE's fluid and dynamic feedback mechanism.
Me: Autonomic capabilities seem to be a core part of WebSphere Virtual Enterprise. To that end, can you tell us a little about the autonomic capabilities provided by dynamic clusters in WVE?
Nitin: Dynamic application placement is a defining capability of WVE that directly contributes to WVE's ability to provide a dynamic, virtualized, and goal-oriented environment for workload management and continuous availability. The dynamic application management capability maximizes the efficient use of hardware resources by allocating resources appropriately per application based on fluctuating demands in the enterprise infrastructure. It determines which servers to stop and start in a dynamic server cluster in order to meet current demand for applications, and it does this in the context of a set of administrator-defined policies that uphold the enterprise’s service level agreements (SLAs) for its application infrastructure. The dynamic application placement framework must balance resource availability against health policies, service policies, and the importance levels assigned to applications.
Dynamic server clusters are key to WVE’s ability to dynamically adjust the application environment according to server load, and they provide the basis for a virtualized server runtime environment. The big difference between a dynamic cluster in WVE and a static cluster in WebSphere Application Server is that dynamic clusters grow and shrink as needed to meet current demand by starting and stopping members of the cluster. Although dynamic clusters and static clusters can co-exist in a cell, dynamic application placement can only work with dynamic clusters. To prevent unchecked growth, each dynamic cluster has a mechanism that you use to define a boundary for that cluster’s growth. The boundary is both quantitative (based on criteria that define the minimum and maximum number of application servers that can run in the cluster simultaneously) and locational (based on criteria that confine the growth of the dynamic cluster to a defined set of nodes).
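The two boundaries Nitin describes, quantitative (minimum and maximum members) and locational (a defined node set), can be illustrated with a toy sizing sketch. This is illustrative only; WVE's placement controller weighs far more inputs, such as service policies, health policies, and application importance:

```python
def target_cluster_size(load, per_server_capacity, min_size, max_size):
    """Quantitative boundary: scale with demand, but never below
    min_size or above max_size."""
    needed = -(-load // per_server_capacity)  # ceiling division
    return max(min_size, min(max_size, needed))

def place(members_needed, running_members, allowed_nodes):
    """Locational boundary: new members may start only on the defined
    node set (assuming, for simplicity, one member per node)."""
    candidates = [n for n in allowed_nodes if n not in running_members]
    to_start = candidates[:max(0, members_needed - len(running_members))]
    to_stop = running_members[:max(0, len(running_members) - members_needed)]
    return to_start, to_stop
```

For example, a demand of 250 units against servers that each handle 100 units yields a target of three members, unless the cluster's maximum is lower, in which case growth is capped there.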
Me: I know you have been around the country, and for that matter globe, helping our users to adopt and implement WebSphere Virtual Enterprise. Tell us about one of your favorite customer stories.
Nitin: So I would cite an example of one of the leaders in the entertainment industry (and my favorite customer); let's call them Company X (since I cannot cite the name). The core of the company's application infrastructure is the Sales App Infrastructure (SAI), consisting of more than 10 enterprise applications. To keep up with demand, Company X was required to procure more hardware and software to support the core systems. This strategy resulted in a large infrastructure footprint with low hardware utilization. The growing hardware footprint became difficult to manage and required additional resources. The large footprint of the company's deployment put them in reaction mode rather than a posture of proactive monitoring. Some application servers became unresponsive and required the team to restart them every 24 hours. From a cost standpoint, it cost the company the same amount of money to request a virtual platform as it would to purchase a new physical server. This led to significantly underutilized hardware throughout the enterprise. WVE was brought in to Company X to help better manage their WebSphere Application Server footprint. Dynamic clusters, application health policies, and application editioning features helped the company to better utilize hardware, reduce hardware expenditures, increase visibility into their applications, and improve availability of their applications.
In addition to helping with the existing environment, WVE helped Company X to roll out a new project with applications that required continuous availability to worldwide users. The team made use of policy-based workload management to ensure performance and availability levels of these new applications met their business needs. In addition, the company was able to reduce the amount of WebSphere Application Server licenses and physical servers required for this new deployment. In sum, WebSphere Virtual Enterprise saves the company significant time, money, and management effort.
Yesterday, we kicked off a WebSphere in the Clouds campaign designed to connect you with IBMers that can help you to leverage WebSphere solutions to build clouds. The campaign consists of webcasts, podcasts, live Q&A sessions, and online JAMs. You can listen to replays and sign up for upcoming events by visiting the Global WebSphere Community website.
Next week, the campaign delivers a series of podcasts that discuss the WebSphere technologies that form the building blocks of clouds. These podcasts will discuss both the business and technical aspects of these solutions, and they will cover topics like application infrastructure in the cloud, policy-based workload management using application virtualization, hybrid cloud integration, and more. Over the past few days, I had the opportunity to catch up with the various presenters of these podcasts to ask them a few questions about their solutions. These interviews provide a nice sneak peek at what is coming in the podcasts, and I will be posting them here in the coming days.
To kick things off, I'm posting a video interview with Marc Haberkorn. Marc is the WebSphere Product Manager for WebSphere CloudBurst, WebSphere Application Server Hypervisor Edition, and WebSphere Virtual Enterprise. My colleague, Ryan Boyles, caught up with Marc and got his thoughts on how these solutions enable virtualization and automation for your cloud environments. Enjoy!
When it comes to building and using WebSphere CloudBurst patterns, people always ask me if I have any best practices. It turns out, I do. In fact, I have a singular piece of advice that wraps it all up: Build WebSphere CloudBurst patterns in a way such that once deployed, there is no after-the-fact, manual configuration for the running environment. That means, build the pattern so that it not only contains all the nodes necessary for your application environment, but it also contains all the configuration necessary for the environment.
Put like this, most everyone I talk to agrees with me. However, they quickly recognize that, absent this really cool integration with Rational Automation Framework for WebSphere, this means they will be writing scripts for many configuration actions and including them in patterns in the form of script packages. For users not familiar with configuration scripting for our WebSphere products, this can be a daunting proposition. But... it shouldn't be!
Recently, I put together a short presentation that lays out an iterative approach for developing script packages for WebSphere CloudBurst. Specifically, the presentation focuses on developing configuration script packages for the WebSphere Application Server (though the general concepts apply to all Hypervisor Edition products equally). I believe this method is useful for anyone, from novice users to WebSphere scripting gurus. The basic process goes something like this:
Identify: Identify the target WebSphere Application Server topology and configuration for your application environment.
Deploy: Build a WebSphere CloudBurst pattern that matches your desired topology and deploy it to your cloud.
Develop and Test: Develop and test your configuration script. Not a WebSphere Application Server scripting ninja? No worries. Use the Command Assistance feature in the WebSphere Application Server v7 administration console. This feature shows you the wsadmin commands that match the actions you manually take in the console. This affords a lower barrier of entry for those not familiar with wsadmin.
Package: Package up the resulting scripts into a script package along with metadata that describes the package.
Modify and redeploy: Load the new script package into your appliance, add it to your pattern, and then redeploy. Upon deployment completion, verify the scripts produce the desired result.
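For the packaging step, a WebSphere CloudBurst script package carries a small JSON metadata descriptor (commonly named cbscript.json) alongside the scripts themselves. The fields below are an illustrative sketch from memory, not an authoritative schema; check the current Information Center for the exact format:

```json
{
  "name": "Configure application resources",
  "version": "1.0.0",
  "description": "Runs wsadmin configuration after deployment",
  "command": "/bin/sh /tmp/configure/run.sh",
  "log": "/tmp/configure/logs",
  "location": "/tmp/configure",
  "timeout": "0",
  "commandargs": "",
  "keys": [
    { "scriptkey": "DB_HOST", "scriptvalue": "", "scriptdefaultvalue": "" }
  ]
}
```

The descriptor tells the appliance where to unpack the archive, what command to run at deployment time, and which environment-specific parameters (the keys) a deployer can fill in from the console.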
The presentation provides detail on the above steps and walks through an example scenario for this process. I am embedding it below, and I hope it proves useful. As always, feel free to send in any questions or comments.
A while back I had a four-part FAQ series inspired by questions arising from customer visits discussing the first release of WebSphere CloudBurst. With the recent release of WebSphere CloudBurst 2.0, I think it is a good time to revisit an FAQ series with an entirely new set of questions.
For the first part of the series, I want to address a question we get all the time now: "What is the difference between WebSphere CloudBurst and WebSphere Virtual Enterprise?" This question was always fairly common, but now even more so because the new Intelligent Management Pack option for WebSphere Application Server Hypervisor Edition allows you to deploy WebSphere Virtual Enterprise cells using WebSphere CloudBurst.
Fundamentally, the difference between the WebSphere CloudBurst Appliance and WebSphere Virtual Enterprise is a complementary one. WebSphere CloudBurst provides a means to create your application environments, deploy them into a shared, cloud environment, and then manage them over time. In this respect, the appliance focuses on bringing cloud-like capabilities to the application infrastructure layer of your application environments. WebSphere CloudBurst does give you management capabilities for your running, virtualized application environments (i.e. applying maintenance and fixes), but for the most part those capabilities do not extend into the application runtime environment.
Now, you may ask why WebSphere CloudBurst does not extend its reach into the application runtime. The answer is simple: We already have a solution that does just that, WebSphere Virtual Enterprise. WebSphere Virtual Enterprise provides capabilities that allow you to dynamically and autonomically manage your application runtime. You can use WebSphere Virtual Enterprise to not only assign performance goals to your applications, but also to declare the importance of a given application meeting its goals relative to other applications in your organization. This enables the dynamic management of your applications and their resources such that your applications perform according to their goals and relative importance to your business. Simply put, you get an elastic runtime at the application layer of your application environments.
As I said, WebSphere CloudBurst and WebSphere Virtual Enterprise are complementary solutions. Both enable notions of cloud computing, but at different layers of your application environments. WebSphere CloudBurst homes in on the application infrastructure components, while WebSphere Virtual Enterprise zeros in on the applications running in those environments. The new Intelligent Management Pack for WebSphere Application Server Hypervisor Edition means that WebSphere CloudBurst can now dispense WebSphere Virtual Enterprise environments into your on-premise cloud. That means you can take advantage of these complementary solutions from a single and simple management plane.
I hope this helps to clear things up. As always, questions and comments are welcome!
May is almost here and that means that IBM IMPACT is right around the corner. Just like years past, IMPACT 2010 will be a great chance to get valuable education and insight into IBM WebSphere software and software from across the IBM software family. If you want to hear how IBM software is leading the march toward a smarter planet, register now.
IMPACT 2010 will be a great chance to hear the WebSphere cloud computing story. There will be multiple sessions on the WebSphere CloudBurst Appliance. These include customer-led sessions, internal adoption stories, overviews, and much more. I'll be there running a hands-on lab and delivering a session that discusses integration between WebSphere CloudBurst and IBM Rational tools. Of course, there is more to WebSphere and cloud computing than WebSphere CloudBurst. We have several other sessions that will detail all of IBM WebSphere's work in the cloud.
If you are interested, I put together a short video discussing some of the sessions on tap for WebSphere and cloud computing at IMPACT 2010. I'd also encourage you to check out the social media site for IBM IMPACT 2010. On that site, you will find tweets, videos, and blogs about the conference. Don't forget to sign up, and I hope to see you in Las Vegas!
-- Dustin Amrhein
The reason I suggest the application proxy approach is twofold. First, it affords you the ability to have custom interactions with the REST API. For instance, you may insert logic into the server-side proxy code that returns only a subset of the JSON data contained in the response from the appliance. Alternatively, in an effort to reduce the chattiness on your client-side, you may join JSON data from multiple different REST requests to the appliance to fulfill a single client request. You may even decide to represent the data in an altogether different format than JSON. All of these options and many more are available to you if you implement an application-based proxy to the REST API.
The second reason I suggest the application approach is that it is easier, and seemingly safer, to not deal with user passwords on the client-side. If you set up your application proxy, you can configure it to retrieve the appropriate password from a secure location (like an encoded file) based on information passed along in the request. This means the password information is only present in the request (in encoded form of course) from the application proxy to the WebSphere CloudBurst Appliance.
The good news about the application-based proxy approach is that it is simple to put in place. I composed one using the open source Apache Wink project. The Apache Wink project is an open source implementation of the JAX-RS specification (and then some), and it enables you to develop POJOs that are in turn exposed in a RESTful manner. In my case, I had a single resource POJO:
The Apache Wink runtime routes any HTTP GET request whose path is like /resources/* to the getResources method in the WCAResource class. This method takes information from the query string (the host name of the target WebSphere CloudBurst Appliance and the requesting WebSphere CloudBurst username), as well as the HTTP path information, and sends it on to the getResource method declared as follows:
The getResource method above uses the WebSphere CloudBurst host name and the request path to construct the URL for the corresponding WebSphere CloudBurst REST API call. Next, it constructs an Apache Wink Resource object and sends the REST request along to the WebSphere CloudBurst Appliance. How do we authenticate this request? We use the WebSphere CloudBurst username (sent as a query string parameter) to retrieve the appropriate encoded password information. Once we have that, we construct the necessary header for basic authorization over SSL.
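The original post embedded the Java/Apache Wink source, which is not reproduced here. The core of what getResource does, building the appliance URL and the Basic-auth header from a stored encoded password, can be sketched in Python. The password store, user name, and the /resources path prefix are illustrative assumptions:

```python
import base64

# Hypothetical stand-in for the secure, encoded password store the
# proxy consults; the value is base64("password") for illustration.
PASSWORD_STORE = {"cbadmin": "cGFzc3dvcmQ="}

def build_request(cb_host, cb_user, resource_path):
    """Construct the WebSphere CloudBurst REST URL and the Basic-auth
    header for a proxied request (Python sketch of the Java logic)."""
    # Map the proxy's request path onto the appliance's REST API.
    url = "https://%s/resources/%s" % (cb_host, resource_path.lstrip("/"))
    # Retrieve and decode the stored password for the requesting user.
    password = base64.b64decode(PASSWORD_STORE[cb_user]).decode()
    # Basic authorization over SSL, as described above.
    token = base64.b64encode(("%s:%s" % (cb_user, password)).encode()).decode()
    headers = {"Authorization": "Basic " + token}
    return url, headers
```

From here the proxy would issue the HTTPS request with those headers and relay (or transform) the JSON response back to the client.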
The application-based proxy shown here is simply a pass-through. It does not manipulate the data returned from the WebSphere CloudBurst REST API, nor does it map a single client-side call to multiple REST requests. However, it would be simple enough to extend it to do any of those things. If you have any questions about the code here, please let me know. I'd be happy to share more of the code, or talk about how and where to extend it.
The WebSphere Application Server Hypervisor Edition virtual image is made up of four different virtual disks. One of those disks contains pre-created and pre-configured WebSphere Application Server profiles. When the image is activated (either through WebSphere CloudBurst or in a standalone fashion), all of the profiles not being used are deleted leaving only the intended WebSphere profile type.
Since the profiles are pre-created, this implies that certain information must be updated after the image is activated to reflect things that change with each node that is created. Among other things, the cell name, node name, and host name of the WAS profile configuration are usually updated during the image activation process. Nearly every time I talk to WAS administrators about WebSphere CloudBurst and WebSphere Application Server Hypervisor Edition they are intrigued by this particular configuration update and almost always ask "How do you do it?" (Dustin's note: Since the command to rename the cell is not officially documented, I have removed it from this post. I'm sorry, but it is for your own good!)
Most of the time this question pops up because users are attempting to, with a more narrow focus than WAS Hypervisor Edition, freeze-dry certain WAS configurations in their organization. However, no matter how they do that (virtual images, zipped up configuration files, etc.), they too need to update things like the cell, node, and host names when attempting to reuse the configuration. Many have gone down the route of trying to identify all of the different XML files they need to change in order to update this information, but this is untenable and in fact unnecessary.
If you need to update the node or host name, forget manually updating XML files. Instead, use these three wsadmin commands:
The commands can be run from a standalone node or from a deployment manager node. They are straightforward, and if you need more information about them just take a look in the WebSphere Application Server Information Center. I hope this is helpful information, and stay away from those XML files!
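The post's exact commands are not reproduced above, but for the host-name portion, the documented wsadmin AdminTask command looks roughly like this Jython fragment (run inside wsadmin, not standalone; the node name and host name are placeholders):

```python
# wsadmin Jython sketch. AdminTask and AdminConfig are objects supplied
# by the wsadmin environment; this will not run outside of it.
AdminTask.changeHostName('[-nodeName myNode -hostName newhost.example.com]')
AdminConfig.save()
# In a deployment manager cell, synchronize the nodes afterwards so the
# change is pushed out to the node configurations.
```

Consult the Information Center entry for changeHostName for the full parameter list and the related node-rename commands.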
One of the most powerful features of WebSphere CloudBurst is the ability to take one of the WebSphere Application Server Hypervisor Edition virtual images that are shipped with the appliance and extend it to produce a custom virtual image. This allows users to begin creating customized environments from the bottom up, starting with the operating system. To put it in better context, let's take a look at a couple of scenarios where this feature comes in quite handy.
First off, a very common need for our customers is the ability to continually monitor their application environments. For instance, you may install Tivoli monitoring agents on all of your machines hosting WebSphere Application Server processes and configure those agents to report back to a management server. This is a great case for image extension in WebSphere CloudBurst.
In this scenario, you would start by extending an existing WebSphere Application Server Hypervisor Edition image. WebSphere CloudBurst creates a running virtual machine based off of the selected image, and you log into that virtual machine and install the Tivoli monitoring agents. Once the installation is done, you capture the virtual image back into the WebSphere CloudBurst catalog and use the new image to build a custom pattern. The last step is to include a script package on this custom pattern that, upon deployment, will configure the installed monitoring agents to report back to your desired management server.
Another use case is likely to be of interest to you if you are using WebSphere Virtual Enterprise (or something similar), and you could benefit from the same ease of provisioning for those environments that WebSphere CloudBurst brings to WebSphere Application Server environments. You can use the same customization combination above (image extension and custom scripts) to enable WebSphere CloudBurst to essentially dispense WebSphere Virtual Enterprise cells.
Again, this scenario starts off by extending a WebSphere Application Server Hypervisor Edition virtual image. Once the virtual machine for the extension is created by WebSphere CloudBurst, you log in and install the WebSphere Virtual Enterprise product. After the installation is done, you capture the image and store it in the catalog. Next, you build a custom pattern based off of this image and include script packages that, upon deployment, augment the various parts in the pattern from WebSphere Application Server profiles to WebSphere Virtual Enterprise profiles. (You may wonder why you wouldn't just create the WebSphere Virtual Enterprise profiles during the image extension process. This is because during image extension, you cannot make changes to the virtual disk that contains the WebSphere Application Server profiles. Any changes made to the profiles will be wiped out during the capture process.)
There are countless more scenarios for creating custom virtual images in WebSphere CloudBurst. To name a few, you may want to install JDBC drivers that are common to almost all of your application environments, install required anti-virus software, or just make operating system configuration changes. All of these things can be accomplished through the image extension and capture process. Look for an article coming out soon that will discuss and explain, in much greater detail than I provided here, the process of installing and configuring Tivoli monitoring agents in environments dispensed by WebSphere CloudBurst. In the meantime, if you have any questions or comments, drop us a line here or check out our forum.
Over the past several months industry focus on cloud computing seems to have only intensified. Within IBM and for the purposes of this blog, WebSphere, there have been several announcements and offerings that indicate our commitment and belief in the cloud computing approach.
To further highlight WebSphere's focus and offerings in the cloud computing realm, we are embarking on a "WebSphere in the Clouds" campaign during the months of September and October. Our intent is to virtually deliver information about our cloud strategy and offerings directly from the experts to you, our WebSphere users.
The event will be kicked off by WebSphere's Director of Product Management, Kareem Yusuf, on September 23rd from 9-10 EDT. Kareem will talk about cloud computing in the enterprise, and its unique relationship to SOA thoughts and principles. In addition, he'll give an overview of what WebSphere has been doing in the cloud computing space. This will be followed by sessions from technical experts that detail WebSphere offerings in both the public and private clouds, as well as sessions that discuss enablers of application and application infrastructure elasticity.
To find out more about the "WebSphere in the Clouds" campaign, you can check out the main announcement page. To sign up for the series of virtual events visit the registration page. We hope you will join us for the series of webcasts to learn all about WebSphere's work in the clouds.
To continue with the series of blog posts regarding WebSphere CloudBurst FAQs, I want to take a look at one aspect of the deployment process.
When you leverage WebSphere CloudBurst to push patterns (complete WebSphere Application Server configurations) into a private cloud, the appliance provides an advanced placement algorithm to determine exactly where the resulting WebSphere virtual systems will reside. It attempts to match the needs of the pattern to the correct set of hypervisors that have been defined. WebSphere CloudBurst considers things like storage, CPU, memory, and high availability characteristics when placing the pattern, and this is all done by the appliance without you having to intervene at all.
This is certainly nice in that it absolves you from having to make such placement decisions. Having said this though, you may be thinking of a question that comes up quite often:
If WebSphere CloudBurst controls the placement of the pattern, how can I make sure that certain deployments end up on certain servers (hypervisors)?
Considering what I just told you above, it may not seem that it's possible to control what machines end up hosting your virtual system since the appliance takes care of that placement for you. However, the organized use of WebSphere CloudBurst cloud groups allows you to take advantage of the intelligent placement provided by the appliance while retaining a level of control over which machines end up hosting particular deployments.
In WebSphere CloudBurst all patterns are deployed to cloud groups. Cloud groups are a collection of hypervisors that have been defined within the appliance. The basic deployment mapping is depicted in the image below:
As seen above, you can create a cloud group for any purpose (dev, test, QA, production, etc.), including any hypervisors that you desire as long as a given hypervisor only belongs to a single cloud group. When you are ready to deploy a pattern, you simply select the cloud group you want to deploy to:
By selecting a cloud group for deployment, you are implicitly selecting the physical machines that will host your deployment. The cloud group could consist of anywhere from one to N hypervisors, so you are afforded the ability to restrict the location of your virtual systems as necessary.
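Conceptually, selecting a cloud group narrows placement to that group's hypervisors, and the appliance then chooses among them. A toy version of that selection follows; the data shape is hypothetical, and the appliance's actual algorithm also weighs factors like high availability characteristics:

```python
def choose_hypervisor(cloud_group, required_cpu, required_memory):
    """Sketch: placement is confined to the selected cloud group, and
    among its hypervisors we pick one with enough free capacity."""
    candidates = [h for h in cloud_group
                  if h["free_cpu"] >= required_cpu
                  and h["free_memory"] >= required_memory]
    if not candidates:
        raise RuntimeError("no hypervisor in this cloud group can host the pattern")
    # Prefer the candidate with the most free capacity.
    return max(candidates, key=lambda h: (h["free_cpu"], h["free_memory"]))["name"]
```

The point of the sketch is the containment property: whatever the placement logic decides, the result is always one of the hypervisors you put in the cloud group, which is exactly the control knob described above.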
I hope this helped explain a little bit about cloud groups in WebSphere CloudBurst. If you're looking for more information about WebSphere CloudBurst cloud groups, I'd also suggest you watch this video on our YouTube channel.
The more we visit with customers about our new WebSphere CloudBurst Appliance, the more we see a common thread of questions emerge around the offering. In an attempt to address some of these questions in a more accessible medium, I figured I'd start a series of blogs that relays some of these questions and of course the answers. Today I want to start with what is perhaps the most frequently asked question about WebSphere CloudBurst.
One thing I've noticed while talking to customers is that virtualization and virtualization management tools are widely used today. When we talk to customers already using these tools, they immediately understand the benefits WebSphere CloudBurst delivers in the form of virtualizing WebSphere Application Server environments and bringing a set of lifecycle capabilities to this virtualization. However, almost invariably they ask why they would use WebSphere CloudBurst over their existing tools, for instance VMware's vSphere offering.
There's a two-word answer to this question: WebSphere intelligence. What exactly does that mean? WebSphere CloudBurst was built with WebSphere in mind, and it knows how to configure, tune, and maintain WebSphere Application Server environments without the need for custom scripting.
For instance, when a user is building a WebSphere CloudBurst pattern (if you are wondering what a pattern is, or just want to learn the basics of WebSphere CloudBurst, take a look at this article) the relationships among the various WebSphere Application Server parts are automatically configured. This means that custom nodes are automatically federated into the Deployment Manager cell, web servers are automatically configured to route to application server nodes (and the web server's config file is setup to be automatically propagated), and much more. In addition to establishing these relationships, WebSphere CloudBurst also applies best practice tuning to the WebSphere environment. This tuning of course is just a suggestion and can be easily changed by users.
In addition to configuring and tuning a topology, WebSphere CloudBurst allows users to apply both fixes and service level upgrades to running virtual systems. Through the console, users can select virtual systems and apply either a fix or upgrade directly to the system. All the while, WebSphere CloudBurst automatically backs up the state of the system before the change is applied, and users can easily roll back to the previous system state if undesired behavior is encountered after the change.
I'm not disputing that all of these actions could be potentially accomplished using some "black-box" virtualization management tool, but the burden of supplying WebSphere intelligence is placed directly on the user. In order to configure, tune, or maintain virtualized WebSphere Application Server environments, users would accompany their virtual machine definitions with a heavy dose of scripting. These scripts only add to the pile of IT assets that need to be owned, updated, and maintained over time, and they only serve to distract users from the end-game of getting their applications up and running.
It's important to note too that WebSphere CloudBurst was only just released, so I would expect that the WebSphere intelligence it provides will only grow and get better over time. If you want to learn more about WebSphere CloudBurst, or if you think your company would be interested in a briefing and demo please get in touch. You can reach me at email@example.com. We would love to explain both the business value and technical capabilities of the appliance.
Customers are always impressed when they learn about the simplicity, resiliency, and rapid time to value they can receive from virtual applications. However, they are usually a little mystified at how virtual applications really work. After all - they have become quite accustomed to doing things the "traditional way" where they control every aspect of their applications manually. Virtual applications represent an entirely new way of thinking. Sure, the benefits are enormous, but can you really trust them? How is it doing all of this anyway?
What seems like "magic" is really a sophisticated and coordinated set of activities driven by IBM Workload Deployer while leveraging the expertise built into the pattern type. Yes, you can trust it, because experts have built the system and designed it to react and respond much faster than you can. When moving from manual processes to automated ones, it is always nice to get a sense of what is really happening. I think it is just human nature. We can't really place our trust in something until we have first-hand experience or understand what it is really doing ... I guess it is the critic inside each one of us. Even after you've experienced the value, it is still reassuring to see and understand the "how".
It is the "how does it do that?" type of question that I attempted to answer for virtual applications in a blog post I wrote on the Expert Integrated Systems blog recently. It attempts to pull the curtain aside and describe what is actually happening to support a virtual application pattern. As with my previous post - this was written for IBM PureApplication Systems but the concepts are 100% applicable to IBM Workload Deployer. I think you will find it interesting ... Continue reading ...
Applications - just like humans, animals, plants, and many other things - have a life cycle. They are conceived, given birth, grow, do foolish things in youth, hopefully improve over time, have problems that need to be fixed, don't always age well .... and eventually they will die and release their assets to the next generation. Sounds kind of familiar, doesn't it?
One of the many benefits of virtual application patterns in IBM Workload Deployer and related IBM offerings is support for the complete life cycle of the application. You can manage the complete life cycle of virtual applications from a single interface that is fully integrated and well thought out - not just a series of links from one product UI to a different product UI. This eliminates the complexity of having to work with different interfaces, paradigms, metaphors, controls, labels, names, authorization, and so on - that is often the norm in many customer environments today. I think the benefits of this integration are obvious - eliminating confusion, configuration, miscommunication, interpretation, and mapping errors. Providing a truly complete solution also facilitates a common knowledge base and encourages cooperation and collaboration among teams. You can share patterns, providing consistent governance for a solution, guarantee consistency in deployments, and build upon the expertise provided by others. Having an integrated solution for design, development, deployment, configuration changes, monitoring, and problem determination ensures that time is not wasted and valuable information is not lost.
If you have been wanting to get some first hand experience with patterns of expertise in preparation for IBM PureApplication System or IBM Workload Deployer but you don't yet have a system of your very own to use ... then you will want to check out this post/video and then download the Virtual Pattern Toolkit for Developers! It's absolutely free and will get you up and running with a virtualized system in a short while. Check it out!
IBM Impact 2012 was just last week with a theme of "Change the Game" ... and I'm still reveling in all of the excitement and energy that goes into conferences such as this. I was fortunate to get a last minute spot to attend the conference and help out at the Solution Center where I had the chance to speak to a lot of customers and other IBMers interested in cloud computing. Among the many things that stood out - there is certainly a lot of interest in cloud computing and patterns of expertise - it also seems that folks are ready to get some first hand experience with these patterns. There's plenty of opportunity for that!
It does not seem like it has been a year, but here we are again. It is time for IBM Impact 2012, and like each year, this one promises to be a little better than its predecessors. As I type this post, I am 36,000 feet above either Texas or New Mexico on my way to the neon desert for a completely packed week. I can't wait to arrive!
An obvious summary of IBM Impact would be to say that it is a technology conference. That does not quite do it justice though. The event is packed full with stories of business transformation, emerging business paradigms and the technologies that support them, new product announcements and much more. That said, in my mind IBM Impact is first and foremost a premier technical education conference. The week is stuffed with technical session after technical session on a wide range of topics.
With that in mind, I thought I would share a few of the sessions I have highlighted on my calendar. To be honest, I had a hard time setting up my calendar for the week. In some cases I ended up booking three sessions in one time slot. There are simply too many good sessions to choose from, so this list is nowhere near exhaustive, and it is certainly not my full calendar!
1219 Overview of the IBM Mobile Foundation :: Monday 10:45 AM - 12:00 PM :: Palazzo N - Venetian
Summary: This session will provide an overview of the new IBM Mobile Foundation: a new middleware offering from IBM that will enable customers to build and deliver innovative mobile applications, centrally govern and manage their mobile infrastructure, and integrate with existing enterprise data and services. Attendees will leave with an understanding of what the platform is and how it can help them effectively and efficiently take advantage of mobile for their enterprise.
2138 Elastic Caching - Foundational Technology for Your Solutions and Offerings :: Monday 2:00 PM - 3:15 PM :: Palazzo J - Venetian
Summary: This session will provide an overview of elastic caching, explain IBM's offerings and technology and will share a set of usage scenarios that will demonstrate why this technology is so hot -- and why it can dramatically benefit our partners' offerings, solutions and ROI.
1371 Introducing IBM WebSphere Application Server v8.Next - Enhanced ND: A Huge Step Forward! :: Tuesday 10:45 AM - 12:00 PM :: Palazzo I - Venetian
Summary: Setting the bar higher for app server resiliency and robustness, IBM WebSphere Application Server Network Deployment v8.Next sets itself even further apart from the industry. Now included with WAS ND is virtualization, improved availability and health monitoring, Java batch processing and more. This session covers the details.
2484 Cloud, Virtualization and Application Pattern Trends and Directions :: Tuesday 1:30 PM - 2:45 PM :: Palazzo E - Venetian
Summary: Cloud and virtualization are being paired with a new best practices-based approach to application development, deployment and automation of custom and independent software vendor applications across a range of deployment environments. Whether you're targeting existing hardware and software stacks, new private cloud infrastructure or public cloud resources, or all three, a pattern-based approach to applications can deliver unmatched portability and time to market. Cloud computing is helping to deliver the level of automation and self service needed in today's dynamic business landscape. Learn how these technologies are unfolding, and what your company can do to get started today to drive speed, efficiency and lower total cost of ownership across your IT investments.
1520 Building Custom Content for Expert Integrated Systems :: Wednesday 10:45 AM - 12:00 PM :: Palazzo G - Venetian
Summary: This session will cover all the different ways that new functionality can be developed for use with IBM Workload Deployer and IBM PureApplication System. This will include the ICON Image Construction tool, Capture and Extend, and also the IWD PDK.
1563 Positioning Expert Integrated Systems with Its Competitors :: Wednesday 3:15 PM - 4:30 PM :: Palazzo F - Venetian
Summary: (Plug Alert! -- I will be co-presenting this session) IBM PureApplication System provides a virtualized platform of cost-effective, next-generation hardware with optimized capabilities that automate workload lifecycle management, from deployment to quiescence. How is it different from other solutions that promise the same benefits? Join us as we examine the unique capabilities of IBM PureApplication System. Learn about the value of IBM PureApplication System for your business, and why it is truly head and shoulders above the competition! Explore in detail why IBM PureApplication System can better deliver on these capabilities than alternatives such as Oracle's Exalogic. Comparisons will be based on quantitative metrics using results from actual experiences with both products.
2150 Building a Private Cloud Using IBM Technology and Fit-for-Purpose Methodology :: Wednesday 4:45 PM - 6:00 PM :: Marcello 4401A - Venetian
Summary: In this session, we will examine a practical approach for organizations to optimize their computing environment by using IBM WebSphere technology. We explore IBM's fit-for-purpose methodology for optimal workload placement, and how IBM Workload Deployer, IBM DataPower, the IBM Rational Automation Framework for WebSphere, and WebSphere Virtual Enterprise can be used in the development of a private cloud that maximizes your total computing environment.
2390 WAS vs. WebLogic, JBoss and Tomcat: An IBM Perspective :: Thursday 8:45 AM - 10:00 AM :: Lido 3105 - Venetian
Summary: Are you considering an Oracle WebLogic or an open source application server like Tomcat or JBoss? In this session we will discuss key factors to consider when making a decision on which application server to use, such as cost of licenses and support, performance, availability and usability lab tests, administrative and development tools, and real-world customer experiences. We will discuss factors that contribute to TCO such as development and operating costs, and application performance and reliability. We will discuss how new capabilities of WAS v8.Next enhance its competitive position. The session will be presented by Roman Kharkovski, who has been a technical lead on the WW WebSphere Competitive Team since 1999, and Stuart Smith, lead consultant with Web Age Solutions, who has worked with all major application servers and Java since 1998.
Summary: eXtremeMemory allows you to store objects in native memory instead of on the Java heap. By moving objects off the Java heap, you can avoid garbage collection pauses, leading to more consistent performance and predictable response times.
These are just a few of the interesting sessions I have highlighted on my calendar. I am going to sit in on many more, and I will be writing a summary of the event soon enough. For those of you heading to IBM Impact, safe travels and I hope to see you there!
The answer is yes, I did a related but different blog post with a similar title a few weeks back. At that time I was primarily highlighting a webinar that I co-presented with Keith Smith regarding the various virtualization solutions and features that are available in IBM Workload Deployer in virtual application patterns and virtual system patterns leveraging the Intelligent Management Pack (IMP). If you didn't get a chance to attend that webcast live then I encourage you to check out the replay (especially Keith's portion with details on IMP - a really helpful overview).
This new blog post expands on the theme of that original blog post but takes a broader vision of where IBM has been with our private cloud offerings in WCA and IWD up to and including the recently announced IBM PureApplication System - and how this history demonstrates our leadership in supporting applications in the cloud.
It's here at long last! IBM PureSystems was announced today and in particular the IBM PureApplication System family member. IBM PureApplication System includes many of the capabilities that you have been hearing about and using in IBM Workload Deployer. While this solution includes and builds upon the capabilities of Workload Deployer, there's also a lot more functionality that is built into a completely integrated and optimized solution that not only manages your private cloud but runs it in the most optimized fashion. It really is a complete private cloud solution that is highly optimized to provide the best possible integration of software and hardware made simple for your cloud needs.
We've been talking a lot recently about Virtual Application Patterns and enhancements to this deployment model in IBM Workload Deployer v3.1. This is appropriate because virtual applications are a substantial evolution for application deployment in a private cloud. Virtual Application Patterns deliver on the promise of Platform-as-a-Service - letting you focus on the application while Workload Deployer builds the necessary platform to deploy and manage your application.
However, Virtual System Patterns are still alive and well ... and quite frankly, this is where many people begin to explore the functionality provided in Workload Deployer. For many, it is a logical first step to start recreating familiar physical environments in the private cloud and then leverage these environments to develop and test their applications. It is also a great way to build out new applications using familiar concepts, leveraging existing scripts, and take full advantage of the agility, consistency, and increased resource utilization available in a Workload Deployer managed private cloud.
You may recall that virtual system patterns are sometimes called topology patterns because they are used to define a middleware topology configured to meet application requirements. With a virtual system pattern you define exactly the type of middleware configuration that you need for your application environment, and Workload Deployer provisions exactly that configuration when the pattern is deployed to your private cloud.
To use an automotive analogy, you might compare virtual systems to building your own hot rod from a molded frame, while virtual applications are more like purchasing a complete vehicle from a dealer. When you purchase a vehicle from a dealer you receive a fully functional automobile. Sure, you can choose the color and some options – but you don't necessarily know the details of all of the components that make your vehicle functional. Just add a driver (you) and off you go! This saves you substantial time and money while freeing you from the need to be an automotive engineer. As with the production vehicle, virtual applications are optimized for a specific purpose and are extremely effective when used for that purpose. All you need to do is add your application (the driver) and run-time requirements. Virtual system patterns are like the hot rod approach. You start with a molded frame of sorts (hypervisor edition images) – thereby saving time and effort so you don't have to start from scratch. However, you still have the responsibility and flexibility to create a very unique custom vehicle. Doing so requires more expertise and a greater time investment when compared to a production vehicle (virtual application), but you get to decide all of the details. With virtual systems you specify the exact vehicle you need for your application. This provides substantial flexibility but requires a deep knowledge of the middleware and an investment of time building the necessary scripts and other elements to support your application environment.
So as I mentioned, virtual system patterns are very popular. And if you have been following recent posts about the enhancements delivered in IBM Workload Deployer v3.1, you may have noticed that several of the features primarily focused on virtual applications have at the same time been extended to virtual system patterns - such as the shared caching service and the new base AIX image. So we certainly consider the virtual system deployment model to be important. IBM Workload Deployer v3.1 delivered new hypervisor edition images, and the IBM Image Construction and Composition Tool was bundled with Workload Deployer - primarily used for creating custom images to leverage in virtual system patterns. The IBM Image Construction tool is a substantial advancement in the ability to create your own custom base images.
To help communicate that we haven't been neglecting virtual system deployment patterns, I created a new demo to highlight this deployment model. The demo begins by providing a quick overview of the components that go into a virtual system pattern. It then shows how to clone a pattern to customize it for your own purpose, deploy it, monitor licenses, and monitor resource usage in your private cloud. Finally, it shows a quick demonstration of installing an emergency fix to a deployed virtual system instance.
I'll be showing this and other demos at IBM Pulse 2012 next week. I hope to see you there!
As many of you well know, virtual images are the foundation of virtual system patterns in IBM Workload Deployer. Whether you are using IBM Hypervisor Edition images or custom-built images produced by the IBM Image Construction and Composition Tool, every virtual system pattern has at least one virtual image as part of its foundation. So, if virtual images are the foundation of virtual system patterns, what is the foundation of these virtual images?
While you could probably make a good argument for a number of different things being the foundation of the virtual image (operating system, other installed software, etc.), I like to think that, at least in the context of IBM Workload Deployer, the activation engine inside the virtual image is the true foundation. Inside this activation engine, you will find a collection of scripts and services that are capable of configuring the virtual machine for use. Not only does this engine perform basic system-level actions like configuring the machine's hostname, IP address, time, and network interfaces, but it also configures the software on the inside of the virtual machine. For instance, the activation engine in the WebSphere Application Server Hypervisor Edition image is capable of fixing up profile information, federating nodes, creating application server clusters, and more. Best of all, in the case of IBM Hypervisor Edition images, you (the user) get all of this right out of the box. There is no extra logic to write or administrative tasks to undertake in order for you to benefit from the activation engine. It is simply there!
So, at this point you may ask yourself 'If all of this is included right out of the box, why do I need to care?' That is a fair question, but ultimately I feel it is always important to understand the foundational elements of any technology. In this respect, I do not feel like the activation engine in the IBM Hypervisor Edition images is any different. Lately, I have been telling my users to take at least a little time to understand what the activation engine is and even more importantly, what it is doing for you during deployment. Specifically, I always suggest taking a little time to look at the scripts in the activation engine -- most often found in the /opt/IBM/AE/AS directory of a virtual machine deployed by IBM Workload Deployer.
What can be gained by taking the time to peruse these scripts? I think most importantly, you will learn what the engine does for you and what you cannot do if you expect the image to deploy correctly. For instance, if you look in some of those activation engine scripts, you will see that the engine uses the sudo command in several places. While I know many of you may be tempted to remove the sudo command during extend and capture, doing so will break the activation engine. I have seen this happen multiple times, and trust me, if you did not know the activation engine used that command, it is not necessarily an easy problem to debug. This is a case where the value of at least superficially understanding the activation engine is clear.
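To make this concrete, here is a minimal sketch (in shell) of how you might scan the activation engine directory for scripts that depend on sudo before performing an extend and capture. The helper name is my own invention; the /opt/IBM/AE/AS path is the location mentioned above, and the exact layout may vary by image.

```shell
#!/bin/sh
# Hypothetical helper: list activation engine scripts that invoke sudo,
# so an extend and capture does not strip a command the engine needs.
# On a VM deployed by IBM Workload Deployer the scripts typically live
# under /opt/IBM/AE/AS; the directory is passed in as a parameter here.

list_sudo_dependencies() {
  # Recursively print the names of files that mention sudo.
  grep -rl 'sudo' "$1" 2>/dev/null
}

# Example usage on a deployed virtual machine:
#   list_sudo_dependencies /opt/IBM/AE/AS
```

Running this before you capture gives you a quick inventory of what the engine expects to find in place after deployment.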
Want another example? Okay, consider that you want to run WebSphere Application Server as a user called wasadmin. At pattern deployment time, it is easy enough to supply wasadmin in the appropriate field of the part configuration data and click OK. IBM Workload Deployer deploys the system and voila, WebSphere Application Server is magically running as wasadmin. Everything is fine so far, but let's take this a step further and say that you previously performed an extend and capture, and you installed software components in the image that should be owned by your wasadmin user. It is technically possible to define users during extend and capture and then install software content via that user, but if you also want to specify that user as the WebSphere Application Server administrative user at deployment time, you will run into an issue. This is because the activation engine runs the usermod command during deployment to change the existing default user, virtuser, into the user that you specify -- in this case wasadmin. If the usermod command attempts to change virtuser to wasadmin but wasadmin already exists as a user on the operating system, the command will not complete properly, and it is very likely you will see further errors downstream. A simpler way to do this is to create the user during extend and capture, install any components via that user, and then delete the user before capturing. You can attach a deploy-time script that fixes up the appropriate settings for wasadmin (like user ID and group ID), and it will run after the activation engine successfully does a usermod and changes virtuser to wasadmin. Problem averted!
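Sketched in shell, the deploy-time fix-up described above might look like the following. The user name, numeric IDs, and install path are illustrative assumptions, not values the product prescribes; for safety the function emits the commands a root-level deployment script would run after the activation engine has renamed virtuser, rather than executing them.

```shell
#!/bin/sh
# Hypothetical deploy-time fix-up: after the activation engine runs
# usermod to turn virtuser into the administrative user, restore the
# user/group ID and file ownership that the captured software expects.
# All names, IDs, and paths below are illustrative assumptions.

WAS_USER=wasadmin
WAS_GROUP=wasgrp
WAS_UID=1010
APP_DIR=/opt/myapp

fixup_cmds() {
  # Emit (rather than execute) the commands so this sketch is safe to
  # run anywhere; a real deploy-time script would run them as root.
  echo "usermod -u $WAS_UID -g $WAS_GROUP $WAS_USER"
  echo "chown -R $WAS_USER:$WAS_GROUP $APP_DIR"
}

fixup_cmds
```

In a real deployment you would attach logic like this as a script that runs after activation completes, executing the commands instead of printing them.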
In reading some of the above, I fully realize that it may be a little confusing at first. That said, I assure you that there is not much to it at all once you have a basic understanding of the activation engine. With a basic understanding of the activation engine in tow, you will know what you do not need to do (e.g. create profiles, federate nodes, etc.), what you cannot do (e.g. remove the sudo command), and what you can do with a little bit of reconciliation work (e.g. define your WebSphere Application Server administrative user during image extension). I encourage you to take a little time with your next deployment and give the activation engine a once over. You will undoubtedly have a better understanding of the deployment process, and you will ultimately be in a position to most effectively leverage virtual system patterns in IBM Workload Deployer.
Everybody likes having choices. This is true whether you are talking about lunch or deploying to a private cloud. When IBM Workload Deployer v3.0 was first introduced it included a pattern type for our Database-as-a-Service offering. The DBaaS PatternType v1 provided substantial value in an easy to use form factor to get a database up and running quickly and then provided the necessary tools to manage that environment. Pretty impressive for a first release! But the story doesn't end there. IBM Workload Deployer v3.1 brings an updated version of this pattern type that builds upon this foundation and adds even more capabilities and more choices.
Some of you may not be familiar with the Workload Deployer Database-as-a-Service offering, so let me give you a brief introduction. Database-as-a-Service patterns allow you to define and deploy database applications into your private cloud environment with speed and consistency. These offerings also provide integrated management and monitoring capabilities. The Database-as-a-Service capability can be used in conjunction with a web application pattern (Patterns -> Virtual Applications, IBM Web Application Pattern) by including a database component in a pattern and connecting the web application components to it. In this case the web application and database are deployed and managed as a unified solution with a common life cycle, as shown in the pattern below.
Database patterns can also be created and deployed as standalone entities (Patterns -> Database Patterns) that have their own life-cycle, independent of the virtual web application(s) that use the database. What's more, you can leverage these stand-alone databases from applications both inside and outside your private cloud.
Whether you use a stand-alone database pattern or one that is part of a web application pattern, the attributes and capabilities of the database are consistent.
So what is new in this release? For starters, the DBaaS PatternType has been renamed and the capabilities expanded. For Workload Deployer v3.1 the pattern is delivered as the IBM Database Patterns v1.1 and includes several elements to provide predefined configurations: the IBM Transaction Database Pattern and the IBM Data Mart Pattern.
Before we take a closer look at the new features, I just want to alert you to one thing. Before you can leverage any of these new features, you first need to accept the licenses and configure the plugins for the database pattern types. So look at the link and follow the directions if you would like to follow along and you aren't seeing the same options in your IBM Workload Deployer V3.1 system.
Using the screen shot above as a reference, let's take a look at what you can specify when creating a database pattern. You start with a name for the pattern and an optional pattern description. You also specify the maximum user data space size and an optional schema file. These are pretty basic and were all available in the previous release. Another really nice feature that has also been available since the first release is the ability to specify a compatibility mode for DB2 and Oracle (a nice feature if you are looking to move content from existing databases).
Some of the new enhancements appear in the middle of the view: the purpose and source fields. The purpose specifies whether this database is to be used for production or non-production (test and development). Your selection will optimize license management for deployed instances of this pattern.
The source field lets you specify a database configuration to be used to provision this database. You can choose from two different provisioning approaches: applying a workload standard or cloning from a database image. When choosing to apply a workload standard, you select between two predefined, optimized database configurations. These configurations will run a set of scripts to tune the operating system and instance configuration for the database. The departmental transactional standard is optimized for online transaction processing applications, while the data mart standard is optimized for data mining purposes and is therefore more suitable for reporting applications. If those aren't exactly what you want but you have an existing database, you can use the clone from a database image approach by selecting an existing database image backup as a model for the new database pattern. When using the clone method, metadata from the backup is retrieved and a DB2 restore command is used to set the same configuration for the new database instance. Reference the cloning from a database image topic in the IBM Database Patterns information center for more details.
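As a rough illustration of what happens under the covers, the sketch below composes the kind of DB2 restore invocation that the cloning step relies on. The database names, backup path, and timestamp are placeholders I made up; the product performs this for you, so consult the IBM Database Patterns information center for the authoritative behavior.

```shell
#!/bin/sh
# Hypothetical sketch: compose a DB2 restore command that recreates a
# source database's configuration as a new database - the idea behind
# "clone from a database image". All argument values are placeholders.

build_restore_cmd() {
  # $1: source database, $2: backup directory,
  # $3: backup timestamp,  $4: new database name
  echo "db2 restore database $1 from $2 taken at $3 into $4"
}

# Example (placeholder values):
build_restore_cmd SRCDB /backups 20120101120000 NEWDB
```

Workload Deployer issues the equivalent operation itself when it provisions the cloned database instance; you never have to run it by hand.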
Once the pattern has been created you can deploy the pattern to a target cloud group or an environment profile (another new feature for database deployments in IBM Workload Deployer V3.1).
I hope you can see the value that has been added with the source configuration choices and the ability to clone an existing configuration. They are certainly substantial new features of the Database-as-a-Service solution in Workload Deployer V3.1. However, there are a number of other significant enhancements that I would just like to mention as well. In other posts we've discussed the new ability to deploy virtual applications to run on AIX with a PowerVM hypervisor. As you might expect, this same ability is also available to deploy database patterns to run on AIX systems leveraging PowerVM. Management capabilities have also been significantly enhanced with the ability to configure automated database backups using the IBM Tivoli Storage Manager. These features and many other aspects of the Database-as-a-Service model are detailed in the IBM Database Patterns information center and the IBM Workload Deployer information center. My goal here has not been to replicate our product documentation; rather, it is to provide a few highlights and pointers to help you get started. I hope it has been useful.
You can be sure that we will continue enhancing and improving our Database-as-a-Service offering in IBM Workload Deployer. Please provide your feedback so that we can make it even better.
In the previous post I spoke about how a Virtual Application feature introduced in Workload Deployer v3.1 actually had benefits for Virtual System patterns as well. In that case I was talking about the ability to deploy Virtual Applications running on AIX to PowerVM hypervisors and how this had hidden benefits for Virtual Systems as well. This is a great example of how an enhancement to Virtual Applications can sometimes benefit Virtual Systems. However, this is not the only instance where the two pattern types intersect.
Several other new or enhanced features that are primarily for Virtual Applications are also being extended to benefit and improve Virtual Systems ... and vice-versa. One such area of improvement involves Shared Services in v3.1. These services were introduced in v3.0 specifically for the benefit of Virtual Applications. However, several enhancements have extended these capabilities to Virtual Systems, and likewise, some functionality that was previously only available to Virtual Systems has been extended to Virtual Applications in the form of Shared Services.
As you may already know, Shared Services were first introduced in v3.0 and are just what the name implies: services that are deployed by a cloud administrator and used by multiple virtual application deployments. Let's start by taking a look at the shared services available under Cloud -> Shared Services in v3.1. You will notice that there are now more shared services listed than there were in v3.0.
In addition to the familiar Caching Service and ELB Proxy Service (formerly Proxy Service) there are now additional entries for an External Caching Service and an External Application Monitoring Service. For simplicity let's just start from the top and go down the list, discussing the function of each service, what is new/improved for v3.1 with regard to virtual applications, and when applicable how this service can be used by virtual systems.
The Caching Service was introduced in v3.0. Its primary purpose is to cache HTTP session data using a highly scalable and fast in-memory cache. This is the same core technology that is included in our WebSphere eXtreme Scale and DataPower XC10 Caching appliance. To make use of this service all you need to do is deploy an instance of the Caching Service with the configuration parameters of your choice into a cloud group where you want to leverage that service. As you create virtual application patterns you simply select the Enable session caching check-box when you add a scaling policy. When this pattern is deployed it will be automatically configured to leverage the Caching Service for session persistence. It's as simple as that.
Several new features were introduced in v3.1 for the Caching Service. First, the Caching Service can now be launched with parameters to define the behavior for automatic scaling to meet the ever changing demands of your applications. Once set, Workload Deployer will manage this service to ensure sufficient capacity based upon your requirements, adding or removing containers. Second, and this is significant for Virtual System patterns, the caching service has been enhanced to add new operations to support listing, creating, and deleting various types of object grids. You can then use the WebSphere eXtreme Scale ObjectGrid APIs to persist and manage content in the grid from your application code from Virtual System deployments. This saves you the trouble of creating and configuring your own caching service for these purposes outside of the cloud and permits sharing of the service you have already configured - a nice savings.
Caching Service (External)
The External Caching Service is one of the new additions for v3.1. Let's say that you have already configured a caching solution for your enterprise using the DataPower XC10 appliance or a collective of appliances. It would be nice if you could leverage that same solution instead of launching yet another caching solution within your private cloud. Leveraging your existing solution consolidates your caching needs and preserves cloud resources for other purposes. With this new external caching service you can do just that. It gives you the ability to leverage an external caching solution for your Virtual Application session persistence needs as well as your Virtual System and even non-cloud caching needs. Just point an instance of this external caching service at your DataPower XC10 caching solution, and all of the HTTP session persistence needed by your virtual applications in the same cloud group will make use of it. You can also point multiple instances of the external caching service, in multiple cloud groups, at the same XC10 appliance or collective.
Monitoring Application (External)
With the External Application Monitoring service you can deploy a service reference within a cloud group that points at a Tivoli Enterprise Monitoring Server (TEMS) installation outside of the cloud. The TEMS server must be at version 6.2.2 Fix Pack 5 or later. Once created, the UNIX or Linux OS monitoring agents and the Workload monitoring agent provided for virtual application workloads will automatically connect to the defined Tivoli server instance using the supplied primary and fail-over Tivoli Enterprise Monitoring Server, protocol, and port. This is especially useful if you want to consolidate all of your monitoring to a common console. As with the External Caching Service, this enhancement extends the integration capabilities of Virtual Application Patterns beyond the scope of your private cloud and allows you to consolidate and leverage investments you have already made.
ELB Proxy Service
The Proxy Service was first introduced in v3.0 and renamed to the ELB Proxy Service in v3.1 for clarity. As the name implies, its primary purpose is to provide routing and load balancing to multiple deployed web applications. As with the caching service, you deploy this service based upon your requirements for load and availability within a cloud group. When defining virtual application patterns to leverage this service, you simply add a routing policy and define your virtual host name. When the virtual application pattern instance is deployed to the cloud group, the necessary configuration will be performed to add the virtual host name and configure your application environment to use the ELB Proxy Service. New in v3.1 is the capability to scale the ELB Proxy Service itself to meet the changing demands of your application mix.
One other item that I should point out (and to which I've already alluded) is that you can now deploy multiple instances of each of the shared services - one per cloud group. Shared services can also now be deployed using environment profiles. This was not the case in v3.0, where each service was a singleton for the appliance. Allowing multiple instances of shared services gives you the flexibility to configure the sharing of your services as necessary for your particular environment.
I hope this post has provided a useful overview of the value of shared services and the new capabilities introduced in v3.1. I also hope that you can see how these services make it easier to implement your solutions for both virtual applications and virtual systems within a private cloud environment and shed a little light on how we are continuing to improve IBM Workload Deployer. As always, these improvements are driven by the feedback we receive from you so please let us know what you think!
When IBM Workload Deployer V3.0 was introduced last year, one of the "hidden" values that it delivered was a base image used for virtual application patterns. I say "hidden" because this image, while delivered primarily for use in virtual application patterns, could also be leveraged for virtual system patterns. By now you may be scratching your head and wondering just what I'm talking about. Let me explain...
To begin with, it is helpful to understand a little bit about how virtual applications are deployed and how that differs from virtual system patterns. As you may already know, virtual system patterns are built from IBM Hypervisor Edition images to launch the virtual machines for your deployment. The IBM Hypervisor Edition images include the Operating System and middleware components together in the image. Therefore, building a virtual system pattern basically starts with a fairly complete image and activates the parts in that image necessary to fulfill the particular role the virtual machine will play in a virtual system pattern. Virtual application patterns take a somewhat different approach. The starting point for a virtual application pattern is the base image, which includes only the base Operating System. Workload Deployer launches a virtual machine with just this base image, and then the appliance manages installation, configuration, and integration of software and applications to complete the role this virtual machine must fulfill for the virtual application pattern. At a high level, you could consider virtual system patterns a template approach and virtual application patterns more of a build-it-as-you-need-it approach.
So just what is the "hidden" value of these base images provided for virtual application patterns, and how can it be used for virtual system patterns? The hidden value is that the base images used for virtual application patterns are delivered with IBM Workload Deployer in the image catalog and can be used for building virtual system patterns. If you already have an appliance you can take a look ... you will see the base images there under Catalog > Virtual Images, right alongside more familiar images like the IBM Hypervisor Edition images for WebSphere Application Server. For x86 systems this image is appropriately named "IBM Workload Deployer Image for x86 Systems". These images each include a base part called "Core OS" that can be included in a virtual system pattern.
So now you may be saying to yourself - well that's all great news but what is new about this? The new thing is that in IBM Workload Deployer V3.1 a significant new feature was added - the ability to deploy virtual application images to PowerVM environments using AIX. To enable that feature a base image was created for AIX, the "IBM OS Image for AIX Systems." As with the x86 image, this new image is now also available for your use in the image catalog. You can now employ that default AIX image for your own needs in virtual systems patterns - creating a very nice extension mechanism for PowerVM and AIX users.
This new base image contains the IBM AIX 6100-05 operating system and the Core OS part that you can include in virtual system patterns. As with the x86 base image delivered earlier, there are no restrictions on how you use or customize this image. To make it suitable for your purposes you can employ the IBM Workload Deployer extend and capture capability to install additional software content into the image. You can also enhance this image using the IBM Image Construction and Composition Tool (ICCT) that is now included with IBM Workload Deployer v3.1. When you include this part in a virtual system pattern you can also associate any configuration scripts that you may need, just as you would with any other part. Just as with the x86 part - this provides substantial value and a significant convenience for AIX users.
I hope this clues you in on the "hidden" benefits of a substantial new feature included in IBM Workload Deployer V3.1. We have often been asked to provide base OS images to build upon as starting from scratch is sometimes difficult when you need to create your own custom image. Now, with IBM Workload Deployer v3.1, you have your choice of two default images in addition to the many IBM Hypervisor Edition images delivered as well as a robust set of new features in IBM Workload Deployer V3.1!
In the previous post Dustin shared a great video demonstrating the value of the IBM Image Construction and Composition Tool that is now delivered with IBM Workload Deployer V3.1. This is certainly one of the key new features of IBM Workload Deployer V3.1. However, there are also a number of other compelling enhancements and features that we would like to communicate.
I created the attached video to highlight some of the features included in the new Workload Deployer release. The video uses the web console to walk through these features and capabilities, giving a brief introduction to each one. Without going into a lot of depth, I think it gives a nice overview. This may be especially helpful if you already have Workload Deployer v3.0 and want to see the value you will get when you upgrade to Workload Deployer v3.1. Check it out.
We believe that these new features make IBM Workload Deployer V3.1 an even better solution for your private cloud needs. Please let us know what you think.
Lately Joe and I have been pretty vocal about bringing up the new IBM Image Construction and Composition Tool capabilities in IBM Workload Deployer v3.1. While writing about such new capabilities is always good, I think seeing is believing. In that light, I hope you will take a look at the recent demo I put together that shows how to use the Image Construction and Composition Tool with IBM Workload Deployer v3.1!
In a recent post, Joe Bohn detailed some of the new capabilities and enhancements that come along with the recently delivered IBM Workload Deployer v3.1. To be sure, there are many valuable new features such as PowerVM support for virtual application patterns, the Plugin Developer Kit, WebSphere Application Server Hypervisor Edition v8, and more. Each of these topics probably merits its own post, but today I want to talk about something I did not mention above. Specifically, I want to talk about the announcements regarding the IBM Image Construction and Composition Tool (ICCT) and what they mean for IBM Workload Deployer users.
You may have read an earlier post that I wrote about the ICCT, but allow me a brief overview here. In short, the ICCT enables the construction of custom virtual images for use in IBM Workload Deployer. You use the tool to create virtual images, much like IBM Hypervisor Edition images, and then you can use those custom images (containing whatever content you need) to create your own custom virtual system patterns. The key point about the custom images you create with the ICCT is that they are dynamically configurable. That is, the tool helps you to create the images in such a way that you can defer configuration until deploy time rather than burning such configuration directly into an image. For those of you familiar with virtual image creation, you know this type of 'intelligent construction' is a huge step towards keeping image inventory at a reasonable level.
Okay, enough of a general overview for now. Let's talk about the two new items of note regarding IBM Workload Deployer v3.1 and the ICCT. The first thing you should know is that starting in IBM Workload Deployer v3.1, the ICCT is shipped with the appliance. This means that you do not need to go anywhere else in order to get your hands on the tool to start creating your custom images. You simply log into IBM Workload Deployer and click the download link on the appliance's welcome panel (shown in the image below).
Getting your hands on the tool is one piece of the puzzle, but using it is quite another. While the ICCT has been available as an alphaWorks project for some time, that also means there has never been official support for the tool. That changes starting with IBM Workload Deployer v3.1. The ICCT is now a generally available product from IBM, which means it is fully and officially supported as well. Further, the images you create using the tool are also officially supported for use as building blocks of your IBM Workload Deployer virtual system patterns. If you have been using the ICCT for some time but have been hesitant to expand its use because of the lack of a formal support statement, you should now feel free to charge forward!
I hope this helps clear up exactly what the new Image Construction and Composition Tool announcements that were part of IBM Workload Deployer v3.1 actually mean. I cannot wait to hear about how you all are putting the ICCT to use with IBM Workload Deployer. Finally, don't forget to send us any questions, comments, or other feedback that you may have regarding this or any other new feature in IBM Workload Deployer v3.1!
IBM Workload Deployer v3.1 firmware has been released and is available for download. V3.1 includes many improvements and enhanced features, building upon the solid foundation laid in V3.0 and earlier releases of the WebSphere CloudBurst Appliance. Dustin already alluded to a few of these in his previous post, but let me list some of the more prominent new features here:
The groundbreaking capabilities offered in our Virtual Application Patterns have been extended to include deployments for AIX on PowerVM - giving you more choices for your private cloud environment. Along with this support, a new base operating system image for AIX is also available for extension using either extend and capture or the IBM Image Construction and Composition Tool. Of course, Virtual System Patterns continue to be supported on all three private cloud hypervisors: VMware, PowerVM, and z/VM.
A new version of the Web Application Pattern Type (formerly WebApp Pattern Type) has been released. The Web Application Pattern Type V2.0 is built upon the feature rich WebSphere Application Server V8.0 release.
The DBaaS Pattern Type has been updated and is now the IBM Database Patterns 1.1 which includes both the IBM Data Mart Pattern 1.1 and the IBM Transactional Database Pattern 1.1 (OLTP - the default). These pattern types support a broader range of offerings for both production and non-production use. You can choose to create a new type of workload standard to apply to the DB instance or you can choose to clone an existing DB image that has been backed up to your DB image catalog repository.
A number of improvements have been made to the shared services leveraged by Virtual Application patterns. The caching service used to persist session data when scaling a web application can itself now be configured to scale, adjusting to increased demand. We have also extended the shared services to support external caching services and to leverage an external monitoring service based upon Tivoli Enterprise Monitoring Server (TEMS). You can also deploy multiple instances of shared services by deploying to multiple cloud groups.
The Plugin Developer Kit that was previously released to support building your own plugins and pattern types for Virtual Application patterns is now available for download directly from the IBM Workload Deployer dashboard - making it even easier to gain access and experience using this extension mechanism to deliver your own custom plugins and pattern types.
Images created using the IBM Image Construction and Composition Tool are now fully supported in IBM Workload Deployer V3.1 Virtual System patterns. Furthermore, the IBM Image Construction and Composition Tool is now a generally available product that is fully supported and available for download directly from the IBM Workload Deployer dashboard.
Speaking of Virtual System Patterns - a new hypervisor edition image of WebSphere Application Server V8 is now delivered with the appliance. WebSphere Application Server V8 fully supports the Java EE 6 programming model and includes many other programming models directly in the base image that were previously delivered only as feature packs, including OSGi, JPA, and many more.
One item already mentioned by Dustin is the ability to configure multiple IBM Workload Deployer appliances in a master/slave relationship with a floating IP address to support continuous availability in the event that the master becomes unavailable. This feature can also be leveraged to support continuous operation while performing maintenance.
Another key appliance improvement is increased appliance security through the introduction of several new security roles for separation of duties. This ensures that no single user has unrestricted control without oversight. Among the new roles is an auditing role, with auditing operations that provide data for forensic analysis of security attacks and better support compliance with the Health Insurance Portability and Accountability Act (HIPAA) and the Sarbanes-Oxley Act (SOX).
We believe that these new features and several more make the value proposition delivered by IBM Workload Deployer V3.1 an even more compelling offering - one that can increase agility and consistency and reduce time to value for your applications. You can download IBM Workload Deployer V3.1 from IBM Fix Central. Please let us know what you think!
If you follow this blog often, you know that from time to time I like to post frequently asked questions. Well, it's been a while since I have done that, and since then I have added some new questions to my list -- along with some regulars. Take a look below, and if I don't answer your question feel free to leave a comment!
Can IBM Workload Deployer deploy software that is not IBM software? Yes. You can use one of the included images as a springboard and customize them with your own software via extend and capture. Additionally, you can use the IBM Image Construction and Composition Tool (I'm getting ahead of myself here) to create your own custom images from the ground up and use those within IBM Workload Deployer.
Can I use VMotion for the systems I deploy with IBM Workload Deployer? Yes. IBM Workload Deployer has tolerated the use of VMotion since the WebSphere CloudBurst days (see the Additional Considerations section on this page for more information). IBM Workload Deployer v3 introduced the notion of virtual machine mobility initiated directly from the appliance. This capability takes advantage of VMotion in the case of VMware-based cloud environments.
Can IBM Workload Deployer deploy just a base operating system? Yes. IBM Workload Deployer v3 introduced a base operating system image that contains 64-bit Red Hat Enterprise Linux. Internally, IBM Workload Deployer uses this as the foundation on top of which virtual application patterns are deployed. You can use it to deploy virtual machines containing just the base OS, or you can customize it to deploy software of your choosing. (As an aside, IBM Workload Deployer v3.1 will include a base operating system image for AIX)
Can I automate the process of calling/using IBM Workload Deployer? Yes. IBM Workload Deployer is built to fit a specific need -- creating and managing a cloud of middleware and middleware-based workloads. In that light, it would be a shortcoming if IBM Workload Deployer did not fit well into more holistic or enterprise-wide cloud management systems. The REST API and CLI allow you to automate the use of IBM Workload Deployer, thereby allowing it to be mashed up into other processes.
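As a sketch of what such automation can look like, the snippet below assembles an authenticated HTTP request for a pattern listing. The host name, resource path, and credentials are illustrative placeholders, not documented IBM Workload Deployer endpoints; consult the product's REST API reference for the real URIs:

```python
import base64

def build_list_patterns_request(host, user, password):
    """Build the URL and headers for a hypothetical pattern-listing call."""
    creds = base64.b64encode(f"{user}:{password}".encode()).decode()
    url = f"https://{host}/resources/patterns"  # placeholder path
    headers = {
        "Authorization": f"Basic {creds}",  # HTTP basic authentication
        "Accept": "application/json",
    }
    return url, headers

url, headers = build_list_patterns_request("iwd.example.com", "admin", "secret")
# Hand the result to any HTTP client (urllib.request, curl, etc.), or embed
# the same call in a wider provisioning workflow.
```

The point is less the specific call than the shape of the integration: any script or orchestration tool that can issue HTTP requests can drive the appliance.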
Can I group two appliances together for high availability? Yes. IBM Workload Deployer v3.1 introduces the ability to configure appliances in a master/slave setup. You can connect two appliances, allow them to share a floating IP address, and be confident that data is continuously replicated between the two. If one appliance fails, the other appliance picks up the floating IP ensuring continuous service.
Are images created using the Image Construction and Composition Tool supported for use within IBM Workload Deployer? Yes. Part of the new IBM Workload Deployer 3.1 announcement was a statement of support for using images created by the Image Construction and Composition Tool as a component of your virtual system patterns. This is a very important enhancement as it allows you to extend the set of content deployed by IBM Workload Deployer while being sure that you are operating within the boundaries of intended use.
Can I use IBM Workload Deployer to provision to public clouds? No... and yes. If you install an IBM Workload Deployer appliance in your datacenter, you cannot use it to deploy to a public cloud environment. However, you may have recently heard about the IBM SmartCloud Application Services portfolio. IBM has announced that the pattern-based provisioning that one gets with IBM Workload Deployer will also be available as part of this portfolio. This means that you will be able to build and deploy patterns using a service hosted on the IBM SmartCloud. Further, your deployed systems will run on the IBM SmartCloud. Check out this demo for more information.
** IBM Workload Deployer 3.1 firmware is available on 11/18.
For those of you basically familiar with IBM Workload Deployer, you are likely aware that the appliance has many different capabilities. On the surface it is a cloud management device for middleware and middleware applications. Of course, there are quite a few details that are important to understanding the functionality provided, and I spend quite a bit of my time talking with various users and potential users about these details. One thing I have noticed that can become an obstacle to effective communication regarding IBM Workload Deployer is the lack of a commonly understood language. I sometimes find that the user and I are simply using different terminology to describe the same thing. As you can imagine, this just serves to create confusion, and neither party gets the most out of the conversation.
In order to combat this communication gap, I thought I would put together a simple presentation that introduces and defines IBM Workload Deployer terminology. Check it out below (you can also download it here):
While the presentation does not dive deep into the terms it introduces, it does provide a basic definition and illustrative example of each. My hope is that this fosters an understanding of some of the basic concepts in IBM Workload Deployer, and ultimately pushes us towards a common vernacular. Please let me know what you think!
One of the things that often comes up at some point in IBM Workload Deployer conversations is the notion of self-service access. Specifically, users want to know what the appliance provides that enables them to allow various teams in their organization to directly deploy the middleware environments they need. In other words, they want to use IBM Workload Deployer to tear down the traditional barriers that exist between those that request the environment and those that fulfill said request. Now, as we begin to elaborate on this notion, it becomes quickly apparent that in order to effectively enable self-service, IBM Workload Deployer must deliver a few things.
First, IBM Workload Deployer must provide the means to define users with various levels of access. Second, IBM Workload Deployer must provide the means to define resource access at a fine-grained level to different users and groups of users. Check and check. The appliance has been doing this since the beginning of WebSphere CloudBurst. Without those two things, the conversation of self-service access would end pretty quickly. However, there is a final capability that is equally important: IBM Workload Deployer must deliver a means to limit resource consumption at a fine-grained level.
In IBM Workload Deployer there are a couple of ways to achieve this. First, you could define multiple cloud groups and allow access to those groups in a way that maps directly to resource entitlements. While that may work in some situations, others call for even more granularity. You may want to allow multiple different users or groups to access a cloud group, but you may want to allow different consumption limits for each of these groups. In this situation, you can take advantage of environment profiles and a new option when defining users of IBM Workload Deployer.
Consider the case that you have a group of developers and you want to limit their consumption of memory in the cloud. First, you start by defining your development users and for each you select Environment Profile Only as the value for the Deployment Options field.
By selecting the above value for the deployment options of a user, you restrict that user to only deploying via an environment profile as opposed to general cloud group deployments. After defining all of your development users, you may choose to organize them into a user group for easier management. At that point, you can define environment profiles and determine which ones your developers should have access to using the Access granted to field of the profile.
Within the environment profile, you can define resource consumption limits for compute resource and software licenses. For instance, you can define a limit on the amount of virtual memory consumed by all deployments using the profile. It is important to note that the limit is cumulative for ALL deployments that use the profile.
Now that all of the controls are in place, consider the deployment process for one of your development users. They pick a virtual system pattern, click the deploy icon and begin to configure the pattern for deployment. In the Choose Environment section of the deployment dialog, your development user will only be able to select the Choose profile option for deployment. Further, they will only be able to deploy using the environment profiles to which they have access.
After the deployment completes, a look at the Environment limits section in the profile shows the current usage totals.
Now suppose another development user, or even the same one, comes along and attempts to deploy another virtual system pattern even though the profile limits have already been reached. The user can initiate the deployment, but they will get a near-immediate failure because the deployment would exceed the consumption limits if it were allowed to proceed.
The same kind of enforcement occurs regardless of the resource limit type. You can use this approach to limit the consumption of CPU, virtual memory, storage, or software licenses among the various different users or groups of users you define in IBM Workload Deployer. If you combine fine-grained resource consumption limits with varying permissions and fine-grained access, I think you are on the road to truly enabling self-service in the enterprise.
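The cumulative-limit enforcement described above can be sketched as follows. The class and the numbers are invented for illustration and are not Workload Deployer internals; the point is that the check runs against the running total for the profile, not against any single deployment:

```python
class EnvironmentProfile:
    """Toy model of an environment profile with a cumulative memory limit."""
    def __init__(self, memory_limit_mb):
        self.memory_limit_mb = memory_limit_mb
        self.memory_used_mb = 0  # total across ALL deployments using the profile

    def try_deploy(self, requested_mb):
        # Reject up front if the cumulative total would exceed the limit.
        if self.memory_used_mb + requested_mb > self.memory_limit_mb:
            return False  # the near-immediate failure the user sees
        self.memory_used_mb += requested_mb
        return True

profile = EnvironmentProfile(memory_limit_mb=8192)
assert profile.try_deploy(6144)      # first deployment fits
assert not profile.try_deploy(4096)  # second would exceed the cumulative limit
```

The same shape applies whether the limited resource is virtual memory, CPU, storage, or software licenses.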
A couple of weeks ago, I dropped by the Intel Developer Forum to present a session and listen in on a few others. As always in these types of shows, I learned quite a bit. Most strikingly though, I was reminded of something that is probably quite obvious to many of you: Consumer interest in cloud computing will not be letting up any time soon.
Based on this, and some of the other things I heard at the show, I decided to catch up with fellow IBMer Marc Haberkorn. Marc is an IBM Product Manager and is responsible for IBM Workload Deployer amongst other things. I asked him about IBM Workload Deployer, the competition, and cloud in general. Check out what Marc had to say below:
Me: IBM Workload Deployer is one of a growing wave of cloud management solutions. How do you differentiate its focus and business value versus the myriad of other solutions out there?
Marc: To sum it up, we offer a combination of depth and breadth. IWD delivers both workload aware management and general purpose management. Workload aware management differentiates IWD from its competition, as it can deliver more value for the set of products for which it has context. There is a set of actions that workload aware management tools can do that is normally left to the user by general purpose management tools. This list includes configuring a middleware server to know its hostname/IP address, configuring multiple middleware servers to know of one another, arranging clusters, applying maintenance, and handling elasticity. By handling more of these activities in the automated flow, there are fewer chances for manual errors and inconsistencies to enter a managed environment.
That said, without infinite resource or time, it’s impossible to deliver this context-aware management for everything under the sun. As such, in order to allow IWD to deliver differentiated value AND allow it to handle a customer's entire environment, we offer a mix of workload-aware management and general purpose management.
Me: VMware is a good example of a company active in the cloud space, and they seem to keep a consistent pace of new product delivery. What do you think of their product development focus?
Marc: I think VMware has built a very compelling set of capability in the virtualization space. I think the main difference between VMware's suite and IBM Workload Deployer is the perspective from which the environments are managed. VMware puts the administrator in the position of thinking about infrastructure from the ground up. The administrator is thinking about virtual images, hypervisors, and scripts. In IBM Workload Deployer, we think about things from the perspective of the app, because that's ultimately what the business cares about. By providing a declarative model through which an application can be instantiated and managed, we feel we deliver a deeper value proposition to clients, through workload-aware management.
Me: The 'one tool to do it all' approach is a popular, if hard to achieve, goal. What is your advice to users when it comes to choosing between breadth and depth for cloud management solutions?
Marc: The advantages of a "one tool to do it all" are many: less integration, more uniformity, less complexity. As such, customers will always prefer a single tool when possible. This is why IBM Workload Deployer has focused on not only providing differentiated, deeper value for common use cases but also providing a way to handle the "everything else." As such, my advice to users is not to choose between breadth and depth - use IBM Workload Deployer which offers both.
Me: To close, I'm curious to know where you think we are heading in the cloud market. What do you think users will be most readily adopting over the next one to two years? Where does the cloud industry need the most innovation?
Marc: I think most users are currently looking at the broad picture of cloud computing, and have been adopting primarily in the private cloud realm. There are several reasons for this. One reason is that many customers have a large set of hardware resources which amount to sunk cost that needs to be leveraged. Another reason is around data security concerns in off-premises clouds, and still another reason is around the human factor of comfort, which has taken time to develop around off-premises cloud models. However, businesses have become increasingly comfortable with various sources of outsourcing in recent years, especially in mission critical areas involving very sensitive data. Just look at IBM's Strategic Outsourcing business, which handles entire IT operations for many large businesses. I think that trend will (and really, has already begun to) continue in the area of cloud computing, and will lead to more public and ultimately hybrid cloud computing adoption. In order to get to hybrid cloud computing, I see much of the focus and innovation being associated with data security, workload portability (across private and public, in a seamless fashion), and license transferability between private and public. When this space reaches fruition, clients will be able to enjoy true elastic economics in a computing model that allows a mixture of owning and renting compute resources and software licenses.
Sorry for the late notice, but I just realized that I hadn't blogged about a webcast I am participating in tomorrow (Tuesday, 9/13)!
Chris Brealey (a Senior Technical Staff Member and Rational Enterprise Architect) and I are hosting an InformationWeek webcast tomorrow (Tuesday, 9/13) entitled "Quickly and Efficiently Design, Develop, Deploy, and Test Workload Application Patterns to Save Months and Millions". I encourage you to register now for this free event (or, if you can't make it tomorrow, listen to it at your convenience, as it will be recorded ... but you still need to register).
I'm really looking forward to this webcast. IBM Workload Deployer's predecessor, the WebSphere CloudBurst Appliance, delivered unmatched capabilities for middleware deployment and management using Virtual System patterns (topology) - delivering complete middleware topologies in a rapid, consistent, and repeatable fashion. This has greatly improved the ability of development and test organizations to meet the ever-increasing demands of today's agile development processes, in addition to the assurance it provides for production environments. All of that value is still present (and improved) in IBM Workload Deployer, but there is even more value in the new Virtual Application patterns, as we've mentioned in previous posts.
Virtual Applications build upon the same notion of consistency and speed found in Virtual Systems while introducing a radical simplification to hosting your applications. Using an application-centric, declarative approach with Virtual Applications (workloads), it is even easier to deliver your applications rapidly, leaving Workload Deployer to ensure the middleware environment is constructed and optimized to meet your application criteria. Virtual Applications usher IBM Workload Deployer into the realm of Platform-as-a-Service ... with even greater simplicity and agility to host your application in the most efficient fashion. As with Virtual System patterns earlier, we expect the introduction of Virtual Applications to continue to improve the dev/test lifecycle as well as production. The robust capabilities of Rational Application Developer and the simplicity of Virtual Application patterns in Workload Deployer make for a great combination.
I will start off the webcast with a discussion of PaaS and IBM Workload Deployer Virtual Application patterns. Chris will then discuss the application development process and how it is influenced by the introduction of the cloud environment. Next, he will explore the integration available in Rational Application Developer for IBM Workload Deployer. Finally, we will walk through a scenario that demonstrates how to leverage Virtual Application patterns in IBM Workload Deployer to design a solution that is then shared with the developer. Using Rational Application Developer, the developer delivers the application into the pattern and moves it to test and finally pre-production. We will end with a question-and-answer session. I hope you can join us as we explore how we can use these technologies to increase agility and efficiency.
Script packages are an integral part of virtual system patterns in IBM Workload Deployer. By attaching script packages to your patterns, you provide customizations particular to your unique cloud-based middleware environments. Customizations provided by script packages might include installing applications, creating application resources, integrating with external enterprise systems, and much more. The bottom line is, if you are creating virtual system patterns, you will almost certainly be creating script packages.
Largely, the act of creating a script package is independent of IBM Workload Deployer. The appliance does not dictate a particular scripting language, so all you need to do is make sure you can invoke your logic in the operating system environment. Your script package may be a wsadmin script, shell script, Java program, Perl script, and on and on. After you create the actual contents of your script package, you will then load that asset into the IBM Workload Deployer catalog.
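To make that concrete, here is a minimal sketch of what a shell-based script package entry point might look like. The file name, argument handling, and messages are all hypothetical; the point is simply that the script is a self-contained executable the appliance can invoke in the guest operating system, with its arguments supplied from the script package definition:

```shell
#!/bin/sh
# install_app.sh -- hypothetical entry point for a script package.
# IBM Workload Deployer invokes the executable command you define in
# the catalog, passing along any command arguments you configured.

install_app() {
    # First argument: the application name (illustrative), with a default.
    app_name="${1:-MyApp}"
    echo "Installing ${app_name}"
    # Real logic would go here: run wsadmin, copy files, configure
    # resources, integrate with external systems, and so on.
    echo "Done"
}

install_app "$@"
```

Because the appliance simply runs the command in the operating system, the same pattern applies whether the logic lives in a shell script, a wsadmin script, or a Java or Perl program.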
Once loaded into the catalog, you define several attributes of your script package, including the executable command, command arguments, variables, execution time, and more. The process for defining these attributes is trivial using the intuitive UI in IBM Workload Deployer, but I wanted to take a little time to remind you of a technique I recommend to all users defining script packages. You can actually package a JSON file within the script package that defines all of the script's attributes. The format of the file is simple, and I am including an example below:
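A representative cbscript.json is shown below. The field names follow the format the catalog expects, but the specific name, paths, command, and variable values here are purely illustrative:

```json
{
   "name": "Install sample application",
   "version": "1.0.0",
   "description": "Installs a sample application using a shell script",
   "location": "/tmp/installapp",
   "command": "/bin/sh /tmp/installapp/install.sh",
   "commandargs": "$APP_NAME",
   "timeout": "0",
   "log": "/tmp/installapp/logs",
   "keys": [
      {
         "scriptkey": "APP_NAME",
         "scriptvalue": "MyApplication"
      }
   ]
}
```

Here, location is the directory where the archive is unzipped, command and commandargs define how the script is invoked, and each entry in keys becomes a variable whose value can be overridden at deployment time.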
An example like the one above shows the basics of which you need to be aware. Notice that in the JSON file, you can provide a name, description, unzip location, executable command, command arguments, variables, and more. You only need to ensure that the file is named cbscript.json and that you include it at the root of the script package archive. Once you have done that, you load the script package archive into the catalog, refresh the script package details, and voila -- all the attribute definitions appear!
You may ask why I recommend this, since it could seem like an unnecessary step. My answer is that you have to define these attributes anyway, so you might as well capture them once in the file. Once captured there, you can ensure that if the same script needs to be reloaded, or if you need to move it to another appliance, its definition will be exactly the same (and presumably correct). I use this approach for all of my work, and for all of the samples I contribute to our gallery, and it saves me a lot of the misplaced effort that can result from typos. If you are out there creating script packages, try adopting this approach. I'm pretty sure you will be happy you did!