When IBM Workload Deployer v3.0 rolled around, the appliance introduced the concept of shared services. These were services that a cloud administrator could launch into the cloud infrastructure defined to IBM Workload Deployer, and use to serve a number of different application deployments. There were, and continue to be, two main shared services: a proxy service and a cache service. The shared proxy service does pretty much what you might guess. It provides request routing across multiple instances of many different applications, thereby providing a centralized resource that encapsulates this basic need in an application environment. You can probably also guess what the caching service does. It caches things! Specifically, in IBM Workload Deployer v3.0 it provided in-memory caching of HTTP sessions, thus ensuring high availability of data stored in those sessions.
Undoubtedly, the ability to make HTTP session data fault tolerant is critical in any application environment, cloud-based environments included. However, the applicability of a shared cache service reaches much further, and in IBM Workload Deployer v3.1, we are starting to open this service up to your applications. What does this mean to you? Quite simply, it means that you can now access this cache directly from your application code. If you are familiar with WebSphere eXtreme Scale or the DataPower XC10 Caching Appliance, then you know exactly what I mean. You can use the WebSphere eXtreme Scale ObjectGrid API to insert, read, update, and delete entries in the in-memory cache. The underlying cache technology is based on the same code that powers WebSphere eXtreme Scale and DataPower XC10, so you can be sure that your cache is scalable, fault tolerant, responsive, and otherwise able to meet the needs of your application.
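To give a feel for what that application code looks like, here is a minimal Java sketch of the ObjectGrid client API performing the four basic cache operations. The catalog service address, grid name, and map name are placeholders for illustration only; you would replace them with the values from your own deployed caching service.

```java
import com.ibm.websphere.objectgrid.ClientClusterContext;
import com.ibm.websphere.objectgrid.ObjectGrid;
import com.ibm.websphere.objectgrid.ObjectGridManager;
import com.ibm.websphere.objectgrid.ObjectGridManagerFactory;
import com.ibm.websphere.objectgrid.ObjectMap;
import com.ibm.websphere.objectgrid.Session;

public class CacheClient {
    public static void main(String[] args) throws Exception {
        ObjectGridManager manager = ObjectGridManagerFactory.getObjectGridManager();

        // Connect to the caching service's catalog server (placeholder endpoint).
        ClientClusterContext ctx = manager.connect("cachehost:2809", null, null);

        // "MyGrid" and "MyMap" are assumed names for this sketch.
        ObjectGrid grid = manager.getObjectGrid(ctx, "MyGrid");
        Session session = grid.getSession();
        ObjectMap map = session.getMap("MyMap");

        map.insert("customer:42", "Jane Doe");          // create an entry
        String value = (String) map.get("customer:42"); // read it back
        map.update("customer:42", "Jane Q. Doe");       // update in place
        map.remove("customer:42");                      // delete the entry
    }
}
```

The same ObjectMap calls work whether the backing grid lives in the shared caching service, a standalone WebSphere eXtreme Scale grid, or a DataPower XC10 collective, which is exactly why code written against this API carries over between those environments.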
As I hope you find to be the case with many IBM Workload Deployer capabilities, this is a superbly simple capability to leverage. When you deploy virtual application patterns based on the IBM Workload Deployer Pattern for Web Applications, the capability is simply there. The underlying runtime that is serving your application is automatically augmented with the capabilities necessary so that your applications can connect to and utilize the deployed caching service. It is also worth pointing out that you can utilize the caching capabilities provided by this shared service for applications and application infrastructure that you deploy via virtual system patterns as well. You can either choose to augment the WebSphere Application Server environment with the XC10 Feature Pack (a deploy-time option for virtual system patterns built on WebSphere Application Server Hypervisor Edition v8), or you can configure WebSphere Application Server as you always would when integrating with a WebSphere eXtreme Scale environment or a DataPower XC10 Appliance.
What's the real benefit to all of this, you ask? Well, when you use the shared caching service, you get the benefits of a distributed, in-memory, extremely scalable cache without having to deal with much setup or administration. You simply tell IBM Workload Deployer how many resources you want to dedicate to your cache, and deploy the shared service. IBM Workload Deployer takes care of the details, including scaling the cache in and out to meet the needs of the system. On top of all of this, there is also an option to configure 'Next to the Cloud' caching. If you currently own DataPower XC10 appliances, you can make those available to virtual application pattern deployments (this was already possible with virtual system patterns) by simply providing details of the location of the appliance collective in question.
Put simply, setting up, administering, and utilizing an object caching service for your applications has never been easier. Check it out and let us know what you think!
Virtual Application Patterns are one of the major new features in IBM Workload Deployer v3. You've heard this concept discussed on this blog before, and it is really a revolutionary way to manage your applications in a private cloud environment. With Virtual Application Patterns you provide declarative information about your application, including functional and non-functional requirements of that application. You get to focus on the application rather than the middleware configuration, and IBM Workload Deployer takes care of all the details necessary to launch your application with the criteria you specify. This application-centric approach radically simplifies the deployment of applications in a private cloud. And it is not just the deployment that is simplified; the monitoring, metering, logging, security, caching, and so on are consolidated and simplified as well. Everything is custom-tailored for the particular application type to provide a significant level of integration and optimization for elastic, efficient, multi-tenant, automated management and execution of that application workload.
In IBM Workload Deployer v3 there are two different types of virtual application patterns provided out of the box: a pattern for web applications and a pattern for database applications. It's no accident that these are also the two most heavily utilized types of applications in most enterprises. Of course, more patterns will be appearing in the future, and you have the opportunity to create your own custom patterns ... but these first two patterns can cover a substantial number of current application workloads.
So why am I introducing all of this again? Well, I want to make you aware of a new article that was just published which covers virtual application patterns in a very consumable way, with enough detail and screen shots to get you started down this path. It is appropriately named: Easy virtual app automation using Workload Deployer. It really does a great job of covering not only the web application pattern, but also the database pattern (DBaaS) and shared services. If you are about to embark on virtual applications, this is a great place to start.
WebSphere configuration management practices are a common topic of conversation that comes up when I am talking with users about IBM Workload Deployer (formerly WebSphere CloudBurst). This conversation can take on so many different avenues that it is hard to capture all of them in a short blog post. So, for the sake of this post, let's consider two facets of WebSphere configuration management. The first facet is addressing the need to consistently arrive at the same configuration across multiple deployments of a given WebSphere environment. The second facet involves managing the configuration of a deployed environment over time to protect against configuration drift. What is the best way to tackle these two challenges? Well, it comes down to picking the right tool for the job.
When it comes to ensuring consistency of initial WebSphere configuration from deployment to deployment, there is really no better means than patterns-based deployments enabled by IBM Workload Deployer. Whether you are using a virtual system or virtual application pattern, the bottom line is that you are representing your middleware application environments as a single, directly deployable unit. When you need to stand that environment up, you simply deploy the pattern. The deployment encapsulates the installation, configuration, and integration of the environment, and your applications if you so choose. The benefit of this approach is that once you get your pattern nailed down, you can be extremely confident that the initial configuration of your environments is consistent from deployment to deployment. Basically, no more bad deployments because someone forgot to run configuration step 33 out of 100!
Because we talk about the benefits of consistency provided by our IBM Workload Deployer patterns, users often ask what IBM Workload Deployer does in terms of configuration governance for deployed environments. In other words, they ask how IBM Workload Deployer helps them to track configuration changes or compare the configuration of a deployed environment to a known good one. The honest answer is that this is a bit beyond the functional domain of the appliance. While IBM Workload Deployer does allow you to manage the deployed environment (apply fixes, update deployed applications, snapshot, etc.), it does not layer some of the common configuration governance concerns on top of that. However, there is a good reason why the appliance does not focus on that. It's because Rational Automation Framework for WebSphere does!
If you find yourself wanting to actively track configuration changes, periodically (and automatically at specified intervals) compare configuration changes to a 'golden' baseline, import configurations of a known good environment, or apply common configuration across a number of cells, then the capabilities of RAFW would likely be of interest to you. It can do all this and give you an incredible toolbox of out-of-the-box application deployment and configuration capabilities for WebSphere environments. In my mind, for those that spend a good deal of time dealing with WebSphere configuration, whether it be deploying applications, configuring containers, or debugging inadvertent changes, an examination of RAFW functionality is a must.
Now it is time for a bit of disclaimer/clarification. I am not suggesting that you pick one or the other when it comes to IBM Workload Deployer and RAFW. In fact, there are many scenarios where 1+1=3 with these two solutions, and I have written about it many, many times (including this article). That said, I think it is important to highlight the relative strengths of each product, so that it is easier to map them back to your pain points. In honesty, many of the users I talk with have challenges in getting the initial configuration right AND managing it over time. That kind of problem beckons for the integrated IBM Workload Deployer/RAFW solution.
Of course, technology only gets you so far when it comes to these kinds of problems. It would be disingenuous of me to suggest otherwise. It has always been and will continue to be important to establish clear and rigorous processes around the way you deploy, manage, and change environments. This just gives you an idea of some of the tools you can leverage to aid in the implementation of those processes.
If you follow this blog often, you know that from time to time I like to post frequently asked questions. Well, it's been a while since I have done that, and since then I have added some new questions to my list -- along with some regulars. Take a look below, and if I don't answer your question feel free to leave a comment!
Can IBM Workload Deployer deploy software that is not IBM software? Yes. You can use one of the included images as a springboard and customize it with your own software via extend and capture. Additionally, you can use the IBM Image Construction and Composition Tool (I'm getting ahead of myself here) to create your own custom images from the ground up and use those within IBM Workload Deployer.
Can I use VMotion for the systems I deploy with IBM Workload Deployer? Yes. IBM Workload Deployer has tolerated the use of VMotion since the WebSphere CloudBurst days (see the Additional Considerations section on this page for more information). IBM Workload Deployer v3 introduced the notion of virtual machine mobility initiated directly from the appliance. This capability takes advantage of VMotion in the case of VMware-based cloud environments.
Can IBM Workload Deployer deploy just a base operating system? Yes. IBM Workload Deployer v3 introduced a base operating system image that contains 64-bit Red Hat Enterprise Linux. Internally, IBM Workload Deployer uses this as the foundation on top of which virtual application patterns are deployed. You can use it to deploy virtual machines containing just the base OS, or you can customize it to deploy software of your choosing. (As an aside, IBM Workload Deployer v3.1 will include a base operating system image for AIX.)
Can I automate the process of calling/using IBM Workload Deployer? Yes. IBM Workload Deployer is built to fit a specific need -- creating and managing a cloud of middleware and middleware-based workloads. In that light, it would be a shortcoming if IBM Workload Deployer did not fit well into more holistic or enterprise-wide cloud management systems. The REST API and CLI allow you to automate the use of IBM Workload Deployer, thereby allowing it to be mashed up into other processes.
Can I group two appliances together for high availability? Yes. IBM Workload Deployer v3.1 introduces the ability to configure appliances in a master/slave setup. You can connect two appliances, allow them to share a floating IP address, and be confident that data is continuously replicated between the two. If one appliance fails, the other appliance picks up the floating IP address, ensuring continuous service.
Are images created using the Image Construction and Composition Tool supported for use within IBM Workload Deployer? Yes. Part of the new IBM Workload Deployer 3.1 announcement was a statement of support for using images created by the Image Construction and Composition Tool as a component of your virtual system patterns. This is a very important enhancement as it allows you to extend the set of content deployed by IBM Workload Deployer while being sure that you are operating within the boundaries of intended use.
Can I use IBM Workload Deployer to provision to public clouds? No... and yes. If you install an IBM Workload Deployer appliance in your datacenter, you cannot use it to deploy to a public cloud environment. However, you may have recently heard about the IBM SmartCloud Application Services portfolio. IBM has announced that the pattern-based provisioning that one gets with IBM Workload Deployer will also be available as part of this portfolio. This means that you will be able to build and deploy patterns using a service hosted on the IBM SmartCloud. Further, your deployed systems will run on the IBM SmartCloud. Check out this demo for more information.
** IBM Workload Deployer 3.1 firmware is available on 11/18.
Script packages are an integral part of virtual system patterns in IBM Workload Deployer. By attaching script packages to your patterns, you provide customizations particular to your unique cloud-based middleware environments. Customizations provided by script packages might include installing applications, creating application resources, integrating with external enterprise systems, and much more. The bottom line is, if you are creating virtual system patterns, you will almost certainly be creating script packages.
Largely, the act of creating a script package is independent of IBM Workload Deployer. The appliance does not dictate a particular scripting language, so all you need to do is make sure you can invoke your logic in the operating system environment. Your script package may be a wsadmin script, shell script, Java program, Perl script, and on and on. After you create the actual contents of your script package, you will then load that asset into the IBM Workload Deployer catalog.
Once loaded into the catalog, you define several attributes of your script package, including the executable command, command arguments, variables, execution time, and more. The process for defining these attributes is trivial using the intuitive UI in IBM Workload Deployer, but I wanted to take a little time to remind you of a technique I recommend to all users defining script packages. You can actually package a JSON file within the script package that defines all of the script's attributes. The format of the file is simple, and I am including an example below:
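The structure below is a representative sketch of a cbscript.json file. The key names follow the documented script package metadata format, but the values here (script name, paths, and the variable) are placeholders invented for illustration; your own file would carry the details of your actual script.

```json
{
    "name": "Sample application install script",
    "version": "1.0.0",
    "description": "Installs the sample application into the deployed cell",
    "location": "/opt/tmp/sample",
    "command": "/bin/sh /opt/tmp/sample/installApp.sh",
    "commandargs": "$appName",
    "timeout": "0",
    "keys": [
        {
            "scriptkey": "appName",
            "scriptvalue": "",
            "scriptdefaultvalue": "MyApp"
        }
    ]
}
```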
Examples like this appear throughout the script packages in our samples gallery, and they show the basics you need to be aware of. Notice that in the JSON file, you can provide a name, description, unzip location, executable command, command arguments, variables, and more. You only need to ensure that the name of this JSON file is cbscript.json and that you include it at the root of the script package archive. Once you have done that, you load the script package archive into the catalog, refresh the script package details, and voila -- all the attribute definitions appear!
You may ask why I recommend this since it could seem like an unnecessary step. My answer is that you have to define these attributes anyway, so you might as well capture them once in the file. With the definition captured in the file, you can ensure that if the same script needs to be reloaded, or if you need to move it to another appliance, its definition will be exactly the same (and presumably correct). I use this approach for all of my work, and for all of the samples I contribute to our gallery, and it saves me a lot of wasted effort that can result from typos. If you are out there creating script packages, try adopting this approach. I'm pretty sure you will be happy you did!
One of the things that often comes up at some point in IBM Workload Deployer conversations is the notion of self-service access. Specifically, users want to know what the appliance provides that enables them to allow various teams in their organization to directly deploy the middleware environments they need. In other words, they want to use IBM Workload Deployer to tear down the traditional barriers that exist between those that request an environment and those that fulfill said request. Now, as we begin to elaborate on this notion, it quickly becomes apparent that in order to effectively enable self-service, IBM Workload Deployer must deliver a few things.
First, IBM Workload Deployer must provide the means to define users with various levels of access. Second, IBM Workload Deployer must provide the means to define resource access at a fine-grained level to different users and groups of users. Check and check. The appliance has been doing this since the beginning of WebSphere CloudBurst. Without those two things, the conversation of self-service access would end pretty quickly. However, there is a final capability that is equally important: IBM Workload Deployer must deliver a means to limit resource consumption at a fine-grained level.
In IBM Workload Deployer there are a couple of ways to achieve this. First, you could define multiple cloud groups and allow access to those groups in a way that maps directly to resource entitlements. While that may work in some situations, others call for even more granularity. You may want to allow multiple different users or groups to access a cloud group, but you may want to allow different consumption limits for each of these groups. In this situation, you can take advantage of environment profiles and a new option when defining users of IBM Workload Deployer.
Consider the case that you have a group of developers and you want to limit their consumption of memory in the cloud. First, you start by defining your development users and for each you select Environment Profile Only as the value for the Deployment Options field.
By selecting the above value for the deployment options of a user, you restrict that user to only deploying via an environment profile as opposed to general cloud group deployments. After defining all of your development users, you may choose to organize them into a user group for easier management. At that point, you can define environment profiles and determine which ones your developers should have access to using the Access granted to field of the profile.
Within the environment profile, you can define resource consumption limits for compute resource and software licenses. For instance, you can define a limit on the amount of virtual memory consumed by all deployments using the profile. It is important to note that the limit is cumulative for ALL deployments that use the profile.
Now that all of the controls are in place, consider the deployment process for one of your development users. They pick a virtual system pattern, click the deploy icon and begin to configure the pattern for deployment. In the Choose Environment section of the deployment dialog, your development user will only be able to select the Choose profile option for deployment. Further, they will only be able to deploy using the environment profiles to which they have access.
After the deployment completes, a look at the Environment limits section in the profile shows the current usage totals.
Now suppose another development user, or even the same one, comes along and attempts to deploy another virtual system pattern even though the profile limits have already been reached. The user can initiate the deployment, but they will get a near-immediate failure because the deployment would exceed the consumption limits if it were allowed to proceed.
The same kind of enforcement occurs regardless of the resource limit type. You can use this approach to limit the consumption of CPU, virtual memory, storage, or software licenses among the various different users or groups of users you define in IBM Workload Deployer. If you combine fine-grained resource consumption limits with varying permissions and fine-grained access, I think you are on the road to truly enabling self-service in the enterprise.
Lately Joe and I have been pretty vocal about bringing up the new IBM Image Construction and Composition Tool capabilities in IBM Workload Deployer v3.1. While writing about such new capabilities is always good, I think seeing is believing. In that light, I hope you will take a look at the recent demo I put together that shows how to use the Image Construction and Composition Tool with IBM Workload Deployer v3.1!
For those of you basically familiar with IBM Workload Deployer, you are likely aware that the appliance has many different capabilities. On the surface it is a cloud management device for middleware and middleware applications. Of course, there are quite a few details that are important to understanding the functionality provided, and I spend quite a bit of my time talking with various users and potential users about these details. One thing I have noticed that can become an obstacle to effective communication regarding IBM Workload Deployer is the lack of a commonly understood language. I sometimes find that the user and I are simply using different terminology to describe the same thing. As you can imagine, this just serves to create confusion, and neither party gets the most out of the conversation.
In order to combat this communication gap, I thought I would put together a simple presentation that introduces and defines IBM Workload Deployer terminology. Check it out below (you can also download it here):
While the presentation does not dive deep into the terms it introduces, it does provide a basic definition and illustrative example of each. My hope is that this fosters an understanding of some of the basic concepts in IBM Workload Deployer, and ultimately pushes us towards a common vernacular. Please let me know what you think!