The Impact conference seemed to arrive as quickly as it is now disappearing in our collective rearview mirror. It was a blur. In a word, the conference was... exhausting! In a few more words, it was informative, exciting, and illuminating. I hope that many of you had a chance to make it out there, and I hope more of you make it to Impact in 2013.
For those of you familiar with the conference, you know that it is typically a launching ground for new product versions and altogether new products. This year was certainly no different with the launch of the new version of WebSphere Application Server (8.5), the new and improved IBM Business Process Manager and IBM Operational Decision Manager, a new version of WebSphere eXtreme Scale (8.5), and numerous updates across the messaging and connectivity stack. While I encourage you to follow up on all of these important announcements, they are not what I am going to focus on today. Instead, I am going to focus on the new addition to the IBM family that got plenty of attention this year: IBM PureApplication System.
Joe recently touched on this new offering, so I won't get into an exhaustive overview. To put it briefly, IBM PureApplication System is an expert integrated system. What does that mean? First and foremost it means that it is a system -- a whole. It is an integrated platform of hardware and software, optimized and tuned for running transactional web and database workloads. I do not mean that it is a system of software that we pre-install on off-the-shelf hardware. Rather, it is the result of hardware and software engineers across IBM working together to build a system that is expert at what it does. More than just the web application and database software though, IBM PureApplication System also contains pre-installed and pre-configured management software that delivers a soup to nuts (hardware to application) single pane of glass for managing the entire system. I could go on and on, but again that's not my purpose here. I encourage you to check out the new IBM PureSystems web page for more information and some pretty cool videos.
If you take a look at IBM PureApplication System, you will quickly find that the notion of pattern-based deployments (something I have talked about at length on this blog) plays a key role in the new system. In fact, the same virtual system and virtual application pattern constructs that you have come to know in IBM Workload Deployer are front and center in IBM PureApplication System as well. In the new system, you can build custom virtual system and virtual application patterns, deploy them to your cloud, and then manage them over time. If you are familiar with the IBM Workload Deployer user interface, the interface of IBM PureApplication System will feel immediately familiar. Given all of that, if you are like many of the users I talked to at Impact and since, you probably have some questions about IBM Workload Deployer and this new system. Most commonly, I get these two: "What does this mean for the IBM Workload Deployer product?" and "How do I know when to use IBM PureApplication System versus IBM Workload Deployer?" Let me do my best to address those questions.
In terms of the impact of IBM PureApplication System on the IBM Workload Deployer offering, I can only view it in one way: affirmation. As I said above, IBM PureApplication System puts the model of pattern-based deployments front and center, and further affirms that this kind of approach is crucial to the evolution of application delivery and management. Those of you familiar with IBM Workload Deployer or its predecessor WebSphere CloudBurst know that we have been talking about patterns for years. Rest assured we will continue to talk about patterns and solutions for building, deploying, and managing them. As it stands, we have at least three ways for you to build, deploy, and manage patterns: IBM SmartCloud Application Services, IBM Workload Deployer, and IBM PureApplication System. As you can see, options for consuming patterns have only increased since the initial launch of WebSphere CloudBurst. Furthermore, if you were at Impact, you know that we have a vibrant and vocal community of IBM Workload Deployer users, and I hope to see that community continue to grow! As I see it, the core technology of IBM Workload Deployer is becoming our 'operating system' for cloud platform management.
The question of when to use IBM Workload Deployer versus IBM PureApplication System has a more nuanced answer, and it is not something one can or should try to settle definitively in a blog post. One thing I do suggest is that when evaluating these two technologies, you acknowledge that they have different business value propositions. Sure, they share common core technology for building, deploying, and managing pattern-based environments, but beyond that they diverge a bit. Remember, IBM PureApplication System is, well, a system. It is the hardware, software, and management technology you need to run your middleware application workloads. It is pre-built and pre-integrated to the point that it only requires you to roll it into your datacenter, hook it up to your network, and do some one-time configuration. The aim is four hours from receipt of the system to your first deployment up and running, and if you were at Impact you saw an amusing video with the chief architect (Jason McGee) that backs up this claim.
IBM Workload Deployer is fundamentally different in terms of how you consume it and how it integrates with your infrastructure. Most notably, IBM Workload Deployer does not include optimized hardware (servers, storage, networking) for running your workloads or a single point of management for everything from hardware to applications. To use IBM Workload Deployer you attach it to your network and point it at existing virtualized servers. Simply put, IBM Workload Deployer assumes you have existing, under-utilized hardware that you can get more out of with the intelligent deployment and management approach the appliance delivers. While you do not get the pre-integrated and optimized system of hardware plus software, you do get the flexibility to use your existing infrastructure.
As you can see, there are similarities (patterns) and differences (whole system vs. management system), and the result is a pretty different set of value propositions. The key in evaluating these technologies is that you do so with a crisp understanding of your current needs AND your future plans for growth and evolution. I know this kind of advice is a bit generalized, but I hope the differences I discussed above help you to at least understand the capabilities of the two different offerings. As always, if you have any comments or questions, please reply to the post!
As many of you well know, virtual images are the foundation of virtual system patterns in IBM Workload Deployer. Whether you are using IBM Hypervisor Edition images or custom-built images produced by the IBM Image Construction and Composition Tool, every virtual system pattern has at least one virtual image as part of its foundation. So, if virtual images are the foundation of virtual system patterns, what is the foundation of these virtual images?
While you could probably make a good argument for a number of different things being the foundation of the virtual image (operating system, other installed software, etc.), I like to think that, at least in the context of IBM Workload Deployer, the activation engine inside the virtual image is the true foundation. Inside this activation engine, you will find a collection of scripts and services that are capable of configuring the virtual machine for use. Not only does this engine perform basic system-level actions like configuring the machine's hostname, IP address, time, and network interfaces, but it also configures the software inside the virtual machine. For instance, the activation engine in the WebSphere Application Server Hypervisor Edition image is capable of fixing up profile information, federating nodes, creating application server clusters, and more. Best of all, in the case of IBM Hypervisor Edition images, you (the user) get all of this right out of the box. There are no setup steps or administrative tasks to undertake in order to benefit from the activation engine. It is simply there!
So, at this point you may ask yourself 'If all of this is included right out of the box, why do I need to care?' That is a fair question, but ultimately I feel it is always important to understand the foundational elements of any technology. In this respect, I do not feel like the activation engine in the IBM Hypervisor Edition images is any different. Lately, I have been telling my users to take at least a little time to understand what the activation engine is and even more importantly, what it is doing for you during deployment. Specifically, I always suggest taking a little time to look at the scripts in the activation engine -- most often found in the /opt/IBM/AE/AS directory of a virtual machine deployed by IBM Workload Deployer.
What can be gained by taking the time to peruse these scripts? Most importantly, you will learn what the engine does for you and what you cannot do if you expect the image to deploy correctly. For instance, if you look in some of those activation engine scripts, you will see that the engine uses the sudo command in several places. While I know many of you may be tempted to remove the sudo command during extend and capture, doing so will break the activation engine. I have seen this happen multiple times, and trust me, if you did not know the activation engine used that command, it is not an easy problem to debug. This is a case where the value of at least superficially understanding the activation engine is clear.
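Before you change anything during extend and capture, a quick scan of the activation engine directory tells you which scripts depend on sudo. This is a minimal sketch: the helper function and the example directory path are mine, with the default pointing at the /opt/IBM/AE/AS location mentioned above.

```shell
# list_sudo_scripts DIR: print every activation engine script under DIR
# that invokes sudo -- these are the scripts that would break if the sudo
# command were removed during extend and capture.
list_sudo_scripts() {
  # -r: recurse into the directory, -l: print only the names of matching files
  grep -rl 'sudo' "$1" 2>/dev/null
}

# On a virtual machine deployed by IBM Workload Deployer, you would run it
# against the activation engine directory:
#   list_sudo_scripts /opt/IBM/AE/AS
```

Anything the function prints is a script you should leave alone, at least with respect to its sudo usage, when customizing the image.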
Want another example? Okay, consider that you want to run WebSphere Application Server as a user called wasadmin. At pattern deployment time, it is easy enough to supply wasadmin in the appropriate field of the part configuration data and click OK. IBM Workload Deployer deploys the system and voila, WebSphere Application Server is magically running as wasadmin. Everything is fine so far, but let's take this a step further and say that you previously performed an extend and capture, and you installed software components in the image that should be owned by your wasadmin user. It is technically possible to define users during extend and capture and then install software content via those users, but if you also want to specify that user as the WebSphere Application Server administrative user at deployment time, you will run into an issue. This is because the activation engine runs the usermod command during deployment to change the existing default virtuser into the user that you specify -- in this case wasadmin. If the usermod command attempts to change virtuser to wasadmin but wasadmin already exists as a user on the operating system, the command will not complete properly, and you will very likely see further errors downstream. A simpler approach is to create the user during extend and capture, install any components via that user, and then delete the user before capturing. You can attach a deploy-time script that fixes up the appropriate settings for wasadmin (like user ID and group ID), and it will run after the activation engine successfully does a usermod and changes virtuser to wasadmin. Problem averted!
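A deploy-time fix-up script of that kind can be quite small. The sketch below is an illustration under assumptions: the function name, the wasadmin user, and the /opt/custom/software directory are mine, standing in for whatever you installed during extend and capture.

```shell
# fix_ownership USER DIR: after the activation engine has renamed virtuser
# to your administrative user, restore ownership of software content that
# was installed during extend and capture under the old user.
fix_ownership() {
  user="$1"; dir="$2"
  # look up the uid/gid the renamed user ended up with
  uid=$(id -u "$user") || return 1
  gid=$(id -g "$user") || return 1
  chown -R "$uid:$gid" "$dir"
}

# Attached to the pattern as a deploy-time script, this would run after
# activation completes, for example:
#   fix_ownership wasadmin /opt/custom/software
```

Because the script runs after the activation engine's usermod, it sees the final wasadmin user and simply reconciles the file ownership left behind by the pre-capture installation.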
In reading some of the above, I fully realize that it may be a little confusing at first. That said, I assure you that there is not much to it at all once you have a basic understanding of the activation engine. With that understanding in hand, you will know what you do not need to do (e.g. create profiles, federate nodes, etc.), what you cannot do (e.g. remove the sudo command), and what you can do with a little bit of reconciliation work (e.g. define your WebSphere Application Server administrative user during image extension). I encourage you to take a little time with your next deployment and give the activation engine a once-over. You will undoubtedly have a better understanding of the deployment process, and you will ultimately be in a position to most effectively leverage virtual system patterns in IBM Workload Deployer.
I hate sitting on secrets. I always have. I understand that sometimes it's in the best interest of everyone (and your job) to keep tight lips, but that does not make it any more fun. Inevitably, the run-up to our annual Impact conference means everyone in the lab is doing their fair share of secret keeping -- just waiting for announce time. For a lot of us, that day ended Tuesday with the announcement of the IBM Workload Deployer v3.0.
Now, you may be wondering, 'I have never heard of this. Why is it version 3.0?' Well, IBM Workload Deployer is a sort of evolution of the WebSphere CloudBurst Appliance, which was previously at version 2.0. This is good news for all of our current WebSphere CloudBurst users because all of the functionality (plus new bits, of course) that they have been using in WebSphere CloudBurst is present in IBM Workload Deployer. You can use and customize our IBM Hypervisor Edition images in IBM Workload Deployer. You can build and deploy custom patterns that contain custom scripts in order to create highly customized IBM middleware environments. So, what's the big deal here? Two words: workload patterns.
Workload patterns represent a new cloud deployment model and are an evolution of the traditional topology patterns you may have seen with WebSphere CloudBurst Appliance (I am a little torn between evolution and revolution, but that's splitting hairs). Fundamentally, workload patterns raise the level of abstraction one notch higher than topology patterns and put the focus on the application. That means, when you use a workload pattern the focus is on the application instead of the application infrastructure. Perhaps an example would be helpful to illustrate how a workload pattern may work in IBM Workload Deployer.
Let's consider the use of a workload pattern that was part of the recent announcement, the IBM Workload Deployer Pattern for Web Applications v1.0. Just how might something like this work? It's simple really. You upload your application (maybe a WAR or EAR file), upload a database schema file (if you want to deploy a database with the solution), upload an LDIF file (if you want to set up an LDAP server in the deployment to configure application security), attach policies that describe application requirements (autonomic scaling behavior, availability guidelines, etc.), and hit the deploy button. IBM Workload Deployer handles setting up the necessary application middleware, installing and configuring applications, and then managing the resultant runtime in accordance with the policies you defined. In short, workload patterns provide a completely application-centric approach to deploying environments to the cloud.
Now, if you are a middleware administrator, application developer, or just a keen observer, you probably have picked up on the fact that in order to deliver something as consumable and easy to use as what I described above, one must make a certain number of assumptions. You are right. Workload patterns encapsulate the installation, configuration, and integration of middleware, as well as the installation and configuration of applications that run on that middleware. Most of this is completely hidden from you, the user. This means you have less control over configuration and integration, but you also have significantly reduced labor and increased freedom/agility. You can concentrate on the development of the application and its components and let IBM Workload Deployer create and manage the infrastructure that services that application.
Having shown and lobbied a bit for the benefits of workload patterns, I also completely understand that sometimes you just need more control. That is not a problem in IBM Workload Deployer because as I said before, you can still create custom patterns, with custom scripts based on custom IBM Hypervisor Edition images. The bottom line is that the IBM Workload Deployer offers choice and flexibility. If your application profile meshes well with a workload pattern, by all means use it. If you need more control over configuration or more highly customized environments, look into IBM Hypervisor Edition images and topology patterns. They are both present in IBM Workload Deployer, and the choice is yours.
If you happen to be coming to IBM Impact next week, there will be much more information about IBM Workload Deployer. I encourage you to drop by our sessions, ask questions, and take the opportunity to meet some of our IBM lab experts. Hope to see you in Las Vegas!
Everybody likes having choices. This is true whether you are talking about lunch or deploying to a private cloud. When IBM Workload Deployer v3.0 was first introduced, it included a pattern type for our Database-as-a-Service offering. The DBaaS PatternType v1 provided substantial value in an easy-to-use form factor to get a database up and running quickly, and then provided the necessary tools to manage that environment. Pretty impressive for a first release! But the story doesn't end there. IBM Workload Deployer v3.1 brings an updated version of this pattern type that builds upon this foundation and adds even more capabilities and more choices.
Some of you may not be familiar with the Workload Deployer Database-as-a-Service offering, so let me give you a brief introduction. Database-as-a-Service patterns allow you to define and deploy database applications into your private cloud environment with speed and consistency. These offerings also provide integrated management and monitoring capabilities. The Database-as-a-Service capability can be used in conjunction with a web application pattern (Patterns -> Virtual Applications, IBM Web Application Pattern) by including a database component in the pattern and connecting the web application components to it. In this case the web application and database are deployed and managed as a unified solution with a common life-cycle, as shown in the pattern below.
Database patterns can also be created and deployed as standalone entities (Patterns -> Database Patterns) that have their own life-cycle, independent of the virtual web application(s) that use the database. What's more, you can leverage these stand-alone databases from applications both inside and outside your private cloud.
Whether you use a stand-alone database pattern or one that is part of a web application pattern, the attributes and capabilities of the database are consistent.
So what is new in this release? For starters, the DBaaS PatternType has been renamed and its capabilities expanded. For Workload Deployer v3.1 the pattern is delivered as the IBM Database Patterns v1.1 and includes two patterns that provide predefined configurations: the IBM Transaction Database Pattern and the IBM Data Mart Pattern.
Before we take a closer look at the new features, I just want to alert you to one thing. Before you can leverage any of them, you first need to accept the licenses and configure the plugins for the database pattern types. So take a look at the link and follow the directions if you would like to follow along but are not seeing the same options in your IBM Workload Deployer V3.1 system.
Using the screen shot above as a reference, let's take a look at what you can specify when creating a database pattern. You start with a name for the pattern and an optional pattern description. You also specify the maximum user data space size and an optional schema file. These are pretty basic and were all available in the previous release. Another really nice feature that has also been available since the first release is the ability to specify a compatibility mode for DB2 and Oracle (handy if you are looking to move content from existing databases).
Some of the new enhancements appear in the middle of the view: the purpose and source. The purpose specifies whether this database is to be used for production or non-production (test and development). Your selection will optimize license management for deployed instances of this pattern.
The source field lets you specify a database configuration to be used to provision this database. You can choose from two different provisioning approaches: applying a workload standard or cloning from a database image. When you choose to apply a workload standard, you select between two predefined, optimized database configurations. These configurations run a set of scripts to tune the operating system and instance configuration for the database. The departmental transactional standard is optimized for online transaction processing applications, while the data mart standard is optimized for data mining purposes and is therefore more suitable for reporting applications. If those are not exactly what you want but you have an existing database, you can use the clone from a database image approach by selecting an existing database image backup as a model for the new database pattern. When using the clone method, metadata from the backup is retrieved and a DB2 restore command is used to set the same configuration for the new database instance. Reference the cloning from a database image topic in the IBM Database Patterns information center for more details.
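To make the cloning step a little more concrete, the sketch below builds the kind of DB2 restore statement involved when a backup image of one database is restored into a new one. This is purely illustrative: the database names, backup path, and timestamp are placeholders, and in IBM Workload Deployer this step is driven for you from the selected database image.

```shell
# build_restore_cmd SRC DIR TIMESTAMP NEW: construct a DB2 restore statement
# that restores the backup image of database SRC (taken at TIMESTAMP, stored
# under DIR) into a new database named NEW.
build_restore_cmd() {
  src="$1"; dir="$2"; ts="$3"; new="$4"
  echo "RESTORE DATABASE $src FROM $dir TAKEN AT $ts INTO $new"
}

# On a system with DB2 installed, you would pass the result to the db2 CLI:
#   db2 "$(build_restore_cmd SRCDB /backups 20120401120000 NEWDB)"
```

The INTO clause is what makes this a clone rather than a plain restore: the new database inherits the configuration captured in the backup image while keeping its own name.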
Once the pattern has been created you can deploy the pattern to a target cloud group or an environment profile (another new feature for database deployments in IBM Workload Deployer V3.1).
I hope you can see the value added by the source configuration choices and the ability to clone an existing configuration. They are certainly substantial new features of the Database-as-a-Service solution in Workload Deployer V3.1. However, there are a number of other significant enhancements worth mentioning as well. In other posts we've discussed the new ability to deploy virtual applications to run on AIX with a PowerVM hypervisor. As you might expect, this same ability is also available to deploy database patterns to run on AIX systems leveraging PowerVM. Management capabilities have also been significantly enhanced with the ability to configure automated database backups using the IBM Tivoli Storage Manager. These features and many other aspects of the Database-as-a-Service model are detailed in the IBM Database Patterns information center and the IBM Workload Deployer information center. My goal here has not been to replicate our product documentation; rather, it is to provide a few highlights and pointers to help you get started. I hope it has been useful.
You can be sure that we will continue enhancing and improving our Database-as-a-Service offering in IBM Workload Deployer. Please provide your feedback so that we can make it even better.
In the previous post Dustin shared a great video demonstrating the value of the IBM Image Construction and Composition Tool that is now delivered with IBM Workload Deployer V3.1. This is certainly one of the key new features of IBM Workload Deployer V3.1. However, there are also a number of other compelling enhancements and features that we would like to communicate.
I created the attached video to highlight some of the features included in the new Workload Deployer release. The video uses the web console to walk through some of the features and capabilities, giving a brief introduction to each one. Without going into a lot of depth, I think it gives a nice overview. This may be especially helpful if you already have Workload Deployer v3.0 and want to see the value you will get when you upgrade to Workload Deployer v3.1. Check it out.
We believe that these new features make IBM Workload Deployer V3.1 an even better solution for your private cloud needs. Please let us know what you think.
Lately Joe and I have been pretty vocal about bringing up the new IBM Image Construction and Composition Tool capabilities in IBM Workload Deployer v3.1. While writing about such new capabilities is always good, I think seeing is believing. In that light, I hope you will take a look at the recent demo I put together that shows how to use the Image Construction and Composition Tool with IBM Workload Deployer v3.1!
When IBM Workload Deployer V3.0 was introduced last year, one of the "hidden" values that it delivered was a base image used for virtual application patterns. I say "hidden" because this image, while delivered primarily for use in virtual application patterns, could also be leveraged for virtual system patterns. By now you may be scratching your head and wondering just what I'm talking about. Let me explain...
To begin with, it is helpful to understand a little bit about how virtual applications are deployed and how that differs from virtual system patterns. As you may already know, virtual system patterns are built from IBM Hypervisor Edition images, which are used to launch the virtual machines for your deployment. The IBM Hypervisor Edition images include the operating system and middleware components together in the image. Therefore, building a virtual system pattern basically starts with a fairly complete image and activates the parts in that image necessary to fulfill the particular role the virtual machine will play in the virtual system pattern. Virtual application patterns take a somewhat different approach. The starting point for a virtual application pattern is the base image, which includes only the base operating system. Workload Deployer launches a virtual machine with just this base image, and then the appliance manages installation, configuration, and integration of software and applications to complete the role this virtual machine must fulfill for the virtual application pattern. At a high level, you could consider virtual system patterns a template approach and virtual application patterns more of a build-it-as-you-need-it approach.
So just what is the "hidden" value of these base images provided for virtual application patterns, and how can it be used for virtual system patterns? The hidden value is that the base images used for virtual application patterns are delivered with IBM Workload Deployer in the image catalog and can be used for building virtual system patterns. If you already have an appliance, you can take a look: you will see the base images there under Catalog > Virtual Images, right alongside more familiar images like the IBM Hypervisor Edition images for WebSphere Application Server. For x86 systems this image is appropriately named "IBM Workload Deployer Image for x86 Systems". These images each include a base part called "Core OS" that can be included in a virtual system pattern.
So now you may be saying to yourself, 'Well, that's all great news, but what is new about this?' The new thing is that IBM Workload Deployer V3.1 adds a significant feature: the ability to deploy virtual applications to PowerVM environments running AIX. To enable that feature, a base image was created for AIX, the "IBM OS Image for AIX Systems." As with the x86 image, this new image is now also available for your use in the image catalog. You can now employ that default AIX image for your own needs in virtual system patterns -- creating a very nice extension mechanism for PowerVM and AIX users.
This new base image contains the IBM AIX 6100-05 operating system and the Core OS part that you can include in virtual system patterns. As with the x86 base image delivered earlier, there are no restrictions on how you use or customize this image. To make it suitable for your purposes you can employ the IBM Workload Deployer extend and capture capability to install additional software content into the image. You can also enhance this image using the IBM Image Construction and Composition Tool (ICCT) that is now included with IBM Workload Deployer v3.1. When you include this part in a virtual system pattern you can also associate any configuration scripts that you may need, just as you would with any other part. Just as with the x86 part - this provides substantial value and a significant convenience for AIX users.
I hope this clues you in on the "hidden" benefits of a substantial new feature included in IBM Workload Deployer V3.1. We have often been asked to provide base OS images to build upon, since starting from scratch can be difficult when you need to create your own custom image. Now, with IBM Workload Deployer v3.1, you have your choice of two default base images in addition to the many IBM Hypervisor Edition images, along with a robust set of other new features!
In the previous post I spoke about how a Virtual Application feature introduced in Workload Deployer v3.1 actually had benefits for Virtual System patterns as well. In that case I was talking about the ability to deploy Virtual Applications running on AIX to PowerVM hypervisors and how this had hidden benefits for Virtual Systems as well. This is a great example of how an enhancement to Virtual Applications can sometimes benefit Virtual Systems. However, this is not the only instance where the two pattern types intersect.
Several other new or enhanced features that are primarily for Virtual Applications are also being extended to benefit and improve Virtual Systems ... and vice-versa. One such area of improvement involves Shared Services in v3.1. These services were introduced in v3.0 specifically for the benefit of Virtual Applications. However, several enhancements have extended these capabilities to Virtual Systems, and likewise, some functionality that was previously only available to Virtual Systems has been extended to Virtual Applications in the form of Shared Services.
As you may already know, Shared Services were first introduced in v3.0 and are just what the name implies: services that are deployed by a cloud administrator and used by multiple virtual application deployments. Let's start by taking a look at the shared services available under Cloud -> Shared Services in v3.1. You will notice that there are now more shared services listed than there were in v3.0.
In addition to the familiar Caching Service and ELB Proxy Service (formerly Proxy Service), there are now additional entries for an External Caching Service and an External Application Monitoring Service. For simplicity, let's start from the top and go down the list, discussing the function of each service, what is new or improved in v3.1 with regard to virtual applications, and where applicable, how the service can be used by virtual systems.
The Caching Service was introduced in v3.0. Its primary purpose is to cache HTTP session data using a highly scalable and fast in-memory cache. This is the same core technology that is included in our WebSphere eXtreme Scale and DataPower XC10 caching appliance offerings. To make use of this service, all you need to do is deploy an instance of the Caching Service with the configuration parameters of your choice into a cloud group where you want to leverage that service. As you create virtual application patterns, you simply select the Enable session caching checkbox when you add a scaling policy. When this pattern is deployed, it will be automatically configured to leverage the Caching Service for session persistence. It's as simple as that.
Several new features were introduced in v3.1 for the Caching Service. First, the Caching Service can now be launched with parameters to define the behavior for automatic scaling to meet the ever-changing demands of your applications. Once set, Workload Deployer will manage this service to ensure sufficient capacity based upon your requirements, adding or removing containers. Second, and this is significant for Virtual System patterns, the caching service has been enhanced with new operations to support listing, creating, and deleting various types of object grids. You can then use the WebSphere eXtreme Scale ObjectGrid APIs to persist and manage content in the grid from application code in your Virtual System deployments. This saves you the trouble of creating and configuring your own caching service for these purposes outside of the cloud and permits sharing of the service you have already configured - a nice savings.
Caching Service (External)
The External Caching Service is one of the new additions for v3.1. Say you have already configured a caching solution for your enterprise using a DataPower XC10 appliance or a collective of appliances. It would be nice if you could leverage that solution instead of launching yet another one within your private cloud; doing so would consolidate your caching and preserve cloud resources for other purposes. With the External Caching Service you can do just that. It gives you the ability to use an external caching solution for your Virtual Application session persistence needs as well as your Virtual System and even non-cloud caching needs. Just point an instance of the External Caching Service at your DataPower XC10 caching solution, and all of the HTTP session persistence needed by virtual applications in the same cloud group will make use of it. You can also point multiple instances of the External Caching Service in multiple cloud groups at the same XC10 appliance or collective.
Monitoring Application (External)
With the External Application Monitoring service you can deploy a service reference within a cloud group that points at a Tivoli Enterprise Monitoring Server (TEMS) installation outside of the cloud. The TEMS must be at version 6.2.2 Fix Pack 5 or later. Once the reference is created, the UNIX or Linux OS monitoring agents and the Workload monitoring agent provided for virtual application workloads are automatically connected to the defined Tivoli server using the supplied primary and fail-over servers, protocol, and port. This is especially useful if you want to consolidate all of your monitoring in a common console. As with the External Caching Service, this enhancement extends the integration capabilities of Virtual Application Patterns beyond the scope of your private cloud and allows you to consolidate and leverage investments you have already made.
ELB Proxy Service
The Proxy Service was first introduced in v3.0 and renamed the ELB Proxy Service in v3.1 for clarity. As the name implies, its primary purpose is to provide routing and load balancing across multiple deployed web applications. As with the Caching Service, you deploy this service into a cloud group based upon your requirements for load and availability. To have a virtual application pattern use the service, you simply add a routing policy and define your virtual host name. When the pattern instance is deployed to the cloud group, the necessary configuration is performed to add the virtual host name and set up your application environment to use the ELB Proxy Service. New in v3.1 is the capability to scale the ELB Proxy Service itself to meet the changing demands of your application mix.
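The routing behavior the service provides can be sketched as a toy dispatcher keyed by virtual host name, round-robining across the application instances registered for each host. This is purely an illustration of the concept; the class and names are invented, not part of the product:

```python
import itertools

class ToyProxy:
    """Route requests to application instances by virtual host name,
    round-robin across the instances registered for that host."""

    def __init__(self):
        self._cycles = {}     # virtual host -> cycling iterator
        self._instances = {}  # virtual host -> list of instances

    def register(self, virtual_host, instances):
        # Analogous to a routing policy: declare a virtual host name
        # and the deployed instances that serve it.
        self._instances[virtual_host] = list(instances)
        self._cycles[virtual_host] = itertools.cycle(instances)

    def route(self, virtual_host):
        if virtual_host not in self._cycles:
            raise KeyError(f"no routing policy for {virtual_host}")
        return next(self._cycles[virtual_host])

proxy = ToyProxy()
proxy.register("orders.example.com", ["vm-1", "vm-2"])
```

In the real system, of course, registration happens automatically at deployment time when the routing policy in your pattern is processed; the sketch just shows the mapping the service maintains on your behalf.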
One other item I should point out (and to which I've already alluded) is that you can now deploy multiple instances of each of the shared services - one per cloud group - and shared services can now be deployed using environment profiles. This was not the case in v3.0, where each service was a singleton for the appliance. Allowing multiple instances of shared services gives you the flexibility to configure the sharing of your services as necessary for your particular environment.
I hope this post has provided a useful overview of the value of shared services and the new capabilities introduced in v3.1. I also hope that you can see how these services make it easier to implement your solutions for both virtual applications and virtual systems within a private cloud environment and shed a little light on how we are continuing to improve IBM Workload Deployer. As always, these improvements are driven by the feedback we receive from you so please let us know what you think!
If you follow this blog often, you know that from time to time I like to post frequently asked questions. Well, it's been a while since I have done that, and since then I have added some new questions to my list -- along with some regulars. Take a look below, and if I don't answer your question feel free to leave a comment!
Can IBM Workload Deployer deploy software that is not IBM software? Yes. You can use one of the included images as a springboard and customize it with your own software via extend and capture. Additionally, you can use the IBM Image Construction and Composition Tool (I'm getting ahead of myself here) to create your own custom images from the ground up and use those within IBM Workload Deployer.
Can I use VMotion for the systems I deploy with IBM Workload Deployer? Yes. IBM Workload Deployer has tolerated the use of VMotion since the WebSphere CloudBurst days (see the Additional Considerations section on this page for more information). IBM Workload Deployer v3 introduced the notion of virtual machine mobility initiated directly from the appliance. This capability takes advantage of VMotion in the case of VMware-based cloud environments.
Can IBM Workload Deployer deploy just a base operating system? Yes. IBM Workload Deployer v3 introduced a base operating system image that contains 64-bit Red Hat Enterprise Linux. Internally, IBM Workload Deployer uses this as the foundation on top of which virtual application patterns are deployed. You can use it to deploy virtual machines containing just the base OS, or you can customize it to deploy software of your choosing. (As an aside, IBM Workload Deployer v3.1 will include a base operating system image for AIX.)
Can I automate the process of calling/using IBM Workload Deployer? Yes. IBM Workload Deployer is built to fit a specific need -- creating and managing a cloud of middleware and middleware-based workloads. In that light, it would be a shortcoming if IBM Workload Deployer did not fit well into more holistic or enterprise-wide cloud management systems. The REST API and CLI allow you to automate the use of IBM Workload Deployer, thereby allowing it to be mashed up into other processes.
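As a sketch of what such automation might look like, here is a Python fragment that constructs a deployment request against a REST endpoint. The host name, resource path, and JSON fields are all illustrative assumptions, not the documented IBM Workload Deployer REST API, and a real script would also need to attach authentication:

```python
import json
import urllib.request

APPLIANCE = "https://iwd.example.com"  # hypothetical appliance host

def build_deploy_request(pattern_id, environment_profile=None):
    """Construct (but do not send) a request to deploy a pattern.

    A real automation script would send the result with
    urllib.request.urlopen() after adding credentials; the path and
    payload fields here are assumptions for illustration only.
    """
    payload = {"pattern": pattern_id}
    if environment_profile:
        payload["environmentProfile"] = environment_profile
    return urllib.request.Request(
        APPLIANCE + "/resources/virtualSystems",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_deploy_request("pattern-42", environment_profile="prod")
```

The value of this kind of wrapper is that a pattern deployment becomes one call in a larger provisioning workflow, rather than a manual step in the console.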
Can I group two appliances together for high availability? Yes. IBM Workload Deployer v3.1 introduces the ability to configure appliances in a master/slave setup. You can connect two appliances, allow them to share a floating IP address, and be confident that data is continuously replicated between the two. If one appliance fails, the other appliance picks up the floating IP ensuring continuous service.
Are images created using the Image Construction and Composition Tool supported for use within IBM Workload Deployer? Yes. Part of the new IBM Workload Deployer 3.1 announcement was a statement of support for using images created by the Image Construction and Composition Tool as a component of your virtual system patterns. This is a very important enhancement as it allows you to extend the set of content deployed by IBM Workload Deployer while being sure that you are operating within the boundaries of intended use.
Can I use IBM Workload Deployer to provision to public clouds? No... and yes. If you install an IBM Workload Deployer appliance in your datacenter, you cannot use it to deploy to a public cloud environment. However, you may have recently heard about the IBM SmartCloud Application Services portfolio. IBM has announced that the pattern-based provisioning that one gets with IBM Workload Deployer will also be available as part of this portfolio. This means that you will be able to build and deploy patterns using a service hosted on the IBM SmartCloud. Further, your deployed systems will run on the IBM SmartCloud. Check out this demo for more information.
** IBM Workload Deployer 3.1 firmware is available on 11/18.
If you have been wanting to get some first-hand experience with patterns of expertise in preparation for IBM PureApplication System or IBM Workload Deployer but you don't yet have a system of your very own to use ... then you will want to check out this post/video and then download the Virtual Pattern Toolkit for Developers! It's absolutely free and will get you up and running with a virtualized system in short order. Check it out!
Application-centric cloud computing is the main thrust behind the new capabilities of IBM Workload Deployer v3.0. But what does that really mean? After all, application-centricity is really just a concept. Granted, it is an important concept, but it is fairly meaningless until it is put into action or implemented. IBM Workload Deployer does just that with its new Virtual Application Patterns (VAPs).
VAPs are the embodiment of the workload pattern approach I briefly discussed in an overview post a few weeks back. The idea with a VAP is to give the user an interface through which they can provide their application, specify dependencies, declare functional and non-functional requirements and then deploy. Of course application middleware is a part of the overall solution, but IBM Workload Deployer has the smarts to build, configure, and integrate the necessary infrastructure in order to support the user's application. This is completely hidden from the user, so they are liberated to focus on the application and its requirements.
If we scratch a bit further beneath the surface of a VAP, we see that these patterns contain three primary pieces: components, links, and policies. These pieces are fundamental to understanding how virtual application patterns work. Let's start with the building blocks of VAPs, components. Put simply, components represent the different resources and functionality profiles that make up your application environment. As an example, the IBM Workload Deployer Pattern for Web Applications is a VAP that contains components for an EAR file, a WAR file, a message queue, and any number of other typical requirements for a web application. The components will certainly vary based on the workload type (for example, the components included in a web application VAP would be different from those included in a batch application VAP), but they are the foundation of any VAP.
From the ground up, the next logical element we come to in the VAP is a link. A link is a way to declare a dependency or integration point between two components. As an example, consider a VAP with a WAR file component and a database component. You might draw a link between the WAR component and the database component to indicate that your web application uses or otherwise depends on the database. IBM Workload Deployer interprets this link, and takes it as a directive to configure the integration between the two components as a part of deployment. In this case, that may mean configuring a data source in the application's container. This is just a simple example, and an application may have any number of links between components.
Finally, we come to the policy element of the VAP. A policy is a way for a user to specify functional and non-functional requirements for their application environment. Users attach policies to the VAP, or to components within their VAP, and IBM Workload Deployer interprets and enforces those policies. In the context of a web application, one example is a scaling policy. A scaling policy might specify scaling requirements for the application, including the minimum and maximum number of application instances and the conditions that trigger scaling activities. IBM Workload Deployer uses the information in a scaling policy to appropriately manage the deployed, running environment. Other examples of policies include a JVM policy that provides configuration directives for the Java virtual machines in your application environment and a logging policy that defines logging configuration options. In any case, the policy element allows VAP builders to influence the configuration and management of the application environment.
In the example VAP below you can see the use of components (Enterprise Application, Database, User Registry, Messaging Service), links (blue lines between components), and policies (Scaling Policy, JVM Policy):
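A toy Python model of such a pattern makes the three elements concrete. The dictionary layout, component names, and policy fields below are invented for illustration and are not Workload Deployer's internal format; the small validator simply checks the structural rules described above (links connect declared components, policies attach to the pattern or a known component):

```python
def validate_pattern(pattern):
    """Return a list of structural errors in a toy VAP model."""
    names = {c["name"] for c in pattern["components"]}
    errors = []
    for source, target in pattern["links"]:
        if source not in names or target not in names:
            errors.append(f"link {source} -> {target} references unknown component")
    for policy in pattern["policies"]:
        attached = policy.get("attach_to", "pattern")
        if attached != "pattern" and attached not in names:
            errors.append(f"{policy['type']} policy attached to unknown component {attached}")
    return errors

# A web-application VAP: two components, a link declaring the WAR's
# dependency on the database, and scaling/JVM policies.
vap = {
    "components": [
        {"name": "web_app", "type": "WAR file"},
        {"name": "orders_db", "type": "database"},
    ],
    "links": [("web_app", "orders_db")],
    "policies": [
        {"type": "scaling", "attach_to": "web_app", "min": 2, "max": 10},
        {"type": "jvm", "attach_to": "pattern", "heap_mb": 512},
    ],
}
```

Notice how little of the model is infrastructure: there are no hosts, hypervisors, or topology details, only the application pieces, their dependencies, and the requirements placed on them.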
In total, when I look at a VAP, one particular word sticks out to me: declarative. VAPs really enable declarative, application-centric cloud computing. What do I mean? By declarative, I mean you are telling IBM Workload Deployer what you want, but not necessarily how you want it done. It is the job of IBM Workload Deployer to take care of the how. This shift in approach to application environments enables the potential for significant savings and, more importantly to me, lays the foundation for a more agile, flexible approach to deploying and managing application environments.
There will be more in the weeks and months to come on IBM Workload Deployer, so stay tuned. I also want to put a plug in for a new blog from Jason McGee. For those that do not know Jason, he is an IBM Distinguished Engineer, and the lead architect behind IBM Workload Deployer. Be sure to check out his blog for insights on this new offering, as well as for all things cloud.
A couple of weeks ago, I dropped by the Intel Developer Forum to present a session and listen in on a few others. As always at these types of shows, I learned quite a bit. Most strikingly though, I was reminded of something that is probably quite obvious to many of you: consumer interest in cloud computing will not be letting up any time soon.
Based on this, and some of the other things I heard at the show, I decided to catch up with fellow IBMer Marc Haberkorn. Marc is an IBM Product Manager and is responsible for IBM Workload Deployer amongst other things. I asked him about IBM Workload Deployer, the competition, and cloud in general. Check out what Marc had to say below:
Me: IBM Workload Deployer is one of a growing wave of cloud management solutions. How do you differentiate its focus and business value versus the myriad of other solutions out there?
Marc: To sum it up, we offer a combination of depth and breadth. IWD delivers both workload-aware management and general purpose management. Workload-aware management differentiates IWD from its competition, as it can deliver more value for the set of products for which it has context. There is a set of actions that workload-aware management tools can perform that general purpose management tools normally leave to the user. This list includes configuring a middleware server to know its hostname/IP address, configuring multiple middleware servers to know of one another, arranging clusters, applying maintenance, and handling elasticity. By handling more of these activities in the automated flow, there are fewer chances for manual errors and inconsistencies to enter a managed environment.
That said, without infinite resources or time, it’s impossible to deliver this context-aware management for everything under the sun. So, in order to allow IWD to deliver differentiated value AND to handle a customer's entire environment, we offer a mix of workload-aware management and general purpose management.
Me: VMware is a good example of a company active in the cloud space, and they seem to keep a consistent pace of new product delivery. What do you think of their product development focus?
Marc: I think VMware has built a very compelling set of capability in the virtualization space. I think the main difference between VMware's suite and IBM Workload Deployer is the perspective from which the environments are managed. VMware puts the administrator in the position of thinking about infrastructure from the ground up. The administrator is thinking about virtual images, hypervisors, and scripts. In IBM Workload Deployer, we think about things from the perspective of the app, because that's ultimately what the business cares about. By providing a declarative model through which an application can be instantiated and managed, we feel we deliver a deeper value proposition to clients, through workload-aware management.
Me: The 'one tool to do it all' approach is a popular, if hard to achieve, goal. What is your advice to users when it comes to choosing between breadth and depth in cloud management solutions?
Marc: The advantages of a "one tool to do it all" approach are many: less integration, more uniformity, less complexity. As such, customers will always prefer a single tool when possible. This is why IBM Workload Deployer has focused not only on providing differentiated, deeper value for common use cases but also on providing a way to handle everything else. So my advice to users is not to choose between breadth and depth - use IBM Workload Deployer, which offers both.
Me: To close, I'm curious to know where you think we are heading in the cloud market. What do you think users will be most readily adopting over the next one to two years? Where does the cloud industry need the most innovation?
Marc: I think most users are currently looking at the broad picture of cloud computing, and have been adopting primarily in the private cloud realm. There are several reasons for this. One reason is that many customers have a large set of hardware resources that amounts to sunk cost that needs to be leveraged. Another reason is data security concerns in off-premises clouds, and still another is the human factor of comfort, which has taken time to develop around off-premises cloud models. However, businesses have become increasingly comfortable with various sources of outsourcing in recent years, especially in mission critical areas involving very sensitive data. Just look at IBM's Strategic Outsourcing business, which handles entire IT operations for many large businesses. I think that trend will continue (and really, has already begun to) in the area of cloud computing, and will lead to more public and ultimately hybrid cloud adoption. In order to get to hybrid cloud computing, I see much of the focus and innovation being associated with data security, workload portability (across private and public, in a seamless fashion), and license transferability between private and public. When this space reaches fruition, clients will be able to enjoy true elastic economics in a computing model that allows a mixture of owning and renting compute resources and software licenses.
IBM Impact 2011 was a wildly busy week! Customer meetings, entertaining keynotes, informative sessions, and hands-on labs packed the 6 days with more than enough action. I spent a lot of the week presenting sessions and conducting labs for the newly announced IBM Workload Deployer. As one would expect with any new announcement, we got tons of questions about IBM Workload Deployer. While I cannot capture all the questions and their answers here, I will try to cover some of the more prevalent ones below.
Question: What happened to WebSphere CloudBurst?
Answer: The short answer is, it simply went through a rename. WebSphere CloudBurst became IBM Workload Deployer v3.0. The version number, 3.0, acknowledges that this is an evolution of what we started with WebSphere CloudBurst, which was at version 2.0. Why remove WebSphere from the name? The IBM branding is simply more accurate, as the offering is capable of deploying and managing more than just WebSphere software.
Question: What is new in IBM Workload Deployer?
Answer: While there are many new features that I will be talking about over the coming months, the most prominent is the introduction of workload patterns (also referred to as virtual application patterns). As opposed to topology patterns (traditionally referred to simply as patterns in the WebSphere CloudBurst product), workload patterns raise the level of abstraction to the application level. Instead of focusing on application infrastructure and its configuration as you do with topology patterns, workload patterns allow you to focus on the application and its requirements. When using workload patterns, you provide the application, attach policies that specify functional and non-functional requirements, and deploy. IBM Workload Deployer handles deploying and integrating the middleware infrastructure necessary to support the application, and it automatically deploys your application on top of that middleware. In addition, IBM Workload Deployer manages the application runtime in accordance with the policies you specify in order to provide capabilities such as runtime elasticity.
Question: If I am a current WebSphere CloudBurst user, what does this mean for me?
Answer: Not to worry. You will be able to use all of your WebSphere CloudBurst assets (patterns, scripts, images) in the new IBM Workload Deployer. All of the capabilities previously in WebSphere CloudBurst are present in IBM Workload Deployer (terminology may vary slightly -- topology pattern instead of just pattern for instance). Additionally, we continue to expand on the functionality that you are familiar with from WebSphere CloudBurst. This includes updates for Environment Profiles, new IBM Hypervisor Edition images, new pattern building capabilities, and more. Stay tuned for more information about these new features and for information on how you can move your WebSphere CloudBurst resources to the new IBM Workload Deployer.
Question: How do I choose between using workload and topology patterns?
Answer: There are a number of factors that will lead you to using either workload patterns, topology patterns, or both. The primary decision point will be how much control you really need (not want). When using workload patterns, you sacrifice some customization control over the configuration, integration, and administration of the middleware application environment since the workload pattern and management model abstracts away the 'guts' of the system. Everything about the workload pattern is application-centric. On the other hand, topology patterns give you intimate control over the configuration, integration, and administration of the middleware application environment. As a general rule of thumb, if your application requirements match the capabilities of a workload pattern, that is the way to go as it can greatly reduce complexity and cost associated with deployment and management. If a workload pattern does not meet the needs of your application, topology patterns can still greatly reduce cost and complexity and you can tailor them to fit almost any need. Beyond generalities, there is no hard and fast rule for choosing one over the other. It comes down to understanding your application environment and its needs.
Question: Is IBM Workload Deployer an appliance like WebSphere CloudBurst?
Answer: Yes, it is still an appliance, but an updated one! The new appliance is 2U, and it provides more storage, processing power, and memory. It is still just as easy to set up - just slightly bigger.
Well, that is all for now, but I will be back many times over the coming months with more information. In the meantime, if you have any questions, please leave them in a comment below.
When many people think of cloud computing they immediately think of virtualization, and virtual machines in particular. This is completely natural and not at all surprising; after all, one of the core technologies underlying cloud computing is virtualization. However, it is important not to confuse one element of cloud computing with the entire thing - and this can sometimes happen. Many people have begun to leverage virtual machines in their on-premise environments and sometimes begin to call this their private cloud. While virtualization is a substantial step forward and helps get you started down the necessary path of standardization and automation that is essential in a cloud, it is not in and of itself "a cloud".
The National Institute of Standards and Technology has published its definition of cloud computing. This is a very complete and yet concise definition that includes not only the essential characteristics of a cloud solution but also the service models (IaaS, PaaS, SaaS) and deployment models (public, private, hybrid, community). It is a great way to get a perspective on cloud and can be useful when considering the solutions of various vendors.
Let me summarize the essential elements of cloud from this definition here:
- on-demand self-service
- broad network access
- resource pooling
- rapid elasticity
- measured service
So, this is interesting. Not only is this much more than just virtualization - virtualization isn't even mentioned in the list explicitly. Not to worry: virtualization is of course important, and it is included under the resource pooling topic. I would also assert that virtualization is necessary to facilitate the kind of on-demand, self-service, elastically scaling resources that a cloud leverages. What is crystal clear from this definition is that there is a lot more to a cloud solution than just virtual images and some hypervisor infrastructure upon which to run them. Somebody must provide the necessary on-demand, self-service capabilities, the network access to these services, the management of the resource pools, true elasticity for running systems, the measuring of services, and so forth. IBM Workload Deployer provides just such capabilities for the on-premise cloud, allowing you to efficiently deploy patterns built for virtual systems and virtual applications, with deep knowledge of the middleware being provisioned to optimize these solutions. Furthermore, Workload Deployer provides complete lifecycle management: pattern creation, deployment and provisioning, applying maintenance, resource and license management in the on-premise cloud, elastic scalability, and eventually returning resources to the on-premise cloud to be reused. Workload Deployer is a complete solution not only for server virtualization but for cloud computing.
However, virtualization doesn't have to stop with just virtual machines. It is a general principle that can be applied to more than just servers. At its core, virtualization is really about providing a level of abstraction between some real resources and the consumers of those resources. This is a natural fit when we think of server virtualization and virtual machines. However, there are also substantial benefits to be gained by adopting a similar abstraction between the middleware and the applications themselves - sometimes referred to as application virtualization.
By application virtualization I mean providing the capabilities to abstract the application from the underlying infrastructure such that it can be elastic, participate in health management policies, and move with agility across the pool of application infrastructure resources. This type of application virtualization is built into the Virtual Application pattern (hence the name) in Workload Deployer, and it is surfaced in solutions via policies (such as scaling and routing) and via high availability functions built into the Web Application pattern type. In Virtual Applications these features are fully integrated and optimized, as are all elements of Virtual Applications. Similar features have also been available for WebSphere Application Server deployments in Virtual System patterns through a special extension.
WebSphere Virtual Enterprise provides application virtualization for traditional WebSphere ND solutions, and this same capability is delivered for Virtual System pattern deployments of WebSphere Application Server through the Intelligent Management Pack. Leveraging the capabilities of Workload Deployer with Virtual Systems lets you gain the benefits of server virtualization: reduced hardware, rapid and consistent deployment of entire systems, dynamic adjustment of resource consumption, and much more. Leveraging the capabilities of the Intelligent Management Pack adds the ability to manage service level agreements with elastic scaling and health management, lower operational costs, and improve application management. Together these two solutions provide a powerful combination for improving the management and resiliency of your enterprise applications.
If you would like to learn more about application virtualization using the Intelligent Management Pack in conjunction with Virtual System patterns in IWD, then please join Keith Smith and me tomorrow for a webcast on this very topic. Keith is the lead architect on our WebSphere Virtual Enterprise and Intelligent Management Pack products and brings a wealth of experience in this space. So don't miss this opportunity - register here.
IBM Impact 2012 was just last week, with a theme of "Change the Game" ... and I'm still reveling in all of the excitement and energy that goes into conferences such as this. I was fortunate to get a last-minute spot to attend the conference and help out at the Solution Center, where I had the chance to speak to a lot of customers and other IBMers interested in cloud computing. Among the many things that stood out: there is certainly a lot of interest in cloud computing and patterns of expertise, and it seems that folks are ready to get some first-hand experience with these patterns. There's plenty of opportunity for that!