Applications - just like humans, animals, plants, and many other things - have a life cycle. They are conceived, born, grow, do foolish things in youth, hopefully improve over time, have problems that need to be fixed, don't always age well ... and eventually they die and release their assets to the next generation. Sounds kind of familiar, doesn't it?
One of the many benefits of virtual application patterns in IBM Workload Deployer and related IBM offerings is support for the complete life cycle of the application. You can manage the complete life cycle of virtual applications from a single interface that is fully integrated and well thought out, not just a series of links from one product UI to another. This eliminates the complexity of working with different interfaces, paradigms, metaphors, controls, labels, names, authorization, and so on, which is still the norm in many customer environments today. The benefits of this integration are obvious: it eliminates confusion, miscommunication, misinterpretation, and configuration and mapping errors. Providing a truly complete solution also facilitates a common knowledge base and encourages cooperation and collaboration among teams. You can share patterns, provide consistent governance for a solution, guarantee consistency in deployments, and build upon the expertise provided by others. Having an integrated solution for design, development, deployment, configuration changes, monitoring, and problem determination ensures that time is not wasted and valuable information is not lost.
As Joe mentioned in his last post, virtual application patterns are all the rage in IBM Workload Deployer. The high degree of abstraction provided by these patterns means users can remove tedious, time-consuming tasks like middleware installation, configuration, and integration from their field of view. As a consequence, users can build and deploy application environments in unprecedented time, freeing up more time to focus on the actual application.
This is obviously important because building and deploying application environments are crucial, traditionally time-consuming activities. However, what happens after you build and deploy the application? You manage it, that's what! Joe brought up the fact that IBM Workload Deployer makes this easier too by delivering an integrated management portal through which you can manage and monitor your application environments. Now, this probably already sounds valuable, but what really puts it over the top is that the management portal exposes an interface that is workload aware. But what does that mean?
To get an idea of what that means, consider the case where you use the shipped virtual application pattern to build a simple application environment with a web application and a database. You deploy it with IBM Workload Deployer, and your application is up and ready. Now you want to start checking things out. You start by opening the management portal directly from the appliance, where you see both the application and database components listed in the view.
After you have looked at basic machine statistics such as network activity and memory usage, you can move on to a more workload-centric view. For instance, you can examine statistics particular to a web application, such as request counts and service response times.
You may also decide that you want to alter certain aspects of your deployed environment. As an example, you could update your deployed application or change certain configuration data in the deployed environment.
It is important to note that you have a management interface for each of the components in your environment. That means that from the same management interface, you can manage and monitor the database you deployed as part of your environment. For example, at different intervals you may want to back up your database. You can do this directly from the management portal provided by IBM Workload Deployer.
Lest you think that you can only manage and monitor, this unique management interface is also a one-stop shop for all of your troubleshooting needs. From the centralized portal, you can view log and trace data for each component.
Virtual application patterns are an attempt to encapsulate each phase of your application's lifecycle, from creation to deployment to management. In this regard, I hope the above provides a taste of some of the management capabilities provided by virtual application patterns. It truly is the tip of the iceberg!
IBM Impact 2012 was just last week, with the theme "Change the Game", and I'm still reveling in all of the excitement and energy that goes into conferences like this. I was fortunate to get a last-minute spot to attend the conference and help out at the Solution Center, where I had the chance to speak with a lot of customers and other IBMers interested in cloud computing. Among the many things that stood out: there is certainly a lot of interest in cloud computing and patterns of expertise, and folks seem ready to get some firsthand experience with these patterns. There's plenty of opportunity for that!
Among the major features of the new virtual application pattern in IBM Workload Deployer is the notion of elasticity. That is, as your application needs more resources, it gets them. When your application can meet its SLAs with fewer resources, the environment shrinks. With this kind of pattern, you enable elasticity by specifying a policy and defining the scaling trigger (e.g. CPU usage, application response times, or database response times). What may have been a bit lost in some of these new announcements regarding IBM Workload Deployer is the fact that you can now leverage this core feature of cloud, elasticity, in your virtual system patterns as well.
If you have read this blog in the past, you probably already know that the Intelligent Management Pack is an option for virtual system patterns built using WebSphere Application Server Hypervisor Edition. When you enable the Intelligent Management Pack option, you are essentially building and deploying WebSphere Virtual Enterprise (WVE) environments. For those of you not familiar with WVE, the best way to describe it is that it provides application and application infrastructure virtualization capabilities. Of its many capabilities, the one most germane to our discussion today is the ability for users to attach SLAs to applications and then have WVE automatically prioritize requests and manage resources in order to meet those SLAs. Inherent in this capability is the ability to dynamically start and stop application server processes (JVMs) as required. In other words, WVE provides JVM elasticity.
The fact that WVE provides JVM elasticity is nothing new. Further, IBM Workload Deployer started providing virtual machine (VM) elasticity in previous versions (when it was WebSphere CloudBurst). With this feature, you could add VMs to, or remove VMs from, an already deployed virtual system using dynamic virtual machine operations provided by the appliance. The catch was that this VM elasticity was a manual action, and you could not link it to the same SLAs tied to your applications. Well, thanks to a new feature in WebSphere Virtual Enterprise and easy integration provided by the Intelligent Management Pack, this is no longer the case.
Starting in IBM Workload Deployer 3.0, you can take advantage of a new WVE feature called elasticity mode when using the Intelligent Management Pack. Elasticity mode is not unique to IBM Workload Deployer; rather, it is a new concept in the base WVE product. It allows you to define actions for how WVE should grow and shrink the set of nodes used by application server resources. Like the basic JVM elasticity capability in WVE, these node elasticity actions trigger based on the SLAs tied to your applications. Consider the case where you are using elasticity mode and your application is not currently meeting its SLA. If WVE does not think it can start any more application server instances on the current set of nodes, it will grow the set of nodes per your elasticity configuration. Conversely, if WVE detects that it can meet SLAs with fewer nodes, it will shrink the resources per your elasticity configuration.
In IBM Workload Deployer, using elasticity mode becomes even easier. All you need to do is use the Intelligent Management Pack and enable the elasticity mode option in your virtual system patterns. When you do this, you get automatic integration between IBM Workload Deployer and the deployed WVE environment. What does that mean? It means that if WVE detects it needs more nodes, it will automatically call back into IBM Workload Deployer and request that the appliance provision a new VM that will serve as a node for application server processes. It also means that if WVE detects it could meet SLAs with fewer resources, it will call into IBM Workload Deployer and ask it to remove a node. All of this happens without any user scripting. All you have to do is enable this option in your patterns and configure SLAs appropriate for your applications.
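To make the grow-and-shrink behavior concrete, here is a minimal Python sketch of the kind of decision logic described above. Everything here is invented for illustration: the function name, arguments, thresholds, and return strings are not actual WVE or IBM Workload Deployer interfaces.

```python
# Conceptual sketch only: none of these names are real WVE or IBM Workload
# Deployer APIs; they just illustrate the decision flow described above.

def elasticity_decision(avg_response_ms, sla_ms, node_count,
                        min_nodes, max_nodes, can_add_jvm_on_current_nodes):
    """Decide whether the node set should grow, shrink, or stay as-is.

    Mirrors the behavior described above: grow only when the SLA is breached
    and no more application server JVMs fit on the current nodes; shrink when
    the SLA can be met with fewer nodes.
    """
    if avg_response_ms > sla_ms:
        if can_add_jvm_on_current_nodes:
            return "start another JVM on an existing node"
        if node_count < max_nodes:
            return "grow: request a new VM from IBM Workload Deployer"
        return "at maximum capacity"
    # Generous SLA headroom (an assumed 50% margin here) suggests the
    # workload would still fit on fewer nodes.
    if avg_response_ms < 0.5 * sla_ms and node_count > min_nodes:
        return "shrink: ask IBM Workload Deployer to remove a node"
    return "no action"
```

In the real product, of course, you never write this logic yourself; WVE makes the decision and the Intelligent Management Pack handles the callback into the appliance.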
To me, this exciting new feature brings out the best of elasticity capabilities in both IBM Workload Deployer and WebSphere Virtual Enterprise. The result is a single management plane that gives you both VM and JVM elasticity for your cloud-based application environments. Best of all, elasticity actions map directly to SLAs for your applications. After all, when it comes to cloud, it's the application that really matters!
In a post not long ago, I mentioned new enhancements to virtual system patterns in IBM Workload Deployer. A prominent part of those enhancements was a set of updates to pattern construction that allow you to order virtual machine startup, order script package invocation, and include add-ons that provide system-level configuration options. Recently I uploaded a demonstration to YouTube that highlights some of these new capabilities. Specifically, it provides a brief look at the ordering and add-on enhancements.
I hope you take a look, and even more importantly, I hope to get some feedback. If there is something you would like to see captured in a demo, let me know and I'll work it toward the top of a long and continually growing list!
In the previous post, Dustin shared a great video demonstrating the value of the IBM Image Construction and Composition Tool that is now delivered with IBM Workload Deployer V3.1. This is certainly one of the key new features of IBM Workload Deployer V3.1. However, there are also a number of other compelling enhancements and features that we would like to communicate.
I created the attached video to highlight some of the features included in the new Workload Deployer release. The video uses the web console to highlight some of the features and capabilities, giving a brief introduction to each one. Without going into a lot of depth, I think it gives a nice overview. This may be especially helpful if you already have Workload Deployer v3.0 and want to see the value you will get when you upgrade to Workload Deployer v3.1. Check it out.
We believe that these new features make IBM Workload Deployer V3.1 an even better solution for your private cloud needs. Please let us know what you think.
I just wanted to point out a great opportunity for anybody considering leveraging IBM Workload Deployer v3 to deploy Database workloads. On June 29th Rav Ahuja, a Senior Product Manager for Data Management at IBM, will be hosting a webcast entitled "Easily Deploying Private Clouds for Database Workloads". He will be joined by Chris Gruber (Product Manager, Database as a Service), Leon Katsnelson (Program Director, IM Cloud Computing Center of Competence), and Sal Vella (Vice President, Database Development and Warehousing) in this panel discussion.
As many of you already know, IBM Workload Deployer v3 comes pre-loaded with DB2 images and patterns that are configured to rapidly provision standardized database servers for any number of purposes. The servers can be deployed in standalone configurations or as part of a complete virtual system that includes web components alongside the database components. These servers can also be configured for high-availability scenarios. This panel discussion will cover all of these scenarios and more.
You can read more about the webcast in this blog post by Rav Ahuja.
If you want further details about how to build and rapidly deploy databases in a private cloud, be sure to attend this free webinar on June 29th.
In my opinion, declarative deployment models are key to the entire notion of Platform as a Service (PaaS). That is, users should concern themselves with what they want, not necessarily how to get it. The PaaS system should be able to interpret the user's declared requirements and automatically convert them into a running system. In this respect, I think the new virtual application pattern, and more specifically policies, in IBM Workload Deployer takes a giant leap toward a more declarative deployment model.
In IBM Workload Deployer, policies allow you to 'decorate' your virtual application pattern with functional and non-functional requirements. In other words, they provide a vehicle for you to tell the system what qualities of service you expect for your application environment. To put a little context around this discussion, let's examine the policies available in the virtual application pattern for web applications. Specifically, let's look at the four policy types you can attach to Enterprise Application, Web Application, and OSGi Application components in this pattern:
Scaling policy: When it comes to cloud, the first thing many folks think about is autonomic elasticity. Applications should scale up and down based on criteria defined by the user. Well, that is exactly what the scaling policy lets you do. You simply attach this policy to your application component, and then specify properties that define when to scale. First, you choose a scaling trigger from a list that includes application response time, CPU usage, JDBC connection wait time, and JDBC connection pool usage. After choosing your trigger, you decide the minimum and maximum number of application instances for your deployment, and then you choose the minimum number of seconds to wait for an add or remove action. At this point, you can deploy your application and IBM Workload Deployer will monitor the environment, automatically triggering scaling actions as needed.
JVM policy: I would be willing to bet that nearly all of you tune the JVM environment into which you deploy your applications. The JVM policy allows you to take two common tuning actions, setting the JVM heap sizes and passing in JVM arguments, as well as to attach a debugger to the Java process (especially useful in development and test phases). You can also use the policy to enable verbose garbage collection (invaluable for understanding heap usage patterns for your application) and select the bit level (32 or 64) for your application. Again, all you have to do is attach the policy and specify the properties. IBM Workload Deployer will take care of the required configuration updates.
Routing policy: The routing policy provides a simple way to specify virtual hostnames and allowable protocols (HTTP or HTTPS) for your application. Attach the policy, provide the virtual hostname you want to use, select the desired protocols, and that's it! Remember, once you set the virtual hostname you will need to update your name server to map the hostname to the appropriate IP address.
Log policy: During the development and test phase, it is likely that you will want to enable certain trace strings in the application runtime. The log policy allows you to provide trace strings for your application, and it makes sure that the appropriate configuration updates occur in the deployed environment.
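As a rough illustration of how the scaling policy's properties interact, here is a small Python sketch. The class name, property names, and thresholds are all invented for this example; the real policy is declared in the pattern, and IBM Workload Deployer evaluates it for you.

```python
import time

# Illustrative sketch only: these names are modeled loosely on the scaling
# policy described above (a trigger, instance bounds, and a minimum wait
# between actions); they are not the actual IBM Workload Deployer schema.

class ScalingPolicy:
    def __init__(self, scale_out_above, scale_in_below,
                 min_instances, max_instances, min_wait_seconds):
        self.scale_out_above = scale_out_above    # e.g. CPU % or response time
        self.scale_in_below = scale_in_below
        self.min_instances = min_instances
        self.max_instances = max_instances
        self.min_wait_seconds = min_wait_seconds
        self._last_action = 0.0

    def evaluate(self, metric, instances, now=None):
        """Return +1 to add an instance, -1 to remove one, or 0 to hold."""
        now = time.time() if now is None else now
        # Honor the minimum wait between scaling actions.
        if now - self._last_action < self.min_wait_seconds:
            return 0
        if metric > self.scale_out_above and instances < self.max_instances:
            self._last_action = now
            return 1
        if metric < self.scale_in_below and instances > self.min_instances:
            self._last_action = now
            return -1
        return 0
```

For example, with a CPU trigger of 80% to scale out, 20% to scale in, bounds of 1 to 5 instances, and a 120-second minimum wait, a 90% CPU reading would add an instance, and a second breach 50 seconds later would be ignored until the wait elapses.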
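To give a feel for the kind of JVM options the JVM policy's settings correspond to, here is a hypothetical Python helper. The function and the dictionary keys are invented for illustration; in practice IBM Workload Deployer applies the equivalent configuration updates for you. The flags themselves (-Xms/-Xmx, -verbose:gc, and the JPDA debug agent) are standard JVM options.

```python
# Hypothetical helper: the function name and policy keys are invented for
# illustration; only the emitted JVM flags are real, standard options.

def jvm_args_from_policy(policy):
    """Translate illustrative JVM policy settings into java launch arguments."""
    args = []
    if "min_heap_mb" in policy:
        args.append(f"-Xms{policy['min_heap_mb']}m")
    if "max_heap_mb" in policy:
        args.append(f"-Xmx{policy['max_heap_mb']}m")
    if policy.get("verbose_gc"):
        # Verbose GC output is invaluable for understanding heap usage.
        args.append("-verbose:gc")
    if "debug_port" in policy:
        # Standard JPDA agent string for attaching a remote debugger.
        args.append("-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,"
                    f"address={policy['debug_port']}")
    args.extend(policy.get("generic_args", []))
    return args

print(jvm_args_from_policy({"min_heap_mb": 512, "max_heap_mb": 1024,
                            "verbose_gc": True}))
# prints ['-Xms512m', '-Xmx1024m', '-verbose:gc']
```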
While this is not an exhaustive explanation of each of the policies above, I hope it gives you a basic idea of what they are and how to use them. To me, declarative deployment models are going to be a crucial part of making PaaS successful, so I am really excited about the notion of policies in IBM Workload Deployer. What do you think?
As many of you well know, virtual images are the foundation of virtual system patterns in IBM Workload Deployer. Whether you are using IBM Hypervisor Edition images or custom-built images produced by the IBM Image Construction and Composition Tool, every virtual system pattern has at least one virtual image as part of its foundation. So, if virtual images are the foundation of virtual system patterns, what is the foundation of these virtual images?
While you could probably make a good argument for a number of different things being the foundation of the virtual image (the operating system, other installed software, etc.), I like to think that, at least in the context of IBM Workload Deployer, the activation engine inside the virtual image is the true foundation. Inside this activation engine, you will find a collection of scripts and services that are capable of configuring the virtual machine for use. Not only does this engine perform basic system-level actions like configuring the machine's hostname, IP address, time, and network interfaces, but it also configures the software inside the virtual machine. For instance, the activation engine in the WebSphere Application Server Hypervisor Edition image is capable of fixing up profile information, federating nodes, creating application server clusters, and more. Best of all, in the case of IBM Hypervisor Edition images, you (the user) get all of this right out of the box. There are no steps to perform or administrative tasks to undertake in order to benefit from the activation engine. It is simply there!
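Conceptually, you can think of the activation engine as an ordered series of configuration steps run at first boot. The Python sketch below is purely illustrative, with invented step names and values; the real implementation is the collection of scripts under /opt/IBM/AE/AS.

```python
# Conceptual sketch only, not the real activation engine: it illustrates the
# idea of an ordered series of activation steps configuring a freshly booted
# virtual machine. Step names and values are invented for illustration.

def run_activation(steps, config):
    """Run each (name, step) pair in order and collect a status report."""
    report = []
    for name, step in steps:
        try:
            step(config)
            report.append((name, "ok"))
        except Exception as exc:
            # Record the failure rather than aborting the whole sequence.
            report.append((name, f"failed: {exc}"))
    return report

steps = [
    ("set-hostname", lambda cfg: cfg.setdefault("hostname", "was-node-1")),
    ("configure-network", lambda cfg: cfg.setdefault("ip", "192.0.2.10")),
    ("fixup-was-profile", lambda cfg: cfg.setdefault("profile", "AppSrv01")),
]
print(run_activation(steps, {}))
# prints [('set-hostname', 'ok'), ('configure-network', 'ok'), ('fixup-was-profile', 'ok')]
```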
So, at this point you may ask yourself, 'If all of this is included right out of the box, why do I need to care?' That is a fair question, but ultimately I feel it is always important to understand the foundational elements of any technology. In this respect, I do not feel the activation engine in the IBM Hypervisor Edition images is any different. Lately, I have been telling users to take at least a little time to understand what the activation engine is and, even more importantly, what it is doing for you during deployment. Specifically, I always suggest taking a little time to look at the scripts in the activation engine, most often found in the /opt/IBM/AE/AS directory of a virtual machine deployed by IBM Workload Deployer.
What can be gained by taking the time to peruse these scripts? Most importantly, you will learn what the engine does for you and what you cannot do if you expect the image to deploy correctly. For instance, if you look in some of those activation engine scripts, you will see that they use the sudo command in several places. While I know many of you may be tempted to remove the sudo command during extend and capture, doing so will break the activation engine. I have seen this happen multiple times, and trust me, if you did not know the activation engine used that command, it is not necessarily an easy problem to debug. This is a case where the value of at least superficially understanding the activation engine is clear.
Want another example? Okay, consider that you want to run WebSphere Application Server as a user called wasadmin. At pattern deployment time, it is easy enough to supply wasadmin in the appropriate field of the part configuration data and click OK. IBM Workload Deployer deploys the system and voila, WebSphere Application Server is magically running as wasadmin. Everything is fine so far, but let's take this a step further and say that you previously performed an extend and capture, and you installed software components in the image that should be owned by your wasadmin user. It is technically possible to define users during extend and capture and then install software content via that user, but if you also want to specify that user as the WebSphere Application Server administrative user at deployment time, you will run into an issue. This is because the activation engine runs the usermod command during deployment to change the existing default virtuser into the user that you specify, in this case wasadmin. If the usermod command attempts to change virtuser to wasadmin but wasadmin already exists as a user on the operating system, the command will not complete properly, and it is very likely you will see further errors downstream. A simpler approach is to create the user during extend and capture, install any components via that user, and then delete the user before capturing. You can attach a deploy-time script that fixes up the appropriate settings for wasadmin (like user ID and group ID), and it will run after the activation engine successfully runs usermod and changes virtuser to wasadmin. Problem averted!
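As a sketch of what such a deploy-time fix-up script might look like, here is a small Python example, assuming a Python runtime is available in the image. The directory path and user name are examples only; the idea is simply to re-own the software you installed during extend and capture after the activation engine has turned virtuser into your chosen user.

```python
import os
import pwd

# Hedged sketch of a deploy-time fix-up script. The path and user name used
# in the example call below are assumptions, not values from the product.

def fix_ownership(root_dir, user_name):
    """Recursively change the owner and group of root_dir to user_name."""
    entry = pwd.getpwnam(user_name)
    uid, gid = entry.pw_uid, entry.pw_gid
    for dirpath, _dirnames, filenames in os.walk(root_dir):
        os.chown(dirpath, uid, gid)
        for name in filenames:
            os.chown(os.path.join(dirpath, name), uid, gid)

# Example (run as root on the deployed virtual machine):
# fix_ownership("/opt/myproduct", "wasadmin")
```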
In reading some of the above, I fully realize that it may be a little confusing at first. That said, I assure you that there is not much to it at all once you have a basic understanding of the activation engine. With that understanding in hand, you will know what you do not need to do (e.g. create profiles, federate nodes), what you cannot do (e.g. remove the sudo command), and what you can do with a little bit of reconciliation work (e.g. define your WebSphere Application Server administrative user during image extension). I encourage you to take a little time with your next deployment and give the activation engine a once-over. You will undoubtedly have a better understanding of the deployment process, and you will ultimately be in a position to most effectively leverage virtual system patterns in IBM Workload Deployer.
Customers are always impressed when they learn about the simplicity, resiliency, and rapid time to value they can receive from virtual applications. However, they are usually a little mystified by how virtual applications really work. After all, they have become quite accustomed to doing things the "traditional way", where they control every aspect of their applications manually. Virtual applications represent an entirely new way of thinking. Sure, the benefits are enormous, but can you really trust them? How is it doing all of this anyway?
What seems like "magic" is really a sophisticated, coordinated set of activities driven by IBM Workload Deployer that leverages the expertise built into the pattern type. Yes, you can trust it, because experts have built the system and designed it to react and respond much faster than you can. When moving away from manual processes to automated processes, it is always nice to get a sense of what is really happening. I think it is just human nature. We can't really place our trust in something until we have firsthand experience or understand what it is really doing ... I guess it is the critic inside each one of us. Even after you've experienced the value, it is still reassuring to see and understand the "how".
It is the "how does it do that?" type of question that I attempted to answer for virtual applications in a recent post on the Expert Integrated Systems blog. It attempts to pull back the curtain and describe what is actually happening to support a virtual application pattern. As with my previous post, this was written for IBM PureApplication System, but the concepts are 100% applicable to IBM Workload Deployer. I think you will find it interesting ... Continue reading ...