As a final preview of this week's building block sessions in the Enabling cloud computing with WebSphere campaign, I caught up with WebSphere DataPower architect Tim Smith. Tim is delivering a podcast that introduces and explains the new Application Optimization capabilities in the WebSphere DataPower line of products. Here is what Tim had to say:
Me: I speak with quite a few customers about the WebSphere CloudBurst Appliance, and for once I'm happy to be the one asking this question. Why do we deliver WebSphere DataPower in the appliance form factor?
Tim: DataPower has become a dominant player in the DMZ and in the ESB, in large part because it is a purpose-built hardware appliance. There are many things that our customers like about this appliance package. First, it has security as part of its DNA. Its basis for securing connections applies throughout the network, whether the appliance sits in a DMZ or in an ESB, and the physical box provides tamper-resistant protection. Another reason is availability -- the appliance has no spinning media, it has dual power supplies, and it focuses on failover support.
In both the DMZ and the ESB, there has been a proliferation of products. The main reason for the proliferation is that customers want to remove as many decisions from the general purpose server as possible and let servers do what they do best: process application requests. The devices that have been proliferating make more decisions on the request. They do deep packet processing and routing. They also may transform the request into an entirely different request. So, an abundance of "pre-processing" decisions and operations are made. With DataPower, many functions are integrated into a single hardware platform, giving you a smaller box count. There is no need to purchase and maintain several platforms, their OS and software versions, compatibility lists, and so on. With a single hardware box that does so many things, we can greatly reduce the total cost of ownership for our users.
The DataPower appliance is a blend of hardware and firmware that is well provisioned with hardware assists for compiling, parsing, and many of the other intensive packet-processing operations. To summarize, you get an extremely flexible and adaptable product that reduces total cost while increasing performance.
Me: A theme that comes up in cloud computing over and over is consolidation. Can you speak to the consolidation offered by WebSphere DataPower appliances with respect to the self-balancing capabilities?
Tim: Yes. My answer to the prior question was a long-winded way of describing DataPower's ability to consolidate many features into a single platform. Self-balancing is an example. As DataPower became more popular, larger installations required multiple DataPower appliances in a tier of platforms. A common architecture was to place a load balancer or IP sprayer in front of the tier to distribute the traffic evenly among the tier of DataPower appliances. An IP sprayer is an example of another platform that needs to be added to the environment. It is another box that must be purchased, managed, and maintained. Self-balancing is a feature that was added to DataPower to eliminate the need for an IP sprayer. The way it works is that one of the DataPower appliances in the tier owns the Virtual IP (VIP) Address. It receives all of the traffic, and then distributes it to each of the other DataPower appliances in the tier. If the DataPower appliance that owns the VIP address goes down, one of the others is elected and it takes over. The result is one less product required to support the same level of functionality.
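To make the takeover behavior concrete, here is a minimal Python sketch of the idea; the priority-order election rule shown here is an assumption for illustration, not DataPower's actual algorithm:

    import itertools

    # The tier of appliances in priority order, and their health (illustrative).
    tier = ["dp1", "dp2", "dp3"]
    healthy = {"dp1": False, "dp2": True, "dp3": True}  # dp1 just went down

    def elect_vip_owner(tier, healthy):
        """The first healthy appliance in priority order takes over the VIP."""
        for appliance in tier:
            if healthy[appliance]:
                return appliance
        raise RuntimeError("no healthy appliance available to own the VIP")

    def distribute(requests, tier, healthy):
        """The VIP owner receives all traffic and spreads it across healthy peers."""
        rotation = itertools.cycle([a for a in tier if healthy[a]])
        return {req: next(rotation) for req in requests}

    print(elect_vip_owner(tier, healthy))       # dp2 takes over the VIP
    print(distribute(range(6), tier, healthy))  # traffic spread across dp2, dp3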
Me: Until recently, cloud computing mostly focused on virtualization and management of resources at the raw compute level (servers, storage, networking, etc.). While there is definitely ongoing focus there, we are starting to see it move up the stack toward applications, and part of that effort includes more evolved application load distribution. With that in mind, how can WebSphere DataPower help users more effectively distribute requests to their applications?
Tim: If a front-end appliance or gateway device can dynamically learn information about its environment, specifically the back end, it will be able to make better decisions on how and where to route the request. This is one of the tasks that the Application Optimization feature addresses. Information about the back end can of course be manually configured, but the real value in cloud computing is dynamically adapting when new server resources are brought online or taken offline. In the 3.8.0 release, we implemented something called Intelligent Load Distribution (ILD). Intelligent Load Distribution focuses on continually learning the topology of a back end, updating DataPower's load balancers with that information, and distributing the load based on the updates. In addition to the topology, ILD learns the weights associated with each server. These weights can continually and automatically change as traffic patterns change. The result is load balancing to the back end that sends the optimal amount of load to each server.
Another traffic distribution aspect incorporated into ILD is session affinity. When a server application needs to receive every request from a given client, session affinity is used to route the requests to the same server. In some sense, session affinity overrides the load balancing algorithm. The session affinity support works with any type of back end server, but with a WebSphere back end, all session affinity information is automatically configured.
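As a rough illustration of how dynamic weights and session affinity interact, consider this Python sketch; the weight values and the affinity-first rule are assumptions made for illustration, not DataPower internals:

    import random

    # Dynamically learned back-end topology: server -> current weight.
    # In ILD these weights change automatically as traffic patterns change.
    weights = {"srv1": 5, "srv2": 3, "srv3": 2}

    # Session affinity table: session ID -> server that must handle it.
    affinity = {}

    def route(session_id):
        # Affinity overrides the load balancing algorithm for known sessions.
        if session_id in affinity:
            return affinity[session_id]
        # Otherwise pick a server in proportion to its learned weight.
        servers, w = zip(*weights.items())
        choice = random.choices(servers, weights=w, k=1)[0]
        affinity[session_id] = choice  # pin later requests for this session
        return choice

    print(route("sess-42"))  # weighted choice on the first request
    print(route("sess-42"))  # same server on every subsequent request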
Me: Continuing on the theme of application intelligence, what is this new Application Routing option in WebSphere DataPower?
Tim: ILD focused on learning the topology of the network and making better decisions based on an ever-changing cloud topology. Application Routing does something similar by learning which applications are running on each server. Once a request is handed to DataPower's load balancer, it is classified according to the application it targets. Then the request is load balanced among the servers that are running that application. The information needed to perform application routing is dynamically learned and changes as applications are added or removed.
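In the same hedged spirit, a tiny Python sketch of the classify-then-balance idea; the routing table is learned dynamically in the real feature and is hard-coded here purely for illustration:

    # Learned map of application -> servers currently running it (illustrative).
    app_topology = {
        "/inventory": ["srv1", "srv3"],
        "/orders": ["srv2", "srv3"],
    }

    def route_request(path, balance):
        """Classify the request by its target application, then load balance
        among only the servers that are running that application."""
        for app, servers in app_topology.items():
            if path.startswith(app):
                return balance(servers)
        raise LookupError("no server currently runs an application for " + path)

    # Plug in any balancing policy, e.g. a trivial first-server pick.
    print(route_request("/orders/123", balance=lambda servers: servers[0]))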
WebSphere has invested substantially in managing the life cycle of an application. Changing from one edition of an application to the next sounds like an easy task, but it can be very difficult to perform this type of maintenance on a production environment. The DataPower appliance supports life cycle management by working with the WebSphere back end to provide group and atomic edition rollout. The rollout feature allows traffic to be gracefully diverted from servers that are being taken offline and reloaded with the new application edition. This rollout can be done while leaving the other applications on the server unaffected. This support makes edition rollout a very simple task for the system administrator.
In last week's post, I put the spotlight on various aspects of bundles in the Image Construction and Composition Tool. I finished with a look at a WebSphere CloudBurst virtual image created from the bundle. However, you do not just magically go from a bundle to an image that you can use in WebSphere CloudBurst, Tivoli Provisioning Manager, or on the IBM Cloud. Today, I want to show you how to go from a bundle to a custom virtual image using the IBM Image Construction and Composition Tool.
Once you have defined at least one bundle and one base operating system image, you are ready to compose a custom image. We already talked about creating a bundle, but the base operating system image is a new topic. You can define a base image either by starting from an ISO and kickstart configuration files, or by importing an existing Open Virtual Appliance (OVA) image that contains your operating system of choice. Once you have that base image imported or defined in the Image Construction and Composition Tool, you can extend it to create a custom image on top of the base OS.
After creating your extended image, you can add bundles that represent the software you want to install in your custom image. Simply click on the Software tab of the new virtual image. Click the add icon, and select the bundle that you want to add. You can add as many bundles as you would like to your custom image.
After adding a bundle, it will show up in the Planned list of software for the image. Click on it to display its details on the right side of the screen. You will notice General, Install, and Configuration sections for the bundle. In the Install section, you will find a list of the installation parameters you defined for the bundle. You can provide values for those parameters at this time.
If you click on the Configuration section, you will see all of the configuration parameters you specified for the bundle. You can provide default values, and you can specify whether or not these should be configurable by deployers of your custom image. If you mark them as configurable, users will be able to provide values for the parameters at image deploy time, regardless of whether they provision the image using WebSphere CloudBurst, Tivoli Provisioning Manager, or the IBM Cloud.
After you add the necessary bundles and specify installation and configuration data, you can save the image. Upon saving, the image status changes from Synchronized to Out of Sync.
Now you are ready to synchronize the image. To do this, simply click the synchronize icon. This results in the creation of a virtual machine in the cloud environment (VMware or the IBM Cloud) defined by the selected cloud provider. The Image Construction and Composition Tool will then invoke the appropriate installation tasks (per the bundles you included in the image) within the running virtual machine. It will also copy over any configuration scripts you defined in the bundles.
After a while, the synchronization process completes, and the image returns to the Synchronized state. At this point, you are ready to capture the image by clicking the capture icon. This results in the creation of an OVA virtual image with your customizations. When the capture process completes, the image status changes to Deployable.
Once the image is in the deployable state, it is nearly ready to use. If you are using the IBM Cloud as your cloud provider, you can simply mark the image complete by clicking the complete icon. At this point, the image will show up in your private catalog on the IBM Cloud and it is ready to use. If you are using VMware as the cloud provider, you need to export the image. Click the export icon and provide information about an SCP-enabled server to which you want to export the image. Ideally, this location is directly reachable by the WebSphere CloudBurst or Tivoli Provisioning Manager environment into which you will import the image.
You can monitor the export status in a separate window by clicking on a link shown after clicking the OK button in the dialog above. When the export finishes, you are ready to import your new custom virtual image into WebSphere CloudBurst or Tivoli Provisioning Manager.
I hope the last three posts have given you a better idea of what the new IBM Image Construction and Composition Tool is all about. There will definitely be more to come about this tool in the near future, but in the meantime, if you have any questions or comments, please reach out to me. Until then, good luck and full speed ahead on your custom image compositions!
Next up on our sneak preview of the building block sessions for the Enabling cloud computing with WebSphere campaign is the Dynamic Infrastructure Services block. One portion of that block is a discussion about some of the technical capabilities of WebSphere Virtual Enterprise given by Nitin Gaur. Nitin is a Consulting IT Specialist within WebSphere, and an all-around WebSphere guru. I caught up with him to ask a few questions about his upcoming podcast.
Me: When people think cloud computing, one of the core concepts is 'on demand'. They want just enough resource at just the right time. In that sense, can you tell me a little about the On-Demand Router (ODR) in WebSphere Virtual Enterprise (WVE)? What is it and what core functions does it provide?
Nitin: So, first allow me to take a step back. In my view, cloud computing is a new consumption and delivery model nudged by consumer demand and continual growth in internet services. I hold that any cloud computing platform exhibits six key characteristics, among them:
Standards based delivery
Usage based equitable chargeback
I thus deliberately use the term platform in the context of a cloud computing environment that facilitates flexibility, robustness, and agility -- a systemic approach to providing a stage for hosting applications without concern for the availability or provisioning of underlying resources. Since hardware and software virtualization offer significant cost and resource management advantages, it is common to see virtualized platforms as core building blocks of any cloud platform. Such virtualization technologies provide an elastic infrastructure service. In this respect, WVE provides application server virtualization, which enables an elastic, business-policy-driven application infrastructure.
Now back to the On-Demand Router. The ODR is the autonomic engine that drives the activity enabling the elastic infrastructure discussed above. The ODR operates in a highly dynamic WVE environment, so it is imperative for the ODR to be aware of any changes in the environment such as newly deployed applications, the addition of new application servers, and any planned or unplanned server outages. It achieves this awareness by continuously interacting with WVE's fluid and dynamic feedback mechanism.
Me: Autonomic capabilities seem to be a core part of WebSphere Virtual Enterprise. To that end, can you tell us a little about the autonomic capabilities provided by dynamic clusters in WVE?
Nitin: Dynamic application placement is a defining capability of WVE that directly contributes to WVE's ability to provide a dynamic, virtualized, and goal-oriented environment for workload management and continuous availability. The dynamic application management capability maximizes the efficient use of hardware resources by allocating resources appropriately per application based on fluctuating demands in the enterprise infrastructure. It determines which servers to stop and start in a dynamic server cluster in order to meet current demand for applications, and it does this in the context of a set of administrator-defined policies that uphold the enterprise’s service level agreements (SLAs) for its application infrastructure. The dynamic application placement framework must balance resource availability against health policies, service policies, and the importance levels assigned to applications.
Dynamic server clusters are key to WVE’s ability to dynamically adjust the application environment according to server load, and they provide the basis for a virtualized server runtime environment. The big difference between a dynamic cluster in WVE and a static cluster in WebSphere Application Server is that dynamic clusters grow and shrink as needed to meet current demand by starting and stopping members of the cluster. Although dynamic clusters and static clusters can co-exist in a cell, dynamic application placement can only work with dynamic clusters. To prevent unchecked growth, each dynamic cluster has a mechanism that you use to define a boundary for that cluster’s growth. The boundary is both quantitative (based on criteria that define the minimum and maximum number of application servers that can run in the cluster simultaneously) and locational (based on criteria that confine the growth of the dynamic cluster to a defined set of nodes).
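As a toy illustration of the quantitative boundary only (WVE's real placement logic also weighs health policies, service policies, and application importance), assuming demand is expressed as requests per second:

    import math

    def target_instances(demand, per_server_capacity, min_servers, max_servers):
        """Clamp the demand-driven instance count to administrator-defined bounds."""
        desired = math.ceil(demand / per_server_capacity)
        return max(min_servers, min(desired, max_servers))

    # Demand of 730 req/s against 100 req/s per server, bounded to [2, 5] -> 5
    print(target_instances(730, 100, 2, 5))
    # Demand collapses to 80 req/s -> shrink, but never below the minimum of 2
    print(target_instances(80, 100, 2, 5))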
Me: I know you have been around the country, and for that matter the globe, helping our users adopt and implement WebSphere Virtual Enterprise. Tell us about one of your favorite customer stories.
Nitin: I would cite the example of one of the leaders in the entertainment industry (and my favorite customer); let's call them Company X, since I cannot cite the name. The core of the company's application infrastructure is the Sales App Infrastructure (SAI), consisting of more than 10 enterprise applications. To keep up with demand, Company X had to procure more hardware and software to support the core systems. This strategy resulted in a large infrastructure footprint with low hardware utilization. The growing hardware footprint became difficult to manage and required additional resources, and the sheer size of the deployment put the company in reaction mode rather than a posture of proactive monitoring. Some application servers rendered themselves unavailable and required the team to restart them every 24 hours. From a cost standpoint, it cost the company the same amount of money to request a virtual platform as it would to purchase a new physical server, which led to significantly underutilized hardware throughout the enterprise. WVE was brought in to help Company X better manage their WebSphere Application Server footprint. Dynamic clusters, application health policies, and application editioning features helped the company better utilize hardware, reduce hardware expenditures, increase visibility into their applications, and improve application availability.
In addition to helping with the existing environment, WVE helped Company X roll out a new project with applications that required continuous availability for worldwide users. The team made use of policy-based workload management to ensure the performance and availability levels of these new applications met their business needs. In addition, the company was able to reduce the number of WebSphere Application Server licenses and physical servers required for the new deployment. In sum, WebSphere Virtual Enterprise saved the company significant time, money, and management effort.
When you build application environments in WebSphere CloudBurst, there are three main elements that comprise those environments: virtual images, patterns, and script packages. It is likely that at some point you will want to export your environments from a particular WebSphere CloudBurst Appliance, whether to apply version control techniques, share resources among multiple appliances, back up business-critical files, or for any number of other reasons. Whatever the reason, WebSphere CloudBurst provides the necessary facilities to support both image and pattern export. WebSphere CloudBurst provides export capability for virtual images that you can access via the web console and CLI. In addition, when you download the CLI from the appliance, you get a sample script called patternToPython.jy that you can use to facilitate pattern export.
The patternToPython.jy sample produces a script that you can use to recreate the targeted pattern on an appliance of your choosing. However, before running the script to recreate the pattern on an appliance, you must ensure that any images and script packages referenced by the pattern exist on the target appliance. Since WebSphere CloudBurst enables you to easily export and import virtual images, all you have to do is account for script packages when attempting to export complete application environments from WebSphere CloudBurst. While the appliance does not directly provide the means to export script packages like it does for images and patterns, the WebSphere CloudBurst Samples Gallery includes a sample that does. You can find this sample in the CLI scripts section of the samples gallery, with the title Export a script package in a portable format.
After downloading the sample CLI script, you simply unzip the archive and use the embedded Jython script from the WebSphere CloudBurst CLI with the following command:
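The invocation looks something like the sketch below. The launcher path, host, and credentials are placeholders, and the argument order for the sample is an assumption -- check the README included in the downloaded archive for the authoritative syntax:

    ./bin/cloudburst -h cloudburst.example.com -u admin -p password \
        -f exportScriptPackage.jy SCRIPT_PACKAGE_NAME export.zip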
This command will create a ZIP file containing the contents of the script package specified by SCRIPT_PACKAGE_NAME. In addition to simply copying the contents of the specified script package into the new ZIP file, the command will trigger the creation of a cbscript.json file based on the definition of the target script package. This file defines the properties of the script package such as the execution command, command arguments, etc., and the exportScriptPackage.jy script adds it to the newly produced ZIP file.
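For reference, a representative cbscript.json looks something like the following sketch; the exact field set may vary by appliance firmware level, so treat the names and values as placeholders:

    [{
      "name": "Install sample application",
      "version": "1.0.0",
      "description": "Installs and configures the sample application",
      "command": "/bin/sh /tmp/installapp/install.sh",
      "log": "/tmp/installapp/logs",
      "location": "/tmp/installapp",
      "keys": [
        { "scriptkey": "APP_NAME", "scriptvalue": "", "scriptdefaultvalue": "MyApp" }
      ]
    }]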
The result of using this sample is a self-contained ZIP file that you can load into any other WebSphere CloudBurst Appliance. Since the ZIP file includes the cbscript.json file, when you load it into another appliance you do not have to define any of the properties for the script package. This eliminates the potential for definition errors as you move script packages from one appliance to another and makes it simple to export and import script packages among appliances.
There are a couple of things about the sample worth mentioning. First, if a cbscript.json file already exists in the specified script package, the export script will not create a new one. Instead, the sample simply copies the existing one into the new ZIP file. Second, the target script package's contents must be a ZIP file. That is to say, the file associated with the script package in WebSphere CloudBurst must be a ZIP. If you are using anything prior to WebSphere CloudBurst 2.0, this is not an issue since you can only associate ZIP files with script packages. However, WebSphere CloudBurst 2.0 allows you to associate any type of file (ZIP, shell script, Python script, etc.) with a script package, so keep the ZIP requirement in mind when exporting.
If you are looking to effectively export all of the components of your WebSphere CloudBurst patterns, check out this sample script. I think it will make the process a bit easier for you. As always, comments and feedback are welcome.
When I talk with WebSphere CloudBurst users, the topic of custom virtual images comes up frequently. In some cases they simply want to customize a shipped IBM Hypervisor Edition image, and in other cases they want to create a completely custom image. Creating a customized version of an IBM Hypervisor Edition image is relatively easy since we give you extend and capture in WebSphere CloudBurst. Creating a completely custom image has historically been a bit tougher, mostly owing to the fact that there was no standard tool or process for image assembly. I am happy to say that today's publication of the IBM Image Construction and Composition Tool changes all that.
Watch a demo of the IBM Image Construction and Composition Tool
The primary purpose of the Image Construction and Composition Tool is to enable a modular approach to virtual image construction, while taking into account the typical division of responsibilities within an organization. The tool allows the right people within an organization to contribute their specialized knowledge as appropriate to the virtual image creation process. This means OS teams can handle the OS and software teams can handle the appropriate software. A separate image builder can then use both OS and software components to meet the needs of users within the organization. Best of all, the image builder does not need intimate knowledge of how to install or configure any of the components in the image. They simply need to know which OS and software components to use.
When using the Image Construction and Composition Tool, you start by defining the base operating system you wish to use for your images. You can do this by importing an existing virtual image with an OS already installed, providing an ISO for the OS, or pointing to a base OS image on the IBM Cloud. The bottom line is that you have necessary flexibility to start with your certified or ‘golden’ operating system build. Once you have the base OS image defined in the Image Construction and Composition Tool, you can start defining custom software for use in the images you will compose.
In the tool, bundles represent the software you wish to install within a virtual image. The definition of a bundle contains two major parts: Installation and Configuration. The installation component of a bundle tells the Image Construction and Composition Tool how to install your software into the virtual image. You provide a script or set of scripts that install the necessary components into your image, and you direct the tool to call these scripts. These tasks run once, during the initial creation of the virtual image, thus allowing you to bake large binaries, the results of long-running installation tasks, and other necessary artifacts directly into your image.
The configuration section of a bundle defines actions that configure the software installed into the image. As with the installation tasks, you provide a script or set of scripts for configuration tasks. Unlike installation tasks, which run exactly once, configuration scripts become part of the image's activation framework and, as such, run during each image deployment. Using the tool, you can define input parameters for configuration scripts and optionally expose them so that users can provide values at image deploy time. Configuration tasks are important in providing the flexibility that allows users to leverage a single virtual image for a number of different deployment scenarios.
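For instance, a bundle's configuration script can be quite small. Here is a hedged Python sketch; the parameter name and the environment-variable delivery mechanism are assumptions made for illustration, not the tool's documented contract:

    #!/usr/bin/env python
    # Runs on every deployment as part of the image's activation framework.
    import os

    # Hypothetical deploy-time parameter exposed to users of the image.
    http_port = os.environ.get("APP_HTTP_PORT", "8080")

    # Push the value into the installed software's configuration.
    with open("/opt/myapp/conf/server.conf", "a") as conf:
        conf.write("listen_port=%s\n" % http_port)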
Once you have your base OS image and one or more bundles defined in the Image Construction and Composition Tool, you can compose a virtual image. To compose a virtual image, you extend the base OS image and add any number of bundles into the new image. A base OS image plus a set of bundles defines a unique image.
After you define the image you want to construct, you initiate a synchronize action in the Image Construction and Composition Tool. When you start the synchronize action, the tool first creates a virtual machine in either a VMware or IBM Cloud environment (based on how you configured the tool). Next, the installation tasks of each bundle you included in the virtual image run to install the required software. Finally, the tool copies the configuration scripts from each bundle into the virtual machine and adds them to the image’s activation framework. This ensures the automatic invocation of all configuration scripts during subsequent image deployments.
Once the image is in the synchronized state, you can capture it. Capturing the image results in the creation of a virtual image based on the state of the synchronized virtual machine. The tool also automates the generation of metadata that becomes part of the virtual image package. When the capture of the virtual image completes, you can export it from the Image Construction and Composition Tool and deploy it using WebSphere CloudBurst, Tivoli Provisioning Manager, or the IBM Cloud.
I am excited for users to get their hands on the Image Construction and Composition Tool. I believe it represents the first big step in helping users to design and construct more sustainable virtual images. Did I mention it is completely free to download and use? Visit the Image Construction and Composition Tool website for more details and a download link. I look forward to your comments and feedback.
When it comes to provisioning and managing WebSphere application environments in a cloud, nothing approaches WebSphere CloudBurst in terms of expertise and instant value. However, I bet there is more to your data center provisioning and management activities than just WebSphere application environments. You probably deploy and manage a wide variety of both IBM and non-IBM software. While some of these activities may be beyond the scope of the WebSphere expertise you get with WebSphere CloudBurst, they fall well within the reach of offerings from IBM Tivoli.
One of the Tivoli offerings that comes to mind in the service delivery automation arena is the Tivoli Service Automation Manager (TSAM). TSAM delivers capabilities to request, deploy, monitor, and manage a broad range of IT services within a cloud environment, in large part by using both virtualization and automation as delivery vehicles. Even better for WebSphere users, you can integrate TSAM and WebSphere CloudBurst to make use of TSAM capabilities in concert with the WebSphere deployment and management expertise delivered by WebSphere CloudBurst. When using these two together, you actually deploy and manage WebSphere CloudBurst patterns directly from the TSAM user interface.
The integration starts by providing information about a target WebSphere CloudBurst Appliance (essentially the location of the appliance and login credentials) within TSAM. After that, you run a discovery process included with TSAM to gather information about patterns on the target appliance. Once you discover the pattern information, you perform one last configuration step, and you are ready to go.
As far as actually initiating a pattern deployment, it works much like other project requests in TSAM. From the TSAM user interface, you create a new project based on a WebSphere CloudBurst pattern. The request goes into the queue, where an administrator can approve or reject the request. This gives a nice touch of workflow governance to WebSphere CloudBurst deployments. If approved, the project request proceeds and TSAM, by way of the WebSphere CloudBurst REST APIs, initiates the deployment of the selected pattern from the appliance. Of course, there is also a means to remove the virtual system directly from the TSAM user interface. You can cancel any WebSphere CloudBurst based project, and if approved by an administrator, TSAM again leverages the WebSphere CloudBurst REST API to trigger the deletion of the virtual system.
The integration of TSAM and WebSphere CloudBurst provides the best of both worlds really. You can use a single portal as a gateway for provisioning and managing a broad range of IT services within a cloud environment, while still leveraging the significant out-of-the-box know-how and value provided by WebSphere CloudBurst for WebSphere environments. Check out a demo of this integration here, and as always, let me know if you have any questions or comments.
One of the fundamental tenets of IBM Workload Deployer is a choice of cloud deployment models. Starting in v3.0, users will be able to deploy to the cloud using virtual appliances (OVA files), virtual system patterns, or virtual application patterns. The ability to provision plain virtual appliances is a way to rapidly bring your own images, as they currently exist, into the provisioning realm of the appliance. As such, I think the use cases and basis for deciding to use this deployment model are fairly evident. However, when comparing the two patterns-based approaches, virtual system patterns and virtual application patterns, the decision requires a bit more scrutiny.
Our pattern approach is a good thing for you, the user. Basically, when we refer to patterns in the context of cloud, we are referring to the encapsulation of installation, configuration, and integration activities that make deploying and managing environments in a cloud much easier. Regardless of what kind of pattern you end up using, you benefit from treating a potentially complex middleware infrastructure environment or middleware application as a single atomic unit throughout its lifecycle (creation, deployment, and management). In turn, you benefit from decreased costs (administrative and operational) and increased agility via rapid, meaningful deployments of your environments. That said, it is imperative to understand the differences between virtual system and virtual application patterns, and more importantly, it is important to understand what those differences mean to you. Let's start by considering the admittedly simple 'Cloud Tradeoff' continuum below.
In the above graph, the X-axis represents the degree to which you have customization control over the resultant environment. The degree of control gets lower as we move from left to right. The left Y-axis represents total cost of ownership (TCO), which decreases as we move up the axis. The right Y-axis represents time to value, which similarly decreases as we go up the axis. Naturally, enterprises want to move up the Y-axis, but, and it can be quite a big but, they are sometimes hesitant to relinquish much control (move to the right on the X-axis) in order to do so. In that light, I think it helps to explore our two patterns-based approaches a bit more.
The most important thing to understand about this continuum is that the X-axis really represents customization control from the point of view of the deployer and consumer of the environment. An example is probably the best way to explain. Let's consider a fairly simple web service application that we want to deploy to the cloud. If we were to use a virtual system pattern to achieve this, we would probably start by using parts from the WebSphere Application Server Hypervisor Edition image to lay out our topology. We may have a deployment manager, two custom nodes, and a web server. After establishing the topology, we would add custom script packages to install the web service application and then configure any resources the application depends on. Users that wanted to deploy the virtual system pattern would access it, provide configuration details such as the WAS cell name, node names, virtual resource allocation, and custom script parameters, and then deploy. Once deployed, users could access the environment and middleware infrastructure as they always have. That means they could run administrative scripts, access the administrative console provided by the deployed middleware software, and do anything else one would normally do. The difference in using virtual system patterns is not necessarily the operational model for deployed environments (though IBM Workload Deployer makes some things, like patching environments, much easier). Instead, the difference is primarily in the delivery model for these environments.
Using a virtual application pattern to support the same web service application results in a markedly different experience from both a deployment and management standpoint. In using this approach, a user would start by selecting a suitable virtual application pattern based on the application type. This may be one shipped by IBM, such as the IBM Workload Deployer Pattern for Web Applications, or it may be one created by the user through the extensibility mechanisms built into the appliance. After selecting the appropriate pattern, a user would supply the web service application, define functional and non-functional requirements for the application via policies, and then deploy. The virtual application pattern and IBM Workload Deployer provide the knowledge necessary to install, configure, and integrate the middleware infrastructure and the application itself. Once deployed, a user manages the resultant application environment through a radically simplified lens provided by IBM Workload Deployer. It provides monitoring and ongoing management of the environment in a context appropriate for the application. This means that there are typically no administrative consoles (as in the case of the virtual application pattern IBM ships), and users can only alter well-defined facets of the environment. It is a substantial shift in the mindset of deploying and managing middleware applications.
Okay, with that explanation in the bag, let's revisit the diagram I inserted above. I hope it's clear that, all things being equal, virtual application patterns indeed provide the lowest TCO and shortest time to value because of the degree to which they encapsulate the steps involved in setting up complex middleware application environments. So, let's get back to my assertion that the customization control continuum really applies to the deployer and consumer. Why do I say that? It's simple. In the case of either the virtual system pattern or the virtual application pattern, the pattern composer has quite a bit of liberty in how they construct things. Sure, we enable you right out of the chute by shipping pre-built, pre-configured IBM Hypervisor Edition images, as well as pre-built virtual system and virtual application patterns. The key, though, is that IBM Workload Deployer's design and architecture also enables you to build your own patterns -- be they the virtual system or virtual application type. With anywhere from a little to a lot of work, you can build virtual system and virtual application patterns tailored to your use cases and needs.
At this point, you may be saying, "Well now you have really confused things! How am I supposed to decide what kind of patterns-based approach fits my needs?" I have some advice in that regard. First, map your needs to things that we enable with the assets you get right out of the box with IBM Workload Deployer. If your application fits into the functional scope of one of the virtual application patterns that we ship, use it. If you can support the application by using IBM Hypervisor Edition images, virtual system patterns, and custom scripts, do it. In this way, you benefit most from the value offered by IBM Workload Deployer. However, if you find that you cannot use any of the assets we provide right out of the box (e.g. you want to deploy your environment on software not offered in IBM Hypervisor Edition form or in a virtual application pattern), then ask yourself one simple question: "What do I want my user's experience to be?"
In this sense, by a user I primarily mean a deployer or consumer of your patterns. You need to decide whether you favor the middleware-infrastructure-centric approach afforded by virtual system patterns, or the application-centric approach offered by virtual application patterns. There is no way to answer this generically for all potential IBM Workload Deployer users. Instead, you have to look at your use case, understand what's available to help you accomplish that use case, and finally, decide on what you want your user's experience to be. I hope this helps!
The Impact conference disappeared into our collective rearview mirror about as fast as it arrived. It seems like a blur. In a word, the conference was... exhausting! In other words, it was informative, exciting, and illuminating. I hope that many of you had a chance to make it out there, and I hope more of you make it to Impact in 2013.
For those of you familiar with the conference, you know that it is typically a launching ground for new product versions and altogether new products. This year was certainly no different with the launch of the new version of WebSphere Application Server (8.5), the new and improved IBM Business Process Manager and IBM Operational Decision Manager, a new version of WebSphere eXtreme Scale (8.5), and numerous updates across the messaging and connectivity stack. While I encourage you to follow up on all of these important announcements, they are not what I am going to focus on today. Instead, I am going to focus on the new addition to the IBM family that got plenty of attention this year: IBM PureApplication System.
Joe recently touched on this new offering, so I won't get into an exhaustive overview. To put it briefly, IBM PureApplication System is an expert integrated system. What does that mean? First and foremost it means that it is a system -- a whole. It is an integrated platform of hardware and software, optimized and tuned for running transactional web and database workloads. I do not mean that it is a system of software that we pre-install on off-the-shelf hardware. Rather, it is the result of hardware and software engineers across IBM working together to build a system that is expert at what it does. More than just the web application and database software though, IBM PureApplication System also contains pre-installed and pre-configured management software that delivers a soup to nuts (hardware to application) single pane of glass for managing the entire system. I could go on and on, but again that's not my purpose here. I encourage you to check out the new IBM PureSystems web page for more information and some pretty cool videos.
For those of you who take a look at IBM PureApplication System, you will quickly find that the notion of pattern-based deployments (something I have talked about at length on this blog) plays a key role in the new system. In fact, the same virtual system and virtual application pattern constructs that you have come to know in IBM Workload Deployer are front and center in IBM PureApplication System as well. In the new system, you can build custom virtual system and virtual application patterns, deploy them to your cloud, and then manage them over time. If you are familiar with the IBM Workload Deployer user interface, you will likely find yourself immediately at home in the interface of IBM PureApplication System. Given all of that, if you are like many of the users I talked to at Impact and since, you probably have some questions about IBM Workload Deployer and this new system. Most commonly, I get these two questions: "What does this mean for the IBM Workload Deployer product?" and "How do I know when to use IBM PureApplication System versus IBM Workload Deployer?" Let me do my best to address those questions.
In terms of the impact of the IBM PureApplication System on the IBM Workload Deployer offering, I can only view it in one way: affirmation. As I said above, IBM PureApplication System puts the mode of pattern-based deployments front and center, and further affirms that this kind of approach is crucial to the evolution of application delivery and management. Those of you familiar with IBM Workload Deployer or its predecessor WebSphere CloudBurst know that we have been talking about patterns for years. Rest assured we will continue to talk about patterns and solutions for building, deploying, and managing them. As it stands, we have at least three ways for you to build, deploy, and manage patterns: IBM SmartCloud Application Services, IBM Workload Deployer, and IBM PureApplication System. As you can see, options for consuming patterns have only increased since the initial launch of WebSphere CloudBurst. Furthermore, if you were at Impact, you know that we have a vibrant and vocal community of IBM Workload Deployer users, and I hope to see that community continue to grow! As I see it, the core technology of IBM Workload Deployer is becoming our 'operating system' for cloud platform management.
The question of when to use IBM Workload Deployer versus IBM PureApplication System is one whose answer is a bit more nuanced and not something one can or should try to definitively answer in a blog post. One thing I do suggest, though, is that when evaluating these two technologies, it is important to acknowledge that they have different business value propositions. Sure, they share common core technology in terms of building, deploying, and managing pattern-based environments, but beyond that they diverge a bit. Remember, IBM PureApplication System is, well, a system. It is the hardware, software, and management technology you need to run your middleware application workloads. It is pre-built and pre-integrated to the point that it only requires you to roll it into your datacenter, hook it up to your network, and do some one-time configuration. The aim is four hours from receipt of the system to your first deployment up and running, and if you were at Impact you saw an amusing video with the chief architect (Jason McGee) that proves this claim.
IBM Workload Deployer is fundamentally different in terms of how you consume it and how it integrates with your infrastructure. Most notably, IBM Workload Deployer does not include optimized hardware (servers, storage, networking) for running your workloads or a single point of management for everything from hardware to applications. To use IBM Workload Deployer you attach it to your network and point it at existing virtualized servers. Simply put, IBM Workload Deployer assumes you have existing, under-utilized hardware that you can get more out of with the intelligent deployment and management approach the appliance delivers. While you do not get the pre-integrated and optimized system of hardware plus software, you do get the flexibility to use your existing infrastructure.
As you can see, there are similarities (patterns) and differences (whole system vs. management system), and the result is a pretty different set of value propositions. The key in evaluating these technologies is that you do so with a crisp understanding of your current needs AND your future plans for growth and evolution. I know this kind of advice is a bit generalized, but I hope the differences I discussed above help you to at least understand the capabilities of the two different offerings. As always, if you have any comments or questions, please reply to the post!
As many of you well know, virtual images are the foundation of virtual system patterns in IBM Workload Deployer. Whether you are using IBM Hypervisor Edition images or custom-built images produced by the IBM Image Construction and Composition Tool, every virtual system pattern has at least one virtual image as part of its foundation. So, if virtual images are the foundation of virtual system patterns, what is the foundation of these virtual images?
While you could probably make a good argument for a number of different things being the foundation of the virtual image (operating system, other installed software, etc.), I like to think that, at least in the context of IBM Workload Deployer, the activation engine inside the virtual image is the true foundation. Inside this activation engine, you will find a collection of scripts and services that are capable of configuring the virtual machine for use. Not only does this engine perform basic system-level actions like configuring the machine's hostname, IP address, time, and network interfaces, but it also configures the software inside the virtual machine. For instance, the activation engine in the WebSphere Application Server Hypervisor Edition image is capable of fixing up profile information, federating nodes, creating application server clusters, and more. Best of all, in the case of IBM Hypervisor Edition images, you (the user) get all of this right out of the box. There is no setup logic to implement or administrative tasks to undertake in order for you to benefit from the activation engine. It is simply there!
So, at this point you may ask yourself 'If all of this is included right out of the box, why do I need to care?' That is a fair question, but ultimately I feel it is always important to understand the foundational elements of any technology. In this respect, I do not feel like the activation engine in the IBM Hypervisor Edition images is any different. Lately, I have been telling my users to take at least a little time to understand what the activation engine is and even more importantly, what it is doing for you during deployment. Specifically, I always suggest taking a little time to look at the scripts in the activation engine -- most often found in the /opt/IBM/AE/AS directory of a virtual machine deployed by IBM Workload Deployer.
What can you gain by taking the time to peruse these scripts? Most importantly, I think you will learn what the engine does for you and what you cannot do if you expect the image to deploy correctly. For instance, if you look in some of those activation engine scripts, you will see that the engine uses the sudo command in several places. While I know many of you may be tempted to remove the sudo command during extend and capture, doing so will break the activation engine. I have seen this happen multiple times, and trust me, if you do not know the activation engine uses that command, it is not necessarily an easy problem to debug. This is a case where the value of at least superficially understanding the activation engine is clear.
Want another example? Okay, consider that you want to run WebSphere Application Server as a user called wasadmin. At pattern deployment time, it is easy enough to supply wasadmin in the appropriate field of the part configuration data and click OK. IBM Workload Deployer deploys the system and voila, WebSphere Application Server is magically running as wasadmin. Everything is fine so far, but let's take this a step further and say that you previously performed an extend and capture, and you installed software components in the image that should be owned by your wasadmin user. It is technically possible to define users during extend and capture and then install software content via that user, but if you also want to specify that user as the WebSphere Application Server administrative user at deployment time, you will run into an issue. This is because the activation engine runs the usermod command during deployment to change the existing, default virtuser into the user that you specify -- in this case wasadmin. If the usermod command attempts to change virtuser to wasadmin but wasadmin already exists as a user on the operating system, the command will not complete properly, and it is very likely you will see further errors downstream. A simpler approach is to create the user during extend and capture, install any components via that user, and then delete the user before capturing. You can attach a deploy-time script that fixes up the appropriate settings for wasadmin (like user ID and group ID), and it will run after the activation engine successfully runs usermod to change virtuser to wasadmin. Problem averted!
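To make the fix-up idea concrete, here is a hedged Python sketch of such a deploy-time script; the user name, IDs, and paths are placeholders rather than anything the activation engine mandates:

    #!/usr/bin/env python
    # Deploy-time fix-up: runs AFTER the activation engine has already used
    # usermod to turn the default virtuser into wasadmin.
    import subprocess

    USER = "wasadmin"
    UID, GID = "1001", "1001"  # placeholder IDs matching the captured content

    # Give the renamed user the IDs your pre-installed content expects...
    subprocess.check_call(["usermod", "-u", UID, USER])
    subprocess.check_call(["groupmod", "-g", GID, USER])

    # ...and re-own the software you installed during extend and capture.
    subprocess.check_call(["chown", "-R", "%s:%s" % (USER, USER), "/opt/mysoftware"])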
In reading some of the above, I fully realize that it may be a little confusing at first. That said, I assure you that there is not much to it at all once you have a basic understanding of the activation engine. With a basic understanding of the activation engine in tow, you will know what you do not need to do (e.g. create profiles, federate nodes, etc.), what you cannot do (e.g. remove the sudo command), and what you can do with a little bit of reconciliation work (e.g. define your WebSphere Application Server administrative user during image extension). I encourage you to take a little time with your next deployment and give the activation engine a once over. You will undoubtedly have a better understanding of the deployment process, and you will ultimately be in a position to most effectively leverage virtual system patterns in IBM Workload Deployer.
When we talk about the WebSphere Application Server Hypervisor Edition, we often get a lot of questions about whether or not SUSE Linux is the only flavor of the Linux operating system that we support. The short answer to that question is no.
While it is true that we only deliver the WebSphere Application Server Hypervisor Edition with a SUSE Linux operating system, we do support the use of the virtual image packaging with Red Hat Enterprise Linux as the base operating system. The basic process consists of creating a virtual machine disk based on a suitable Red Hat install, altering the OVF file in the WebSphere Application Server Hypervisor Edition to reference this virtual disk instead of the SUSE virtual disk, and then packaging a new OVA file that contains all the same WebSphere virtual disks (profiles, binaries, IBM HTTP Server) but swaps in the Red Hat virtual disk in place of the SUSE virtual disk. We have done this many times in both the lab and the field, and we offer services to users who need help in creating the image.
Customers often ask if there is any difference in using Red Hat versus SUSE Linux. The answer is, of course, yes and no. The answer is yes in that users must bring their own licenses of Red Hat (SUSE Linux licenses are included in the WebSphere Application Server Hypervisor Edition), and users must support and maintain the Red Hat operating system on their own. However, once the image is built, there is absolutely no difference in the use of that image within WebSphere CloudBurst.
Once built, users upload the image into their WebSphere CloudBurst catalog, and it is available for use in pattern building just like any other image. I mentioned that users are responsible for updating and maintaining the image; conveniently, they can use WebSphere CloudBurst to create those updated images. When patches or updates are ready for the Red Hat operating system, the Extend/Capture facility available for images in WebSphere CloudBurst can be used to create a new custom Red Hat image with your desired fixes. This is all done without ever having to manually recreate and repackage the image.
I know seeing is believing, so with respect to the "sameness" of using a Red Hat version of the WebSphere Application Server Hypervisor Edition within WebSphere CloudBurst, I've created a short demo you can watch here. As always, let us know what you think and send any questions our way.
Since its introduction and initial release around one year ago, activity around WebSphere CloudBurst has been a steady buzz. New images, features, and enhancements have been rolling in, and they can sometimes be a little overwhelming to digest. With that in mind, I want to address a related and frequent question: what products does IBM support for use in WebSphere CloudBurst?
To answer that question, we only need to look at the IBM Hypervisor Edition images currently provided by IBM. Here's a quick matrix of those images:
Starting in WebSphere CloudBurst 2.0, there are different levels of elasticity that you can achieve in your WebSphere deployments. As I mentioned in a previous post, the Intelligent Management Pack allows you to define dynamic clusters. This means cluster membership and the number of instances of a given application adjust on the fly to meet SLAs for your application. This enables a more dynamic environment as opposed to static cluster definitions, but there is a layer of elasticity below this that bears exploring.
Dynamic clusters work with WebSphere nodes that already exist. Users define the nodes available for use by a dynamic cluster, and the runtime uses SLAs and current system state to determine the actual nodes used and application instances started. So, what if you need more nodes than you currently have in a given WebSphere environment? A dynamic cluster will not create a new node, so you have to define extra nodes. Starting in WebSphere CloudBurst 2.0, this is as easy as pushing a button.
Dynamic virtual machine operations allow you to add and remove nodes on the fly for a given virtual system. For instance, take the pattern in the picture below:
If you were to deploy this pattern, you would end up with a WebSphere Application Server cell with a node makeup similar to the below:
Now that the environment is out there (in mere minutes, I should mention), what if you want to add more nodes? Before WebSphere CloudBurst 2.0, you could have done it, but it would have involved creating another pattern with a custom node part and deploying it. This results in two different virtual systems and complicates the maintenance stream. Now, in WebSphere CloudBurst 2.0, you can simply click a button to add a node to the existing virtual system.
From the virtual systems view, if you expand the list of virtual machines, you will see an Actions column with a View link beside each virtual machine. If you want to add a node to the environment shown above, you simply click the View link, and then click the clone icon highlighted in green below:
WebSphere CloudBurst prompts you for the number of nodes to add. You make the selection and then click OK. The appliance creates the new node and federates it into the cell for you. For instance, if you chose to add a single node, at the end of the clone not only would you have another virtual machine in your virtual system, but also another node automatically federated into your WebSphere Application Server cell:
On the flip side, you can remove a node by clicking the delete icon in the same dialog as the clone icon above. This removes the node from the WebSphere Application Server cell and deletes the virtual machine.
The ability to easily add and remove virtual machines from your WebSphere CloudBurst virtual systems enables a very valuable level of elasticity. Now you can very easily add and remove nodes on the fly based on the current demands of your system. As always, let me know if you have any questions or comments.
Every time I've visited with customers about WebSphere CloudBurst, without fail someone requests that the appliance support products besides the WebSphere Application Server. We started to address these requests with WebSphere CloudBurst 1.1 when we announced the availability of a DB2 Enterprise 9.7 trial virtual image specifically packaged for use in the appliance. Very recently we continued to respond to customer requests by extending the list of supported products in WebSphere CloudBurst to include WebSphere Portal.
The WebSphere Portal Hypervisor Edition, initially offered as a Beta product, is a virtual image packaging of WebSphere Portal 6.1.5 ready for use in the WebSphere CloudBurst Appliance. The image includes a pre-installed, pre-configured instance of WebSphere Portal. Also contained within the image is an IBM HTTP Server instance configured to route to the WebSphere Portal instance and a DB2 instance installed and configured as the external database for WebSphere Portal. The WebSphere Portal instance also includes Web Content Management enablement along with several samples to help users get started right away.
The user experience when building and deploying WebSphere Portal patterns remains consistent with the existing experience for WebSphere Application Server and DB2 patterns. Another good note is that you can expect similarly rapid deployment capability for WebSphere Portal patterns. I got a running virtual system, with all the parts I mentioned above installed and configured (meaning no after-the-fact integration scripting was necessary), in under 15 minutes.
To see more, check out my new demonstration of the WebSphere Portal Hypervisor Edition for the WebSphere CloudBurst Appliance. If you have a WebSphere CloudBurst Appliance you can download the WebSphere Portal Hypervisor Edition image and a usage guide from here.
It seems like it was announcement day across IBM, and specifically in WebSphere. While the announcements were numerous and touched many different topics, I want to focus on a couple of announcements of particular interest to those of you interested in WebSphere CloudBurst and IBM Hypervisor Edition virtual images.
First, for all of our WebSphere Process Server and WebSphere Business Monitor users, there are a couple of important pieces of information in this announcement, which outlines the availability of WebSphere Business Monitor Hypervisor Edition. The new image allows you to use WebSphere CloudBurst to dispense WebSphere Business Monitor 7.0 environments to VMware hypervisors. In addition, the announcement outlines the expansion of the existing WebSphere Process Server Hypervisor Edition image to support the z/VM platform and the Red Hat Enterprise Linux (RHEL) operating system for VMware.
Moving beyond our BPM set of solutions, IBM also announced the availability of a WebSphere Message Broker Hypervisor Edition. This virtual image allows you to construct and deploy WebSphere Message Broker and WebSphere MQ environments using WebSphere CloudBurst. The stack includes the RHEL operating system, and it is ready to run on VMware hypervisors.
With that in mind, here's an update to the WebSphere CloudBurst supported product matrix:
* Availability subject to dates documented in referenced announcement letters
As you can see, we are continuing our effort to expand the choice you have when using WebSphere CloudBurst to create and deploy application environments to your cloud. If you are interested in using WebSphere CloudBurst for WebSphere Business Monitor, WebSphere Process Server, or WebSphere Message Broker, check out the above announcements. You will find more technical information as well as planned availability dates.
Just one last scrap of food for thought. Feedback from you, our users, is instrumental as we continue to expand software choice with WebSphere CloudBurst. Please continue to let us know your thoughts and needs!
The WebSphere Application Server Hypervisor Edition virtual image is made up of four different virtual disks. One of those disks contains pre-created and pre-configured WebSphere Application Server profiles. When the image is activated (either through WebSphere CloudBurst or in a standalone fashion), all of the unused profiles are deleted, leaving only the intended WebSphere profile type.
Since the profiles are pre-created, this implies that certain information must be updated after the image is activated to reflect things that change with each node that is created. Among other things, the cell name, node name, and host name of the WAS profile configuration are usually updated during the image activation process. Nearly every time I talk to WAS administrators about WebSphere CloudBurst and WebSphere Application Server Hypervisor Edition they are intrigued by this particular configuration update and almost always ask "How do you do it?" (Dustin's note: Since the command to rename the cell is not officially documented, I have removed it from this post. I'm sorry, but it is for your own good!)
Most of the time this question pops up because users are attempting to, with a more narrow focus than WAS Hypervisor Edition, freeze-dry certain WAS configurations in their organization. However, no matter how they do that (virtual images, zipped up configuration files, etc.), they too need to update things like the cell, node, and host names when attempting to reuse the configuration. Many have gone down the route of trying to identify all of the different XML files they need to change in order to update this information, but this is untenable and in fact unnecessary.
If you need to update the node or host name, forget manually updating XML files. Instead, use these three wsadmin commands:
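As a hedged sketch of the Jython flavor (the node and host values are placeholders, and the exact set of commands available depends on your WAS version), the approach looks like this:

    # wsadmin Jython: update the host name recorded for a node, then persist.
    AdminTask.changeHostName('[-nodeName myNode01 -hostName newhost.example.com]')
    AdminConfig.save()
    # In a deployment manager cell, synchronize the change out to the nodes.
    AdminNodeManagement.syncActiveNodes()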
The commands can be run from a standalone node or from a deployment manager node. They are pretty straightforward, and if you need more information about them, just take a look in the WebSphere Application Server Information Center. I hope this is helpful information, and stay away from those XML files!