In last week's post, I put the spotlight on various aspects of bundles in the Image Construction and Composition Tool. I finished with a look at a WebSphere CloudBurst virtual image created from the bundle. However, you do not just magically go from a bundle to an image that you can use in WebSphere CloudBurst, Tivoli Provisioning Manager, or on the IBM Cloud. Today, I want to show you how to go from a bundle to a custom virtual image using the IBM Image Construction and Composition Tool.
Once you have defined at least one bundle and one base operating system image, you are ready to compose a custom image. We already talked about creating a bundle, but the base operating system image is a new topic. You can define one either by starting from ISO and kickstart configuration files or by importing an existing Open Virtual Appliance (OVA) image that contains your operating system of choice. Once you have that base image imported or defined in the Image Construction and Composition Tool, you can extend it to create a custom image on top of the base OS image.
After creating your extended image, you can add bundles that represent the software you want to install in your custom image. Simply click on the Software tab of the new virtual image. Click the add icon, and select the bundle that you want to add. You can add as many bundles as you would like to your custom image.
After adding a bundle, it will show up in the Planned list of software for the image. Click on it to display its details in the right side of the screen. You will notice General, Install, and Configuration sections for the bundle. In the Install section, you will find a list of the installation parameters you defined for the bundle. You can provide values for the parameters at this time.
If you click on the Configuration section, you will see all of the configuration parameters you specified for the bundle. You can provide default values, and you can specify whether or not these should be configurable by deployers of your custom image. If you mark them as configurable, users will be able to provide values for the parameters at image deploy time, regardless of whether they provision the image using WebSphere CloudBurst, Tivoli Provisioning Manager, or the IBM Cloud.
After you add the necessary bundles and specify installation and configuration data, you can save the image. Upon saving, the image status changes from Synchronized to Out of Sync.
Now you are ready to synchronize the image. To do this, simply click the synchronize icon. This will result in the creation of a virtual machine in the cloud environment (VMware or IBM Cloud) you defined in the selected cloud provider. The Image Construction and Composition Tool will then invoke the appropriate installation tasks (per the bundles you included in the image) within the running virtual machine. It will also copy over any configuration scripts you defined in the bundle.
After a while, the synchronization process completes, and the image returns to the Synchronized state. At this point, you are ready to capture the image by clicking the capture icon. This results in the creation of an OVA virtual image with your customizations. When the capture process completes, the image status changes to Deployable.
Once the image is in the deployable state, it is nearly ready to use. If you are using the IBM Cloud as your cloud provider, you can simply mark the image complete by clicking the complete icon. At this point, the image will show up in your private catalog on the IBM Cloud and it is ready to use. If you are using VMware as the cloud provider, you need to export the image. Click the export icon and provide information about an SCP-enabled server to which you want to export the image. Ideally, this location is directly reachable by the WebSphere CloudBurst or Tivoli Provisioning Manager environment into which you will import the image.
You can monitor the export status in a separate window by clicking on a link shown after clicking the OK button in the dialog above. When the export finishes, you are ready to import your new custom virtual image into WebSphere CloudBurst or Tivoli Provisioning Manager.
I hope the last three posts have given you a better idea of what the new IBM Image Construction and Composition Tool is all about. There will definitely be more to come about this tool in the near future, but in the meantime, if you have any questions or comments, please reach out to me. Until then, good luck and full speed ahead on your custom image compositions!
When I talk with WebSphere CloudBurst users, the topic of custom virtual images comes up frequently. In some cases they simply want to customize a shipped IBM Hypervisor Edition, and in other cases they want to create a completely custom image. Creating a customized version of an IBM Hypervisor Edition is relatively easy since we give you extend & capture in WebSphere CloudBurst. Creating a completely custom image has historically been a bit tougher, mostly owing to the fact that there was not a standard tool or process for image assembly. I am happy to say that today's publication of the IBM Image Construction and Composition Tool changes all that.
Watch a demo of the IBM Image Construction and Composition Tool
The primary purpose of the Image Construction and Composition Tool is to enable a modular approach to virtual image construction, while taking into account the typical division of responsibilities within an organization. The tool allows the right people within an organization to contribute their specialized knowledge as appropriate to the virtual image creation process. This means OS teams can handle the OS and software teams can handle the appropriate software. A separate image builder can then use both OS and software components to meet the needs of users within the organization. Best of all, the image builder does not need intimate knowledge of how to install or configure any of the components in the image. They simply need to know which OS and software components to use.
When using the Image Construction and Composition Tool, you start by defining the base operating system you wish to use for your images. You can do this by importing an existing virtual image with an OS already installed, providing an ISO for the OS, or pointing to a base OS image on the IBM Cloud. The bottom line is that you have necessary flexibility to start with your certified or ‘golden’ operating system build. Once you have the base OS image defined in the Image Construction and Composition Tool, you can start defining custom software for use in the images you will compose.
In the tool, bundles represent the software you wish to install within a virtual image. The definition of a bundle contains two major parts: Installation and Configuration. The installation component of a bundle tells the Image Construction and Composition Tool how to install your software into the virtual image. You provide a script or set of scripts that install the necessary components into your image, and you direct the tool to call these scripts. These tasks run once during the initial creation of the virtual image, thus allowing you to capture large binaries, long-running installation tasks, or other necessary actions directly into your image.
The configuration section of a bundle defines actions that configure the software installed into the image. Like with the installation tasks, you provide a script or set of scripts for configuration tasks. Unlike installation tasks that run exactly once, configuration scripts become part of the image’s activation framework and as such, run during each image deployment. Using the tool, you can define input parameters for configuration scripts and optionally expose them so that users can provide values for the parameters at image deploy-time. Configuration tasks are important in providing flexibility that allows users to leverage a single virtual image for a number of different deployment scenarios.
Once you have your base OS image and one or more bundles defined in the Image Construction and Composition Tool, you can compose a virtual image. To compose a virtual image, you extend the base OS image and add any number of bundles into the new image. A base OS image plus a set of bundles defines a unique image.
After you define the image you want to construct, you initiate a synchronize action in the Image Construction and Composition Tool. When you start the synchronize action, the tool first creates a virtual machine in either a VMware or IBM Cloud environment (based on how you configured the tool). Next, the installation tasks of each bundle you included in the virtual image run to install the required software. Finally, the tool copies the configuration scripts from each bundle into the virtual machine and adds them to the image’s activation framework. This ensures the automatic invocation of all configuration scripts during subsequent image deployments.
Once the image is in the synchronized state, you can capture it. Capturing the image results in the creation of a virtual image based on the state of the synchronized virtual machine. The tool also automates the generation of metadata that becomes part of the virtual image package. When the capture of the virtual image completes, you can export it from the Image Construction and Composition Tool and deploy it using WebSphere CloudBurst, Tivoli Provisioning Manager, or the IBM Cloud.
I am excited for users to get their hands on the Image Construction and Composition Tool. I believe it represents the first big step in helping users to design and construct more sustainable virtual images. Did I mention it is completely free to download and use? Visit the Image Construction and Composition Tool website for more details and a download link. I look forward to your comments and feedback.
When you build application environments in WebSphere CloudBurst, there are three main elements that comprise those environments: virtual images, patterns, and script packages. It is likely that at some point you will want to export your environments from a particular WebSphere CloudBurst Appliance. This may be in order to apply version control techniques, share resources among multiple appliances, backup business-critical files, or any number of other reasons. Whatever the reason, WebSphere CloudBurst provides the necessary facilities to support both image and pattern export. WebSphere CloudBurst provides export capability for virtual images that you can access via the web console and CLI. In addition, when you download the CLI from the appliance, you get a sample script called patternToPython.jy that you can use to facilitate pattern export.
The patternToPython.jy sample produces a script that you can use to recreate the targeted pattern on an appliance of your choosing. However, before running the script to recreate the pattern on an appliance, you must ensure that any images and script packages referenced by the pattern exist on the target appliance. Since WebSphere CloudBurst enables you to easily export and import virtual images, all you have to do is account for script packages when attempting to export complete application environments from WebSphere CloudBurst. While the appliance does not directly provide the means to export script packages like it does for images and patterns, the WebSphere CloudBurst Samples Gallery includes a sample that does. You can find this sample in the CLI scripts section of the samples gallery, with the title Export a script package in a portable format.
After downloading the sample CLI script, you simply unzip the archive and use the embedded Jython script from the WebSphere CloudBurst CLI with the following command:
This command will create a ZIP file containing the contents of the script package specified by SCRIPT_PACKAGE_NAME. In addition to simply copying the contents of the specified script package into the new ZIP file, the command will trigger the creation of a cbscript.json file based on the definition of the target script package. This file defines the properties of the script package such as the execution command, command arguments, etc., and the exportScriptPackage.jy script adds it to the newly produced ZIP file.
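To give a feel for what ends up in that file, here is a hypothetical cbscript.json for a simple script package. The specific field names and values below are illustrative assumptions on my part, not the exact schema, so compare against the file your export actually produces:

```json
{
  "name": "Install sample application",
  "version": "1.0.0",
  "description": "Installs the sample application into the deployed server",
  "command": "/bin/sh /tmp/installapp/install.sh",
  "commandargs": "${APP_NAME}",
  "location": "/tmp/installapp",
  "log": "/tmp/installapp/logs",
  "keys": [
    { "scriptkey": "APP_NAME", "scriptdefaultvalue": "mySampleApp" }
  ]
}
```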
The result of using this sample is a self-contained ZIP file that you can load into any other WebSphere CloudBurst Appliance. Since the ZIP file includes the cbscript.json file, when you load it into another appliance you do not have to define any of the properties for the script package. This eliminates the potential for definition errors as you move script packages from one appliance to the other and makes it simple to export/import script packages among appliances.
There are a couple of things about the sample worth mentioning. First, if a cbscript.json file exists in the specified script package, the export script will not create a new one. Instead, the sample simply copies the existing one into the new ZIP file. Second, the target script package's contents must be a ZIP file. That is to say, the file associated with the script package in WebSphere CloudBurst must be a ZIP. If you are using anything prior to WebSphere CloudBurst 2.0, this is not an issue since you can only associate ZIP files with script packages. However, WebSphere CloudBurst 2.0 allows you to associate any type of file (ZIP, shell script, python script, etc.) with a script package.
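The sample's behavior is easy to picture in code. Below is a minimal sketch in plain Python (the real exportScriptPackage.jy is a Jython CLI script whose internals may differ): copy every entry from the script package archive into the new ZIP, and generate a cbscript.json only when the package does not already contain one.

```python
import io
import json
import zipfile

def export_script_package(src_zip_bytes, definition):
    """Sketch of the export logic: copy all entries from the source
    script package archive, adding a generated cbscript.json only if
    the package does not already include one."""
    out = io.BytesIO()
    with zipfile.ZipFile(io.BytesIO(src_zip_bytes)) as src, \
         zipfile.ZipFile(out, 'w') as dst:
        names = src.namelist()
        for name in names:
            # Preserve every file from the original script package.
            dst.writestr(name, src.read(name))
        if 'cbscript.json' not in names:
            # Generate the descriptor from the script package definition.
            dst.writestr('cbscript.json', json.dumps(definition))
    return out.getvalue()
```

The key point the sketch captures is the precedence rule: a cbscript.json that already lives in the package wins over anything the tool would generate.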
If you are looking to effectively export all of the components of your WebSphere CloudBurst patterns, check out this sample script. I think it will make the process a bit easier for you. As always, comments and feedback are welcome.
As many of you well know, virtual images are the foundation of virtual system patterns in IBM Workload Deployer. Whether you are using IBM Hypervisor Edition images or custom-built images produced by the IBM Image Construction and Composition Tool, every virtual system pattern has at least one virtual image as part of its foundation. So, if virtual images are the foundation of virtual system patterns, what is the foundation of these virtual images?
While you could probably make a good argument for a number of different things being the foundation of the virtual image (operating system, other installed software, etc.), I like to think that, at least in the context of IBM Workload Deployer, the activation engine inside the virtual image is the true foundation. Inside this activation engine, you will find a collection of scripts and services that are capable of configuring the virtual machine for use. Not only does this engine perform basic system-level actions like configuring the machine's hostname, IP address, time, and network interfaces, but it also configures the software on the inside of the virtual machine. For instance, the activation engine in the WebSphere Application Server Hypervisor Edition image is capable of fixing up profile information, federating nodes, creating application server clusters, and more. Best of all, in the case of IBM Hypervisor Edition images, you (the user) get all of this right out of the box. There is no logic to perform or administrative tasks to undertake in order for you to benefit from the activation engine. It is simply there!
So, at this point you may ask yourself 'If all of this is included right out of the box, why do I need to care?' That is a fair question, but ultimately I feel it is always important to understand the foundational elements of any technology. In this respect, I do not feel like the activation engine in the IBM Hypervisor Edition images is any different. Lately, I have been telling my users to take at least a little time to understand what the activation engine is and even more importantly, what it is doing for you during deployment. Specifically, I always suggest taking a little time to look at the scripts in the activation engine -- most often found in the /opt/IBM/AE/AS directory of a virtual machine deployed by IBM Workload Deployer.
What can be gained by taking the time to peruse through these scripts? I think most importantly, you will learn what the engine does for you and what you cannot do if you expect the image to deploy correctly. For instance, if you look in some of those activation engine scripts, you will see that it uses the sudo command in several places. While I know many of you may be tempted to remove the sudo command during extend and capture, if you do so it will break the activation engine. I have seen this happen multiple times, and trust me, if you did not know the activation engine used that command it is not necessarily an easy problem to debug. This is a case where the value of at least superficially understanding the activation engine is clear.
Want another example? Okay, consider that you want to run WebSphere Application Server as a user called wasadmin. At pattern deployment time, it is easy enough to supply wasadmin in the appropriate field of the part configuration data and click OK. IBM Workload Deployer deploys the system and voila, WebSphere Application Server is magically running as wasadmin. Everything is fine so far, but let's take this a step further and say that you previously performed an extend and capture, and you installed software components in the image that should be owned by your wasadmin user. It is technically possible to define users during extend and capture and then install software content via that user, but if you also want to specify that user as the WebSphere Application Server administrative user at deployment time, you will run into an issue. This is because the activation engine runs the usermod command during deployment to change the existing default virtuser into the user that you specify -- in this case wasadmin. If the usermod command attempts to change virtuser to wasadmin but wasadmin already exists as a user on the operating system, the command will not complete properly, and it is very likely you will see further errors downstream. A simpler approach is to create the user during extend and capture, install any components via that user, and then delete the user before capturing. You can attach a deploy-time script that fixes up the appropriate settings for wasadmin (like user ID and group ID), and it will run after the activation engine successfully does a usermod and changes virtuser to wasadmin. Problem averted!
In reading some of the above, I fully realize that it may be a little confusing at first. That said, I assure you that there is not much to it at all once you have a basic understanding of the activation engine. With a basic understanding of the activation engine in tow, you will know what you do not need to do (e.g. create profiles, federate nodes, etc.), what you cannot do (e.g. remove the sudo command), and what you can do with a little bit of reconciliation work (e.g. define your WebSphere Application Server administrative user during image extension). I encourage you to take a little time with your next deployment and give the activation engine a once over. You will undoubtedly have a better understanding of the deployment process, and you will ultimately be in a position to most effectively leverage virtual system patterns in IBM Workload Deployer.
Since its introduction and initial release around one year ago, activity around WebSphere CloudBurst has been a steady buzz. New images, features, and enhancements have been rolling in, and they can sometimes be a little overwhelming to digest. With that in mind, I want to address a related and frequent question: What products does IBM support for use in WebSphere CloudBurst?
To answer that question, we only need to look at the IBM Hypervisor Edition images currently provided by IBM. Here's a quick matrix of those images:
It seems like it was announcement day across IBM, and specifically in WebSphere. While the announcements were numerous and touched many different topics, I want to focus on a couple of announcements of particular interest to those of you interested in WebSphere CloudBurst and IBM Hypervisor Edition virtual images.
First, for all of our WebSphere Process Server and WebSphere Business Monitor users, there are a couple of important pieces of information in this announcement. It outlines the availability of the WebSphere Business Monitor Hypervisor Edition. The new image allows you to deploy WebSphere Business Monitor 7.0 environments to VMware hypervisors using WebSphere CloudBurst. In addition, the announcement outlines the expansion of the existing WebSphere Process Server Hypervisor Edition image to support the z/VM platform and the Red Hat Enterprise Linux (RHEL) operating system for VMware.
Moving beyond our BPM set of solutions, IBM also announced the availability of a WebSphere Message Broker Hypervisor Edition. This virtual image allows you to construct and deploy WebSphere Message Broker and WebSphere MQ environments using WebSphere CloudBurst. The stack includes the RHEL operating system, and it is ready to run on VMware hypervisors.
With that in mind, here's an update to the WebSphere CloudBurst supported product matrix:
* Availability subject to dates documented in referenced announcement letters
As you can see, we are continuing our effort to expand the choice you have when using WebSphere CloudBurst to create and deploy application environments to your cloud. If you are interested in using WebSphere CloudBurst for WebSphere Business Monitor, WebSphere Process Server, or WebSphere Message Broker, check out the above announcements. You will find more technical information as well as planned availability dates.
Just one last scrap of food for thought. Feedback from you, our users, is instrumental as we continue to expand software choice with WebSphere CloudBurst. Please continue to let us know your thoughts and needs!
The WebSphere Application Server Hypervisor Edition virtual image is made up of four different virtual disks. One of those disks contains pre-created and pre-configured WebSphere Application Server profiles. When the image is activated (either through WebSphere CloudBurst or in a standalone fashion), all of the profiles not being used are deleted, leaving only the intended WebSphere profile type.
Since the profiles are pre-created, this implies that certain information must be updated after the image is activated to reflect things that change with each node that is created. Among other things, the cell name, node name, and host name of the WAS profile configuration are usually updated during the image activation process. Nearly every time I talk to WAS administrators about WebSphere CloudBurst and WebSphere Application Server Hypervisor Edition they are intrigued by this particular configuration update and almost always ask "How do you do it?" (Dustin's note: Since the command to rename the cell is not officially documented, I have removed it from this post. I'm sorry, but it is for your own good!)
Most of the time this question pops up because users are attempting to freeze-dry certain WAS configurations in their organization, albeit with a narrower focus than WAS Hypervisor Edition. However, no matter how they do that (virtual images, zipped-up configuration files, etc.), they too need to update things like the cell, node, and host names when attempting to reuse the configuration. Many have gone down the route of trying to identify all of the different XML files they need to change in order to update this information, but this is untenable and in fact unnecessary.
If you need to update the node or host name, forget manually updating XML files. Instead, use these three wsadmin commands:
The commands can be run from a standalone node or from a deployment manager node. They are pretty straightforward, and if you need more information about them, just take a look in the WebSphere Application Server Information Center. I hope this is helpful information, and stay away from those XML files!
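To give a feel for the shape of those updates, here is a sketch in the Jython dialect that wsadmin uses. The command names shown (AdminTask.renameNode, AdminTask.changeHostName, AdminConfig.save) are my stand-ins based on the WebSphere Application Server Information Center, not necessarily the three commands from the original post, so verify them against your WAS version. The helper functions just build the bracketed parameter strings that AdminTask expects:

```python
# Sketch only: intended to run inside a wsadmin session (Jython), where
# WebSphere provides the AdminTask and AdminConfig objects. Command names
# are assumptions drawn from the WAS Information Center.

def rename_node_params(old_node, new_node):
    # AdminTask takes its options as a single bracketed string.
    return '[-nodeName %s -newNodeName %s]' % (old_node, new_node)

def change_hostname_params(node, new_host):
    return '[-nodeName %s -hostName %s]' % (node, new_host)

# In an actual wsadmin session you would run something like:
#   AdminTask.renameNode(rename_node_params('oldNode01', 'newNode01'))
#   AdminTask.changeHostName(change_hostname_params('newNode01', 'newhost.example.com'))
#   AdminConfig.save()
```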
Every time I've visited with customers about WebSphere CloudBurst, without fail someone requests that the appliance support products besides the WebSphere Application Server. We started to address these requests with WebSphere CloudBurst 1.1 when we announced the availability of a DB2 Enterprise 9.7 trial virtual image specifically packaged for use in the appliance. Very recently we continued to respond to customer requests by extending the list of supported products in WebSphere CloudBurst to include WebSphere Portal.
The WebSphere Portal Hypervisor Edition, initially offered as a Beta product, is a virtual image packaging of WebSphere Portal 6.1.5 ready for use in the WebSphere CloudBurst Appliance. The image includes a pre-installed, pre-configured instance of WebSphere Portal. Also contained within the image is an IBM HTTP Server instance configured to route to the WebSphere Portal instance and a DB2 instance installed and configured as the external database for WebSphere Portal. The WebSphere Portal instance also includes Web Content Management enablement along with several samples to help users get started right away.
The user experience when building and deploying WebSphere Portal patterns remains consistent with the existing experience for WebSphere Application Server and DB2 patterns. Another good note is that you can expect similar rapid deployment capability for WebSphere Portal patterns. I got a running virtual system, with all the parts I mentioned above installed and configured (meaning no after-the-fact integration scripting was necessary), in under 15 minutes.
To see more, check out my new demonstration of the WebSphere Portal Hypervisor Edition for the WebSphere CloudBurst Appliance. If you have a WebSphere CloudBurst Appliance you can download the WebSphere Portal Hypervisor Edition image and a usage guide from here.
When writing a new tool for the WebSphere CloudBurst samples gallery last week, I got the chance to use an API in the CLI that was new to me. Specifically, I got a chance to use the WebSphere CloudBurst CLI in order to retrieve an audit log from the appliance for a specified date period. In case this is new and interesting to you, I thought I would share what I found.
First off, let's take a look at the API I am talking about. It's pretty simple: cloudburst.audit.get(file, start, end). Here, start is the start date for the audit entries and (naturally) end is the end date for those entries. The file parameter simply denotes the location or file object you want to use to store the audit archive retrieved via the get method.
This is a simple enough API. The only wrinkle comes in dealing with calculating the start and end dates. According to the WebSphere CloudBurst Information Center, both the start and end times are 'specified as the number of seconds since midnight, January 1, 1970 UTC. Floating point values can be specified to indicate fractional seconds.' For my use case, I wanted to let a user or calling program pass the start and end times as arguments to the CLI script that retrieves the audit archive. Check out the relevant portion of my script below:
As you can see, the script takes in the start and end time in the MM/dd/yy HH:mm format (i.e. 05/20/10 15:30). It parses the value to produce a date, gets the long value of the date (which is in milliseconds according to the java.util.Date API), and divides that value by 1000. This is to account for the fact that the cloudburst.audit.get method expects you to express the start and end times in seconds. The script passes the converted dates along with the output file location to the get method. The result is a ZIP file that contains an appliance audit, license audit, and PVU audit file for the specified date range.
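If you would rather see the conversion in ordinary Python than in Jython with java.util.Date, here is an equivalent sketch. It assumes the timestamps are UTC; the original script's time zone handling may differ:

```python
import calendar
import time

def to_epoch_seconds(stamp):
    # Parse 'MM/dd/yy HH:mm' (for example, '05/20/10 15:30') and return
    # seconds since midnight, January 1, 1970 UTC -- the unit that
    # cloudburst.audit.get expects for its start and end arguments.
    parsed = time.strptime(stamp, '%m/%d/%y %H:%M')
    return calendar.timegm(parsed)

start = to_epoch_seconds('05/20/10 15:30')
end = to_epoch_seconds('05/21/10 15:30')
```

Note there is no divide-by-1000 step here: calendar.timegm already yields seconds, whereas java.util.Date.getTime() yields milliseconds.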
One of my favorite things about the WebSphere CloudBurst CLI is that it is Jython-based. This means I can leverage Java APIs from my CLI scripts, and that is huge for me because of my existing knowledge of the Java language. You certainly can substitute Python APIs for my use of Java APIs to handle the start and end date calculation. I hope this is helpful, and good luck with the WebSphere CloudBurst CLI!
One of the most exciting announcements at IBM IMPACT last week was that of the new WebSphere Process Server Hypervisor Edition. This new virtual image allows you to provision complete WebSphere Process Server environments into your on-premise cloud using the WebSphere CloudBurst Appliance. Just like with the other environments you can provision using WebSphere CloudBurst (namely WebSphere Application Server, DB2, and Portal Server), you can stand up these WebSphere Process Server environments in a matter of minutes.
The WebSphere Process Server does not come pre-loaded on the appliance, but it does come with a cool utility that helps you get it there. The WebSphere Process Server Hypervisor Edition loader provides a wizard-like tool that loads the image into the catalog of an appliance you specify. The tool is simple to use and is included as part of the image package that you download from Passport Advantage.
Not only does the loader above populate the WebSphere Process Server Hypervisor Edition into the appliance's catalog, but it also creates a set of patterns for the WebSphere CloudBurst Appliance. These patterns encapsulate golden topology environments for WebSphere Process Server Hypervisor Edition. At the time of my post, the patterns created by the loader include the following:
Standalone server: This pattern represents a single server instance of WebSphere Process Server. Deployment of the pattern results in a single virtual machine that contains both the server instance and a DB2 instance.
Simulated environment: This pattern contains a single part called a 'Full function control node'. Deployment of the pattern results in the creation of a deployment manager, proxy server, DB2 environment, and three WebSphere Process Server clusters (application target cluster, support cluster, and messaging cluster), all in a single virtual machine.
Scalable environment: This pattern contains a deployment manager, 'Basic function nodes' part, DB2 part, and a proxy server. Deploying the pattern results in the same components as the pattern above, but in this case each component resides in its own virtual machine.
The announcement of the WebSphere Process Server Hypervisor Edition only serves to increase the applicability of WebSphere CloudBurst for constructing on-premise WebSphere clouds. If you have any questions, or want to learn more about this new virtual image, please let me know.