When I talk with WebSphere CloudBurst users, the topic of custom virtual images comes up frequently. In some cases they simply want to customize a shipped IBM Hypervisor Edition image, and in other cases they want to create a completely custom image. Creating a customized version of an IBM Hypervisor Edition image is relatively easy since we give you extend and capture in WebSphere CloudBurst. Creating a completely custom image has historically been a bit tougher, mostly owing to the fact that there was not a standard tool or process for image assembly. I am happy to say that today's publication of the IBM Image Construction and Composition Tool changes all that.
Watch a demo of the IBM Image Construction and Composition Tool
The primary purpose of the Image Construction and Composition Tool is to enable a modular approach to virtual image construction, while taking into account the typical division of responsibilities within an organization. The tool allows the right people within an organization to contribute their specialized knowledge as appropriate to the virtual image creation process. This means OS teams can handle the OS and software teams can handle the appropriate software. A separate image builder can then use both OS and software components to meet the needs of users within the organization. Best of all, the image builder does not need intimate knowledge of how to install or configure any of the components in the image. They simply need to know which OS and software components to use.
When using the Image Construction and Composition Tool, you start by defining the base operating system you wish to use for your images. You can do this by importing an existing virtual image with an OS already installed, providing an ISO for the OS, or pointing to a base OS image on the IBM Cloud. The bottom line is that you have the necessary flexibility to start with your certified or ‘golden’ operating system build. Once you have the base OS image defined in the Image Construction and Composition Tool, you can start defining custom software for use in the images you will compose.
In the tool, bundles represent the software you wish to install within a virtual image. The definition of a bundle contains two major parts: Installation and Configuration. The installation component of a bundle tells the Image Construction and Composition Tool how to install your software into the virtual image. You provide a script or set of scripts that install the necessary components into your image, and you direct the tool to call these scripts. These tasks run once during the initial creation of the virtual image, thus allowing you to capture large binaries, long-running installation tasks, or other necessary actions directly into your image.
The configuration section of a bundle defines actions that configure the software installed into the image. Like with the installation tasks, you provide a script or set of scripts for configuration tasks. Unlike installation tasks that run exactly once, configuration scripts become part of the image’s activation framework and as such, run during each image deployment. Using the tool, you can define input parameters for configuration scripts and optionally expose them so that users can provide values for the parameters at image deploy-time. Configuration tasks are important in providing flexibility that allows users to leverage a single virtual image for a number of different deployment scenarios.
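To make the install/configure split concrete, here is a minimal sketch of the two script types a bundle might carry. Everything in it is hypothetical: the file names, paths, and the APP_PORT parameter simply stand in for whatever your software needs.

```sh
#!/bin/sh
# install.sh -- wired to the bundle's installation section; the tool runs this
# exactly once, during image synchronization, so heavy lifting belongs here.
set -e
mkdir -p /opt/myapp
tar -xzf /tmp/bundle/myapp-binaries.tar.gz -C /opt/myapp
```

The configuration counterpart stays small by design, because it runs on every deployment:

```sh
#!/bin/sh
# configure.sh -- registered with the image's activation framework; runs on
# EVERY deployment. APP_PORT is a deploy-time parameter exposed by the bundle.
set -e
sed -i "s/^port=.*/port=${APP_PORT:-8080}/" /opt/myapp/conf/server.properties
```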
Once you have your base OS image and one or more bundles defined in the Image Construction and Composition Tool, you can compose a virtual image. To compose a virtual image, you extend the base OS image and add any number of bundles into the new image. A base OS image plus a set of bundles defines a unique image.
After you define the image you want to construct, you initiate a synchronize action in the Image Construction and Composition Tool. When you start the synchronize action, the tool first creates a virtual machine in either a VMware or IBM Cloud environment (based on how you configured the tool). Next, the installation tasks of each bundle you included in the virtual image run to install the required software. Finally, the tool copies the configuration scripts from each bundle into the virtual machine and adds them to the image’s activation framework. This ensures the automatic invocation of all configuration scripts during subsequent image deployments.
Once the image is in the synchronized state, you can capture it. Capturing the image results in the creation of a virtual image based on the state of the synchronized virtual machine. The tool also automates the generation of metadata that becomes part of the virtual image package. When the capture of the virtual image completes, you can export it from the Image Construction and Composition Tool and deploy it using WebSphere CloudBurst, Tivoli Provisioning Manager, or the IBM Cloud.
I am excited for users to get their hands on the Image Construction and Composition Tool. I believe it represents the first big step in helping users to design and construct more sustainable virtual images. Did I mention it is completely free to download and use? Visit the Image Construction and Composition Tool website for more details and a download link. I look forward to your comments and feedback.
Lately, I have run into multiple situations where an IBM Workload Deployer user has been trying to decide exactly how they want to create their customized images for the cloud. Essentially, they have been trying to decide whether to use the native extend and capture capabilities of IBM Workload Deployer, or to pursue the use of the Image Construction and Composition Tool (also included with the appliance). The conversations have been interesting and challenging, but more importantly, they have been a reminder that constructing enterprise-ready environments for the cloud does not happen by magic. It takes thought, deliberate planning, sustainable design, and the tools to carry everything out.
The tools part we have covered. I have every confidence, bolstered by user experience after user experience, that IBM Workload Deployer and associated tools (like the Image Construction and Composition Tool) equip you to build highly customized, cloud-based application environments. In this post, I want to focus on the thought process that goes into how you decide to build your customized environment. Specifically, I would like to talk about important points to consider as you try to decide whether to use the native extend and capture capabilities of IBM Workload Deployer or the Image Construction and Composition Tool.
To be clear from the outset, I am not trying to provide a decision flowchart in this post. For all intents and purposes, that would be next to impossible. Instead, I want to pose some important questions that you should ask yourself, along with the reasons why I believe those queries to be important. Keeping in mind that this is not an all-inclusive list, here goes:
Question: Are the customizations that you want to make congruent with an IBM-supplied image?
Reason: One of the first decisions you should make is whether or not you can start with an IBM-supplied image as the base for your customization. You need to know what middleware elements (type and version) make up your environment and what operating system should host that environment (version and distribution). You can match that information against the list of content that IBM supplies. If there is a match, you should start by looking at extend and capture to customize that image to meet your needs. If there is no direct match, you may be looking at the Image Construction and Composition Tool.
Question: Does your custom content supplement middleware content supplied in an IBM image?
Reason: If you simply need to add additional components that supplement software already in an IBM image, I believe it is best to first examine the use of extend and capture. Whether these components are IBM software or not is irrelevant as the extend and capture functionality does not care.
Question: How configurable do you want to make the custom content in your image?
Reason: If you are adding content into the image, you need to think about just how configurable you need it to be. When you use extend and capture, you add the content to an existing image in a manner that ends up being largely opaque to IBM Workload Deployer. To configure that content, you need script packages, and you must make sure they are part of every pattern you create based on the image. Alternatively, if you use the Image Construction and Composition Tool, you can embed configuration behavior in the image's activation engine, and you can expose deploy-time parameters without needing to include script packages in every single pattern. As an example, if you need to add a monitoring agent into your environment, you would likely do this via extend and capture and end up with a pretty simple script package to configure that agent during deployment. If, however, you need to create an image with a custom database, you would likely favor the Image Construction and Composition Tool, as you could embed common deploy-time configuration parameters directly in the image. For a database, there are likely to be many more deploy-time configuration parameters that you want to expose as compared to a simpler monitoring agent.
Question: Is your main focus on making operating system changes?
Reason: If your primary focus is on making operating system changes AND the answer to the first question is that your target content aligns well with IBM-supplied images, then extend and capture is where you want to start. Of course, you need to make sure that you can make all necessary changes to the OS with extend and capture, but I will say that this capability is not very restrictive at all.
Admittedly, this is a short list, but I believe it is a good starting point for how you decide upon one approach versus the other. Also, I would be remiss not to point out that these tools are absolutely not mutually exclusive. Many users I work with use a combination of the two approaches. In fact, there are some use cases that call for both tools. Start by creating a completely custom image in the Image Construction and Composition Tool, and then subject that image to the extend and capture process in IBM Workload Deployer to customize it for a particular purpose, team, project, etc. I hope you find this helpful, and I welcome your feedback or thoughts!
The WebSphere Application Server Hypervisor Edition virtual image is made up of four different virtual disks. One of those disks contains pre-created and pre-configured WebSphere Application Server profiles. When the image is activated (either through WebSphere CloudBurst or in a standalone fashion), all of the profiles not being used are deleted leaving only the intended WebSphere profile type.
Since the profiles are pre-created, this implies that certain information must be updated after the image is activated to reflect things that change with each node that is created. Among other things, the cell name, node name, and host name of the WAS profile configuration are usually updated during the image activation process. Nearly every time I talk to WAS administrators about WebSphere CloudBurst and WebSphere Application Server Hypervisor Edition they are intrigued by this particular configuration update and almost always ask "How do you do it?" (Dustin's note: Since the command to rename the cell is not officially documented, I have removed it from this post. I'm sorry, but it is for your own good!)
Most of the time this question pops up because users are attempting to freeze-dry certain WAS configurations in their organization, albeit with a narrower focus than WAS Hypervisor Edition. However, no matter how they do that (virtual images, zipped-up configuration files, etc.), they too need to update things like the cell, node, and host names when attempting to reuse the configuration. Many have gone down the route of trying to identify all of the different XML files they need to change in order to update this information, but this is untenable and in fact unnecessary.
If you need to update the node or host name, forget manually updating XML files. Instead, use the three wsadmin commands documented for this task.
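For the host name piece, for example, the documented AdminTask.changeHostName command, followed by a save of the configuration, looks like this (the node and host names are placeholders):

```python
# wsadmin Jython: point the node at its new host name, then persist the change.
AdminTask.changeHostName('[-nodeName myNode -hostName newhost.example.com]')
AdminConfig.save()
```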
The commands can be run from a standalone node or from a deployment manager node. They are pretty straightforward, and if you need more information about them, just take a look in the WebSphere Application Server Information Center. I hope this is helpful information, and stay away from those XML files!
As many of you well know, virtual images are the foundation of virtual system patterns in IBM Workload Deployer. Whether you are using IBM Hypervisor Edition images or custom-built images produced by the IBM Image Construction and Composition Tool, every virtual system pattern has at least one virtual image as part of its foundation. So, if virtual images are the foundation of virtual system patterns, what is the foundation of these virtual images?
While you could probably make a good argument for a number of different things being the foundation of the virtual image (operating system, other installed software, etc.), I like to think that, at least in the context of IBM Workload Deployer, the activation engine inside the virtual image is the true foundation. Inside this activation engine, you will find a collection of scripts and services that are capable of configuring the virtual machine for use. Not only does this engine perform basic system-level actions like configuring the machine's hostname, IP address, time, and network interfaces, but it also configures the software inside the virtual machine. For instance, the activation engine in the WebSphere Application Server Hypervisor Edition image is capable of fixing up profile information, federating nodes, creating application server clusters, and more. Best of all, in the case of IBM Hypervisor Edition images, you (the user) get all of this right out of the box. There is no logic to implement and no administrative tasks to undertake in order to benefit from the activation engine. It is simply there!
So, at this point you may ask yourself 'If all of this is included right out of the box, why do I need to care?' That is a fair question, but ultimately I feel it is always important to understand the foundational elements of any technology. In this respect, I do not feel like the activation engine in the IBM Hypervisor Edition images is any different. Lately, I have been telling my users to take at least a little time to understand what the activation engine is and even more importantly, what it is doing for you during deployment. Specifically, I always suggest taking a little time to look at the scripts in the activation engine -- most often found in the /opt/IBM/AE/AS directory of a virtual machine deployed by IBM Workload Deployer.
What can be gained by taking the time to peruse these scripts? Most importantly, I think, you will learn what the engine does for you and what you cannot do if you expect the image to deploy correctly. For instance, if you look in some of those activation engine scripts, you will see that the engine uses the sudo command in several places. While I know many of you may be tempted to remove the sudo command during extend and capture, doing so will break the activation engine. I have seen this happen multiple times, and trust me, if you did not know the activation engine used that command, it is not an easy problem to debug. This is a case where the value of at least superficially understanding the activation engine is clear.
Want another example? Okay, consider that you want to run WebSphere Application Server as a user called wasadmin. At pattern deployment time, it is easy enough to supply wasadmin in the appropriate field of the part configuration data and click OK. IBM Workload Deployer deploys the system and voila, WebSphere Application Server is magically running as wasadmin. Everything is fine so far, but let's take this a step further and say that you previously performed an extend and capture, and you installed software components in the image that should be owned by your wasadmin user. It is technically possible to define users during extend and capture and then install software content via that user, but if you also want to specify that user as the WebSphere Application Server administrative user at deployment time, you will run into an issue. This is because the activation engine runs the usermod command during deployment to change the existing, default virtuser into the user that you specify -- in this case wasadmin. If the usermod command attempts to change virtuser to wasadmin but wasadmin already exists as a user on the operating system, the command will not complete properly, and it is very likely you will see further errors downstream. A simpler way to do this is to create the user during extend and capture, install any components via that user, and then delete the user before capturing. You can attach a deploy-time script that fixes up the appropriate settings for wasadmin (like user ID and group ID), and it will run after the activation engine successfully does a usermod and changes virtuser to wasadmin. Problem averted!
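Here is a rough sketch of that deploy-time fix-up script, assuming the activation engine has already renamed virtuser to wasadmin; the user name, IDs, and path are all hypothetical:

```sh
#!/bin/sh
# Deploy-time fix-up: runs after the activation engine's usermod has already
# renamed virtuser to wasadmin. The IDs and path below are hypothetical.
set -e
usermod -u 1500 wasadmin                       # pin the UID the installed files expect
groupmod -g 1500 wasadmin 2>/dev/null || true  # pin the GID, if such a group exists
chown -R wasadmin /opt/custom/software         # reclaim files installed by the deleted build user
```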
In reading some of the above, I fully realize that it may be a little confusing at first. That said, I assure you there is not much to it once you have a basic understanding of the activation engine. With that understanding in hand, you will know what you do not need to do (e.g. create profiles, federate nodes, etc.), what you cannot do (e.g. remove the sudo command), and what you can do with a little bit of reconciliation work (e.g. define your WebSphere Application Server administrative user during image extension). I encourage you to take a little time with your next deployment and give the activation engine a once-over. You will undoubtedly have a better understanding of the deployment process, and you will ultimately be in a position to most effectively leverage virtual system patterns in IBM Workload Deployer.
In a recent post, Joe Bohn detailed some of the new capabilities and enhancements that come along with the recently delivered IBM Workload Deployer v3.1. To be sure, there are many valuable new features such as PowerVM support for virtual application patterns, the Plugin Developer Kit, WebSphere Application Server Hypervisor Edition v8, and more. Each of these topics probably merits its own post, but today I want to talk about something I did not mention above. Specifically, I want to talk about the announcements regarding the IBM Image Construction and Composition Tool (ICCT) and what they mean for IBM Workload Deployer users.
You may have read an earlier post that I wrote about the ICCT, but allow me a brief overview here. In short, the ICCT enables the construction of custom virtual images for use in IBM Workload Deployer. You use the tool to create virtual images, much like IBM Hypervisor Edition images, and then you can use those custom images (containing whatever content you need) to create your own custom virtual system patterns. The key point about the custom images you create with the ICCT is that they are dynamically configurable. That is, the tool helps you to create the images in such a way that you can defer configuration until deploy time rather than burning such configuration directly into an image. For those of you familiar with virtual image creation, you know this type of 'intelligent construction' is a huge step towards keeping image inventory at a reasonable level.
Okay, enough of a general overview for now. Let's talk about the two new items of note regarding IBM Workload Deployer v3.1 and the ICCT. The first thing you should know is that starting in IBM Workload Deployer v3.1, the ICCT ships with the appliance. This means that you do not need to go anywhere else to get your hands on the tool and start creating your custom images. You simply log in to IBM Workload Deployer and click the download link on the appliance's welcome panel.
Getting your hands on the tool is one piece of the puzzle, but using it is quite another. While the ICCT has been available as an alphaWorks project for some time, that also means there has never been official support for the tool. That changes starting with IBM Workload Deployer v3.1. The ICCT is now a generally available product from IBM, which means it is fully and officially supported as well. Further, the images you create using the tool are also officially supported for use as building blocks of your IBM Workload Deployer virtual system patterns. If you have been using the ICCT for some time but have been hesitant to expand its use because of the lack of a formal support statement, you should now feel free to charge forward!
I hope this helps clear up exactly what the new Image Construction and Composition Tool announcements that were part of IBM Workload Deployer v3.1 actually mean. I cannot wait to hear about how you all are putting the ICCT to use with IBM Workload Deployer. Finally, don't forget to send us any questions, comments, or other feedback that you may have regarding this or any other new feature in IBM Workload Deployer v3.1!
One of the key benefits of WebSphere CloudBurst adoption is rapid -- seriously fast -- deployment of middleware application environments. Our users are leveraging the appliance to bring up enterprise-class middleware environments in mere minutes. If you know a little bit about WebSphere CloudBurst, that claim may be a little surprising considering the appliance dispenses large virtual images over the network to a farm of hypervisors. You may ask how the appliance can achieve such rapid deployments in light of the mere physics involved in transferring large amounts of data over a network. The simple answer, of course, is caching!
WebSphere CloudBurst creates a cache for each unique virtual image on datastores associated with the hypervisors in your cloud. On subsequent deployments of the same virtual image to the same datastore, WebSphere CloudBurst does not need to transfer the image over the wire. It simply uses the virtual disks that are in the cache on the datastore. In the context of the virtual image cache, the deployment process goes something like this:
WebSphere CloudBurst identifies the images necessary to deploy the pattern selected by the user.
WebSphere CloudBurst identifies the hypervisors and associated datastores that will host the virtual machines created during deployment.
WebSphere CloudBurst checks the selected datastores to see if they already have caches for the images it will be deploying. From here, one of two things happens:
WebSphere CloudBurst detects that there is no cache on the datastore and transfers the images over to the hypervisor, thereby creating the cache on the underlying datastore.
WebSphere CloudBurst detects that there is a cache on the selected datastore and uses that cache in lieu of transferring the disk over the wire.
The process may sound complicated, but it is completely hidden from you, the user. You do not need to know how the cache works since WebSphere CloudBurst handles all of these interactions. So, why am I telling you all of this then? As a WebSphere CloudBurst user, it is good to be aware of the cache for two main reasons. First, you need to account for the storage space the cache needs when doing capacity planning for your WebSphere CloudBurst cloud. Second, anytime you upload a new image or create one through extend and capture, I strongly suggest you prime the cache for that new image right away. You can do this by simply deploying a pattern built on the image to each unique hypervisor/datastore combination in your environment. This may require a temporary rearrangement of cloud groups, but it is a simple process, and it guarantees rapid deployments for all users of the new image.
I hope this sheds a little light on a subject we do not discuss too often. As always, if you have any questions, do not hesitate to let me know!
I hardly ever have a conversation about WebSphere CloudBurst, or generally cloud computing for application middleware, without the topic of databases coming up. Databases are such an important piece of nearly every application middleware environment, so users want to be sure that whatever they do for their application servers, they can also do for the databases on which their applications rely. That is why the capability to deploy DB2 from WebSphere CloudBurst has been around for nearly as long as the capability to deploy WebSphere Application Server.
Even though DB2 deployment capability has been around for a while, there are still some common misconceptions regarding the offering. First, I have talked to a fair number of users who are under the impression that we only offer a trial version of DB2 for deployment via WebSphere CloudBurst. While that was true for the first few months of the offering, that is no longer the case. For several months now, a fully supported, 64-bit, production-ready DB2 image has been available for use in WebSphere CloudBurst. If you were waiting for a DB2 image that you could go live with, wait no longer!
The other misconception, or rather, point of confusion, arises from the fact that the DB2 image for WebSphere CloudBurst is not, by name, a Hypervisor Edition image. I can assure you that the difference is in name only. The DB2 image looks and behaves like any other IBM Hypervisor Edition image once you load it into the appliance. You can use it to build and deploy patterns in the same way you use other images in WebSphere CloudBurst. You may just have trouble finding it if you search for 'DB2 Hypervisor Edition' as opposed to 'DB2 Server for WebSphere CloudBurst Appliance.'
Instead of going into further detail, I want to refer you to a blog entry from a fellow IBMer, Leon Katsnelson. Leon is a program director for DB2 and is responsible for the team that develops and delivers the DB2 image for WebSphere CloudBurst. In his most recent post, he provides a nice overview of the image and gives good information for those looking to use DB2 and WebSphere CloudBurst (there is also a bit on cloud computing at the beginning that I think is spot on). Check out Leon's post, and let us know what you think!
In last week's post, I put the spotlight on various aspects of bundles in the Image Construction and Composition Tool. I finished with a look at a WebSphere CloudBurst virtual image created from the bundle. However, you do not just magically go from a bundle to an image that you can use in WebSphere CloudBurst, Tivoli Provisioning Manager, or on the IBM Cloud. Today, I want to show you how to go from a bundle to a custom virtual image using the IBM Image Construction and Composition Tool.
Once you have defined at least one bundle and one base operating system image, you are ready to compose a custom image. We already talked about creating a bundle, but the base operating system image is a new topic. You can define one either by starting from ISO and kickstart configuration files or by importing an existing Open Virtual Appliance (OVA) image that contains your operating system of choice. Once you have that base image imported or defined in the Image Construction and Composition Tool, you can extend it to create a custom image on top of the base OS image.
After creating your extended image, you can add bundles that represent the software you want to install in your custom image. Simply click on the Software tab of the new virtual image. Click the add icon, and select the bundle that you want to add. You can add as many bundles as you would like to your custom image.
After adding a bundle, it will show up in the Planned list of software for the image. Click on it to display its details in the right side of the screen. You will notice General, Install, and Configuration sections for the bundle. In the Install section, you will find a list of the installation parameters you defined for the bundle. You can provide values for the parameters at this time.
If you click on the Configuration section, you will see all of the configuration parameters you specified for the bundle. You can provide default values, and you can specify whether or not they should be configurable by deployers of your custom image. If you mark them as configurable, users will be able to provide values for the parameters at image deploy time, regardless of whether they provision the image using WebSphere CloudBurst, Tivoli Provisioning Manager, or the IBM Cloud.
After you add the necessary bundles and specify installation and configuration data, you can save the image. Upon saving, the image status changes from Synchronized to Out of Sync.
Now you are ready to synchronize the image. To do this, simply click the synchronize icon. This results in the creation of a virtual machine in the cloud environment (VMware or IBM Cloud) defined by the selected cloud provider. The Image Construction and Composition Tool then invokes the appropriate installation tasks (per the bundles you included in the image) within the running virtual machine. It also copies over any configuration scripts you defined in the bundles.
After a while, the synchronization process completes, and the image returns to the Synchronized state. At this point, you are ready to capture the image by clicking the capture icon. This results in the creation of an OVA virtual image with your customizations. When the capture process completes, the image status changes to Deployable.
Once the image is in the deployable state, it is nearly ready to use. If you are using the IBM Cloud as your cloud provider, you can simply mark the image complete by clicking the complete icon. At this point, the image will show up in your private catalog on the IBM Cloud and it is ready to use. If you are using VMware as the cloud provider, you need to export the image. Click the export icon and provide information about an SCP-enabled server to which you want to export the image. Ideally, this location is directly reachable by the WebSphere CloudBurst or Tivoli Provisioning Manager environment into which you will import the image.
You can monitor the export status in a separate window by clicking the link shown after you click OK in the export dialog. When the export finishes, you are ready to import your new custom virtual image into WebSphere CloudBurst or Tivoli Provisioning Manager.
I hope the last three posts have given you a better idea of what the new IBM Image Construction and Composition Tool is all about. There will definitely be more to come about this tool in the near future, but in the meantime, if you have any questions or comments, please reach out to me. Until then, good luck and full speed ahead on your custom image compositions!
Since bundles are such a core component of the IBM Image Construction and Composition Tool, I thought it would help to take a closer, more thorough look at them than I did in my post last week (if you have not already, I suggest reading the overview post before continuing). To help us in our closer examination, we will consider an example bundle I built using the IBM Image Construction and Composition Tool. The example bundle I built encapsulates the logic to install and configure WebSphere Application Server Community Edition. Let's take this step by step.
The first part of the bundle is the General section. This section allows you to provide a name and description for the bundle, the bundle ID and version, and the products represented by the bundle.
The next section of a bundle is the Requirements section. In this section, you can define the operating system and software requirements for your bundle. In the OS section, you specify the type, distribution, and version level of the OS your bundle requires. In the software section, you can indicate that your bundle requires other bundles defined in the IBM Image Construction and Composition Tool. You do this by providing the bundle ID for required bundles.
Next, we move on to the Install section of the bundle. Two major subsections make up this section. The first is the Files to Copy subsection. Here, you provide files, either through a file upload dialog or via a URI, and you specify a destination directory. When you add a bundle to an image and initiate the synchronization process, the IBM Image Construction and Composition Tool automatically copies the files you list here to the specified destination directory on the virtual machine. In the sample WebSphere Application Server Community Edition bundle, I specify a single install.sh file to copy to the virtual machine.
The second major subsection of the Install section is the Command subsection. In this section, you will specify the installation command that the IBM Image Construction and Composition Tool should automatically invoke during the synchronization process. Additionally, you can define variables that you want to make available to your installation scripts. The tool makes these available as environment variables for the process within which your script runs. In the sample bundle, I tell the Image Construction and Composition Tool to invoke the install.sh script specified above, and I define parameters that specify the location of the binaries to install, the location to install the binaries on disk, and more.
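As a hedged approximation of what such an install script might look like, consider the sketch below. The WASCE_BINARIES_URI and WASCE_HOME variable names are assumptions standing in for the parameters described above, and the silent-install invocation is illustrative rather than the exact WASCE installer syntax.

```sh
#!/bin/sh
# install.sh -- invoked once by the tool at synchronization time. The variable
# names are stand-ins for the bundle-defined parameters; the installer
# invocation below is illustrative, not the exact WASCE syntax.
set -e
mkdir -p "$WASCE_HOME"
curl -o /tmp/wasce_setup.jar "$WASCE_BINARIES_URI"   # fetch the installation binaries
java -jar /tmp/wasce_setup.jar -i silent -DUSER_INSTALL_DIR="$WASCE_HOME"
```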
The next section in a bundle is the Configuration section. The Configuration section allows you to define configuration operations: actions that execute for each deployment of an image containing the bundle. You can define zero to N configuration operations in a bundle, and each configuration operation definition contains three major subsections. The first is the Files to Copy subsection. This subsection is similar to the Files to Copy subsection in the Install section. You provide files or file URIs, and you provide a destination directory to which the tool will copy them. The WebSphere Application Server Community Edition bundle contains a single configuration operation called ConfigWASCE. In its Files to Copy section, I define a single file to copy into the image's activation engine directory.
The second major subsection in the configuration operation definition is the Command subsection. Like the Command subsection in the Install section of the bundle, you specify a command to execute and optionally associate variables with the command. There is a key difference between the command definition for configuration operations as opposed to installation operations. The Image Construction and Composition Tool invokes the command you specify for installation operations exactly ONCE at image creation (synchronization) time. On the other hand, commands you specify in the configuration operation definition execute EACH time someone deploys an image containing your bundle. In the sample bundle, my ConfigWASCE.sh script will automatically execute for each deployment. The tool will package the image in such a way that ensures the automatic passing of parameters defined in the Arguments list (including num_servers, WASCE_HOME, and more) to the ConfigWASCE.sh script.
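Assuming those arguments surface as environment variables, here is a sketch of what ConfigWASCE.sh might do with them; the body is illustrative rather than the actual sample script.

```sh
#!/bin/sh
# ConfigWASCE.sh -- runs on EVERY deployment of an image containing the bundle.
# num_servers and WASCE_HOME come from the Arguments list; the usage below is
# illustrative only.
set -e
echo "Configuring WASCE under ${WASCE_HOME} for ${num_servers} server instance(s)"
# Record the deploy-time choice somewhere the server startup logic can read it.
echo "server.instances=${num_servers}" >> "${WASCE_HOME}/var/config/deploy.properties"
```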
The final major subsection of a configuration operation definition is the Dependencies section. This allows you to define other services on which your configuration operation is dependent. This can include other configuration operations in the same or other bundles, and it can include general operating system services. The WebSphere Application Server Community Edition sample bundle includes a few dependencies.
The Install and Configuration sections are really the meat of your bundle, but there is more. There is a Firewall section that allows you to define port ranges and associated protocols that the IBM Image Construction and Composition Tool should ensure are open when provisioning an image containing your bundle. Currently, the tool supports firewall configuration data when building images for the IBM Cloud. The Reset section of the bundle allows you to define tasks that should execute when capturing the image back into the Image Construction and Composition Tool (after synchronization completes). This allows you to clean up the state of the image after the install completes. Reset configuration is not currently available in the alphaWorks version of the tool. Finally, there is a License section where you can define software licenses associated with your bundle. The tool automatically adds these licenses to the constructed image's metadata, thereby allowing deployment tools to prompt the user to accept all pertinent licenses. The WebSphere Application Server Community Edition sample bundle defines a product license.
Of course, once the bundle definition is complete, you can leverage it to compose and produce an image that you can use in WebSphere CloudBurst, Tivoli Provisioning Manager, or on the IBM Cloud. In the case of the WebSphere Application Server Community Edition sample bundle, I used it to create an image that I loaded into WebSphere CloudBurst and used to build patterns.
I hope this helps to provide a better idea of what bundles are all about in the Image Construction and Composition Tool. Don't forget to take a look at the overview demo and stay tuned for more to come about this new tool!
Though I feel like we've come a long way in clearing up some of the initial confusion surrounding IBM CloudBurst and WebSphere CloudBurst, I still get quite a few basic questions about the solutions. The two most common questions are, 'Are they different products?' and 'Can/should I use them together?'. I put together a really brief overview that answers these questions and covers the basics of the combined solution. I hope it provides a good introduction!
One of my favorite books from childhood is If You Give a Mouse a Cookie. Although targeted at children, the book illustrates a frequently occurring human behavior that is important for all of us to understand. That behavior is the tendency for escalating expectations. The book sets this up by starting out with the simple action of giving a mouse a cookie. The mouse in turn asks for a glass of milk, various flavors of cookies, and on and on, until the mouse circles back to asking for another cookie.
Nearly all of us exhibit this same kind of behavior, and it can often produce positive results. In particular, in IT we always push for the next best thing or a slightly better outcome. Personally, I am no stranger to this behavior because I experience it from WebSphere CloudBurst users quite frequently. In these cases, it usually revolves around one particular outcome: speed of deployment.
Bar none, users of WebSphere CloudBurst are experiencing unprecedented deployment times for the environments they dispense through the appliance. The fact that we say you can deploy meaningful enterprise application environments in a matter of minutes is far beyond just marketing literature. Our users prove it every day. However, just because they are deploying things faster than ever does not mean they are content to rest on those achievements. They want to push the envelope, and I love it.
For our users looking to achieve even speedier deployment times, I offer up one reminder and one tip. First, analyze all of your script packages to ensure you are using the right means of customization. If you have some scripts that run for considerably longer than most other script packages, you may want to at least consider applying that customization by creating a custom image. You still need to adhere to the customization principles outlined here, but you may benefit from applying the customization in an image once and avoiding the penalty for applying it during every deployment. You may also be able to break this customization out with a combination of a custom image and script packages. For instance, instead of having a script that installs and configures monitoring agents, you may install the agents in a custom image and configure them during deployment. Being selective about how and when you apply customizations can go a long way in improving your deployment times.
In addition to the reminder above, I also have a tip. Take a look at all of the script packages you use in pattern deployments and look to see if there are any that you can apply in an asynchronous manner. In other words, identify customizations that need to start, but not necessarily complete as part of the deployment process. Going back to our example of configuring monitoring agents during the deployment process, it may be important to kick off the configuration script during deployment, but is it crucial to wait on the results? Maybe not. If it is not, consider defining the executable argument in your script package in a manner that kicks off the execution and proceeds -- i.e. nohup executable command &. This approach can save deployment time in certain situations.
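For instance, the executable definition of a hypothetical agent-configuration script package could change from a blocking call to a fire-and-forget one:

```sh
# Synchronous: the deployment waits for agent configuration to finish.
/tmp/monitoring/configure_agents.sh

# Asynchronous: kick off the script, detach it, and let the deployment proceed.
nohup /tmp/monitoring/configure_agents.sh > /tmp/monitoring/config.log 2>&1 &
```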
My advice to users of WebSphere CloudBurst: keep pushing your deployment process! Pare as many minutes off the process as you can. I hope that the tips above help in that regard, and be sure to pass along other techniques that you have found helpful.
I spent most of my time growing up doing two things, going to school and playing sports. I made many fond memories -- mostly from the latter :) -- and learned more than a few lessons over that time. Of all of those lessons, there was one in particular that stuck out in both the classroom and on the baseball diamond: Sometimes you have to get back to the basics.
In that vein, I think it is time to revisit the basics of WebSphere CloudBurst. In revisiting the basics, I am not talking about the technical basics of the appliance. Rather, I am talking about revisiting exactly why WebSphere CloudBurst exists in the first place. In other words, let's take a look at the problem domains WebSphere CloudBurst addresses, and let's discuss a little bit about how the appliance does so.
It seems like it was announcement day across IBM, and specifically in WebSphere. While the announcements were numerous and touched many different topics, I want to focus on a couple of announcements of particular interest to those of you interested in WebSphere CloudBurst and IBM Hypervisor Edition virtual images.
First, for all of our WebSphere Process Server and WebSphere Business Monitor users, there are a couple of important pieces of information in this announcement, which outlines the availability of WebSphere Business Monitor Hypervisor Edition. The new image allows you to dispense WebSphere Business Monitor 7.0 environments to VMware hypervisors using WebSphere CloudBurst. In addition, the announcement outlines the expansion of the existing WebSphere Process Server Hypervisor Edition image to support the z/VM platform and the Red Hat Enterprise Linux (RHEL) operating system on VMware.
Moving beyond our BPM set of solutions, IBM also announced the availability of a WebSphere Message Broker Hypervisor Edition. This virtual image allows you to construct and deploy WebSphere Message Broker and WebSphere MQ environments using WebSphere CloudBurst. The stack includes the RHEL operating system, and it is ready to run on VMware hypervisors.
With that in mind, here's an update to the WebSphere CloudBurst supported product matrix:
* Availability subject to dates documented in referenced announcement letters
As you can see, we are continuing our effort to expand the choice you have when using WebSphere CloudBurst to create and deploy application environments to your cloud. If you are interested in using WebSphere CloudBurst for WebSphere Business Monitor, WebSphere Process Server, or WebSphere Message Broker, check out the above announcements. You will find more technical information as well as planned availability dates.
Just one last scrap of food for thought. Feedback from you, our users, is instrumental as we continue to expand software choice with WebSphere CloudBurst. Please continue to let us know your thoughts and needs!
When you build application environments in WebSphere CloudBurst, there are three main elements that comprise those environments: virtual images, patterns, and script packages. It is likely that at some point you will want to export your environments from a particular WebSphere CloudBurst Appliance. This may be in order to apply version control techniques, share resources among multiple appliances, back up business-critical files, or any number of other reasons. Whatever the reason, WebSphere CloudBurst provides the necessary facilities to support both image and pattern export. WebSphere CloudBurst provides export capability for virtual images that you can access via the web console and CLI. In addition, when you download the CLI from the appliance, you get a sample script called patternToPython.jy that you can use to facilitate pattern export.
The patternToPython.jy sample produces a script that you can use to recreate the targeted pattern on an appliance of your choosing. However, before running the script to recreate the pattern on an appliance, you must ensure that any images and script packages referenced by the pattern exist on the target appliance. Since WebSphere CloudBurst enables you to easily export and import virtual images, all you have to do is account for script packages when attempting to export complete application environments from WebSphere CloudBurst. While the appliance does not directly provide the means to export script packages like it does for images and patterns, the WebSphere CloudBurst Samples Gallery includes a sample that does. You can find this sample in the CLI scripts section of the samples gallery, with the title Export a script package in a portable format.
After downloading the sample CLI script, you simply unzip the archive and use the embedded Jython script from the WebSphere CloudBurst CLI with the following command:
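The invocation follows the usual CloudBurst CLI pattern; the host, credentials, and arguments below are placeholders, and the exact argument order for the sample is an assumption you should verify against the sample's documentation.

```sh
# Run the sample export script through the CloudBurst CLI (placeholder values).
cloudburst -h my.appliance.example.com -u cbadmin -p password \
    -f exportScriptPackage.jy "SCRIPT_PACKAGE_NAME" /tmp/exported_package.zip
```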
This command will create a ZIP file containing the contents of the script package specified by SCRIPT_PACKAGE_NAME. In addition to simply copying the contents of the specified script package into the new ZIP file, the command will trigger the creation of a cbscript.json file based on the definition of the target script package. This file defines the properties of the script package such as the execution command, command arguments, etc., and the exportScriptPackage.jy script adds it to the newly produced ZIP file.
The result of using this sample is a self-contained ZIP file that you can load into any other WebSphere CloudBurst Appliance. Since the ZIP file includes the cbscript.json file, when you load it into another appliance you do not have to define any of the properties for the script package. This eliminates the potential for definition errors as you move script packages from one appliance to the other and makes it simple to export/import script packages among appliances.
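For reference, a cbscript.json packaged this way looks roughly like the sketch below; the field names follow the common script-package descriptor format, and every value shown is hypothetical.

```json
[{
  "name": "Configure ports",
  "version": "1.0.0",
  "description": "Updates WebSphere ports from a starting port number",
  "command": "/bin/sh /tmp/configure_ports/configure_ports.sh",
  "commandargs": "",
  "log": "/tmp/configure_ports/logs",
  "location": "/tmp/configure_ports",
  "keys": [
    { "scriptkey": "STARTING_PORT", "scriptvalue": "" }
  ]
}]
```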
There are a couple of things about the sample worth mentioning. First, if a cbscript.json file exists in the specified script package, the export script will not create a new one. Instead, the sample simply copies the existing one into the new ZIP file. Second, the target script package's contents must be a ZIP file. That is to say, the file associated with the script package in WebSphere CloudBurst must be a ZIP. If you are using anything prior to WebSphere CloudBurst 2.0, this is not an issue since you can only associate ZIP files with script packages. However, WebSphere CloudBurst 2.0 allows you to associate any type of file (ZIP, shell script, python script, etc.) with a script package.
If you are looking to effectively export all of the components of your WebSphere CloudBurst patterns, check out this sample script. I think it will make the process a bit easier for you. As always, comments and feedback are welcome.
When writing a new tool for the WebSphere CloudBurst samples gallery last week, I got the chance to use an API in the CLI that was new to me. Specifically, I got a chance to use the WebSphere CloudBurst CLI in order to retrieve an audit log from the appliance for a specified date period. In case this is new and interesting to you, I thought I would share what I found.
First off, let's take a look at the API I am talking about. It's pretty simple: cloudburst.audit.get(file, start, end). Here, start is the start date for the audit entries and (naturally) end is the end date for those entries. The file parameter simply denotes the location or file object you want to use to store the audit archive retrieved via the get method.
This is a simple enough API. The only wrinkle is calculating the start and end dates. According to the WebSphere CloudBurst Information Center, both the start and end times are 'specified as the number of seconds since midnight, January 1, 1970 UTC. Floating point values can be specified to indicate fractional seconds.' For my use case, I wanted to let a user or calling program pass the start and end times as arguments to the CLI script that retrieves the audit archive. Check out the relevant portion of my script below:
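(A representative reconstruction follows; the cloudburst object is provided by the CLI environment, and the argument indexing is illustrative.)

```python
# Jython, run through the WebSphere CloudBurst CLI. Parse human-friendly dates,
# convert to seconds since the epoch, and fetch the audit archive.
from java.text import SimpleDateFormat
import sys

fmt = SimpleDateFormat('MM/dd/yy HH:mm')
startSecs = fmt.parse(sys.argv[1]).getTime() / 1000  # Date.getTime() returns msec
endSecs = fmt.parse(sys.argv[2]).getTime() / 1000    # the API expects seconds
cloudburst.audit.get(sys.argv[3], startSecs, endSecs)  # argv[3]: output file path
```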
As you can see, the script takes in the start and end time in the MM/dd/yy HH:mm format (i.e. 05/20/10 15:30). It parses the value to produce a date, gets the long value of the date (which is in milliseconds according to the java.util.Date API), and divides that value by 1000. This is to account for the fact that the cloudburst.audit.get method expects you to express the start and end times in seconds. The script passes the converted dates along with the output file location to the get method. The result is a ZIP file that contains an appliance audit, license audit, and PVU audit file for the specified date range.
One of my favorite things about the WebSphere CloudBurst CLI is that it is Jython-based. This means I can leverage Java APIs from my CLI scripts, and that is huge for me because of my existing knowledge of the Java language. You certainly can substitute Python APIs for my use of Java APIs to handle the start and end date calculation. I hope this is helpful, and good luck with the WebSphere CloudBurst CLI!
Since its introduction and initial release around one year ago, activity around WebSphere CloudBurst has been a steady buzz. New images, features, and enhancements have been rolling in, and they can sometimes be a little overwhelming to digest. With that in mind, I want to address a related and frequent question: what products does IBM support for use with WebSphere CloudBurst?
To answer that question, we only need to look at the IBM Hypervisor Edition images currently provided by IBM. Here's a quick matrix of those images:
One of the most exciting announcements at IBM IMPACT last week was that of the new WebSphere Process Server Hypervisor Edition. This new virtual image allows you to provision complete WebSphere Process Server environments into your on-premise cloud using the WebSphere CloudBurst Appliance. Just like with the other environments you can provision using WebSphere CloudBurst (namely WebSphere Application Server, DB2, and Portal Server), you can stand up these WebSphere Process Server environments in a matter of minutes.
The WebSphere Process Server image does not come pre-loaded on the appliance, but it does come with a cool utility that helps you get it there. The WebSphere Process Server Hypervisor Edition loader provides a wizard-like tool that loads the image into the catalog of an appliance you specify. The tool is simple to use and is included as part of the image package that you download from Passport Advantage.
Not only does the loader above populate the WebSphere Process Server Hypervisor Edition into the appliance's catalog, but it also creates a set of patterns for the WebSphere CloudBurst Appliance. These patterns encapsulate golden topology environments for WebSphere Process Server Hypervisor Edition. At the time of my post, the patterns created by the loader include the following:
Standalone server: This pattern represents a single server instance of WebSphere Process Server. Deployment of the pattern results in a single virtual machine that contains both the server instance and a DB2 instance.
Simulated environment: This pattern contains a single part called a 'Full function control node'. Deployment of the pattern results in the creation of a deployment manager, proxy server, DB2 environment, and three WebSphere Process Server clusters (application target cluster, support cluster, and messaging cluster), all in a single virtual machine.
Scalable environment: This pattern contains a deployment manager, 'Basic function nodes' part, DB2 part, and a proxy server. Deploying the pattern results in the same components as the pattern above, but in this case each component resides in its own virtual machine.
The announcement of the WebSphere Process Server Hypervisor Edition only serves to increase the applicability of WebSphere CloudBurst for constructing on-premise WebSphere clouds. If you have any questions, or want to learn more about this new virtual image, please let me know.
In WebSphere CloudBurst, a script package is your vehicle to provide custom middleware configuration. This may mean installing applications, configuring application dependencies, or otherwise tuning the middleware layer. Script packages are essentially ZIP files that include some executable (shell script, wsadmin script, Java program, etc.), and optionally, artifacts that support the execution of the script. As was the intention, you can achieve just about anything you want with a script package. This allows you to be as flexible and creative as you need to be, but it can also leave you asking "Where do I start?" In this post, I want to take an in-depth look at constructing and using a script package in WebSphere CloudBurst.
Specifically, I want to create a script package that supplies configuration functionality for something I believe a fair number of you do: change the default ports used in WebSphere Application Server. To create this and deploy a pattern using the script package, I do the following:
Create a shell script that configures the desired ports
Add the new script as a WebSphere CloudBurst script package
Create a pattern with the new script package
Deploy the pattern and verify the result
First things first. I create the following shell script that configures the ports:
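(What follows is a representative sketch; treat the ANT property names in particular as assumptions to verify against the WebSphere Application Server Information Center.)

```sh
#!/bin/sh
# Pull in the key/value facts WebSphere CloudBurst writes to every virtual machine.
. /etc/virtualimage.properties

# Drive the port update with the ANT tooling shipped in the image. The exact
# property names required by updatePorts.ant vary by WAS version -- verify
# them before use.
$WAS_INSTALL_ROOT/bin/ws_ant.sh \
    -DprofileName=$PROFILE_NAME \
    -DstartingPort=$STARTING_PORT \
    -file $WAS_INSTALL_ROOT/profileTemplates/default/actions/updatePorts.ant
```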
The script uses documented ANT commands included with WebSphere Application Server to update the ports based on a starting port number. You will notice the script first sources the /etc/virtualimage.properties file. WebSphere CloudBurst automatically creates this file on every virtual machine it starts. The file is a key/value file with basic information about the WebSphere cell such as the install root ($WAS_INSTALL_ROOT), the profile name ($PROFILE_NAME), host name ($HOSTNAME), and more. For a full list of the data that WebSphere CloudBurst includes in this file, check out this documentation.
In addition to utilizing the standard set of variables provided by WebSphere CloudBurst, my script above also makes use of the $STARTING_PORT variable. Obviously this variable is not in the standard set. In fact, I define the STARTING_PORT variable when I define my new script package in WebSphere CloudBurst.
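For illustration, an abbreviated /etc/virtualimage.properties might read as follows once the deploy-time parameters are merged in (all values are hypothetical):

```properties
WAS_INSTALL_ROOT=/opt/WebSphere/AppServer
PROFILE_NAME=DefaultWASProfile
HOSTNAME=vm101.example.com
STARTING_PORT=11000
```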
First I zip up the shell script above and attach it to the new script package. Next, I tell WebSphere CloudBurst where to unzip the script package on the virtual machine, how to invoke the included script, and the name of any parameters to associate with the script. Once that is done I can use the script package in a new pattern.
For the sake of simplicity here, I create a new pattern by cloning an existing WebSphere Application Server single server pattern. I then drag and drop the new Configure ports script package onto the single part, and the pattern is complete.
Now I am ready to deploy the pattern by clicking the Deploy button. During the deployment process I configure each part in the pattern (in this case there is only a single part). I supply configuration information like virtual memory allocation, WebSphere cell name, WebSphere node name, and password information. In addition, I also supply a value for the STARTING_PORT parameter that is part of the Configure ports script package included in the pattern. The value I supply here gets inserted into the /etc/virtualimage.properties file on the virtual machine, with STARTING_PORT as its key.
Once the configuration information is supplied, I click OK on the configuration panel and the deployment panel, and WebSphere CloudBurst goes about standing up my virtualized WebSphere cell and running my script to configure the ports for the server instance. When it is done, I log in to the WebSphere Application Server administration console to verify my results. To do this, I navigate to the configuration for the single application server instance and pull up its port definitions.
Based on the results I can see my customizations took effect. I successfully captured my own unique WebSphere environment (in this case with a custom port range) in the form of a pattern. This custom environment can be deployed as many times as I need, in an automated fashion, and I'm guaranteed consistent results each and every time.
I hope this gives you a better idea of what script packages are all about and how they can utilize both WebSphere CloudBurst and user-supplied data that exists in the /etc/virtualimage.properties file of each virtual machine. If you have any questions let me know. I'm on Twitter @damrhein, or you can leave a comment right here.
I'm out at the RSA conference in San Francisco this week, and I'm expecting a lot of good conversations about WebSphere CloudBurst and security. This topic always comes up when I'm out and talking to customers, and I approach it from a few different angles.
First of all, WebSphere CloudBurst enables the creation of on-premise clouds (clouds in your data center). This means that you retain control over the resources that make up and support your cloud, and you have the ability to very tightly secure said resources. Notice that I say "you have the ability". I'm careful to point out that on-premise clouds do not inherently make your environment secure. If you don't already have a robust security strategy in place within your enterprise, then simply moving to a cloud model will not solve much. That being said, if you do have a comprehensive security strategy in place, one built around customized processes and access rights, then on-premise clouds are likely to make much more sense for you.
Moving beyond the opportunity for customized security controls provided by on-premise clouds, WebSphere CloudBurst delivers additional, unique security features. It starts on the outside with the tamper-resistant physical casing. If a malicious user attempts to remove the casing to get to the inner contents, the appliance is put into a dormant state, and it must be sent to IBM to be reset. "So what!" you say. If the user removes the casing and gets to the contents, couldn't they simply read the contents off the flash memory or hard disks directly, or insert them into another WebSphere CloudBurst Appliance and read them from there? Nope. All of the contents stored on the appliance's flash memory and hard disks are encrypted with a private key that cannot be changed and is unique to each and every appliance.
If you are at all familiar with WebSphere CloudBurst, you know that the appliance dispenses and monitors virtual systems running on a collection of hypervisors. Obviously then, the appliance must remotely communicate with the hypervisors. In order to secure this communication, all information between WebSphere CloudBurst and the hypervisors (and vice versa) is encrypted. This encryption is achieved by using an SSL certificate that is exchanged when a hypervisor is defined in WebSphere CloudBurst. This certificate must be accepted by a user, thus preventing rogue hypervisors from being defined in WebSphere CloudBurst.
Finally, WebSphere CloudBurst provides for the definition of users and user groups with varying permissions and resource access rights in the appliance. You don't have to turn over the keys to your cloud kingdom when you add a user to the appliance. You have the capability to define varying permissions (from simply deploying patterns, to creating them, all the way up to administering the cloud and appliance), and you have the ability to control access to resources (patterns, virtual images, script packages, cloud groups, etc.) at a fine-grained level. These two capabilities combine to allow you to control not only what actions a user can take, but also on which resources they can take those actions.
WebSphere CloudBurst was designed with a focus on delivering a secure cloud experience, and I think it hit the mark. I'm sure I didn't address all your WebSphere CloudBurst and security related questions. If you have something specific in mind, leave a comment on the blog or reach out to me on Twitter. I'll do my best to address your question.
One of the new features that debuted in WebSphere CloudBurst 1.1 is the ability to resize the disks in a virtual image during the extend and capture (image customization) process. If you remember, the virtual images that exist in the WebSphere CloudBurst catalog are made up of multiple virtual disks. In WebSphere CloudBurst 1.0 a default size was used for the virtual disks, and this could not be changed, even during the image extension process. To be quite honest, we got a lot of feedback about this, so in version 1.1, while default sizes are still provided, you can specify the eventual size of each virtual disk during the image extension process.
As an example, consider the WebSphere Application Server Hypervisor Edition virtual image. This image contains four virtual disks: one for the WebSphere Application Server binaries, one for the WebSphere Application Server profiles, one for the IBM HTTP Server, and one for the operating system. The default sizes of these disks in the current version of the image are 6GB, 2GB, 1GB, and 12GB respectively, for a total of roughly 21GB. While that may be fine for some, what happens if you are going to install various other third-party software packages in the image? You may need more space on the operating system's virtual disk. Perhaps your WebSphere applications produce log files of considerable size. In that case you may want to increase the default size of the WebSphere Application Server profiles disk.
Those scenarios and more are exactly why the resizing capability was added. When you extend a WebSphere Application Server Hypervisor Edition virtual image in WebSphere CloudBurst 1.1, you are presented with the option to resize one or more of the virtual disks. In my case, I bumped the operating system disk up to 16GB from the default 12GB. Note that in addition to changing disk sizes, you can also specify the number of network interfaces for your custom image.
Obviously, when you increase the size of the disks within the virtual image, you also increase the storage requirements for that image when it is deployed to a hypervisor. To make that concrete: bumping the operating system disk from 12GB to 16GB takes each deployment of this image from roughly 21GB to roughly 25GB of storage, so a hypervisor with 500GB of free storage goes from supporting about 23 concurrent deployments to 20. Keep this in mind when you are calculating the upper-bound capacity of your cloud. If you want to see more about how this feature works, check out this video.
The ability to package custom maintenance and upload it as emergency fixes is perhaps a lesser-known feature of WebSphere CloudBurst, but it has been around since the product's initial release. This powerful feature allows you to build your own fix packages and then apply them the same way you would apply a .pak file or fix pack shipped by IBM.
Since IBM is delivering fixes and updates to all of the contents within WebSphere Application Server Hypervisor Edition virtual images (including the OS and IBM software components), you may wonder why you would even want to create your own maintenance packages. One reason would be if you switched out the SUSE Linux operating system shipped with the VMware ESX based images in favor of your own Red Hat operating system. In that case you would be responsible for maintenance to the operating system, and custom maintenance packages would be of interest to you. Another scenario where these custom maintenance packages come in handy would be if you created your own customized images that include non-shipped third-party software in addition to the software shipped in the images. If at some point you have the need to fix or update this software in a running virtual machine, custom maintenance packages provide you the vehicle with which to do just that.
What do these custom maintenance packages look like? In short, they are simply archives or ZIP files. The contents of the archive are largely decided by you, but there is one piece of metadata that is necessary if you want to use WebSphere CloudBurst to apply the maintenance. A file called service.xml is inserted into the root of the archive and tells WebSphere CloudBurst critical information about the custom fix archive. Here's an example of a service.xml file:
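A sketch of what that file can look like follows. The meaning of the Command, ImagePrereqs, and Location elements is described below; the exact nesting and element spellings shown here are illustrative, so consult the product documentation for the precise schema.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative service.xml for a custom maintenance package -->
<Service>
   <!-- Executable, supplied by you, that applies the maintenance -->
   <Command>applyFix.sh</Command>
   <!-- Working directory on the virtual machine -->
   <Location>/tmp/customfix</Location>
   <!-- Image versions this fix applies to -->
   <ImagePrereqs>
      <ImagePrereq><version></ImagePrereq>
   </ImagePrereqs>
</Service>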
Most notably, this metadata tells WebSphere CloudBurst what module or script to invoke to apply the maintenance (Command; you supply this executable), which image versions the fix applies to (ImagePrereqs), and the location of the working directory on the virtual machine (Location). In addition to the service.xml file and the executable, you can package up anything else, such as product binaries, needed to successfully apply the fix, upgrade, or maintenance.
If you haven't noticed, this is an extremely flexible mechanism and can be used for just about anything. I should point out that you can only apply a given fix once per virtual machine, so it's not good for something that you want to run repeatedly against a given machine (check out user-initiated script packages instead). Also, there is a 512MB size limit on the archives. Keep these restrictions in mind when you are deciding how to use custom maintenance packages. If you are interested in learning a bit more about custom maintenance packages or other maintenance techniques, check out this article I co-authored along with Xiao Xing Liang from the IBM SOA Design Center in the China Development Lab.
When it comes to managing users and user groups within WebSphere CloudBurst, you can choose to manage all aspects of those resources within the appliance. Mainly this means that you can define and store user information (including login passwords) within the appliance, and you can define and maintain user groups and their associated membership list on the appliance. While you can do this and be sure that your information is extremely secure, you may instead want to integrate with an existing LDAP server that has some of this user and user group data. WebSphere CloudBurst certainly allows you to integrate with LDAP servers, but what does that mean for you?
For starters, when you integrate WebSphere CloudBurst with an LDAP server and enable the LDAP authentication feature, you no longer specify password information when defining users of the appliance. When users log in, the password they specify is authenticated against information stored in the LDAP server. Naturally, if you add a new WebSphere CloudBurst user with LDAP authentication enabled, that user must be defined in the LDAP server. Otherwise, WebSphere CloudBurst will prevent you from adding the user because it has no way to authenticate that person.
From a user groups standpoint, integrating with LDAP means you can no longer modify user group membership. User membership in groups is determined by information in the LDAP server. As a result, the same rule concerning adding new users applies when adding new user groups: You cannot define new user groups that do not exist in the LDAP server.
If you want to take a look at what LDAP integration looks like with WebSphere CloudBurst, I put together a short video. Let me know what you think.
If you've read anything I've written about WebSphere CloudBurst up to this point you know all about patterns. Using the appliance you can easily and quickly build, deploy, and manage these representations of your middleware application environments. Today, I want to focus in on the deployment piece in particular and take a look at how you can easily automate this process.
You can use the WebSphere CloudBurst web console to deploy patterns, and when doing so you can even schedule the deployment to happen at a later date. This scheduling capability certainly gets you on the road to an automated deployment process, but what if you want to take it one step further and eliminate the need for someone to login and manually move around the web console to schedule automated deployments? In this case, you can use either the CLI or the REST interface that WebSphere CloudBurst offers.
In this post I thought I'd take a look at using the CLI interface in order to set the stage for some nice automation around pattern deployment. It starts out with a properties file that provides details about my deployment. This includes the cloud to deploy to, the pattern to deploy, password information, and the time at which the virtual system should start.
SYSTEM_NAME_PREFIX=New App Development
TARGET_CLOUD=Default ESX group
TARGET_PATTERN=WebSphere single server
DEPLOYMENT_DATE=03/01/2010 22:00
PASSWORD=<password>
Imagine that the properties file above gets written as the result of some other action, such as the completion of your application's build process. With the properties file in place (and I'll point out that your properties file can, and probably will, be more robust than the above), let's move on to the code that handles the deployment process based on the information in that file. First, we have a small amount of CLI code to read the properties file and parse the input data:
from datetime import datetime, timedelta
from java.util import Properties
from java.io import FileInputStream

# Read in the deployment properties written by the upstream process
# (the file location shown here is illustrative)
props = Properties()
props.load(FileInputStream('deployment.properties'))
systemName = props.getProperty('SYSTEM_NAME_PREFIX')
targetCloud = props.getProperty('TARGET_CLOUD')
targetPattern = props.getProperty('TARGET_PATTERN')
deploymentDate = props.getProperty('DEPLOYMENT_DATE')
password = props.getProperty('PASSWORD')

# Make the virtual system name unique by appending the deployment date
systemName = systemName + "_" + deploymentDate

# DEPLOYMENT_DATE is of the form MM/DD/YYYY HH:MM
parsedParts = deploymentDate.split(" ")
dateParts = parsedParts[0].split("/")
timeParts = parsedParts[1].split(":")
monthPart = int(dateParts[0])
dayPart = int(dateParts[1])
yearPart = int(dateParts[2])
hourPart = int(timeParts[0])
minutePart = int(timeParts[1])
Next is the code that actually schedules the pattern deployment:
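Here's a sketch of that logic, based on the walkthrough that follows. The cloud and pattern lookup syntax and the runInCloud argument names are assumptions on my part; check the CLI samples and documentation for the exact API.

import time

# Desired deployment time and the current time as datetime objects
deployTime = datetime(yearPart, monthPart, dayPart, hourPart, minutePart)
currentTime = datetime.now()

# Assuming the desired time has not already elapsed, compute the difference
# and add it, in seconds, to time.time() to produce the start time
startTime = time.time()
if deployTime > currentTime:
    delta = deployTime - currentTime
    startTime = startTime + (delta.days * 86400) + delta.seconds

# Retrieve the cloud and pattern named in the properties file
# (the indexed lookup style shown here is an assumption)
cloud = cloudburst.clouds[targetCloud][0]
pattern = cloudburst.patterns[targetPattern][0]

# Schedule the deployment; the argument names below are illustrative
pattern.runInCloud(cloud, {'name': systemName,
                           'password': password,
                           'startTime': startTime})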
First, we get the desired deployment time and the current time as datetime objects. Then, assuming the desired deployment time has not already elapsed, we calculate the difference between the two. This difference, in seconds, is added to the result of time.time() to come up with a start time. After that, we simply retrieve the cloud indicated in the properties file and call the runInCloud method on the pattern indicated. When calling runInCloud, we supply the name of the virtual system that will be created, password information, and the start time we calculated earlier. As a result of this call, a task is generated in the target WebSphere CloudBurst Appliance and the virtual system is started at the specified time. This happens in an automated fashion with no human intervention required.
That's really all there is to automating the pattern deployment process using the CLI. In a more complete, end-to-end scenario, you might envision the completion of one process, such as the application build mentioned above, resulting in the writing of the properties file and, in turn, a call into the CLI to deploy a pattern. As always, feel free to send me any comments or questions.
Every time I've visited with customers about WebSphere CloudBurst, without fail someone requests that the appliance support products besides the WebSphere Application Server. We started to address these requests with WebSphere CloudBurst 1.1 when we announced the availability of a DB2 Enterprise 9.7 trial virtual image specifically packaged for use in the appliance. Very recently we continued to respond to customer requests by extending the list of supported products in WebSphere CloudBurst to include WebSphere Portal.
The WebSphere Portal Hypervisor Edition, initially offered as a Beta product, is a virtual image packaging of WebSphere Portal 6.1.5 ready for use in the WebSphere CloudBurst Appliance. The image includes a pre-installed, pre-configured instance of WebSphere Portal. Also contained within the image are an IBM HTTP Server instance configured to route to the WebSphere Portal instance and a DB2 instance installed and configured as the external database for WebSphere Portal. The WebSphere Portal instance also includes Web Content Management enablement, along with several samples to help users get started right away.
The user experience when building and deploying WebSphere Portal patterns remains consistent with the existing experience for WebSphere Application Server and DB2 patterns. It's also worth noting that you can expect similar rapid deployment capability for WebSphere Portal patterns. I got a running virtual system, with all the parts mentioned above installed and configured (meaning no after-the-fact integration scripting was necessary), in under 15 minutes.
To see more, check out my new demonstration of the WebSphere Portal Hypervisor Edition for the WebSphere CloudBurst Appliance. If you have a WebSphere CloudBurst Appliance you can download the WebSphere Portal Hypervisor Edition image and a usage guide from here.
I was at a customer meeting the other day, and someone asked me if they could query WebSphere CloudBurst for an inventory of all of their virtual system deployments. This person was of course aware that he could go to the web console and very quickly view all of the virtual systems. What he wanted though was something that he could run to generate a report that contained all of this information. For a purpose like this, harnessing the WebSphere CloudBurst CLI is exactly the way to go.
I thought I'd write a simple CLI script that provides an example of how you could do this:
import sys
from datetime import datetime

def writeVSDetails(outFile, virtualSystem):
    outFile.write("\tVirtual system name: " + virtualSystem.name + "\n")
    outFile.write("\tCreated from pattern: " + virtualSystem.pattern.name + "\n")
    outFile.write("\tVirtual system status: " + virtualSystem.currentstatus_text + "\n")
    created = datetime.fromtimestamp(virtualSystem.created)
    outFile.write("\tVirtual system creation date: " + created.strftime("%B %d, %Y %H:%M:%S") + "\n")
    outFile.write("\tTotal virtual machines: " + str(len(virtualSystem.virtualmachines)) + "\n")

def writeVMDetails(outFile, virtualMachine):
    outFile.write("\t\tVirtual machine name: " + virtualMachine.name + "\n")
    outFile.write("\t\tVirtual machine display name: " + virtualMachine.displayname + "\n")
    outFile.write("\t\tCreated from image: " + virtualMachine.virtualimage.name + "\n")
    outFile.write("\t\tVirtual machine hypervisor: " + virtualMachine.hypervisor.name + " | " + virtualMachine.hypervisor.address + "\n")
    outFile.write("\t\tVirtual machine IP address: " + virtualMachine.ip.ipaddress + "\n")

# The report location is passed as the first argument in the CLI's batch mode
outFileLoc = sys.argv[1]
outFile = open(outFileLoc, 'w')

outFile.write("WebSphere CloudBurst Virtual System Inventory\n")
outFile.write("Total virtual systems: " + str(len(cloudburst.virtualsystems)) + "\n")

# Walk every virtual system and each of its virtual machines
for virtualSystem in cloudburst.virtualsystems:
    writeVSDetails(outFile, virtualSystem)
    for virtualMachine in virtualSystem.virtualmachines:
        writeVMDetails(outFile, virtualMachine)

outFile.close()
As a result of invoking this script using the CLI's batch mode, content is written to the file location supplied by the caller.
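The invocation looks something like this; the CLI launcher name and flags shown here are illustrative, so check the CLI documentation for the exact syntax:

cloudburst -h <appliance_host> -u <user_name> -p <password> -f inventoryReport.py /tmp/inventory.txt

The resulting report looks like this: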
WebSphere CloudBurst Virtual System Inventory
Total virtual systems: 3
    Virtual system name: Single server
    Created from pattern: WebSphere single server
    Virtual system status: Started
    Virtual system creation date: January 15, 2010 16:37:20
    Total virtual machines: 1
        Virtual machine name: Standalone 0
        Virtual machine display name: Single server cbvm-110 default
        Created from image: WebSphere Application Server <version>
        Virtual machine hypervisor: Ruth ESX | https://<hypervisor_host>/sdk
        Virtual machine IP address: <ip_address>
    Virtual system name: Development WAS Cluster
    Created from pattern: Custom WAS Cluster - Development
    Virtual system status: Started
    Virtual system creation date: January 18, 2010 14:08:46
    Total virtual machines: 2
        Virtual machine name: DMGR 0
        Virtual machine display name: Development WAS Cluster cbvm-112 dmgr
        Created from image: WebSphere Application Server <version>
        Virtual machine hypervisor: Ruth ESX | https://<hypervisor_host>/sdk
        Virtual machine IP address: <ip_address>
        Virtual machine name: Custom Node 1
        Virtual machine display name: Development WAS Cluster cbvm-111 custom
        Created from image: WebSphere Application Server <version>
        Virtual machine hypervisor: Ruth ESX | https://<hypervisor_host>/sdk
        Virtual machine IP address: <ip_address>
    Virtual system name: DB2 for development use
    Created from pattern: DB2
    Virtual system status: Started
    Virtual system creation date: January 18, 2010 14:09:58
    Total virtual machines: 1
        Virtual machine name: DB2 Enterprise Server 32bit Trial 0
        Virtual machine display name: DB2 for development use cbvm-113
        Created from image: DB2 Enterprise 9.7 32-bit Trial
        Virtual machine hypervisor: Ruth ESX | https://<hypervisor_host>/sdk
        Virtual machine IP address: <ip_address>
I withheld IP addresses and host names above for obvious reasons, but if you ran the script against your environment you would see actual host name and IP address values. The script above is written once, and it can be subsequently run anytime you want an inventory of virtual systems running in your WebSphere CloudBurst cloud. There's other information available for virtual systems and virtual machines that I didn't show here, and you can retrieve it if necessary for your inventory report. In addition, I chose to print this information as regular text in a file supplied by the caller, but you might choose to generate the report in another format including XML, JSON, or anything else for that matter.
-- Dustin Amrhein
p.s. As with any sample code or script I provide here, the above is only a sample and offered as-is.
If you are going to install and use WebSphere CloudBurst in your own environment, it is very likely that you would want at least two appliances. Perhaps you want to have a standby appliance in case of a failure on the main appliance, or maybe you have different teams that are looking to utilize the appliance in different data centers. In any case, once you install multiple appliances there's another requirement that will pop up pretty quickly. Naturally you are going to want to share custom artifacts among the various WebSphere CloudBurst boxes.
By custom artifacts, I mean virtual images, patterns, and script packages. Script packages have been easy enough to share since WebSphere CloudBurst 1.0, because you can simply download the ZIP file from one appliance and upload it to another. However, there are some enhancements in WebSphere CloudBurst 1.1 that make it easy to share both patterns and images among your different appliances.
As far as patterns go, there is a new script, patternToPython.py, included in the samples directory of the WebSphere CloudBurst command line interface package. This script transforms a pattern you specify into a Python script. The resulting Python script can then be run against a different WebSphere CloudBurst appliance (using the CLI), and the result is that the pattern is created on the target appliance. You do need to be sure that the artifacts the pattern references (script packages and virtual images) exist on the target appliance and have the exact same names as they do on the appliance from which the pattern was taken. There are no other caveats, and this new sample script makes it really simple to move patterns between appliances; a sketch of the round trip follows below.
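The CLI launcher name, flags, and the script's output handling shown here are assumptions on my part, so check the CLI package's documentation for the exact usage:

# On the source appliance: transform the pattern into a Python script
cloudburst -h <source_appliance> -u <user> -p <password> -f samples/patternToPython.py "WebSphere single server" > pattern.py

# Against the target appliance: recreate the pattern
cloudburst -h <target_appliance> -u <user> -p <password> -f pattern.py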
For virtual images, a new feature was added that allows you to export a virtual image from the WebSphere CloudBurst console. Simply select a virtual image, specify a remote machine (any machine with SCP enabled), and click a button to export the image as an OVA file. This OVA file can then be added to another WebSphere CloudBurst catalog using the normal process for adding virtual images. You can see this feature in action here.
Stay tuned for more information about some of the handy new features in WebSphere CloudBurst 1.1. We also should have a comprehensive look at the new release coming soon in a developerWorks article.