When it comes to managing users and user groups within WebSphere CloudBurst, you can choose to manage all aspects of those resources on the appliance. Primarily, this means you can define and store user information (including login passwords) on the appliance, and you can define and maintain user groups and their associated membership lists there as well. While you can do this and be confident that your information is extremely secure, you may instead want to integrate with an existing LDAP server that already holds this user and user group data. WebSphere CloudBurst certainly allows you to integrate with LDAP servers, but what does that mean for you?
For starters, when you integrate WebSphere CloudBurst with an LDAP server and enable the LDAP authentication feature, you no longer specify password information when defining users of the appliance. When users log in, the password they specify will be authenticated against information stored in the LDAP server. Naturally, if you add a new WebSphere CloudBurst user with LDAP authentication enabled, that user must be defined in the LDAP server. Otherwise, WebSphere CloudBurst will prevent you from adding the user because it has no way to authenticate that person.
From a user group standpoint, integrating with LDAP means you can no longer modify user group membership. User membership in groups is determined by information in the LDAP server. As a result, the same rule that applies to adding new users applies to adding new user groups: you cannot define user groups that do not exist in the LDAP server.
If you want to see LDAP integration with WebSphere CloudBurst in action, I put together a short video. Let me know what you think.
Since bundles are such a core component of the IBM Image Construction and Composition Tool, I thought it would help to take a closer, more thorough look at them than I did in my post last week (if you have not already, I suggest reading the overview post before continuing). To help us in our closer examination, we will consider an example bundle I built using the IBM Image Construction and Composition Tool. The example bundle I built encapsulates the logic to install and configure WebSphere Application Server Community Edition. Let's take this step by step.
The first part of the bundle is the General section. This section allows you to provide a name and description for the bundle, the bundle ID and version, and the products represented by the bundle.
The next section of a bundle is the Requirements section. In this section, you can define the operating system and software requirements for your bundle. In the OS section, you specify the type, distribution, and version level of the OS your bundle requires. In the software section, you can indicate that your bundle requires other bundles defined in the IBM Image Construction and Composition Tool. You do this by providing the bundle ID for required bundles.
Next, we move on to the Install section of the bundle. Two major subsections make up this section. The first is the Files to Copy subsection. Here, you supply files, either through a file upload dialog or by providing a URI, and you specify a destination directory. When you add a bundle to an image and initiate the synchronization process, the IBM Image Construction and Composition Tool automatically copies the files you list here to the specified destination directory on the virtual machine. In the sample WebSphere Application Server Community Edition bundle, I specify a single install.sh file to copy to the virtual machine.
The second major subsection of the Install section is the Command subsection. In this section, you will specify the installation command that the IBM Image Construction and Composition Tool should automatically invoke during the synchronization process. Additionally, you can define variables that you want to make available to your installation scripts. The tool makes these available as environment variables for the process within which your script runs. In the sample bundle, I tell the Image Construction and Composition Tool to invoke the install.sh script specified above, and I define parameters that specify the location of the binaries to install, the location to install the binaries on disk, and more.
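To make that concrete, here is a minimal sketch of what such an install.sh might look like. The variable names (WASCE_BINARY_DIR and WASCE_INSTALL_DIR) and the archive name are illustrative assumptions rather than the exact parameters from my bundle; the point is simply that whatever variables you define in the Command subsection show up as environment variables when the script runs.

    #!/bin/sh
    # Illustrative install.sh for a WASCE bundle (not the exact script from
    # my bundle). WASCE_BINARY_DIR and WASCE_INSTALL_DIR are hypothetical
    # variables defined in the bundle's Command subsection; the tool makes
    # them available here as environment variables.
    set -e

    : "${WASCE_BINARY_DIR:=/tmp/wasce}"       # where the Files to Copy step placed the binaries
    : "${WASCE_INSTALL_DIR:=/opt/IBM/WASCE}"  # where to install them

    mkdir -p "$WASCE_INSTALL_DIR"
    tar -xzf "$WASCE_BINARY_DIR/wasce-server.tar.gz" -C "$WASCE_INSTALL_DIR"
    echo "WASCE unpacked into $WASCE_INSTALL_DIR"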
The next section in a bundle is the Configuration section. This section allows you to define configuration operations: actions that execute during each deployment of an image containing the bundle. You can define zero or more configuration operations in a bundle, and each configuration operation definition contains three major subsections. The first is the Files to Copy subsection. This subsection is similar to the Files to Copy subsection in the Install section: you provide files or file URIs along with a destination directory to which the tool will copy them. The WebSphere Application Server Community Edition bundle contains a single configuration operation called ConfigWASCE. In its Files to Copy section, I define a single file to copy into the image's activation engine directory.
The second major subsection in the configuration operation definition is the Command subsection. As in the Command subsection of the Install section, you specify a command to execute and optionally associate variables with it. There is a key difference between the command definitions for configuration operations and installation operations: the Image Construction and Composition Tool invokes the command you specify for installation operations exactly ONCE, at image creation (synchronization) time, whereas commands you specify in a configuration operation definition execute EACH time someone deploys an image containing your bundle. In the sample bundle, my ConfigWASCE.sh script executes automatically for each deployment. The tool packages the image so that the parameters defined in the Arguments list (including num_servers, WASCE_HOME, and more) are automatically passed to the ConfigWASCE.sh script.
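As a sketch of the deploy-time side, a configuration script like ConfigWASCE.sh might look roughly like the following. The num_servers and WASCE_HOME names come from the Arguments list above, but everything else, including the assumption that the arguments arrive as environment variables, is illustrative.

    #!/bin/sh
    # Illustrative ConfigWASCE.sh, run by the activation engine on every
    # deployment of an image containing the bundle. num_servers and
    # WASCE_HOME come from the Arguments list; treating them as environment
    # variables is an assumption made for this sketch.
    set -e

    : "${num_servers:=1}"
    : "${WASCE_HOME:=/opt/IBM/WASCE}"

    echo "Configuring $num_servers WASCE server instance(s) under $WASCE_HOME"

    i=1
    while [ "$i" -le "$num_servers" ]; do
        instance_dir="$WASCE_HOME/instances/server$i"
        mkdir -p "$instance_dir"
        # A real script would template the server configuration and register
        # each instance as a service; here we just record the instance number.
        echo "instance=$i" > "$instance_dir/instance.properties"
        i=$((i + 1))
    done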
The final major subsection of a configuration operation definition is the Dependencies section. This allows you to define other services on which your configuration operation is dependent. This can include other configuration operations in the same or other bundles, and it can include general operating system services. The WebSphere Application Server Community Edition sample bundle includes a few dependencies.
The Install and Configuration sections are really the meat of your bundle, but there is more. There is a Firewall section that allows you to define port ranges and associated protocols that the IBM Image Construction and Composition Tool should ensure are open when provisioning an image containing your bundle. Currently, the tool supports firewall configuration data when building images for the IBM Cloud. The Reset section of the bundle allows you to define tasks that should execute when capturing the image back into the Image Construction and Composition Tool (after synchronization completes). This allows you to clean up the state of the image after the install completes. Reset configuration is not currently available in the alphaWorks version of the tool. Finally, there is a License section where you can define software licenses associated with your bundle. The tool automatically adds these licenses to the constructed image's metadata, thereby allowing deployment tools to prompt the user to accept all pertinent licenses. The WebSphere Application Server Community Edition sample bundle defines a product license.
Of course, once the bundle definition is complete, you can leverage it to compose and produce an image that you can use in WebSphere CloudBurst, Tivoli Provisioning Manager, or on the IBM Cloud. In the case of the WebSphere Application Server Community Edition sample bundle, I used it to create an image that I loaded into WebSphere CloudBurst and used to build patterns.
I hope this helps to provide a better idea of what bundles are all about in the Image Construction and Composition Tool. Don't forget to take a look at the overview demo and stay tuned for more to come about this new tool!
Lately, I have run into multiple situations where an IBM Workload Deployer user has been trying to decide exactly how they want to create their customized images for the cloud. Essentially, they have been trying to decide whether to use the native extend and capture capabilities of IBM Workload Deployer, or to pursue the use of the Image Construction and Composition Tool (also included with the appliance). The conversations have been interesting and challenging, but more importantly, they have been a reminder that constructing enterprise-ready environments for the cloud does not happen by magic. It takes thought, deliberate planning, sustainable design, and the tools to carry everything out.
The tools part we have covered. I have every confidence, bolstered by user experience after user experience, that IBM Workload Deployer and associated tools (like the Image Construction and Composition Tool) equip you to build highly customized, cloud-based application environments. In this post, I want to focus on the thought process that goes into deciding how to build your customized environment. Specifically, I would like to talk about important points to consider as you try to decide whether to use the native extend and capture capabilities of IBM Workload Deployer or the Image Construction and Composition Tool.
To be clear from the outset, I am not trying to provide a decision flowchart in this post. For all intents and purposes, that would be next to impossible. Instead, I want to pose some important questions that you should ask yourself, along with the reasons why I believe those questions matter. Keeping in mind that this is not an all-inclusive list, here goes:
Question: Are the customizations that you want to make congruent with an IBM-supplied image?
Reason: One of the first decisions you should make is whether or not you can start with an IBM-supplied image as the base for your customization. You need to know what middleware elements (type and version) make up your environment and what operating system should host that environment (version and distribution). You can match that information against the list of content that IBM supplies. If there is a match, you should start by looking at extend and capture to customize that image to meet your needs. If there is no direct match, you may be looking at the Image Construction and Composition Tool.
Question: Does your custom content supplement middleware content supplied in an IBM image?
Reason: If you simply need to add components that supplement software already in an IBM image, I believe it is best to first examine the use of extend and capture. Whether these components are IBM software or not is irrelevant; the extend and capture functionality makes no distinction.
Question: How configurable do you want to make the custom content in your image?
Reason: If you are adding content to the image, you need to think about just how configurable it needs to be. When you use extend and capture, you add the content to an existing image in a manner that is essentially opaque to IBM Workload Deployer. To configure that content, you need script packages, and you must make sure they are part of every pattern you create based on the image. Alternatively, if you use the Image Construction and Composition Tool, you can embed configuration behavior in the image's activation engine and expose deploy-time parameters without needing to include script packages in every single pattern. As an example, if you need to add a monitoring agent to your environment, you would likely do this via extend and capture and end up with a fairly simple script package to configure that agent during deployment. If, however, you need to create an image with a custom database, you would likely favor the Image Construction and Composition Tool, since you could embed common deploy-time configuration parameters directly in the image. For a database, there are likely to be many more deploy-time configuration parameters you want to expose than for a simpler monitoring agent.
Question: Is your main focus on making operating system changes?
Reason: If your primary focus is on making operating system changes AND the answer to the first question is that your target content aligns well with IBM-supplied images, then extend and capture is where you want to start. Of course, you need to make sure that you can make all necessary changes to the OS with extend and capture, but I will say that this capability is not very restrictive at all.
Admittedly, this is a short list, but I believe it is a good starting point for deciding on one approach versus the other. Also, I would be remiss not to point out that these tools are absolutely not mutually exclusive. Many users I work with use a combination of the two approaches. In fact, there are some use cases that call for both tools: start by creating a completely custom image in the Image Construction and Composition Tool, and then subject that image to the extend and capture process in IBM Workload Deployer to customize it for a particular purpose, team, project, etc. I hope you find this helpful, and I welcome your feedback or thoughts!
I'm out at the RSA conference in San Francisco this week, and I'm expecting a lot of good conversations about WebSphere CloudBurst and security. This topic always comes up when I'm out and talking to customers, and I approach it from a few different angles.
First of all, WebSphere CloudBurst enables the creation of on-premise clouds (clouds in your data center). This means that you retain control over the resources that make up and support your cloud, and you have the ability to very tightly secure said resources. Notice that I say "you have the ability". I'm careful to point out that on-premise clouds do not inherently make your environment secure. If you don't already have a robust security strategy in place within your enterprise, then simply moving to a cloud model will not solve much. That being said, if you do have a comprehensive security strategy in place, one built around customized processes and access rights, then on-premise clouds are likely to make much more sense for you.
Moving beyond the opportunity for customized security controls provided by on-premise clouds, WebSphere CloudBurst delivers additional, unique security features. It starts on the outside with the tamper-resistant physical casing. If a malicious user attempts to remove the casing to get to the inner contents, the appliance is put into a dormant state, and it must be sent to IBM to be reset. "So what!" you say. If the user removes the casing and gets to the contents, couldn't they simply read the contents off the flash memory or hard disks directly, or insert them into another WebSphere CloudBurst Appliance and read them from there? Nope. All of the contents stored on the appliance's flash memory and hard disks are encrypted with a private key that cannot be changed and is unique to each and every appliance.
If you are at all familiar with WebSphere CloudBurst, you know that the appliance dispenses and monitors virtual systems running on a collection of hypervisors. Obviously then, the appliance must remotely communicate with the hypervisors. In order to secure this communication, all information between WebSphere CloudBurst and the hypervisors (and vice versa) is encrypted. This encryption is achieved by using an SSL certificate that is exchanged when a hypervisor is defined in WebSphere CloudBurst. This certificate must be accepted by a user, thus preventing rogue hypervisors from being defined in WebSphere CloudBurst.
Finally, WebSphere CloudBurst provides for the definition of users and user groups with varying permissions and resource access rights in the appliance. You don't have to turn over the keys to your cloud kingdom when you add a user to the appliance. You have the capability to define varying permissions (from simply deploying patterns, to creating them, all the way up to administering the cloud and appliance), and you have the ability to control access to resources (patterns, virtual images, script packages, cloud groups, etc.) at a fine-grained level. These two capabilities combine to allow you to control not only what actions a user can take, but also on which resources they can take those actions.
WebSphere CloudBurst was designed with a focus on delivering a secure cloud experience, and I think it hit the mark. I'm sure I didn't address all of your WebSphere CloudBurst and security-related questions. If you have something specific in mind, leave a comment on the blog or reach out to me on Twitter. I'll do my best to address your question.
I hardly ever have a conversation about WebSphere CloudBurst, or generally cloud computing for application middleware, without the topic of databases coming up. Databases are such an important piece of nearly every application middleware environment, so users want to be sure that whatever they do for their application servers, they can also do for the databases on which their applications rely. That is why the capability to deploy DB2 from WebSphere CloudBurst has been around for nearly as long as the capability to deploy WebSphere Application Server.
Even though DB2 deployment capability has been around for a while, there are still some common misconceptions regarding the offering. First, I have talked to a fair number of users who are under the impression that we only offer a trial version of DB2 for deployment via WebSphere CloudBurst. While that was true for the first few months of the offering, it is no longer the case. For several months now, a fully supported, 64-bit, production-ready DB2 image has been available for use in WebSphere CloudBurst. If you were waiting for a DB2 image that you could go live with, wait no longer!
The other misconception, or rather, point of confusion, arises from the fact that the DB2 image for WebSphere CloudBurst is not, by name, a Hypervisor Edition image. I can assure you the difference is in name only. The DB2 image looks and behaves like any other IBM Hypervisor Edition image once you load it into the appliance. You can use it to build and deploy patterns in the same way you use other images in WebSphere CloudBurst. You may just have trouble finding it if you search for 'DB2 Hypervisor Edition' as opposed to 'DB2 Server for WebSphere CloudBurst Appliance.'
Instead of going into further detail, I want to refer you to a blog entry from a fellow IBMer, Leon Katsnelson. Leon is a program director for DB2 and is responsible for the team that develops and delivers the DB2 image for WebSphere CloudBurst. In his most recent post, he provides a nice overview of the image and gives good information for those looking to use DB2 and WebSphere CloudBurst (there is also a bit on cloud computing at the beginning that I think is spot on). Check out Leon's post, and let us know what you think!
If you are going to install and use WebSphere CloudBurst in your own environment, it is very likely that you would want at least two appliances. Perhaps you want to have a standby appliance in case of a failure on the main appliance, or maybe you have different teams that are looking to utilize the appliance in different data centers. In any case, once you install multiple appliances there's another requirement that will pop up pretty quickly. Naturally you are going to want to share custom artifacts among the various WebSphere CloudBurst boxes.
When I say custom artifacts, I mean virtual images, patterns, and script packages. Script packages have been easy enough to share since WebSphere CloudBurst 1.0 because you can simply download the ZIP file from one appliance and upload it to another. However, there are some enhancements in WebSphere CloudBurst 1.1 that make it easy to share both patterns and images among your different appliances.
As far as patterns go, there is a new script included in the samples directory of the WebSphere CloudBurst command line interface package called patternToPython.py. This script transforms a pattern you specify into a Python script. The resulting Python script can then be run against a different WebSphere CloudBurst appliance (using the CLI) to recreate the pattern on the target appliance. You need to be sure that the artifacts the pattern references (script packages and virtual images) exist on the target appliance and have the exact same names as they do on the appliance from which the pattern was taken. Beyond that, there are no caveats, and this new sample script makes it really simple to move patterns between appliances.
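To give a rough idea of the flow, here is a sketch of what moving a pattern might look like from the command line. Treat every detail here as an assumption: the CLI launcher name, the flags, and the way patternToPython.py accepts the pattern name and emits the generated script may differ in your version of the CLI package, so check its documentation for the exact syntax.

    # Hypothetical sketch; the launcher name, flags, and script arguments
    # below are assumptions, not confirmed syntax.
    # 1. On the source appliance, generate a Python script from a pattern.
    ./cloudburst -h source-appliance.example.com -u admin -p passw0rd \
        -f samples/patternToPython.py "My WebSphere Pattern" > mypattern.py

    # 2. Run the generated script against the target appliance to recreate
    #    the pattern there (referenced script packages and virtual images
    #    must already exist under the same names).
    ./cloudburst -h target-appliance.example.com -u admin -p passw0rd \
        -f mypattern.py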
For virtual images, a new feature was added that allows you to export a virtual image from the WebSphere CloudBurst console. Simply select a virtual image, specify a remote machine (any machine with SCP enabled), and click a button to export the image as an OVA file. This OVA file can then be added to another WebSphere CloudBurst catalog using the normal process for adding virtual images. You can see this feature in action here.
Stay tuned for more information about some of the handy new features in WebSphere CloudBurst 1.1. We also should have a comprehensive look at the new release coming soon in a developerWorks article.
One of my favorite books from childhood is If You Give a Mouse a Cookie. Although targeted at children, the book illustrates a frequently occurring human behavior that is important for all of us to understand. That behavior is the tendency toward escalating expectations. The book illustrates this by starting out with the simple action of giving a mouse a cookie. The mouse in turn asks for a glass of milk, various flavors of cookies, and on and on, until the mouse circles back to asking for another cookie.
Nearly all of us exhibit this same kind of behavior, and it can often produce positive results. In particular, in IT we always push for the next best thing or a slightly better outcome. Personally, I am no stranger to this behavior because I experience it from WebSphere CloudBurst users quite frequently. In these cases, it usually revolves around one particular outcome: speed of deployment.
Bar none, users of WebSphere CloudBurst are experiencing unprecedented deployment times for the environments they dispense through the appliance. The fact that we say you can deploy meaningful enterprise application environments in a matter of minutes is far beyond just marketing literature. Our users prove it every day. However, just because they are deploying things faster than ever does not mean they are content to rest on those achievements. They want to push the envelope, and I love it.
For our users looking to achieve even speedier deployment times, I offer up one reminder and one tip. First, the reminder: analyze all of your script packages to ensure you are using the right means of customization. If some scripts run considerably longer than most of your other script packages, you may want to at least consider applying that customization by creating a custom image instead. You still need to adhere to the customization principles outlined here, but you may benefit from applying the customization once in an image and avoiding the penalty of applying it during every deployment. You may also be able to split this customization between a custom image and script packages. For instance, instead of having a script that installs and configures monitoring agents, you may install the agents in a custom image and configure them during deployment. Being selective about how and when you apply customizations can go a long way toward improving your deployment times.
In addition to the reminder above, I also have a tip. Take a look at all of the script packages you use in pattern deployments and see if there are any that you can apply in an asynchronous manner. In other words, identify customizations that need to start, but not necessarily complete, as part of the deployment process. Going back to our example of configuring monitoring agents during the deployment process, it may be important to kick off the configuration script during deployment, but is it crucial to wait on the results? Maybe not. If it is not, consider defining the executable argument in your script package in a manner that kicks off the execution and proceeds, e.g. nohup executable command &. This approach can save deployment time in certain situations.
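For instance, a minimal sketch of such a fire-and-forget script package executable might look like the following (the configure_agent.sh script and its paths are purely illustrative):

    #!/bin/sh
    # Illustrative wrapper used as a script package executable. It kicks off
    # a long-running configuration task in the background and returns
    # immediately, so the deployment does not block waiting for it to finish.
    nohup /opt/scripts/configure_agent.sh > /tmp/configure_agent.log 2>&1 &
    echo "Agent configuration started; progress logged to /tmp/configure_agent.log"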
My advice to users of WebSphere CloudBurst: keep pushing your deployment process! Pare as many minutes off the process as you can. I hope that the tips above help in that regard, and be sure to pass along other techniques that you have found helpful.
In a recent post, Joe Bohn detailed some of the new capabilities and enhancements that come along with the recently delivered IBM Workload Deployer v3.1. To be sure, there are many valuable new features such as PowerVM support for virtual application patterns, the Plugin Developer Kit, WebSphere Application Server Hypervisor Edition v8, and more. Each of these topics probably merits its own post, but today I want to talk about something I did not mention above. Specifically, I want to talk about the announcements regarding the IBM Image Construction and Composition Tool (ICCT) and what that means for IBM Workload Deployer users.
You may have read an earlier post that I wrote about the ICCT, but allow me a brief overview here. In short, the ICCT enables the construction of custom virtual images for use in IBM Workload Deployer. You use the tool to create virtual images, much like IBM Hypervisor Edition images, and then you can use those custom images (containing whatever content you need) to create your own custom virtual system patterns. The key point about the custom images you create with the ICCT is that they are dynamically configurable. That is, the tool helps you to create the images in such a way that you can defer configuration until deploy time rather than burning such configuration directly into an image. For those of you familiar with virtual image creation, you know this type of 'intelligent construction' is a huge step towards keeping image inventory at a reasonable level.
Okay, enough of a general overview for now. Let's talk about the two new items of note regarding IBM Workload Deployer v3.1 and the ICCT. The first thing you should know is that starting in IBM Workload Deployer v3.1, the ICCT is shipped with the appliance. This means that you do not need to go anywhere else in order to get your hands on the tool to start creating your custom images. You simply log into IBM Workload Deployer and click the download link on the appliance's welcome panel (shown in the image below).
Getting your hands on the tool is one piece of the puzzle, but using it is quite another. While the ICCT has been available as an alphaWorks project for some time, that also implies that there has never been official support for the tool. That changes starting with IBM Workload Deployer v3.1. The ICCT is now a generally available product from IBM, and that means that it is fully and officially supported as well. Further, the images you create using the tool are also officially supported for use as building blocks of your IBM Workload Deployer virtual system patterns. For many of you who have been using the ICCT for some time, but have been hesitant to expand use because of the lack of a formal support statement, you should now feel free to charge forward!
I hope this helps clear up exactly what the new Image Construction and Composition Tool announcements that were part of IBM Workload Deployer v3.1 actually mean. I cannot wait to hear about how you all are putting the ICCT to use with IBM Workload Deployer. Finally, don't forget to send us any questions, comments, or other feedback that you may have regarding this or any other new feature in IBM Workload Deployer v3.1!
One of the key benefits of WebSphere CloudBurst adoption is rapid -- seriously fast -- deployment of middleware application environments. Our users are leveraging the appliance to bring up enterprise-class middleware environments in mere minutes. If you know a little bit about WebSphere CloudBurst, that may be a little surprising, considering the appliance dispenses large virtual images over the network to a farm of hypervisors. You may ask how the appliance can achieve such rapid deployments given the sheer physics involved in transferring large amounts of data over a network. The simple answer, of course, is caching!
WebSphere CloudBurst creates a cache for each unique virtual image on datastores associated with the hypervisors in your cloud. On subsequent deployments of the same virtual image to the same datastore, WebSphere CloudBurst does not need to transfer the image over the wire. It simply uses the virtual disks that are in the cache on the datastore. In the context of the virtual image cache, the deployment process goes something like this:
1. WebSphere CloudBurst identifies the images necessary to deploy the pattern selected by the user.
2. WebSphere CloudBurst identifies the hypervisors and associated datastores that will host the virtual machines created during deployment.
3. WebSphere CloudBurst checks the selected datastores to see if they already have caches for the images it will be deploying. From here, one of two things happens:
   - WebSphere CloudBurst detects that there is no cache on the datastore and transfers the images over to the hypervisor, thereby creating the cache on the underlying datastore.
   - WebSphere CloudBurst detects that there is a cache on the selected datastore and uses that cache in lieu of transferring the disk over the wire.
The process may sound complicated, but it is completely hidden from you, the user. You do not need to know how the cache works because WebSphere CloudBurst handles all of these interactions. So why am I telling you all of this? As a WebSphere CloudBurst user, it is good to be aware of the cache for two main reasons. First, you need to account for the storage space the cache requires when doing capacity planning for your WebSphere CloudBurst cloud. Second, anytime you upload a new image or create one through extend and capture, I would strongly suggest you prime the cache for that new image right away. You can do this by simply deploying a pattern built on the image to each unique hypervisor/datastore combination in your environment. This may require a temporary rearrangement of cloud groups, but it is a simple process, and it guarantees rapid deployments for all users of the new image.
I hope this sheds a little light on a subject we do not discuss too often. As always, if you have any questions, do not hesitate to let me know!
One of the new features that debuted in WebSphere CloudBurst 1.1 is the ability to resize the disks in a virtual image during the extend and capture (image customization) process. If you remember, the virtual images in the WebSphere CloudBurst catalog are made up of multiple virtual disks. In WebSphere CloudBurst 1.0, a default size was used for the virtual disks, and it could not be changed, even during the image extension process. To be quite honest, we got quite a bit of feedback about this, so in version 1.1, while default sizes are still provided, you can specify the eventual size of each virtual disk during the image extension process.
As an example, consider the WebSphere Application Server Hypervisor Edition virtual image. This image contains four virtual disks: one for the WebSphere Application Server binaries, one for the WebSphere Application Server profiles, one for the IBM HTTP Server, and one for the operating system. The default sizes of these disks are 6GB, 2GB, 1GB, and 12GB respectively, for a total of roughly 21GB. While that may be fine for some, what happens if you are going to install various other third-party software packages in the image? You may need more disk space on the operating system's virtual disk. Perhaps your WebSphere applications produce log files of considerable size. In that case, you may want to increase the default size of the WebSphere Application Server profiles disk.
Those scenarios and more are exactly why the resizing capability was added. When you extend the WebSphere Application Server Hypervisor Edition virtual image in WebSphere CloudBurst 1.1, you are presented with the option to resize one or more of the virtual disks:
In the case above, the operating system disk is bumped up to 16GB from its default size of 12GB. Also note that in addition to changing disk sizes, you can specify the number of network interfaces for your custom image.
Obviously, when you increase the size of the disks within the virtual image, you also increase the storage requirements for that image when it is deployed to a hypervisor. Keep this in mind when you are calculating the upper-bound capacity of your cloud. If you want to see more about how this feature works, check out this video.
Though I feel like we've come a long way in clearing up some of the initial confusion surrounding IBM CloudBurst and WebSphere CloudBurst, I still get quite a few basic questions about the solutions. The two most common questions are, 'Are they different products?' and 'Can/should I use them together?' I put together a really brief overview that answers these questions and covers the basics of the combined solution. I hope it provides a good introduction!