I've blogged previously about some of IBM's work in the public cloud, and specifically about our partnership with Amazon to deliver IBM offerings on the Amazon EC2 cloud. On this blog I've mostly focused on IBM WebSphere offerings on the Amazon cloud, but coming up on October 1st, you'll get a chance to hear about and see the full breadth of IBM software offerings available on the Amazon EC2 cloud. Better still, by attending the virtual developer's day, you'll get access to try out some of this software absolutely free!
By leveraging IBM software on Amazon EC2, users can very quickly benefit from the advantages of a cloud computing approach to IT. Instead of spending valuable time and resources installing, updating, and otherwise maintaining software, users can simply activate instances of the IBM software of their choosing on Amazon's infrastructure. Besides getting up and running quickly, users can also leverage cloud computing techniques to dynamically scale their entire application infrastructure up and down. This elasticity helps ensure your end-users enjoy a consistently satisfactory experience when accessing your applications on the cloud.
You can sign up for the Cloud Computing for Developers day here. In the meantime, you can learn more about IBM offerings on the Amazon EC2 cloud by visiting the IBM/Amazon EC2 landing page. I hope you take advantage of the chance to hear from both IBM and Amazon experts and learn how IBM software delivered on the Amazon EC2 cloud can make a real difference for your business.
A few weeks ago, I had a conversation with a current WebSphere customer about the potential value they could derive from the use of IBM Workload Deployer. Right away, this customer saw value in the consistency that a patterns-based approach could afford them. It was clear that patterns eliminate the uncertainty that can make its way into even the best-planned deployment processes. Initially though, the customer questioned the value of being able to do fast deployments because, in their words, "We don't deploy WebSphere environments that often." So, we continued our discussion, and then they asked an important question that I encourage all of our users to ask: "Why don't we deploy our WebSphere environments more frequently?"
It is interesting to talk with WebSphere users that have a long history with our products. Oftentimes, they have been taking a shared approach to WebSphere installations for many, many years. They develop innovative isolation schemes that allow them to carve up a single WebSphere installation (cell) among multiple application teams. This allows them to avoid setting up a cell for each application deployment and saves them the associated time. However, having talked with many users taking this approach, I can say it is not without its challenges.
As was the case with the customer I mentioned above, users typically make trade-offs when electing for larger, shared cells. As an example, if multiple application teams with different types of applications use a single cell, applying fixes and upgrades to that cell becomes much more complex. After all, you now have to coordinate plans across a number of different teams and find a maintenance window that fits all of their needs. For the same reason, trying out incremental function delivered via our feature packs is much more arduous in these types of cells. Additionally, administrative controls become more complex since teams with varying needs all require administrative access. Admittedly, this gets simpler with the newer fine-grained security models in WebSphere Application Server v7 and v8, but it still requires organizational discipline and process.
At this point I should be clear that I am not denigrating the shared cell approach. It can work well, and we have many facilities built into the WebSphere Application Server product to support that model. However, if you are using this approach and find yourself stumbling more than you would like, then I strongly suggest you explore the patterns-based approach of IBM Workload Deployer. By deploying patterns that represent your WebSphere cells using IBM Workload Deployer, you can quickly and consistently set up multiple WebSphere Application Server cells to support the varying needs of your application teams. You will still avoid spending an inordinate amount of time installing and configuring cells, as that is an automated part of pattern deployment, and your application teams will still get the resources they need. Further, this can liberate your application teams in terms of how they apply maintenance, install upgrades, and absorb new function in the form of feature packs.
I am not suggesting a complete pendulum swing in your approach to managing multiple application environments. There is definitely a happy medium in terms of how many cells you end up with. After all, you do not want to trade one set of problems for the problem of managing far too many cells. However, I do think that decomposing monolithic, multi-purpose cells into smaller, more purposeful cells can be beneficial. In thinking through this different approach, you may come to the same conclusion the customer I mentioned above did: IBM Workload Deployer's rapid deployment capabilities are indeed valuable if you take a slightly different view of your current processes.
"What is the difference between WebSphere CloudBurst and IBM CloudBurst?" After the IBM Pulse 2010 event this week, I'm hearing this question in my sleep. It came from both our customers and other IBMers, and it's not hard to understand the confusion caused by the name similarity. Let's take a shot at clearing up any confusion around the two separate offerings and explain the complementary value WebSphere CloudBurst can provide to IBM CloudBurst.
Both IBM CloudBurst and WebSphere CloudBurst provide capabilities to enable private, or on-premise, clouds. The main differences between the products are the degree to which they are purpose-built and the form in which they are delivered. First off, the IBM CloudBurst solution consists of three primary elements: service management software, hardware, and IBM services. The software portion of the package provides general purpose (a very important distinction) provisioning, workflow, and management capabilities for the services that make up your cloud. These services could consist of WebSphere software or any other software that you can package into a virtual image format. The hardware is the actual compute resource for your on-premise cloud, and the IBM services portion of the package provides a fast path to getting started with your cloud implementation.
On the other hand, WebSphere CloudBurst is a cloud management hardware appliance that delivers function to create, deploy, and manage virtualized WebSphere application environments in an on-premise cloud. WebSphere CloudBurst is purpose-built for WebSphere environments, meaning that many of the things users would otherwise have to script with general purpose cloud provisioning solutions (creating clusters, federating nodes into a cell, applying fixes, etc.) are automatically handled by the appliance and the virtual images with which it ships. Also, it is important to note that WebSphere CloudBurst works on a "bring your own cloud" model. The virtualized WebSphere application environments do not run on the appliance; instead, they are deployed to a shared pool of resources with which the appliance is configured to communicate.
While we are talking about two offerings with the differences noted above, I should also point out the how and why of integrating them. The WebSphere CloudBurst Appliance can be leveraged from within the IBM CloudBurst solution to handle the provisioning of WebSphere middleware environments in your data center. From the included Tivoli Service Automation Manager interfaces in the IBM CloudBurst solution, you can discover and deploy WebSphere CloudBurst patterns that exist on an appliance in your data center. WebSphere CloudBurst will deploy the patterns to the set of hardware resources provided by the IBM CloudBurst solution. Why would you want to integrate the two? If a large portion of your data center provisioning involves WebSphere middleware environments, WebSphere CloudBurst provides quick time to value and low cost of ownership. The WebSphere know-how is baked into the appliance and the virtual images it ships with, meaning that you don't need to develop and maintain what would otherwise be a rather large set of configuration scripts for the WebSphere environments running in your cloud.
I hope this clears the air a bit about not only the difference between IBM CloudBurst and WebSphere CloudBurst, but also about how and why the two can be integrated. I can never answer everyone's questions in a single blog post, so if I didn't address yours, please leave a comment or reach out to me on Twitter @damrhein.
One of the new features that debuted in WebSphere CloudBurst 1.1 is the ability to resize the disks in a virtual image during the extend and capture (image customization) process. If you remember, the virtual images in the WebSphere CloudBurst catalog are made up of multiple virtual disks. In WebSphere CloudBurst 1.0, a default size was used for the virtual disks, and this could not be changed, even during the image extension process. We received quite a bit of feedback about this, so in version 1.1, while default sizes are still provided, you can specify the eventual size of each of the virtual disks during the image extension process.
As an example, consider the WebSphere Application Server Hypervisor Edition virtual image. This image contains four virtual disks: one for the WebSphere Application Server binaries, one for the WebSphere Application Server profiles, one for the IBM HTTP Server, and one for the operating system. The default sizes of these disks are 6GB, 2GB, 1GB, and 12GB respectively, for a total of roughly 21GB. While that may be fine for some, what happens if you are going to install various other third-party software packages in the image? You may need more space on the operating system's virtual disk. Perhaps your WebSphere applications produce log files of considerable size. In that case, you may want to increase the default size of the WebSphere Application Server profiles disk.
Those scenarios and more are exactly why the resizing capability was added. When you extend the WebSphere Application Server Hypervisor Edition virtual image in WebSphere CloudBurst 1.1, you will be presented with the option to resize one or more of the virtual disks:
In the example above, the operating system disk size is bumped up to 16GB from the default of 12GB. Also note that in addition to changing disk sizes, you can specify the number of network interfaces for your custom image.
Obviously, when you increase the size of the disks within the virtual image you are also increasing the storage requirements for that image when it is deployed to a hypervisor. Keep this in mind when you are calculating the upper bound capacity of your cloud. If you want to see more about how this feature works, check out this video.
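To make the capacity impact concrete, here's a quick back-of-the-envelope sketch. The helper functions, the 500GB datastore figure, and the disk-name keys are illustrative assumptions; the default sizes mirror the ones discussed above.

```python
# Illustrative capacity estimate for resized virtual images. The default
# disk sizes (GB) mirror the ones discussed above; the resize of the
# operating system disk to 16GB is the example from the text. The disk
# names and the 500GB datastore are made up for demonstration.

DEFAULT_DISKS_GB = {
    "was_binaries": 6,
    "was_profiles": 2,
    "ibm_http_server": 1,
    "operating_system": 12,
}

def image_size_gb(disks, resizes=None):
    """Total virtual image size after applying any per-disk resizes."""
    sizes = dict(disks)
    for name, new_size in (resizes or {}).items():
        sizes[name] = new_size
    return sum(sizes.values())

def max_deployments(datastore_gb, per_image_gb):
    """Upper bound on concurrent deployments if storage is the limit."""
    return datastore_gb // per_image_gb

default_size = image_size_gb(DEFAULT_DISKS_GB)                        # 21GB
resized_size = image_size_gb(DEFAULT_DISKS_GB, {"operating_system": 16})  # 25GB

# A hypothetical 500GB datastore now holds fewer concurrent deployments.
print(default_size, resized_size)                 # 21 25
print(max_deployments(500, resized_size))         # 20
```

The point of the sketch: a seemingly small per-image increase is multiplied by every deployment, so it directly lowers the upper bound of your cloud's capacity.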
Yesterday, I had the opportunity to present WebSphere CloudBurst during the IBM Cloud computing for developers virtual event. I provided a brief overview of the appliance along with a demonstration, and then tackled some questions from the 150+ attendees in the audience.
All of the questions were good ones, and I wish I had time to address them all during the session (I will be answering all questions and posting them online soon). However, one of the questions stood out to me because of its relevance to how IBM uses WebSphere CloudBurst in their own labs. Paraphrasing, the question was "Can you share hardware resources (hypervisor hosts) between WebSphere CloudBurst and other components in your data center?"
Very good question. The answer is, of course, yes you can. I don't want to sugarcoat it because it does require thought and planning. Ideally, when WebSphere CloudBurst is using a hypervisor host, the appliance is the only thing acting on that host. However, when WebSphere CloudBurst is not using that host, then you can absolutely repurpose it for use by other components in your data center.
Our WebSphere Test Organization uses WebSphere CloudBurst to aid in their testing of our WebSphere middleware products. The capability to provision hypervisor hosts in and out of the WebSphere CloudBurst cloud is of critical importance to them. Like many organizations, they are resource constrained and must get the most out of their IT investment. They use Tivoli Provisioning Manager to provision VMware ESX hosts for use by WebSphere CloudBurst, and then they use the WebSphere CloudBurst CLI to define those hypervisors to the appliance and run verification tests against the new resources. This allows them to easily expand and contract the amount of shared infrastructure utilized by WebSphere CloudBurst at any one time, and it means components do not have to statically lock down resources. I have much more information to come about our WebSphere Test Organization and their use of WebSphere CloudBurst, but I thought I would give everyone a peek at the kinds of things they do every day with the appliance.
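The expand-and-contract bookkeeping described above can be pictured with a toy sketch. To be clear, in the real flow Tivoli Provisioning Manager provisions the ESX hosts and the WebSphere CloudBurst CLI registers them with the appliance; the class, method, and host names below are hypothetical illustrations of that lifecycle, not an actual API.

```python
# Toy model of moving hypervisor hosts in and out of a WebSphere
# CloudBurst cloud. All names here are hypothetical; the real workflow
# uses Tivoli Provisioning Manager plus the WebSphere CloudBurst CLI.

class HypervisorPool:
    def __init__(self, hosts):
        self.available = set(hosts)   # hosts free for other data center use
        self.cloudburst = set()       # hosts currently defined to the appliance

    def add_to_cloud(self, host):
        """Expand the cloud: register a free host with the appliance."""
        self.available.remove(host)
        self.cloudburst.add(host)

    def release_from_cloud(self, host):
        """Contract the cloud: return a host to general use."""
        self.cloudburst.remove(host)
        self.available.add(host)

pool = HypervisorPool(["esx01", "esx02", "esx03"])
pool.add_to_cloud("esx01")
pool.add_to_cloud("esx02")
print(sorted(pool.cloudburst))    # ['esx01', 'esx02']
pool.release_from_cloud("esx01")  # test run finished, host repurposed
print(sorted(pool.available))     # ['esx01', 'esx03']
```

The design point the sketch captures is that a host belongs to exactly one side at a time: while the appliance is using it, nothing else acts on it, and when it is released it is fully available to other components.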
If you are interested in what I presented yesterday to the attendees of IBM's Cloud computing for developers event, you can check out their developerWorks Group page. In the Activities section, you will find the charts, demonstration, and a playback if you prefer to listen to the session. As always, I appreciate your feedback and questions.
IMPACT means new product announcements, and I'm particularly excited to point out the announcement for WebSphere CloudBurst 2.0. The new release features multi-image product support, support for Red Hat on VMware ESX, the new WebSphere Process Server Hypervisor Edition, and much more.
You can get all the details in my blog post here, and you can watch an overview demo here. Don't hesitate to send me any comments or questions here or on Twitter @damrhein.
In a post not long ago, I mentioned new enhancements to virtual system patterns in IBM Workload Deployer. A prominent part of those enhancements was a set of updates to pattern construction that allow you to order virtual machine startup, order script package invocation, and include add-ons that provide system-level configuration options. Recently I uploaded a demonstration to YouTube that highlights some of these new capabilities. Specifically, it provides a brief look at the ordering and add-on enhancements.
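To make the ordering idea concrete, here's a small sketch of dependency-ordered startup, the kind of sequencing (deployment manager before custom nodes, servers before script packages) that the new pattern construction options let you express. The part names and dependency graph are hypothetical examples, not an actual pattern definition.

```python
# Hypothetical sketch of ordered startup in a virtual system pattern:
# each part lists the parts that must start before it, and we compute a
# startup sequence that respects those dependencies (topological order).
from graphlib import TopologicalSorter

# part -> parts that must be started first (names are made up)
pattern = {
    "db2_server": set(),
    "deployment_manager": set(),
    "custom_node": {"deployment_manager"},
    "app_server_cluster": {"deployment_manager", "custom_node", "db2_server"},
    "config_script_package": {"app_server_cluster"},
}

startup_order = list(TopologicalSorter(pattern).static_order())
print(startup_order)
```

The useful property is that any order satisfying the constraints is acceptable: independent parts (the database and the deployment manager here) can still start in parallel, while dependent parts wait their turn.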
I hope you take a look, and even more importantly, I hope to see some feedback. If you have something you would like to see captured in a demo, let me know and I'll work it to the top of a long and continually growing list!
More and more, I am getting questions about how to bring existing WebSphere environments into IBM Workload Deployer. While "bringing in an environment" can mean any number of things, let's take it to mean that a user wants to import their existing WebSphere cells, applications, and configuration into IBM Workload Deployer as a pattern they can subsequently deploy. While there may not be a big red easy button in the appliance that lets you point to an existing environment and import it, there are a couple of techniques you can employ. I have covered both techniques before, but since I'm getting the question with increasing frequency, I felt it was time for a recap.
The first option is to use a combination of IBM Workload Deployer and Rational Automation Framework for WebSphere. This is a use case I have spoken about numerous times at conferences and in blog posts and articles. In fact, you can read a little about it here. In this scenario, RAFW provides excellent capabilities to point at an existing cell and import everything about it. This includes WebSphere configuration, applications, shared libraries, and more. Once imported as a RAFW project, you can use the IBM Workload Deployer integration script package provided by RAFW to replay that configuration on top of deployments created by the appliance.
The second option is something I talk about a little less frequently. This option revolves around the use of a sample script (provided for free in our samples gallery) that you can run against existing WebSphere cells. The invocation of this script produces IBM Workload Deployer script packages that you can use in patterns to apply the configuration of the target cell to your new cloud-based deployments. Under the covers, the utility script and the resultant script packages use backupConfig and restoreConfig, respectively. They ensure that the cell, node, and host names are updated during the restoreConfig execution (which happens automatically during pattern deployment). Beyond that, the use of the script is subject to the same limitations and rules that apply to the backupConfig and restoreConfig commands. You can read more about this capability, watch it in action, and download it for free.
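To illustrate the name-update step mentioned above, here is a minimal sketch of the kind of cell/node/host substitution that has to happen when a backed-up configuration is restored into a freshly deployed environment. The XML snippet, names, and helper function are all made up for demonstration; the real sample script operates on an actual backupConfig archive during restoreConfig.

```python
# Illustration of the cell/node/host name updates applied around
# restoreConfig when a captured configuration is replayed onto a new
# cloud deployment. Everything here is a hypothetical example; the real
# utility works against an actual backupConfig archive.

def retarget_names(config_text, renames):
    """Replace the old environment's names with the deployed environment's."""
    for old, new in renames.items():
        config_text = config_text.replace(old, new)
    return config_text

# A made-up fragment of WebSphere configuration from the source cell.
backed_up = '<serverindex hostName="oldhost.example.com" nodeName="oldNode01"/>'

renames = {
    "oldhost.example.com": "newhost.example.com",  # hostname of the new VM
    "oldNode01": "newNode01",                      # node name assigned at deploy time
}

print(retarget_names(backed_up, renames))
```

This is also why the sample script inherits the limitations of backupConfig and restoreConfig: anything those commands don't capture or restore is equally out of scope for the generated script packages.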
I hope this is all useful information for those of you looking for ways to import existing environments into IBM Workload Deployer as patterns. If you have any questions, please let me know!