Clouds form known patterns of shape, consistency and color. These patterns have formal names too: cumulus, stratus, nimbus, etc. But there are also the patterns that are only in the eyes of the imaginative: a dragon, or a face, or Aunt Betty being chased by a fire-breathing turtle.
Cloud computing implementations are composed of common elements such as network servers, enterprise software, routers, and so on. There will be common configurations; however, the power of the cloud comes from the idea that capacity, software, storage, and more are delivered on demand as a service. So despite the fixed configuration that is really the connected inventory, the shapes of clouds are indeed malleable. What shapes will we see in these clouds?
One of the most widely used examples of a cloud benefit is the greeting card company whose business is flat through the year except for specific holiday peaks. The cloud allows that company to expand its capacity for those peaks only, saving it money. In this example, the cloud is hosted by a third-party provider.
But what about that nebulous provider? Such a company still has to manage capacity and other IT services for all of its customers. It has the same issue any IT shop has: finite resources that must handle all the demand. The cloud principle allows it to provision resources as they are needed, but the provider can only handle so many customers with peak business around the various holidays; otherwise there is no gain. In fact, I think these providers will have to plan which kinds of business they can host to maximize their 'face' time.
A while back I wrote a four-part FAQ series inspired by questions arising from customer visits discussing the first release of WebSphere CloudBurst. With the recent release of WebSphere CloudBurst 2.0, I think it is a good time to revisit the FAQ format with an entirely new set of questions.
For the first part of the series, I want to address a question we get all the time now: "What is the difference between WebSphere CloudBurst and WebSphere Virtual Enterprise?" This question was always fairly common, but now even more so because the new Intelligent Management Pack option for WebSphere Application Server Hypervisor Edition allows you to deploy WebSphere Virtual Enterprise cells using WebSphere CloudBurst.
Fundamentally, the relationship between the WebSphere CloudBurst Appliance and WebSphere Virtual Enterprise is a complementary one. WebSphere CloudBurst provides a means to create your application environments, deploy them into a shared cloud environment, and then manage them over time. In this respect, the appliance focuses on bringing cloud-like capabilities to the application infrastructure layer of your application environments. WebSphere CloudBurst does give you management capabilities for your running, virtualized application environments (e.g., applying maintenance and fixes), but for the most part those capabilities do not extend into the application runtime environment.
Now, you may ask why WebSphere CloudBurst does not extend its reach into the application runtime. The answer is simple: We already have a solution that does just that, WebSphere Virtual Enterprise. WebSphere Virtual Enterprise provides capabilities that allow you to dynamically and autonomically manage your application runtime. You can use WebSphere Virtual Enterprise to not only assign performance goals to your applications, but also to declare the importance of a given application meeting its goals relative to other applications in your organization. This enables the dynamic management of your applications and their resources such that your applications perform according to their goals and relative importance to your business. Simply put, you get an elastic runtime at the application layer of your application environments.
As I said, WebSphere CloudBurst and WebSphere Virtual Enterprise are complementary solutions. Both enable notions of cloud computing, but at different layers of your application environments. WebSphere CloudBurst homes in on the application infrastructure components, while WebSphere Virtual Enterprise zeros in on the applications running in those environments. The new Intelligent Management Pack for WebSphere Application Server Hypervisor Edition means that WebSphere CloudBurst can now dispense WebSphere Virtual Enterprise environments into your on-premise cloud. That means you can take advantage of these complementary solutions from a single, simple management plane.
I hope this helps to clear things up. As always, questions and comments are welcome!
For the next installment of this series of FAQs, let's move from product positioning and integration squarely into the land of operational procedure. For this post, let's say you are getting ready to deploy a pattern based on WebSphere Application Server Hypervisor Edition. During the deployment process, you provide configuration information, including a password for a user named virtuser.
You read the documentation, and you understand that virtuser is both an operating system user and the user that WebSphere CloudBurst configures as the primary administrative user for WebSphere Application Server. Naturally, this user owns the WebSphere Application Server processes that run in the virtual machine. While it is convenient that this is all pre-configured for you, you want to know one thing: "Can I define a user besides virtuser?"
It certainly would not be the first time this question came up. The short answer to this is yes, but there are of course caveats. You can define another user and have that user own the WebSphere Application Server processes, but you cannot completely remove the virtuser user, nor should you remove virtuser as the primary administrative user. The reason for this is that WebSphere CloudBurst relies on virtuser when it carries out certain actions such as applying maintenance, applying fixes, or otherwise interacting with the WebSphere Application Server environment.
All that being said, I recently put together a script package that allows you to utilize a user other than virtuser. I hope to put the script package in our samples gallery soon, but here's a basic overview of using the script package and what it does:
Attach the script package to all parts in a pattern that contain a WebSphere Application Server process.
Deploy the pattern and provide the necessary parameter values. These include the name of the new user, a password, a common name, and a surname. The last two bits are necessary when creating a new administrative user in WebSphere Application Server.
During deployment, the script package first creates a new OS user with the specified password.
The script adds the new user to the existing OS users group.
The script creates a new WebSphere Application Server user with the same username and password and grants administrative privileges to the user.
The script shuts down the WebSphere Application Server processes.
The script changes the runAsUser value for all servers to the empty string and sets the runAsGroup value for those servers to users. This allows members of the OS users group to start the WebSphere Application Server process.
The script starts the WebSphere Application Server processes.
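Taken together, the steps above can be sketched as a small helper that builds the ordered commands such a script might run. This is illustrative only: the actual script package differs, the user and group names are placeholders, and the wsadmin invocation and server name (server1) are simplified assumptions rather than the real script's contents.

```python
# Sketch of the ordered commands a script package like this might run.
# Illustrative only: names and the wsadmin call are assumptions.
def build_commands(user, password, common_name, surname):
    return [
        # 1. Create the OS user with the specified password
        "useradd -m %s" % user,
        "echo '%s:%s' | chpasswd" % (user, password),
        # 2. Add the new user to the existing OS users group
        "usermod -a -G users %s" % user,
        # 3. Create a matching WAS user (simplified wsadmin invocation)
        ("wsadmin.sh -lang jython -c \"AdminTask.createUser("
         "['-uid', '%s', '-password', '%s', '-cn', '%s', '-sn', '%s'])\""
         % (user, password, common_name, surname)),
        # 4. Bounce the server so the run-as changes take effect
        "stopServer.sh server1",
        "startServer.sh server1",
    ]

commands = build_commands("wasadmin2", "secret", "WAS", "Admin")
```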
There are a few other activities in the script, but that should give you a basic overview. Again, note that the script does not remove the virtuser user or change that user's OS or WebSphere Application Server permissions in any way. I would also point out that if you use WebSphere CloudBurst to apply maintenance to the WebSphere Application Server environment, it will do so as virtuser and it will restart processes as virtuser, so plan accordingly.
I hope this sheds some light on a very common question. I hope to get the sample up soon, and as always let me know if you have any questions.
For the last post in my FAQs Revisited series, I'm going to cheat a little bit. Instead of addressing one particular question, I'm going with a grab bag of a few different questions. These are questions that I get asked quite frequently, but do not demand an entire blog post explanation. Let's get on with it.
Question: Do the new software license management capabilities provided in WebSphere CloudBurst 2.0 depend on ILMT or other supporting components?
Answer: No. The license management features are completely standalone. Of course, you can still take advantage of ILMT (through easy integration in WebSphere CloudBurst I might add) to track licenses in your cloud if you so choose.
Question: Can I deploy a pattern, make changes to my virtual system, and then recapture that as an updated pattern?
Answer: You cannot do this with WebSphere CloudBurst alone, but you can use WebSphere CloudBurst in conjunction with the Rational Automation Framework for WebSphere to do just this. Check out this article (shameless plug alert!).
Question: What if I have an urgent operating system fix to apply before IBM delivers an update to the OS in the Hypervisor Edition image?
Answer: You can either manually apply the fix to the appropriate virtual machines, or you could package up the fix as a custom WebSphere CloudBurst fix, load it into the catalog, and use the appliance to automate the application of said fix.
Question: Can I change the install location for WebSphere Application Server in the virtual image?
Answer: I've just shown you all this really cool, useful, and easy to use stuff, and you worry about install locations? Seriously though, I understand the genesis of this question usually has to do with existing scripts that assume a certain install location for WebSphere Application Server. I certainly do not advocate changing those scripts, but you cannot change the install location for WebSphere Application Server in the images. There is nothing to keep you from creating a symbolic link however.
Question: Once I deploy a pattern, what do I need to do to add more processing capacity (i.e. more application server processes)?
Answer: You have a couple of options here. You can use normal WebSphere administration techniques to add more application servers to an existing node. If that will not work (perhaps a particular node is operating at maximum capacity), you can use the new dynamic virtual machine operations in WebSphere CloudBurst to add an entirely new node/virtual machine. If you find yourself consistently making these types of adjustments to the runtime environment based on the ebb and flow of demand, you may also want to consider the Intelligent Management Pack option for WebSphere Application Server Hypervisor Edition.
I hope this FAQs Revisited series was helpful. Stay tuned for a look at some recent work I did to integrate WebSphere CloudBurst deployments with the new WebSphere DataPower XC10 appliance.
One of my favorite books from childhood is If You Give a Mouse a Cookie. Although targeted at children, the book illustrates a frequently occurring human behavior that is important for all of us to understand: the tendency for escalating expectations. The book offers this up by starting with the simple action of giving a mouse a cookie. The mouse in turn asks for a glass of milk, various flavors of cookies, and on and on, until the mouse circles back to asking for another cookie.
Nearly all of us exhibit this same kind of behavior, and it can often produce positive results. In particular, in IT we always push for the next best thing or a slightly better outcome. Personally, I am no stranger to this behavior because I experience it from WebSphere CloudBurst users quite frequently. In these cases, it usually revolves around one particular outcome: speed of deployment.
Bar none, users of WebSphere CloudBurst are experiencing unprecedented deployment times for the environments they dispense through the appliance. The fact that we say you can deploy meaningful enterprise application environments in a matter of minutes is far beyond just marketing literature. Our users prove it every day. However, just because they are deploying things faster than ever does not mean they are content to rest on those achievements. They want to push the envelope, and I love it.
For our users looking to achieve even speedier deployment times, I offer up one reminder and one tip. First, analyze all of your script packages to ensure you are using the right means of customization. If you have some scripts that run for considerably longer than most other script packages, you may want to at least consider applying that customization by creating a custom image. You still need to adhere to the customization principles outlined here, but you may benefit from applying the customization in an image once and avoiding the penalty for applying it during every deployment. You may also be able to break this customization out with a combination of a custom image and script packages. For instance, instead of having a script that installs and configures monitoring agents, you may install the agents in a custom image and configure them during deployment. Being selective about how and when you apply customizations can go a long way in improving your deployment times.
In addition to the reminder above, I also have a tip. Take a look at all of the script packages you use in pattern deployments and look to see if there are any that you can apply in an asynchronous manner. In other words, identify customizations that need to start, but not necessarily complete as part of the deployment process. Going back to our example of configuring monitoring agents during the deployment process, it may be important to kick off the configuration script during deployment, but is it crucial to wait on the results? Maybe not. If it is not, consider defining the executable argument in your script package in a manner that kicks off the execution and proceeds -- i.e. nohup executable command &. This approach can save deployment time in certain situations.
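To see why the fire-and-forget form returns control immediately, here is a small standalone sketch (plain Python run on a POSIX shell, not an actual script package definition) that launches a long-running command the same way a nohup ... & executable would, and measures how quickly the launcher gets control back.

```python
import subprocess
import time

start = time.time()
# Launch a 5-second task the same way a script package executable
# defined as "nohup <command> &" would: the wrapper shell exits at
# once while the task keeps running in the background.
subprocess.call(["sh", "-c", "nohup sleep 5 >/dev/null 2>&1 &"])
elapsed = time.time() - start
print("launcher returned after %.2f seconds" % elapsed)
```

The launcher returns in a fraction of a second rather than waiting the full five seconds, which is exactly the deployment-time savings the asynchronous approach buys you.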
My advice to users of WebSphere CloudBurst: keep pushing your deployment process! Pare as many minutes off the process as you can. I hope that the tips above help in that regard, and be sure to pass along other techniques that you have found helpful.
I was at a customer meeting the other day, and someone asked me if they could query WebSphere CloudBurst for an inventory of all of their virtual system deployments. This person was of course aware that he could go to the web console and very quickly view all of the virtual systems. What he wanted though was something that he could run to generate a report that contained all of this information. For a purpose like this, harnessing the WebSphere CloudBurst CLI is exactly the way to go.
I thought I'd write a simple CLI script that provides an example of how you could do this.
import sys
from datetime import datetime

def writeVSDetails(outFile, virtualSystem):
    outFile.write("\tVirtual system name: " + virtualSystem.name + "\n")
    outFile.write("\tCreated from pattern: " + virtualSystem.pattern.name + "\n")
    outFile.write("\tVirtual system status: " + virtualSystem.currentstatus_text + "\n")
    created = datetime.fromtimestamp(virtualSystem.created)
    outFile.write("\tVirtual system creation date: " + created.strftime("%B %d, %Y %H:%M:%S") + "\n")
    outFile.write("\tTotal virtual machines: " + str(len(virtualSystem.virtualmachines)) + "\n")

def writeVMDetails(outFile, virtualMachine):
    outFile.write("\t\tVirtual machine name: " + virtualMachine.name + "\n")
    outFile.write("\t\tVirtual machine display name: " + virtualMachine.displayname + "\n")
    outFile.write("\t\tCreated from image: " + virtualMachine.virtualimage.name + "\n")
    outFile.write("\t\tVirtual machine hypervisor: " + virtualMachine.hypervisor.name + " | " + virtualMachine.hypervisor.address + "\n")
    outFile.write("\t\tVirtual machine IP address: " + virtualMachine.ip.ipaddress + "\n")

# The output file location is supplied by the caller in batch mode.
outFileLoc = sys.argv[1]
outFile = open(outFileLoc, 'w')
outFile.write("WebSphere CloudBurst Virtual System Inventory\n")
outFile.write("Total virtual systems: " + str(len(cloudburst.virtualsystems)) + "\n")
for virtualSystem in cloudburst.virtualsystems:
    writeVSDetails(outFile, virtualSystem)
    for virtualMachine in virtualSystem.virtualmachines:
        writeVMDetails(outFile, virtualMachine)
outFile.close()
As a result of invoking this script using the CLI's batch mode, content is written to the file location supplied by the caller.
WebSphere CloudBurst Virtual System Inventory
Total virtual systems: 3
    Virtual system name: Single server
    Created from pattern: WebSphere single server
    Virtual system status: Started
    Virtual system creation date: January 15, 2010 16:37:20
    Total virtual machines: 1
        Virtual machine name: Standalone 0
        Virtual machine display name: Single server cbvm-110 default
        Created from image: WebSphere Application Server <version>
        Virtual machine hypervisor: Ruth ESX | https://<hypervisor_host>/sdk
        Virtual machine IP address: <ip_address>
    Virtual system name: Development WAS Cluster
    Created from pattern: Custom WAS Cluster - Development
    Virtual system status: Started
    Virtual system creation date: January 18, 2010 14:08:46
    Total virtual machines: 2
        Virtual machine name: DMGR 0
        Virtual machine display name: Development WAS Cluster cbvm-112 dmgr
        Created from image: WebSphere Application Server <version>
        Virtual machine hypervisor: Ruth ESX | https://<hypervisor_host>/sdk
        Virtual machine IP address: <ip_address>
        Virtual machine name: Custom Node 1
        Virtual machine display name: Development WAS Cluster cbvm-111 custom
        Created from image: WebSphere Application Server <version>
        Virtual machine hypervisor: Ruth ESX | https://<hypervisor_host>/sdk
        Virtual machine IP address: <ip_address>
    Virtual system name: DB2 for development use
    Created from pattern: DB2
    Virtual system status: Started
    Virtual system creation date: January 18, 2010 14:09:58
    Total virtual machines: 1
        Virtual machine name: DB2 Enterprise Server 32bit Trial 0
        Virtual machine display name: DB2 for development use cbvm-113
        Created from image: DB2 Enterprise <version> 32-bit Trial
        Virtual machine hypervisor: Ruth ESX | https://<hypervisor_host>/sdk
        Virtual machine IP address: <ip_address>
I withheld IP addresses and host names above for obvious reasons, but if you ran the script against your environment you would see the actual values. The script is written once and can be run anytime you want an inventory of the virtual systems running in your WebSphere CloudBurst cloud. There is other information available for virtual systems and virtual machines that I did not show here, and you can retrieve it if necessary for your inventory report. In addition, I chose to print this information as plain text to a file supplied by the caller, but you could generate the report in another format, including XML, JSON, or anything else for that matter.
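For instance, producing the same inventory as JSON is a small change to the script. The sketch below stands alone by using plain dictionaries in place of the live cloudburst objects, so the field names here are illustrative assumptions, not an official schema.

```python
import json

# Stand-in data for what the CLI script gathers from
# cloudburst.virtualsystems; the field names are illustrative only.
inventory = {
    "totalVirtualSystems": 1,
    "virtualSystems": [
        {
            "name": "Single server",
            "pattern": "WebSphere single server",
            "status": "Started",
            "virtualMachines": [
                {"name": "Standalone 0", "hypervisor": "Ruth ESX"},
            ],
        },
    ],
}

# Serialize the inventory as pretty-printed JSON instead of plain text
report = json.dumps(inventory, indent=2)
print(report)
```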
-- Dustin Amrhein
p.s. As with any sample code or script I provide here, the above is only a sample and offered as-is.
Jason McGee will be leading the second GWC Lab Chat this week on Wednesday, 4/20. The very timely topic relates to recent announcements from IBM regarding the IBM Workload Deployer (see previous posts). Entitled "Application-Centric Cloud Computing," the discussion will focus on the concept of deploying and managing your application workloads in a shared, self-managed environment rather than manually creating and managing the application middleware topologies. It places the focus on the application rather than the infrastructure. This concept promises to deliver greater simplicity, elasticity, and density, among other things. It can position your business to react more quickly and efficiently to the increasing demands of your customers and free you from managing all of the details.
Many of you may have already heard Jason speak last week at IMPACT 2011 in the cloud mini main tent or perhaps at any number of other sessions that Jason was involved in. Jason is the key architect behind IBM's WebSphere cloud activities. Obviously, Jason understands the cloud space very well and has a clear view of the evolution into Application-Centric Cloud Computing. This GWC Lab Chat will provide the opportunity to get your questions answered and share your perspective on this technology.
Jason will provide a brief introduction to the concepts and ideas and then lead an open discussion. Put it on your calendar and plan to attend, and please bring your questions and comments to help foster a rich discussion. We want to hear from you.
If you haven't registered yet, it is not too late; learn more and register here. Registration is easy and there is no cost. This is a very timely event and a great way to dig a little more deeply into concepts you first heard at IMPACT, or perhaps hear them for the first time. Don't miss it!
In the course of my job, I am lucky to be able to work with both enterprise users and business partners who are adopting and using the WebSphere CloudBurst Appliance. When it comes to the business partner camp, one of my absolute favorites is the Haddon Hill Group. The Haddon Hill Group is an IBM Premier Business Partner, and they have been an early adopter and vocal advocate of the WebSphere CloudBurst Appliance. They have extensive knowledge of the use of the appliance in enterprise accounts, and quite frankly, they are doing some really cool things with WebSphere CloudBurst.
Given the above, I was glad to see summarized results from their various implementations made available recently on the IBM site. The summary is fairly concise, so I encourage you to take a look at the results Haddon Hill Group is getting with WebSphere CloudBurst.
I am not going to rehash the contents of the results here, but there are a couple of things I want to call out. First off, Haddon Hill Group says that WebSphere CloudBurst can provide companies with a '100 times faster time to market' delivery experience. In a practical sense, this means reducing the amount of time to deliver WebSphere environments from 40-60 days on average to just hours. That is an eye-opening data point!
The other thing I want to call out here is a quote from Phil Schaadt, President and CTO, Haddon Hill Group. I have had the pleasure of working with Phil and his team, and I have heard him echo these same sentiments many times:
"The important thing about the IBM WebSphere CloudBurst Appliance is that it will dispense a WebSphere Application Server image onto your WebSphere Application Server environment or private cloud along with other products within the WebSphere stack, and that application server will be ready in a few minutes. You can do it in a clustered environment, and you can even roll out IBM WebSphere Process Server and get it right in a fully clustered environment with a database connection, in about 90 minutes. You can also easily manage all the configurations of IBM WebSphere Process Server that you need. All the steps that took up so much time and effort on the part of IT staff have been removed. The savings for companies with large WebSphere implementations can be in the millions."
It is always great to see clients putting our technology to use to produce tangible business value. Again, I encourage you to take a look at these reports. As always, I am eager to hear what you think, so leave me a comment or reach out to me on Twitter @damrhein.
We've been talking a lot about IBM Workload Deployer V3, and we will continue to highlight different aspects of the capabilities it provides in the coming weeks. As we've already mentioned, IBM Workload Deployer V3 is not just another release of the IBM WebSphere CloudBurst Appliance. While it builds on WebSphere CloudBurst's success, and supports and improves upon all of its original capabilities, Workload Deployer provides new application-centric computing capabilities for your private cloud, bringing you higher utilization, improved ease of use, and more rapid application deployment.
The WebSphere Application Server Hypervisor Edition virtual image is made up of four different virtual disks. One of those disks contains pre-created and pre-configured WebSphere Application Server profiles. When the image is activated (either through WebSphere CloudBurst or in a standalone fashion), all of the profiles not being used are deleted, leaving only the intended WebSphere profile type.
Since the profiles are pre-created, this implies that certain information must be updated after the image is activated to reflect things that change with each node that is created. Among other things, the cell name, node name, and host name of the WAS profile configuration are usually updated during the image activation process. Nearly every time I talk to WAS administrators about WebSphere CloudBurst and WebSphere Application Server Hypervisor Edition they are intrigued by this particular configuration update and almost always ask "How do you do it?" (Dustin's note: Since the command to rename the cell is not officially documented, I have removed it from this post. I'm sorry, but it is for your own good!)
Most of the time this question pops up because users are attempting to freeze-dry certain WAS configurations in their organization, albeit with a narrower focus than WAS Hypervisor Edition. However, no matter how they do that (virtual images, zipped-up configuration files, etc.), they too need to update things like the cell, node, and host names when reusing the configuration. Many have gone down the route of trying to identify all of the different XML files they need to change in order to update this information, but this is untenable and, in fact, unnecessary.
If you need to update the node or host name, forget manually updating XML files. Instead, use these three wsadmin commands:
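As one illustration of the approach for the host-name case, wsadmin's documented changeHostName task updates a node's host name throughout the configuration without any hand-editing of XML. The node and host names below are placeholders, and this snippet must run inside a wsadmin session, not standalone:

```python
# wsadmin (Jython) sketch; run inside wsadmin against a live cell.
# 'myNode' and the host name are placeholders for your environment.
AdminTask.changeHostName(['-nodeName', 'myNode', '-hostName', 'newhost.example.com'])
AdminConfig.save()
```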
The commands can be run from a standalone node or from a deployment manager node. They are straightforward, and if you need more information about them, just take a look in the WebSphere Application Server Information Center. I hope this is helpful information, and stay away from those XML files!
Yesterday, I had the opportunity to present WebSphere CloudBurst during the IBM Cloud computing for developers virtual event. I provided a brief overview of the appliance along with a demonstration, and then tackled some questions from the 150+ attendees in the audience.
All of the questions were good ones, and I wish I had time to address them all during the session (I will be answering all questions and posting them online soon). However, one of the questions stood out to me because of its relevance to how IBM uses WebSphere CloudBurst in their own labs. Paraphrasing, the question was "Can you share hardware resources (hypervisor hosts) between WebSphere CloudBurst and other components in your data center?"
Very good question. The answer is, of course, yes you can. I don't want to sugarcoat it because it does require thought and planning. Ideally, when WebSphere CloudBurst is using a hypervisor host, the appliance is the only thing acting on that host. However, when WebSphere CloudBurst is not using that host, then you can absolutely repurpose it for use by other components in your data center.
Our WebSphere Test Organization uses WebSphere CloudBurst to aid in their testing of our WebSphere middleware products. The capability to provision hypervisor hosts in and out of the WebSphere CloudBurst cloud is of critical importance to them. Like many organizations, they are resource constrained and must get the most out of their IT investment. They use Tivoli Provisioning Manager to provision VMware ESX hosts for use by WebSphere CloudBurst, and then they use the WebSphere CloudBurst CLI to define those hypervisors to the appliance and do verification testing of the new resource. This allows them to easily expand and contract the amount of shared infrastructure utilized by WebSphere CloudBurst at any one time, and it means components do not have to statically lock down resources. I have a lot more information to come about our WebSphere Test Organization and their use of WebSphere CloudBurst, but I thought I would give everyone a peek at the kinds of things they do every day with the appliance.
If you are interested in what I presented yesterday to the attendees of IBM's Cloud computing for developers event, you can check out their developerWorks Group page. In the Activities section, you will find the charts, demonstration, and a playback if you prefer to listen to the session. As always, I appreciate your feedback and questions.
Since its introduction and initial release around one year ago, activity around WebSphere CloudBurst has been a steady buzz. New images, features, and enhancements have been rolling in, and they can sometimes be a little overwhelming to digest. With that in mind, I want to address a related and frequent question: what products does IBM support for use in WebSphere CloudBurst?
To answer that question, we only need to look at the IBM Hypervisor Edition images currently provided by IBM. Here's a quick matrix of those images:
When I talk with WebSphere CloudBurst users, the topic of custom virtual images comes up frequently. In some cases they simply want to customize a shipped IBM Hypervisor Edition image, and in other cases they want to create a completely custom image. Creating a customized version of an IBM Hypervisor Edition image is relatively easy since WebSphere CloudBurst gives you extend and capture. Creating a completely custom image has historically been a bit tougher, mostly owing to the fact that there was no standard tool or process for image assembly. I am happy to say that today's publication of the IBM Image Construction and Composition Tool changes all that.
Watch a demo of the IBM Image Construction and Composition Tool
The primary purpose of the Image Construction and Composition Tool is to enable a modular approach to virtual image construction, while taking into account the typical division of responsibilities within an organization. The tool allows the right people within an organization to contribute their specialized knowledge as appropriate to the virtual image creation process. This means OS teams can handle the OS and software teams can handle the appropriate software. A separate image builder can then use both OS and software components to meet the needs of users within the organization. Best of all, the image builder does not need intimate knowledge of how to install or configure any of the components in the image. They simply need to know which OS and software components to use.
When using the Image Construction and Composition Tool, you start by defining the base operating system you wish to use for your images. You can do this by importing an existing virtual image with an OS already installed, providing an ISO for the OS, or pointing to a base OS image on the IBM Cloud. The bottom line is that you have necessary flexibility to start with your certified or ‘golden’ operating system build. Once you have the base OS image defined in the Image Construction and Composition Tool, you can start defining custom software for use in the images you will compose.
In the tool, bundles represent the software you wish to install within a virtual image. The definition of a bundle contains two major parts: Installation and Configuration. The installation component of a bundle tells the Image Construction and Composition Tool how to install your software into the virtual image. You provide a script or set of scripts that install the necessary components into your image, and you direct the tool to call these scripts. These tasks run once during the initial creation of the virtual image, thus allowing you to capture large binaries, long-running installation tasks, or other necessary actions directly into your image.
The configuration section of a bundle defines actions that configure the software installed into the image. Like with the installation tasks, you provide a script or set of scripts for configuration tasks. Unlike installation tasks that run exactly once, configuration scripts become part of the image’s activation framework and as such, run during each image deployment. Using the tool, you can define input parameters for configuration scripts and optionally expose them so that users can provide values for the parameters at image deploy-time. Configuration tasks are important in providing flexibility that allows users to leverage a single virtual image for a number of different deployment scenarios.
Once you have your base OS image and one or more bundles defined in the Image Construction and Composition Tool, you can compose a virtual image. To compose a virtual image, you extend the base OS image and add any number of bundles into the new image. A base OS image plus a set of bundles defines a unique image.
After you define the image you want to construct, you initiate a synchronize action in the Image Construction and Composition Tool. When you start the synchronize action, the tool first creates a virtual machine in either a VMware or IBM Cloud environment (based on how you configured the tool). Next, the installation tasks of each bundle you included in the virtual image run to install the required software. Finally, the tool copies the configuration scripts from each bundle into the virtual machine and adds them to the image’s activation framework. This ensures the automatic invocation of all configuration scripts during subsequent image deployments.
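The synchronize flow described above can be sketched roughly as follows. The object model here (cloud.create_vm, vm.run, and so on) is invented for illustration; it is not the tool's actual API.

```python
# Illustrative sketch of the synchronize flow: provision a VM, run each
# bundle's install tasks once, then stage each bundle's configuration
# scripts into the activation framework for deploy time.
def synchronize(image, cloud):
    """Build a VM from a base OS image plus its bundles."""
    vm = cloud.create_vm(image.base_os)          # 1. create the VM
    for bundle in image.bundles:                 # 2. install tasks run
        for script in bundle.install_scripts:    #    exactly once, here
            vm.run(script)
    for bundle in image.bundles:                 # 3. config scripts join the
        for script in bundle.config_scripts:     #    activation framework and
            vm.stage_for_activation(script)      #    run at every deployment
    return vm
```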
Once the image is in the synchronized state, you can capture it. Capturing the image results in the creation of a virtual image based on the state of the synchronized virtual machine. The tool also automates the generation of metadata that becomes part of the virtual image package. When the capture of the virtual image completes, you can export it from the Image Construction and Composition Tool and deploy it using WebSphere CloudBurst, Tivoli Provisioning Manager, or the IBM Cloud.
I am excited for users to get their hands on the Image Construction and Composition Tool. I believe it represents the first big step in helping users to design and construct more sustainable virtual images. Did I mention it is completely free to download and use? Visit the Image Construction and Composition Tool website for more details and a download link. I look forward to your comments and feedback.
May is almost here and that means that IBM IMPACT is right around the corner. Just like in years past, IMPACT 2010 will be a great chance to get valuable education and insight into IBM WebSphere software and software from across the IBM software family. If you want to hear how IBM software is leading the march toward a smarter planet, register now.
IMPACT 2010 will be a great chance to hear the WebSphere cloud computing story. There will be multiple sessions on the WebSphere CloudBurst Appliance. These include customer-led sessions, internal adoption stories, overviews, and much more. I'll be there running a hands-on lab and delivering a session that discusses integration between WebSphere CloudBurst and IBM Rational tools. Of course, there is more to WebSphere and cloud computing than WebSphere CloudBurst. We have several other sessions that will detail all of IBM WebSphere's work in the cloud.
If you are interested, I put together a short video discussing some of the sessions on tap for WebSphere and cloud computing at IMPACT 2010. I'd also encourage you to check out the social media site for IBM IMPACT 2010. On that site, you will find tweets, videos, and blogs about the conference. Don't forget to sign up, and I hope to see you in Las Vegas!
-- Dustin Amrhein
I hate sitting on secrets. I always have. I understand that sometimes it's in the best interest of everyone (and your job) to keep tight lips, but that does not make it any more fun. Inevitably, the run-up to our annual Impact conference means everyone in the lab is doing their fair share of secret keeping -- just waiting for announce time. For a lot of us, that day ended Tuesday with the announcement of IBM Workload Deployer v3.0.
Now, you may be wondering, 'I have never heard of this. Why is it version 3.0?' Well, IBM Workload Deployer is a sort of evolution of the WebSphere CloudBurst Appliance, which was previously at version 2.0. This is good news for all of our current WebSphere CloudBurst users because all of the functionality they have been using in WebSphere CloudBurst (plus new bits, of course) is present in IBM Workload Deployer. You can use and customize our IBM Hypervisor Edition images in IBM Workload Deployer. You can build and deploy custom patterns that contain custom scripts in order to create highly customized IBM middleware environments. So, what's the big deal here? Two words: workload patterns.
Workload patterns represent a new cloud deployment model and are an evolution of the traditional topology patterns you may have seen with the WebSphere CloudBurst Appliance (I am a little torn between evolution and revolution, but that's splitting hairs). Fundamentally, workload patterns raise the level of abstraction one notch higher than topology patterns: the focus shifts to the application itself rather than the application infrastructure. Perhaps an example would help illustrate how a workload pattern works in IBM Workload Deployer.
Let's consider the use of a workload pattern that was part of the recent announcement, the IBM Workload Deployer Pattern for Web Applications v1.0. Just how might something like this work? It's simple really. You upload your application (maybe a WAR or EAR file), upload a database schema file (if you want to deploy a database with the solution), upload an LDIF file (if you want to set up an LDAP server in the deployment for application security), attach policies that describe application requirements (autonomic scaling behavior, availability guidelines, etc.), and hit the deploy button. IBM Workload Deployer handles setting up the necessary application middleware, installing and configuring applications, and then managing the resultant runtime in accordance with the policies you defined. In short, workload patterns provide a completely application-centric approach to deploying environments to the cloud.
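As a rough illustration of the inputs involved, here is a hypothetical sketch of assembling such a deployment. The field names and structure are entirely my own invention for the sake of the example; they do not reflect the actual IBM Workload Deployer interface.

```python
# Purely illustrative sketch of what a workload pattern deployment
# request might contain. The field names are invented, not the actual
# IBM Workload Deployer API.
def build_deployment(app_archive, schema=None, ldif=None, policies=None):
    """Assemble the artifacts and policies for one pattern deployment.

    Only the application archive is mandatory; the database schema and
    LDIF file are optional, mirroring the choices described above.
    """
    payload = {"application": app_archive, "policies": policies or []}
    if schema:
        payload["database_schema"] = schema
    if ldif:
        payload["ldap_config"] = ldif
    return payload
```

The point of the sketch is simply that the user supplies application-level artifacts and requirements, and everything below that line (middleware topology, installation, integration) is the platform's concern.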
Now, if you are a middleware administrator, application developer, or just a keen observer, you probably have picked up on the fact that in order to deliver something as consumable and easy to use as what I described above, one must make a certain number of assumptions. You are right. Workload patterns encapsulate the installation, configuration, and integration of middleware, as well as the installation and configuration of applications that run on that middleware. Most of this is completely hidden from you, the user. This means you have less control over configuration and integration, but you also have significantly reduced labor and increased freedom/agility. You can concentrate on the development of the application and its components and let IBM Workload Deployer create and manage the infrastructure that services that application.
Having lobbied a bit for the benefits of workload patterns, I also completely understand that sometimes you just need more control. That is not a problem in IBM Workload Deployer because, as I said before, you can still create custom patterns with custom scripts based on custom IBM Hypervisor Edition images. The bottom line is that IBM Workload Deployer offers choice and flexibility. If your application profile meshes well with a workload pattern, by all means use it. If you need more control over configuration or more highly customized environments, look into IBM Hypervisor Edition images and topology patterns. They are both present in IBM Workload Deployer, and the choice is yours.
If you happen to be coming to IBM Impact next week, there will be much more information about IBM Workload Deployer. I encourage you to drop by our sessions, ask questions, and take the opportunity to meet some of our IBM lab experts. Hope to see you in Las Vegas!
IBM Impact 2011 was a wildly busy week! Customer meetings, entertaining keynotes, informative sessions, and hands-on labs packed the 6 days with more than enough action. I spent a lot of the week presenting sessions and conducting labs for the newly announced IBM Workload Deployer. As one would expect with any new announcement, we got tons of questions about IBM Workload Deployer. While I cannot capture all the questions and their answers here, I will try to cover some of the more prevalent ones below.
Question: What happened to WebSphere CloudBurst?
Answer: The short answer is that it simply went through a rename. WebSphere CloudBurst became IBM Workload Deployer v3.0. The version number 3.0 acknowledges that this is an evolution of what we started with WebSphere CloudBurst, which was at version 2.0. Why remove WebSphere from the name? The IBM branding is simply more accurate, since the offering is capable of deploying and managing more than just WebSphere software.
Question: What is new in IBM Workload Deployer?
Answer: While there are many new features that I will be talking about over the coming months, the most prominent new facet is the introduction of workload patterns (also referred to as virtual application patterns). As opposed to topology patterns (traditionally referred to as simply patterns in the WebSphere CloudBurst product), workload patterns raise the level of abstraction to the application level. Instead of focusing on application infrastructure and its configuration as you do with topology patterns, workload patterns allow you to focus on the application and its requirements. When using workload patterns, you provide the application, attach policies that specify functional and non-functional requirements, and deploy. IBM Workload Deployer handles deploying and integrating the middleware infrastructure necessary to support the application, and it automatically deploys your application on top of that middleware. In addition, IBM Workload Deployer manages the application runtime in accordance with the policies that you specify in order to provide capabilities such as runtime elasticity.
Question: If I am a current WebSphere CloudBurst user, what does this mean for me?
Answer: Not to worry. You will be able to use all of your WebSphere CloudBurst assets (patterns, scripts, images) in the new IBM Workload Deployer. All of the capabilities previously in WebSphere CloudBurst are present in IBM Workload Deployer (terminology may vary slightly -- topology pattern instead of just pattern for instance). Additionally, we continue to expand on the functionality that you are familiar with from WebSphere CloudBurst. This includes updates for Environment Profiles, new IBM Hypervisor Edition images, new pattern building capabilities, and more. Stay tuned for more information about these new features and for information on how you can move your WebSphere CloudBurst resources to the new IBM Workload Deployer.
Question: How do I choose between using workload and topology patterns?
Answer: There are a number of factors that will lead you to using either workload patterns, topology patterns, or both. The primary decision point will be how much control you really need (not want). When using workload patterns, you sacrifice some customization control over the configuration, integration, and administration of the middleware application environment since the workload pattern and management model abstracts away the 'guts' of the system. Everything about the workload pattern is application-centric. On the other hand, topology patterns give you intimate control over the configuration, integration, and administration of the middleware application environment. As a general rule of thumb, if your application requirements match the capabilities of a workload pattern, that is the way to go as it can greatly reduce complexity and cost associated with deployment and management. If a workload pattern does not meet the needs of your application, topology patterns can still greatly reduce cost and complexity and you can tailor them to fit almost any need. Beyond generalities, there is no hard and fast rule for choosing one over the other. It comes down to understanding your application environment and its needs.
Question: Is IBM Workload Deployer an appliance like WebSphere CloudBurst?
Answer: Yes, it is still an appliance, but an updated one! The new appliance is 2U, and it provides more storage, processing power, and memory. It is still just as easy to set up, but just slightly bigger.
Well, that is all for now, but I will be back many times over the coming months with more information. In the meantime, if you have any questions, please leave them in a comment below.
The Impact conference disappeared into our collective rearview mirror almost as fast as it arrived. It seems like a blur. In a word, the conference was... exhausting! In other words, it was informative, exciting, and illuminating. I hope that many of you had a chance to make it out there, and I hope more of you make it to Impact in 2013.
For those of you familiar with the conference, you know that it is typically a launching ground for new product versions and altogether new products. This year was certainly no different with the launch of the new version of WebSphere Application Server (8.5), the new and improved IBM Business Process Manager and IBM Operational Decision Manager, a new version of WebSphere eXtreme Scale (8.5), and numerous updates across the messaging and connectivity stack. While I encourage you to follow up on all of these important announcements, they are not what I am going to focus on today. Instead, I am going to focus on the new addition to the IBM family that got plenty of attention this year: IBM PureApplication System.
Joe recently touched on this new offering, so I won't get into an exhaustive overview. To put it briefly, IBM PureApplication System is an expert integrated system. What does that mean? First and foremost it means that it is a system -- a whole. It is an integrated platform of hardware and software, optimized and tuned for running transactional web and database workloads. I do not mean that it is a system of software that we pre-install on off-the-shelf hardware. Rather, it is the result of hardware and software engineers across IBM working together to build a system that is expert at what it does. More than just the web application and database software though, IBM PureApplication System also contains pre-installed and pre-configured management software that delivers a soup to nuts (hardware to application) single pane of glass for managing the entire system. I could go on and on, but again that's not my purpose here. I encourage you to check out the new IBM PureSystems web page for more information and some pretty cool videos.
Those of you who take a look at IBM PureApplication System will quickly find that the notion of pattern-based deployments (something I have talked about at length on this blog) plays a key role in the new system. In fact, the same virtual system and virtual application pattern constructs that you have come to know in IBM Workload Deployer are front and center in IBM PureApplication System as well. In the new system, you can build custom virtual system and virtual application patterns, deploy them to your cloud, and then manage them over time. If you are familiar with the IBM Workload Deployer user interface, you will likely find yourself immediately familiar with the interface of IBM PureApplication System. Given all of that, if you are like many of the users I talked to at Impact and since, you probably have some questions about IBM Workload Deployer and this new system. Most commonly, I get these two questions: "What does this mean for the IBM Workload Deployer product?" and "How do I know when to use IBM PureApplication System versus IBM Workload Deployer?" Let me do my best to address those questions.
In terms of the impact of the IBM PureApplication System on the IBM Workload Deployer offering, I can only view it in one way: affirmation. As I said above, IBM PureApplication System puts the model of pattern-based deployment front and center, and further affirms that this kind of approach is crucial to the evolution of application delivery and management. Those of you familiar with IBM Workload Deployer or its predecessor WebSphere CloudBurst know that we have been talking about patterns for years. Rest assured we will continue to talk about patterns and solutions for building, deploying, and managing them. As it stands, we have at least three ways for you to build, deploy, and manage patterns: IBM SmartCloud Application Services, IBM Workload Deployer, and IBM PureApplication System. As you can see, options for consuming patterns have only increased since the initial launch of WebSphere CloudBurst. Furthermore, if you were at Impact, you know that we have a vibrant and vocal community of IBM Workload Deployer users, and I hope to see that community continue to grow! As I see it, the core technology of IBM Workload Deployer is becoming our 'operating system' for cloud platform management.
The question of when to use IBM Workload Deployer or when to use IBM PureApplication System is one whose answer is a bit more nuanced and not something one can or should try to definitively answer in a blog post. One thing I do suggest though is that when evaluating these two technologies, it is important to acknowledge that they have different business value propositions. Sure they share common core technology in terms of building, deploying, and managing pattern-based environments, but beyond that they diverge a bit. Remember, IBM PureApplication System is, well, a system. It is the hardware, software, and management technology you need to run your middleware application workloads. It is pre-built and pre-integrated to the point that it only requires you to roll it into your datacenter, hook it up to your network, and do some one-time configuration. The target is four hours from receipt of the system to your first deployment up and running, and if you were at Impact you saw an amusing video with the chief architect (Jason McGee) that proves this claim.
IBM Workload Deployer is fundamentally different in terms of how you consume it and how it integrates with your infrastructure. Most notably, IBM Workload Deployer does not include optimized hardware (servers, storage, networking) for running your workloads or a single point of management for everything from hardware to applications. To use IBM Workload Deployer you attach it to your network and point it at existing virtualized servers. Simply put, IBM Workload Deployer assumes you have existing, under-utilized hardware that you can get more out of with the intelligent deployment and management approach the appliance delivers. While you do not get the pre-integrated and optimized system of hardware plus software, you do get the flexibility to use your existing infrastructure.
As you can see, there are similarities (patterns) and differences (whole system vs. management system), and the result is a pretty different set of value propositions. The key in evaluating these technologies is that you do so with a crisp understanding of your current needs AND your future plans for growth and evolution. I know this kind of advice is a bit generalized, but I hope the differences I discussed above help you to at least understand the capabilities of the two different offerings. As always, if you have any comments or questions, please reply to the post!
During the week of IMPACT this year, we announced the launch of the WebSphere CloudBurst Samples Gallery. You can go to this gallery to find and download sample script packages, CLI scripts, and other tools that we hope help you in your endeavors with the appliance. The samples are free to use and offered in an "as-is" fashion.
While I certainly will not write about each and every sample we post out there, I did want to call your attention to a new one I just put up today. The new sample is neither a CLI script nor a script package, though you will find it in the script packages section of the gallery. Instead, the new sample is a tool that you can run to produce WebSphere CloudBurst script packages.
Specifically, the tool runs against a target WebSphere cell to produce a WebSphere CloudBurst script package that encapsulates that cell's configuration. The tool works by running the backupConfig command against the target cell. It packages the ZIP file that results from running the command into a special WebSphere CloudBurst script package that you can include in patterns which match the source cell in node quantity and type.
The script package produced by the tool packages logic to run the restoreConfig command using the backed up configuration from the source cell. This will apply the source configuration to a new WebSphere Application Server cell created as the result of deploying a pattern. In addition, the script package contains logic to handle the possibility of changing cell, node, and host names in the target environment.
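To make the mechanics concrete, the two commands involved can be sketched like this. backupConfig and restoreConfig are real WebSphere Application Server commands, but the wrapper functions, paths, and file names below are purely illustrative; they are not taken from the sample tool itself.

```python
# Illustrative sketch of the backup/restore command lines the tool and
# its generated script package revolve around. Profile paths and the
# backup file name are assumptions, not the sample's actual behavior.
import os


def backup_command(profile_root, backup_zip):
    """backupConfig runs against the source cell to capture its
    configuration into a ZIP file."""
    return [os.path.join(profile_root, "bin", "backupConfig.sh"), backup_zip]


def restore_command(profile_root, backup_zip):
    """restoreConfig applies that captured configuration to the new
    cell created by deploying the pattern."""
    return [os.path.join(profile_root, "bin", "restoreConfig.sh"), backup_zip]
```

The backup runs once, against the existing cell; the restore runs inside the script package at deployment time, which is also where the cell, node, and host name adjustments mentioned above would happen.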
The tool’s purpose is to help you accelerate the process of importing your existing WebSphere Application Server environments into the appliance as patterns (which is a problem I believe many of you would like to solve). It certainly does not handle everything you need to do to import environments. In fact, it has the same limitations as the backupConfig/restoreConfig utilities in WebSphere Application Server. However, I do believe that it makes it a little easier to start moving your existing environments into the appliance as new WebSphere CloudBurst patterns.
Check out this video to see a quick overview of the tool, and then go download it for free from the samples gallery. The ZIP file that you download has a readme file that gives specific detail about how to use this sample tool. As always, please let me know if you have any questions or feedback.
Virtual image parts play a huge role in WebSphere CloudBurst. When crafting your own customized patterns, you include anywhere from 1 to n parts from as many different virtual images as is necessary. These parts represent the different node types or personalities within a given Hypervisor Edition image, and form the basis of your pattern. When you deploy a pattern, such as the one pictured below, WebSphere CloudBurst creates a distinct virtual machine for each part.
This means that after deploying the above WebSphere Application Server pattern, you will have four virtual machines comprising your virtual system. This gives you a clean separation of concern by providing a unique container for each of your application environment nodes. This can contribute to performance optimization, increased availability, and much more. However, this approach is not suitable for all use cases. In some scenarios, especially when trying to control costs and increase consolidation, you may want to deploy a multi-node WebSphere Application Server environment within a single virtual machine. Based on what I showed you above, you might think our approach in WebSphere CloudBurst makes this impossible, but you would be overlooking an important component of patterns.
That component is of course the second building block of patterns... script packages. As you probably know, script packages allow you to supply just about any customization you want. In the case that you want a single virtual machine to host a number of WebSphere Application Server nodes, maybe even an entire cell, all you need to do is supply a script package that constructs the necessary nodes during deployment. In fact, you don't even have to write the script package. You can use the free sample in our samples gallery. As seen in the pattern below, you include this script package on a sole deployment manager part in a pattern.
The script package provides parameters that define the node name, the number of custom nodes, and the number of web server nodes you want in your cell. During the deployment process, the script takes this information and constructs the cell you define. This includes creating the custom and web server nodes and federating the custom nodes, thus completing the creation of your WebSphere Application Server cell. In this case, the script package provides deployment flexibility that is sometimes a necessity, and it is just another example of the many degrees of flexibility enabled by the script package design.
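One way to picture how those parameters expand into a cell layout is the following sketch. The parameter handling and node-naming scheme here are my own illustration, not the actual sample's implementation.

```python
# Illustrative sketch of expanding the sample's parameters (base node
# name, custom node count, web server node count) into the set of nodes
# the script would create. The naming convention is assumed.
def plan_nodes(base_name, custom_nodes, web_nodes):
    """Return the node names to create inside the single VM; the custom
    nodes would subsequently be federated into the cell."""
    names = [f"{base_name}Custom{i + 1}" for i in range(custom_nodes)]
    names += [f"{base_name}Web{i + 1}" for i in range(web_nodes)]
    return names
```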
I should point out that a part in a pattern does not always map to a single node. For instance, in the case of WebSphere Process Server, there is a part that represents a complete, multi-node golden topology encapsulated within a single virtual machine. However, if you find yourself using images that do not contain these multi-node parts, rest easy knowing script packages provide you the flexibility you need.
It isn't often that the IT world looks at the federal government as a technological pioneer, and the new CIO of the Office of Management & Budget, Vivek Kundra, thinks that is a problem. Kundra is striving for the federal government to become a key leader in innovation, and in doing so, he's looking at cloud computing as a first step.
Due to the sensitivity and privacy of the data the government often handles, one might not think cloud computing is the best fit. Kundra, however, does not see that as a show stopper. "We recognize that whether it's cloud computing or any area of technology there is sensitive and classified information and it cannot be treated the same way as public information, but they are not mutually exclusive." Though he offers only a few words, Kundra makes it clear that one of the biggest perceived fears of adopting cloud computing, security and privacy, will not stand in the way of the federal government's march to innovation.
It's still early in his tenure, but the First CIO seems to be serious about his push for cloud computing. He says that he is "killing projects that don't investigate software as a service first", and he is keen on looking to the cloud for storage and web development solutions. Kundra also believes that by leveraging cloud solutions across the multitude of federal agencies, we can ensure that resources are used only when needed by replacing the always-on data center with outsourced solutions where possible. He's also counting on the adoption of cloud computing to send a strong message that government agencies can lead technological innovation.
I doubt anyone would discount the benefits Kundra is seeking by attempting to move the federal government into the cloud. He hopes to reduce tax dollar spending by using only the necessary IT resources, improve end-user services for taxpayers, and foster a culture of technological innovation among a myriad of federal agencies. There is no doubt that such a transition and culture change will not come easy, but Kundra sounds dedicated to an honest effort at change. If the federal government is able to effectively leverage cloud computing solutions, I believe it could be a trend-setter for many organizations. If an entity as unwieldy and complex as the federal government can adopt and derive benefits from cloud computing, I believe many organizations that once discounted the technology may take a fresh look at the capabilities at hand.