Ask any enterprise about its overall IT architecture or strategy, and it won't be long before you're looking at its middleware infrastructure and the services hosted there. This infrastructure is often key to an enterprise's IT capabilities because many of the services it hosts are outward-facing, revenue-generating applications. The infrastructure needs to keep applications responsive regardless of demand, and ideally that need is carefully balanced against inefficient resource use. However, striking that balance is much easier said than done. In practice, environments are often statically configured for the peak demands of the system, which ensures responsive services but wastes resources and money during off-peak times. WebSphere Virtual Enterprise seeks to address this need for balance by extending the cloud computing concepts of virtualization and virtualization management to middleware and middleware applications.
You may be wondering how WebSphere Virtual Enterprise provides such balance. That brings us to one of its most important concepts: dynamic provisioning of middleware and applications is directly linked to application performance. Application performance goals are stated to the system via application service policies. These goals are expressed in terms of both application responsiveness and the importance of achieving that responsiveness relative to other applications deployed within the system. This allows for a quantitative description of what 'good' performance is, and it lets users separate business-critical applications from those that are secondary to the business. By linking provisioning directly to application performance, enterprises can be assured that resources are allocated based on the needs of the system's users.
It's nice to have the ability to state application service policies, but a policy is nothing if the system doesn't have the ability to act on it. That's where dynamic clusters and on demand routers (ODRs) enter the picture. Dynamic clusters provide the capability to expand and contract the number of middleware servers and associated applications that are available to serve requests. If the system notices service policies are being violated, more instances of servers hosting the application associated with the service policy can be started on the dynamic cluster. Conversely, if WebSphere Virtual Enterprise detects that service policies can be met with fewer resources, instances of servers can be stopped and resources reclaimed. It's also important to point out that dynamic clusters can contain both IBM and non-IBM middleware components, allowing the capabilities of WebSphere Virtual Enterprise to extend to many different technologies.
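To make the scale-out/scale-in decision concrete, here is a minimal, hypothetical Python sketch of how a service policy's response-time goal could drive the size of a dynamic cluster. The names, thresholds, and one-instance-at-a-time stepping are all invented for illustration; WebSphere Virtual Enterprise's actual autonomic managers are far more sophisticated.

```python
from dataclasses import dataclass

@dataclass
class ServicePolicy:
    """Toy model of an application service policy: a response-time
    goal plus a relative business importance (both names invented)."""
    goal_ms: float       # target average response time
    importance: int      # higher = more business-critical

def desired_instances(current: int, observed_ms: float,
                      policy: ServicePolicy,
                      min_instances: int = 1, max_instances: int = 10) -> int:
    """Scale the cluster out when the response-time goal is being
    violated, and back in when there is comfortable headroom
    (here, arbitrarily, under half the goal)."""
    if observed_ms > policy.goal_ms:
        return min(current + 1, max_instances)   # policy violated: expand
    if observed_ms < policy.goal_ms / 2 and current > min_instances:
        return current - 1                       # off-peak: reclaim resources
    return current

gold = ServicePolicy(goal_ms=500, importance=10)
print(desired_instances(2, 750, gold))   # goal violated -> 3
print(desired_instances(3, 100, gold))   # ample headroom -> 2
print(desired_instances(2, 400, gold))   # within goal -> 2
```

The importance field would come into play when two applications contend for the same resources: the lower-importance application yields first.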
ODRs are the entry point into a WebSphere Virtual Enterprise environment and help to shape the request traffic entering the system. ODRs provide all the features of an HTTP 1.0/1.1 compliant proxy, and incorporate additional on demand features such as request prioritization, request queuing, request routing, and more. Intelligent request routing is achieved by balancing the current system load with the service policies of the application being requested to ensure members are targeted in a way that allows the system to meet the service goals. ODRs provide the necessary gate-keeping duties to most effectively utilize components of a WebSphere Virtual Enterprise environment.
The four short paragraphs above only begin to scratch the surface of WebSphere Virtual Enterprise. Its ability to provide dynamically scaled, autonomic middleware and middleware applications can give companies a leg up on their competition by ensuring responsive services balanced against efficient resource use. In effect, WebSphere Virtual Enterprise helps companies implement a smarter middleware infrastructure. Click here to read more about WebSphere Virtual Enterprise, and don't forget to follow us on Twitter. If you have any questions about WebSphere Virtual Enterprise or cloud computing, send us an email at email@example.com.
I want to stay in the realm of the deployment process for our next frequently asked question regarding the WebSphere CloudBurst Appliance.
The ability to quickly deploy entire WebSphere Application Server cells (anything from single-node cells to multi-node clustered cells) is a hugely compelling feature of the appliance. Instead of spending hours or even days deploying a WebSphere Application Server cell, users can deploy one in a matter of minutes (less than twenty minutes even for clustered environments)!
For the most part, WebSphere CloudBurst patterns represent entire cells. This includes management parts (AdminAgent, DeploymentManager), managed parts (custom nodes), and proxy parts (IHS). When you deploy a pattern, the result is a complete and fully functional WebSphere Application Server cell running in your private cloud.
So, now that you have a complete cell out in your cloud, what happens if you need to add more nodes? If user demand for the applications on your cell has exceeded the initial topology, can you use WebSphere CloudBurst to add more nodes? Sure you can!
In short, this involves creating a pattern that contains only a custom node part, and then at deploy time, providing information about the existing cell. WebSphere CloudBurst then takes over the deployment of that custom node and federates the node into the existing cell based on the information supplied about that cell. I won't go into an entire explanation here, because I think the demo I put on our YouTube channel explains it pretty well.
In my experience with the WebSphere Application Server, this represents a much more seamless and automated process for deploying new nodes into an existing cell than what exists outside of WebSphere CloudBurst today. Of course, we value your comments and feedback above all else. So let us know what you think!
Over the last three posts I've been discussing a few of the most frequently asked questions regarding the WebSphere CloudBurst Appliance. I'd like to wrap up today with a fourth and final installment.
If you have read some of my entries before, or if you have read any of our WebSphere CloudBurst articles on IBM's developerWorks, then you know that the appliance brings extreme simplification and safety to applying fixes and service level upgrades to running WebSphere Application Server virtual systems. Users select a virtual system, choose a fix or service level upgrade, and then WebSphere CloudBurst drives the application of the fix or upgrade to the system. Before applying the fix or upgrade, the appliance takes a snapshot of the virtual system, and users can simply click a button to roll back to the previous state if the process produces undesired results.
This is a pretty strong value add to WebSphere Application Server management and one that our users typically understand immediately. Almost always though, after users see this they are curious about another aspect of rolling out fixes and upgrades in WebSphere CloudBurst. In particular, they want to know how to ensure that all subsequent deployments (after applying the fix to a specific virtual system) have the correct fixes and service levels.
The answer to this inquiry is that there are a couple of different ways to achieve this, and the right one depends on what you are trying to accomplish and your preferences. For instance, if you want to make sure all of your subsequent deployments have a particular interim fix, you will likely go the route of image extension. First, you pick the WebSphere Application Server Hypervisor Edition image in your catalog to which the fix applies. Next, you extend that image, and once a virtual machine based off the image is accessible, you use existing WebSphere Application Server tools (Update Installer) to apply the fix. After the fix has been applied, you can capture the updated image and then use it as the basis for patterns created from that particular version of the WebSphere Application Server.
On the other hand, if you are looking to ensure subsequent deployments are based on a new level of the WebSphere Application Server, your process will be a bit different. First you would load a new WebSphere Application Server Hypervisor Edition image (based on the new level of WebSphere Application Server) into your WebSphere CloudBurst catalog. Then you would select any of your customized patterns you wanted to upgrade to the new level, clone that pattern, and simply select the new image as the basis for the pattern. All of your other customizations are preserved. Really, it's that simple!
I hope that over the last month I have answered some of the more common questions about WebSphere CloudBurst. At any point if you have any questions feel free to email me or leave a comment right here on the blog.
Over the past several months industry focus on cloud computing seems to have only intensified. Within IBM and for the purposes of this blog, WebSphere, there have been several announcements and offerings that indicate our commitment and belief in the cloud computing approach.
To further highlight WebSphere's focus and offerings in the cloud computing realm, we are embarking on a "WebSphere in the Clouds" campaign during the months of September and October. Our intent is to virtually deliver information about our cloud strategy and offerings directly from the experts to you, our WebSphere users.
The event will be kicked off by WebSphere's Director of Product Management, Kareem Yusuf, on September 23rd from 9-10 EDT. Kareem will talk about cloud computing in the enterprise and its relationship to SOA principles. In addition, he'll give an overview of what WebSphere has been doing in the cloud computing space. This will be followed by sessions from technical experts that detail WebSphere offerings in both the public and private clouds, as well as sessions that discuss enablers of application and application infrastructure elasticity.
To find out more about the "WebSphere in the Clouds" campaign, you can check out the main announcement page. To sign up for the series of virtual events visit the registration page. We hope you will join us for the series of webcasts to learn all about WebSphere's work in the clouds.
I'm going to take a different approach this week in the blog. Instead of me telling you about some of the features or uses of WebSphere CloudBurst, I thought I would catch up with someone using the product every day, WebSphere Test Architect Robbie Minshall. Robbie is responsible for a team of testers that harness a lab of over 2,000 physical machines to put our WebSphere Application Server product through some pretty rigorous testing. Toward the beginning of this year Robbie's team started to leverage the WebSphere CloudBurst Appliance in order to create the WebSphere Application Server environments needed for their testing.
Robbie, can you tell us a little bit about what the WebSphere Application Server test efforts entail?
In WebSphere Application Server development and test we have two primary scenarios. The first is making sure that developers have rapid access to code, test cases, and server topologies so that they can write code and test cases and then execute test scenarios on meaningful topologies. The second scenario is an automated daily regression where, in response to a build, we provision a massive number of WebSphere Application Server topologies and execute our automated regression tests.
Previously, we supported these scenarios through the deployment of Tivoli Provisioning Manager for operating system provisioning, some applications for checking out environments, and then a lot of automation scripts for the silent installation and configuration of WebSphere Application Server cells.
Given those scenarios and the existing solution, what are your motivations for setting up a private cloud using WebSphere CloudBurst Appliance?
We are supporting these scenarios through a pretty complicated combination of technologies. These include silent WAS install scripts, wsadmin configuration scripts, a custom hardware leasing application, and Tivoli Provisioning Manager for OS provisioning. This solution is working very well for us, though, as always, we are looking for areas to improve, opportunities to simplify, and ways to reduce our investment in custom automation scripts. Mainly, there were three areas where we wanted to improve our framework: availability, utilization, and management. This is why we started looking to the WebSphere CloudBurst Appliance.
Can you expand a bit on what you are looking for in those three areas?
The first focus area we have is availability of environments. We really wanted to lower the entry requirement for the skills and education necessary to get a development or test environment. Setting up these environments has just been too hard, too time consuming, and too error prone. Using WebSphere CloudBurst we can provide an easy push button solution for developers to get on-demand access to the topologies they need.
The second area we are looking for significant improvements in is hardware utilization. Our budgets are tight, and in our native automation pools we are only using between 6% and 12% of the available physical resources. In order to improve this we looked at leveraging virtualization. WebSphere CloudBurst offers the classic benefits of virtualization with the nice additions of optimized WebSphere Application Server placement and really good topology and pattern management. In our initial experiments we were able to push hardware utilization up to 90% of physical capacity, and we were consistently leveraging around 70% of our physical capacity.
Finally we are looking to improve and simplify our management of physical resources and automation. We work in a lot of small agile teams and organizational priorities change from iteration to iteration. Not only does WebSphere CloudBurst allow us to maintain a catalog of topologies or patterns for releases but it also allows us to adjust physical resource allocation to teams through the use of sub clouds or cloud groups.
Basically we felt that WebSphere CloudBurst would improve the availability of application environments, enhance automation, and improve hardware utilization all with very low physical and administrative costs.
What were some of the challenges involved with getting a cloud up and running in your test department?
One of our challenges seems like it would be common to many scenarios, especially in today's world. Our budget for new hardware to build out our cloud infrastructure was initially very limited. Most cloud infrastructure designs depict ideal hardware scenarios, including SANs, large multicore machines, and private and public networks within a dedicated lab. Quite frankly, we did not have the budget to create this from scratch. It was important for us to demonstrate value and data to warrant future investment in dedicated infrastructure. After some performance comparisons we were pleasantly surprised to see that we could leverage our existing mixed hardware within a distributed cloud. The performance of application environments dispensed by WebSphere CloudBurst on many small existing boxes was very comparable to that on large multicore machines with a SAN. This allows us to leverage existing hardware with minimal investment, all the while demonstrating the value and efficiencies of cloud computing. That data in turn has allowed us to obtain new dedicated hardware to iteratively build up a larger lab specifically for use with WebSphere CloudBurst.
Specifically with WebSphere CloudBurst, are there any tips/hints you would offer users getting started with the appliance?
Sure. First, we quickly realized as we added hypervisors to our WebSphere CloudBurst setup that it was critical to have someone with network knowledge on hand. This is because the hypervisors came from various sections of our lab, and we really needed people who knew how the network operated in those different sections. Once we had the right people we were able to set up WebSphere CloudBurst and deploy patterns within an hour and a half.
Moving forward we continued to have challenges as we dynamically moved systems between our native hardware pool and our cloud. Occasionally the WebSphere CloudBurst administrator would move a system into the cloud but incorrectly configure the network or storage information. This led to some misconfigured hypervisors polluting our cloud. We overcame this, quite simply and satisfactorily I may add, by creating some simple WebSphere CloudBurst CLI scripts that add the hypervisors, test each one individually by carrying out a small deployment to it, and then move the correctly configured hypervisors into the cloud after verifying success. Misconfigured hypervisors go into a pool for problem determination. This has allowed us to maintain a clean cloud, and we are able to dynamically move our hardware in and out of the cloud to meet our business objectives.
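The verification workflow described above can be sketched as follows. This is a hypothetical Python illustration of the scripts' logic, not the actual WebSphere CloudBurst CLI calls; `fake_deploy` stands in for a real test deployment against a candidate hypervisor.

```python
def verify_hypervisor(hypervisor, deploy):
    """Run a small test deployment against a single hypervisor and
    report whether it appears to be configured correctly."""
    try:
        deploy(hypervisor)
        return True
    except Exception:
        return False

def partition_hypervisors(candidates, deploy):
    """Sort candidate hypervisors into the clean cloud pool or a
    quarantine pool for problem determination."""
    cloud, quarantine = [], []
    for hv in candidates:
        (cloud if verify_hypervisor(hv, deploy) else quarantine).append(hv)
    return cloud, quarantine

# Stand-in for a real CLI deployment call: here, any hypervisor
# record lacking storage information fails the test deployment.
def fake_deploy(hv):
    if not hv.get("storage"):
        raise RuntimeError("misconfigured storage")

candidates = [
    {"name": "hv1", "storage": "san-a"},
    {"name": "hv2", "storage": None},      # misconfigured
    {"name": "hv3", "storage": "san-b"},
]
cloud, quarantine = partition_hypervisors(candidates, fake_deploy)
print([hv["name"] for hv in cloud])        # ['hv1', 'hv3']
print([hv["name"] for hv in quarantine])   # ['hv2']
```

The key design point is that a hypervisor only joins the cloud after a real deployment has succeeded against it, so a bad network or storage entry can never pollute the shared pool.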
We also use the WebSphere CloudBurst CLI to prime the cloud, so to speak. Before using a given hypervisor in our cloud, we execute scripts that ensure each unique virtual image in our catalog has been deployed to each of our hypervisors at least once. When an image is first deployed to a hypervisor, a cache is created on the hypervisor side of the connection, so subsequent deployments do not require the entire image to be transferred over the wire. This gives us consistent and fast deployment times once we are using a hypervisor in our cloud.
I would assume that like many applications deployed on WebSphere Application Server, your team’s applications have several external dependencies. Some of these dependencies won’t necessarily be in the cloud, so how did you handle this?
You're right about the external dependencies. Our applications and test cases run on the WebSphere Application Server but are dependent upon many external resources such as databases, LDAP servers, external web services, etc. WebSphere CloudBurst allows us to deploy WAS topologies in a very dynamic and configurable way, but the 1.0.1 version does not allow us to deploy these external resources in the same manner. We overcame this by using script packages in our patterns. These script packages allow us to associate our test applications with the various patterns we have defined. The script package definition also allows us to pass parameters into the execution of our scripts. We supply these parameter values at deploy time, and the values are used to convey the name or location of various external resources. The scripts that install our applications can access these values and ensure the application is properly integrated with the set of resources not managed by the appliance.
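As a rough illustration of the script-package approach, here is a hedged Python sketch of an install script reading deploy-time parameters. It assumes the parameter values reach the script as environment variables; the variable names (`DB_HOST`, `LDAP_URL`) are invented for this example and are not fixed CloudBurst names.

```python
import os

def external_resource_config(env=os.environ):
    """Read deploy-time parameters conveying the location of resources
    the appliance does not manage. DB_HOST and LDAP_URL are purely
    illustrative names; defaults stand in for a local dev setup."""
    return {
        "db_host": env.get("DB_HOST", "localhost"),
        "ldap_url": env.get("LDAP_URL", "ldap://localhost:389"),
    }

# At deploy time the pattern deployer supplies the real values,
# which the application-install logic then consumes:
cfg = external_resource_config({
    "DB_HOST": "db1.test.example.com",
    "LDAP_URL": "ldap://ldap1.test.example.com:389",
})
print(cfg["db_host"])   # db1.test.example.com
```

The same script can then feed these values into whatever configures the application's data sources, keeping the pattern itself independent of any one test lab's resource names.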
What is your team looking to do next with WebSphere CloudBurst and their private cloud?
The next challenge on our plate is to keep up with the demand of our expanding cloud and to develop a more dynamic relationship between our native pools and our cloud using the Tivoli Provisioning Manager. These are fun challenges to have and we look forward to sharing our progress.
I'm glad I got to spend some time with Robbie to glean some insight into their work and progress with WebSphere CloudBurst. I hope this information was useful to you. It's always nice to hear about a product from practitioners who can give you hints, tips, gotchas, and other useful information. Be sure to let me know if you have any questions about what Robbie and his team are doing with WebSphere CloudBurst.
If you've attended one of our WebSphere CloudBurst sessions then you've undoubtedly heard us talk about the "special sauce" or "WebSphere intelligence" delivered by the WebSphere CloudBurst Appliance. If you haven't attended one of our sessions, trust me, we talk about it a lot, but there's good reason. This "special sauce" truly sets WebSphere CloudBurst apart from other virtualization management tools.
Essential to the uniqueness of the WebSphere CloudBurst solution is the WebSphere Application Server Hypervisor Edition virtual image that it dispenses. In one sense, the intelligence comes in the form of pre-installed, tuned, and configured software. The operating system and WebSphere components are all pre-installed, and the WebSphere Application Server configuration is tuned based on performance best practices. In addition, the image comes with a pre-configured instance of each WebSphere Application Server profile type available in the bundled version. This saves time during deployment, since removing the unneeded profiles is much faster than creating the needed ones from scratch.
The pre-installed, tuned, and configured software only sets the foundation for what truly sets the WebSphere CloudBurst solution apart. The activation framework built into the WebSphere Application Server Hypervisor Edition image is what allows WebSphere CloudBurst to deliver unique value. This activation framework allows the single virtual image to turn into many different flavors of WebSphere Application Server (deployment managers, standalone nodes, custom nodes, job managers, etc.), and it provides the facilities to change WebSphere cell and node names, IP addresses, host names, and more while a running virtual machine instance is being created.
On a mostly unrelated note, changing WebSphere cell names, node names, and host names is done with documented, publicly available commands, either in wsadmin or in other WebSphere Application Server binaries. I know many customers want to do this exact same thing in their existing environments, so if you are wondering how it is done, drop me a line below.
Anyway, I won't get into anymore detail here because you can get a much better assessment of this special sauce elsewhere. Ruth Willenborg, one of the lead architects for the WebSphere CloudBurst Appliance, did a developerWorks Comment lines piece about this special sauce. Ruth provides a deeper look at the topics I hit on above, and it's a really good read. You can check it out for yourself here.
A recent announcement signaled the coming release of WebSphere CloudBurst 1.1. This new release of the WebSphere CloudBurst Appliance delivers enhancements to all phases of the lifecycle of virtualized WebSphere Application Server environments. Let's take a closer look at a few of these updates.
First and foremost, WebSphere CloudBurst 1.1 delivers support for the PowerVM platform. You can now deploy patterns to create virtualized WebSphere Application Server environments running in a PowerVM environment on pSeries servers. Among other things, this is enabled by a new version of the WebSphere Application Server Hypervisor Edition. This new version of the virtual image contains an AIX operating system and has been specifically bundled to allow it to be activated on the PowerVM hypervisor. From a user standpoint, building, deploying, and maintaining WebSphere Application Server environments is done from the same console with the same look and feel regardless of the target platform. Check out this demo to see WebSphere CloudBurst and PowerVM in action.
In addition to support for PowerVM environments, WebSphere CloudBurst 1.1 will also provide a trial edition of a DB2 virtual image. You can import this image into your WebSphere CloudBurst catalog and then use it to build and deploy DB2 environments. This allows you to, from the same centralized interface, deploy and integrate both your application and data environments in your private cloud. Check out this demo for more information on the new DB2 trial virtual image for WebSphere CloudBurst.
One other cool feature I want to point out delivers an enhancement to the use of script packages in WebSphere CloudBurst. In this new version of the appliance, you have more control around when script packages you include in a pattern are executed. Previously, these were executed toward the end of pattern deployment once all the necessary WebSphere Application Server components had been started. While that is still the default behavior, you can also elect to have the script package invoked when the virtual system is deleted, or you can choose the invocation to be user-initiated meaning that you decide when and how many times your script runs. To check out a pretty handy use case for this feature, watch the demo here.
These aren't the only new features and enhancements delivered in WebSphere CloudBurst 1.1. Stay tuned for more demonstrations and more words about these new features and when and why you would want to use them. In the meantime, if you have any questions be sure to stop by our forums.
Users of cloud computing solutions today expect to be charged for exactly the amount of compute resource they use. No more, no less. This expectation is often at the forefront of our customers' minds when contemplating the creation of internal or private clouds. They want to be sure that any solution they use audits the activity and usage of their cloud and enables them to consume this information to implement their specific chargeback scheme.
Though it's not a feature we always seem to talk about, WebSphere CloudBurst provides the necessary capabilities to properly allocate costs to users, teams, and organizations. To start with, there are some handy usage reports that you can view directly from the WebSphere CloudBurst console. For instance, as seen below, a WebSphere CloudBurst administrator can see a breakdown of cloud resource usage for each user of the appliance.
While the capability illustrated above is nice, it is likely that if you are implementing an enterprise-scale chargeback scheme you will want to automate the processing of the usage data, which implies the need to programmatically consume such data. WebSphere CloudBurst enables you to do just this by way of its audit log. The WebSphere CloudBurst audit log is a record of each and every action taken on the appliance, along with information about who took the action, when the action was taken, what object the action was taken on, and much more. You can instruct the appliance to generate this file for a specified date range, and the output is a comma-separated value (CSV) file that can then be consumed in a manner of your choosing.
As an example of some of the things you can do with this data, I recently wrote a Java program that parsed the audit file and for each virtual system determined who created it, who deleted it (if it had been removed), and the duration of its existence. This program was simple (more of a string parsing exercise than anything else), but nonetheless provided necessary function and output for billing schemes based on hours of usage. If you are interested in how this was done please let me know and I'd be happy to discuss details. In the meantime, if you have any thoughts you can reach me on Twitter via @WebSphereClouds.
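For readers curious what such a string-parsing exercise looks like, here is a rough Python equivalent of the idea (the original program was Java). The column names and records below are invented sample data, not the appliance's actual audit-log schema.

```python
import csv
import io
from datetime import datetime

# Illustrative audit records; the real log carries more columns.
AUDIT_CSV = """timestamp,user,action,object
2009-11-01 09:00:00,alice,create,vsys-42
2009-11-01 10:00:00,bob,create,vsys-77
2009-11-03 09:00:00,alice,delete,vsys-42
"""

def virtual_system_usage(audit_text):
    """For each virtual system, determine who created it, who deleted
    it (if anyone), and how many hours it existed."""
    created, deleted = {}, {}
    for row in csv.DictReader(io.StringIO(audit_text)):
        ts = datetime.strptime(row["timestamp"], "%Y-%m-%d %H:%M:%S")
        if row["action"] == "create":
            created[row["object"]] = (row["user"], ts)
        elif row["action"] == "delete":
            deleted[row["object"]] = (row["user"], ts)
    usage = {}
    for obj, (creator, start) in created.items():
        deleter, end = deleted.get(obj, (None, None))
        hours = (end - start).total_seconds() / 3600 if end else None
        usage[obj] = {"creator": creator, "deleter": deleter, "hours": hours}
    return usage

report = virtual_system_usage(AUDIT_CSV)
print(report["vsys-42"])  # {'creator': 'alice', 'deleter': 'alice', 'hours': 48.0}
```

A billing scheme based on hours of usage only needs the `hours` figure per virtual system, grouped by creator.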
It's about the time of year when we all look back and try to determine exactly how we spent the past twelve months. Whether we do it because we have to as part of year-end job reviews or because we like to take stock in what we've done and figure out where to improve next year, it's a time for reflection and recall. For me, this exercise made me take a look at various things we have done to deliver WebSphere CloudBurst technical collateral (articles, demos, blogs, etc.) in 2009.
For all practical purposes, our mission and efforts for such technical collateral for WebSphere CloudBurst started when it was announced at Impact in May of this year. Though there was certainly some preparatory work being done on this front, there was nothing we could really push to the public until after the announcement, and in some cases even after the appliance's release in June. Given that most of the content was produced over a six month stretch, I really think we put forth a strong effort, and I hope that this technical material has helped to both raise awareness of and educate users on the WebSphere CloudBurst Appliance.
Seeing as I already went back and rounded up this content, I thought I'd provide you a centralized look at the information. To start, I accounted for the articles that we published to the IBM developerWorks site over the six month stretch. Altogether I counted 8 articles and a special column entry:
As you can see, the articles cover quite a bit of ground, ranging from general overview pieces to technical, in-depth "how-to" style articles. In general they seem to have been well received, with over 26,000 views to this point. Our goal is to keep the pace up for 2010, and we already have a few articles on our plate for early in the new year (including an overview of what's new in WebSphere CloudBurst 1.1).
Another main medium we utilized to spread the word about WebSphere CloudBurst was YouTube. On our YouTube channel at http://youtube.com/websphereclouds, we currently have 17 different videos that demonstrate how to use certain features of the WebSphere CloudBurst Appliance. Though I think each demo provides value depending on exactly what a viewer is looking for, three of them really stick out for me.
Check out our videos if you get a chance. We've made an effort to keep them as short as possible while still providing value to viewers.
We have some WebSphere CloudBurst content spread around other places as well including this blog and my personal blog. Over the next few weeks we'll be taking a look at what worked and didn't work with respect to getting information out to the public. Of course at any time we very much appreciate your feedback on how you like to see content delivered because you are our target audience! If you have a comment, idea, or suggestion, leave a comment on the blog or send me a tweet to @WebSphereClouds.
I want to clear something up about WebSphere CloudBurst that can sometimes cause a bit of confusion. In nearly all of our content about the appliance, we talk about it in the context of building private clouds consisting of WebSphere application environments. Typically people think of private clouds as something only those within their organization can access and utilize. However, with WebSphere CloudBurst you are not limited to creating that kind of a private cloud.
Perhaps it is more fitting that we talk about WebSphere CloudBurst as a means to create on-premise clouds. After all, that's really what we mean. You create a shared pool of hardware and network resources owned by your organization, and then you define this cloud of resources to WebSphere CloudBurst. Once that cloud is defined, you can leverage WebSphere CloudBurst to dispense your WebSphere application environments into that cloud. The accessibility of your application environments running in that cloud is entirely up to you.
You may decide that the cloud is indeed private and that only those in your organization or a smaller subset of users can access the environments. On the other hand, you may decide that you want to allow consumers in the public domain to request WebSphere application environments and then have WebSphere CloudBurst provision those environments into a public cloud. I say public here because while the cloud's resources are on your premises, access to that cloud is not restricted to within the organizational firewall. Ultimately, the determining factor for whether your WebSphere CloudBurst cloud is public or private is the network configuration you provide. If the virtual machines are associated with network resources that are publicly accessible, then I would say you have a public cloud.
I hope this entry didn't just add to the confusion. The bottom line is this: WebSphere CloudBurst allows you to create, deploy, and maintain virtualized WebSphere environments in an on-premise cloud. Whether that cloud is public or private is entirely up to the network configuration that you set up.
If you are going to install and use WebSphere CloudBurst in your own environment, it is very likely that you would want at least two appliances. Perhaps you want to have a standby appliance in case of a failure on the main appliance, or maybe you have different teams that are looking to utilize the appliance in different data centers. In any case, once you install multiple appliances there's another requirement that will pop up pretty quickly. Naturally you are going to want to share custom artifacts among the various WebSphere CloudBurst boxes.
When I say custom artifacts, I mean primarily virtual images, patterns, and script packages. Script packages have been easy enough to share since WebSphere CloudBurst 1.0 because you can simply download the ZIP file from one appliance and upload it to another. However, there are some enhancements in WebSphere CloudBurst 1.1 that make it easy to share both patterns and images among your different appliances.
As far as patterns go, there is a new script, patternToPython.py, included in the samples directory of the WebSphere CloudBurst command line interface package. This script transforms a pattern you specify into a Python script. The resulting script can then be run against a different WebSphere CloudBurst appliance using the CLI, and the pattern is created on the target appliance. You need to be sure that the artifacts the pattern references (script packages and virtual images) exist on the target appliance and have the exact same names as they do on the appliance from which the pattern was taken. There are no other caveats, and this new sample script makes it really simple to move patterns between appliances.
For virtual images, a new feature was added that allows you to export a virtual image from the WebSphere CloudBurst console. Simply select a virtual image, specify a remote machine (any machine with SCP enabled), and click a button to export the image as an OVA file. This OVA file can then be added to another WebSphere CloudBurst catalog using the normal process for adding virtual images. You can see this feature in action here.
Stay tuned for more information about some of the handy new features in WebSphere CloudBurst 1.1. We also should have a comprehensive look at the new release coming soon in a developerWorks article.
Not long ago I created a demonstration that highlighted the new support for the PowerVM platform introduced in WebSphere CloudBurst 1.1. In that demonstration I showed how you can deploy to a PowerVM cloud by defining a new cloud group that interfaces with a VMControl instance to manage a pSeries cloud environment. However, in the demo I did not go into much detail about the components of a pSeries cloud used with WebSphere CloudBurst.
Since pictures help me out a lot, I thought I’d start the discussion with an image that depicts the components in the pSeries cloud environment and the workflow when using WebSphere CloudBurst to deploy systems to this environment.
The workflow begins when a user requests the deployment of a pattern and targets that deployment at a PowerVM cloud group. WebSphere CloudBurst first checks that the cloud group contains the compute resources necessary to deploy the pattern. After the resource checks are complete, WebSphere CloudBurst uses its intelligent placement algorithm to decide where to place each virtual machine that will be created from the deployment. No matter the type of cloud environment being utilized, the appliance retains control over placement decisions, ensuring the virtual system is deployed in a way that optimizes both performance and availability.
Once the placement decision has been made, WebSphere CloudBurst communicates with the VMControl instance, which in turn instructs the Hardware Management Console (HMC) to create LPARs on the targeted pSeries machines. These LPARs will host the virtual machines that represent the WebSphere Application Server nodes in your virtual system. After the LPARs have been created, WebSphere CloudBurst leverages VMControl to instruct the Network Installation Manager (NIM) to deploy virtual images to the necessary LPARs.
When the LPARs have been created and the virtual images have been deployed to those LPARs, the common process of virtual system creation can proceed. This process includes starting virtual machines, starting WebSphere Application Server components, and running any user-supplied scripts. The end result is a ready to use, virtualized WebSphere Application Server cell running on the PowerVM hypervisor platform.
I hope this provides a nice overview of the underlying environment when PowerVM hypervisors are used with WebSphere CloudBurst. As for those users who are not WebSphere CloudBurst cloud administrators, the information above is nice to know but not necessary. The user experience with respect to building, deploying, and managing your virtualized application environments with WebSphere CloudBurst is consistent regardless of the type of your cloud platform.
If you read some of my entries from time to time, chances are you know that you can use WebSphere CloudBurst to apply interim fixes and fixpacks to your deployed virtual systems. When you choose to apply either a fix or fixpack, WebSphere CloudBurst temporarily stops the virtual system, takes a snapshot of the system (the entire WebSphere cell), applies the fix or upgrade, and then starts the system back up. The result is an updated, running WebSphere cell, and if you need to, you can roll back the virtual system to the previous configuration by simply clicking a button.
In WebSphere CloudBurst 1.0, fixes and upgrades were applied via the web console, which made this process hard to automate. However, in WebSphere CloudBurst 1.1 you can use the command line interface to apply fixes and fixpacks to virtual systems. The appliance still takes the actions I described above, so the process is still simple, safe, and fast. The only difference is the interface through which you drive the application of the maintenance.
What does it look like? Quite frankly, it's very simple. I can go through all of my virtual systems and apply both fixes and fixpacks with the seven-line script below:

for virtualSystem in cloudburst.virtualsystems:
    fixes = virtualSystem.findFixes()
    if len(fixes) > 0:
        virtualSystem.applyFixes(fixes)    # method name assumed; not shown in the original listing
    upgrades = virtualSystem.findUpgrades()
    if len(upgrades) > 0:
        virtualSystem.applyUpgrades(upgrades)    # method name assumed
You can write this script once, save it as a Jython file, and run it with the CLI's batch mode anytime you want to roll out maintenance to your virtual systems. It's really amazing to me that the above SEVEN lines are capable of rolling out all fixes and all upgrades within your WebSphere CloudBurst catalog to every virtual system the appliance is managing. I can't think of an easier or safer way to automate the deployment of fixes/upgrades to your WebSphere environments.
Let me know if you have any questions. As always you can reach me on Twitter @WebSphereClouds.
I was at a customer meeting the other day, and someone asked me if they could query WebSphere CloudBurst for an inventory of all of their virtual system deployments. This person was of course aware that he could go to the web console and very quickly view all of the virtual systems. What he wanted though was something that he could run to generate a report that contained all of this information. For a purpose like this, harnessing the WebSphere CloudBurst CLI is exactly the way to go.
I thought I'd write a simple CLI script that provides an example of how you could do this.
import sys
from datetime import datetime

def writeVSDetails(outFile, virtualSystem):
    outFile.write("\tVirtual system name: " + virtualSystem.name + "\n")
    outFile.write("\tCreated from pattern: " + virtualSystem.pattern.name + "\n")
    outFile.write("\tVirtual system status: " + virtualSystem.currentstatus_text + "\n")
    created = datetime.fromtimestamp(virtualSystem.created)
    outFile.write("\tVirtual system creation date: " + created.strftime("%B %d, %Y %H:%M:%S") + "\n")
    outFile.write("\tTotal virtual machines: " + str(len(virtualSystem.virtualmachines)) + "\n")

def writeVMDetails(outFile, virtualMachine):
    outFile.write("\t\tVirtual machine name: " + virtualMachine.name + "\n")
    outFile.write("\t\tVirtual machine display name: " + virtualMachine.displayname + "\n")
    outFile.write("\t\tCreated from image: " + virtualMachine.virtualimage.name + "\n")
    outFile.write("\t\tVirtual machine hypervisor: " + virtualMachine.hypervisor.name + " | " + virtualMachine.hypervisor.address + "\n")
    outFile.write("\t\tVirtual machine IP address: " + virtualMachine.ip.ipaddress + "\n")

outFileLoc = sys.argv[1]    # argument index assumed; depends on how the CLI passes batch-mode arguments
outFile = open(outFileLoc, 'w')
outFile.write("WebSphere CloudBurst Virtual System Inventory\n")
outFile.write("Total virtual systems: " + str(len(cloudburst.virtualsystems)) + "\n")
for virtualSystem in cloudburst.virtualsystems:
    writeVSDetails(outFile, virtualSystem)
    for virtualMachine in virtualSystem.virtualmachines:
        writeVMDetails(outFile, virtualMachine)
outFile.close()
As a result of invoking this script using the CLI's batch mode, content is written to the file location supplied by the caller.
WebSphere CloudBurst Virtual System Inventory
Total virtual systems: 3
Virtual system name: Single server
Created from pattern: WebSphere single server
Virtual system status: Started
Virtual system creation date: January 15, 2010 16:37:20
Total virtual machines: 1
Virtual machine name: Standalone 0
Virtual machine display name: Single server cbvm-110 default
Created from image: WebSphere Application Server <version>
Virtual machine hypervisor: Ruth ESX | https://<hypervisor_host>/sdk
Virtual machine IP address: <ip_address>
Virtual system name: Development WAS Cluster
Created from pattern: Custom WAS Cluster - Development
Virtual system status: Started
Virtual system creation date: January 18, 2010 14:08:46
Total virtual machines: 2
Virtual machine name: DMGR 0
Virtual machine display name: Development WAS Cluster cbvm-112 dmgr
Created from image: WebSphere Application Server <version>
Virtual machine hypervisor: Ruth ESX | https://<hypervisor_host>/sdk
Virtual machine IP address: <ip_address>
Virtual machine name: Custom Node 1
Virtual machine display name: Development WAS Cluster cbvm-111 custom
Created from image: WebSphere Application Server <version>
Virtual machine hypervisor: Ruth ESX | https://<hypervisor_host>/sdk
Virtual machine IP address: <ip_address>
Virtual system name: DB2 for development use
Created from pattern: DB2
Virtual system status: Started
Virtual system creation date: January 18, 2010 14:09:58
Total virtual machines: 1
Virtual machine name: DB2 Enterprise Server 32bit Trial 0
Virtual machine display name: DB2 for development use cbvm-113
Created from image: DB2 Enterprise <version> 32-bit Trial
Virtual machine hypervisor: Ruth ESX | https://<hypervisor_host>/sdk
Virtual machine IP address: <ip_address>
I withheld IP addresses and host names above for obvious reasons, but if you ran the script against your environment you would see actual host name and IP address values. The script above is written once, and it can be subsequently run anytime you want an inventory of virtual systems running in your WebSphere CloudBurst cloud. There's other information available for virtual systems and virtual machines that I didn't show here, and you can retrieve it if necessary for your inventory report. In addition, I chose to print this information as regular text in a file supplied by the caller, but you might choose to generate the report in another format including XML, JSON, or anything else for that matter.
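For instance, if you wanted the JSON flavor of that report, you could build plain data structures from the same virtual system attributes and serialize them. This is only a sketch: whether the json module is available depends on the Jython level behind the CLI, and the attribute names simply mirror the inventory script above.

```python
import json

def inventoryAsJson(virtualSystems):
    # Build a plain data structure from the virtual system objects, then
    # serialize it; attribute names mirror the inventory script above.
    report = []
    for vs in virtualSystems:
        report.append({
            'name': vs.name,
            'pattern': vs.pattern.name,
            'virtualMachines': [vm.name for vm in vs.virtualmachines],
        })
    return json.dumps(report, indent=2)
```

Invoked with cloudburst.virtualsystems as the argument, this would produce a JSON document you could feed to other reporting tools.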
-- Dustin Amrhein
p.s. As with any sample code or script I provide here, the above is only a sample and offered as-is.
Every time I've visited with customers about WebSphere CloudBurst, without fail someone requests that the appliance support products besides the WebSphere Application Server. We started to address these requests with WebSphere CloudBurst 1.1 when we announced the availability of a DB2 Enterprise 9.7 trial virtual image specifically packaged for use in the appliance. Very recently we continued to respond to customer requests by extending the list of supported products in WebSphere CloudBurst to include WebSphere Portal.
The WebSphere Portal Hypervisor Edition, initially offered as a Beta product, is a virtual image packaging of WebSphere Portal 6.1.5 ready for use in the WebSphere CloudBurst Appliance. The image includes a pre-installed, pre-configured instance of WebSphere Portal. Also contained within the image is an IBM HTTP Server instance configured to route to the WebSphere Portal instance and a DB2 instance installed and configured as the external database for WebSphere Portal. The WebSphere Portal instance also includes Web Content Management enablement along with several samples to help users get started right away.
The user experience when building and deploying WebSphere Portal patterns remains consistent with the existing experience for WebSphere Application Server and DB2 patterns. It is also worth noting that you can expect similarly rapid deployment for WebSphere Portal patterns. I got a running virtual system, with all the parts mentioned above installed and configured (meaning no after-the-fact integration scripting was necessary), in under 15 minutes.
To see more, check out my new demonstration of the WebSphere Portal Hypervisor Edition for the WebSphere CloudBurst Appliance. If you have a WebSphere CloudBurst Appliance you can download the WebSphere Portal Hypervisor Edition image and a usage guide from here.
If you've read anything I've written about WebSphere CloudBurst up to this point you know all about patterns. Using the appliance you can easily and quickly build, deploy, and manage these representations of your middleware application environments. Today, I want to focus in on the deployment piece in particular and take a look at how you can easily automate this process.
You can use the WebSphere CloudBurst web console to deploy patterns, and when doing so you can even schedule the deployment to happen at a later date. This scheduling capability certainly gets you on the road to an automated deployment process, but what if you want to take it one step further and eliminate the need for someone to log in and manually navigate the web console to schedule deployments? In this case, you can use either the CLI or the REST interface that WebSphere CloudBurst offers.
In this post I thought I'd take a look at using the CLI to set the stage for some nice automation around pattern deployment. It starts with a properties file that provides details about my deployment: the cloud to deploy to, the pattern to deploy, password information, and the time at which the virtual system should start.
SYSTEM_NAME_PREFIX=New App Development
TARGET_CLOUD=Default ESX group
TARGET_PATTERN=WebSphere single server
# DEPLOYMENT_DATE is read by the script below; the MM/DD/YYYY HH:MM format is assumed
DEPLOYMENT_DATE=03/01/2010 18:00
Imagine that the properties file above gets written as the result of some other action, such as the completion of your application's build process. With the properties file in place (and your properties file can and probably will be more robust than the one above), let's move on to the code that handles the deployment process based on the information in that file. First, we have a small amount of CLI code to retrieve and parse the input data (I omitted the straightforward properties retrieval for space):
from datetime import datetime, timedelta
from java.util import Properties
from java.io import FileInputStream

# read in and retrieve properties using the java.util.Properties API
# (i.e. props.getProperty('DEPLOYMENT_DATE'))
parsedParts = deploymentDate.split(" ")
systemName = systemName + "_" + deploymentDate
dateParts = parsedParts[0].split("/")    # date assumed in MM/DD/YYYY form
timeParts = parsedParts[1].split(":")    # time assumed in HH:MM form
monthPart = int(dateParts[0])
dayPart = int(dateParts[1])
yearPart = int(dateParts[2])
hourPart = int(timeParts[0])
minutePart = int(timeParts[1])
Next is the code that actually schedules the pattern deployment:
First, we get the desired deployment time and the current time as datetime objects. Then, assuming the desired deployment time has not already elapsed, we calculate the difference between the two. This difference, in seconds, is added to the value returned by time.time() to come up with a start time. After that is done, we simply retrieve the cloud indicated in the properties file, and we call the runInCloud method for the indicated pattern, supplying the name of the virtual system that will be created, password information, and the start time we calculated earlier. As a result of this method call, a task is generated on the target WebSphere CloudBurst Appliance, and the virtual system is started at the specified time. This happens in an automated fashion with no human intervention required.
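Pulling those steps together, a sketch of the scheduling logic might look like the following. The runInCloud method is the one named above; the exact shape of its arguments, and the function wrappers themselves, are my assumptions rather than documented API.

```python
import time
from datetime import datetime

def computeStartTime(yearPart, monthPart, dayPart, hourPart, minutePart):
    # Desired deployment time and current time as datetime objects
    desired = datetime(yearPart, monthPart, dayPart, hourPart, minutePart)
    now = datetime.now()
    if desired <= now:
        raise ValueError("requested deployment time has already elapsed")
    # Difference in seconds, added to time.time() to produce the start time
    delta = desired - now
    return time.time() + delta.days * 86400 + delta.seconds

def scheduleDeployment(pattern, cloud, systemName, passwords, startTime):
    # runInCloud generates a task on the appliance; the virtual system is
    # then started at the specified time with no human intervention.
    return pattern.runInCloud(cloud, systemName, passwords, startTime)
```

In the real script, pattern and cloud would be the objects retrieved from the appliance based on the TARGET_PATTERN and TARGET_CLOUD properties.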
That's really all there is to automating the pattern deployment process using the CLI. In a more complete, end-to-end scenario, you might envision the completion of one process, such as the application build process mentioned above, resulting in the writing of the properties file and, in turn, a call into the CLI to deploy a pattern. As always, feel free to send me any comments or questions.
When it comes to managing users and user groups within WebSphere CloudBurst, you can choose to manage all aspects of those resources within the appliance. Mainly this means that you can define and store user information (including login passwords) within the appliance, and you can define and maintain user groups and their associated membership list on the appliance. While you can do this and be sure that your information is extremely secure, you may instead want to integrate with an existing LDAP server that has some of this user and user group data. WebSphere CloudBurst certainly allows you to integrate with LDAP servers, but what does that mean for you?
For starters, when you integrate WebSphere CloudBurst with an LDAP server and enable the LDAP authentication feature, you no longer specify password information when defining users of the appliance. When users login, the password they specify will be authenticated against information stored in the LDAP server. Naturally, if you add a new WebSphere CloudBurst user with LDAP authentication enabled, that user must be defined in the LDAP server. Otherwise, WebSphere CloudBurst will prevent you from adding the user because it has no way to authenticate that person.
From a user groups standpoint, integrating with LDAP means you can no longer modify user group membership. User membership in groups is determined by information in the LDAP server. As a result, the same rule concerning adding new users applies when adding new user groups: You cannot define new user groups that do not exist in the LDAP server.
If you want to take a look at what LDAP integration looks like with WebSphere CloudBurst, I put together a short video. Let me know what you think.
The ability to package custom maintenance packages and upload them as emergency fixes is perhaps a lesser known feature of WebSphere CloudBurst, but nevertheless something that's been around since the product's initial release. This is a powerful feature that allows you to build your own fix packages that you can then apply the same way you would use WebSphere CloudBurst to apply a PAK file or fixpack shipped by IBM.
Since IBM is delivering fixes and updates to all of the contents within WebSphere Application Server Hypervisor Edition virtual images (including the OS and IBM software components), you may wonder why you would even want to create your own maintenance packages. One reason would be if you switched out the SUSE Linux operating system shipped with the VMware ESX based images in favor of your own Red Hat operating system. In that case you would be responsible for maintenance to the operating system, and custom maintenance packages would be of interest to you. Another scenario where these custom maintenance packages come in handy would be if you created your own customized images that include non-shipped third-party software in addition to the software shipped in the images. If at some point you have the need to fix or update this software in a running virtual machine, custom maintenance packages provide you the vehicle with which to do just that.
What do these custom maintenance packages look like? In short, they are simply archives or ZIP files. The contents of the archive are largely decided by you, but there is one piece of metadata that is necessary if you want to use WebSphere CloudBurst to apply the maintenance. A file called service.xml is inserted into the root of the archive and tells WebSphere CloudBurst critical information about the custom fix archive. Here's an example of a service.xml file:
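The original example listing isn't reproduced here, but based on the elements discussed below, a service.xml might look roughly like this. The Command, Location, and ImagePrereqs elements are the ones described in the text; the root element, the nesting, and the prerequisite syntax are illustrative assumptions.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Service>
    <!-- Executable, supplied by you, that applies the maintenance -->
    <Command>/tmp/fix/apply_update.sh</Command>
    <!-- Working directory on the virtual machine -->
    <Location>/tmp/fix</Location>
    <!-- Image versions to which this fix applies -->
    <ImagePrereqs>
        <ImagePrereq>...</ImagePrereq>
    </ImagePrereqs>
</Service>
```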
Most notably, this metadata tells WebSphere CloudBurst what module or script to invoke to apply the maintenance (Command; this executable is supplied by you), what image versions the fix is applicable to (ImagePrereqs), and the location of the working directory on the virtual machine (Location). In addition to the service.xml file and the executable, you can package up anything else, such as product binaries, needed to successfully apply the fix/upgrade/maintenance.
If you haven't noticed, this is an extremely flexible mechanism and can be used for just about anything. I should point out that you can only apply a given fix once per virtual machine, so it's not good for something that you want to run repeatedly against a given machine (check out user-initiated script packages instead). Also, there is a 512MB size limit on the archives. Keep these restrictions in mind when you are deciding how to use custom maintenance packages. If you are interested in learning a bit more about custom maintenance packages or other maintenance techniques, check out this article I co-authored along with Xiao Xing Liang from the IBM SOA Design Center in the China Development Lab.
One of the new features that debuted in WebSphere CloudBurst 1.1 is the ability to resize the disks in a virtual image during the extend and capture (image customization) process. If you remember, the virtual images in the WebSphere CloudBurst catalog are made up of multiple virtual disks. In WebSphere CloudBurst 1.0, a default size was used for the virtual disks, and this could not be changed, even during the image extension process. We got quite a bit of feedback about this, so in version 1.1, while default sizes are still provided, you can specify the eventual size of each virtual disk during the image extension process.
As an example, consider the WebSphere Application Server Hypervisor Edition virtual image. This image contains four virtual disks: one for the WebSphere Application Server binaries, one for the WebSphere Application Server profiles, one for the IBM HTTP Server, and one for the operating system. The default sizes of these disks are 6GB, 2GB, 1GB, and 12GB respectively, for a total of roughly 21GB. While that may be fine for some, what happens if you are going to install various other third-party software packages in the image? You may need more space on the operating system's virtual disk. Perhaps your WebSphere applications produce log files of considerable size. In that case you may want to increase the default size of the WebSphere Application Server profiles disk.
Those scenarios and more are exactly why the resizing capability was added. When you extend the WebSphere Application Server Hypervisor Edition virtual image in WebSphere CloudBurst 1.1, you will be presented with the option to resize one or more of the virtual disks:
In the case above, the operating system disk is bumped up to 16GB from its default 12GB size. Also note that in addition to changing the disk sizes, you can specify the number of network interfaces for your custom image.
Obviously, when you increase the size of the disks within the virtual image you are also increasing the storage requirements for that image when it is deployed to a hypervisor. Keep this in mind when you are calculating the upper bound capacity of your cloud. If you want to see more about how this feature works, check out this video.
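To make that calculation concrete, here's a quick back-of-the-envelope sketch using the resized disks from the example above; the 500GB datastore figure is made up purely for illustration.

```python
# Disk sizes (GB) for the resized image from the example above: WebSphere
# binaries, profiles, IBM HTTP Server, and the enlarged operating system disk.
disk_sizes_gb = [6, 2, 1, 16]
per_vm_gb = sum(disk_sizes_gb)        # 25 GB for each virtual machine deployed from the image

datastore_gb = 500                    # illustrative datastore capacity
max_vms = datastore_gb // per_vm_gb   # upper bound on concurrent deployments
print(per_vm_gb, max_vms)             # 25 20
```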
"What is the difference between WebSphere CloudBurst and IBM CloudBurst?" After the IBM Pulse 2010 event this week, I'm hearing this question in my sleep. It came from both our customers and other IBMers, and it's not hard to understand the confusion caused by the name similarity. Let's take a shot at clearing up any confusion around the two separate offerings and explain the complementary value WebSphere CloudBurst can provide IBM CloudBurst.
Both IBM CloudBurst and WebSphere CloudBurst provide capabilities to enable private, or on-premise, clouds. The main differences between the products are the degree to which they are purpose-built and the form in which they are delivered. First off, the IBM CloudBurst solution consists of three primary elements: service management software, hardware, and IBM services. The software portion of the package provides general purpose (a very important distinction) provisioning, workflow, and management capabilities for the services that make up your cloud. These services could consist of WebSphere software or any other software that you can package into a virtual image format. The hardware is the actual compute resource for your on-premise cloud, and the IBM services portion of the package provides a fast path to getting started with your cloud implementation.
On the other hand, WebSphere CloudBurst is a cloud management hardware appliance that delivers function to create, deploy, and manage virtualized WebSphere application environments in an on-premise cloud. WebSphere CloudBurst is purpose-built for WebSphere environments, meaning that a lot of the things users would have to script with general purpose cloud provisioning solutions (creating clusters, federating nodes into a cell, applying fixes, etc.) are automatically handled by the appliance and the virtual images with which it ships. Also, it is important to note that WebSphere CloudBurst works on a "bring your own cloud" model. The virtualized WebSphere application environments do not run on the appliance; instead, they are deployed to a shared pool of resources with which the appliance is configured to communicate.
While the two offerings have the differences noted above, I should also point out the how and why of integrating them. The WebSphere CloudBurst Appliance can be leveraged from within the IBM CloudBurst solution to handle the provisioning of WebSphere middleware environments in your data center. From the Tivoli Service Automation Manager interfaces included in the IBM CloudBurst solution, you can discover and deploy WebSphere CloudBurst patterns that exist on an appliance in your data center. WebSphere CloudBurst will deploy the patterns to the set of hardware resources provided by the IBM CloudBurst solution. Why would you want to integrate the two? If a large portion of your data center provisioning involves WebSphere middleware environments, WebSphere CloudBurst provides quick time to value and low cost of ownership. The WebSphere know-how is baked into the appliance and the virtual images it ships, meaning that you don't need to develop and maintain what would be a rather large set of configuration scripts for the WebSphere environments running in your cloud.
I hope this clears the air a bit about not only the difference in IBM CloudBurst and WebSphere CloudBurst, but also about how and why these two can be integrated. I will never answer everyone's question in a simple blog post, so if I didn't address yours please leave a comment or reach out to me on Twitter @damrhein.