I have written before in this blog about how users can achieve fully customized WebSphere Application Server environments using WebSphere CloudBurst. Those customizations start at the operating system level: users extend the shipped virtual images, modify them, and recapture the custom image in the WebSphere CloudBurst catalog. On top of operating system level customizations come customizations made to the WebSphere Application Server middleware environment.
Customizations to the middleware environment are mainly achieved through what WebSphere CloudBurst calls script packages (click here for the script package section in the WebSphere CloudBurst Information Center). Very simply, script packages are ZIP files that users upload into the WebSphere CloudBurst catalog and include in their custom WebSphere CloudBurst patterns as necessary.
The ZIP file that is stored in the catalog includes an executable script and optionally a set of artifacts used or needed by the script. In addition to uploading this ZIP file into the WebSphere CloudBurst catalog, users tell WebSphere CloudBurst how to invoke the script package and provide other information like environment variables that need to be present during script execution. WebSphere CloudBurst then uses this information to invoke the script packages upon completion of pattern deployment.
Script packages are very open-ended. As long as WebSphere CloudBurst can execute the script in the operating system environment, it can be included in a pattern. So, users can create script packages that use the WebSphere Application Server wsadmin tool to install applications, set server trace settings, or otherwise alter the WebSphere Application Server configuration. Alternatively, users can supply operating system executables, like shell scripts, that configure the application environment in other ways. As long as a valid executable is supplied, there are really no imposed limits on what a script package can do.
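To make this concrete, here is a minimal Python sketch of what the executable inside a script package might look like when it drives wsadmin to install an application. The environment variable names (CB_PROFILE_ROOT, CB_APP_EAR) and the bundled install_app.py Jython script are hypothetical; in practice you declare whatever variables you need when registering the package in the catalog.

```python
import os
import subprocess

def build_wsadmin_command(profile_root, jython_script, ear_path):
    """Assemble the wsadmin invocation that installs an application.

    All three inputs would come from the script package's declared
    environment variables or from artifacts bundled in the ZIP.
    """
    wsadmin = os.path.join(profile_root, "bin", "wsadmin.sh")
    return [wsadmin, "-lang", "jython", "-f", jython_script, ear_path]

def run_script_package():
    """Entry point the appliance would invoke after deployment."""
    cmd = build_wsadmin_command(
        os.environ["CB_PROFILE_ROOT"],   # hypothetical variable name
        "install_app.py",                # Jython script shipped in the ZIP
        os.environ["CB_APP_EAR"],        # EAR file shipped in the ZIP
    )
    subprocess.check_call(cmd)
```

Because the appliance simply runs whatever executable you name, the same shape works for shell scripts, wsadmin wrappers, or anything else the guest operating system can execute.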
However, just because script packages can do just about anything doesn't mean they should! You can read this developerWorks article for a more complete discussion of WebSphere CloudBurst customizations and the use of script packages, but the bottom line is that script packages are best suited to deliver customizations that vary across application environments (like installing applications). For customizations that are needed in just about every application environment (required anti-virus software comes to mind), creating a custom virtual image is the way to go.
If you want to learn more about WebSphere CloudBurst script packages, I'd suggest you check out the WebSphere CloudBurst Information Center link and the developerWorks link above. As always, let us know if you have any questions by commenting here, sending us a tweet (@WebSphereClouds), or by sending an email to firstname.lastname@example.org.
IBM trekked further into the cloud today by announcing new offerings that will help clients to leverage the cloud within their enterprise. These offerings include development environments, test environments, and desktop services running either in the IBM cloud or a private cloud, as well as something called IBM CloudBurst (not to be confused with the WebSphere CloudBurst Appliance).
In particular, the IBM CloudBurst offering jumps out to me as a truly innovative offering. This offering promises our clients a complete private cloud solution that includes hardware, software, and services all in a single package.
Essentially it sounds like the IBM CloudBurst offering is all about building out a cloud-enabled data center. The hardware provides the bulk computing power to host private clouds, and the software provides enhanced service management capabilities to give users full control and insight over the elements of their private cloud.
Couple these computing capabilities with included IBM service, and this means clients should be able to get up and going with their private clouds very quickly.
This new offering really seems to hit a sweet spot. While the number of cloud computing offerings continues to grow, few, if any, take such a holistic approach to providing a complete solution.
This type of comprehensive solution gives users what they need in terms of the hardware to host a private cloud and the software to manage that cloud. Just as importantly though, it helps users put the hardware and software to work via implementation services also offered in the solution.
I invite you to take a look at the new IBM CloudBurst offering, and don’t forget to sign up for the IBM CloudBurst webcast scheduled for June 25th.
I just wanted to point out a great opportunity for anybody considering leveraging IBM Workload Deployer v3 to deploy Database workloads. On June 29th Rav Ahuja, a Senior Product Manager for Data Management at IBM, will be hosting a webcast entitled "Easily Deploying Private Clouds for Database Workloads". He will be joined by Chris Gruber (Product Manager, Database as a Service), Leon Katsnelson (Program Director, IM Cloud Computing Center of Competence), and Sal Vella (Vice President, Database Development and Warehousing) in this panel discussion.
As many of you already know, IBM Workload Deployer v3 comes pre-loaded with DB2 images and patterns that are configured to rapidly provision standardized database servers for any number of purposes. The servers can be deployed in standalone configurations or as part of a complete virtual system including web components with the database components. These servers can also be configured for high availability scenarios. This panel discussion will cover all of these scenarios and more.
You can read more about the webcast in this blog post by Rav Ahuja.
If you want further details about how to build and rapidly deploy databases in a private cloud, be sure to attend this free webinar on June 29th.
If you've attended one of our WebSphere CloudBurst sessions then you've undoubtedly heard us talk about the "special sauce" or "WebSphere intelligence" delivered by the WebSphere CloudBurst Appliance. If you haven't attended one of our sessions, trust me, we talk about it a lot, but there's good reason. This "special sauce" truly sets WebSphere CloudBurst apart from other virtualization management tools.
Essential to the uniqueness of the WebSphere CloudBurst solution is the WebSphere Application Server Hypervisor Edition virtual image that it dispenses. In one sense, the intelligence comes in the form of pre-installed, tuned, and configured software. The operating system and WebSphere components are all pre-installed, and the WebSphere Application Server configuration is tuned based on performance best practices. In addition, the image comes with a pre-configured instance of each WebSphere Application Server profile type available in the bundled version. This saves time during deployment since unneeded profiles are simply removed rather than needed ones being created from scratch.
The pre-installed, tuned, and configured software only sets the foundation for what truly sets the WebSphere CloudBurst solution apart. The activation framework built into the WebSphere Application Server Hypervisor Edition allows WebSphere CloudBurst to deliver unique value. This activation framework allows the single virtual image to turn into many different flavors of WebSphere Application Server (deployment managers, standalone nodes, custom nodes, job managers, etc.), and it provides the facilities to change WebSphere cell and node names, IP addresses, host names, and more while the virtual machine instance is being created.
On a mostly unrelated topic, the changing of WebSphere cell names, node names, and host names is done with documented, publicly available commands in either wsadmin or other WebSphere Application Server binaries. I know many customers want to do this exact same thing in their existing environments, so if you are wondering how it is done, drop me a line below.
Anyway, I won't get into any more detail here because you can get a much better assessment of this special sauce elsewhere. Ruth Willenborg, one of the lead architects for the WebSphere CloudBurst Appliance, wrote a developerWorks Comment lines piece about this special sauce. Ruth provides a deeper look at the topics I hit on above, and it's a really good read. You can check it out for yourself here.
A few nights ago, I went for a casual jog around the neighborhood. There is nothing notable about this since I do it regularly, and more to the point, why should you care? Well, it's what I did before and after the jog that may be of interest.
Sometimes when I present WebSphere CloudBurst I note a hint of skepticism in the room. It's not that the audience does not see the value in the WebSphere CloudBurst approach, since to a person they nearly always do. However, in some cases, the hardcore, impressively knowledgeable WebSphere administrators in the room are a little doubtful that they can create the same kind of heavily customized environments they rely on daily using WebSphere CloudBurst patterns. Well, I contend that you can!
I decided that rather than just make a claim, I would build a heavily customized, integrated environment in the form of a pattern. In particular, I wanted to build a pattern to accomplish the following:
- Create a WebSphere Application Server cell augmented with WebSphere Virtual Enterprise management capabilities, including an On Demand Router
- Create a dynamic cluster
- Install an application into the dynamic cluster
- Configure HTTP session replication behavior backed by the WebSphere DataPower XC10 Appliance
- Create a DB2 database instance complete with a database and table
- Create a WebSphere Application Server data source to the DB2 instance
I would consider this a fairly complex target environment with multiple integration points (XC10 and DB2). In WebSphere CloudBurst, I started by creating a custom virtual image and installed the necessary WebSphere eXtreme Scale binaries. This provided me the basis to achieve integration with the WebSphere DataPower XC10 Appliance. After creating the image, I built the pattern pictured below.
With a combination of parts, scripts, and advanced options provided by WebSphere CloudBurst (to set up my dynamic cluster), the pattern above encapsulates each of my goals above. And, before you throw your hands up and tell me how much scripting that is, the longest script is less than 50 lines! Better yet, I only have to write these scripts once, and I only have to provide invocation instructions once. Once completed, the invocation of my scripts is an automated process. This means I cannot muck up the configuration work.
So, what does all of this have to do with my jog? On the way out of the house that evening, I hit the deploy button for the above pattern. When I walked back in the house, I logged into WebSphere CloudBurst, pulled up my virtual systems and saw it was up and running. I logged into the WebSphere Application Server administration console and verified that the resulting environment met every last one of my deployment goals. How is that for low-touch? I simply clicked the deploy button, went for a run, came back and I had a pretty interesting application environment up and going.
Much of the focus on cloud computing to date revolves around the ways in which cloud computing delivers significant administrative and operational benefits. After all, the more dynamic, autonomic capabilities promised by cloud could go a long way in relieving some of the burden in managing large, complex IT infrastructure operations. Sometimes lost in the cloud computing benefits discussion is how cloud computing enhances development and test groups in an enterprise. I can think of five different ways in which cloud computing strengthens development and test groups:
1) Self-service capability: A defining characteristic of cloud computing solutions is a self-service capability that allows users to commission and decommission computing resources as appropriate. In development and test shops, this means users can directly procure the resources they need to complete their tasks without going through lengthy, manually-driven procurement chains. This results in a significantly shortened procurement period, and it means developers and testers can quickly get to the task at hand.
2) Resource availability: Resource sprawl within IT shops, a very common occurrence, leads to resource deficiencies that are sometimes a problem for enterprise developers and testers. Tasks like testing massive configurations and performing intensive load tests become increasingly difficult as it is hard to harness enough resources to get the job done. Cloud computing, through intelligent virtualization, usage tracking, and more, enables this scattered resource pool to be viewed and utilized as a single logical entity. Resources can be doled out as needed, and intense tasks become achievable without extensive setup or procurement periods.
3) Environmental fidelity: From the time a software application or service leaves a developer’s hands to the time it reaches production, quite a few things about its environment may change, oftentimes unbeknownst to the developer. The test and operations teams may have different conventions and configurations than the development team, and this can lead to unintended application behavior and delays in service delivery. Cloud computing offers a potential solution to this problem in the form of the increasingly popular templatized solution stack. These solution stacks are pre-built, ready-to-deploy configurations that include the application and the entire environment down to the operating system. This stack can be captured as some sort of image (an OVF package, an Amazon Machine Image, etc.) and passed between each team along the delivery cycle. Teams downstream from development see the exact environment in which the application was designed and unit tested, and they can weigh needed changes to that environment against a known, working solution.
4) Hosted tools: Though possibly not yet standard operating procedure, one can look at the wave of SaaS offerings and make a reasonable assumption that more and more development and test tools will be moving in that direction as well. Why not? Putting aside the technical challenges of hosting something like a code editor on a network, the benefits of centrally hosting these types of tools are clear. Developers and testers no longer have to worry about installing, configuring, running, or maintaining these enabling tools on their own machines. Instead, they can log into the tools from any machine with a network connection and get work done.
5) Increased focus: This benefit is a culmination of all of the above benefits. By easing the process to acquire resources, making more resources available, ensuring configuration integrity, and removing the burden of maintaining tools, developers and testers are left to focus on their core jobs. The operational and administrative portions of their job are significantly reduced through cloud computing solutions. As a result, organizations are in a position to benefit from more developer innovation, increased test quality and coverage, and more.
The above five ideas illustrate that cloud computing can indeed enhance development and test efforts just as it boosts administrative and operational tasks. Development and test teams that understand the benefits they can derive from cloud computing are likely to be proactive in advocating its use. For the cloud computing industry, increasing adoption by development and test groups could lead to widespread grassroots movements that further spread the use of cloud computing throughout enterprises.
The more we visit with customers about our new WebSphere CloudBurst Appliance, the more we see a common thread of questions emerge around the offering. In an attempt to address some of these questions in a more accessible medium, I figured I'd start a series of blog posts that relays some of these questions and, of course, the answers. Today I want to start with what is perhaps the most frequently asked question about WebSphere CloudBurst.
One thing I've noticed while talking to customers is that virtualization and virtualization management tools are widely used today. When we talk to customers already using these tools, they immediately understand the benefits WebSphere CloudBurst delivers in the form of virtualizing WebSphere Application Server environments and bringing a set of lifecycle capabilities to this virtualization. However, almost invariably they ask why they would use WebSphere CloudBurst over their existing tools, for instance VMware's vSphere offering.
There's a two-word answer to this question: WebSphere intelligence. What exactly does that mean? WebSphere CloudBurst was built with WebSphere in mind, and it knows how to configure, tune, and maintain WebSphere Application Server environments without the need for custom scripting.
For instance, when a user is building a WebSphere CloudBurst pattern (if you are wondering what a pattern is, or just want to learn the basics of WebSphere CloudBurst, take a look at this article), the relationships among the various WebSphere Application Server parts are automatically configured. This means that custom nodes are automatically federated into the Deployment Manager cell, web servers are automatically configured to route to application server nodes (and the web server's configuration file is set up to be automatically propagated), and much more. In addition to establishing these relationships, WebSphere CloudBurst also applies best-practice tuning to the WebSphere environment. This tuning is of course just a suggestion and can be easily changed by users.
In addition to configuring and tuning a topology, WebSphere CloudBurst allows users to apply both fixes and service level upgrades to running virtual systems. Through the console, users can select virtual systems and apply either a fix or upgrade directly to the system. All the while, WebSphere CloudBurst automatically backs up the state of the system before the change is applied, and users can easily roll back to the previous system state if undesired behavior is encountered after the change.
I'm not disputing that all of these actions could be potentially accomplished using some "black-box" virtualization management tool, but the burden of supplying WebSphere intelligence is placed directly on the user. In order to configure, tune, or maintain virtualized WebSphere Application Server environments, users would accompany their virtual machine definitions with a heavy dose of scripting. These scripts only add to the pile of IT assets that need to be owned, updated, and maintained over time, and they only serve to distract users from the end-game of getting their applications up and running.
It's important to note too that WebSphere CloudBurst was only just released, so I would expect that the WebSphere intelligence it provides will only grow and get better over time. If you want to learn more about WebSphere CloudBurst, or if you think your company would be interested in a briefing and demo please get in touch. You can reach me at email@example.com. We would love to explain both the business value and technical capabilities of the appliance.
I want to stay in the realm of the deployment process for our next frequently asked question regarding the WebSphere CloudBurst Appliance.
The ability to quickly deploy entire WebSphere Application Server cells (anything from single node cells, to multi-node clustered cells) is a hugely compelling feature of the appliance. Instead of spending days or hours deploying a WebSphere Application Server cell, users can deploy these in a matter of minutes (less than twenty minutes for clustered environments)!
For the most part, WebSphere CloudBurst patterns represent entire cells. This includes management parts (AdminAgent, DeploymentManager), managed parts (custom nodes), and proxy parts (IHS). When you deploy a pattern, the result is a complete and fully functional WebSphere Application Server cell running in your private cloud.
So, now that you have a complete cell out in your cloud, what happens if you need to add more nodes? If the user demand for the applications on your cell has exceeded the initial topology, can you use WebSphere CloudBurst to add more nodes? Sure you can!
In short, this involves creating a pattern that contains only a custom node part, and then at deploy time, providing information about the existing cell. WebSphere CloudBurst then takes over the deployment of that custom node and federates the node into the existing cell based on the information supplied about that cell. I won't go into an entire explanation here, because I think the demo I put on our YouTube channel explains it pretty well.
In my experience with the WebSphere Application Server, this represents a much more seamless and automated process for deploying new nodes into an existing cell than what exists outside of WebSphere CloudBurst today. Of course, we value your comments and feedback above all else. So let us know what you think!
We've been talking a lot about IBM Workload Deployer V3 and we will continue to highlight different aspects of the capabilities it provides in the coming weeks. As we've already mentioned - IBM® Workload Deployer V3 is not just another release of the IBM WebSphere CloudBurst Appliance. While it builds on WebSphere CloudBurst's success, and supports and improves upon all of its original capabilities, Workload Deployer provides new application-centric computing capabilities for your private cloud, and brings you higher utilization, improved ease of use, and more rapid application deployment.
A recent announcement signaled the coming release of WebSphere CloudBurst 1.1. This new release of the WebSphere CloudBurst Appliance delivers enhancements to all phases of the lifecycle of virtualized WebSphere Application Server environments. Let's take a closer look at a few of these updates.
First and foremost, WebSphere CloudBurst 1.1 delivers support for the PowerVM platform. You can now deploy patterns to create virtualized WebSphere Application Server environments running in a PowerVM environment on pSeries servers. Among other things, this is enabled by a new version of the WebSphere Application Server Hypervisor Edition. This new version of the virtual image contains an AIX operating system and has been specifically bundled to allow it to be activated on the PowerVM hypervisor. From a user standpoint, building, deploying, and maintaining WebSphere Application Server environments is done from the same console with the same look and feel regardless of the target platform. Check out this demo to see WebSphere CloudBurst and PowerVM in action.
In addition to support for PowerVM environments, WebSphere CloudBurst 1.1 will also provide a trial edition of a DB2 virtual image. You can import this image into your WebSphere CloudBurst catalog and then use it to build and deploy DB2 environments. This allows you to, from the same centralized interface, deploy and integrate both your application and data environments in your private cloud. Check out this demo for more information on the new DB2 trial virtual image for WebSphere CloudBurst.
One other cool feature I want to point out delivers an enhancement to the use of script packages in WebSphere CloudBurst. In this new version of the appliance, you have more control over when the script packages you include in a pattern are executed. Previously, these were executed toward the end of pattern deployment, once all the necessary WebSphere Application Server components had been started. While that is still the default behavior, you can also elect to have the script package invoked when the virtual system is deleted, or you can choose the invocation to be user-initiated, meaning that you decide when and how many times your script runs. To check out a pretty handy use case for this feature, watch the demo here.
These aren't the only new features and enhancements delivered in WebSphere CloudBurst 1.1. Stay tuned for more demonstrations and more words about these new features and when and why you would want to use them. In the meantime, if you have any questions be sure to stop by our forums.
There have been quite a few announcements from IBM lately that keep referring to the "IBM Cloud". Since IBM has been moving at a pretty substantial pace with cloud offerings as of late, I thought it may help to give readers a concise idea of exactly what the IBM Cloud provides.
Put very simply, the IBM Cloud is a public cloud offering that allows users to provision and utilize IBM Software on an infrastructure hosted by IBM. From the IBM Cloud's web-based dashboard, users choose a software package, provide some deployment information about the particular instance they wish to create, and then simply click OK. In a matter of minutes the software is up, running, and available for full use. At the time I wrote this blog, I saw software from our Information Management, Rational, and WebSphere brands available for use. In addition, users can launch plain SUSE Linux instances out onto the IBM Cloud.
Within WebSphere, users can choose from either the WebSphere Application Server or WebSphere sMash. I just went through a WebSphere sMash deployment, and in about 6 minutes the sMash instance was up and running, and I was able to log into the App Builder development environment. The WebSphere Application Server package that's available on the IBM Cloud is particularly interesting because it contains an embedded Rational Controller Agent. This makes it very easy to integrate some of the Rational offerings on the IBM Cloud (or elsewhere) with the WebSphere Application Server. Many of these integration scenarios focus on making it easier to very quickly build, package, and deploy applications from Rational development tooling to WebSphere Application Server environments.
The best thing about the IBM Cloud is that you can sign up and give it a whirl at absolutely no cost! Go and sign up for a free account and you'll immediately be able to spin up IBM Software in IBM's cloud. You can access and use that software, and then when you are done you can simply delete the running instance. There's no need to download anything to your computer; the interface to the IBM Cloud is completely web-based, and the launched software runs on IBM infrastructure. All of this adds up to give users a super easy way to kick the tires on some of our software. Sign up now by visiting the landing page for the IBM Cloud.
It isn't often that the IT world looks to the federal government as a technological pioneer, and the new CIO of the Office of Management & Budget, Vivek Kundra, thinks that is a problem. Kundra is striving for the federal government to become a key leader in innovation, and in doing so, he's looking at cloud computing as a first step.
Due to the sensitivity and privacy of the data the government often handles, one may not think cloud computing is the best fit. Kundra, however, does not see that as a show stopper. "We recognize that whether it's cloud computing or any area of technology there is sensitive and classified information and it cannot be treated the same way as public information, but they are not mutually exclusive." Though he uses only a few words, Kundra makes it clear that one of the biggest perceived fears about adopting cloud computing, security and privacy, will not stand in the way of the federal government's march toward innovation.
It's still early in his tenure, but the First CIO seems to be serious about his push for cloud computing. He says that he is "killing projects that don't investigate software as a service first", and he is keen on looking to the cloud for storage and web development solutions. Kundra also believes that by leveraging cloud solutions across the multitude of federal agencies, we can ensure that resources are used only when needed by replacing the always-on data center with outsourced solutions where possible. He's also counting on the adoption of cloud computing to send a strong message that government agencies can lead technological innovation.
I doubt anyone would discount the benefits Kundra is seeking by attempting to move the federal government into the cloud. He hopes to reduce tax dollar spending by using only the necessary IT resources, improve end-user services for taxpayers, and foster a culture of technological innovation among a myriad of federal agencies. There is no doubt that such a transition and culture change will not come easily, but Kundra sounds dedicated to an honest effort at change. If the federal government is able to effectively leverage cloud computing solutions, I believe it could be a trend-setter for many organizations. If an entity as unwieldy and complex as the federal government can adopt and derive benefits from cloud computing, many organizations that once discounted the technology may take a fresh look at the capabilities at hand.
More and more, I am getting questions about how to bring existing WebSphere environments into IBM Workload Deployer. While "bringing in an environment" can mean any number of things, let's take it to mean that a user wants to import their existing WebSphere cells, applications, and configuration into IBM Workload Deployer as a pattern they can subsequently deploy. While there may not be a big red easy button in the appliance that lets you point to an existing environment and import it, there are a couple of techniques one can employ. I have covered both techniques before, but since I'm getting the question with increasing frequency, I felt it was time for a recap.
The first option is to use a combination of IBM Workload Deployer and Rational Automation Framework for WebSphere. This is a use case I have spoken about numerous times at conferences and in blog posts and articles. In fact, you can read a little about it here. In this sense, RAFW provides excellent capabilities to point at an existing cell, and import everything about it. This includes WebSphere configuration, applications, shared libraries, and more. Once imported as a RAFW project, you can use the IBM Workload Deployer integration script package provided by RAFW to replay that configuration on top of deployments created by the appliance.
The second option is something I talk about a little less frequently. This option revolves around the use of a sample script (provided for free in our samples gallery) that you can run against existing WebSphere cells. The invocation of this script produces IBM Workload Deployer script packages that you can use in patterns to apply the configuration of the target cell to your new cloud-based deployments. Under the covers, the utility script and the resulting script packages use backupConfig and restoreConfig, respectively. They do ensure that the cell, node, and host names are updated during the restoreConfig execution (which happens automatically during pattern deployment). Beyond that, the use of the script is subject to the same limitations and rules in place for the backupConfig and restoreConfig commands. You can read more about this capability, watch it in action, and download it for free.
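To make the shape of that second option concrete, here is a minimal Python sketch of how a backupConfig archive might be bundled with a restore wrapper into a script-package ZIP. The file names and wrapper contents are assumptions for illustration; the actual sample script in the gallery also handles the cell, node, and host name updates described above.

```python
import io
import zipfile

# Illustrative shell wrapper the appliance would run at deployment time.
# It replays the captured configuration with restoreConfig; name
# rewriting is assumed to be handled by separate tooling.
RESTORE_WRAPPER = """#!/bin/sh
${WAS_HOME}/bin/restoreConfig.sh cell-config.zip -nostop
"""

def build_restore_package(backup_archive_bytes):
    """Bundle a backupConfig archive and a restore wrapper into a
    script-package ZIP suitable for upload to the catalog."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("restore.sh", RESTORE_WRAPPER)
        zf.writestr("cell-config.zip", backup_archive_bytes)
    return buf.getvalue()
```

The resulting bytes are exactly what you would upload as a script package: an executable at the top level plus the artifact it needs sitting next to it.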
I hope this is all useful information for those of you looking for ways to import existing environments into IBM Workload Deployer as patterns. If you have any questions, please let me know!
If you've seen the official demo on ibm.com/cloudburst, then one of these demos will be familiar to you. However, there is a three-part deep dive series that is being made public for the first time. The deep dive walks users through the different features WebSphere CloudBurst offers to create, deploy, and manage WebSphere virtual systems in a private cloud.
I think the demos give a good idea as to how WebSphere CloudBurst provides the features we've been talking about on this blog. If you have questions after watching the demos visit our forum and start or participate in a thread.
Users of cloud computing solutions today expect to be charged for exactly the amount of compute resource they use. No more, no less. This expectation is often at the forefront of our customers' minds when contemplating the creation of internal or private clouds. They want to be sure that any solution they use audits the activity and usage of their cloud and enables them to consume this information to implement their specific chargeback scheme.
Though it's not a feature we always seem to talk about, WebSphere CloudBurst provides the necessary capabilities to properly allocate costs to users, teams, and organizations. To start with, there are some handy usage reports that you can view directly from the WebSphere CloudBurst console. For instance, as seen below, a WebSphere CloudBurst administrator can see a breakdown of cloud resource usage for each user of the appliance.
While the capability illustrated above is nice, it is likely that if you are implementing an enterprise-scale chargeback scheme you want to automate the processing of the usage data, thus implying the need to programmatically consume such data. WebSphere CloudBurst enables you to do just this by way of its audit log. The WebSphere CloudBurst audit log is a record of each and every action taken in the appliance, along with information about who took the action, when the action was taken, what object the action was taken on, and much more. You can instruct the appliance to generate this file for a specified date range, and the output is a comma-separated value (CSV) file that can then be consumed in a manner of your choosing.
As an example of some of the things you can do with this data, I recently wrote a Java program that parsed the audit file and for each virtual system determined who created it, who deleted it (if it had been removed), and the duration of its existence. This program was simple (more of a string parsing exercise than anything else), but nonetheless provided necessary function and output for billing schemes based on hours of usage. If you are interested in how this was done please let me know and I'd be happy to discuss details. In the meantime, if you have any thoughts you can reach me on Twitter via @WebSphereClouds.
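As a rough illustration of that kind of processing, here is a small Python sketch (my original program was Java) that pairs create and delete events to compute hours of usage per virtual system. The column layout shown (timestamp, user, action, object) is an assumption for illustration, not the appliance's actual audit format; match it to your real export before using anything like this.

```python
import csv
import io
from datetime import datetime

def usage_hours(audit_csv):
    """Pair create/delete events per virtual system and compute
    duration of existence in hours. Systems that were never deleted
    are simply skipped in this sketch."""
    created = {}
    durations = {}
    for ts, user, action, obj in csv.reader(io.StringIO(audit_csv)):
        when = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
        if action == "create":
            created[obj] = (user, when)
        elif action == "delete" and obj in created:
            creator, start = created.pop(obj)
            hours = (when - start).total_seconds() / 3600
            durations[obj] = (creator, user, hours)
    return durations

sample = (
    "2009-08-01 08:00:00,alice,create,vsys-1\n"
    "2009-08-01 20:00:00,bob,delete,vsys-1\n"
)
# vsys-1: created by alice, deleted by bob, 12 hours of existence
```

From the per-system hours it is a short step to whatever billing rate or chargeback rollup your organization uses.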