A couple of weeks ago, I wrote about a sample I was working on that lets you apply a layer of governance to your WebSphere CloudBurst patterns. Earlier this morning, I posted the sample to the WebSphere CloudBurst Samples Gallery under the 'Sample CLI Scripts for WebSphere CloudBurst' section. The name of the new sample is 'Check WebSphere CloudBurst patterns', and you can download it here.
As hinted in my earlier post, the new sample is a simple way to check your patterns against assertions you supply in a properties file. It allows you to check that patterns contain the correct parts and scripts, and it allows you to verify that they were built from valid images. The assertion format is pretty basic, but it should be flexible enough to allow you to check patterns against a wide array of requirements. The sample archive includes a readme file that explains exactly how to use the script, and it contains a sample assertions file to give you an idea of the input syntax.
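To give a feel for the approach (this is a rough sketch, not the sample itself), here is roughly how such a check might look when run through the WebSphere CloudBurst CLI. It assumes the CLI's global deployer object, and the assertion format and pattern attributes used here are illustrative stand-ins; see the sample's readme for the real syntax.

```python
# Sketch only: a governance check run via the WebSphere CloudBurst CLI (Jython).
# 'deployer' is supplied by the CLI; the assertion format below is made up for
# illustration and is not the format used by the actual sample.

# Hypothetical assertions: pattern name -> strings that must appear in the pattern.
assertions = {
    'Production cluster': ['Deployment manager', 'Custom node', 'Config session mgmt'],
}

failures = []
for pattern in deployer.patterns:              # walk the pattern catalog
    required = assertions.get(str(pattern.name), [])
    contents = str(pattern)                    # stand-in for inspecting parts and scripts
    for item in required:
        if item not in contents:
            failures.append('%s is missing: %s' % (pattern.name, item))

if failures:
    print 'Governance check failed:'
    for failure in failures:
        print '  ' + failure
else:
    print 'All asserted patterns passed.'
```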
I hope this helps address the needs of the many WebSphere CloudBurst users who told me they needed a way to apply governance to their patterns. If you have any questions about the sample, or if there is another idea or problem you would like to see addressed by a sample in our gallery, please let me know.
One of my favorite things to do with users or potential users of WebSphere CloudBurst is to help them understand how they can construct a custom environment using the appliance. Typically, we take one of their existing application environments and discuss the configuration steps that contribute to its makeup. From there, we map the required configuration actions to different customization capabilities in the appliance. It is one thing to talk about how you can customize every layer of your application stack with WebSphere CloudBurst; it is quite another to talk about it in the context of an existing environment. This exercise usually serves to greatly enhance a user's understanding of how to construct tailored environments with the appliance.
While I cannot take every one of you through this exercise in the context of one of your own application environments, I can propose a scenario that will help to illustrate the WebSphere CloudBurst customization process. Consider that I want to deploy a clustered WebSphere Application Server environment whose application server instances utilize WebSphere DataPower XC10 for HTTP session management. In order to deploy such an environment, I would need to do the following:
Install an operating system and WebSphere Application Server
Install the WebSphere eXtreme Scale Client binaries (required for the XC10 integration)
Construct a clustered cell
Augment profiles with WebSphere eXtreme Scale profile templates
Configure the application server instances to use WebSphere DataPower XC10 for session management
So those are the steps, but how do they map to WebSphere CloudBurst? First, I know that the WebSphere Application Server Hypervisor Edition image used by WebSphere CloudBurst encapsulates the installation of the OS and WAS. I also know that WebSphere CloudBurst will automatically construct the clustered cell during the deployment process. That means I need to address the installation of client binaries, augmentation of profiles, and configuration of application server instances. In order to do this, I will use a combination of image extension and custom script packages.
To get started, I extend an existing WebSphere Application Server Hypervisor Edition image and simply install the WebSphere eXtreme Scale Client binaries. I then capture that image and store it as my own unique image in the WebSphere CloudBurst catalog. Now, you may wonder why I did not capture the profile augmentation in the custom image. Remember, you cannot change profile configuration during the extend and capture process as WebSphere CloudBurst resets the profiles as part of capturing the custom image.
My custom image encapsulates the installation of the client binaries, so now I turn to custom script packages. I need two in this case. One script package will augment a profile (either deployment manager or custom node) with the WebSphere eXtreme Scale profile template. The second script package will configure application server instances to use WebSphere DataPower XC10 for HTTP session management. Once done with these script packages, I have all the assets I need to build my target environment.
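To make the first of those script packages concrete, profile augmentation can be as simple as a wrapper around manageprofiles.sh. The sketch below is just that, a sketch: the install root, the template path, and the way the profile name is passed in are assumptions you would adjust for your own image.

```python
#!/usr/bin/env python
# Sketch of a script package entry point that augments an existing profile with
# the WebSphere eXtreme Scale profile template. Paths and arguments are assumed.
import subprocess
import sys

WAS_HOME = '/opt/IBM/WebSphere/AppServer'                 # assumed install root
TEMPLATE = WAS_HOME + '/profileTemplates/xs_augment'      # assumed WXS augment template

def augment(profile_name):
    cmd = [WAS_HOME + '/bin/manageprofiles.sh',
           '-augment',
           '-profileName', profile_name,
           '-templatePath', TEMPLATE]
    rc = subprocess.call(cmd)
    if rc != 0:
        sys.exit('Profile augmentation failed with return code %d' % rc)

if __name__ == '__main__':
    # The profile name would arrive as a script package argument at deployment time.
    if len(sys.argv) < 2:
        sys.exit('Usage: augment_profile.py <profileName>')
    augment(sys.argv[1])
```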
Using my custom image, I build a pattern that contains the number and kind of WebSphere Application Server nodes that I want. I use the advanced options to define a WebSphere Application Server cluster, ensuring its creation happens during deployment. Next, I drag and drop the profile augmentation script onto the deployment manager and custom node parts in my pattern. Finally, I drag and drop the WebSphere DataPower XC10 configuration script onto the deployment manager. The pattern is now ready to deploy!
For those of you that are visual learners like me, this demonstration provides a nice overview of exactly what I wrote about above. Check it out and let me know what you think.
When IBM Workload Deployer v3.0 rolled around, the appliance introduced the concept of shared services. These were services that a cloud administrator could launch into the cloud infrastructure defined to IBM Workload Deployer, and use to serve a number of different application deployments. There were, and continue to be, two main shared services: a proxy service and a cache service. The shared proxy service does pretty much what you may guess. It provides request routing capabilities across multiple different instances of multiple different applications, thereby providing a centralized resource that encapsulates this basic need in an application environment. You can probably also guess what the caching service does. It caches things! Specifically, in IBM Workload Deployer v3.0 it provided in-memory caching of HTTP sessions, thus ensuring high availability of data stored in those sessions.
Undoubtedly, the ability to make HTTP session data fault tolerant is extremely critical in any application environment, cloud-based environments included. However, the applicability of a shared cache service reaches much further, and in IBM Workload Deployer v3.1, we are starting to open this service up to your applications. What does this mean for you? Quite simply, it means that you can now access this cache directly from your application code. If you are familiar with WebSphere eXtreme Scale or the DataPower XC10 Caching Appliance, then you know exactly what I mean. You can use the WebSphere eXtreme Scale ObjectGrid API to insert, read, update, and delete entries that exist in the in-memory cache. The underlying cache technology is based on the same code that powers WebSphere eXtreme Scale and DataPower XC10, so you can be sure that your cache is scalable, fault tolerant, responsive, and otherwise able to meet the needs of your application.
As I hope you find to be the case with many IBM Workload Deployer capabilities, this is a superbly simple capability to leverage. When you deploy virtual application patterns based on the IBM Workload Deployer Pattern for Web Applications, the capability is simply there. The underlying runtime that is serving your application is automatically augmented with the capabilities necessary so that your applications can connect to and utilize the deployed caching service. It is also worth pointing out that you can utilize the caching capabilities provided by this shared service for applications and application infrastructure that you deploy via virtual system patterns as well. You can either choose to augment the WebSphere Application Server environment with the XC10 Feature Pack (a deploy-time option for virtual system patterns built on WebSphere Application Server Hypervisor Edition v8), or you can configure WebSphere Application Server as you always would when integrating with a WebSphere eXtreme Scale environment or a DataPower XC10 Appliance.
What's the real benefit of all of this, you ask? Well, when you use the shared caching service, you get the benefits of a distributed, in-memory, extremely scalable cache without having to deal with much setup or administration. You simply tell IBM Workload Deployer how many resources you want to dedicate to your cache, and deploy the shared service. IBM Workload Deployer takes care of the details, including scaling the cache in and out to meet the needs of the system. On top of all of this, there is also an option to configure 'Next to the Cloud' caching. If you currently own DataPower XC10 appliances, you can make those available to virtual application pattern deployments (this was already possible with virtual system patterns) by simply providing details of the location of the appliance collective in question.
Put simply, setting up, administering, and utilizing an object caching service for your applications has never been easier. Check it out and let us know what you think!
If you work in a development shop similar to mine, you and many of your coworkers have more than one workstation under your desk. We use those extra machines for a variety of reasons, but by and large they serve as foot warmers. That is not to say that they are unnecessary, but rather that they simply aren't used most of the time. If you try to eliminate one, you will surely need it within the next week; yet if your manager asks whether it is really necessary, you would be hard pressed to pinpoint the last time it was used for something truly important. To developers, these extra machines are potential sandboxes for isolated experiments or testing scenarios. For managers, they are relatively unused capital investments that require inventory control and have depreciating value. If you are a network administrator, there are certainly computers in your inventory that are older and lack the capacity to be counted on for everyday use. They sit in a corner or in a blade rack, probably idle or even powered off. These assets take up physical space and contribute very little to your data center. They have little resale value, yet they may represent a significant investment. Or maybe you just can't part with them for sentimental reasons.
Whatever the reasons for having seldom-used computing resources lying around, here is an idea: virtualization. With virtualized images, you can use those machines for whatever purposes are required, for as long as they are required, without spending hours loading them with a compliant OS image, installing software, and configuring them for use. Virtual image libraries could hold preinstalled systems for almost any need, for example:
Workstations provisioned for temporary workers
More server capacity
More machines for load testing
Extra processors for parallel processing systems
Backup systems to carry loads during maintenance hours
If you use WebSphere in any capacity, CloudBurst can put in place a fully functioning WebSphere installation, OS and all, in as little as 20 minutes.
When the need for the machine has passed, it can be undeployed and returned to the pool. This could significantly increase the available computing power of an entire development business. The ability to turn any machine into a needed and useful system on demand is real agile computing and gives a whole new dimension to governance.
Customization capabilities have been very important to the design of IBM Workload Deployer going back to the beginning with WebSphere CloudBurst. Having the ability to quickly spin up environments in a cloud really does little good if those environments are not customized according to your needs. If you look at the virtual system pattern capability, it is why we always had the notion of custom images, custom patterns, and custom scripts. We give you a strong foundation, and you tweak it here and there to create what you want.
Customization is not a concept unique to virtual system patterns. The virtual application model in IBM Workload Deployer supports many different mechanisms for you to tailor your cloud-based environments. You can start with the virtual application pattern types that we ship and use any components in those patterns to build a custom environment. The patterns you build can include your own configuration (within the set of configurable parameters) and include policies that you need for your environment. In looking at just the IBM Workload Deployer Pattern for Web Applications and the IBM Workload Deployer Pattern for Databases, there are quite a number of scenarios you can support with your cloud. However, what happens when you want to go a little further and color outside the lines of what we provide?
At some point you may have heard or read that the entire virtual application pattern model resides on a pluggable architecture. In effect, this means that everything about a virtual application pattern type, from the elements that show up when building a pattern to the management interface you interact with after deployment, is customizable. The fundamental unit of customization for a virtual application pattern type is a plugin. Plugins provide the know-how in terms of installing, configuring, integrating, and managing the application types supported by a given pattern. Plugins also provide metadata that control what users see as they build and manage these patterns. In short, plugins are the source of truth for virtual application patterns!
If you looked in IBM Workload Deployer, you would find the collection of plugins that support the virtual application pattern types shipped with the offering. While that is interesting, you should also know that you can supply your own plugins. That's right. You can develop a plugin, and load it directly into the appliance. This allows you to do two very important things. First, you can extend the virtual application pattern types that come with IBM Workload Deployer with any kind of functionality you deem important. This may be additional monitoring, integration with external systems, or any number of other extensions. Second, you can create new virtual application pattern types that support your desired workloads. You can support the workloads with the software of your choosing so long as you can supply the necessary know-how in your plugins. In either case, you contribute the plugin, and your customized components become first class members of the IBM Workload Deployer landscape.
Okay, so I admit that this is not necessarily news. We have supported user-contributed plugins since the release of IBM Workload Deployer. However, there is something new that significantly lowers the barrier to entry in the custom plugin game. Early last week, IBM announced the IBM Workload Deployer Plugin Development Kit. This kit provides a set of tools and samples designed to make the construction and packaging of custom plugins a simple process. In my opinion, this reiterates our commitment to an extensible, application-centric cloud approach, and it represents a huge step forward for the industry as a whole. Be sure to check this out, and don't be shy with the comments and feedback!
For the next installment of this series of FAQs, let's move from product positioning and integration squarely into the land of operational procedure. For this post, let's say you are getting ready to deploy a pattern based on the WebSphere Application Server Hypervisor Edition. During the deployment process, you provide configuration information, which includes a password for a user named virtuser.
You read the documentation, and you understand that virtuser is both an operating system user and the user that WebSphere CloudBurst configures as the primary administrative user for WebSphere Application Server. Naturally, this user owns the WebSphere Application Server processes that run in the virtual machine. While it is convenient that this is all pre-configured for you, you want to know one thing: "Can I define a user besides virtuser?"
It certainly would not be the first time this question came up. The short answer to this is yes, but there are of course caveats. You can define another user and have that user own the WebSphere Application Server processes, but you cannot completely remove the virtuser user, nor should you remove virtuser as the primary administrative user. The reason for this is that WebSphere CloudBurst relies on virtuser when it carries out certain actions such as applying maintenance, applying fixes, or otherwise interacting with the WebSphere Application Server environment.
All that being said, I recently put together a script package that allows you to use a user other than virtuser. I hope to put the script package in our samples gallery soon, but here's a basic overview of using the script package and what it does (a rough sketch of the wsadmin portion follows the list):
Attach the script package to all parts in a pattern that contain a WebSphere Application Server process.
Deploy the pattern and provide the necessary parameter values. These include the name of the new user, a password, a common name, and a surname. The last two bits are necessary when creating a new administrative user in WebSphere Application Server.
During deployment, the script package first creates a new OS user with the specified password.
The script adds the new user to the existing OS users group.
The script creates a new WebSphere Application Server user with the same username and password and grants administrative privileges to the user.
The script shuts down the WebSphere Application Server processes.
The script changes the runAsUser value for all servers to the empty string and sets the runAsGroup value for those servers to users. This allows members of the OS users group to start the WebSphere Application Server process.
The script starts the WebSphere Application Server processes.
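As promised above, here is a rough sketch of the WebSphere Application Server portion of that work as a wsadmin (Jython) routine. It assumes the default federated user repository and uses placeholder values; the OS user creation and the server stop/start around it are not shown, and this is illustrative rather than the sample itself.

```python
# wsadmin Jython sketch: create a new administrative user and relax process ownership.
# Values are placeholders; assumes the default federated user repository.

newUser    = 'wasadmin2'
password   = 'changeMe'
commonName = 'WebSphere'
surname    = 'Administrator'

# Create the user and grant it the administrator role.
AdminTask.createUser(['-uid', newUser, '-password', password,
                      '-confirmPassword', password,
                      '-cn', commonName, '-sn', surname])
AdminTask.mapUsersToAdminRole(['-roleName', 'administrator', '-userids', newUser])

# Clear runAsUser and set runAsGroup to 'users' on every server process so that
# members of the OS 'users' group can start the WebSphere Application Server processes.
for pe in AdminConfig.list('ProcessExecution').splitlines():
    AdminConfig.modify(pe, [['runAsUser', ''], ['runAsGroup', 'users']])

AdminConfig.save()
# In a network deployment cell you would also synchronize the nodes at this point.
```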
There are a few other activities in the script, but that should give you a basic overview. Again, note that the script does not remove the virtuser user or change that user's OS or WebSphere Application Server permissions in any way. I would also point out that if you use WebSphere CloudBurst to apply maintenance to the WebSphere Application Server environment, it will do so as virtuser and it will restart processes as virtuser, so plan accordingly.
I hope this sheds some light on a very common question. I hope to get the sample up soon, and as always let me know if you have any questions.
In my opinion, declarative deployment models are key to the entire notion of Platform as a Service (PaaS). That is, users should concern themselves with what they want, not necessarily how to get it. The PaaS system should be able to interpret the user's declared intent and automatically convert it into a running system. In this respect, I think the new virtual application pattern, and more specifically policies, in IBM Workload Deployer takes a giant leap toward a more declarative deployment model.
In IBM Workload Deployer, policies allow you to 'decorate' your virtual application pattern with functional and non-functional requirements. In other words, they provide a vehicle for you to tell the system what qualities of service you expect for your application environment. To put a little context around this discussion, let's examine the policies available in the virtual application pattern for web applications. Specifically, let's look at the four policy types you can attach to Enterprise Application, Web Application, and OSGi Application components in this pattern:
Scaling policy: When it comes to cloud, the first thing many folks think about is autonomic elasticity. Applications should scale up and down based on criteria defined by the user. Well, that is exactly what the scaling policy lets you do. You simply attach this policy to your application component, and then specify properties that define when to scale. First, you choose a scaling trigger from a list that includes application response time, CPU usage, JDBC connection wait time, and JDBC connection pool usage. After choosing your trigger, you decide the minimum and maximum number of application instances for your deployment, and then you choose the minimum number of seconds to wait for an add or remove action. At this point, you can deploy your application and IBM Workload Deployer will monitor the environment, automatically triggering scaling actions as needed. (A rough sketch of these inputs follows this list.)
JVM policy: I would be willing to bet that nearly all of you tune the JVM environment into which you deploy your applications. The JVM policy allows you to take two common tuning actions, setting the JVM heap sizes and passing in JVM arguments, as well as to attach a debugger to the Java process (especially useful in development and test phases). You can also use the policy to enable verbose garbage collection (invaluable for understanding heap usage patterns for your application) and select the bit level (32 or 64) for your application. Again, all you have to do is attach the policy and specify the properties. IBM Workload Deployer will take care of the required configuration updates.
Routing policy: The routing policy provides a simple way to specify virtual hostnames and allowable protocols (HTTP or HTTPS) for your application. Attach the policy, provide the virtual hostname you want to use, select the desired protocols, and that's it! Remember, once you set the virtual hostname you will need to update your name server to map the hostname to the appropriate IP address.
Log policy: During the development and test phase, it is likely that you will want to enable certain trace strings in the application runtime. The log policy allows you to provide trace strings for your application, and it makes sure that the appropriate configuration updates occur in the deployed environment.
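To make the 'declare what you want' idea a bit more tangible, here is the kind of information a scaling policy boils down to. This is purely illustrative (the field names are mine, not the appliance's internal format); it just captures the shape of what you declare when you attach the policy.

```python
# Illustrative only: the handful of values you declare for a scaling policy.
scaling_policy = {
    'trigger': 'response_time',             # or CPU usage, JDBC wait time, JDBC pool usage
    'response_time_threshold_ms': 1000,     # scale out when this threshold is exceeded
    'instances': {'min': 2, 'max': 10},     # bounds on application instances
    'min_seconds_to_trigger': 120,          # how long to wait before an add or remove action
}
```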
While this is not an exhaustive explanation of each of the policies above, I hope it gives you a basic idea of what they are and how to use them. To me, declarative deployment models are going to be a crucial part of making PaaS successful, so I am really excited about the notion of policies in IBM Workload Deployer. What do you think?
One of the most powerful features of WebSphere CloudBurst is the ability to take one of the WebSphere Application Server Hypervisor Edition virtual images that are shipped with the appliance and extend it to produce a custom virtual image. This allows users to begin creating customized environments from the bottom up, starting with the operating system. To put it in better context, let's take a look at a couple of scenarios where this feature comes in quite handy.
First off, a very common need for our customers is the ability to continually monitor their application environments. For instance, you may install Tivoli monitoring agents on all of your machines hosting WebSphere Application Server processes and configure those agents to report back to a management server. This is a great case for image extension in WebSphere CloudBurst.
In this scenario, you would start by extending an existing WebSphere Application Server Hypervisor Edition image. WebSphere CloudBurst creates a running virtual machine based on the selected image, and you log into that virtual machine and install the Tivoli monitoring agents. Once the installation is done, you capture the virtual image back into the WebSphere CloudBurst catalog and use the new image to build a custom pattern. The last step is to include a script package in this custom pattern that, upon deployment, configures the installed monitoring agents to report back to your desired management server.
Another use case is likely to be of interest if you are using WebSphere Virtual Enterprise (or something similar) and could benefit from the same ease of provisioning that WebSphere CloudBurst brings to WebSphere Application Server environments. You can use the same customization combination as above (image extension and custom scripts) to enable WebSphere CloudBurst to essentially dispense WebSphere Virtual Enterprise cells.
Again, this scenario starts off by extending a WebSphere Application Server Hypervisor Edition virtual image. Once the virtual machine for the extension is created by WebSphere CloudBurst, you log in and install the WebSphere Virtual Enterprise product. After the installation is done, you capture the image and store it in the catalog. Next, you build a custom pattern based on this image and include script packages that, upon deployment, augment the various parts in the pattern from WebSphere Application Server profiles to WebSphere Virtual Enterprise profiles. (You may wonder why you wouldn't just create the WebSphere Virtual Enterprise profiles during the image extension process. This is because during image extension, you cannot make changes to the virtual disk that contains the WebSphere Application Server profiles. Any changes made to the profiles will be wiped out during the capture process.)
There are countless more scenarios for creating custom virtual images in WebSphere CloudBurst. To name a few, you may want to install JDBC drivers that are common to almost all of your application environments, install required anti-virus software, or just make operating system configuration changes. All of these things can be accomplished through the image extension and capture process. Look for an article coming out soon that will discuss and explain, in much greater detail than I provided here, the process of installing and configuring Tivoli monitoring agents in environments dispensed by WebSphere CloudBurst. In the meantime, if you have any questions or comments, drop us a line here or check out our forum.
Maybe you remember, but not long ago I wrote a post about scenarios when WebSphere CloudBurst and Rational Automation Framework for WebSphere (RAFW) combine to form quite the pair. You can read that post for details, but the basic scenarios were configuring and capturing, importing existing environments into WebSphere CloudBurst, and migrating from virtual to physical installations. Well, after talking with customers and colleagues lately, you can add another scenario to the list: version-to-version WebSphere Application Server migrations.
I want to be clear here about one thing before I go further. I am in no way advocating against the use of the migration tooling that ships with WebSphere Application Server. It is an excellent tool that can make migrations simple and fast. I am merely pointing out that when it comes to version-to-version migrations you have options, and you should survey them all before making a decision.
With that understanding, let's take a look at WebSphere CloudBurst and RAFW in the context of a version-to-version migration. This integrated approach to migration is ideal if you are amenable to moving up to a newer version of WebSphere Application Server in a cloud-based environment. Using both products makes migrations fast and easy, and you can be very confident that the configuration of the migrated environment is faithful to the original. The figure below shows the basic flow of the migration and breaks it down into a set of discrete steps.
Now, for a quick breakdown of each step:
Extract config & apps from old environment: The first step involves pointing RAFW at your existing configuration, the one you want to migrate from, and using an out-of-the-box action to import all of the configuration into a RAFW environment. You can also import your application binaries in this step.
Store config & apps from old environment: In step two, you will store the extracted configuration and application binaries in a source control repository or some backup location separate from your RAFW server. This is an optional, but recommended step.
Analyze and update apps: Before migrating your applications to the newer version of WebSphere Application Server, you can use the completely free Application Migration Toolkit to analyze the source code of your applications. This toolkit will recommend any required updates to ensure your application continues to behave as expected when moving to the new version. Again, this is an optional step, but the toolkit is free and very handy. So, why not?
Deploy new version of the environment: Step four starts by building a new WebSphere CloudBurst pattern. This new pattern matches the topology of the environment you are migrating from, and you build it from an image containing the version of WebSphere Application Server to which you want to migrate. Once built, you deploy it to your private cloud and you have a running environment in minutes.
Apply stored config and deploy updated apps: Now that you have your new environment up and running, use RAFW to apply the configuration you extracted from your old environment. RAFW inherently understands any configuration translation that needs to occur to apply the old configuration to your new environment, and it can also deploy your updated applications for you.
That's the basic overview for version-to-version migrations when you are moving to a cloud-based environment. In time, I will be posting more information about this process to shed a little more light about what is going on under the covers. In the meantime, you know how to reach me if you have questions!
I'm out at the RSA conference in San Francisco this week, and I'm expecting a lot of good conversations about WebSphere CloudBurst and security. This topic always comes up when I'm out and talking to customers, and I approach it from a few different angles.
First of all, WebSphere CloudBurst enables the creation of on-premise clouds (clouds in your data center). This means that you retain control over the resources that make up and support your cloud, and you have the ability to very tightly secure said resources. Notice that I say "you have the ability". I'm careful to point out that on-premise clouds do not inherently make your environment secure. If you don't already have a robust security strategy in place within your enterprise, then simply moving to a cloud model will not solve much. That being said, if you do have a comprehensive security strategy in place, one built around customized processes and access rights, then on-premise clouds are likely to make much more sense for you.
Moving beyond the opportunity for customized security controls provided by on-premise clouds, WebSphere CloudBurst delivers additional, unique security features. It starts on the outside with the tamper-resistant physical casing. If a malicious user attempts to remove the casing to get to the inner contents, the appliance is put into a dormant state, and it must be sent to IBM to be reset. "So what!" you say. If the user removes the casing and gets to the contents, couldn't they simply read the contents off the flash memory or hard disks directly, or insert them into another WebSphere CloudBurst Appliance and read them from there? Nope. All of the contents stored on the appliance's flash memory and hard disks are encrypted with a private key that cannot be changed and is unique to each and every appliance.
If you are at all familiar with WebSphere CloudBurst, you know that the appliance dispenses and monitors virtual systems running on a collection of hypervisors. Obviously then, the appliance must remotely communicate with the hypervisors. In order to secure this communication, all information between WebSphere CloudBurst and the hypervisors (and vice versa) is encrypted. This encryption is achieved by using an SSL certificate that is exchanged when a hypervisor is defined in WebSphere CloudBurst. This certificate must be accepted by a user, thus preventing rogue hypervisors from being defined in WebSphere CloudBurst.
Finally, WebSphere CloudBurst provides for the definition of users and user groups with varying permissions and resource access rights in the appliance. You don't have to turn over the keys to your cloud kingdom when you add a user to the appliance. You have the capability to define varying permissions (from simply deploying patterns, to creating them, all the way up to administering the cloud and appliance), and you have the ability to control access to resources (patterns, virtual images, script packages, cloud groups, etc.) at a fine-grained level. These two capabilities combine to allow you to control not only what actions a user can take, but also on which resources they can take those actions.
WebSphere CloudBurst was designed with a focus on delivering a secure cloud experience, and I think it hit the mark. I'm sure I didn't address all of your WebSphere CloudBurst and security-related questions. If you have something specific in mind, leave a comment on the blog or reach out to me on Twitter. I'll do my best to address your question.
In the course of my job, I am lucky to be able to work with both enterprise users and business partners who are adopting and using the WebSphere CloudBurst Appliance. When it comes to the business partner camp, one of my absolute favorites is the Haddon Hill Group. The Haddon Hill Group is an IBM Premier Business Partner, and they have been an early adopter and vocal advocate of the WebSphere CloudBurst Appliance. They have extensive knowledge of the use of the appliance in enterprise accounts, and quite frankly, they are doing some really cool things with WebSphere CloudBurst.
Given the above, I was glad to see summarized results from their various implementations made available recently on the IBM site. The summary is fairly concise, so I encourage you to take a look at the results Haddon Hill Group is getting with WebSphere CloudBurst.
I am not going to rehash the contents of the results here, but there are a couple of things I want to call out. First off, Haddon Hill Group says that WebSphere CloudBurst can provide companies with a '100 times faster time to market' delivery experience. In a practical sense, this means reducing the amount of time to deliver WebSphere environments from 40-60 days on average to just hours. That is an eye-opening data point!
The other thing I want to call out here is a quote from Phil Schaadt, President and CTO, Haddon Hill Group. I have had the pleasure of working with Phil and team, and I have heard him echo these same sentiments many times:
"The important thing about the IBM WebSphere CloudBurst Appliance is that it will dispense a WebSphere Application Server image onto your WebSphere Application Server environment or private cloud along with other products within the WebSphere stack, and that application server will be ready in a few minutes. You can do it in a clustered environment, and you can even roll out IBM WebSphere Process Server and get it right in a fully clustered environment with a database connection, in about 90 minutes. You can also easily manage all the configurations of IBM WebSphere Process Server that you need. All the steps that took up so much time and effort on the part of IT staff have been removed. The savings for companies with large WebSphere implementations can be in the millions."
It is always great to see clients putting our technology to use to produce tangible business value. Again, I encourage you to take a look at these reports. As always, I am eager to hear what you think, so leave me a comment or reach out to me on Twitter @damrhein.
Cloud Computing is essentially a Systems Management innovation. I understand that, to some, that means simply managing hardware and capacity or computing power. However, it also involves deployment of enterprise level software. While some software is a kind of out-of-the-box asset that can be installed generically as if it were a hard asset, infrastructure software like WebSphere requires considerable skill and knowledge.
Tier-1 cloud computing implementations must be able to expand the enterprise into provided capacity quickly and autonomically. If the scale-out requires tremendous effort and specialized skills, then the cost savings that cloud offers are severely diminished.
CloudBurst provides a mechanism to quickly deploy WebSphere environments to private clouds and allows the administrator to simply manage the assets on which WebSphere will run. The expertise of setting up and configuring WebSphere is, in effect, canned. This allows for much more rapid deployments and reduces the need for more expensive admins.
While many companies are still putting forward more technologically sophisticated offerings that require even more technologically sophisticated staff, IBM has produced a product whose value is more easily realized and understood, and can be seen on the balance sheet.
I hardly ever have a conversation about WebSphere CloudBurst, or cloud computing for application middleware in general, without the topic of databases coming up. Databases are such an important piece of nearly every application middleware environment, so users want to be sure that whatever they do for their application servers, they can also do for the databases on which their applications rely. That is why the capability to deploy DB2 from WebSphere CloudBurst has been around for nearly as long as the capability to deploy WebSphere Application Server.
Even though DB2 deployment capability has been around for a while, there are still some common misconceptions regarding the offering. First, I have talked to a fair number of users who are under the impression that we only offer a trial version of DB2 for deployment via WebSphere CloudBurst. While that was true for the first few months of the offering, it is no longer the case. For several months now, a fully supported, 64-bit, production-ready DB2 image has been available for use in WebSphere CloudBurst. If you were waiting for a DB2 image that you could go live with, wait no longer!
The other misconception, or rather, point of confusion, arises from the fact that the DB2 image for WebSphere CloudBurst is not, by name, a Hypervisor Edition image. I can assure you the difference is in name only. The DB2 image looks and behaves like any other IBM Hypervisor Edition image once you load it into the appliance. You can use it to build and deploy patterns in the same way you use other images in WebSphere CloudBurst. You may just have trouble finding it if you search for 'DB2 Hypervisor Edition' as opposed to 'DB2 Server for WebSphere CloudBurst Appliance.'
Instead of going into further detail, I want to refer you to a blog entry from a fellow IBMer, Leon Katsnelson. Leon is a program director for DB2 and is responsible for the team that develops and delivers the DB2 image for WebSphere CloudBurst. In his most recent post, he provides a nice overview of the image and gives good information for those looking to use DB2 and WebSphere CloudBurst (there is also a bit on cloud computing at the beginning that I think is spot on). Check out Leon's post, and let us know what you think!
When we talk about clouds, we tend to think of the usual enterprise with servers centralized in data centers or in server rooms. At least, I do. But why does it have to be so? Any IT shop will have many more computers than what is in the server farm. With hardware technology accelerating, as always, even desktop machines are capable of multiprocessor computing and doubling as servers.
Cloud offers the ability to do more than web commerce. The concept of cloud can have broad implications for all kinds of parallel processing needs. Right now, there are a number of organizations, from SETI to large medical research firms, that use volunteers on the internet to help work through massive computational workloads. The ability to do that on a wider scale is limited by the need to deliver more sophisticated or even proprietary software to the member systems.
What if workstations could be conscripted to be part of a cloud? When the workstation owner is not using it, the entire machine could be repurposed for another need. Then during work hours, the owner's image could be restored. Private owners could even lease their processing time and make some extra money or earn credit of some kind.
Right now I am surrounded by several multicore systems. Any one of them could power a web presence for a small business. All of them together could power the website for a medium-sized business. If I maintained a small cloud using the computers of my neighbors, I could possibly lease powerful computing cycles to render the next animated movie or to compute fractal geometry calculations for climate models. If I operated between 9 PM and 6 AM, I could deliver more than a day's worth of computing gain. What would that be worth?
If you are going to install and use WebSphere CloudBurst in your own environment, it is very likely that you would want at least two appliances. Perhaps you want to have a standby appliance in case of a failure on the main appliance, or maybe you have different teams that are looking to utilize the appliance in different data centers. In any case, once you install multiple appliances there's another requirement that will pop up pretty quickly. Naturally you are going to want to share custom artifacts among the various WebSphere CloudBurst boxes.
By custom artifacts, I mean virtual images, patterns, and script packages. Script packages have been easy enough to share since WebSphere CloudBurst 1.0 because you can simply download the ZIP file from one appliance and upload it to another. However, there are some enhancements in WebSphere CloudBurst 1.1 that make it easy to share both patterns and images among your different appliances.
As far as patterns go, there is a new script, patternToPython.py, included in the samples directory of the WebSphere CloudBurst command line interface package. This script transforms a pattern you specify into a Python script. The resulting Python script can then be run against a different WebSphere CloudBurst appliance (using the CLI), and the pattern is recreated on the target appliance. You need to be sure that the artifacts the pattern references (script packages and virtual images) exist on the target appliance and have exactly the same names as they do on the appliance from which the pattern was taken. There are no other caveats, and this new sample script makes it really simple to move patterns between appliances.
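To give a feel for what the generated script is, it is essentially a small Jython program that rebuilds the pattern through the CLI's resource objects when you run it against the target appliance. A heavily abbreviated sketch is below; the exact calls are assumptions on my part, and the real thing is whatever patternToPython.py emits for your pattern.

```python
# Rough shape of a script generated by patternToPython.py, run through the
# WebSphere CloudBurst CLI against the target appliance. Calls are illustrative.
pattern = deployer.patterns.create({'name': 'My clustered cell'})   # assumed API
# ...followed by statements that add parts from the named virtual image and
# attach the named script packages, which is why those artifacts must already
# exist on the target appliance under exactly the same names.
```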
For virtual images, a new feature was added that allows you to export a virtual image from the WebSphere CloudBurst console. Simply select a virtual image, specify a remote machine (any machine with SCP enabled), and click a button to export the image as an OVA file. This OVA file can then be added to another WebSphere CloudBurst catalog using the normal process for adding virtual images. You can see this feature in action here.
Stay tuned for more information about some of the handy new features in WebSphere CloudBurst 1.1. We also should have a comprehensive look at the new release coming soon in a developerWorks article.