I spent most of my time growing up doing two things: going to school and playing sports. I made many fond memories -- mostly from the latter :) -- and learned more than a few lessons along the way. Of all of those lessons, one in particular stood out both in the classroom and on the baseball diamond: Sometimes you have to get back to the basics.
In that vein, I think it is time to revisit the basics of WebSphere CloudBurst. By basics, I am not talking about the technical fundamentals of the appliance. Rather, I am talking about revisiting exactly why WebSphere CloudBurst exists in the first place. In other words, let's take a look at the problem domains WebSphere CloudBurst addresses, and discuss a little bit about how the appliance does so.
I point this out about script packages because I recently put one together that does not perform a WebSphere Application Server administration task, but does provide configuration logic common to many WAS deployments. Specifically, I put together a script package that configures an IBM HTTP Server to act as a reverse proxy server. This, of course, allows clients to send requests to the IBM HTTP Server and have those requests pass through to a specified back-end destination (i.e. a service hosted on WebSphere Application Server) based on URI paths.
The script package is not all that different from many of the ones I have put together before. It contains a shell script (which provides the configuration and orchestration logic) as well as a cbscript.json file that defines the script package's characteristics when I upload it to WebSphere CloudBurst. The notable difference in this script package is that I include a Perl script that modifies the IBM HTTP Server configuration file. This reiterates the point that you are not limited to wsadmin and shell scripts within your script packages.
The workings of the script package are quite straightforward. Execution starts in the shell script, which modifies the IBM HTTP Server's configuration file to ensure a couple of proxy modules are loaded:
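# Uncomment the LoadModule directives for mod_proxy and mod_proxy_http in the httpd.conf file referenced by $HTTP_CONF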
sed -i s/"#LoadModule proxy_module modules\/mod_proxy.so"/"LoadModule proxy_module modules\/mod_proxy.so"/g $HTTP_CONF
sed -i s/"#LoadModule proxy_http_module modules\/mod_proxy_http.so"/"LoadModule proxy_http_module modules\/mod_proxy_http.so"/g $HTTP_CONF
As you can see, the createProxy.pl script accepts a single argument. This argument represents the reverse proxy configuration information provided by the user during deployment. The Perl script parses that argument and creates the appropriate proxy directives in the IBM HTTP Server's configuration file.
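To give you an idea of the end result, the directives the Perl script writes are the standard mod_proxy pass-through rules. The URI path and back-end host below are made-up values just to show the shape of the output:
ProxyPass /customerApp http://appserver.example.com:9080/customerApp
ProxyPassReverse /customerApp http://appserver.example.com:9080/customerApp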
After the invocation of the script above, control returns to the shell script, which restarts the IBM HTTP Server so that the configuration changes take effect (a minimal version of that restart is sketched below). The result is an up and running IBM HTTP Server acting as a reverse proxy based on information supplied during deployment. The listings here do not show the full script package, but I hope to have it up on our WebSphere CloudBurst Samples Gallery soon.
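The restart itself can be as simple as a call to apachectl; the $IHS_ROOT variable below is illustrative and would point at wherever IBM HTTP Server is installed in the image:
# Restart IBM HTTP Server so the module and proxy directive changes take effect
$IHS_ROOT/bin/apachectl restart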
If the script above provides some configuration logic you can use, great. However, my main reason for bringing it up here is to point out that WebSphere CloudBurst script packages can be more than shell and wsadmin scripts that perform WebSphere Application Server configuration tasks. You can use them for any sort of scripted activity that is essential to your application middleware deployment process. Happy scripting!
If you follow this blog often, you know that from time to time I like to post frequently asked questions. Well, it's been a while since I have done that, and since then I have added some new questions to my list -- along with some regulars. Take a look below, and if I don't answer your question feel free to leave a comment!
Can IBM Workload Deployer deploy software that is not IBM software? Yes. You can use one of the included images as a springboard and customize it with your own software via extend and capture. Additionally, you can use the IBM Image Construction and Composition Tool (I'm getting ahead of myself here) to create your own custom images from the ground up and use those within IBM Workload Deployer.
Can I use VMotion for the systems I deploy with IBM Workload Deployer? Yes. IBM Workload Deployer has tolerated the use of VMotion since the WebSphere CloudBurst days (see the Additional Considerations section on this page for more information). IBM Workload Deployer v3 introduced the notion of virtual machine mobility initiated directly from the appliance. This capability takes advantage of VMotion in the case of VMware-based cloud environments.
Can IBM Workload Deployer deploy just a base operating system? Yes. IBM Workload Deployer v3 introduced a base operating system image that contains 64-bit Red Hat Enterprise Linux. Internally, IBM Workload Deployer uses this as the foundation on top of which virtual application patterns are deployed. You can use it to deploy virtual machines containing just the base OS, or you can customize it to deploy software of your choosing. (As an aside, IBM Workload Deployer v3.1 will include a base operating system image for AIX.)
Can I automate the process of calling/using IBM Workload Deployer? Yes. IBM Workload Deployer is built to fit a specific need -- creating and managing a cloud of middleware and middleware-based workloads. In that light, it would be a shortcoming if IBM Workload Deployer did not fit well into more holistic or enterprise-wide cloud management systems. The REST API and CLI allow you to automate the use of IBM Workload Deployer, thereby allowing it to be mashed up into other processes.
Can I group two appliances together for high availability? Yes. IBM Workload Deployer v3.1 introduces the ability to configure appliances in a master/slave setup. You can connect two appliances, allow them to share a floating IP address, and be confident that data is continuously replicated between the two. If one appliance fails, the other appliance picks up the floating IP ensuring continuous service.
Are images created using the Image Construction and Composition Tool supported for use within IBM Workload Deployer? Yes. Part of the new IBM Workload Deployer 3.1 announcement was a statement of support for using images created by the Image Construction and Composition Tool as a component of your virtual system patterns. This is a very important enhancement as it allows you to extend the set of content deployed by IBM Workload Deployer while being sure that you are operating within the boundaries of intended use.
Can I use IBM Workload Deployer to provision to public clouds? No... and yes. If you install an IBM Workload Deployer appliance in your datacenter, you cannot use it to deploy to a public cloud environment. However, you may have recently heard about the IBM SmartCloud Application Services portfolio. IBM has announced that the pattern-based provisioning that one gets with IBM Workload Deployer will also be available as part of this portfolio. This means that you will be able to build and deploy patterns using a service hosted on the IBM SmartCloud. Further, your deployed systems will run on the IBM SmartCloud. Check out this demo for more information.
** IBM Workload Deployer 3.1 firmware is available on 11/18.
A couple of weeks ago, I dropped by the Intel Developer Forum to present a session and listen in on a few others. As always at these types of shows, I learned quite a bit. Most strikingly though, I was reminded of something that is probably quite obvious to many of you: Consumer interest in cloud computing will not be letting up any time soon.
Based on this, and some of the other things I heard at the show, I decided to catch up with fellow IBMer Marc Haberkorn. Marc is an IBM Product Manager and is responsible for IBM Workload Deployer amongst other things. I asked him about IBM Workload Deployer, the competition, and cloud in general. Check out what Marc had to say below:
Me: IBM Workload Deployer is one of a growing wave of cloud management solutions. How do you differentiate its focus and business value from the myriad of other solutions out there?
Marc: To sum it up, we offer a combination of depth and breadth. IWD delivers both workload-aware management and general purpose management. Workload-aware management differentiates IWD from its competition, as it can deliver more value for the set of products for which it has context. There is a set of actions that workload-aware management tools can perform that general purpose management tools normally leave to the user. This list includes configuring a middleware server to know its hostname/IP address, configuring multiple middleware servers to know of one another, arranging clusters, applying maintenance, and handling elasticity. By handling more of these activities in the automated flow, there are fewer chances for manual errors and inconsistencies to enter a managed environment.
That said, without infinite resources or time, it’s impossible to deliver this context-aware management for everything under the sun. As such, in order to allow IWD to deliver differentiated value AND allow it to handle a customer's entire environment, we offer a mix of workload-aware management and general purpose management.
Me: VMware is a good example of a company active in the cloud space, and they seem to keep a consistent pace of new product delivery. What do you think of their product development focus?
Marc: I think VMware has built a very compelling set of capabilities in the virtualization space. I think the main difference between VMware's suite and IBM Workload Deployer is the perspective from which the environments are managed. VMware puts the administrator in the position of thinking about infrastructure from the ground up. The administrator is thinking about virtual images, hypervisors, and scripts. In IBM Workload Deployer, we think about things from the perspective of the app, because that's ultimately what the business cares about. By providing a declarative model through which an application can be instantiated and managed, we feel we deliver a deeper value proposition to clients, through workload-aware management.
Me: The 'one tool to do it all' approach is a popular, if hard to achieve, goal. What is your advice to users when it comes to choosing between breadth and depth in cloud management solutions?
Marc: The advantages of a "one tool to do it all" approach are many: less integration, more uniformity, less complexity. As such, customers will always prefer a single tool when possible. This is why IBM Workload Deployer has focused not only on providing differentiated, deeper value for common use cases but also on providing a way to handle the "everything else." As such, my advice to users is not to choose between breadth and depth - use IBM Workload Deployer, which offers both.
Me: To close, I'm curious to know where you think we are heading in the cloud market. What do you think users will be most readily adopting over the next one to two years? Where does the cloud industry need the most innovation?
Marc: I think most users are currently looking at the broad picture of cloud computing, and have been adopting primarily in the private cloud realm. There are several reasons for this. One reason is that many customers have a large set of hardware resources which amount to sunk cost that needs to be leveraged. Another reason is around data security concerns in off-premises clouds, and still another reason is around the human factor of comfort, which has taken time to develop around off-premise cloud models. However, businesses have become increasingly comfortable with various sources of outsourcing in recent years, especially in mission critical areas involving very sensitive data. Just look at IBM's Strategic Outsourcing business, which handles entire IT operations for many large businesses. I think that trend will (and really, has already begun to) continue in the area of cloud computing, and will lead to more public and ultimately hybrid cloud computing adoption. In order to get to hybrid cloud computing, I see much of the focus and innovation being associated with data security, workload portability (across private and public, in a seamless fashion), and license transferability between private and public. When this space reaches fruition, clients will be able to enjoy true elastic economics in a computing model that allows a mixture of owning and renting compute resources and software licenses.
One of the most exciting announcements at IBM IMPACT last week was that of the new WebSphere Process Server Hypervisor Edition. This new virtual image allows you to provision complete WebSphere Process Server environments into your on-premise cloud using the WebSphere CloudBurst Appliance. Just like with the other environments you can provision using WebSphere CloudBurst (namely WebSphere Application Server, DB2, and Portal Server), you can stand up these WebSphere Process Server environments in a matter of minutes.
The WebSphere Process Server Hypervisor Edition image does not come pre-loaded on the appliance, but it does come with a cool utility that helps you get it there. The WebSphere Process Server Hypervisor Edition loader provides a wizard-like tool that loads the image into the catalog of an appliance you specify. The tool is simple to use and is included as part of the image package that you download from Passport Advantage.
Not only does the loader populate the WebSphere Process Server Hypervisor Edition image into the appliance's catalog, it also creates a set of patterns for the WebSphere CloudBurst Appliance. These patterns encapsulate golden topology environments for WebSphere Process Server Hypervisor Edition. At the time of this post, the patterns created by the loader include the following:
Standalone server: This pattern represents a single server instance of WebSphere Process Server. Deployment of the pattern results in a single virtual machine that contains both the server instance and a DB2 instance.
Simulated environment: This pattern contains a single part called a 'Full function control node'. Deployment of the pattern results in the creation of a deployment manager, proxy server, DB2 environment, and three WebSphere Process Server clusters (application target cluster, support cluster, and messaging cluster), all in a single virtual machine.
Scalable environment: This pattern contains a deployment manager, 'Basic function nodes' part, DB2 part, and a proxy server. Deploying the pattern results in the same components as the pattern above, but in this case each component resides in its own virtual machine.
The announcement of the WebSphere Process Server Hypervisor Edition only serves to increase the applicability of WebSphere CloudBurst for constructing on-premise WebSphere clouds. If you have any questions, or want to learn more about this new virtual image, please let me know.
I recently read a post by David Linthicum in which he proposes that a key benefit of cloud computing is the ability to transfer risk from the enterprise to the cloud provider.
At first glance, this seems an obvious benefit of using a public cloud for computing resources. Cloud providers take care of the onerous task of providing computing resources across an organization. If the resources need to be updated, require critical maintenance, or need emergency action, the cloud provider will provide those services. Enterprise IT departments are left to devote effort toward delivering technological capabilities to the business. However, does any of this imply a transfer of risk?
I'd answer that question with "It depends." Whether or not an enterprise has transferred risk by contracting with a public cloud provider depends on the provisions in the Service Level Agreement (SLA) that exists between the enterprise and provider. In some cases (maybe most) the SLA simply provides a refund for a portion of the service fee based on the impacted services. This is clearly not a case of transference of risk. The loss of current and new business sustained by the enterprise during the service outage is not indemnified by the cloud provider. In this sense, the enterprise has done nothing more than transfer the management of their risks to a third party.
True risk transference can be achieved, but it requires SLAs that provide both service fee refunds and business loss indemnification. During a service outage, an enterprise's risk is not the fee it is paying for the service but rather the impact on current and future profits. There must be stipulations in the SLAs that address these losses for risk transfer to have taken place.
The difference between transferring risk and managing risk may seem obvious, but it serves to underscore the importance of SLAs in the cloud computing world. Enterprises need to fully understand these SLAs in order to accurately assess the benefits of using a cloud provider. SLAs are poised to be critical in the cloud computing world, and I'm interested to see how they will help shape the competitive landscape of the industry.
In my last post, I concentrated on the new enhancements in WebSphere CloudBurst V2.0. One of the major new additions was the introduction of Environment Profiles, and I promised a developerWorks article would be forthcoming. The article is now live, along with a demo that showcases the capabilities of environment profiles.
As I mentioned in my last post, environment profiles center on giving you more customization capability during the pattern deployment process. In WebSphere CloudBurst, the pattern deployment process consists of the five main steps depicted below.
Traditionally, WebSphere CloudBurst controlled the entire deployment process, thus closing it off to the deployer. Environment profiles extend users' customization reach to steps 1-3 in the above diagram. Specifically, environment profiles give you the following control:
Control over the assignment of IP addresses and hostnames to pattern parts: Instead of having WebSphere CloudBurst automatically assign IP addresses, and thus hostnames, to virtual machines during deployment, you can explicitly set both values during the deployment process.
Ability to deploy single patterns to multiple cloud groups: Previously, when deploying a pattern you selected a single cloud group and WebSphere CloudBurst deployed all the parts in the pattern to machines within that cloud group. While this may be okay for many cases, other cases may require you to deploy some parts of the pattern to one group of machines while other parts map to a separate set of machines. Before environment profiles, you could accomplish this with multiple patterns. With environment profiles, you can accomplish it with a single pattern.
Ability to supply virtual machine naming standards: As part of deploying a pattern, WebSphere CloudBurst creates one or more virtual machines with distinct names. Environment profiles allow you to supply a naming standard that WebSphere CloudBurst will use when creating the machines, as opposed to the default naming scheme previously used.
It is important to note that the use of environment profiles is completely optional, and you can continue to use the traditional deployment process, thereby leaving WebSphere CloudBurst in control. That said, the introduction of environment profiles is a direct response to consistent user feedback we received regarding the need for more control during the deployment process. Based on my user conversations, these profiles address many of said needs in an easy to use, straightforward manner. We are, of course, eager to know what you think. As always, you can let me know right here, through email, or on Twitter (@damrhein).
Over time, many of our users learn to effectively leverage WebSphere CloudBurst user roles and fine-grained access controls to map activities and responsibilities in the appliance to the appropriate people and teams within their organization. Using these controls, they are able to define actions that a user or group can take, and they can define the set of resources on which they can take those actions. It is efficient, flexible, and an absolute necessity in many enterprise scenarios.
In some cases though, I talk with users that want a little more control, or probably better put, governance over the actions a user can take within a given role. Most often, this need arises when the discussion of pattern authoring comes up. If you want a user in WebSphere CloudBurst to be able to create patterns, you simply give them the Create new patterns permission. Once you give them that permission, the user can create patterns using both virtual image parts and script packages in the catalog. For many of the users I talk with, this approach suits their needs.
However, in some scenarios administrators want a little more insight and control over how the pattern authors build their patterns. Specifically, they want to ensure that patterns contain only approved virtual image parts and script packages. While you can certainly use the fine-grained access controls of the appliance to expose only the 'approved' virtual image parts and script packages, that alone may not be enough. After all, the definition of what is 'approved' may be different when building a pattern for testing purposes versus one built for production purposes. If the same pattern author builds both of those patterns, fine-grained access controls do not help as much. So, what can you do?
Have I ever told you how much I love the WebSphere CloudBurst CLI? It's powerful, easy to use, and a great automation enabler. It is also the perfect tool for our problem above. If you are looking to enforce certain constraints on WebSphere CloudBurst patterns, I strongly recommend using the CLI as a governance tool.
To provide a concrete example of what I mean, let's take a look at a generic pattern checking script I am working on now (I hope to have this in the samples gallery soon). Consider the case that I want to check that all of my test patterns for a specific application environment contain 1 deployment manager and between 1 and 3 custom nodes. In addition, I want to make sure that the parts for these nodes come from an approved virtual image, and I want to verify that the deployment manager contains the correct application installation script package. With the script I am currently writing, you would start by encapsulating this information in a properties file.
PatternAssertion_1=Customer Processing Test Environments
PatternAssertion_1_Requirements=Deployment manager:1:415:Install customer process app;Custom nodes:1-3:415
In the above, the PatternAssertion_1 key provides a name for the pattern verification assertion. The PatternAssertion_1_Requirements key provides the requirements for the pattern. The above requirements indicate that for a pattern to meet the assertion, it must contain 1 deployment manager part from the virtual image with reference number 415. In addition, the deployment manager must contain a script named Install customer process app. A valid pattern must also contain 1 to 3 custom node parts, also based on the virtual image with reference number 415. When done defining my requirements in the properties file, I simply invoke a script and pass in the file. As a result, I get information about which patterns satisfy or do not satisfy the assertions. For example:
The Customer Process Application pattern satisfied the requirements of the Customer Processing Test Environments assertion.
OR The Customer Process Application pattern did not satisfy the requirements of the Customer Processing Test Environments assertion
due to the following reason: The pattern is required to have a minimum of 1 and a maximum of 3 Custom node part(s), but it had 4.
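In case you are wondering what the invocation itself looks like, it is just a one-line call into the CLI. The host, credentials, and file names below are purely illustrative, and the exact command-line options may differ for your CLI installation; the point is simply that the Jython checking script runs inside the CLI and takes the properties file as its argument:
# Run the pattern-checking script through the WebSphere CloudBurst CLI, passing in the assertions properties file (all names illustrative)
./cloudburst -h mycloudburst.example.com -u admin -p password -f checkPatterns.py patternAssertions.properties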
As I said, I hope to have this sample script posted to the samples gallery soon. I am going through some final revisions and enhancements that I hope make it better and more generally applicable. In the meantime, I wanted to point out that pattern governance is indeed doable, and in fact not very hard to achieve with the CLI. I will be sure to post an update when the sample script is ready. In the meantime, let me know if you have any questions or comments.
If you have been wanting to get some first hand experience with patterns of expertise in preparation for IBM PureApplication System or IBM Workload Deployer but you don't yet have a system of your very own to use ... then you will want to check out this post/video and then download the Virtual Pattern Toolkit for Developers! It's absolutely free and will get you up and running with a virtualized system in a short while. Check it out!
In the previous post I spoke about how a Virtual Application feature introduced in Workload Deployer v3.1 actually had benefits for Virtual System patterns as well. In that case I was talking about the ability to deploy Virtual Applications running on AIX to PowerVM hypervisors and how this had hidden benefits for Virtual Systems as well. This is a great example of how an enhancement to Virtual Applications can sometimes benefit Virtual Systems. However, this is not the only instance where the two pattern types intersect.
Several other new or enhanced features that are primarily for Virtual Applications are also being extended to benefit and improve Virtual Systems ... and vice-versa. One such area of improvement involves Shared Services in v3.1. These services were introduced in v3.0 specifically for the benefit of Virtual Applications. However, several enhancements have extended these capabilities to Virtual Systems, and likewise, some functionality that was previously only available to Virtual Systems has been extended to Virtual Applications in the form of Shared Services.
As you may already know, Shared Services were first introduced in v3.0 and are just what the name implies: services that are deployed by a cloud administrator and used by multiple virtual application deployments. Let's start by taking a look at the shared services available under Cloud -> Shared Services in v3.1. You will notice that there are now more shared services listed than there were in v3.0.
In addition to the familiar Caching Service and ELB Proxy Service (formerly Proxy Service), there are now additional entries for an External Caching Service and an External Application Monitoring Service. For simplicity, let's just start from the top and go down the list, discussing the function of each service, what is new or improved in v3.1 with regard to virtual applications, and, when applicable, how the service can be used by virtual systems.
Caching Service
The Caching Service was introduced in v3.0. Its primary purpose is to cache HTTP session data using a highly scalable and fast in-memory cache. This is the same core technology that is included in our WebSphere eXtreme Scale and DataPower XC10 Caching appliance offerings. To make use of this service, all you need to do is deploy an instance of the Caching Service with the configuration parameters of your choice into a cloud group where you want to leverage that service. As you create virtual application patterns, you simply select the Enable session caching check box when you add a scaling policy. When such a pattern is deployed, it will be automatically configured to leverage the Caching Service for session persistence. It's as simple as that.
Several new features were introduced in v3.1 for the Caching Service. First, the Caching Service can now be launched with parameters that define its behavior for automatic scaling to meet the ever-changing demands of your applications. Once these are set, Workload Deployer will manage the service to ensure sufficient capacity based upon your requirements, adding or removing containers as needed. Second, and this is significant for Virtual System patterns, the caching service has been enhanced with new operations to support listing, creating, and deleting various types of object grids. You can then use the WebSphere eXtreme Scale ObjectGrid APIs to persist and manage content in the grid from your application code in Virtual System deployments. This saves you the trouble of creating and configuring your own caching service for these purposes outside of the cloud and permits sharing of the service you have already configured - a nice savings.
Caching Service (External)
The External Caching Service is one of the new additions in v3.1. Let's say that you have already configured a caching solution for your enterprise using the DataPower XC10 appliance or a collective of appliances. It would be nice if you could leverage that same solution instead of launching yet another caching solution within your private cloud. Leveraging your existing solution would consolidate your caching needs and preserve the cloud resources for other purposes. With this new external caching service you can do just that. It provides the ability to leverage an external caching solution for your Virtual Application session persistence needs as well as your Virtual System and even non-cloud caching needs. Just point an instance of this external caching service at your DataPower XC10 caching solution, and all of the HTTP session persistence needed by your virtual applications in the same cloud group will make use of the external caching service. You can also point multiple instances of the external caching service in multiple cloud groups at the same XC10 appliance or collective.
Monitoring Application (External)
With the External Application Monitoring service, you can deploy a monitoring service reference within a cloud group that points at a Tivoli Enterprise Monitoring Server installation outside of the cloud. The TEMS server must be at version 6.2.2 Fix Pack 5 or later. Once the reference is created, the Unix or Linux OS monitoring agents and the Workload monitoring agent provided for virtual application workloads will automatically connect to the defined Tivoli server using the supplied primary and fail-over Tivoli Enterprise Monitoring Server, protocol, and port. This is especially useful if you want to consolidate all of your monitoring in a common console. As with the External Caching Service, this enhancement extends the integration capabilities of Virtual Application Patterns beyond the scope of your private cloud and allows you to consolidate and leverage investments you have already made.
ELB Proxy Service
The Proxy Service was first introduced in v3.0 and renamed to the ELB Proxy Service in v3.1 for clarity. As the name implies, its primary purpose is to provide routing and load balancing for multiple deployed web applications. As with the caching service, you deploy this service based upon your requirements for load and availability within a cloud group. When defining virtual application patterns to leverage this service, you simply add a routing policy and define your virtual host name. When the virtual application pattern instance is deployed to the cloud group, the necessary configuration will be performed to add the virtual host name and configure your application environment to use the ELB Proxy Service. New in v3.1 is the capability to scale the ELB Proxy Service itself to meet the changing demands of your application mix.
One other item that I should point out (and to which I've already alluded) is that you can now deploy multiple instances of each of the shared services - one per cloud group. Shared services can also now be deployed using environment profiles. This was not previously the case in v3.0, where each service was a singleton for the appliance. Allowing multiple instances of shared services gives you the flexibility to configure the sharing of your services as necessary for your particular environment.
I hope this post has provided a useful overview of the value of shared services and the new capabilities introduced in v3.1. I also hope that you can see how these services make it easier to implement your solutions for both virtual applications and virtual systems within a private cloud environment, and that this post sheds a little light on how we are continuing to improve IBM Workload Deployer. As always, these improvements are driven by the feedback we receive from you, so please let us know what you think!