If you follow this blog often, you know that from time to time I like to post frequently asked questions. Well, it's been a while since I have done that, and since then I have added some new questions to my list -- along with some regulars. Take a look below, and if I don't answer your question feel free to leave a comment!
Can IBM Workload Deployer deploy software that is not IBM software? Yes. You can use one of the included images as a springboard and customize it with your own software via extend and capture. Additionally, you can use the IBM Image Construction and Composition Tool (I'm getting ahead of myself here) to create your own custom images from the ground up and use those within IBM Workload Deployer.
Can I use VMotion for the systems I deploy with IBM Workload Deployer? Yes. IBM Workload Deployer has tolerated the use of VMotion since the WebSphere CloudBurst days (see the Additional Considerations section on this page for more information). IBM Workload Deployer v3 introduced the notion of virtual machine mobility initiated directly from the appliance. This capability takes advantage of VMotion in the case of VMware-based cloud environments.
Can IBM Workload Deployer deploy just a base operating system? Yes. IBM Workload Deployer v3 introduced a base operating system image that contains 64-bit Red Hat Enterprise Linux. Internally, IBM Workload Deployer uses this as the foundation on top of which virtual application patterns are deployed. You can use it to deploy virtual machines containing just the base OS, or you can customize it to deploy software of your choosing. (As an aside, IBM Workload Deployer v3.1 will include a base operating system image for AIX.)
Can I automate the process of calling/using IBM Workload Deployer? Yes. IBM Workload Deployer is built to fit a specific need -- creating and managing a cloud of middleware and middleware-based workloads. In that light, it would be a shortcoming if IBM Workload Deployer did not fit well into more holistic or enterprise-wide cloud management systems. The REST API and CLI allow you to automate the use of IBM Workload Deployer, thereby allowing it to be mashed up into other processes.
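As a rough illustration of what that automation can look like, here is a minimal Python sketch that drives the appliance over HTTP. The endpoint paths, payload fields, and credentials below are hypothetical placeholders rather than the documented API; consult the IBM Workload Deployer REST API reference for the real resource URIs.

```python
# A minimal sketch, assuming hypothetical endpoint paths and field names.
import requests

APPLIANCE = "https://iwd.example.com"      # appliance address (placeholder)
AUTH = ("cbadmin", "password")             # appliance credentials (placeholder)

# List the patterns defined on the appliance (hypothetical URI).
patterns = requests.get(APPLIANCE + "/resources/patterns",
                        auth=AUTH, verify=False).json()
pattern_id = patterns[0]["id"]

# Kick off a deployment of the first pattern (hypothetical URI and payload).
response = requests.post(APPLIANCE + "/resources/deployments",
                         json={"pattern": pattern_id,
                               "environmentProfile": "Development"},
                         auth=AUTH, verify=False)
print(response.status_code)
```

The same flow could be scripted through the command-line interface instead; the point is simply that actions you perform in the console can also be driven programmatically and stitched into a larger provisioning process.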
Can I group two appliances together for high availability? Yes. IBM Workload Deployer v3.1 introduces the ability to configure appliances in a master/slave setup. You can connect two appliances, allow them to share a floating IP address, and be confident that data is continuously replicated between the two. If one appliance fails, the other appliance picks up the floating IP ensuring continuous service.
Are images created using the Image Construction and Composition Tool supported for use within IBM Workload Deployer? Yes. Part of the new IBM Workload Deployer 3.1 announcement was a statement of support for using images created by the Image Construction and Composition Tool as a component of your virtual system patterns. This is a very important enhancement as it allows you to extend the set of content deployed by IBM Workload Deployer while being sure that you are operating within the boundaries of intended use.
Can I use IBM Workload Deployer to provision to public clouds? No... and yes. If you install an IBM Workload Deployer appliance in your datacenter, you cannot use it to deploy to a public cloud environment. However, you may have recently heard about the IBM SmartCloud Application Services portfolio. IBM has announced that the pattern-based provisioning that one gets with IBM Workload Deployer will also be available as part of this portfolio. This means that you will be able to build and deploy patterns using a service hosted on the IBM SmartCloud. Further, your deployed systems will run on the IBM SmartCloud. Check out this demo for more information.
** IBM Workload Deployer 3.1 firmware is available on 11/18.
If you are at least basically familiar with IBM Workload Deployer, you are likely aware that the appliance has many different capabilities. On the surface it is a cloud management device for middleware and middleware applications. Of course, there are quite a few details that are important to understanding the functionality provided, and I spend quite a bit of my time talking with various users and potential users about these details. One thing I have noticed that can become an obstacle to effective communication regarding IBM Workload Deployer is the lack of a commonly understood language. I sometimes find that the user and I are simply using different terminology to describe the same thing. As you can imagine, this just serves to create confusion, and neither party gets the most out of the conversation.
In order to combat this communication gap, I thought I would put together a simple presentation that introduces and defines IBM Workload Deployer terminology. Check it out below (you can also download it here):
While the presentation does not dive deep into the terms it introduces, it does provide a basic definition and illustrative example of each. My hope is that this fosters an understanding of some of the basic concepts in IBM Workload Deployer, and ultimately pushes us towards a common vernacular. Please let me know what you think!
Typically we spend most of the real estate on this blog talking about cloud computing and specifically, IBM Workload Deployer. However, I am hoping that this week you permit me to take a bit of a detour to discuss a very important new announcement. Last week, IBM announced the early availability of the WebSphere Application Server v8.5 Alpha. In all fairness, your response may be 'You guys always have WAS Alphas. Why should I care about this one?' I have two words for you: Liberty Profile.
Based on my own experience in the IBM labs and my conversations with numerous enterprise developers out there, I think I understand many of the needs to create an efficient development environment. Developers need tools and runtimes that are lightweight, easy to install, simple to configure, and fast to recycle or otherwise update. Enhancements in our WebSphere Application Server v8.0 took many of these concerns head on with features such as directory-based install and drastically improved server startup times. The new v8.5 Alpha, and specifically the Liberty Profile, extend this developer focus even further.
The Liberty Profile is a lightweight, fast, and easy-to-use application runtime that you can download for free by visiting the WASdev community site. The design of the runtime is best described as fit-for-purpose, and you configure it by selectively enabling and disabling features based on application need. For example, you may enable the servlet, JPA, and JSP features, or you may decide you only need to enable the servlet feature of the runtime. It is completely up to you! In addition to this innovative new runtime, the WebSphere Application Server v8.5 Alpha also includes free tools for Eclipse. These tools make it simple to create Liberty Profile server instances, start and stop server instances, and install and remove applications. In fact, you can do all of this and even download and install the WebSphere Application Server v8.5 Alpha without ever leaving your Eclipse workspace! Check out the demonstration below to see an example of installing and using the new Alpha.
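To make the fit-for-purpose configuration concrete, here is roughly what a Liberty server.xml looks like when only a few features are enabled. This is a minimal sketch; the exact feature names and defaults available in the v8.5 Alpha may differ from what is shown here.

```xml
<server description="sample Liberty profile server">
    <!-- Enable only the capabilities the application actually needs -->
    <featureManager>
        <feature>servlet-3.0</feature>
        <feature>jsp-2.2</feature>
        <feature>jpa-2.0</feature>
    </featureManager>

    <!-- HTTP endpoint the application is served from -->
    <httpEndpoint id="defaultHttpEndpoint" host="localhost" httpPort="9080"/>
</server>
```

Dropping a feature from that list removes the corresponding capability from the runtime, which is a big part of why the server stays so small and starts so quickly.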
I really hope that you will participate in the new WebSphere Application Server v8.5 Alpha. The setup process that includes both tools and runtime will take just a few minutes of your time, and leaves but a small footprint on your machine (the Liberty Profile of the WAS v8.5 Alpha is only ~50 MB unzipped). In the meantime, you can find more information about the Alpha on the WASdev site or in the new Information Center. Finally, don't forget to join in on the conversation on the WASdev forum!
A couple of weeks ago, I dropped by the Intel Developer Forum to present a session and listen in on a few others. As always in these types of shows, I learned quite a bit. Most strikingly though, I was reminded of something that is probably quite obvious to many of you: Consumer interest in cloud computing will not be letting up any time soon.
Based on this, and some of the other things I heard at the show, I decided to catch up with fellow IBMer Marc Haberkorn. Marc is an IBM Product Manager and is responsible for IBM Workload Deployer amongst other things. I asked him about IBM Workload Deployer, the competition, and cloud in general. Check out what Marc had to say below:
Me: IBM Workload Deployer is one of a growing wave of cloud management solutions. How do you differentiate its focus and business value versus the myriad of other solutions out there?
Marc: To sum it up, we offer a combination of depth and breadth. IWD delivers both workload aware management and general purpose management. Workload aware management differentiates IWD from its competition, as it can deliver more value for the set of products for which it has context. There is a set of actions that workload aware management tools can do that is normally left to the user by general purpose management tools. This list includes configuring a middleware server to know its hostname/IP address, configuring multiple middleware servers to know of one another, arranging clusters, applying maintenance, and handling elasticity. By handling more of these activities in the automated flow, there are fewer chances for manual errors and inconsistencies to enter a managed environment.
That said, without infinite resource or time, it’s impossible to deliver this context-aware management for everything under the sun. As such, in order to allow IWD to deliver differentiated value AND allow it to handle a customer's entire environment, we offer a mix of workload-aware management and general purpose management.
Me: VMware is a good example of a company active in the cloud space, and they seem to keep a consistent pace of new product delivery. What do you think of their product development focus?
Marc: I think VMware has built a very compelling set of capabilities in the virtualization space. I think the main difference between VMware's suite and IBM Workload Deployer is the perspective from which the environments are managed. VMware puts the administrator in the position of thinking about infrastructure from the ground up. The administrator is thinking about virtual images, hypervisors, and scripts. In IBM Workload Deployer, we think about things from the perspective of the app, because that's ultimately what the business cares about. By providing a declarative model through which an application can be instantiated and managed, we feel we deliver a deeper value proposition to clients, through workload-aware management.
Me: The 'one tool to do it all' approach is a popular, if hard to achieve, goal. What is your advice to users when it comes to choosing between breadth and depth for cloud management solutions?
Marc: The advantages of a "one tool to do it all" approach are many: less integration, more uniformity, less complexity. As such, customers will always prefer a single tool when possible. This is why IBM Workload Deployer has focused not only on providing differentiated, deeper value for common use cases but also on providing a way to handle the "everything else." As such, my advice to users is not to choose between breadth and depth - use IBM Workload Deployer, which offers both.
Me: To close, I'm curious to know where you think we are heading in the cloud market. What do you think users will be most readily adopting over the next one to two years? Where does the cloud industry need the most innovation?
Marc: I think most users are currently looking at the broad picture of cloud computing, and have been adopting primarily in the private cloud realm. There are several reasons for this. One reason is that many customers have a large set of hardware resources which amount to sunk cost that needs to be leveraged. Another reason is around data security concerns in off-premises clouds, and still another reason is around the human factor of comfort, which has taken time to develop around off-premise cloud models. However, businesses have become increasingly comfortable with various sources of outsourcing in recent years, especially in mission critical areas involving very sensitive data. Just look at IBM's Strategic Outsourcing business, which handles entire IT operations for many large businesses. I think that trend will (and really, has already begun to) continue in the area of cloud computing, and will lead to more public and ultimately hybrid cloud computing adoption. In order to get to hybrid cloud computing, I see much of the focus and innovation being associated with data security, workload portability (across private and public, in a seamless fashion), and license transferability between private and public. When this space reaches fruition, clients will be able to enjoy true elastic economics in a computing model that allows a mixture of owning and renting compute resources and software licenses.
We've begun to seed this location with all sorts of helpful information on IBM Workload Deployer. Check it out and you will find links to a "getting started" section, articles, demos, Redbooks, whitepapers, pointers to various blogs where authors write about private clouds or IBM Workload Deployer (yep, this blog is included), links to product documentation and the Education Assistant, upcoming events, and more. We're still populating this location with content and we're looking for input on how to improve things ... so please provide your feedback and check back often to see how it evolves.
The content provided in the community is open and visible to everyone immediately. However, there is even more value if you create an id (or use your existing developerWorks id) to become a member of the community. Members can participate in the many collaborative elements that the community provides. This includes the ability to open discussions and collaborate on the forum, post blog entries in the IBM Workload Deployer community blog, or even share content that you have created which may be of interest to others.
There is even a specific section in the community focused on the Plugin Developer's Kit that Dustin mentioned in the previous post on extensibility (see the IBM Workload Deployer PDK wiki page).
So please visit this new IBM Workload Deployer community and send us your feedback so that we can improve and grow this into a valuable resource. Ultimately, we want this to be a place where we can help each other be successful using IBM Workload Deployer. We also want to learn valuable insights from your experiences with IBM Workload Deployer so that we can continue to make improvements and optimizations in the appliance with the goal of improving your private cloud experience, making your business more agile and efficient. As always, please send us your feedback.
Customization capabilities have been very important to the design of IBM Workload Deployer going back to the beginning with WebSphere CloudBurst. Having the ability to quickly spin up environments in a cloud really does little good if those environments are not customized according to your needs. That is why the virtual system pattern capability has always included the notion of custom images, custom patterns, and custom scripts. We give you a strong foundation, and you tweak it here and there to create what you want.
Customization is not a concept unique to virtual system patterns. The virtual application model in IBM Workload Deployer supports many different mechanisms for you to tailor your cloud-based environments. You can start with the virtual application pattern types that we ship and use any components in those patterns to build a custom environment. The patterns you build can include your own configuration (within the set of configurable parameters) and include policies that you need for your environment. In looking at just the IBM Workload Deployer Pattern for Web Applications and the IBM Workload Deployer Pattern for Databases, there are quite a number of scenarios you can support with your cloud. However, what happens when you want to go a little further and color outside the lines of what we provide?
At some point you may have heard or read that the entire virtual application pattern model resides on a pluggable architecture. In effect, this means that everything about a virtual application pattern type, from the elements that show up when building a pattern to the management interface you interact with after deployment, is customizable. The fundamental unit of customization for a virtual application pattern type is a plugin. Plugins provide the know-how in terms of installing, configuring, integrating, and managing the application types supported by a given pattern. Plugins also provide metadata that control what users see as they build and manage these patterns. In short, plugins are the source of truth for virtual application patterns!
If you looked in IBM Workload Deployer, you would find the collection of plugins that support the virtual application pattern types shipped with the offering. While that is interesting, you should also know that you can supply your own plugins. That's right. You can develop a plugin, and load it directly into the appliance. This allows you to do two very important things. First, you can extend the virtual application pattern types that come with IBM Workload Deployer with any kind of functionality you deem important. This may be additional monitoring, integration with external systems, or any number of other extensions. Second, you can create new virtual application pattern types that support your desired workloads. You can support the workloads with the software of your choosing so long as you can supply the necessary know-how in your plugins. In either case, you contribute the plugin, and your customized components become first class members of the IBM Workload Deployer landscape.
Okay, so I admit that this is not necessarily news. We have supported user-contributed plugins since the release of IBM Workload Deployer. However, there is something new that significantly lowers the barrier to entry in the custom plugin game. Early last week, IBM announced the IBM Workload Deployer Plugin Development Kit. This kit provides a set of tools and samples designed to make the construction and packaging of custom plugins a simple process. In my opinion, this reiterates our commitment to an extensible, application-centric cloud approach, and it represents a huge step forward for the industry as a whole. Be sure to check this out, and don't be shy with the comments and feedback!
As Joe mentioned in his last post, virtual application patterns are all the rage in IBM Workload Deployer. The high degree of abstraction provided by these patterns means users can remove tedious, time consuming tasks like middleware installation, configuration, and integration from their field of view. As a consequence, users can build and deploy application environments in unprecedented time, thus freeing up more time to focus on the actual application.
This is obviously important because building and deploying application environments are crucial, traditionally time-consuming activities. However, what happens after you build and deploy the application? You manage it, that's what! Joe brought up the fact that IBM Workload Deployer makes this easier too by delivering an integrated management portal through which you can manage and monitor your application environments. Now, this probably already sounds valuable, but what really puts it over the top is that the management portal exposes an interface that is workload aware. But what does that mean?
To get an idea of what that means, consider the case that you use the shipped virtual application pattern to build a simple application environment with a web application and database. You deploy it with IBM Workload Deployer, and your application is up and ready. Now you want to start checking things out. You start by opening the management portal directly from the appliance, and you see both the application and database components listed in the view:
After you have looked at basic machine statistics such as network activity and memory usage, you could move on to a more workload-centric view. For instance, you could examine statistics particular to a web application, such as request counts and service response times:
You may also decide that you want to alter certain aspects of your deployed environment. As an example, you could update your deployed application or change certain configuration data in the deployed environment:
It is important to note that you have a management interface for each of the components in your environment. That means that from the same management interface, you can manage and monitor the database you deployed as part of your environment. For example, at different intervals, you may want to back up your database. You can do this directly from the management portal provided by IBM Workload Deployer:
Lest you think that you can only manage and monitor, this unique management interface is also a one stop shop for all of your troubleshooting needs. From the centralized portal, you can view log and trace data for each component:
Virtual application patterns are an attempt to encapsulate each phase of your application's lifecycle, from creation to deployment to management. In this regard, I hope the above provides a taste of some of the management capabilities provided by virtual application patterns. It truly is the tip of the iceberg!
In a post not long ago, I mentioned new enhancements to virtual system patterns in IBM Workload Deployer. A prominent part of those enhancements was a set of updates to pattern construction that allow you to order virtual machine startup, order script package invocation, and include add-ons that provide system-level configuration options. Recently I uploaded a demonstration to YouTube that highlights some of these new capabilities. Specifically, it provides a brief look at the ordering and add-on enhancements.
I hope you take a look, and even more importantly, I hope to see some feedback. If you have something you would like to see captured in a demo, let me know and I'll work it to the top of a long and continually growing list!
If you are reading this blog then I am pretty sure that you are interested in the agility that can be achieved by rapidly provisioning middleware systems and standing up virtual applications in a private cloud environment. However there are other aspects of agility that you should also consider. One such aspect is the ability to build applications that can be easily maintained, updated, and extended. This is where OSGi technology comes into the picture.
If you have been working with the IBM Workload Deployer (or watching some IBM Workload Deployer demos) you may have noticed a category of components in the virtual application builder called OSGi Components.
Maybe you already know all about OSGi applications and the value they bring to an enterprise. Or, perhaps you noticed this and decided that you would search for some more information on this odd acronym and just what an OSGi application is all about.
In a nutshell, OSGi technology is a way to define dynamic modules for Java. It provides a standard way to encapsulate components (called bundles) with metadata that defines versioned package dependencies, service dependencies, packages exported, services exported, and so on -- basically everything you need to know about a bundle so that it can be connected up with other bundles to support a particular solution. These bundles can then be grouped together into applications and dynamically wired to fulfill necessary dependencies at runtime. The OSGi framework provides all of the necessary capability to manage the dependencies and resolve any problems.
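To give a flavor of that metadata, here is what a bundle's MANIFEST.MF might contain. The package names are made up for illustration; the headers themselves (Bundle-SymbolicName, Export-Package, Import-Package, and so on) are standard OSGi.

```
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.order.service
Bundle-Version: 1.0.0
Export-Package: com.example.order.api;version="1.0.0"
Import-Package: com.example.billing.api;version="[1.0.0,2.0.0)"
```

At runtime the framework resolves the Import-Package statement against whichever bundle exports a compatible version of com.example.billing.api, which is exactly the dynamic wiring described above.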
Those who leverage OSGi technology benefit from improved time-to-market and reduced development costs. The loose coupling provided by the OSGi framework reduces maintenance costs and facilitates the dynamic delivery of components in a running system. Of course there's a lot more to it than just that ... involving portability across different environments, achieving the appropriate level of isolation or sharing within an environment, and integrating with the many different technologies and patterns already available today. I don't think I know enough about OSGi to do it justice here. But fortunately for me (and you) there are several experts who can make it all clear.
One such expert is Graham Charters, and there is a great opportunity to hear him introduce this topic and also participate in a dialogue about the concepts and what they mean for your business. Graham will be leading a Global WebSphere Community Lab Chat on Wednesday of this week (July 20th) entitled "How can OSGi make your enterprise more agile?" Graham is the IBM technical lead in the OSGi Alliance Enterprise Expert Group and an active participant in the open source community implementing many of these standards. So register now for this free session and learn how OSGi can make your enterprise even more agile.
A few weeks ago, I had a conversation with a current WebSphere customer about the potential value they could derive from the use of IBM Workload Deployer. Right away, this customer saw value in the consistency that a patterns-based approach could afford them. It was clear that patterns eliminate the uncertainty that can make its way into even the best-planned deployment processes. Initially though, the customer questioned the value of being able to do fast deployments because, in their words, "We don't deploy WebSphere environments that often." So, we continued our discussion, and then they asked an important question that I encourage all of our users to ask: "Why don't we deploy our WebSphere environments more frequently?"
It is interesting to talk with our WebSphere users that have a long history with our products. Oftentimes, they have been taking a shared approach to WebSphere installations for many, many years. They develop innovative approaches and isolation schemes that allow them to carve up a single WebSphere installation (cell) amongst multiple different application teams. This allows them to avoid having to set up a cell for each application deployment and saves them the associated time. However, having talked to many different users taking this approach, I can say it is not without its challenges.
As was the case with the customer I mentioned above, users typically make trade-offs when electing for larger, shared cells. As an example, if you have multiple different application teams with different types of applications using a single cell, applying fixes and upgrades to that cell can be a lot more complex. After all, you now have to coordinate plans across a number of different teams and find a window that fits all of their needs. For the same reason, trying out incremental function via our feature packs is much more arduous in these types of cells. Additionally, administrative controls become more complex since teams with varying needs all require administrative access. Admittedly, this gets simpler with the newer fine-grained security models in WebSphere Application Server v7 and v8, but it still requires organizational discipline and process.
At this point I should be clear that I am not denigrating the shared cell approach. It can work well, and we have many facilities built into the WebSphere Application Server product to support that model. However, if you are using this approach and you find yourself stumbling too much for your own liking, then I would strongly suggest that you explore the patterns-based approach of IBM Workload Deployer. By deploying patterns that represent your WebSphere cells using IBM Workload Deployer, you can quickly and consistently set up multiple WebSphere Application Server cells to support the varying needs of your application teams. You will still avoid spending an inordinate amount of time installing and configuring cells, as that is an automated part of pattern deployment, and your application teams will still get the resources they need. Further, this can liberate your application teams in terms of how they apply maintenance, install upgrades, and absorb new function in the form of feature packs.
I am not suggesting a complete pendulum swing in your approach to how you manage multiple application environments. There is definitely a happy medium in terms of how many cells you end up with. After all, you do not want to trade in one set of problems for the problem of managing way too many different cells. However, I do think that decomposing monolithic, multi-purpose cells into smaller, more purposeful cells can be beneficial. In the course of thinking about this different approach, you may come to the same conclusion that the customer I mention above did. IBM Workload Deployer's rapid deployment capabilities are indeed valuable if you take a slightly different view of current processes.
In my opinion, declarative deployment models are key to the entire notion of Platform as a Service (PaaS). That is, users should concern themselves with what they want, but not necessarily how to get it. The PaaS system should be able to interpret the requirements the user declares and automatically convert them into a running system. In this respect, I think the new virtual application pattern, and more specifically policies, in IBM Workload Deployer takes a giant leap toward a more declarative deployment model.
In IBM Workload Deployer, policies allow you to 'decorate' your virtual application pattern with functional and non-functional requirements. In other words, they provide a vehicle for you to tell the system what qualities of service you expect for your application environment. To put a little context around this discussion, let's examine the policies available in the virtual application pattern for web applications. Specifically, let's look at the four policy types you can attach to Enterprise Application, Web Application, and OSGi Application components in this pattern:
Scaling policy: When it comes to cloud, the first thing many folks think about is autonomic elasticity. Applications should scale up and down based on criteria defined by the user. Well, that is exactly what the scaling policy lets you do. You simply attach this policy to your application component, and then specify properties that define when to scale. First, you choose a scaling trigger from a list that includes application response time, CPU usage, JDBC connection wait time, and JDBC connection pool usage. After choosing your trigger, you decide the minimum and maximum number of application instances for your deployment, and then you choose the minimum number of seconds to wait for an add or remove action. At this point, you can deploy your application and IBM Workload Deployer will monitor the environment, automatically triggering scaling actions as needed (a small illustration of these properties appears after the policy descriptions below).
JVM policy: I would be willing to bet that nearly all of you tune the JVM environment into which you deploy your applications. The JVM policy allows you to take two common tuning actions, setting the JVM heap sizes and passing in JVM arguments, as well as attach a debugger to the Java process (especially useful in development and test phases). You can also use the policy to enable verbose garbage collection (invaluable to understanding heap usage patterns for your application) and select the bit level (from 32 or 64) for your application. Again, all you have to do is attach the policy and specify the properties. IBM Workload Deployer will take care of the required configuration updates.
Routing policy: The routing policy provides a simple way to specify virtual hostnames and allowable protocols (HTTP or HTTPS) for your application. Attach the policy, provide the virtual hostname you want to use, select the desired protocols, and that's it! Remember, once you set the virtual hostname you will need to update your name server to map the hostname to the appropriate IP address.
Log policy: During the development and test phase, it is likely that you will want to enable certain trace strings in the application runtime. The log policy allows you to provide trace strings for your application, and it makes sure that the appropriate configuration updates occur in the deployed environment.
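Pulling the scaling policy properties together, here is a small illustration of what such a policy expresses. The field names are hypothetical and purely for illustration; in the product you set these values in the virtual application builder rather than writing them out by hand.

```python
# Illustrative only: hypothetical field names, not the product's actual schema.
scaling_policy = {
    "trigger": "cpu_usage",             # or response time, JDBC wait time, pool usage
    "threshold_percent": 80,            # when to consider a scaling action
    "min_instances": 2,                 # never shrink below this
    "max_instances": 10,                # never grow beyond this
    "min_seconds_between_actions": 300  # damping between add/remove actions
}
```

The point of the declarative model is that this is all you state; IBM Workload Deployer watches the deployed environment and takes the add and remove actions for you.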
While this is not an exhaustive explanation of each of the policies above, I hope it gives you a basic idea of what they are and how to use them. To me, declarative deployment models are going to be a crucial part of making PaaS successful, so I am really excited about the notion of policies in IBM Workload Deployer. What do you think?
We've been talking a lot about IBM Workload Deployer V3 and we will continue to highlight different aspects of the capabilities it provides in the coming weeks. As we've already mentioned - IBM® Workload Deployer V3 is not just another release of the IBM WebSphere CloudBurst Appliance. While it builds on WebSphere CloudBurst's success, and supports and improves upon all of its original capabilities, Workload Deployer provides new application-centric computing capabilities for your private cloud, and brings you higher utilization, improved ease of use, and more rapid application deployment.
Among the major features of the new virtual application pattern in IBM Workload Deployer is the notion of elasticity. That is, as your application needs more resources, it gets them. When your application can meet its SLAs with fewer resources, the environment shrinks. With this kind of pattern, you enable elasticity by specifying a policy and defining the scaling trigger (e.g. CPU usage, application response times, database response times). What may have been a bit lost in some of these new announcements regarding IBM Workload Deployer is the fact that you can now leverage this core feature of cloud, elasticity, in your virtual system patterns.
If you have read this blog in the past, you probably already know that the Intelligent Management Pack is an option for virtual system patterns built using WebSphere Application Server Hypervisor Edition. When you enable the Intelligent Management Pack option, you are essentially building and deploying WebSphere Virtual Enterprise (WVE) environments. For those of you not familiar with WVE, the best way to describe it is that it provides you with application and application infrastructure virtualization capabilities. Of its many capabilities, one most germane to our discussion today is the ability for users to attach SLAs to applications and then have WVE automatically prioritize requests and manage resources in order to meet those SLAs. Inherent in this capability is the ability to dynamically start and stop application server processes (JVMs) as required. In other words, WVE provides JVM elasticity.
The fact that WVE provides JVM elasticity is nothing new. Further, IBM Workload Deployer started providing virtual machine (VM) elasticity in previous versions (when it was WebSphere CloudBurst). With this feature, you could add or remove VMs to an already deployed virtual system using dynamic virtual machine operations provided by the appliance. The catch was that the VM elasticity was a manual action and you could not link this elasticity to the same SLAs tied to your applications. Well, thanks to a new feature in WebSphere Virtual Enterprise and easy integration provided by the Intelligent Management Pack, this is no longer the case.
Starting in IBM Workload Deployer 3.0, you can take advantage of a new WVE feature called Elasticity Mode when using the Intelligent Management Pack. Elasticity mode is not unique to IBM Workload Deployer; it is a new concept in the base WVE product. It allows you to define actions for how WVE should grow and shrink the set of nodes used by application server resources. Like the basic JVM elasticity capability in WVE, these node elasticity actions trigger based on the SLAs tied to your applications. Consider the case where you are using elasticity mode and your application is not currently meeting its SLA. If WVE does not think it can start any more application server instances on the current set of nodes, it will grow the set of nodes per your elasticity configuration. Conversely, if WVE detects that it can meet SLAs with fewer nodes, it will shrink the resources per your elasticity configuration.
In IBM Workload Deployer, using elasticity mode becomes even easier. All you need to do is use the Intelligent Management Pack and enable the elasticity mode option in your virtual system patterns. When you do this, you get automatic integration between IBM Workload Deployer and the deployed WVE environment. What does that mean? It means that if WVE detects it needs more nodes, it will automatically call back into IBM Workload Deployer and request that the appliance provision a new VM that will serve as a node for application server processes. It also means that if WVE detects it could meet SLAs with fewer resources, it will call into IBM Workload Deployer and ask it to remove a node. All of this happens without any user scripting. All you have to do is enable this option in your patterns and configure SLAs appropriate for your applications.
To me, this exciting new feature brings out the best of elasticity capabilities in both IBM Workload Deployer and WebSphere Virtual Enterprise. The result is a single management plane that gives you both VM and JVM elasticity for your cloud-based application environments. Best of all, elasticity actions map directly to SLAs for your applications. After all, when it comes to cloud, it's the application that really matters!
As I have mentioned before, IBM Workload Deployer v3.0 introduces choices in pattern-based deployment models. One of those models, virtual system patterns, is a carry over from the WebSphere CloudBurst Appliance. When you use virtual system patterns in IBM Workload Deployer, you can take advantage of all of the techniques you put to use in WebSphere CloudBurst. This is certainly good news for current WebSphere CloudBurst users, but it goes a bit further. Instead of simply maintaining the status quo with virtual system patterns, which would have been reasonable considering the introduction of virtual application patterns, we chose to continue to expand on your customization options for this pattern deployment model. In particular, I want to discuss three new features in IBM Workload Deployer that may help you to better construct and manage virtual system patterns.
The first new feature is one that I have been eagerly awaiting. In the new version of the appliance, we provide you with the ability to specify part and script package ordering in your pattern. This means that, within the virtual system pattern editor, you can tell IBM Workload Deployer in which order to start the virtual machines in your pattern, and you can specify in which order to invoke the script packages within the pattern during deployment. This eliminates the need for special script invocation orchestration logic in your pattern (I had customers resorting to a semaphore like approach using a shared file system), and it allows you to be more declarative about the virtual machine bring-up process. There are constraints, specifically with the part ordering. Some images will impose an implied part start-up order that you cannot change. For instance, deployment manager parts in the WebSphere Application Server Hypervisor Edition image must start before custom node parts. The good news is the pattern editor will not allow you to specify a part start-up order that violates these constraints. The image below shows an example of the ordering view in the virtual system pattern editor.
Another new feature that may influence the way you build virtual system patterns is the introduction of Add-Ons. You can think of Add-Ons as special script packages that you can include in your virtual system pattern that perform system-level configuration actions. Specifically, you can include add-ons in your virtual system pattern to add an operating system user, add a virtual disk, or add a NIC during the deployment process. You include Add-Ons in your pattern by simply dragging and dropping them onto a part in your pattern, just as you do with script packages today. The difference between script packages and Add-Ons is that IBM Workload Deployer will ensure the invocation of all Add-Ons before any other scripts run during deployment. We include default Add-On implementations for adding a user, disk, and NIC.
The last new feature I want to talk about today has more to do with how you manage or govern the deployment of virtual system patterns. In WebSphere CloudBurst 2.0, we introduced the idea of Environment Profiles as a way to extend your customization reach into the deployment process. Initially, these profiles gave you the ability to directly assign IP addresses to virtual machines in your deployment, declaratively specify virtual machine naming formats, and easily split a single pattern deployment across multiple cloud groups. In IBM Workload Deployer, you will be able to use these same profiles to set resource consumption limits for pattern deployments. In particular, you will be able to set cumulative limits for virtual CPU, memory, storage, and software licenses used by deployments tied to a specific profile, thereby giving you finer-grained control over cloud resource consumption. The picture below shows the new resource limit aspects of environment profiles.
Virtual system patterns are key in the deployment model choices for IBM Workload Deployer. Not only did we carry the concept over from WebSphere CloudBurst to IBM Workload Deployer, but we made it even better. Expect this trend to continue!
More and more, I am getting questions about how to bring existing WebSphere environments into IBM Workload Deployer. While "bringing in an environment" can mean any number of things, let's take it to mean that a user wants to import their existing WebSphere cells, applications, and configuration into IBM Workload Deployer as a pattern they can subsequently deploy. While there may not be a big red easy button in the appliance that lets you point to an existing environment and import it, there are a couple of techniques that one can employ. I have covered both techniques before, but since I'm getting the question with increasing frequency, I felt like it was time for a recap.
The first option is to use a combination of IBM Workload Deployer and Rational Automation Framework for WebSphere (RAFW). This is a use case I have spoken about numerous times at conferences and in blog posts and articles. In fact, you can read a little about it here. RAFW provides excellent capabilities to point at an existing cell and import everything about it. This includes WebSphere configuration, applications, shared libraries, and more. Once imported as a RAFW project, you can use the IBM Workload Deployer integration script package provided by RAFW to replay that configuration on top of deployments created by the appliance.
The second option is something I talk about a little less frequently. This option revolves around the use of a sample script (provided for free in our samples gallery) that you can run against existing WebSphere cells. The invocation of this script produces IBM Workload Deployer script packages that you can use in patterns to apply the configuration of the target cell to your new cloud-based deployments. Under the covers, the utility script and the resultant script packages use backupConfig and restoreConfig, respectively. They do ensure the update of the cell, node, and host names during the restoreConfig execution (which happens automatically during pattern deployment). Beyond that, the use of the script is subject to the same limitations and rules in place for the use of the backupConfig and restoreConfig commands. You can read more about this capability, watch it in action, and download it for free.
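Conceptually, the flow maps to the standard WebSphere commands shown below. This is only a sketch of the idea; the sample script and the generated script packages take care of the invocation and the cell/node/host name updates for you, and the install paths here are placeholders.

```python
# Illustrative sketch of the backupConfig/restoreConfig flow (paths are placeholders).
import subprocess

# On the existing cell: capture the configuration into an archive without stopping servers.
subprocess.check_call([
    "/opt/IBM/WebSphere/AppServer/profiles/Dmgr01/bin/backupConfig.sh",
    "/tmp/cellConfig.zip", "-nostop"])

# In the generated script package, at pattern deployment time: restore the archive into
# the freshly provisioned profile; the cell, node, and host names are then updated to
# match the new deployment.
subprocess.check_call([
    "/opt/IBM/WebSphere/AppServer/profiles/Dmgr01/bin/restoreConfig.sh",
    "/tmp/cellConfig.zip", "-nostop"])
```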
I hope this is all useful information for those of you looking for ways to import existing environments into IBM Workload Deployer as patterns. If you have any questions, please let me know!
WebSphere configuration management practices are a common topic that comes up when I am talking with users about IBM Workload Deployer (formerly WebSphere CloudBurst). This conversation can go down so many different avenues that it is hard to capture all of them in a short blog post. So, for the sake of this post, let's consider two facets of WebSphere configuration management. The first facet is addressing the need to consistently arrive at the same configuration across multiple deployments of a given WebSphere environment. The second facet involves managing the configuration of a deployed environment over time to protect against drift. What is the best way to tackle these two challenges? Well, it comes down to picking the right tool for the job.
When it comes to ensuring consistency of initial WebSphere configuration from deployment to deployment, there is really no better means than the patterns-based deployments enabled by IBM Workload Deployer. Whether you are using a virtual system or virtual application pattern, the bottom line is that you are representing your middleware application environments as a single, directly deployable unit. When you need to stand that environment up, you simply deploy the pattern. The deployment encapsulates the installation, configuration, and integration of the environment, and your applications if you so choose. The benefit of this approach is that once you get your pattern nailed down, you can be extremely confident that the initial configuration of your environments is consistent from deployment to deployment. Basically, no more bad deployments because someone forgot to run configuration step 33 out of 100!
Because we talk about the benefits of consistency provided by our IBM Workload Deployer patterns, users often ask what IBM Workload Deployer does in terms of configuration governance for deployed environments. In other words, they ask how IBM Workload Deployer helps them to track configuration changes or compare the configuration of a deployed environment to a known good one. The honest answer is that this is a bit beyond the functional domain of the appliance. While IBM Workload Deployer does allow you to manage the deployed environment (apply fixes, update deployed applications, snapshot, etc.), it does not layer some of the common configuration governance concerns on top of that. However, there is a good reason why the appliance does not focus on that. It's because Rational Automation Framework for WebSphere does!
If you find yourself wanting to actively track configuration changes, periodically (and automatically at specified intervals) compare configuration changes to a 'golden' baseline, import the configuration of a known good environment, or apply common configuration across a number of cells, then the capabilities of RAFW would likely be of interest to you. It can do all this and give you an incredible toolbox of out-of-the-box application deployment and configuration capabilities for WebSphere environments. In my mind, for those that spend a good deal of time dealing with WebSphere configuration, whether it be deploying applications, configuring containers, or debugging inadvertent changes, an examination of RAFW functionality is a must.
Now it is time for a bit of disclaimer/clarification. I am not suggesting that you pick one or the other when it comes to IBM Workload Deployer and RAFW. In fact, there are many scenarios where 1+1=3 with these two solutions, and I have written about it many, many times (including this article). That said, I think it is important to highlight the relative strengths of each product, so that it is easier to map it back to your pain points. In honesty, many of the users I talk with have challenges in getting the initial configuration right AND managing it over time. That kind of problem beckons for the integrated IBM Workload Deployer/RAFW solution.
Of course, technology only gets you so far when it comes to these kinds of problems. It would be disingenuous of me to suggest otherwise. It has always been and will continue to be important to establish clear and rigorous processes around the way you deploy, manage, and change environments. This just gives you an idea of some of the tools you can leverage to aid in the implementation of those processes.
One of the fundamental tenets of IBM Workload Deployer is a choice of cloud deployment models. Starting in v3.0, users will be able to deploy to the cloud using virtual appliances (OVA files), virtual system patterns, or virtual application patterns. The ability to provision plain virtual appliances is a way to rapidly bring your own images, as they currently exist, into the provisioning realm of the appliance. As such, I think the use cases and basis for deciding to use this deployment model are fairly evident. However, when comparing the two patterns-based approaches, virtual system patterns and virtual application patterns, the decision requires a bit more scrutiny.
Our pattern approach is a good thing for you, the user. Basically, when we refer to patterns in the context of cloud, we are referring to the encapsulation of installation, configuration, and integration activities that make deploying and managing environments in a cloud much easier. Regardless of what kind of pattern you end up using, you benefit from treating a potentially complex middleware infrastructure environment or middleware application as a single atomic unit throughout its lifecycle (creation, deployment, and management). In turn, you benefit from decreased costs (administrative and operational) and increased agility via rapid, meaningful deployments of your environments. That said, it is imperative to understand the differences between virtual system and virtual application patterns, and more importantly, it is important to understand what those differences mean to you. Let's start by considering the admittedly simple 'Cloud Tradeoff' continuum below.
In the above graph, the X-axis represents the degree to which you have customization control over the resultant environment. The degree of control gets lower as we move from left to right. The left Y-axis represents total cost of ownership (TCO), which decreases as we move up the axis. The right Y-axis represents time to value, which similarly decreases as we go up the axis. Naturally, enterprises want to move up the Y-axis, but, and it can be quite a big but, they are sometimes hesitant to relinquish much control (move to the right on the X-axis) in order to do so. In that light, I think it helps to explore our two patterns-based approaches a bit more.
The most important thing to understand about this continuum is that the X-axis really represents the customization control ability from the point of view of the deployer and consumer of the environment. An example is probably the best way to explain. Let's consider a fairly simple web service application that we want to deploy to the cloud. If we were to use a virtual system pattern to achieve this, we would probably start by using parts from the WebSphere Application Server Hypervisor Edition image to lay out our topology. We may have a deployment manager, two custom nodes, and a web server. After establishing the topology, we would add custom script packages to install the web service application and then configure any resources the application depended on. Users that wanted to deploy the virtual system pattern would access it, provide configuration details such as the WAS cell name, node names, virtual resource allocation, and custom script parameters, and then deploy. Once deployed, users could access the environment and middleware infrastructure as they always have. That means they could run administrative scripts, access the administrative console provided by the deployed middleware software, and do anything else one would normally do. The difference in using virtual system patterns is not necessarily the operational model for deployed environments (though IBM Workload Deployer makes some things, like patching environments, much easier). Instead, the difference is primarily in the delivery model for these environments.
Using a virtual application pattern to support the same web service application results in a markedly different experience from both a deployment and management standpoint. In using this approach, a user would start by selecting a suitable virtual application pattern based on the application type. This may be one shipped by IBM, such as the IBM Workload Deployer Pattern for Web Applications, or it may be one created by the user through the extensibility mechanisms built into the appliance. After selecting the appropriate pattern, a user would supply the web service application, define functional and non-functional requirements for the application via policies, and then deploy. The virtual application pattern and IBM Workload Deployer provide the knowledge necessary to install, configure, and integrate the middleware infrastructure and the application itself. Once deployed, a user manages the resultant application environment through a radically simplified lens provided by IBM Workload Deployer. It provides monitoring and ongoing management of the environment in a context appropriate for the application. This means that there are typically no administrative consoles (as in the case of the virtual application pattern IBM ships), and users can only alter well-defined facets of the environment. It is a substantial shift in the mindset of deploying and managing middleware applications.
Okay, with that explanation in the bag, let's revisit the diagram I inserted above. I hope it's clear that, all things being equal, virtual application patterns indeed provide the lowest TCO and shortest time to value because of the degree to which they encapsulate the steps involved in setting up complex middleware application environments. So, let's get back to my assertion that the customization control continuum really applies to the deployer and consumer. Why do I say that? It's simple. In the case of either the virtual system pattern or the virtual application pattern, the pattern composer has quite a bit of liberty in how they construct things. Sure, we enable you right out of the chute by shipping pre-built, pre-configured IBM Hypervisor Edition images, as well as pre-built virtual system and virtual application patterns. The key, though, is that IBM Workload Deployer's design and architecture also enables you to build your own patterns -- be they the virtual system or virtual application type. With anywhere from a little to a lot of work, you can build virtual system and virtual application patterns tailored to your use cases and needs.
At this point, you may be saying, "Well now you have really confused things! How am I supposed to decide what kind of patterns-based approach fits my needs?" I have some advice in that regard. First, map your needs to things that we enable with the assets you get right out of the box with IBM Workload Deployer. If your application fits into the functional scope of one of the virtual application patterns that we ship, use it. If you can support the application by using IBM Hypervisor Edition images, virtual system patterns, and custom scripts, do it. In this way, you benefit most from the value offered by IBM Workload Deployer. However, if you find that you cannot use any of the assets we provide right out of the box (e.g. you want to deploy your environment on software not offered in IBM Hypervisor Edition form or in a virtual application pattern), then ask yourself one simple question: "What do I want my user's experience to be?"
In this sense, I primarily mean a user to be a deployer or consumer of your patterns. You need to decide whether you favor the middleware infrastructure centric approach afforded by virtual system patterns, or if you prefer the application centric approach proffered by virtual application patterns. There is no way to answer this generically for all potential IBM Workload Deployer users. Instead, you have to look at your use case, understand what's available to help you accomplish that use case, and finally, decide on what you want your user's experience to be. I hope this helps!
When one uses IBM Workload Deployer (previously WebSphere CloudBurst) to deploy a virtual system pattern, they benefit from a completely automated deployment process. The automation includes the creation and placement of virtual machines, injection of IP addresses, initiation of internal processes, and invocation of included scripts. Most of these processes are straightforward and require little more than a brief overview. However, the placement of virtual machines stands out, and its inner workings are the subject of quite a few questions when I discuss the appliance. With that in mind, I thought I would provide a little more information on how the placement algorithm in IBM Workload Deployer works.
The placement subsystem in IBM Workload Deployer considers three primary elements: compute resources, high availability, and license optimization. The availability of compute resources is the gating factor for placement. That means IBM Workload Deployer first looks at the available CPU, memory, and storage resources in the collection of hypervisors making up the cloud group(s) you are targeting for deployment. If a particular hypervisor cannot provide enough resources for the amount you requested in your deployment, it is automatically removed from the pool of eligible hosts. It is important to note that IBM Workload Deployer will overcommit CPU, and it will overcommit storage if you direct it to do so. It will not overcommit memory because that could severely degrade the performance of the application(s) running in the virtual machines.
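To make the gating idea concrete, here is a minimal sketch of how such a resource filter might look. This is not the appliance's actual implementation -- the function name, the dictionary fields, and the overcommit ratio are all illustrative assumptions that simply mirror the rules described above (CPU may be overcommitted, storage only if you allow it, memory never).

    def eligible_hypervisors(hypervisors, request, cpu_overcommit=2.0,
                             allow_storage_overcommit=False):
        """Filter out hypervisors that cannot satisfy the requested resources."""
        eligible = []
        for hv in hypervisors:
            cpu_ok = request["cpus"] <= hv["free_cpus"] * cpu_overcommit
            mem_ok = request["memory_mb"] <= hv["free_memory_mb"]   # memory is never overcommitted
            if allow_storage_overcommit:
                storage_ok = True                                   # caller accepts the risk
            else:
                storage_ok = request["storage_gb"] <= hv["free_storage_gb"]
            if cpu_ok and mem_ok and storage_ok:
                eligible.append(hv)
        return eligible

Any hypervisor that fails one of the three checks simply drops out of the candidate pool before the later stages run.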
After choosing the pool of hypervisors that are capable of hosting the virtual machines in your deployment from a compute resource perspective, the appliance then considers high availability. To better understand this particular placement stage, let's consider an example. Suppose you are deploying a pattern based on WebSphere Application Server Hypervisor Edition that contains two custom node parts. It is conceivable, and in fact likely, that these two custom node parts will host members of the same cluster, and thus the two nodes will support the same applications. As such, IBM Workload Deployer will attempt to place the two custom nodes on different physical machines in order to prevent a single point of failure. Of course, this depends on having two hypervisors with enough resources (CPU, memory, storage) to host the virtual machines, but the appliance makes that decision in the first placement stage.
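For the visually inclined, here is a rough, hypothetical sketch of that spreading behavior. The part and host structures are made-up stand-ins; the point is simply that parts playing the same role (such as the two custom nodes in the example) prefer hosts that are not already serving that role.

    from collections import defaultdict

    def spread_for_availability(parts, eligible_hosts):
        """Assign each part to a host, preferring hosts that do not already
        run another part with the same role. Returns {part_name: host}."""
        used_by_role = defaultdict(set)   # role -> hosts already serving that role
        placement = {}
        for part in parts:                # part = {"name": ..., "role": ...}
            candidates = [h for h in eligible_hosts
                          if h not in used_by_role[part["role"]]]
            # Fall back to co-location only if no separate host remains.
            host = candidates[0] if candidates else eligible_hosts[0]
            placement[part["name"]] = host
            used_by_role[part["role"]].add(host)
        return placement

Again, this is just a sketch under stated assumptions, not the appliance's algorithm, but it captures the anti-single-point-of-failure intent.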
After considering compute resources and high availability, IBM Workload Deployer moves to the last stage of placement: license optimization. In this stage, the placement subsystem attempts to place the virtual machines on hypervisors in a way that minimizes the licensing cost to you. The appliance can do this because it is aware of IBM virtualization licensing rules and takes those into account during this stage (if you aren't familiar with virtualization licensing rules and you are curious, ask your sales representative to explain them sometime). During this stage, it will not violate any resource overcommit directives or rules in place, nor will it compromise system availability, but it will seek to minimize costs within these parameters.
At this point, I should make something clear that may already have occurred to you. You can override most of these placement rules by creating a cloud group containing only one hypervisor. In this case, IBM Workload Deployer will put all virtual machines on the single hypervisor until it runs out of compute resources (memory is likely to be the constraining factor). I would not suggest that you do this unless you have a good reason or you are in a simple pilot phase, but I do like to point out the art of the possible!
While not incredibly deep from a technical perspective, I hope this post provided a few helpful details on what goes on during the placement stages of deployment. If you have any questions, do let me know.
One of the key benefits of WebSphere CloudBurst adoption is rapid -- seriously fast -- deployments of middleware application environments. Our users are leveraging the appliance to bring up enterprise-class middleware environments in mere minutes. If you know a little bit about WebSphere CloudBurst, that statistic may be a little surprising considering the appliance dispenses large virtual images over the network to a farm of hypervisors. You may ask how the appliance can achieve such rapid deployments given the sheer physics involved in transferring large amounts of data over a network. The simple answer, of course, is caching!
WebSphere CloudBurst creates a cache for each unique virtual image on datastores associated with the hypervisors in your cloud. On subsequent deployments of the same virtual image to the same datastore, WebSphere CloudBurst does not need to transfer the image over the wire. It simply uses the virtual disks that are in the cache on the datastore. In the context of the virtual image cache, the deployment process goes something like this:
WebSphere CloudBurst identifies the images necessary to deploy the pattern selected by the user.
WebSphere CloudBurst identifies the hypervisors and associated datastores that will host the virtual machines created during deployment.
WebSphere CloudBurst checks the selected datastores to see if they already have caches for the images it will be deploying. From here, one of two things happens:
WebSphere CloudBurst detects that there is no cache on the datastore and transfers the images over to the hypervisor, thereby creating the cache on the underlying datastore.
WebSphere CloudBurst detects that there is a cache on the selected datastore and uses that cache in lieu of transferring the disk over the wire.
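If it helps to see the decision at the end of that flow written out, here is a tiny, hypothetical sketch. The cache index and transfer function are stand-ins for what the appliance tracks internally; they are not part of any actual API.

    def ensure_image_on_datastore(image_id, datastore, cache_index, transfer_image):
        """Reuse cached virtual disks if present; otherwise transfer the image
        over the wire and remember it for future deployments."""
        if image_id in cache_index.get(datastore, set()):
            return "cache hit"                       # no transfer needed
        transfer_image(image_id, datastore)          # first deployment pays the transfer cost
        cache_index.setdefault(datastore, set()).add(image_id)
        return "cache miss"

The first deployment of an image to a given datastore pays the transfer cost; every subsequent deployment of that image to that datastore gets the fast path.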
The process may sound complicated, but it is completely hidden from you, the user. You do not need to know how the cache works since WebSphere CloudBurst handles all of these interactions. So, why am I telling you all of this then? As a WebSphere CloudBurst user, it is good to be aware of the cache for two main reasons. First, you need to account for the storage space the cache needs when doing capacity planning for your WebSphere CloudBurst cloud. Second, anytime you upload or create a new image through extend and capture, I would strongly suggest you proactively prime the cache for this new image. You can do this by simply deploying a pattern built on the image to each unique hypervisor/datastore combination in your environment. This may require a temporary rearrangement of cloud groups, but it is a simple process, and it guarantees rapid deployments for all users of the new image.
I hope this sheds a little light on a subject we do not discuss too often. As always, if you have any questions, do not hesitate to let me know!
In previous posts, I have discussed the integration capability between WebSphere CloudBurst and Tivoli Service Automation Manager. Most recently, I discussed this in the context of integrating WebSphere and IBM CloudBurst. Today, I am happy to announce the publication of an article I co-wrote with Marcin Malawski from TSAM development on the subject of this integration.
If you are a WebSphere user interested in a holistic approach in building out a private cloud, I strongly recommend that you check the article out. If you are currently an IBM CloudBurst, IBM Service Delivery Manager, or Tivoli Service Automation Manager user and you provision a significant number of WebSphere environments, I strongly recommend that you check the article out. In fact, regardless of your current situation, do me a favor and check the article out!
As always, I look forward to feedback and comments. Good, bad, or indifferent. You can leave your comments here or on the article page. I look forward to hearing from you!
Are you planning on attending IBM IMPACT? For all of you WebSphere users out there, I sure hope the answer to that is yes. Simply put, there is no better event to attend in order to learn about everything going on across the IBM WebSphere portfolio. There is already an exciting array of technical sessions lined up that will touch on topics such as analytics, big data, cloud computing, elastic data caching, enhanced automation, business process management, hybrid clouds, and much more. In addition to the sessions, you will also get plenty of chances to work directly with various WebSphere solutions in our labs. For my fellow hands-on learners, this is a great opportunity.
We have technical sessions and labs every year at IMPACT though, so you may be wondering: besides the content, what's new? Well, in that regard I'm excited to point out that this year's IBM IMPACT will host the first ever WebSphere Unconference! For those of you unfamiliar with the unconference format, the basic idea is that a group of folks with common interests (cloud computing, big data, etc.) get together and drive a series of conversations/sessions around these topics of interest. The unconference will include lightning talks (5 minutes to make a point!), guest appearances, as well as user-driven breakout sessions. These events are a great way to network with fellow WebSphere users, discuss cool technology, explore common challenges, and otherwise expand your areas of interest and knowledge.
The official website of the WebSphere Unconference recently launched in order to support this event. Besides giving you some details about the unconference, the website allows you to submit your own ideas for breakout sessions. All you need to do is sign up on the site, and you can start to put forth your own topic ideas. You can vote for the different sessions that interest you, and the top vote getters will earn a spot during the unconference.
I encourage you to go and have a look at the WebSphere Unconference website. It is easy to sign up and easy to use. If you are heading to IMPACT, please make time to at least drop by the unconference on Thursday. I promise you will benefit from engaging, attendee-led conversation and discussion. I hope to see you there!
One of my favorite books from childhood is If You Give a Mouse a Cookie. Although targeted at children, the book illustrates a frequently occurring human behavior that is important for all of us to understand. That behavior is the tendency for escalating expectations. The book offers this up by starting out with the simple action of giving a mouse a cookie. The mouse in turn asks for a glass of milk, various flavors of cookies, and on and on, until the mouse circles back to asking for another cookie.
Nearly all of us exhibit this same kind of behavior, and it can often produce positive results. In particular, in IT we always push for the next best thing or a slightly better outcome. Personally, I am no stranger to this behavior because I experience it from WebSphere CloudBurst users quite frequently. In these cases, it usually revolves around one particular outcome: speed of deployment.
Bar none, users of WebSphere CloudBurst are experiencing unprecedented deployment times for the environments they dispense through the appliance. The fact that we say you can deploy meaningful enterprise application environments in a matter of minutes is far beyond just marketing literature. Our users prove it every day. However, just because they are deploying things faster than ever does not mean they are content to rest on those achievements. They want to push the envelope, and I love it.
For our users looking to achieve even speedier deployment times, I offer up one reminder and one tip. First, analyze all of your script packages to ensure you are using the right means of customization. If you have some scripts that run for considerably longer than most other script packages, you may want to at least consider applying that customization by creating a custom image. You still need to adhere to the customization principles outlined here, but you may benefit from applying the customization in an image once and avoiding the penalty for applying it during every deployment. You may also be able to break this customization out with a combination of a custom image and script packages. For instance, instead of having a script that installs and configures monitoring agents, you may install the agents in a custom image and configure them during deployment. Being selective about how and when you apply customizations can go a long way in improving your deployment times.
In addition to the reminder above, I also have a tip. Take a look at all of the script packages you use in pattern deployments and look to see if there are any that you can apply in an asynchronous manner. In other words, identify customizations that need to start, but not necessarily complete as part of the deployment process. Going back to our example of configuring monitoring agents during the deployment process, it may be important to kick off the configuration script during deployment, but is it crucial to wait on the results? Maybe not. If it is not, consider defining the executable argument in your script package in a manner that kicks off the execution and proceeds -- i.e. nohup executable command &. This approach can save deployment time in certain situations.
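If you prefer to wrap the fire-and-forget behavior in something a little more structured than an inline nohup, here is an illustrative sketch of what such a wrapper might look like. The script path and log location are hypothetical examples, not anything shipped with the product; the idea is simply to start the long-running configuration and return without waiting on it.

    import subprocess

    def kick_off_async(config_script="/opt/scripts/configure_monitoring_agents.sh"):
        """Start the configuration step in the background and return immediately,
        so the overall pattern deployment is not held up waiting on it."""
        with open("/tmp/configure_agents.log", "w") as log:
            subprocess.Popen(["/bin/sh", config_script],
                             stdout=log, stderr=subprocess.STDOUT)
        # Deliberately no wait() or communicate() -- deployment proceeds while
        # the configuration script runs on its own.

Whether you use a wrapper like this or the inline nohup form, the effect is the same: the deployment does not block on work that can safely finish later.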
My advice to users of WebSphere CloudBurst: keep pushing your deployment process! Pare as many minutes off the process as you can. I hope that the tips above help in that regard, and be sure to pass along other techniques that you have found helpful.
One of my favorite things to do with users or potential users of WebSphere CloudBurst is to help them understand how they can construct a custom environment using the appliance. Typically, we take one of their existing application environments and discuss the configuration steps that contribute to its makeup. From there, we map the required configuration actions to different customization capabilities in the appliance. It is one thing to talk about how you can customize every layer of your application stack with WebSphere CloudBurst; it is quite another to talk about it in the context of an existing environment. This exercise usually serves to greatly enhance a user's understanding of how to construct tailored environments with the appliance.
While I cannot take every one of you through this exercise in the context of one of your own application environments, I can propose a scenario that will help to illustrate the WebSphere CloudBurst customization process. Consider that I want to deploy a clustered WebSphere Application Server environment whose application server instances utilize WebSphere DataPower XC10 for HTTP session management. In order to deploy such an environment, I would need to do the following:
Install an OS and WAS
Install the WebSphere eXtreme Scale Client binaries - required for integration
Construct a clustered cell
Augment profiles with WebSphere eXtreme Scale profile templates
Configure the application server instances to use WebSphere DataPower XC10 for session management
So those are the steps, but how do they map to WebSphere CloudBurst? First, I know that the WebSphere Application Server Hypervisor Edition image used by WebSphere CloudBurst encapsulates the installation of the OS and WAS. I also know that WebSphere CloudBurst will automatically construct the clustered cell during the deployment process. That means I need to address the installation of client binaries, augmentation of profiles, and configuration of application server instances. In order to do this, I will use a combination of image extension and custom script packages.
To get started, I extend an existing WebSphere Application Server Hypervisor Edition image and simply install the WebSphere eXtreme Scale Client binaries. I then capture that image and store it as my own unique image in the WebSphere CloudBurst catalog. Now, you may wonder why I did not capture the profile augmentation in the custom image. Remember, you cannot change profile configuration during the extend and capture process as WebSphere CloudBurst resets the profiles as part of capturing the custom image.
My custom image encapsulates the installation of the client binaries, so now I turn to custom script packages. I need two in this case. One script package will augment a profile (either deployment manager or custom node) with the WebSphere eXtreme Scale profile template. The second script package will configure application server instances to use WebSphere DataPower XC10 for HTTP session management. Once done with these script packages, I have all the assets I need to build my target environment.
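To give you a feel for the first of those two script packages, here is a hypothetical sketch, written in Python purely for illustration. The install root, profile name, and template directory are assumptions you would adjust to your own image layout; the underlying manageprofiles augmentation command is the standard WebSphere Application Server mechanism, but treat the exact paths here as placeholders.

    import subprocess

    WAS_HOME = "/opt/IBM/WebSphere/AppServer"          # assumed WAS install root
    PROFILE_NAME = "DefaultDmgr01"                     # assumed profile name in the image
    XS_TEMPLATE = "/opt/IBM/WebSphere/eXtremeScale/profileTemplates/xs_augment"  # assumed template path

    def augment_profile():
        """Augment an existing profile with the eXtreme Scale profile template."""
        subprocess.run(
            [f"{WAS_HOME}/bin/manageprofiles.sh",
             "-augment",
             "-profileName", PROFILE_NAME,
             "-templatePath", XS_TEMPLATE],
            check=True,   # fail loudly if the augmentation does not succeed
        )

The second script package, which points the application servers at WebSphere DataPower XC10 for HTTP session management, would follow the same pattern: a small script that drives the appropriate configuration commands against the freshly deployed environment.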
Using my custom image, I build a pattern that contains the number and kind of WebSphere Application Server nodes that I want. I use the advanced options to define a WebSphere Application Server cluster, ensuring its creation happens during deployment. Next, I drag and drop the profile augmentation script onto the deployment manager and custom node parts in my pattern. Finally, I drag and drop the WebSphere DataPower XC10 configuration script onto the deployment manager. The pattern is now ready to deploy!
For those of you that are visual learners like me, this demonstration provides a nice overview of exactly what I wrote about above. Check it out and let me know what you think.
The majority of my posts on this blog address using various features of WebSphere CloudBurst to build private cloud computing environments. Today though, I want to switch gears and instead of talking private cloud, let's talk public cloud. Specifically, let's take a look at the capabilities and services delivered via the IBM Smart Business Development and Test on the IBM Cloud (hereafter referred to as the IBM Cloud).
For some of you, the fact that IBM has a public cloud offering may be a little surprising. After all, if you listen to some uninformed critics you may hear that IBM only cares about private clouds for large enterprises. That is simply untrue. The IBM Cloud is an Infrastructure as a Service public cloud that delivers rapid access to services hosted on IBM infrastructure via a self-service web portal. The IBM Cloud offers multiple payment options, including usage-based billing and reserved capacity billing, and even features a cost estimator so you can confidently establish a monthly budget for your usage.
Regardless of whether you use a private or a public cloud, security should always be a chief concern. As such, IBM takes security very seriously in the IBM Cloud. The infrastructure that constitutes the cloud is subject to internal IBM security policies that include regular security scans and tight administrative governance. Your data and virtual machines stay in the data center to which you provisioned them, and physical security policies match those of internal IBM data centers. Additionally, you can optionally make use of the virtual private network option to isolate access to the virtual machines that you provision on the IBM Cloud. Rest assured that security in the IBM Cloud was a guiding design principle and not an afterthought.
With the basics out of the way, let's get on to the question I'm sure you have: What can I run on the IBM Cloud? To get you started, the IBM Cloud provides a nice list of public images in its catalog that are ready for you to provision. These images include WebSphere Application Server, WebSphere sMash, DB2, WebSphere Portal Server, IBM Cognos Business Intelligence, Tivoli Monitoring, Rational Build Forge, and many more. In addition to the public images provided by the IBM Cloud, you can build your own private images. Private images allow you to start with a base public image and then customize it by adjusting the configuration or installing new software. Once customized, you can store these private images on the IBM Cloud and provision them whenever needed. Whether you are using public or private images, you have a number of server configurations to choose from in order to host your environments.
While very brief, I hope this overview provides you with some of the more important details regarding the IBM Cloud. There are few, if any, service providers out there with the enterprise expertise of IBM, and I think you see that reflected in the IBM Cloud. If you are looking at public cloud options for your enterprise application environments, you should definitely take a closer look at the IBM Cloud.
In the course of my job, I am lucky to be able to work with both enterprise users and business partners who are adopting and using the WebSphere CloudBurst Appliance. When it comes to the business partner camp, one of my absolute favorites is the Haddon Hill Group. The Haddon Hill Group is an IBM Premier Business Partner, and they have been an early adopter and vocal advocate of the WebSphere CloudBurst Appliance. They have extensive knowledge of the use of the appliance in enterprise accounts, and quite frankly, they are doing some really cool things with WebSphere CloudBurst.
Given the above, I was glad to see summarized results from their various implementations made available recently on the IBM site. The summary is fairly concise, so I encourage you to take a look at the results Haddon Hill Group is getting with WebSphere CloudBurst.
I am not going to rehash the contents of the results here, but there are a couple of things I want to call out. First off, Haddon Hill Group says that WebSphere CloudBurst can provide companies with a '100 times faster time to market' delivery experience. In a practical sense, this means reducing the amount of time to deliver WebSphere environments from 40-60 days on average to just hours. That is an eye-opening data point!
The other thing I want to call out here is a quote from Phil Schaadt, President and CTO, Haddon Hill Group. I have had the pleasure of working with Phil and team, and I have heard him echo these same sentiments many times:
"The important thing about the IBM WebSphere CloudBurst Appliance is that it will dispense a WebSphere Application Server image onto your WebSphere Application Server environment or private cloud along with other products within the WebSphere stack, and that application server will be ready in a few minutes. You can do it in a clustered environment, and you can even roll out IBM WebSphere Process Server and get it right in a fully clustered environment with a database connection, in about 90 minutes. You can also easily manage all the configurations of IBM WebSphere Process Server that you need. All the steps that took up so much time and effort on the part of IT staff have been removed. The savings for companies with large WebSphere implementations can be in the millions."
It is always great to see clients putting our technology to use to produce tangible business value. Again, I encourage you to take a look at these reports. As always, I am eager to hear what you think, so leave me a comment or reach out to me on Twitter @damrhein.