Are you planning on attending IBM IMPACT? For all of you WebSphere users out there, I sure hope the answer to that is yes. Simply put, there is no better event to attend in order to learn about everything going on across the IBM WebSphere portfolio. There is already an exciting array of technical sessions lined up that will touch on topics such as analytics, big data, cloud computing, elastic data caching, enhanced automation, business process management, hybrid clouds, and much more. In addition to the sessions, you will also get plenty of chances to work directly with various WebSphere solutions in our labs. For my fellow hands-on learners, this is a great opportunity.
We have technical sessions and labs every year at IMPACT though, so you may be wondering: besides the content, what's new? Well, in that regard I'm excited to point out that this year's IBM IMPACT will host the first-ever WebSphere Unconference! For those of you unfamiliar with the unconference format, the basic idea is that a group of folks with common interests (cloud computing, big data, etc.) get together and drive a series of conversations and sessions around those topics. The unconference will include lightning talks (5 minutes to make a point!), guest appearances, and user-driven breakout sessions. These events are a great way to network with fellow WebSphere users, discuss cool technology, explore common challenges, and otherwise expand your areas of interest and knowledge.
The official website of the WebSphere Unconference recently launched in order to support this event. Besides giving you some details about the unconference, the website allows you to submit your own ideas for breakout sessions. All you need to do is sign up on the site, and you can start to put forth your own topic ideas. You can vote for the different sessions that interest you, and the top vote getters will earn a spot during the unconference.
I encourage you to go and have a look at the WebSphere Unconference website. It is easy to sign up and easy to use. If you are heading to IMPACT, please make time to at least drop by the unconference on Thursday. I promise you will benefit from engaging, attendee-led conversation and discussion. I hope to see you there!
IBM Workload Deployer V3.1 firmware has been released and is available for download. V3.1 brings many improvements and enhanced features, building upon the solid foundation laid in V3.0 and in earlier releases of the WebSphere CloudBurst Appliance (WCA). Dustin already alluded to a few of these in his previous post, but let me list again here some of the more prominent new features:
The groundbreaking capabilities offered in our Virtual Application Patterns have been extended to include deployments for AIX on PowerVM, giving you more choices for your private cloud environment. Along with this support, a new base operating system image for AIX is also available for extension using either extend and capture or the IBM Image Construction and Composition Tool. Of course, Virtual System Patterns continue to be supported on all three private cloud hypervisors: VMware, PowerVM, and z/VM.
A new version of the Web Application Pattern Type (formerly the WebApp Pattern Type) has been released. Web Application Pattern Type V2.0 is built upon the feature-rich WebSphere Application Server V8.0 release.
The DBaaS Pattern Type has been updated and is now IBM Database Patterns 1.1, which includes both the IBM Data Mart Pattern 1.1 and the IBM Transactional Database Pattern 1.1 (OLTP, the default). These pattern types support a broader range of offerings for both production and non-production use. You can choose to create a new type of workload standard to apply to the DB instance, or you can clone an existing DB image that has been backed up to your DB image catalog repository.
A number of improvements have been made to the shared services leveraged by Virtual Application patterns. The caching service used to persist session data when scaling a web application can itself now be configured to scale, adjusting to increased demand. We have also extended the shared services to support external caching services and to leverage an external monitoring service based upon Tivoli Enterprise Monitoring Server (TEMS). You can also deploy multiple instances of shared services by deploying to multiple cloud groups.
The Plugin Developer Kit that was previously released to support building your own plugins and pattern types for Virtual Application patterns is now available for download directly from the IBM Workload Deployer dashboard - making it even easier to gain access and experience using this extension mechanism to deliver your own custom plugins and pattern types.
Images created using the IBM Image Construction and Composition Tool are now fully supported in IBM Workload Deployer V3.1 Virtual System patterns. Furthermore, the IBM Image Construction and Composition Tool is now a generally available product that is fully supported and available for download directly from the IBM Workload Deployer dashboard.
Speaking of Virtual System Patterns, a new hypervisor edition image of WebSphere Application Server V8 is now delivered with the appliance. WebSphere Application Server V8 fully supports the Java EE 6 programming model and includes directly in the base image many other programming models, such as OSGi and JPA, that were previously delivered only as feature packs.
One item already mentioned by Dustin is the ability to configure multiple IBM Workload Deployer appliances in a master/slave relationship with a floating IP address to support continuous availability in the event that the master becomes unavailable. This feature can also be leveraged to support continuous operation while performing maintenance.
Another key appliance improvement is increased security through the introduction of several new security roles for separation of duties, ensuring that no single user has unrestricted control without oversight. Among the new roles is an auditing role, along with auditing operations that provide data for forensic analysis of security attacks and assist with compliance with the Health Insurance Portability and Accountability Act (HIPAA) and the Sarbanes-Oxley Act (SOX).
We believe that these new features, and several more, make IBM Workload Deployer V3.1 an even more compelling offering, one that can increase agility and consistency and reduce time to value for your applications. You can download IBM Workload Deployer V3.1 from IBM Fix Central. Please let us know what you think!
A few weeks ago, I had a conversation with a current WebSphere customer about the potential value they could derive from the use of IBM Workload Deployer. Right away, this customer saw value in the consistency that a patterns-based approach could afford them. It was clear that patterns eliminate the uncertainty that can make its way into even the best-planned deployment processes. Initially though, the customer questioned the value of being able to do fast deployments because, in their words, "We don't deploy WebSphere environments that often." So, we continued our discussion, and then they asked an important question that I encourage all of our users to ask: "Why don't we deploy our WebSphere environments more frequently?"
It is interesting to talk with WebSphere users that have a long history with our products. Often, they have been taking a shared approach to WebSphere installations for many, many years. They develop innovative approaches and isolation schemes that allow them to carve up a single WebSphere installation (cell) among multiple application teams. This allows them to avoid having to set up a cell for each application deployment and saves them the associated time. However, having talked to many different users taking this approach, I can tell you it is not without its challenges.
As was the case with the customer I mention above, users typically make trade-offs when electing for larger, shared cells. As an example, if multiple application teams with different types of applications share a single cell, applying fixes and upgrades to that cell can be a lot more complex. After all, you now have to coordinate plans across a number of different teams and find a window that fits all of their needs. For the same reason, trying out incremental function via our feature packs is much more arduous in these types of cells. Additionally, administrative controls become more complex, since teams with varying needs all require administrative access. Admittedly, this gets simpler with the newer fine-grained security models in WebSphere Application Server V7 and V8, but it still requires organizational discipline and process.
At this point I should be clear that I am not denigrating the shared cell approach. It can work well, and we have many facilities built into the WebSphere Application Server product to support that model. However, if you are using this approach and you find yourself stumbling more than you would like, then I strongly suggest you explore the patterns-based approach of IBM Workload Deployer. By deploying patterns that represent your WebSphere cells using IBM Workload Deployer, you can quickly and consistently set up multiple WebSphere Application Server cells to support the varying needs of your application teams. You will still avoid spending an inordinate amount of time installing and configuring cells, as that is an automated part of pattern deployment, and your application teams will still get the resources they need. Further, this can liberate your application teams in terms of how they apply maintenance, install upgrades, and absorb new function in the form of feature packs.
I am not suggesting a complete pendulum swing in your approach to how you manage multiple application environments. There is definitely a happy medium in terms of how many cells you end up with. After all, you do not want to trade in one set of problems for the problem of managing way too many different cells. However, I do think that decomposing monolithic, multi-purpose cells into smaller, more purposeful cells can be beneficial. In the course of thinking about this different approach, you may come to the same conclusion that the customer I mention above did. IBM Workload Deployer's rapid deployment capabilities are indeed valuable if you take a slightly different view of current processes.
One of the new features that debuted in WebSphere CloudBurst 1.1 is the ability to resize the disks in a virtual image during the extend and capture (image customization) process. If you remember, the virtual images that exist in the WebSphere CloudBurst catalog are made up of multiple virtual disks. In WebSphere CloudBurst 1.0, a default size was used for the virtual disks, and this could not be changed, even during the image extension process. To be quite honest, we got a lot of feedback about this, so in version 1.1, while default sizes are still provided, you can specify the eventual size of each virtual disk during the image extension process.
As an example, consider the WebSphere Application Server Hypervisor Edition virtual image. This image contains four virtual disks: one for the WebSphere Application Server binaries, one for the WebSphere Application Server profiles, one for the IBM HTTP Server, and one for the operating system. The default sizes of these disks are 6GB, 2GB, 1GB, and 12GB respectively, for a total of roughly 21GB. While that may be fine for some, what happens if you are going to be installing various other third-party software packages in the image? You may need more disk space for the operating system's virtual disk. Perhaps your WebSphere applications produce log files of considerable size. In that case you may want to increase the default size of the WebSphere Application Server profiles disk.
Those scenarios and more are exactly why the resizing capability was added. When you extend the WebSphere Application Server Hypervisor Edition virtual image in WebSphere CloudBurst 1.1, you will be presented with the option to resize one or more of the virtual disks:
In the case above, the operating system disk size is bumped up from the default 12GB to 16GB. Also note that in addition to changing disk sizes, you can specify the number of network interfaces for your custom image.
Obviously, when you increase the size of the disks within the virtual image you are also increasing the storage requirements for that image when it is deployed to a hypervisor. Keep this in mind when you are calculating the upper bound capacity of your cloud. If you want to see more about how this feature works, check out this video.
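To make that capacity math concrete, here is a small sketch in plain Python. The disk names and the 500GB datastore figure are illustrative assumptions rather than product values; the per-disk sizes mirror the example above, with the operating system disk resized to 16GB:

```python
# Hypothetical capacity math for a resized image. The disk names and the
# 500GB free-space figure are illustrative assumptions; the sizes mirror
# the example above (operating system disk bumped from 12GB to 16GB).
disk_sizes_gb = {
    "was_binaries": 6,
    "was_profiles": 2,
    "ibm_http_server": 1,
    "operating_system": 16,
}

# Each deployed virtual system consumes the sum of its virtual disk sizes.
per_deployment_gb = sum(disk_sizes_gb.values())

# Upper bound on concurrent deployments of this custom image for a
# hypervisor datastore with a given amount of free space.
datastore_free_gb = 500
max_deployments = datastore_free_gb // per_deployment_gb

print(per_deployment_gb)  # 25
print(max_deployments)    # 20
```

In other words, a 4GB bump to one disk raises the footprint of every deployment of that image, so the same datastore now holds fewer concurrent virtual systems than it would with the default sizes.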
Many enterprise IT decision makers are probably contemplating, or soon will be, some type of cloud computing solution that promises to deliver benefits to their operation. After all, cloud computing solutions, when utilized prudently, are hard to resist. As these decision makers prepare to weigh the pros and cons of such solutions, there is one consideration that is perhaps more important than all others: Will this solution enable elasticity among my services and applications?
Hopefully the answer to the above question is a confident 'Yes'. A lot of discussion is given to providing virtualized, dynamically scaled resources such as servers, processors, operating systems, etc., and rightfully so. By optimizing the usage and consumption of these infrastructure resources, IT operations can be significantly enhanced. However, these resources are often just a means to an end, that end being service delivery. IT provides services and applications to end users that are critical in terms of revenue generation or optimal business activity. These are the resources that most need elastic capabilities to ensure that end users are continually provided responsive service. Cloud computing solutions need to address this by providing the means to package services in manageable units and the ability to govern the scale of those units in an autonomic fashion.
Note that in the question above I used the word enable. This was deliberate, since it is hard to comprehend how any cloud computing solution could provide "out-of-the-box" service elasticity. The implementation of a service-oriented architecture becomes a necessary partner in achieving true service elasticity. Services must be developed in such a way that they are loosely coupled from other services and from required components such as databases. Otherwise, one service's ability to dynamically scale may be implicitly tied to another service's ability to achieve such elasticity. In this type of intertwined architecture, maximum elasticity will never be achieved.
What do you think about the current cloud landscape? Is there enough focus on service elasticity, and is service elasticity even the most valuable aspect of cloud computing? Leave a comment below, or reply to us on Twitter @WebSphereClouds.
Many technologies and ideas are paving the way for cloud computing. Utility computing, grid computing, and virtualization have all played important roles in enabling cloud solutions to take hold. In some ways, SOA is an easy-to-overlook player in the cloud computing world. However, there's no doubt that without SOA, and the ideas from the SOA movement, cloud computing would not be where it is now.
First, consider the millions of services available in the application services layer of the public cloud. While some of these services are intended to be consumed by an end user, just as many are meant to be consumed programmatically. Enterprises seek to compose services in the application services layer to deliver larger, end-user applications to their consumers. As such, the ability to consume services that exist across domains and company firewalls is a must. SOA standards help in this respect as they define how services, regardless of location, are discovered, consumed, and governed. This common set of standards has helped to make the services in a public cloud more readily usable by enterprises, so SOA standards have been a key factor in the explosion of service offerings in the public cloud.
Second, and just as important, is the impact that SOA has and will continue to have on the enabling layers of cloud computing. By the enabling layers of the cloud, I mean the platform and infrastructure services layer where we find both application and physical infrastructure. These two layers in the cloud are often referred to as constituting a Service Oriented Infrastructure, so the impact of SOA is immediately obvious. SOA has led to viewing application and physical infrastructure capabilities as discrete services that can be consumed as part of an overall solution or process. As the number of services in these two layers continues to grow, it will be important that they can be discovered, managed, and governed similar to software service components so as to enable robust, composable cloud infrastructure solutions. By applying the principles and lessons of SOA to these enabling services, we can achieve a discoverable, composable, and governable cloud infrastructure.
SOA should be acknowledged as a key enabler to cloud computing solutions. There are of course reasons beyond what is mentioned above. For instance, think about application virtualization and how effective management of such virtualization requires the capability to interact with applications implemented in various technologies. SOA standards have established how to interact and communicate with applications regardless of implementation, so virtualization management can and should piggyback on these standards. As cloud computing continues to evolve, I think we will only see more instances of SOA affecting cloud computing for the better.
I want to clear something up about WebSphere CloudBurst that can sometimes cause a bit of confusion. In nearly all of our content about the appliance, we talk about it in the context of building private clouds consisting of WebSphere application environments. Typically people think of private clouds as something only those within their organization can access and utilize. However, with WebSphere CloudBurst you are not limited to creating that kind of a private cloud.
Perhaps it is more fitting that we talk about WebSphere CloudBurst as a means to create on-premise clouds. After all, that's really what we mean. You create a shared pool of hardware and network resources owned by your organization, and then you define this cloud of resources to WebSphere CloudBurst. Once that cloud is defined, you can leverage WebSphere CloudBurst to dispense your WebSphere application environments into that cloud. The accessibility of your application environments running in that cloud is entirely up to you.
You may decide that the cloud is indeed private and that only those in your organization, or a smaller subset of users, can access the environments. On the other hand, you may decide that you want to allow consumers in the public domain to request WebSphere application environments and then have WebSphere CloudBurst provision those environments into a public cloud. I say public here because while the cloud's resources are on your premises, access to that cloud is not restricted to within the organizational firewall. Ultimately, the determining factor for whether your WebSphere CloudBurst cloud is public or private is the network configuration you provide. If the virtual machines are associated with network resources that are publicly accessible, then I would say you have a public cloud.
I hope this entry didn't serve to only add to the confusion. The bottom line is this: WebSphere CloudBurst allows you to create, deploy, and maintain virtualized WebSphere environments in an on-premise cloud. Whether that cloud is public or private is entirely up to the network configuration that you set up.
As far as easy-to-use, intuitive web interfaces go, I think WebSphere CloudBurst stacks up well against any competition. I think this is one of the main reasons why, for those with at least basic familiarity with WebSphere, the product has such a small learning curve. All that said, web interfaces are not ideal for all types of use cases. Namely, it would be nearly impossible to use WebSphere CloudBurst as part of an automated process if all it had were a web interface.
The need to automate the use of WebSphere CloudBurst, or to use it within other automated processes, is the reason behind the command line interface. The WebSphere CloudBurst CLI provides a Jython-based API with which you can leverage the same capabilities that are available in the web console. You simply unzip the CLI tools on any machine, and you can remotely interface with an appliance of your choosing. Sounds useful, right? Most agree that it does, but the question that usually comes up is, "How do I get started?"
I would suggest that anyone getting started with the CLI take a look at the overview and premise behind the API. Beyond this, it largely depends on how familiar you are with Jython scripting. If you are a WebSphere administrator who does a fair amount of wsadmin scripting, then you are probably all set. Otherwise, I suggest you spend some time in the interactive shell mode of the CLI, and in particular, I suggest you leverage the Wizard object provided by the CLI.
The Wizard object enables prompt-based creation of resources when using the WebSphere CloudBurst CLI. Take for example the following code snippet where I create a new virtual system:
>>> w = cloudburst.wizard()
Enter ?? for help using the wizard.
name: Single WebSphere Server
pattern (* to select from list): *
1. WebSphere single server
2. WebSphere cluster
3. WebSphere cluster (development)
4. WebSphere cluster (large topology)
5. WebSphere single server with sample
pattern (* to select from list): 1
cloud (* to select from list): *
1. Default ESX group
cloud (* to select from list): 1
Part properties and script parameters still requiring values:
value for 'part-1.ConfigPWD_ROOT.password': passw0rd
Part properties and script parameters still requiring values:
value for 'part-1.ConfigPWD_USER.password': passw0rd
number/key/-/*/?/??/!/enter to proceed:
{
"acl": (nested object),
"created": Jul 13, 2010 8:14:05 AM,
"maintenances": (nested object),
"name": "Single WebSphere Server",
"owner": (nested object),
"pattern": (nested object),
"snapshots": (nested object),
"updated": Jul 13, 2010 8:14:08 AM,
"virtualmachines": (nested object)
}
That's how easy the Wizard object makes using the CLI. Now, you might say that this is all well and good, but since the Wizard object implies prompt-based interaction, it is not helpful in terms of automating a process. That may be true the first time you use the Wizard to create a particular resource, but if you take advantage of a handy method on the object, it can enable automation going forward. The method I am talking about is the toDict method. By calling this method after creating a resource with the Wizard object, you get back the dict object built from the information you entered via the prompts.
I truncated the output above for space, but the toDict method gives me the input data used to create the virtual system resource. This is really helpful going forward, as it gives me the exact input format to use to create my virtual system resources. I do not have to rely on the Wizard object any longer, and instead I can create virtual system resources without requiring direct user interaction because I know the input data WebSphere CloudBurst expects.
If you are just getting started with Jython-based scripting and the WebSphere CloudBurst CLI, I strongly suggest you use the Wizard object as a fast on-ramp. To move your automation work forward, make use of the toDict method. It will make writing completely automated WebSphere CloudBurst scripts much simpler. Good luck!
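As a sketch of that progression, the snippet below shows the kind of input data you might capture from toDict and then reuse for unattended creation. The field names echo the Wizard session shown earlier, but the exact dict layout and the cloudburst.virtualsystems.create call are assumptions about the CLI's API, so the create call is left commented out and only the data-shaping part runs on its own:

```python
# Sketch of moving from interactive Wizard prompts to scripted creation.
# The dict layout below is a hypothetical reconstruction of what toDict
# might return, not the documented format; treat it as illustrative only.
def build_virtual_system_input(name, pattern_name, cloud_name, passwords):
    """Assemble an input dict for unattended virtual system creation."""
    return {
        "name": name,
        "pattern": pattern_name,
        "cloud": cloud_name,
        # The part properties the Wizard prompted for interactively:
        "parts": [
            {
                "part-1.ConfigPWD_ROOT.password": passwords["root"],
                "part-1.ConfigPWD_USER.password": passwords["user"],
            },
        ],
    }

spec = build_virtual_system_input(
    "Single WebSphere Server",
    "WebSphere single server",
    "Default ESX group",
    {"root": "passw0rd", "user": "passw0rd"},
)

# In the real CLI shell you would then hand the dict to a creation call,
# along the lines of (hypothetical API, shown for illustration):
# vs = cloudburst.virtualsystems.create(spec)

print(spec["name"])
```

The point is simply that once the input format is captured, a script can build that dict from its own configuration data and create virtual systems without any prompts.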
Alas, the wait is over! WebSphere is jumping head first into the cloud computing fray. The announcement today of two new offerings means that companies will be able to build and benefit from private WebSphere clouds. In addition to these new offerings, IBM also announced two more WebSphere products headed to the Amazon Elastic Compute Cloud.
To start, the new WebSphere Application Server Hypervisor Edition is a virtualized packaging of the popular WebSphere Application Server platform. The virtual image includes a Linux operating system, WebSphere Application Server, and IBM HTTP Server, all pre-installed and packaged according to the Open Virtualization Format (OVF). There are six different WebSphere Application Server profiles pre-configured on the image, which allow the virtual image to take many different forms when deployed to a hypervisor. The image supports unattended activation, meaning the virtual image can be deployed to a hypervisor and configured with activation scripts. This feature allows the deployment process to be fully automated. WebSphere Application Server Hypervisor Edition allows users to reap the benefits of virtualization and realize a higher level of business agility with their WebSphere Application Server environments due to the radical ease of deployment.
In addition to WebSphere Application Server Hypervisor Edition, IBM announced the WebSphere CloudBurst Appliance. The WebSphere CloudBurst Appliance is a secure hardware appliance that allows users to construct, store, deploy, and maintain private WebSphere cloud environments. WebSphere CloudBurst delivers WebSphere Application Server configurations including the operating system, which are optimized for virtual environments. These configurations, or patterns as they are called by WebSphere CloudBurst, can be customized by users to build WebSphere Application Server configurations that include the operating system, middleware, and user applications. WebSphere CloudBurst allows users to deploy these patterns to their private cloud, and it provides maintenance and administration capabilities for the deployed virtual systems. In short, WebSphere CloudBurst provides capabilities to manage the entire lifecycle of private WebSphere cloud environments.
The announcement wasn't all about private clouds. IBM also announced its intention to make the WebSphere Application Server and WebSphere eXtreme Scale offerings available as Amazon Machine Images. These AMIs will allow users to utilize both the WebSphere Application Server and WebSphere eXtreme Scale on Amazon's Elastic Compute Cloud (EC2).
It's clearly an exciting and innovative time for cloud computing in WebSphere. Stay tuned to our blog and WebSphere Cloud Computing for Developers site for more information and resources on these new offerings. In the meantime, check out the WebSphere Application Server Hypervisor Edition page and the WebSphere CloudBurst page for more information and live demos!
In previous posts, I have discussed the integration capability between WebSphere CloudBurst and Tivoli Service Automation Manager. Most recently, I discussed this in the context of integrating WebSphere and IBM CloudBurst. Today, I am happy to announce the publication of an article I co-wrote with Marcin Malawski from TSAM development on the subject of this integration.
If you are a WebSphere user interested in a holistic approach in building out a private cloud, I strongly recommend that you check the article out. If you are currently an IBM CloudBurst, IBM Service Delivery Manager, or Tivoli Service Automation Manager user and you provision a significant number of WebSphere environments, I strongly recommend that you check the article out. In fact, regardless of your current situation, do me a favor and check the article out!
As always, I look forward to feedback and comments. Good, bad, or indifferent. You can leave your comments here or on the article page. I look forward to hearing from you!
The answer is yes, I did a related but different blog post with a similar title a few weeks back. At that time I was primarily highlighting a webinar that I co-presented with Keith Smith regarding the various virtualization solutions and features that are available in IBM Workload Deployer, in virtual application patterns and in virtual system patterns leveraging the Intelligent Management Pack (IMP). If you didn't get a chance to attend that webcast live, then I encourage you to check out the replay (especially Keith's portion with details on IMP, a really helpful overview).
This new blog post expands on the theme of that original post but takes a broader view of where IBM has been with our private cloud offerings, from WCA and IWD up to and including the recently announced IBM PureApplication System, and how this history demonstrates our leadership in supporting applications in the cloud.
IMPACT means new product announcements, and I'm particularly excited to point out the announcement for WebSphere CloudBurst 2.0. The new release features multi-image product support, support for Red Hat on VMware ESX, the new WebSphere Process Server Hypervisor Edition and much more.
You can get all the details in my blog post here, and you can watch an overview demo here. Don't hesitate to send me any comments or questions here or on Twitter @damrhein.
The recently announced IBM WebSphere CloudBurst Appliance is creating a fair amount of stir in the cloud computing market. Its ability to create, deploy, and administer private WebSphere cloud environments lets customers build and manage a service-oriented cloud. To provide a more in-depth look at what the appliance delivers, I’d like to take a short look at the creation, deployment, and administration capabilities to understand what each one means to the user.
To get started, in order to leverage WebSphere environments in a private cloud, you need to construct WebSphere configurations optimized for such a virtual environment. Using WebSphere CloudBurst you can do just that. WebSphere CloudBurst ships with a virtual image packaging of WebSphere Application Server called WebSphere Application Server Hypervisor Edition. From this new virtual image offering, complete WebSphere Application Server topologies can be constructed to create what WebSphere CloudBurst terms a pattern. These patterns are representations of fully functional WebSphere Application Server environments. For example, using WebSphere components in the WebSphere Application Server Hypervisor Edition, you can create a cluster environment that includes a WebSphere Deployment Manager, two custom nodes, and the IBM HTTP Server.
In order to create these patterns, WebSphere CloudBurst provides a drag-and-drop interface that allows users to select WebSphere components from the new virtual image and drop them onto a canvas that visually represents the WebSphere configuration. In addition to adding WebSphere components, users can also drag and drop script packages onto the components within a pattern. These script packages allow users to provide scripts and other artifacts that further customize the WebSphere Application Server environment once it has been deployed in the cloud. These script packages can do just about anything, from tuning WebSphere security settings to installing applications in the newly created environment.
Building the virtualized WebSphere Application Server environment, or pattern, is only part of the process. Once the pattern is built, it is ready to be deployed to the cloud. WebSphere CloudBurst operates on a ‘bring your own cloud’ model, meaning you define your own cloud resources to the appliance. These cloud resources consist of a set of supported hypervisors and a list of IP addresses available to the cloud. Once these resources are defined, WebSphere CloudBurst has all the information it needs for deployment. On deployment of a pattern, WebSphere CloudBurst determines the state of the available resources and places the pattern across the available hypervisors accordingly. It places the WebSphere instances to ensure efficient use of resources, high performance, and high availability. In addition to placing the pattern instance onto the hypervisors, WebSphere CloudBurst selects and assigns an IP address to each WebSphere component in the configuration. Both the placement and the IP address assignment are done with no user input or intervention. The result of the pattern deployment is a fully instantiated WebSphere environment that can be accessed and used like any other such environment. It is important to note that the WebSphere environments do not run on the WebSphere CloudBurst Appliance; in fact, the appliance plays no role in the runtime of the environments.
While it is true that the WebSphere CloudBurst Appliance is not involved in the runtime of the patterns that it deploys, it does give users the ability to monitor and administer these environments. From the WebSphere CloudBurst console, each of the WebSphere virtual systems can be viewed to understand network configuration, memory consumption, and CPU usage. Usage of cloud resources (memory, CPU, IP addresses) is also tracked at a user or user group level, allowing WebSphere CloudBurst to support chargeback across an enterprise. In addition to these monitoring capabilities, maintenance features are part of the appliance's administration story. WebSphere CloudBurst provides users with a central administration point for applying maintenance, such as iFixes and service packs, to the WebSphere virtual systems it created. These fixes can be applied from the WebSphere CloudBurst console with only a few mouse clicks, making maintenance application remarkably easy.
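For a rough feel of what per-group chargeback means, here is a hedged sketch of the accounting involved. The record fields and the rate are invented for illustration; the appliance tracks usage for you rather than requiring any such code.

```python
# Hypothetical chargeback sketch: roll per-deployment resource usage up to a
# cost per user group. Field names and the billing rate are invented.
from collections import defaultdict

def chargeback(records, rate_per_gb_hour=1.0):
    """records: iterable of (group, memory_gb, hours) -> {group: cost}."""
    totals = defaultdict(float)
    for group, mem_gb, hours in records:
        totals[group] += mem_gb * hours * rate_per_gb_hour
    return dict(totals)

usage = [
    ("test-org", 4, 10),   # 4 GB environment up for 10 hours
    ("dev-org", 2, 5),
    ("test-org", 8, 2),
]
print(chargeback(usage))
```

The takeaway is that because usage is already attributed to users and groups inside the appliance, producing a chargeback report is an aggregation problem, not a data-collection problem.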
The above is only a glimpse of the capabilities of the new IBM WebSphere CloudBurst Appliance. Look for more information about this offering at ibm.com/cloudburst, and stay tuned to our blog, twitter account (@WebSphereClouds), and website as we continue to deliver insight into WebSphere CloudBurst.
It's been a busy few weeks full of customer visits ranging from the east coast to the west coast. Other than an extremely off-kilter body clock, the trips have been great. It is so exciting to see the high level of interest in the newest release of WebSphere CloudBurst, version 2.0.
On the topic of WebSphere CloudBurst 2.0, I want to make sure our IBM Business Partners (and my IBM colleagues) are aware of a couple of upcoming Tech Talks. These Tech Talks are given by the IBM labs and provide an early look into some of our newest offerings. On the Tech Talk docket this month are WebSphere CloudBurst 2.0 and the new WebSphere DataPower XC10 Appliance. Business partners can sign up for the WebSphere CloudBurst talk here, and the WebSphere DataPower XC10 Appliance here (IBMers get in touch with me for the links).
I feel pretty certain that if you are reading this, you are probably quite familiar with WebSphere CloudBurst, but perhaps less so with WebSphere DataPower XC10. This is a new offering from IBM that provides in-memory data caching capabilities (similar to those of WebSphere eXtreme Scale) in the form factor of an appliance. Data grids and caches are a major trend in application design and development, and chances are that if you are developing applications for distributed systems today, you could benefit from in-memory data caching. Check out the Tech Talk for more information.
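If you haven't used a data cache before, the basic idea is the look-aside pattern: check the cache before paying for a trip to the backing store. Here is a minimal, self-contained sketch of that pattern; the TTL handling and the stand-in "database" lookup are illustrative only, and a grid like the XC10 adds distribution, replication, and capacity far beyond this.

```python
# Minimal look-aside cache sketch. Real data grids distribute and replicate
# entries across the grid; this toy version just avoids repeated lookups.
import time

class SimpleCache:
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry_timestamp)

    def get(self, key, loader):
        entry = self.store.get(key)
        if entry and entry[1] > time.time():
            return entry[0]                       # cache hit
        value = loader(key)                       # miss: load from the source
        self.store[key] = (value, time.time() + self.ttl)
        return value

calls = []
def slow_lookup(key):
    calls.append(key)          # stands in for a database round trip
    return key.upper()

cache = SimpleCache()
cache.get("order-1", slow_lookup)
cache.get("order-1", slow_lookup)  # served from cache; no second lookup
print(len(calls))  # 1
```

Every lookup the cache absorbs is latency and load the backing database never sees, which is why this pattern matters for distributed applications.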
While these Tech Talks are restricted for IBM Business Partners and IBMers, I'm always available if you have any questions about WebSphere CloudBurst, WebSphere DataPower XC10, or any of our WebSphere offerings. I'll do my best to answer your questions or put you in touch with the right IBMers in the lab. Feel free to reach out and get in touch at any time.
Yesterday, I had the opportunity to present WebSphere CloudBurst during the IBM Cloud computing for developers virtual event. I provided a brief overview of the appliance along with a demonstration, and then tackled some questions from the 150+ attendees in the audience.
All of the questions were good ones, and I wish I had time to address them all during the session (I will be answering all questions and posting them online soon). However, one of the questions stood out to me because of its relevance to how IBM uses WebSphere CloudBurst in their own labs. Paraphrasing, the question was "Can you share hardware resources (hypervisor hosts) between WebSphere CloudBurst and other components in your data center?"
Very good question. The answer is, of course, yes you can. I don't want to sugarcoat it because it does require thought and planning. Ideally, when WebSphere CloudBurst is using a hypervisor host, the appliance is the only thing acting on that host. However, when WebSphere CloudBurst is not using that host, then you can absolutely repurpose it for use by other components in your data center.
Our WebSphere Test Organization uses WebSphere CloudBurst to aid in their testing of our WebSphere middleware products. The capability to provision hypervisor hosts in and out of the WebSphere CloudBurst cloud is of critical importance to them. Like many organizations, they are resource constrained and must get the most out of their IT investment. They use Tivoli Provisioning Manager to provision VMware ESX hosts for use by WebSphere CloudBurst, and then they use the WebSphere CloudBurst CLI to define those hypervisors to the appliance and do verification testing of the new resource. This allows them to easily expand and contract the amount of shared infrastructure utilized by WebSphere CloudBurst at any one time, and it means components do not have to statically lock down resources. I have a lot more information to come about our WebSphere Test Organization and their use of WebSphere CloudBurst, but I thought I would give everyone a peek at the kinds of things they do every day with the appliance.
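The expand-and-contract workflow above can be sketched in a few lines. To be clear, the class and method names here are invented stand-ins, not the actual Tivoli Provisioning Manager or WebSphere CloudBurst CLI interfaces; the sketch only captures the shape of the flow: provision a host, verify it, add it to the cloud, and hand it back when the test run ends.

```python
# Hypothetical sketch of expanding/contracting a CloudBurst cloud. The real
# flow uses Tivoli Provisioning Manager plus the CloudBurst CLI; these names
# are invented for illustration.
class CloudPool:
    def __init__(self):
        self.hypervisors = {}

    def add_hypervisor(self, host):
        # Mirrors the verification testing done when a new resource is defined.
        if not self._verify(host):
            raise RuntimeError(f"{host} failed verification")
        self.hypervisors[host] = "available"

    def remove_hypervisor(self, host):
        # Hand the host back to the data center for other components to use.
        self.hypervisors.pop(host, None)

    def _verify(self, host):
        return host.startswith("esx-")           # placeholder health check

pool = CloudPool()
pool.add_hypervisor("esx-101")     # after TPM provisions the ESX host
pool.add_hypervisor("esx-102")
pool.remove_hypervisor("esx-101")  # contract when the test run finishes
print(sorted(pool.hypervisors))    # ['esx-102']
```

Scripting this against the real CLI is what lets the test organization treat hypervisor capacity as a shared, elastic resource instead of statically locking it down.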
If you are interested in what I presented yesterday to the attendees of IBM's Cloud computing for developers event, you can check out their developerWorks Group page. In the Activities section, you will find the charts, demonstration, and a playback if you prefer to listen to the session. As always, I appreciate your feedback and questions.