I'm documenting tricks in RAD 6. In Part 1, I explained how Activation Spec is replacing Listener Port for configuring MDBs, and gave some J2EE-spec-based guidance on which approach to use when. Here is some additional advice that is more specific to WAS 6.
In the WAS 6 docs, "Creating a new listener port" recommends that you should upgrade your EJB 2.0 MDBs that use Listener Ports to EJB 2.1 MDBs that use JCA adapters (and therefore Activation Specs). It strikes me that this advice is a little bit empty because your EJB 2.0 MDBs are using a JMS provider, so you can't update to Activation Specs until your messaging system vendor updates their JMS implementation to use JCA. Nevertheless, the advice still holds that if you can update your MDBs to 2.1 and Activation Specs, you should do so and quit configuring your WAS servers with Listener Ports.
It goes on to say that Activation Specs must be used with any MDBs that use JCA adapters. So, the JMS API implementation for (V6) Default messaging is apparently implemented using JCA and therefore requires Activation Specs. The JMS impls for the other providers apparently do not use JCA and therefore require Listener Ports. If and when those other providers are upgraded to JCA, then you'll be able to use Activation Specs.
Which brings to mind another question:
If a JMS provider is implemented using JCA, then you can configure the MDB connection as JCA using an Activation Spec, or as JMS using a Listener Port. Which should you use?
Just for fun, I implemented an EJB 2.1 MDB in RAD 6 to use (V6) Default messaging, but instead of using an Activation Spec, I configured its connection using a Listener Port (which specified a connection factory and destination that were configured in the default messaging provider). When I tried to run this configuration, the server started with no errors. But when I deployed the app and the server tried to start the EJB jar, I got this MDBInvalidConfigException (errors WMSG0063E and WMSG0019E):
WMSG0063E: Unable to start message-driven bean (MDB) MyExampleMDB against listener port MyExampleQueuePort. It is not valid to specify a default messaging Java Message Service (JMS) resource for a listener port; the MDB must be redeployed against a default messaging JMS activation specification. WMSG0019E: Unable to start MDB Listener MyExampleMDB, JMSDestination jms/MyExampleQueue : com.ibm.ejs.jms.listener.MDBInvalidConfigException: Cannot deploy an MDB against a listener port specifying a default messaging JMS resource
So when configuring an MDB that listens for messages from the default messaging provider, there's nothing to stop you from using a Listener Port instead of an Activation Spec. But WAS will refuse to run the app. So I guess that's how you'll know when you need to convert your Listener Ports to Activation Specs.
This presents a bit of a dilemma, though. When you're developing an MDB, you're not supposed to know where the messages/events are really coming from. The configuration of the Activation Spec or Listener Port effectively hides this from you and makes it part of the deployment configuration. You just have to know that there'll be a resource in the deployment server with the expected resource name. But now you also have to know whether or not the source will be accessed through a JCA adapter, and therefore whether to configure it with a Listener Port or an Activation Spec. This makes your code, or at least its configuration, a bit less flexible.
For example, if you design your MDB to use WMQ, then you'll need to specify a Listener Port that the deployer will provide to map your MDB to its WMQ queue. But then if the deployer decides to use WAS 6's Default messaging instead, now he'll provide an Activation Spec to map your MDB to its queue. But your MDB isn't configured to use an Activation Spec, it's configured to use a Listener Port. The deployer will need to go modify the ibm-ejb-jar-bnd.xmi file that specifies what Listener Port or Activation Spec an MDB uses (see the MDB section of "Application bindings"). At least the deployer only needs to change a deployment descriptor and not change (and recompile) code. But to avoid needing to make this change, you'll need to decide what messaging system your MDB will use before you set its binding at development time. [Read More]
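To illustrate, here is roughly what that binding choice looks like in ibm-ejb-jar-bnd.xmi. Treat this as a sketch: the element and attribute names below are approximated from memory, not copied from the schema. The point is that switching from a Listener Port to an Activation Spec is a change to one attribute in the binding, not a change to the MDB code:

```xml
<!-- Hypothetical binding fragments; names approximate. -->

<!-- MDB bound to a Listener Port (WMQ or another non-JCA provider): -->
<ejbBindings xmi:type="ejbbnd:MessageDrivenBeanBinding"
             listenerInputPortName="MyExampleQueuePort">
  <enterpriseBean xmi:type="ejb:MessageDriven"
                  href="META-INF/ejb-jar.xml#MyExampleMDB"/>
</ejbBindings>

<!-- Same MDB bound to an Activation Spec (default messaging / JCA): -->
<ejbBindings xmi:type="ejbbnd:MessageDrivenBeanBinding"
             activationSpecJndiName="eis/MyExampleActivationSpec">
  <enterpriseBean xmi:type="ejb:MessageDriven"
                  href="META-INF/ejb-jar.xml#MyExampleMDB"/>
</ejbBindings>
```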
Which is the simpler approach for implementing web services, REST or SOAP/WSDL? I've been thinking about how they compare.
Supporters of the REST approach claim that it's simpler than SOAP. (By SOAP, we really mean SOAP and WSDL, since SOAP is just a data format, not a service.) Amazon Web Services implements both, and there are claims that 85% of the AWS traffic is REST (therefore only 15% SOAP). I have no idea if these numbers are accurate, or how they change over time, but let's suppose they're accurate. This is the main REST drumbeat, that REST is simpler than SOAP.
So is REST simpler? Hmmm, from whose perspective?
REST leads to lots of little URLs--one per remotely-accessible object, plus URLs for listing objects and creating new instances. For example, http://example.com/product/12345 could provide access to the Product instance whose ID is #12345. No non-trivial app can, or should, have that many static URLs or HTML/XML/whatever pages, so the way REST works is to map lots of logical URLs to objects/resources in the app. This puts complexity on the client to figure out which of a seemingly endless list of URLs to invoke for the desired behavior, and puts complexity on the host to map this vast set of URLs to useful behavior or objects. SOAP has fewer URLs, basically one per port type defined in WSDL.
Thanks to all the URLs in REST, each URL's behavior is pretty darn simple. There are only four operations: GET, PUT, POST, and DELETE (although it appears that POST can be overloaded to do lots of things depending on the request document; see What is REST?). So assuming the operations' behavior is truly as intuitively obvious as it's supposed to be, once you've got the URL, invoking the behavior should be simple. A SOAP/WSDL service has fewer URLs, basically one per port type, but lots of operations, as listed in the WSDL.
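To make the "lots of URLs, few operations" point concrete, here's a self-contained Java sketch. It uses the JDK's built-in com.sun.net.httpserver purely for illustration (a real REST service would run on real infrastructure), and all the resource names are made up. Each /product/{id} URL maps to one entry in a map, and the HTTP method selects the operation:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RestSketch {
    // The resources behind the URLs: one logical URL per product.
    static Map<String, String> products = new ConcurrentHashMap<>();

    static int demoGet() throws Exception {
        products.put("12345", "Widget");
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        // Every /product/{id} URL maps to one map entry; the HTTP method
        // (GET, PUT, POST, DELETE) selects the operation on that resource.
        server.createContext("/product/", exchange -> {
            String id = exchange.getRequestURI().getPath()
                                .substring("/product/".length());
            String method = exchange.getRequestMethod();
            byte[] body = new byte[0];
            int status = 404;
            if ("GET".equals(method) && products.containsKey(id)) {
                body = ("<product id=\"" + id + "\">" + products.get(id)
                        + "</product>").getBytes(StandardCharsets.UTF_8);
                status = 200;
            } else if ("DELETE".equals(method)) {
                products.remove(id);
                status = 204; // removed; no body to return
            }
            exchange.sendResponseHeaders(status,
                    body.length == 0 ? -1 : body.length);
            if (body.length > 0) {
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            }
            exchange.close();
        });
        server.start();
        try {
            URL url = new URL("http://localhost:"
                    + server.getAddress().getPort() + "/product/12345");
            return ((HttpURLConnection) url.openConnection()).getResponseCode();
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("GET /product/12345 -> " + demoGet());
    }
}
```

Note how little per-URL logic there is; the complexity has moved into the URL-to-resource mapping, which is exactly the trade-off described above.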
SOAP and REST seem to agree on XML as the data format of choice. But a SOAP/WSDL service requires a particular XML schema for the SOAP body (a schema for the request and another for the reply). A REST service, in theory, can support multiple formats; the client and provider negotiate which format to use for a particular invocation. Will these negotiations really make REST simpler? Or will a typical REST service maintain simplicity by only supporting a single format?
These are points upon which learned men can disagree. Do you prefer lots of URLs and a small, constant set of fine-grained operations (REST)? Or fewer URLs, each hosting a larger, application/service-specific set of potentially coarse-grained operations (SOAP/WSDL)? Do you want to support one set of XML data formats (SOAP/WSDL) or potentially negotiate the data format every time (REST)? Which do you think is simpler?
So, what's the difference? Wikipedia says "Interoperability: the capability of different programs to exchange data via a common set of business procedures, and to read and write the same file formats and use the same protocols" and "Integration allows data from one device or software to be read or manipulated by another, resulting in ease of use." Yuck, those aren't much help.
To me, interoperability means that two (or more) systems work together unchanged even though they weren't necessarily designed to work together. Integration means that you've written some custom code to connect two (or more) systems together. So integrating two systems which are already interoperable is trivial; you just configure them to know about each other. Integrating non-interoperable systems takes more work.
The beauty of interoperability is that two systems developed completely independently can still work together. Magic? No, standards (or at least specifications, open or otherwise); see Open Standards in Everyday Life. Consider a Web services consumer that wants to invoke a service described by a particular WSDL, and a provider that implements the same WSDL; they'll work together, even if they were implemented independently. Why? Because they agree on the same WSDL (which may have come from a third party) and a protocol (such as SOAP over HTTP) discovered in the binding. How does the consumer discover the provider? Some registry, perhaps one that implements UDDI (which sucks, BTW). So SOAP, HTTP, WSDL, UDDI--all that good WS-I stuff--make Web services interoperable.
Another example I like is the "X/Open Distributed Transaction Processing (DTP) model" (aka the XA spec); see "Configuring and using XA distributed transactions in WebSphere Studio." With it, a transaction manager by one vendor can use resource managers by other vendors. Even though they weren't all written for each other, they still work together because they follow the same spec. They're interoperable.
Now consider two systems that weren't designed to be interoperable, or perhaps interoperable but with different specs. This requires integration. The integration code--could be Java, Message Broker, etc.; I co-authored a whole book on this--takes the interface one system expects and converts it to the one the other system provides. This is why WPS has stuff like Interface Maps and Business Object Maps.
So, you want interoperable systems; integrating them is simple. Otherwise, you have to integrate them yourself. [Read More]
There doesn't seem to be a whole lot of agreement (yet) in the industry, or perhaps even within IBM, on what this term means. But here's my take, at least.
In the tradition of the composite pattern, a composite service is a service whose implementation calls other services. This is as opposed to an atomic service, whose implementation is self-contained and does not invoke any other services.
A composite service acts as both a service provider of the (composite) service and as a service consumer of its child services. The composite can be considered to be aggregating together the child services into a bigger service. A composite service is one kind of what I call a service coordinator, a coordinator that also itself is a service.
You cannot tell from a service's API whether it's composite. In fact, two providers may implement the same service, one in a composite fashion and the other in an atomic fashion. A provider implemented one way might be reimplemented the other way; this change makes no difference to the consumers because the interface is still the same.
Service component architecture (SCA) supports composite service components. Recall that an SCA component has a service interface and service references. Those references give the implementation the ability to call the other services referenced. They also serve as a declaration that this component needs these services, much like a Java class importing other Java classes. An SCA component with no references is atomic; one with references is composite.
Service Component Architecture overview
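Setting the SCA runtime aside, the composite/atomic distinction can be sketched in plain Java (all the service names here are hypothetical): two providers implement the same interface, one self-contained and one by calling referenced child services, and a consumer holding the interface cannot tell which it has:

```java
public class CompositeDemo {
    // The service interface consumers see; it reveals nothing about
    // whether the provider behind it is composite or atomic.
    interface QuoteService { double quote(String product); }

    // Child services a composite provider might reference (hypothetical).
    interface PriceService { double basePrice(String product); }
    interface TaxService { double tax(double amount); }

    // Atomic provider: self-contained, calls no other services.
    static class AtomicQuoteService implements QuoteService {
        public double quote(String product) { return 100.0; }
    }

    // Composite provider: same interface, implemented by invoking the
    // services it holds references to (like SCA service references).
    static class CompositeQuoteService implements QuoteService {
        private final PriceService prices;
        private final TaxService taxes;
        CompositeQuoteService(PriceService prices, TaxService taxes) {
            this.prices = prices;
            this.taxes = taxes;
        }
        public double quote(String product) {
            double base = prices.basePrice(product);
            return base + taxes.tax(base);
        }
    }

    static double demoAtomic() {
        return new AtomicQuoteService().quote("widget");
    }

    static double demoComposite() {
        // Wire the composite to two (here trivial) child providers.
        return new CompositeQuoteService(p -> 90.0, amt -> amt * 0.5)
                .quote("widget");
    }

    public static void main(String[] args) {
        System.out.println("atomic:    " + demoAtomic());    // 100.0
        System.out.println("composite: " + demoComposite()); // 135.0
    }
}
```

Swapping one provider for the other changes nothing for the consumer, which is exactly the point made above about reimplementing a service either way.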
A business process is a kind of composite service. A business process can be invoked as a service, a request to perform its functionality. The process contains a sequence of activities implemented as services, and is therefore a service that calls other services. That's a composite service.
In my mind, a business process is a special case of composite service. Usually, a service executes what might be described as synchronously. (The execution can be invoked synchronously or asynchronously.) A service has a return value (or exception); it is returned at the end of execution. The consumer usually waits (but not necessarily with a blocking thread) while the service executes, and does not continue until the service completes and returns its result.
What makes business process a special case is that a caller usually doesn't wait while the process executes. A caller invoking a business process (synchronously or asynchronously) usually waits only while the process starts, to confirm that it starts successfully. But once the process starts, it runs on separate thread(s) "in the background" while the caller proceeds with other work.
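This contrast can be sketched in plain Java, with no process engine involved. CompletableFuture merely stands in for the background thread(s) a real Business Flow Manager would use, and the names are made up:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class ProcessVsServiceDemo {
    // An ordinary service: the caller waits for the return value.
    static int priceLookup() { return 42; }

    // A business-process-style invocation: the caller gets control back
    // as soon as the process has started; the work continues on another
    // thread "in the background".
    static CompletableFuture<String> startProcess() {
        return CompletableFuture.supplyAsync(() -> {
            // Simulated long-running sequence of activities.
            try { TimeUnit.MILLISECONDS.sleep(50); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            return "done";
        });
    }

    public static void main(String[] args) throws Exception {
        System.out.println("service returned " + priceLookup()); // caller waited
        CompletableFuture<String> process = startProcess();      // returns at once
        System.out.println("process started; caller continues with other work");
        System.out.println("process eventually reports: " + process.get());
    }
}
```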
Because of this difference, it seems to me, although a business process is just another service and another way to implement a service, a caller usually knows whether or not the service it's invoking is a business process. If it's not, the caller cannot tell whether the service is composite or atomic, but if it waits for a return value, then the service is most likely not a business process. [Read More]
The latest version of WebSphere Application Server, WAS 6, has a new feature called "Service Integration Bus" (WAS benefits talks about "a new pure-Java JMS engine"). The SIB is implemented as a group of messaging engines running in application servers (usually one-to-one engine-to-server) in a cell. As a service in WAS 6, SIB is a complete JMS v1.1 provider implementation. (Not just the API; a working messaging system.) The JMS provider is a pure Java implementation that runs completely within the application server's JVM process. (For persistent messaging, WAS also requires a JDBC database such as DB2.) Thus JMS messaging is built into WAS and easily available to any J2EE application deployed in WAS.
Why Service Integration Bus? IBM's software customers over the past few years have divided into two overlapping but still distinct markets with different needs:
Connect any kind of app to any other kind of app. This is the traditional WebSphere MQ market, where you've got different apps written in different languages running on different operating systems and you want them all to talk to each other. This market hasn't changed, nor has IBM's commitment to supporting it.
Connect J2EE apps running in WAS servers. What's changed in the last few years is that many of our customers are converting everything to J2EE apps deployed in WAS and so they don't need to be able to support every platform imaginable, just WAS. WAS 5 addressed this market with its Embedded Messaging feature (see below). This market is now better addressed with Service Integration Bus in WAS 6.
For a customer that finds itself in both groups--you have lots of WAS apps communicating, but you also need to communicate with other non-WAS apps--you will still need full WebSphere MQ. Embedded Messaging and Service Integration Bus only support WAS apps, so if any of the apps are not WAS apps, you need full WMQ. WAS 6 has a feature called MQ Link for connecting SIB and WMQ.
So here's the basic breakdown of WebSphere JMS options:
MQ Simulator -- A feature of the test server (aka the single-user WAS server) in WebSphere Studio and Rational Application Developer. Not a real messaging provider (did the term "simulator" tip you off?), it doesn't provide interprocess communication (pretty much a must-have for messaging) or persistence. What it is very useful for, and why it's in the test server, is testing and demoing your WAS apps that use JMS, without needing a separate JMS provider. When you're developing J2EE apps that use JMS, use this simulator.
Embedded Messaging -- A feature of WAS 5 for messaging just between WAS applications. It is a simplified version of the WMQ code base and a full JMS implementation, but it does not provide all of the quality-of-service advantages of full WMQ. It runs as several processes (written in C) outside of the WAS JVMs, so it involves more moving parts that consume more resources and need to be managed.
Service Integration Bus -- The replacement for Embedded Messaging in WAS 6. Implements the JMS spec; implemented in Java, runs in the app server JVM. (Think of it as "Really Embedded Messaging"!) Provides most (all?) of the same quality of service of full WMQ (such as clustering, which works as part of the WAS ND clustering model), but only supports WAS apps.
WebSphere MQ -- Messaging for just about any computer platform used in business, including WAS and JMS. WMQ is used to connect non-J2EE apps, and to connect a J2EE app to a non-J2EE app. It can also be used to connect J2EE apps, although this is usually because you also have non-J2EE apps as well. Written in C, it runs in its own processes, and does not require WAS or Java in any way (unless your app is a WAS app).
External JMS Provider -- This is the support WAS provides for using any J2EE-compliant JMS provider, so you can use our app server with someone else's JMS product.
I kinda thought everybody knew this already, but it's come up a couple of times in the past few weeks, so maybe this is a question that still needs answering: What's the best way to serialize an object, binary or XML?
In theory, XML serialization can be built right into Java the way binary serialization is. This is why binary serialization is implemented as a specific set of classes separate from the rest of Java: so that you can substitute another set of classes that implement another serialization scheme. Instead of ObjectOutputStream and ObjectInputStream, you could have XML equivalents. These would convert any java.io.xml.XMLSerializable (there is no such interface) object to some default XML that specified the class' full name, version, and each of its (non-transient) instance variables. But somehow this idea has never taken off.
So which should you use, binary or XML serialization?
Binary serialization is much more efficient than XML and easier to get working. Writing out an object in binary form is a little faster than text/XML form. Binary form is much more compact and so saves memory and bandwidth. And binary form is much faster and easier to parse than XML.
So why would anyone use XML instead of binary serialization?
Flexibility. Binary serialization requires that both the serializer and the deserializer be Java (but see the comments for details), and both Java programs must have the classes for the objects being serialized in their classpaths. There are also class version issues. XML is not Java-specific and so works with any language. Its text data can more easily be displayed and read by people.
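Both schemes are available in the JDK itself: ObjectOutputStream for binary, and java.beans.XMLEncoder as one bean-based way to get XML (not the hypothetical XMLSerializable scheme, but close in spirit). A quick sketch comparing the two, using a made-up Product bean:

```java
import java.beans.XMLEncoder;
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationDemo {
    // A hypothetical bean. XMLEncoder needs a public no-arg constructor
    // and getters/setters; ObjectOutputStream needs Serializable.
    public static class Product implements Serializable {
        private String name = "";
        public Product() {}
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    public static byte[] toBinary(Product p) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(p); // compact, fast, but Java-only
        }
        return buf.toByteArray();
    }

    public static byte[] toXml(Product p) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (XMLEncoder out = new XMLEncoder(buf)) {
            out.writeObject(p); // verbose, human-readable, language-neutral
        }
        return buf.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        Product p = new Product();
        p.setName("Widget");
        System.out.println("binary bytes: " + toBinary(p).length);
        System.out.println("XML bytes:    " + toXml(p).length);
    }
}
```

Running this shows the size difference directly: the XML form carries the full XML prolog, class name, and property markup, so it is several times larger than the binary form for the same object.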
I like to say this: If you find that your apps are too efficient and don't burden your hardware enough, use more XML. There is no code so inefficient that it can't be made even more inefficient using XML. As a keynote speaker said at OOPSLA a couple of years ago (I think it was Alfred Spector): "We just never thought that the programming community would be so accepting of a format as inefficient as XML."
There are ways to make XML more efficient. Don't include whitespace. Use short element names and shallow namespace trees with short names. There's even a move afoot for "binary XML" (practically an oxymoron); see Better Web Services Performance.
So bottom line, if all your apps writing and reading your data are implemented in Java, use binary serialization. Compared to XML, it's easier to get working and has better performance. If and when you need interoperability with other languages or better support for people reading the serialized data, then go through the extra effort to implement the code for XML marshalling and demarshalling.
The IBM Sequoia supercomputer will be faster than the fastest 500 supercomputers, combined.
So says "IBM Sequoia: Faster Than the Fastest 500 Supercomputers, Combined" on a blog called Gizmodo. "IBM Tapped For 20-Petaflop Government Supercomputer" (InformationWeek) says the Sequoia, which will be built at the Lawrence Livermore National Laboratory for the National Nuclear Security Administration, will deliver 20-petaflops of computing power. That's 20 times more powerful than today's fastest computer. "The system will comprise 96 refrigerator-size racks with a combined 1.6 PB of memory, 98,304 compute nodes, and 1.6 million IBM Power processor cores" and cover 3,422 square feet. "Uncle Sam buys 20 petaflops BlueGene super" (Channel Register) gives details of the Dawn (a BlueGene/P), a 501 teraflop machine to be delivered late this year, and the Sequoia, a 20.13 petaflop machine to be delivered in 2011. The Sequoia will even be energy efficient; "U.S. taps IBM for 20 petaflops computer" (EE Times) quotes IBM's Herb Schultz: "The Sequoia system will be 15 times faster than BlueGene/P with roughly the same footprint and a modest increase in power consumption."
How should you set up WebSphere Process Server in your production environment?
I've talked about WebSphere Process Server (WPS) and the latest version 6.2. The simplest way to install a WPS runtime is a single server on a single node in a single cell, which is perfectly adequate for unit testing (and in fact is the topology for the WPS test server in WebSphere Integration Developer (WID)). However, this is not the best set-up for production. In production, you usually want a basic amount of high availability (HA) so that your users still get service even when part of your infrastructure goes down.
Let's take a quick look at what's in the golden topology. Two details to observe are that the cell contains two nodes and three clusters. Why is that?
Nodes -- The cell contains two nodes which should be installed on two different host machines (or perhaps two LPARs on the same host machine). This way, if one node crashes or has to be taken down for maintenance, the work will fail over to the other node and the users will still be able to do their work. During normal operation, workload management (WLM) will distribute requests across both nodes (and in fact also handles the failover when one node fails). An even better topology might have three or four nodes for increased redundancy.
Clusters -- Since the cell contains multiple nodes, the application should be deployed not in a single server running on a single node, but in a cluster with cluster members (aka application servers) on multiple nodes. In WPS, it's helpful to actually use three clusters, one for the business logic parts of the application and two for some of the main WPS infrastructure.
Business logic cluster -- This is where you deploy your SCA modules, including business process modules that run in the Business Flow Manager and Human Task Manager.
SIBus engine cluster -- This is for WPS to host messaging engines that are part of the service integration bus. A separate cluster is the easiest way to enable all of the business logic app instances to access the messaging engines even if/when they fail over.
CEI engine cluster -- This is for WPS to report what's going on for monitoring by products like Tivoli Composite Application Manager (ITCAM) (for IT monitoring) and WebSphere Business Monitor (for business monitoring). The separate CEI cluster helps lessen monitoring's interference with the business application.
So when deciding how to install WPS in production, the golden topology is at least a good start.
Some posts by my fellow blogger Bob Sutor have made me think more about what an ESB (Enterprise Service Bus) is. A specific comment that caught my attention, in ESB*?, is "if you use WebSphere MQ and other WebSphere brokers or integration servers, you have an ESB today."
Well, yes and no. I think there are two closely related but distinct issues:
what's come to be known as the Message Bus pattern (disclaimer: I'm a co-author of the book)
what is generally today being called an ESB
If you want an ESB today, it's a build-your-own affair. In terms of IBM products, you use:
WebSphere MQ so that all of your applications can talk to each other
WBI Message Broker if you want to route and transform the messages while they're being transmitted between applications
(WBI-MB runs on top of WMQ.) This provides the basic means for applications to invoke each others' services. Each service is represented by a pair of Request-Reply queues. Any application that wants to invoke a particular service places an appropriate request message on the request queue for that service, and then listens for the response message on the service's reply queue. Any application that provides a particular service listens for requests on the service's request queue, performs the service, and then sends the response on the service's reply queue.
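To make that choreography concrete, here is a JMS-free sketch of the pattern in plain Java, with two in-memory BlockingQueues standing in for the service's request and reply queues (with a real Message Bus these would be WMQ/JMS queues looked up by name; all names here are hypothetical):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class RequestReplyDemo {
    // Stand-ins for the service's request and reply queue pair.
    static BlockingQueue<String> requestQueue = new ArrayBlockingQueue<>(10);
    static BlockingQueue<String> replyQueue = new ArrayBlockingQueue<>(10);

    // Provider side: listen on the request queue, perform the service,
    // send the response on the reply queue.
    static Thread startProvider() {
        Thread provider = new Thread(() -> {
            try {
                String request = requestQueue.take();
                replyQueue.put("echo:" + request); // the "service" itself
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        provider.start();
        return provider;
    }

    // Client side: place a request on the request queue, then listen
    // for the response on the reply queue.
    static String invoke(String request) throws InterruptedException {
        requestQueue.put(request);
        return replyQueue.take();
    }

    public static void main(String[] args) throws Exception {
        startProvider();
        System.out.println(invoke("hello"));
    }
}
```

Note that neither side knows anything about the other beyond the queue pair, which is what lets any application invoke, or provide, the service.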
The thing is, this is really a Message Bus, not an ESB. Enterprises with sophisticated use of messaging have been doing this for years, and it's still where we are today for the most part. So today, I don't think it's fair to say that we can really do an ESB, but we can do a Message Bus, and that's a pretty good start.
So, what's the difference between a Message Bus and an ESB? I think there are two key differences in the way clients (applications that invoke services) will interact with an ESB that is better than what they can do with a Message Bus today:
self-describing -- A service in an ESB will be self-describing. Whereas with a Message Bus the client has to simply "know" the format of the request and reply messages, an ESB can tell you what those formats are. The formats will probably be described with WSDL.
discoverable -- An ESB client will be able to query the ESB to ask it what services it hosts, and what queue pair to use for a specific service. This directory capability within the ESB will probably be fairly UDDI-like, in that the client can say "How do I invoke the service with this WSDL?" and the ESB responds with the queues to use (perhaps expressed as WS-Addressing endpoint references (EPRs)). This brings me back to the discussions I've been having with another blogger, James Snell, about What is wrong with UDDI? and Thoughts on the UDDI Problem.
An ESB can do lots of other things, and for that matter a Message Bus can too. (A lot of this is broadly referred to as "mediation.") But how will the client experience differ? And how will you code your clients to work differently? An ESB's services are self-describing and discoverable; a Message Bus' are not.
Why is this not practical yet? Standards. Today WSDL describes the format of the SOAP request and response messages and the HTTP URL for invoking the service. WSDL needs to be expanded with a standard way to specify a pair of request-reply queues instead of an HTTP URL. Given the URL/address for an ESB, there needs to be a simple and standard way to access its directory service, which must implement a standard for querying an ESB for its services and how to invoke them.
These standards are coming, but we don't have them yet. Which is why today, while we strive for an Enterprise Service Bus, we're still at the stage where what we can build is a Message Bus. But don't let that hold you up; a Message Bus is still a very good place to start, and you can start doing that today. (For more help, see "Understand and implement the message bus pattern" by James Snell.) [Read More]
I personally get cloud computing confused with grid computing. According to Wikipedia (chronicler of wikiality), grid computing (part of the onetime future of computing) is a cluster of resources that act together like one big resource, such that you don't care where in the grid your functionality gets performed. This sounds like, for example, a J2EE application deployed to a WAS ND cluster; the user doesn't know nor care which cluster member is performing his work. Cloud computing, says Wikipedia, occurs on the Internet (or some other type of network, I suppose) such that you don't even know where it's occurring. When you perform a search using Google, Amazon, Travelocity, etc., where is your search executing? Silicon Valley, New York City, or Bangalore--it doesn't matter. In fact, users in NYC are probably hitting different servers than those in Bangalore; those servers are running in a cloud. The data centers in Silicon Valley, New York City, and Bangalore should each be running a grid.
"What cloud computing really means" (InfoWorld) (part of Inside the emerging world of cloud computing) doesn't really answer its own question. Instead, it covers all the bases, saying cloud computing can mean: Software as a service (SaaS), utility computing, Web services in the cloud, platform as a service, managed service providers (MSPs), service commerce platforms, and Internet integration. Gee, clear as mud. (At least they didn't say it's Web 2.0 (which I say is MVC for the Web).)
Likewise, "Guide To Cloud Computing" (Information Week) doesn't really say what it is. But Amazon, Google, Salesforce, etc. are all doing it. An example that a lot of journalists are talking about is Amazon Web Services (AWS), which essentially lets you outsource computing jobs to them. Need some data crunched? Give it to Amazon and they'll get it done. Of course, there are a lot of constraints on how you package up your functionality to be performed, you need to have a lot of flexibility on when exactly it gets done, and you may need to worry about the security (esp. privacy) of your data.
Of course, I should also mention that IBM does cloud computing as well. See:
The Africa press release even has an IBM definition of cloud computing:
Cloud computing enables the delivery of personal and business services from remote, centralized servers (the "cloud") that share computing resources and bandwidth -- to any device, anywhere.
Back to David's paper. He divides an application platform into three parts (see Fig. 2): Foundation, such as the operating system, and I'd include middleware like a J2EE application server; Infrastructure Services, other capabilities and middleware that the app uses for persistence, security, messaging, etc.; and Application Services, which perform business functionality and ideally are wrapped up as SOA business services. The upshot (see Fig. 3) is that cloud computing makes infrastructure and application services available outside the enterprise, in the cloud. Cloud computing also enables the app itself to run in the cloud, so you just deploy your app to the cloud and access it from anywhere (again, like a world-wide WAS ND cluster).
To me, this approach isn't that astonishing; I guess someone just had to give it a name. I (and many others, I think) look at SOA as being an app that works as (what I call) a service coordinator consuming services, namely service providers. The key is that the providers for any given service may be inside the enterprise (what David calls on-premises) or may be outside the enterprise (what David calls the cloud). In fact, a single service may have both internal and external providers, and it seems to me that the cloud should include both, so that the app consuming the service doesn't need to know whether the provider is inside or outside the enterprise (or both). I think an important part of solving this problem, making services available to consumers without having to know where the providers are, is the enterprise service bus. This is one of the main points of my articles "Why do developers need an Enterprise Service Bus?" and "Simplify integration architectures with an Enterprise Service Bus" (the latter with James Snell).
So cloud computing is functionality being performed wherever is convenient, where the client application doesn't know nor care where the functionality actually lives. A great approach to make this happen, and to prepare for more of it in the future than may be practical for you today, is to use SOA and ESBs.
How would you like a program to automatically install WAS on all of the machines in your data center?
WebSphere CloudBurst Appliance (press release) is just such a machine. Plug it into your data center network, specify a bunch of other machines on the network, and CloudBurst installs WAS on those machines automatically. We say that what it's installing is a cloud, as in cloud computing. So with CloudBurst and some server computers, you can set up your own private WebSphere cloud in your data center.
CloudBurst is a significant advancement to simplify the installation of WAS environments. The installs are fast, reliable, and easily repeatable; it doesn't require a large operations staff to spend considerable time performing the installs. What's installed is WebSphere Application Server Hypervisor Edition. CloudBurst is an appliance in part to make its installation simple; we wouldn't want a chicken-and-egg problem where the installation tool is as difficult to install as the software it's supposed to install. CloudBurst's development codename was Rainmaker, which Jerry Cuomo discussed in 2009 Trends and Directions for WebSphere.
CloudBurst comes with WAS HV images already installed on it, which it then uses to install WAS HV on the other machines as you direct. This means it can install a cloud of WAS 6.1 and 7.0 servers. Odds are CloudBurst will support other WAS-based products in the future, such as Process Server and Portal.
CloudBurst makes sure that WAS is installed and configured the same way on each host server, which avoids a number of production problems. You can also use CloudBurst to install both your production environment and your test environment, ensuring that the two environments really are configured the same. This cuts down on a problem we often see, where an application problem occurs in production but can't be reproduced in test because the two environments, which are supposed to be identical, in fact are not; those differences can be very difficult to track down.
CloudBurst can also be loaded with feature packs and your own application EARs to have those installed in the cloud along with WAS.
I'm starting to do more work with RAD 6. As with any new release of an IDE, new features have been added and existing ones have been improved (and changed around), so I'm running into some surprises. As I learn how to do little tricks in RAD 6, I'll try to document them here.
So what the heck is a communications enabled application (CEA)? Imagine your user is browsing your Web site and has a question. The user can call a customer service representative, but then the CSR has no idea what Web pages the user is looking at and can't help the user browse to find what they're looking for. With CEA, the user can click a button on the Web page to contact the CSR, which opens up a connection like an IM chat or a VoIP discussion, or the user enters their phone number and the CSR calls it (aka click to call). Not only does this make it easy for the user and CSR to have a conversation, but the button can also enable the CSR to share the user's browser screen, looking at the same Web page the user is seeing, so that together they can collaboratively browse the Web site (aka cobrowsing). This enables the CSR to help the user find what they're looking for and even fill out orders and other forms. A similar button can enable a user to share a Web page with another user so the two can browse the same site together from two different computers (aka peer-to-peer cobrowsing). This would, for example, enable two users to jointly browse for a gift for a third person without having to be at the same computer.
There are non-WebSphere, third-party packages for CEA, but all they really enable is click to call. The CEA functionality is not built into the app, so while the CSR can call the user and know something about the user's context when they clicked the button, the CSR can't cobrowse. The app developer also has to learn how to program the CEA package, which doesn't necessarily have a well-defined Java API like the features in WAS do. And because the third-party CEA is a separate app, it has to be installed and administered separately in the runtime environment and may not have WAS's QoS features for scalability and failover. WebSphere CEA is a better approach because it integrates the CEA features into the WAS app, supports a single integrated Java programming environment, and runs on the WAS platform your app is already using.
The WebSphere development team that created CEA has its own blog, appropriately titled Communications Enabled Applications. This blog is a good place to get the straight dope on how WebSphere CEA works and how to use it. It includes several YouTube videos that show how it works. For example, here's one where Erik Burckart, the chief architect for CEA, explains how cobrowsing works:
As the video shows, the two peer-to-peer cobrowsing users can chat about what they're browsing using an IM built into the Web site.
Imagine elevators that group together people going to the same floors.
I ride on a lot of elevators in a variety of buildings, but today I rode on some of the most interesting ones I've ever seen.
The elevator is actually a bank of elevators. Each elevator is pretty typical, but it's the way they work together that's so interesting. The button panel to call the elevator isn't the usual pair of up and down buttons. Rather it's a list of floors, one button per floor--what's normally inside the elevator rather than on the wall outside. You push the button for the floor you want to go to. A display next to the buttons shows a letter and arrow indicating which elevator to go to. Sure enough, each elevator door has a different letter above it; you go to the elevator indicated on the display and wait for the elevator. When the doors open and you get on the elevator, there are no buttons to press to select the floor; rather a display shows the floors the elevator is going to, including yours. It's like the control system for the elevators is turned inside out, with the controls on the outside instead of the inside.
Rather than pressing "up" or "down" in the lobby, and then indicating the destination floor once one has boarded the elevator, one may alternatively key in one's destination floor whilst in the lobby, using a central dispatch panel. The dispatch panel will then tell the passenger which elevator to use.
What's really cool about this approach is this: Since the elevator system knows where everyone's going (not just up or down), it groups people going to the same floor in the same elevator. Rather than getting on an elevator which stops on every floor, yours only goes to a couple of floors. This means the elevators travel less, saving energy, and elevator rides take less time.
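The grouping idea is easy to sketch in code. Here's a hypothetical Java example (the class, method names, and the simple fill-each-car-in-floor-order policy are mine, not how any real elevator controller works) that collects destination requests, groups them by floor, and assigns the floors to cars so each car makes only a couple of stops:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

public class DestinationDispatchDemo {
    // Assign requested floors to elevator cars, grouping passengers
    // going to the same floor and limiting the stops per car.
    static Map<String, List<Integer>> dispatch(List<Integer> requests,
                                               List<String> cars,
                                               int maxFloorsPerCar) {
        // Group requests by destination floor (count passengers per floor).
        SortedMap<Integer, Integer> byFloor = new TreeMap<>();
        for (int floor : requests) byFloor.merge(floor, 1, Integer::sum);

        // Fill each car with up to maxFloorsPerCar distinct floors.
        Map<String, List<Integer>> assignment = new LinkedHashMap<>();
        Iterator<String> car = cars.iterator();
        String name = car.next();
        List<Integer> stops = new ArrayList<>();
        for (int floor : byFloor.keySet()) {
            if (stops.size() == maxFloorsPerCar) {
                assignment.put(name, stops);
                name = car.next();
                stops = new ArrayList<>();
            }
            stops.add(floor);
        }
        assignment.put(name, stops);
        return assignment;
    }

    public static void main(String[] args) {
        // Seven passengers, but only four distinct destination floors.
        List<Integer> requests = Arrays.asList(12, 7, 12, 3, 7, 12, 9);
        Map<String, List<Integer>> plan =
            dispatch(requests, Arrays.asList("A", "B", "C"), 2);
        System.out.println(plan); // {A=[3, 7], B=[9, 12]}
    }
}
```

Three passengers bound for floor 12 all ride car B, and each car stops at most twice, which is the energy and travel-time win described above.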
Register your destination on a keypad before you enter the elevator.
Advance knowledge of every passenger’s destination before they even reach the elevator.
Reduced passenger journey times.
Elimination of crowding during heavy traffic.
Assurance of a dedicated service for people with special needs.
Greater design flexibility for building core configuration.
I just think this is really neat and wish more large elevator systems (multiple elevator cars, lots of floors) worked this way. BTW, I'm also fortunate that I visited this building with a friend who'd been there before and knew how this worked. Otherwise, I probably would've been pretty confused about how to simply use the elevator!
Most programmers initially learned their craft developing programs that run in a single process (and therefore on a single machine). This may have changed somewhat in the past decade, but we still start out learning to write simple programs and even today we still develop (though not deploy) complex systems in simplified environments.
The jump from single-process programs to distributed computing is difficult, as Peter Deutsch observed in his Fallacies of Distributed Computing. It requires a significant change in mindset. We get used to the idea that every line of code runs very fast and is immediately followed by execution of the next line of code. We know each line of code takes a little time to run and that programs can fail, but generally we get used to programs being reliable stacks of code that can perform meaningful units of work very quickly. But once the program architecture becomes distributed, with parts running in separate processes invoking each other remotely across network connections, these assumptions about the simplicity of program execution are exposed as false.
I think fallacy #7, Transport cost is zero, nicely summarizes this dilemma. We're used to computational overhead being close to zero; when distributed computing changes that, it messes up a lot of our assumptions. This fallacy overlaps with #1: Reliability, #2: Latency, and #3: Bandwidth. They are why programs designed to run in a single process often perform poorly when arbitrarily split across tiers.
Patterns have evolved to address this issue of transport cost. A Session Facade makes remote EJB clients less chatty, since every invocation across the network adds overhead. Transport cost is also a major theme underlying the Enterprise Integration Patterns, especially the first few chapters of the book. Network latency and the marshalling/demarshalling of data are significant issues for RPC/RMI programming; those, along with asynchronous service invocation, are significant issues for messaging.
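To illustrate the Session Facade idea, here's a hypothetical Java sketch (the interface and class names are made up, and the "remote" service is simulated by a round-trip counter rather than real RMI). A chatty client makes three fine-grained calls, each of which would cross the network; a facade client fetches the same data in one coarse-grained call:

```java
// Hypothetical remote interface: imagine each method call crosses the network.
interface CustomerService {
    String getName(int customerId);
    String getAddress(int customerId);
    String getPhone(int customerId);
    CustomerSummary getCustomerSummary(int customerId); // the facade method
}

// Coarse-grained value object the facade returns in a single round trip.
class CustomerSummary {
    final String name, address, phone;
    CustomerSummary(String name, String address, String phone) {
        this.name = name; this.address = address; this.phone = phone;
    }
}

public class SessionFacadeDemo {
    // Simulated remote implementation that counts round trips.
    static class CountingService implements CustomerService {
        int roundTrips = 0;
        public String getName(int id)    { roundTrips++; return "Alice"; }
        public String getAddress(int id) { roundTrips++; return "1 Main St"; }
        public String getPhone(int id)   { roundTrips++; return "555-0100"; }
        public CustomerSummary getCustomerSummary(int id) {
            roundTrips++; // one network hop instead of three
            return new CustomerSummary("Alice", "1 Main St", "555-0100");
        }
    }

    public static void main(String[] args) {
        CountingService chatty = new CountingService();
        chatty.getName(1);
        chatty.getAddress(1);
        chatty.getPhone(1);
        System.out.println("chatty client round trips: " + chatty.roundTrips);

        CountingService facade = new CountingService();
        facade.getCustomerSummary(1);
        System.out.println("facade client round trips: " + facade.roundTrips);
    }
}
```

In a real EJB deployment each round trip also pays for marshalling and latency, so collapsing three calls into one is a bigger win than the counter alone suggests.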
Systems were easier to design and develop when they were assumed to run in a single process. Distributed computing makes that a lot more complicated. But what is a problem is also an opportunity.
I think you've made a very good point here. A lot of the focus of cloud computing is deploying an entire app on a public cloud so that it can be accessed from anywhere on the Internet/Web and so that the app owner doesn't have to host it on his own equipment. Even in this scenario, I think it makes sense to design the app as an SOA, for all the reasons that SOA generally makes for better business apps than a traditional monolithic layered architecture (a.k.a. enterprise application architecture) does.
But an equally important approach, which I think is a lot less recognized, is that individual SOA services can be deployed to public and private clouds, making them easy to host and easy to access from consumer applications and composite services. When an app or composite service aggregates several services, those services don't have to be deployed in the same data center or cloud as the app; they can be deployed in various other clouds.
Like I said in Intro to Cloud Computing, "cloud computing sounds like SOA and ESBs to me."
There are three different achievement levels--contributing, professional, and master--which show increasing levels of accomplishment.
If you've published on dW and would like to receive this recognition, About the program explains how to register and how to gain points. This also applies to anyone who would like to become an author and start tracking points for recognition. (It's sort of like a frequent flyer program for authors.) When you register, you'll get a welcome package with a tracking tool; this may take a couple of weeks, especially if you have published several articles, because the tool will be prepopulated with that list of contributions. Once you have the tool, you can use it to track your progress and submit it when you achieve a new level.
Also, even if you're "just" a reader of articles, check out the list of recognized authors to see who's been contributing a lot, and then search to find their articles and check them out.
WMQ v7.0.1 and WMB v7 have a new feature for standby processes that makes the products more highly available.
The feature in WebSphere MQ, introduced in v7.0.1, is called multi-instance queue managers. The corresponding feature in WebSphere Message Broker, introduced in v7, is called multi-instance brokers. In both cases, the queue manager or broker runs in two processes, one active and the other on standby. If the active one fails, the product automatically fails over to the standby, with virtually no service interruption. Note that any resources the processes use, such as a database, must have their own high availability capabilities.
Multi-instance queue manager
In prior versions, to make WMQ or WMB highly available, one had to use hardware clustering (such as PowerHA (formerly known as HACMP) or Veritas). Hardware clustering may still be the gold standard for HA, but for environments that don't quite need the gold standard, software clustering via the multi-instance feature may be good enough.
This new design reminds me of how the service integration bus in WebSphere Application Server works. An SIB bus is a collection of messaging engines managed by the WAS HA manager. By default, it runs two copies of each messaging engine, an active and a standby. If (parts of) the cluster lose communication with the messaging engine, the HA manager switches them to use the standby. Only one copy of the messaging engine can be active and only that one can maintain a lock on the external storage for the persistent messages. Multi-instance queue managers work in much the same way.
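The lock-on-external-storage idea can be simulated in a few lines. Here's a hypothetical Java sketch (a toy simulation using java.nio file locks, not how WMQ or the SIBus is actually implemented) in which two instances contend for a lock on a shared file; the loser stands by, and it takes over when the active instance releases the lock:

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;

public class ActiveStandbyDemo {
    // Try to become the active instance by locking the shared file.
    // Returns the lock if we won (active), or null if we must stand by.
    static FileLock tryBecomeActive(FileChannel channel) {
        try {
            return channel.tryLock();
        } catch (OverlappingFileLockException e) {
            return null; // another instance in this JVM already holds the lock
        } catch (IOException e) {
            return null;
        }
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the lock file on networked storage.
        File shared = File.createTempFile("qm", ".lock");
        shared.deleteOnExit();

        FileChannel a = new RandomAccessFile(shared, "rw").getChannel();
        FileChannel b = new RandomAccessFile(shared, "rw").getChannel();

        FileLock lockA = tryBecomeActive(a);
        System.out.println("instance A active: " + (lockA != null));

        FileLock lockB = tryBecomeActive(b);
        System.out.println("instance B active: " + (lockB != null));

        lockA.release();            // simulate failure of the active instance
        lockB = tryBecomeActive(b); // the standby takes over
        System.out.println("instance B active after failover: " + (lockB != null));

        lockB.release();
        a.close();
        b.close();
    }
}
```

The essential property is the same one the HA manager and multi-instance queue managers rely on: only the holder of the lock may touch the persistent store, so there is never more than one active instance.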
WebSphere MQ V7.0.1 introduced the ability to configure multi-instance queue managers. This capability allows you to configure multiple instances of a queue manager on separate machines, providing a highly available active-standby solution.
WebSphere Message Broker V7 builds on this feature to add the concept of multi-instance brokers. This feature works in the same way by hosting the broker's configuration on network storage and making the brokers highly available.
So if you needed a reason to upgrade to WMQ and WMB 7, now you have it.
Thanks to my colleague Guy Hochstetler who made me aware of this new feature.