WMQ v7.0.1 and WMB v7 have a new standby-process feature that makes the products more highly available.
The feature in WebSphere MQ, introduced in v7.0.1, is called multi-instance queue managers. The corresponding feature in WebSphere Message Broker, introduced in v7, is called multi-instance brokers. In both cases, the queue manager or broker runs in two processes, one active and the other on standby. If the active one fails, the product automatically fails over to the standby, with virtually no service interruption. Note that any resources the processes use, such as a database, must have their own high availability capabilities.
Multi-instance queue manager
In prior versions, to make WMQ or WMB highly available, one had to use hardware clustering (such as PowerHA, formerly known as HACMP, or Veritas). Hardware clustering may still be the gold standard for HA, but for environments that don't quite need the gold standard, software clustering via multi-instance queue managers and brokers may be good enough.
This new design reminds me of how the service integration bus in WebSphere Application Server works. An SIB bus is a collection of messaging engines managed by the WAS HA manager. By default, it runs two copies of each messaging engine, an active and a standby. If (parts of) the cluster lose communication with the messaging engine, the HA manager switches them to use the standby. Only one copy of the messaging engine can be active and only that one can maintain a lock on the external storage for the persistent messages. Multi-instance queue managers work in much the same way.
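The core of this design, in both SIB and multi-instance queue managers, is that only one instance can hold an exclusive lock on shared storage at a time. Here's a minimal sketch of that idea in Python using POSIX file locks; this is just an illustration of the locking technique, not WMQ's actual implementation (names are mine):

```python
import fcntl


def try_become_active(lock_path):
    """Try to take the exclusive lock that marks the active instance.

    Returns the open lock file on success (keep it open to hold the
    lock), or None if another instance is already active.
    Uses fcntl.flock, so this sketch is Unix-only.
    """
    f = open(lock_path, "w")
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return f  # we hold the lock: we are the active instance
    except OSError:
        f.close()
        return None  # lock is held elsewhere: stay on standby and retry
```

A standby instance simply loops, calling `try_become_active` periodically; the moment the active instance dies (and its lock is released), the standby's next attempt succeeds and it takes over.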
WebSphere MQ V7.0.1 introduced the ability to configure multi-instance queue managers. This capability allows you to configure multiple instances of a queue manager on separate machines, providing a highly available active-standby solution.
WebSphere Message Broker V7 builds on this feature to add the concept of multi-instance brokers. This feature works in the same way by hosting the broker's configuration on network storage and making the brokers highly available.
So if you needed a reason to upgrade to WMQ and WMB 7, now you have it.
Thanks to my colleague Guy Hochstetler who made me aware of this new feature.
I find that customers often confuse the role of an environment with its quality of service.
I previously discussed Data Center Environments, specifically the typical environment roles of Dev, Test, Stage, and Prod. These separate environments keep code under development (Dev) away from code the enterprise's users use to do their work (Prod). They also create a reliable, controlled environment for performing testing (Test) and for practicing installation and migration procedures (Stage).
These are environment roles, which describe who should be able to access an environment, what it is used for, and therefore what it should and shouldn't contain. For example, only the Prod environment should be able to access and change real customer data. Stage may contain a separate copy of the production data. Dev and Test shouldn't even have a copy of the production data, which is probably confidential and should be protected, but instead should contain a representative set of fake data. (Use a Data Obfuscator to produce test data from a set of production data.)
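To make the obfuscation idea concrete, here's a toy sketch of the kind of field-level masking a data obfuscator performs. The record layout and field names are invented for illustration; real products do much more (format-preserving masking, cross-table consistency, and so on):

```python
import hashlib


def obfuscate_record(record, confidential_fields=("name", "email", "ssn")):
    """Return a copy of the record with confidential fields replaced by
    deterministic fake values, so test data stays realistic but safe."""
    masked = dict(record)
    for field in confidential_fields:
        if field in masked:
            # A deterministic digest maps the same input to the same fake
            # value every time, preserving joins across obfuscated tables.
            digest = hashlib.sha256(str(masked[field]).encode()).hexdigest()[:8]
            masked[field] = f"{field}-{digest}"
    return masked
```

Non-confidential fields (balances, dates, statuses) pass through untouched, so the fake data still exercises the same code paths as production data.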
The role of an environment is often confused with the quality of service (QoS) an environment should support to meet its requirements. One common example is availability. The applications running in Prod are typically assumed to need to be available 24x7 (aka always). Test and Stage are understood to be unreliable, in that they may be taken down or crash at any time as testing needs dictate. The Dev environment is typically assumed to be fairly reliable, but with the understanding that outages are acceptable.
These assumptions about the availability of different environments can become a problem for repository products like Rational Asset Manager (RAM) and WebSphere Services Registry and Repository (WSRR). Dev environments are typically not managed for reliability, yet products like RAM and WSRR (used in development to manage SOA governance) need to be reliably available. This is likewise true for the source code management system, but somehow the reliability requirements of RAM and WSRR are seen as being much more complex.
Long story short, customers often decide to install RAM and WSRR in their Prod environment simply because that has people prepared to manage WebSphere Application Server (WAS) servers (which is what RAM and WSRR run in) and make those WAS servers highly available. This, in my mind, is kind of crazy. RAM and WSRR store development artifacts, which are not used by production applications any more than source code is, and so should not be stored in Prod.
Customers often insist on installing RAM and WSRR in Prod because it's set up to make them highly available. I think the far better approach is to set up a couple of WAS servers in Dev for reasonable (maybe high) availability and install RAM and WSRR there, and assign personnel (who perhaps normally work in Prod) to manage those servers in Dev.
I'd be interested to hear from customers using RAM and/or WSRR: What environment do you have them installed in?
One issue I see customers get confused on is the purpose of separate but equivalent software environments.
An enterprise should divide its IT servers and software into multiple separate and fairly independent environments. The number and purpose of these can vary somewhat, but a typical separation is these four environments:
Dev -- The development environment used to implement and compile software. Typically used for unit testing as well.
Test -- Used to perform functional testing and otherwise make sure that the software from development meets requirements. Scalability testing can be performed here if the hardware is robust and representative of Prod; if it's a shell, scalability results may well be misleading.
Stage -- A representative mirror of Prod, a place to test installation and migration procedures and perhaps the best place for scalability testing. Can also be used as an alternative/backup for Prod. New applications can be deployed by installing them in (part of) Stage, testing that, then swapping it for (part of) Prod.
Prod -- The production environment used to execute applications so that users (internal and external) may use them.
The users in the enterprise really only care about Prod. Dev, Test, and Stage are only used by IT. "The Ideal WebSphere Development Environment" is an old but good article which explains environments and how to use them in greater detail.
These environments are actually four roles that an environment can play. An environment in the Dev role needs development tools and test data, but probably doesn't need monitoring. The environment in the Prod role is the only one that should store confidential customer data and should have monitoring to verify that it's running properly.
The role of an environment is independent of its quality of service (QoS) requirements, a topic I'll discuss in my next posting.
The example shows CBM and SOMA being used to model a Rent-a-car company. I especially like the way it shows a broad capability like Rent Vehicle decomposed into atomic services which can be implemented in IT and composed again into a business process. This is a SOMA technique called Domain Decomposition. It also shows how Rentals and Reservations is a CBM component in the Rentals Management business competency. This component has services like Rent Vehicle--a coarse-grained service which, when decomposed, requires finer-grained services like Check Vehicle Availability and Get Customer Information which are offered by other components, Fleet Management in the Fleet Logistics competency and Customer Service in the Marketing competency. This shows how useful composite services often reach across competencies to make a company with many different lines of business operate for its customers as a coherent whole. And notice that all of these competencies are in the Execute accountability level, because they're parts of actually making the reservation (not planning for what one would be like, but actually making one). This all makes more sense if you look at the pictures, which is the point of all the graphics in these techniques, so go look at the PDF.
So the better the business/IT alignment in an SOA, the more IT becomes an enabler of business flexibility instead of an impediment. SOMA helps figure out the services, and CBM as a front-end to SOMA helps it focus all the more on services which model the business. This is a winning combination.
SOA Governance: Achieving and Sustaining Business and IT Agility is a fantastic book which shows how any service-oriented architecture project can be run more predictably and productively, decreasing cost and increasing ROI. The architects and project managers in charge of any significant SOA project should know the material in this book.
The book is written by four very knowledgeable SOA practitioners at IBM (which also explains why it’s published by IBM Press). Books written by multiple authors often read as independent chapters that don’t flow as a book, but these authors have collaborated well to produce a consistent whole. They have distilled their knowledge of how to manage SOA projects into what is really two books in one: 1) A model for managing SOA projects via 2) A process for performing SOA projects. The latter is based on tasks which produce work products, specific concrete deliverables which make project management much more straightforward. The latter half of Chapter 3 is a catalog of governance work product types, and Chapter 4 catalogs service development work product types. These form the basis for the SOA governance model described in Chapter 5, which details step-by-step tasks in the processes for governing the development of SOA applications, tasks which create the work products described previously.
I enjoyed all the touches of simple, practical advice spread throughout the book. One example is “Our experience has been that establishing a dedicated SOA CoE [Center of Excellence] is one of the most important organizational changes the governance planning team can make.” (p. 237) Another example is the sections titled “What Distinguishes the SOA Winners?” and “Antipatterns: Common SOA Pitfalls” (pp. 43-50). Almost every section begins with a quotation that has nothing to do with SOA governance and yet usually illustrates the section quite nicely. For example, the section on “Governance Mechanisms” (p. 33) begins with this quote attributed to Colin Powell: “Great leaders are almost always great simplifiers, who can cut through argument, debate and doubt, to offer a solution everybody can understand.”
No book is perfect, nor is this one. Chapter 6 on managing the lifecycle is not as strong and badly needs more copyediting. For example, after doing a nice job of distinguishing between processes and tasks (p. 268), other parts of the chapter start distinguishing between tasks and what are sometimes called processes but sometimes called services. I’d also quibble that they focus overly much on whether operations can be automated since it’s also valid for a task in a process to be a human task. Nevertheless, these complaints are minor in what overall is a collection of very useful information.
(Disclaimer: I, like the authors of this book, am employed by IBM.)
Remember the advertisement IBM made for enterprise service buses?
IBM TV AD - Universal Business Adapter is an ad from several years ago featuring a device that connects anything to anything. I think it's meant to explain to a business audience what an ESB is, or what business adapters are more broadly.
Check it out:
I really like this ad. Glad I finally found it; YouTube has everything!
Address the #1 Success Factor in SOA Implementations: Effective, Business-Driven Governance
Inadequate governance might be the most widespread root cause of SOA failure. In SOA Governance, a team of IBM’s leading SOA governance experts share hard-won best practices for governing IT in any service-oriented environment.
The authors begin by introducing a comprehensive SOA governance model that has worked in the field. They define what must be governed, identify key stakeholders, and review the relationship of SOA governance to existing governance bodies as well as governance frameworks like COBIT. Next, they walk you through SOA governance assessment and planning, identifying and fixing gaps, setting goals and objectives, and establishing workable roadmaps and governance deliverables. Finally, the authors detail the build-out of the SOA governance model with a case study.
The authors illuminate the unique issues associated with applying IT governance to a services model, including the challenges of compliance auditing when service behavior is inherently unpredictable. They also show why services governance requires a more organizational, business-centric focus than “conventional” IT governance.
Understanding the problems SOA governance needs to solve
Establishing and governing service production lines that automate SOA development activities
Identifying reusable elements of your existing IT governance model and prioritizing improvements
Establishing SOA authority chains, roles, responsibilities, policies, standards, mechanisms, procedures, and metrics
Implementing service versioning and granularity
Refining SOA governance frameworks to maintain their vitality as business and IT strategies change
But it's that and more. According to Gunnar Peterson, the biggest hurdle to becoming a security pro is understanding security integration, and the best way to learn that is by reading EIP. This is because, Peterson explains, it's easier to teach security to developers who know how to design distributed systems well than it is to teach network security experts how to develop applications.
And I quote:
Rather than obsessing about the latest and greatest threat, its much more strategically important to sort out the logistics, constraints, and economics to distribute and scale out the security mechanisms and processes we have. Specifically how are they impacted by and how do they impact the message flows, endpoints, routing, transformation, and management. These patterns are aptly described and cataloged in Hohpe and Woolf's book and provide an important starting point for meaningful and useful security improvement over time.
So if you'd like to learn how to design distributed systems so that they can be secured easily and effectively, check out EIP.
Integration is the primary use case for more than half of the ESBs deployed today. The core language of EAI, defined in the book Enterprise Integration Patterns by Gregor Hohpe and Bobby Woolf, is also the core language of defining ESB flows and orchestrations, as seen in the ESB's developer tooling. For the users seeking integration, the ESB brings connectivity, protocol conversion, mediation, and other integration features together in one place to support the design, development, and management of integrated business solutions.
I'm interested that 50% of ESBs today are used primarily for integration. What are the other 50% being used for? In fact, I'd be more specific and say that an ESB should be used for service integration (or as IBM likes to call it, "service connectivity"), i.e. connecting together service requestors and providers in an SOA. Buses can be used for other things like transporting data and providing event notification, but I wouldn't exactly call those functions of a service bus.
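To give a flavor of what that "core language" looks like in an ESB flow, here's the Content-Based Router pattern from EIP sketched in Python. The channel names and routing rules are invented for illustration; in real ESB tooling you'd wire this up graphically rather than in code:

```python
def content_based_router(message, routes, default_channel):
    """EIP's Content-Based Router: examine the message content and pick
    an output channel. Each route is a (predicate, channel) pair; the
    first predicate that matches wins, else the default channel is used."""
    for predicate, channel in routes:
        if predicate(message):
            return channel
    return default_channel


# Example routing table for an order-processing flow (names invented):
order_routes = [
    (lambda m: m.get("type") == "order" and m.get("amount", 0) > 10_000,
     "large-order-approval"),
    (lambda m: m.get("type") == "order",
     "order-processing"),
]
```

The same vocabulary--channels, routers, translators, endpoints--is what you see in the palette of an ESB's developer tooling, which is the point the report is making.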
Anyway, nice to be thought of as having helped to document the core language of EAI.
Thanks to my friend Dave for pointing out to me that I was mentioned in this report.
Can WSDL be used to describe RESTful Web services?
An on-going area of interest has been REST vs. SOAP/WSDL, such as Web Services: REST vs. SOAP/WSDL. In general, REST and SOAP/WSDL have been seen as two very different approaches where you had to choose one or the other because never the twain shall meet.
Now I've stumbled upon "Describe REST Web services with WSDL 2.0" (developerWorks), which explains that "Until recently there was no formal language to describe REpresentational State Transfer (REST) Web services-now there's WSDL 2.0." It's by Lawrence Mandel, who works in IBM Rational and leads the Apache Woden project (which is developing a WSDL 2.0 parser). Looks like WSDL 2.0 is becoming an alternative or replacement for WADL.
A dude from Dopplr has posted a presentation on how it works.
"Made of Messages" by Matt Biddulph explains "It's important to think about serverside architecture as an asynchronous system." It then explains how the internals of Dopplr work very asynchronously. Good stuff.
The part that really caught my attention is slides 6-8, where Matt recommends reading Enterprise Integration Patterns in order to understand asynchronous programming and messaging better. Matt says, "This is a great book. Really. Ignore the name." (Apparently there's a problem with the name?)
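The "asynchronous system" idea Matt is recommending boils down to putting a message channel between the producer and the consumer so the producer never waits on the work. Here's a minimal in-process sketch using Python's standard library (a Point-to-Point Channel in EIP terms; the worker's "work" is a stand-in):

```python
import queue
import threading


def start_worker(channel, results):
    """Consume messages from the channel on a background thread; the
    producer just enqueues and moves on (fire-and-forget)."""
    def run():
        while True:
            msg = channel.get()
            if msg is None:  # sentinel message: shut down cleanly
                break
            results.append(msg.upper())  # stand-in for real work
    t = threading.Thread(target=run, daemon=True)
    t.start()
    return t


channel = queue.Queue()  # a point-to-point channel, in EIP terms
results = []
worker = start_worker(channel, results)
for text in ("trip to berlin", "trip to tokyo"):
    channel.put(text)  # send and continue; no blocking on a reply
channel.put(None)
worker.join()
```

In a real deployment the queue would be an external broker rather than an in-process `queue.Queue`, but the shape of the design--and the decoupling it buys you--is the same.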
So if you're busy doing everything all Ajax, Web 2.0, mash-up and what all--hey, it turns out that Enterprise Integration Patterns is still a good book for you, even if you don't think of what you're doing as "enterprise" or "integration." Thanks, Matt. (And thanks to my friend Andy Piper for letting me know about Matt's presentation.)
The customer reviews on Amazon give Enterprise Integration Patterns five stars.
Enterprise Integration Patterns is a book I co-authored; check out the reviews for yourself. For a couple of years now, the total has been stuck at 4.0-4.5 stars because of helpful reviews like "A Tactical Book" (which says: it's all about using messaging systems) and "Good for concepts but lacks practical usage" (which says: After reading this book I know concepts but still have to buy real Biztalk book.). I'm also amused by "interesting patterns with a little bit of hype", which warns, "One word of warning, it's a "Martin Fowler Signature Series Book", which means it's more interested in being on the bleeding edge as opposed to being thorough." As Martin once commented to me: I feel sorry for anyone who considers this stuff bleeding edge.
Anyway, those reviews aside, there are now enough 5-star reviews to make the overall average round up to 5.0. Not bad for a book that was published almost five years ago, an eternity for a computer book.
For some good developerWorks articles on this topic, see:
I personally get cloud computing confused with grid computing. According to Wikipedia (chronicler of wikiality), grid computing (part of the onetime future of computing) is a cluster of resources that act together like one big resource, such that you don't care where in the grid your functionality gets performed. This sounds like, for example, a J2EE application deployed to a WAS ND cluster; the user doesn't know nor care which cluster member is performing his work. Cloud computing, says Wikipedia, occurs on the Internet (or some other type of network, I suppose) such that you don't even know where it's occurring. When you perform a search using Google, Amazon, Travelocity, etc., where is your search executing? Silicon Valley, New York City, or Bangalore--it doesn't matter. In fact, users in NYC are probably hitting different servers than those in Bangalore; those servers are running in a cloud. The data centers in Silicon Valley, New York City, and Bangalore should each be running a grid.
"What cloud computing really means" (InfoWorld) (part of Inside the emerging world of cloud computing) doesn't really answer its own question. Instead, it covers all the bases, saying cloud computing can mean: Software as a service (SaaS), utility computing, Web services in the cloud, platform as a service, managed service providers (MSPs), service commerce platforms, and Internet integration. Gee, clear as mud. (At least they didn't say it's Web 2.0 (which I say is MVC for the Web).)
Likewise, "Guide To Cloud Computing" (Information Week) doesn't really say what it is. But Amazon, Google, Salesforce, etc. are all doing it. An example that a lot of journalists are talking about is Amazon Web Services (AWS), which essentially lets you outsource computing jobs to them. Need some data crunched? Give it to Amazon and they'll get it done. Of course, there are a lot of constraints on how you package up your functionality to be performed, you need to have a lot of flexibility on when exactly it gets done, and you may need to worry about the security (esp. privacy) of your data.
Of course, I should also mention that IBM does cloud computing as well. See:
The Africa press release even has an IBM definition of cloud computing:
Cloud computing enables the delivery of personal and business services from remote, centralized servers (the "cloud") that share computing resources and bandwidth -- to any device, anywhere.
Back to David's paper. He divides an application platform into three parts (see Fig. 2): Foundation, such as the operating system, and I'd include middleware like a J2EE application server; Infrastructure Services, other capabilities and middleware that the app uses for persistence, security, messaging, etc.; and Application Services, which perform business functionality and ideally are wrapped up as SOA business services. The upshot (see Fig. 3) is that cloud computing makes infrastructure and application services available outside the enterprise, in the cloud. Cloud computing also enables the app itself to run in the cloud, so you just deploy your app to the cloud and access it from anywhere (again, like a world-wide WAS ND cluster).
To me, this approach isn't that astonishing; I guess someone just had to give it a name. I (and many others, I think) look at SOA as being an app that works as (what I call) a service coordinator consuming services, namely service providers. The key is that the providers for any given service may be inside the enterprise (what David calls on-premises) or may be outside the enterprise (what David calls the cloud). In fact, a single service may have both internal and external providers, and it seems to me that the cloud should include both, so that the app consuming the service doesn't need to know whether the provider is inside or outside the enterprise (or both). I think an important part of solving this problem, making services available to consumers without having to know where the providers are, is the enterprise service bus. This is one of the main points of my articles "Why do developers need an Enterprise Service Bus?" and "Simplify integration architectures with an Enterprise Service Bus" (the latter with James Snell).
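The location transparency argued for here reduces to a simple indirection: the consumer addresses a service by name, and the bus decides which provider--on-premises or in the cloud--actually handles the call. A toy sketch of that routing role (all service and provider names are invented):

```python
class ServiceBus:
    """Minimal stand-in for an ESB's routing role: consumers invoke a
    service by logical name and never learn where the provider lives."""

    def __init__(self):
        self._providers = {}  # service name -> list of provider callables

    def register(self, name, provider):
        self._providers.setdefault(name, []).append(provider)

    def invoke(self, name, request):
        providers = self._providers.get(name)
        if not providers:
            raise LookupError(f"no provider for service {name!r}")
        # Trivial selection policy: first registered wins. A real bus
        # could load-balance or fail over between on-premises and cloud
        # providers here, invisibly to the consumer.
        return providers[0](request)


bus = ServiceBus()
# One internal and one external provider for the same logical service:
bus.register("credit-check", lambda req: {"score": 720, "source": "on-premises"})
bus.register("credit-check", lambda req: {"score": 720, "source": "cloud"})
```

The consumer's code is identical whichever provider the bus selects, which is exactly the property that lets you move a service into (or out of) the cloud without touching the applications that consume it.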
So cloud computing is functionality being performed wherever is convenient, where the client application doesn't know nor care where the functionality actually lives. A great approach to make this happen, and to prepare for more of it in the future than may be practical for you today, is to use SOA and ESBs.
OOPSLA 2008 is the 23rd annual international conference on Object-Oriented Programming, Systems, Languages, and Applications, sponsored by the ACM (Association for Computing Machinery). It'll be held October 19-23 in Nashville, Tennessee, USA. OOPSLA is the place where many great techniques have gotten started, such as: patterns, Aspect-Oriented Programming, Extreme Programming (XP), unit testing, UML, wikis, and refactoring.