Polymorphic Services, Part 2: Polymorphic function

This is the second part of a two-part series on Polymorphic Web Services, highlighting the need for agility in the way we define our SOA services, in order to minimize the impact of future changes in business requirements. Part one focused on polymorphic data, and introduced several patterns for enabling information variability. This article will present various techniques that can be used to invoke services in a polymorphic manner, and abstract the underlying complexities of service variations from the service consumer.


Scott M. Glen (scott.glen@uk.ibm.com), IT Architect, IBM

Scott Glen is an IT architect with IBM's Advanced Technology Group (ATG). He has over 15 years of experience in the architecture, design, and development of object-oriented systems, providing consultancy to the finance, government, telecommunications, and media sectors. With a particular interest in WebSphere, J2EE architectures, and associated design patterns, he now specializes in SOA, providing consultancy and implementation services to clients across EMEA.

08 March 2010


In part one we analyzed some techniques for information variability, specifically using xsd:extension to create derived schema elements that can be used to construct polymorphic web services. We created several types of account element (SavingsAccount, CurrentAccount, PlatinumAccount), all derived from an abstract base Account element. We then built a service that could implicitly process any of these derived accounts in a polymorphic-like manner.


So polymorphic data, implemented through the xsd:extension capability, lets us process a variable payload through a single web service operation. However, in each case we will invoke the same processing. In many cases this will be the desired behavior, but what if we not only want a single point of contact for variable payloads, but also require variable processing based on the data being submitted? We could just write a big switch statement inside our service, but that's hardly maintainable, and we're talking here about using good architecture and design to minimize the impact of change. Up to 70% of effort goes on maintaining and enhancing a software solution; change is inevitable, so as software engineers it's up to us to utilize the design patterns that minimize the impact of those changes.

So a big switch statement is out of the question; we need something more elegant, powerful, and adaptable. Going back to our object-oriented analogy, we really want to mirror the behavior of polymorphic invocation, where a method on an interface can be invoked and will implicitly call through to the corresponding method on the appropriate concrete implementation class, without the consumer being aware. If we apply this philosophy at the service level, then we could create a number of related services, each of which implements the same service interface, but provides a slightly different level of functionality. The consumer invokes the service interface, which actually behaves as a proxy, introspecting the contents and context of the request, and ultimately using some 'intelligent glue' to invoke the most relevant underlying service. So how can we achieve this level of variability in the service world?
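The dispatch idea described above can be sketched in a few lines. This is a hypothetical illustration, not WebSphere code: handlers register themselves against a derived account type, and a facade introspects the request's declared type (akin to an xsi:type discriminator) to pick the most specific handler. All names are invented.

```python
# Registry mapping a derived account type to its handler.
HANDLERS = {}

def handles(account_type):
    """Decorator registering a handler for one derived account type."""
    def register(fn):
        HANDLERS[account_type] = fn
        return fn
    return register

@handles("SavingsAccount")
def process_savings(payload):
    return "savings processing for " + payload["id"]

@handles("PlatinumAccount")
def process_platinum(payload):
    return "platinum processing for " + payload["id"]

def service_facade(payload):
    """Single point of contact: introspect the payload type and dispatch."""
    handler = HANDLERS.get(payload["type"])
    if handler is None:
        raise ValueError("no handler registered for " + payload["type"])
    return handler(payload)
```

Adding a new account variation means registering one more handler; the facade and its consumers are untouched, which is the property the switch statement lacks.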

Historical Patterns

Back in the day, when this kind of behavior was required we would typically have written a proxy service and created some kind of persistent store (e.g. database tables) to hold the details of the service variations, documenting their capabilities, physical endpoints, and so on. We may even have been clever enough to extend this mechanism to store the relationship between the type of request and the endpoint used to service that request, meaning that we could, to some degree, modify the service invocation without altering code. It might look something like this:-

Figure 1. Proxy Service Pattern
Proxy Service Pattern

This works, but it is bespoke and brittle - the decision and invocation logic is hardcoded into the proxy service - and it is a solution still rooted within the IT domain.
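A minimal sketch of that historical proxy, with a routing table standing in for the persistent store; the endpoint URLs and table shape are invented for illustration. The point is that the request-type-to-endpoint relationship lives in data rather than code:

```python
# Routing data (standing in for database tables): request type -> endpoint.
ROUTING_TABLE = {
    "SavingsAccount":  "http://accounts.example.com/savingsService",
    "PlatinumAccount": "http://accounts.example.com/platinumService",
}

def proxy_route(request_type):
    """Resolve the physical endpoint for a request type.

    New service variations are added by updating the routing data,
    not by altering the proxy's code."""
    endpoint = ROUTING_TABLE.get(request_type)
    if endpoint is None:
        raise LookupError("no endpoint registered for " + request_type)
    return endpoint
```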

Enterprise Service Bus / Registry Pattern

With the introduction of the ESB pattern we were able to enhance our basic dynamic invocation technique, particularly when working in conjunction with a service registry such as the WebSphere Service Registry & Repository (WSRR). The ESB brings a standard method for exposing services, and enables protocol and data transformations through the use of mediations, whilst the service registry provides a standardized and integrated search and discovery capability. Together they can be used to create the following dynamic invocation mechanism:-

Figure 2. ESB / Registry Pattern
ESB / Registry Service Pattern

This provides a fairly tried and tested approach, using off-the-shelf components which are aligned with various web service standards. However, the mediation is essentially a glorified proxy, containing hardcoded logic to introspect the request and interrogate the registry. As business change occurs we are still likely to need IT involvement to enhance the mediation to cater for new requirements.
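The mediation's role can be sketched as follows. The registry here is a stub with an invented `find` method; WSRR exposes a much richer query and classification model, so treat this purely as an illustration of where the hardcoded introspection logic sits:

```python
class StubRegistry:
    """Stand-in for a service registry such as WSRR (API invented)."""
    def __init__(self, entries):
        self._entries = entries  # classification -> list of endpoints

    def find(self, classification):
        return self._entries.get(classification, [])

def mediate(message, registry):
    """Introspect the message, query the registry, pick an endpoint."""
    # Hardcoded introspection -- the weakness noted in the text: new
    # requirements mean changing this logic.
    classification = message["body"]["accountType"]
    endpoints = registry.find(classification)
    if not endpoints:
        raise LookupError("registry has no endpoint for " + classification)
    return endpoints[0]  # naive selection policy
```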

Business Rules Management System (BRMS) Pattern

Business rules have long been used to remove complex logic or mathematical algorithms from software components. Externalizing logic in a BRMS allows analysts to modify the rules, potentially without IT involvement, enabling a swift and less invasive response to the need for change.

Aside from providing a business decision-making capability, rules can also be used to make dynamic routing decisions within an SOA. Although not traditionally used for this purpose, the need for flexibility around service invocation, coupled with existing BRMS skills, has led some clients to utilize their BRMS as the backbone of a dynamic invocation mechanism.

Figure 3. BRMS Pattern
BRMS Pattern

Here we no longer use a mediation to provide the decision logic, but instead delegate to an external rules engine (which may potentially interact with a registry) in order to decide which service endpoint to invoke for each given request.

On the positive side, this externalizes the service invocation rules, enabling them to be expressed through any standard vocabulary that is implemented within the BRMS. All types of rules are now managed in one place, enabling existing BRMS skills to be reused, alongside the application of current rules deployment and governance practices.

However, the rules governing service invocation and those that, say, define a pricing algorithm perform completely different functions, and are likely to be implemented by two different user communities. There may also be performance considerations to take into account; invoking an external application for each routing decision is expensive, and whilst some BRMSs, such as JRules, can now be run in the same JVM as the ESB, this introduces its own limitations on the way rules are implemented and managed.
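The essence of rule-driven routing is that the routing rules are data, maintained outside the mediation, evaluated in priority order. The rule format below is invented for illustration; a real BRMS such as JRules expresses conditions in its own vocabulary and manages them through its own lifecycle:

```python
# Routing rules as data: (condition, endpoint), evaluated in priority order.
# A final catch-all rule provides the default route.
ROUTING_RULES = [
    (lambda req: req["amount"] > 100000, "http://svc.example.com/premiumOrders"),
    (lambda req: req["region"] == "EMEA", "http://svc.example.com/emeaOrders"),
    (lambda req: True, "http://svc.example.com/standardOrders"),
]

def decide_endpoint(request):
    """Return the endpoint of the first rule whose condition matches."""
    for condition, endpoint in ROUTING_RULES:
        if condition(request):
            return endpoint
```

Analysts change routing behavior by editing the rule set; the invoking code never changes.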

Business service pattern

What we really want is an 'in-line' way of expressing routing logic that is integrated within the ESB, but still externally maintainable by non-technical personnel, through the use of rules supported by an industry-standard vocabulary. This can be achieved by using Business Services, available as part of the WebSphere Dynamic Process Edition (WDPE) offering. These intelligent and configurable services assess the context of a request, compare it to the capabilities of known endpoints, and invoke the most appropriate match. They rely upon Business Policies, which are similar to business rules, to assert the capabilities of each endpoint and specify the conditions under which each should be invoked. Architecturally it looks like this:-

Figure 4. Business Service Pattern
Business Service Pattern

The Business Service Repository (BSR) contains details of each Business Service, and can be configured externally, enabling analysts to alter the execution conditions without requiring IT intervention. The interaction between the Business Service and the BSR is implicit; the lookup, evaluation of alternative endpoints, and the ultimate service invocation are all handled by the WDPE infrastructure. This provides a more abstract way of dynamically invoking services; the developer does not have to be concerned with external lookups in rules engines or registries, or with selecting endpoint addresses. In fact, this approach is so abstract that during implementation the developer does not even have to be aware of the physical endpoints at all. The association between business service and physical service can be made during deployment, allowing for greater flexibility in the way applications are constructed.
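Conceptually, policy-based selection matches the request context against the capabilities each endpoint asserts, preferring the most specific match. The scoring function below is purely illustrative (the product handles this implicitly); endpoint names and capability keys are invented:

```python
# Each endpoint asserts its capabilities; an empty set matches any context.
ENDPOINTS = [
    {"name": "goldOrderService",     "capabilities": {"customerTier": "gold"}},
    {"name": "standardOrderService", "capabilities": {}},
]

def select_business_service(context):
    """Pick the endpoint whose asserted capabilities best match the context.

    'Best' here means: all asserted capabilities are satisfied, and the
    endpoint asserting the most capabilities wins (most specific match)."""
    best, best_score = None, -1
    for ep in ENDPOINTS:
        caps = ep["capabilities"]
        if all(context.get(k) == v for k, v in caps.items()):
            if len(caps) > best_score:
                best, best_score = ep, len(caps)
    return best["name"]
```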

Standards aligned interfaces

A key element in enabling this abstract way of building applications is the use of interfaces, and specifically interfaces built on top of industry standards which utilize a flexible information model. The interface represents a contract between the service provider and consumer, enabling them to be developed concurrently. The alignment to industry standards suggests that the interface represents current best practice, is vendor neutral, and unlikely to face immediate structural change.

Some degree of change, however, is almost unavoidable, and it is therefore essential that the underlying information model can absorb change without passing it on to the consumer. Without this level of information agility, changes to the underlying data structures are propagated to the service interfaces, and ultimately to the service consumers.

Industry aligned information agility

Although we covered some information agility approaches in the first part of this article, I'd like to introduce another technique here, and in particular address how it is utilized within a real industry context. The Telecoms market is a particularly dynamic one; Communication Service Providers (CSPs) seem to release new products, services, and promotional offers on a weekly basis. The market is highly competitive and is now converging, with traditional mobile, landline, and digital TV operators all competing in the same space to provide a range of telecom-based services. This has led to the emergence of two key characteristics within this market: the need for agility, and adherence to industry standards. The need for agility is clear; business units are continually creating innovative products, with new features, utilizing new Telecoms technology. The IT infrastructure to support this kind of business model must be agile; there is simply not enough time in the day to continually change the business processes and supporting systems to reflect the ever-evolving needs of the business. Standards help avoid vendor lock-in, enabling the underlying IT infrastructure to evolve in unison with the needs of the business.

One initiative which encapsulates both of those key characteristics is the emergence of the Shared Information / Data Model (SID) standard, backed by the TMForum, the leading industry association focused on improving business effectiveness for service providers and their suppliers. The model is part of a Solution Frameworks program, defining a widely adopted set of standards and best practices for transforming business and operations. What makes SID interesting from our perspective is its approach to information agility.

Value/Specification pattern

IBM provides an implementation of the SID standard as part of the Telecom Content Pack (TCP) product, one of a number of industry aligned accelerators that form part of its SOA portfolio. The pack provides a set of over 1000 core SID business entities which can be extended, either statically or dynamically, to suit the specific needs of each CSP.

Name/Value Pairs

Wikipedia currently defines name/value pairs as "a fundamental data representation in computing systems and applications. Designers often desire an open-ended data structure that allows for future extension without modifying existing code or data. In such situations, all or part of the data model may be expressed as a collection of <attribute name, value> tuples."

Static extension typically involves design-time enhancements to the supplied schema, utilizing the kind of xsd:extension technique described in part 1 of this article. The dynamic approach, however, allows for runtime extension, adding new characteristics to the existing entities, whilst still providing the ability to validate the information. It does so by reworking an old favorite: namely the name/value pair pattern.

It seems as though name/value pairs have been around since the days of Charles Babbage, and whilst they are still used in a surprising number of high-profile projects, their use is often overlooked, typically due to their perceived lack of validation. And it is that key drawback that SID has addressed; rather than just passing a name and a value, SID also lets you pass the specification for that value, enabling runtime validation to occur. The CharacteristicValue entity described below holds the value, whilst the associated CharacteristicSpecification is used to describe the format of the value, including its type, default value, cardinality, etc.

Figure 5. Characteristic Specification Pattern
Characteristic Specification Pattern

In fact, if you take this approach to its fullest extent, it should be possible to pass a segment of an XSD schema as part of the CharacteristicSpecification, and use it to drive a generic XSD-based runtime validation engine. These Characteristic elements can then be associated with just about any of the core SID business entities, enabling them all to be extended in a consistent and verifiable manner. Powerful stuff.
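A minimal sketch of the value/specification idea: the value travels with a specification describing its type and allowed values, so a generic validator can check it at run time. Field names here are simplified from the SID entities, and the validator is deliberately tiny:

```python
# Map a specification's declared valueType to a runtime type check.
TYPE_CHECKS = {"integer": int, "string": str}

def validate_characteristic(value, spec):
    """Validate a CharacteristicValue against its CharacteristicSpecification.

    The spec travels with the value, so validation needs no design-time
    knowledge of the characteristic being checked."""
    expected = TYPE_CHECKS[spec["valueType"]]
    if not isinstance(value, expected):
        return False
    if "allowedValues" in spec and value not in spec["allowedValues"]:
        return False
    return True

# Example specification for a dynamically added characteristic.
contract_spec = {"name": "contractLength", "valueType": "integer",
                 "allowedValues": [12, 18, 24]}
```

A new characteristic is introduced by shipping a new specification alongside its values; no schema change or redeployment is needed, yet the data remains verifiable.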

Putting it all together

So far we have spoken about information agility and service agility, but what about agility within one of the most common consumers of SOA services, namely a business process? Over the last few years Business Process Modeling (BPM) has matured, but now additional demands, specifically around the long-term maintainability of deployed processes, are being raised. So why must business processes also exhibit a high degree of agility?

Let's consider a simple Order Handling process for mobile phone orders:-

Figure 6. Simple Order Handling Business Process
Simple Order Handling Business Process

Information comes into the process, an order is formed, the network elements are then activated to enable the phone, and finally the customer is billed. It's a simplistic process, but serves a purpose. Over time the company expands its portfolio of products, potentially through acquisitions, and branches into new markets. Accordingly the business process must be updated:-

Figure 7. Order Handling Business Process for Multiple Products
Complex Order Handling Business Process

We have now introduced a hard wired decision point, splitting the processing path based on the type of product being ordered (mobile, landline, IP, VPN etc). Over time, the company expands further, perhaps offering different levels of customer service, or additional features on some of its products:-

Figure 8. Complex Order Handling Business Process
Complex Order Handling Business Process

Things now start to get a little bit more complicated; what happened to our simple business process? Essentially, what I am demonstrating here is that change is inevitable, and that change typically involves the introduction of decision points into the model. Branches to handle new products, types of customer, processing options, geographies, and levels of service all serve to make the model more complex, brittle, and ultimately more costly and time-consuming to maintain. So what is the answer? Essentially, BPM solutions need to be able to exhibit the same kind of dynamic behavior as services.

Dynamic BPM

Dynamic BPM is about decomposing large and complex business processes into more manageable components, which can be linked dynamically at run time, to form an end to end process instance. Instead of a single deployable business model, we now have a palette of business process components which can be assembled appropriately based on the context of the process request. Essentially we are applying the same Business Service pattern described above, but at a business process level.
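The assembly idea can be sketched as follows: a palette of process components, and an end-to-end process composed at run time from the request context. The component names follow the article's Order Handling example, but the assembly logic and representation are invented for illustration:

```python
# Palette of reusable process components; each transforms the process state.
PALETTE = {
    "receiveOrder":     lambda state: state + ["received"],
    "activateMobile":   lambda state: state + ["mobile activated"],
    "activateLandline": lambda state: state + ["landline activated"],
    "billCustomer":     lambda state: state + ["billed"],
}

def assemble_process(context):
    """Select components for this request, rather than branching inside
    one monolithic deployed model."""
    steps = ["receiveOrder"]
    steps.append("activateMobile" if context["product"] == "mobile"
                 else "activateLandline")
    steps.append("billCustomer")
    return steps

def run_process(context):
    """Execute the dynamically assembled end-to-end process."""
    state = []
    for step in assemble_process(context):
        state = PALETTE[step](state)
    return state
```

Supporting a new product type means adding a component to the palette and teaching the assembler about it; existing components and deployed processes are untouched.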

So it's divide and conquer, but at what level of granularity? SOMA provides some guidance on how to define our SOA services, but what about process decomposition? Here a Component Business Model (CBM) or Capability Map can be used to help identify the basic building blocks of a business. Each element in a CBM defines a functionally coherent bounded context for a business capability, and hence provides guidance on the scope of an associated business process. Continuing with our Telecoms example, we can utilize the enhanced Telecom Operations Map (eTOM), another standard emanating from the TMForum's Solution Frameworks, which provides a hierarchical view of a Telecoms provider's business operations. The top-level decomposition can be seen below:-

Figure 9. eTOM Level 0 View
eTOM Level 0 View

If we drill down to the next level we find that there are approximately 75 level 2 elements; however, the business activities defined at this level (such as 'Order Handling') are still too complex and coarse-grained to be implemented as re-usable business processes. Continuing to the third level of refinement, we see that there are approximately 250 entities, each of which represents a fairly atomic and reusable business capability. For example, the level 2 'Order Handling' element, which can be found at the intersection of the Fulfillment and Customer Relationship Management (CRM) elements, can be decomposed into the following level 3 activities:-

Figure 10. eTOM Order Handling View
eTOM Order Handling View

At this level of granularity, the eTOM model defines a manageable number of business activities, each of which performs a specific task and can be represented by a business process. There are no hard and fast rules here on how many levels of decomposition will yield an acceptable level of granularity, however from analysis conducted on eTOM, APQC and other CBM models, three levels of decomposition would generally seem to provide an acceptable balance between the number of elements, their complexity and their potential for re-use.

Therefore each of the seven eTOM level 3 elements identified above would seem suitable candidates to be implemented as 'component owned processes'. Each process can be exposed as a Business Service, and described through a service interface which uses a flexible standards aligned information model such as SID. The process components can therefore be linked intelligently at run-time to create an agile end-to-end (E2E) business process model as shown below:-

Figure 11. Order Handling Dynamic Process
Order Handling Dynamic Process

The additional benefits of this approach come in terms of process ownership and maintenance. Traditional BPM solutions can lead to the deployment of complex and monolithic end-to-end processes, which typically span multiple departments or lines of business. This leads to confusion over who actually owns the process - and, more importantly, who picks up the maintenance bill as the process responds to changing business requirements. With the dynamic BPM approach we are implicitly creating smaller, more atomic process components, which are closely aligned with specific business tasks. This alignment brings clarity of ownership, and a clearer funding model.

Business optimization

Since we use Business Services to dynamically form the E2E process, we have the opportunity to provide multiple implementations for any of the individual process components. An example of this might be the 'Authorize Credit' capability which can be found under the level 2 Order Handling entity as shown above in Figure 10. In this business area it is common to have a number of mechanisms to examine a customer's credit rating. These alternatives provide varying levels of verification and accordingly incur different costs and durations.

Authorize Credit

eTOM describes the purpose of the Authorize Credit capability as the ability to assess a customer's creditworthiness in support of managing customer risk and company exposure to bad debt. This process is responsible for initiating customer credit checks and for authorizing credit and credit terms in accordance with established enterprise risk and policy guidelines.

For example, an existing high value customer, who has already been through an exhaustive credit check, may only have to go through a cursory financial examination in order to add a new product to their account. Alternatively a new client, requesting access to a high cost service, may have to go through a more comprehensive verification process. By utilizing the proposed dynamic BPM approach, we are able to define component owned business processes for each level of credit authorization. At run time we can then examine the context of the request (customer status, credit history and requested product) and utilize the most appropriate and cost effective authorization process. This level of business process dynamicity therefore provides us with a route to optimizing the business, creating a flexible, intelligent and performant process architecture.
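The context-based selection described above might be sketched like this. The selection criteria, thresholds, and process names are invented purely to illustrate choosing the cheapest authorization process that still satisfies risk policy:

```python
def select_credit_check(customer):
    """Pick the most appropriate Authorize Credit process variant
    based on the context of the request (illustrative policy only)."""
    # Established, well-rated customers get a cursory examination.
    if customer["existing"] and customer["rating"] == "high":
        return "cursoryCreditCheck"
    # High-cost services warrant comprehensive verification.
    if customer["requestedProductCost"] > 1000:
        return "comprehensiveCreditCheck"
    # Everything else takes the standard path.
    return "standardCreditCheck"
```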


We started this article by defining some of the patterns that can be used to create variability around service invocation, providing polymorphic-like behavior. We showed how we can extend this approach to the world of business processes, and create a dynamic and component-based process architecture which is aligned to the business structure. When underpinned by an agile information model, we ultimately create a recipe for designing for change: information agility + service agility + process agility = solution agility.


Thanks to my colleague Jose De Freitas for his input and painstaking proof reading, and to Marc Fiammante for his insight into the world of Telecoms, eTom and Dynamic BPM.


