The first installment of the Architecture in practice column, "Realizing Service-Oriented Architecture," discusses IBM's Service-Oriented Architecture (SOA) Foundation Lifecycle (or SOA Lifecycle) and how it allows IBM customers to think of SOA in terms of a software development lifecycle. The four phases of the IBM SOA Lifecycle (Model, Assemble, Deploy, and Manage) are covered in detail.
This installment, Part 4, focuses on the first of the eight scenarios: the Service Creation scenario, which helps you learn how SOA can help solve typical business challenges. This article addresses the rationale behind the different options for service creation, and the context in which each option is most relevant and applicable. For each service creation option, the article maps the high-level activities of the phases in the SOA lifecycle. Also included are recommendations of one or more IBM tools and products you can use to realize the activities in each phase in the lifecycle.
Implementing a business plan quickly and efficiently is a primary business challenge that most organizations must be able to meet. An enterprise must be able to sense market conditions and quickly adapt its strategies to reflect changes. Obtaining this kind of flexible business model requires an equally flexible IT infrastructure. Services in an SOA are defined as self-contained, reusable software modules that perform a specific business task. They are now being used as the basic software building blocks for flexible IT solutions. Services have well-defined interfaces and are independent of the applications and computing platforms on which they run. In today's environment it is imperative to view your business, and the processes it performs, as a set of linked, repeatable business tasks that can be easily rearranged.
Your organization needs a mechanism to allocate resources to value-added investments (not to tasks that don't provide differentiated capabilities). You need to focus your resources on investments that bring differentiation and value to your business, instead of worrying about low value, commodity-level tasks that are simply overhead.
You also want your business to grow smoothly. You need the performance and reliability of the business systems that you know and trust, which may be combined with dependable business partners and service providers that can deliver services as you need them. And, if you choose to acquire a company, you must be able to integrate their business systems with yours to ensure that your union creates value quickly.
A good place to start is to compare what a business has with what it needs. Modeling the as-is and to-be business processes, and simulating their capabilities and effectiveness, provides a frame of reference for how the business should be run. The question then arises of how the individual tasks that comprise the business process should be accomplished. Each task needs to be supported by a service. An SOA makes it possible to string these services together into flexible, modular systems that enable an agile business model. Deciding where the services will come from is the first step in implementing your vision for an optimized business process.
IBM recognizes three main sources for services in an SOA, as shown in Figure 1.
Figure 1. Three sources for services
There are four commonly used architectural patterns that provide guidance about how to properly use the services from the three sources to create a service-based IT solution. The recommended approach is to start by comparing what you need to what you already have. You can create services from scratch, purchase them, or service-enable existing packaged or custom software. You can leverage all three sources by:
- Service-enabling tasks that are supported by in-place, high-value software applications and systems in existing applications and assets
- Using externally provided services to support commodity tasks
- Creating new services only to fill in the remaining gaps
The rest of this section provides an overview of the different architectural patterns for service access and usage.
SOA is not about "rip and replace." It's best to identify reusable and high-value business tasks from the existing applications, systems, and assets, and employ the principles and techniques of SOA to expose services. Reusing applications and systems that already exist is a sound business decision. You can reduce investment in new technologies and use existing business logic (one of the most valuable, proven, and time-tested assets a company can own). Service-enabling current applications can significantly accelerate the adoption of, and lower the risks of, SOA projects. Studies show that it can be five times less expensive than a build-from-scratch approach. Maintenance overhead also shrinks because tested code for common functions has already survived the rigors of production use.
Figure 2. Service-enabling existing assets
In this technique, an individual service can draw upon one packaged or custom application, or multiple systems, to deliver its intended function. For example, an address record in an SAP customer relationship management system can be combined with functions from a legacy mainframe-based accounting system to create a service that supports opening a new customer account. The combined service might support part of a business process for sales order entry involving delivery and billing. You must mitigate the risk of going down the path of a service proliferation syndrome, where any existing IT function may be considered a service and exposed, regardless of its granularity. By applying proper SOA methodologies, such as Service-Oriented Modeling and Architecture (SOMA) from IBM (see Resources), you can solve the service granularity problem.
The two most prevalent architectural patterns for service-enabling existing assets are to:
- Directly expose existing application functions as services
- Indirectly expose functions as service components
This pattern is used when existing application functions must be exposed as a service for use by other systems and applications, or by additional channels of access. With this pattern (shown in Figure 3), it is assumed you have a very simple scenario where the existing functions are not tweaked; they are just deployed as a service provider using Web service technology. This simple topology does not require any new infrastructure, because the service is implemented using tools and techniques to service-enable existing, usually legacy, applications.
For example, you can directly use the IBM CICS® Transaction Server V3.1 Web services technology to expose COMMAREA applications directly as Web services. The minimal requirement is that the exposed service conforms to the WS-I Basic Profile. (For more information about architectural patterns and how existing legacy functions can be exposed as services, see Resources.)
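To illustrate, the interface of a directly exposed COMMAREA program might look like the following minimal WSDL sketch. The names and the single string-typed message parts are hypothetical; in practice the tooling generates the schema from the program's data structures, and a document/literal binding keeps the service within the WS-I Basic Profile:

```xml
<definitions name="AccountInquiry"
             targetNamespace="urn:example:account"
             xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:tns="urn:example:account"
             xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
             xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <types>
    <xsd:schema targetNamespace="urn:example:account"
                elementFormDefault="qualified">
      <!-- Request/response elements mirroring the COMMAREA fields -->
      <xsd:element name="AccountInquiryRequest" type="xsd:string"/>
      <xsd:element name="AccountInquiryResponse" type="xsd:string"/>
    </xsd:schema>
  </types>
  <message name="inquiryRequest">
    <part name="parameters" element="tns:AccountInquiryRequest"/>
  </message>
  <message name="inquiryResponse">
    <part name="parameters" element="tns:AccountInquiryResponse"/>
  </message>
  <portType name="AccountInquiryPort">
    <operation name="inquireAccount">
      <input message="tns:inquiryRequest"/>
      <output message="tns:inquiryResponse"/>
    </operation>
  </portType>
  <binding name="AccountInquiryBinding" type="tns:AccountInquiryPort">
    <!-- Document/literal, as required by the WS-I Basic Profile -->
    <soap:binding style="document"
        transport="http://schemas.xmlsoap.org/soap/http"/>
    <operation name="inquireAccount">
      <soap:operation soapAction=""/>
      <input><soap:body use="literal"/></input>
      <output><soap:body use="literal"/></output>
    </operation>
  </binding>
</definitions>
```

Note that the interface shape is dictated entirely by the existing program's data layout, which is exactly the coupling trade-off discussed below.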
With this approach, one benefit is that the service interface is defined by the exposed legacy asset, so no analysis is required to design the interface specifications. And, there is no need for added infrastructure since the new service runs on the same platform as the existing asset that is being wrapped. Bypassing interface definition and analysis, coupled with fewer platforms to deal with, leads to much shorter deployment cycles.
There are some key architectural considerations when instantiating this architecture pattern for service enablement:
- The service consumers become coupled to the interface definitions of the legacy assets, which in many cases were poorly designed to begin with.
- This pattern assumes that the existing application platform has support for the modern techniques for service invocation.
- This realization pattern often places a high service-message-processing burden on systems that are traditionally optimized for MIPS (millions of instructions per second) consumption.
Figure 3. Directly exposing existing functions as services
This service-enablement architecture pattern represents situations where an intermediate software component, called a service component, is introduced between the existing application functions and the service. Figure 4 below shows an example.
The service component, also known as the enterprise component (see Resources), is an IT component that provides the layer of abstraction between the service and the actual implementation. The service component can function in either of the following ways:
- Where business logic and function are being created from scratch, the service component can encapsulate logically and functionally cohesive business logic.
- Where business logic from one or more existing operational systems must be integrated and reused to provide the integrated business logic implementation for the service, the service component can encapsulate the access mechanism to the pertinent, and possibly disparate, operational systems.
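As a rough sketch of the second case, the following hypothetical Java fragment shows a service component encapsulating access to two disparate operational systems behind one business-aligned interface. All class and method names are illustrative, not from any IBM product:

```java
// Hypothetical sketch: a service component fronting two existing
// operational systems behind a single business-aligned interface.

interface CrmSystem {            // e.g., wraps an SAP CRM call
    String fetchAddress(String customerId);
}

interface LedgerSystem {         // e.g., wraps a mainframe accounting call
    String openLedgerEntry(String customerId);
}

/** The intermediary service component: consumers see only this class,
 *  so either back-end system can be changed or consolidated without
 *  affecting service consumers. */
class AccountService {
    private final CrmSystem crm;
    private final LedgerSystem ledger;

    AccountService(CrmSystem crm, LedgerSystem ledger) {
        this.crm = crm;
        this.ledger = ledger;
    }

    /** One business task ("open a new customer account") composed
     *  from functions of two disparate operational systems. */
    String openAccount(String customerId) {
        String address = crm.fetchAddress(customerId);
        String entry = ledger.openLedgerEntry(customerId);
        return "account[" + customerId + ", " + address + ", " + entry + "]";
    }
}
```

Because consumers depend only on `AccountService`, replacing either underlying system means re-implementing one interface, not changing every caller.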
There are certain benefits to using the intermediary service component:
- Implementation of business logic in existing operational systems can be changed without affecting consumers of the service. The service component can be easily extended to encapsulate data and information and provide a facade layer for data or information services.
- IT consolidation or migration of systems and functions can be carried out at the operational systems layer, with minimal or no impact to the service consumers.
- You can deploy the service on an infrastructure that is different from that of the existing applications, and which is hardened for the specific processing requirements for services.
As in the case of directly exposing application functions as services, this pattern also has some key architectural considerations. First, it allows for the definition of a service interface that is closely aligned with the business and not necessarily a direct mapping to the existing application interfaces. You can use the principles and best practices of SOA to design the services and interfaces with the right level of granularity. This, however, comes with added design time required to properly define the services and longer development cycles. Second, the design is often more complex than direct exposure, as it might involve using adapter or connector technology to connect with the operational systems.
Figure 4. Service component as intermediary between operational system functions and service in application layer
In this scenario, the source of the service is one or more third-party service providers. The applications leverage the third-party services for implementation of their required functions. Figure 5 shows how services rely on third-party implementation providers for their realization.
Figure 5. Relying on third-party providers
The main advantage of this pattern is that you don't need to spend time developing the realization artifacts for the service definitions—they will be provided by the external service providers. This reduces the development lifecycle significantly. The other advantage worth noting is the ability of the client to swap between service providers based on various technical, financial, and political considerations.
There are some key architectural considerations that need to be addressed:
- The service-level agreement (SLA) for the services must be well defined. While shopping around for the best third-party provider, a provider's ability to meet the SLA is a key consideration.
- Since the service implementation is outside the corporate firewall, proper security around the services and their invocation must be established.
- You should place strong emphasis on service governance to establish criteria for choosing the most pertinent service provider. Elements such as compliance with industry and open standards, and the maturity of the third party as a service provider, are considerations that are laid out by the governance body.
In this scenario, the service has to be designed, defined, and developed from scratch. Neither the existing applications nor any third-party provider can supply the functions or service to satisfy the requirements. The scenario relies upon the Java™ 2 Platform, Enterprise Edition (J2EE) application server support for service realization. The core business functions are typically developed using standard J2EE, usually in the form of Enterprise JavaBeans (EJBs) or Plain Old Java Objects (POJOs). Standard J2EE integrated development environment (IDE) features and techniques are employed to convert the EJBs or POJOs into standard Web services.
This approach enables you to model a service around present business requirements, and design it to satisfy future ones. The service is not constrained by existing operational systems or by a third-party provider.
A key architectural consideration is that a complete SOA methodology (SOMA, for example) needs to be applied throughout the entire service development process (from identification, through specification, and realization).
The architecture patterns that let us identify and design services from three main sources, discussed in the previous section, need to be instantiated to realize an end-to-end SOA solution. You need to fit the pattern steps into the SOA lifecycle, and provide the right tools and products to create specific SOA artifacts.
IBM practices and follows the SOA lifecycle (Model, Assemble, Deploy, and Manage) along with Governance and Processes; for each architecture pattern, we identify activities that can be applied to each phase of the SOA lifecycle (see Resources). IBM also has a very rich set of products that can enable any industrial-strength SOA solution.
This section puts the following architecture patterns and their broad level activities into the SOA lifecycle context:
- Direct exposure of existing functions
- Indirect exposure of existing functions
- Service provided by a third party
- Service created from scratch
It also provides an overview of the most commonly recommended and used IBM products that help realize and instantiate the pattern in a real world engagement.
Like all SOA projects, direct exposure of existing functions (also known as service enablement of existing IT assets) is best considered in terms of a lifecycle. Here we draw upon the well understood service lifecycle as defined by the IBM SOA Foundation. Figure 6 shows how the SOA lifecycle can be applied for service-enabling existing assets.
Figure 6. SOA lifecycle applied during service-enablement of existing assets
Each phase in the SOA lifecycle can be applied to service enablement. The recommended high-level activities are:
- Start in the Model phase by taking an inventory of the candidate assets within the current IT application and system portfolio. During this phase, the most critical thing to focus on is a methodology for service modeling. IBM's SOMA method is a robust and well-proven method for service-oriented modeling. Rational® Unified Process (RUP) has also been extended to address service-oriented methodology (RUP for SOA), which is based on the SOMA methodology. IBM's Rational Software Architect (RSA) provides a modeling framework to model and design service models.
- Use techniques to convert the assets into reusable services without altering the basic business functions that they provide. The conversion process will essentially involve wrapping existing functions with clean and well-defined interfaces.
In this phase, the most commonly used tool to service-enable CICS applications, which traditionally run on IBM System z™ systems (formerly IBM eServer™ zSeries® systems), is the IDE called IBM WebSphere® Developer for zSeries (WDz). The existing code base that needs to be service-enabled is imported into a workspace in WDz. The tool's features can be leveraged to develop Web Services Description Language (WSDL) definitions. In the process, data and language structures of the existing applications might need some mapping and transformation. The WSDLs and the specific application bindings can be generated from the IDE.
The Assemble phase also includes unit testing the developed code. WDz includes a test environment for the CICS Transaction Server (TS) with all the basic features needed to test a WSDL that will run on an actual, industrial-strength CICS TS. The generated WSDLs can be unit tested in the CICS TS test environment as part of the Assemble phase.
- Use SOA infrastructure and middleware products to deploy the services, thereby extending access to the otherwise deeply entrenched functions to a wider pool of systems and users.
In this phase, the unit-tested WSDLs and the generated COBOL source code (after possible data and language translation) are deployed on the CICS TS. There are many Web service features that come with CICS TS 3.1, such as WS-Security, that can be configured during the deployment process.
- Carefully manage and monitor, in real-time, the deployed services for performance and security of the renovated assets.
For this phase, the main focus shifts to managing and monitoring deployed services. Services must be carefully monitored to ensure conformance to their published functional and nonfunctional capabilities.
IBM's Tivoli® brand of products is geared toward systems management and monitoring in general, and has a rich variety of products that cater to monitoring and managing services. Tivoli Omegamon-XE for CICS 3.1 is commonly used to manage and monitor CICS TS on IBM z/OS®. Tivoli also has a suite of products that addresses specific areas of service invocation and security, such as:
- IBM Tivoli Federated Identity Manager (ITFIM), providing a loosely coupled federated identity model to manage identity across enterprise boundaries
- IBM Tivoli Identity Manager (ITIM), providing a centralized identity management system within the enterprise
- IBM Tivoli Access Manager (ITAM), providing single sign-on and authorization features
- Governance and Processes
- Ensure adherence to policies, standards, and best practices for the lifecycle of the services, and their efficient control and management.
For this phase, WebSphere Service Registry and Repository (WSRR) is a product from IBM that supports the entire SOA lifecycle. WSRR allows service providers to securely register business services for finding and binding by service consumers. It also provides the ability to publish the metadata required to manage the lifecycle of a service in an SOA.
To summarize, while implementing the pattern for direct access to existing applications you can follow the phases of the standard IBM SOA lifecycle, and use the following products in the various phases:
- RSA for visual modeling and design of services
- WebSphere Developer for zSeries for assembling existing functions into services, and unit testing them from within the CICS TS test environment
- CICS TS 3.1 for deploying the service definitions, along with the generated legacy code
- A spectrum of Tivoli management and monitoring products, such as Tivoli Omegamon-XE, ITFIM, and ITAM, mainly for managing and monitoring the service SLAs
- WebSphere Service Registry and Repository for managing services through their lifecycle in an SOA
Each mechanism for service creation can have a set of prescriptive, most commonly followed steps for a given scenario. The steps, as described in the previous section, can be linked to the phases of the SOA lifecycle. The main steps for this scenario are similar enough to those for direct exposure of existing functions that they are not repeated here.
As explained in Access patterns for Service Creation scenario, the main difference between indirect and direct exposure is the inclusion of a service component layer. A service component provides the service interfaces that are aligned to the business—a top-down approach. By analyzing existing assets, you can gain insights into which application functions, in which system, you can use to implement the service interfaces defined by the service components. The service component works as an intermediary between the business-centric view and the existing application. This new facade component therefore requires a few extra steps during the Assemble phase.
During the Model phase, you can use the SOMA methodology and its high-level specification for service identification. This is not much different from the first access pattern. In both, you exercise the Service Identification phase of SOMA. The difference, however, is the focus within the activities in the phase. During direct exposure the focus is mainly on existing asset analysis. In the indirect exposure scenario, the focus is mainly on identification of business-aligned services using a top-down approach. The recommended IBM product to use here is Rational Software Architect.
During the Assemble phase, the most commonly used tool is the Rational Application Developer (RAD). If using RAD as the IDE in this phase, follow these steps:
- Create a Web or a J2EE project in a RAD workspace—ideally a new one. Define the WSDL based on the business-aligned view of what a service and its interface operations should be. Use the inputs from the Identification phase in SOMA to define the business-aligned services (business services). If the WSDL is already defined and is available, simply import the same into the project workspace in RAD.
- Generate the session EJB skeleton from the WSDL. The business logic for all the operations should then be implemented. In cases where the legacy system is running on a different environment, adapter technologies should be used during implementation.
For example, for a CICS application running on a separate System z (zSeries) machine, a CICS ECI resource adapter needs to be used for connectivity to the CICS system.
The resource adapter, usually in the form of a resource archive (.rar) file, is imported into the RAD workspace. There are application program interfaces (APIs) in the resource adapter package that facilitate access to the CICS application from the session EJBs. You should also use the relevant Java data binding for the particular language used in the legacy system.
- The WSDL definitions, along with the implemented session EJBs, the Java data binding, and the optional resource adapters are packaged into an enterprise archive (EAR) file for deployment.
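As a hedged illustration of the Java data binding mentioned above, the following self-contained sketch maps a fixed-width, COMMAREA-style record into Java fields. The field layout and class name are hypothetical, and ASCII is used instead of EBCDIC to keep the example runnable anywhere; real bindings are typically generated by the tooling:

```java
import java.nio.charset.StandardCharsets;

/** Hypothetical data binding for a fixed-width COMMAREA record:
 *  bytes 0-7  = customer number (PIC X(8))
 *  bytes 8-27 = customer name   (PIC X(20))
 *  Real bindings are usually generated by tools such as WDz. */
class CustomerRecordBinding {
    private final byte[] commarea;

    CustomerRecordBinding(byte[] commarea) {
        if (commarea.length < 28) {
            throw new IllegalArgumentException("COMMAREA too short");
        }
        this.commarea = commarea;
    }

    String getCustomerNumber() {
        return field(0, 8);
    }

    String getCustomerName() {
        return field(8, 20);
    }

    private String field(int offset, int length) {
        // Mainframe data would be EBCDIC; ASCII is used here only to
        // keep the sketch self-contained and portable.
        return new String(commarea, offset, length,
                          StandardCharsets.US_ASCII).trim();
    }
}
```

A session EJB implementing the service would construct such a binding from the bytes returned by the resource adapter, then populate the response message from its getters.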
Although using RAD during the Assemble phase is a fairly common practice, many IT shops specialize exclusively in legacy systems. In such client environments, using WebSphere Developer for zSeries (WDz) is recommended. WDz V6.0.1 provides a built-in CICS TS test environment that, along with the CICS Transaction Gateway V6.0.2, provides a very powerful environment for exposing services from legacy systems.
The standard and recommended IBM product for service lifecycle governance is WebSphere Service Registry and Repository, as mentioned in the previous section.
When existing legacy systems and applications are too arcane and scheduled for replacement, or when business requirements call for functions that aren't provided by existing systems, the next option is third-party service providers. It's a common practice in the industry for the functions provided by third-party packages to influence, or even dictate, the business requirements of an enterprise. Many enterprises have gone down that path, only to find that they start compromising their own business requirements because they are driven by the features or functions of the third-party package they invested in. This is an anti-pattern in SOA adoption, and should be avoided.
In this scenario, the functional requirements are completely, or very closely, met by the third-party service providers. The functional requirements and SLA requirements must be satisfied, within acceptable limits, as mandated by the business requirements.
The high-level activities for this scenario can also be mapped to the phases of the SOA lifecycle, as shown in Figure 7.
Figure 7. Incorporation of third-party services into the enterprise SOA
Each phase in the SOA lifecycle can be applied to this scenario. The recommended high-level activities are summarized as follows.
- Start by running simulations of the business processes that are in the scope of the transformation, and decide what services make sense to own and which to obtain externally.
In the Model phase, the main focus is on analyzing the rationale for using a third-party service provider as opposed to building services in house. Various business analytics and simulations are performed to evaluate cost, time, resource, and IT feasibility.
- Access external services and orchestrate them with owned services to support an end-to-end business process. Assembly will provide orchestration of third-party and enterprise-owned services.
In the Assemble phase, the main work is performed. The recommended IBM product is RAD. The steps are:
- Obtain the WSDL from the service provider.
- Validate the WSDL and work with the provider until complete validation is successfully passed.
- Create a new enterprise application project in RAD.
- Import the WSDL into the project workspace.
- Generate the client-side stubs from the WSDL. At this point, carefully analyze which type of XML binding is appropriate (JAX-RPC, JAXB, and so on).
- Develop the client application from the client-side stubs to invoke the service.
- Package the project into a deployable EAR file.
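The steps above, and the provider-swapping advantage noted earlier, can be sketched in a hypothetical Java fragment. The `CreditCheckService` interface stands in for a stub that would really be generated from the provider's WSDL; all names are illustrative:

```java
/** Hypothetical client-side view of a third-party service. In practice
 *  this interface would be generated from the provider's WSDL; here it
 *  is hand-written to keep the sketch self-contained. */
interface CreditCheckService {
    int creditScore(String customerId);
}

/** Thin wrapper that the rest of the application codes against, so the
 *  underlying provider can be swapped without touching any callers. */
class CreditCheckClient {
    private CreditCheckService provider;

    CreditCheckClient(CreditCheckService provider) {
        this.provider = provider;
    }

    /** Swap providers for technical, financial, or political reasons. */
    void switchProvider(CreditCheckService newProvider) {
        this.provider = newProvider;
    }

    /** Business-level operation built on the external service. The
     *  threshold of 600 is an arbitrary illustrative value. */
    boolean isCreditworthy(String customerId) {
        return provider.creditScore(customerId) >= 600;
    }
}
```

Keeping callers behind the wrapper means a change of provider is a one-line configuration change rather than a code change across the application.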
- The orchestrated services can be deployed without worrying about the origination of each individual service.
In this phase, the deployable EAR file is installed on a Web service-compliant middleware server. The recommended IBM product here is WebSphere Application Server. The installed EAR file provides the client-side APIs to consume the third-party services.
- If third-party service providers are doing the implementation, it is critical to monitor the services for compliance with business-mandated SLAs and key performance indicators (KPIs), as required by the contracts. IBM Tivoli Composite Application Management (ITCAM) for SOA is the most comprehensive Tivoli product for monitoring runtime services for compliance.
- Governance and Processes
- Create and maintain a directory of external services in a registry for easy access and management.
In this phase, the WSDL definitions for the external services are provisioned. The recommended IBM product for this phase is WebSphere Service Registry and Repository (WSRR). Any changes in the form of service enhancements are managed in the WSRR, which manages the lifecycle of a service.
This scenario is often the last resort, when there are no existing application functions that could be directly or indirectly exposed as services, and no third-party service provider offers the required business functions. The service definitions, and all implementation artifacts, need to be created from scratch. It might seem simple, with no existing legacy systems to build upon, no legacy code to integrate with, no third-party provider services to hook into, and no varying topologies for deployment. But there can be a sizeable amount of work in service identification and in-depth specification. Figure 8 shows the main activities in the different phases of the SOA lifecycle.
Figure 8. SOA lifecycle applied to service creation from scratch
In terms of the SOA lifecycle, the activities for service creation from scratch are as follows.
- The emphasis is on designing business-aligned services that incorporate both current and future needs. The recommended approach is to apply the service identification techniques of SOMA, while using Rational Software Architect for service modeling to create the physical modeling artifacts.
- The recommended IBM product to use for service development is Rational Application Developer (RAD), a robust, feature-rich J2EE application development IDE that also provides simple and advanced features for service development and implementation. It provides simple features for basic service implementation, exposing services as WSDL files. RAD can also add advanced features around Web service implementations, starting with WS-I compliance and incrementally adding implementations for WS-Addressing, WS-Transactions, and so on. The general steps for using RAD for service development (similar to those for implementing a service provided by a third party) are:
- Create a J2EE enterprise application project in a new workspace in Rational Application Developer.
- Create a WSDL definition based on the design specifications for the service. Alternately, if a WSDL exists, import it into the workspace.
- Generate the session EJB service skeleton from the WSDL.
- Complete the implementation of the business logic for all the defined operations in the service skeleton.
- Optionally, create the Web service client code that's used to invoke the services. For J2EE application clients invoking the services, this client code is sufficient. For non-J2EE clients, you need to provide technology-specific client code for service invocation.
- Package the implementation artifacts into an EAR file for deployment.
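Under the assumption of a simple POJO implementation behind the generated skeleton, the business-logic step above might look like this hypothetical sketch (all names are illustrative):

```java
/** Hypothetical service interface as it would appear in the generated
 *  skeleton (derived from the WSDL port type). */
interface OrderService {
    String placeOrder(String sku, int quantity);
}

/** From-scratch implementation of the business logic: no legacy system
 *  or third-party provider behind it, so the design is driven purely
 *  by the business-aligned service specification. */
class OrderServiceImpl implements OrderService {
    private int nextOrderId = 1000;

    public String placeOrder(String sku, int quantity) {
        if (quantity <= 0) {
            throw new IllegalArgumentException("quantity must be positive");
        }
        // Confirmation format is illustrative; a real service would
        // return a structured response mapped to the WSDL schema.
        return "ORD-" + (nextOrderId++) + ":" + sku + "x" + quantity;
    }
}
```

The IDE tooling then wraps such an implementation (or a session EJB delegating to it) as a Web service endpoint for packaging into the EAR file.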
The recommended IBM tool to use is the WebSphere Integration Developer (Integration Developer) that provides a Business Process Execution Language (BPEL) development environment. Among other features, it provides for the orchestration of existing services into a business process flow. The resulting process can then be deployed into a BPEL runtime engine that provides choreography capabilities to execute enterprise-scale business processes.
- The packaged EAR deployment modules are installed on a WebSphere Application Server runtime. For distributed environments, WebSphere Application Server Network Deployment Edition V6 is the recommended middleware for running services. To deploy the business processes mentioned, WebSphere Process Server V6 (part of IBM WebSphere Business Service Fabric middleware) is the recommended IBM product.
- The deployed services need to be monitored and managed. Monitoring is typically mandatory for compliance with the SLA for the services, raising alerts or events when compliance thresholds are breached.
When services are exposed outside the enterprise perimeter, the minimal requirement is for the services to be secured against unauthorized access. The Tivoli products mentioned in the previous sections are all applicable in this scenario. To summarize:
- IBM Tivoli Composite Application Management (ITCAM) for SOA, to monitor services for SLA compliance
- IBM Tivoli Federated Identity Manager (ITFIM), for federated identity management across the enterprise perimeter
- IBM Tivoli Identity Manager (ITIM), for centralized identity provisioning across enterprise systems
- IBM Tivoli Access Manager (ITAM), for single sign-on and authorization prior to service invocation
- Governance and Processes
- The recommended IBM product for the lifecycle management of services is WebSphere Service Registry and Repository, which has robust, advanced features that can be used in a modular fashion.
IBM identifies eight different common SOA scenarios in typical SOA-based IT projects. IBM provides comprehensive guidance on how each scenario should be modeled, designed, and implemented using IBM tools, products, and methodologies for SOA.
In this article you learned about Service Creation, the first SOA scenario. You got an overview of the four most common architectural patterns for service enablement, based on the three key sources of services: existing applications, third-party service providers, and services created from scratch. You also learned how the SOA lifecycle can be applied to the four architectural access patterns, and how the IBM product suite can address the specific design, development, and runtime requirements for service enablement.
Read other installments of this series:
A two-part series on a pattern language for Service-Oriented Architecture (SOA) and integration:
Read about IBM's Patterns for e-business.
From IBM Redbooks®: Patterns: SOA Foundation Service Creation Scenario describes the service creation scenario.
Also from IBM Redbooks: Patterns: SOA Foundation Service Connectivity Scenario describes the service connectivity scenario.
- View IBM on demand demos to learn about various software products and technologies from IBM.
- Stay current with developerWorks technical events and webcasts.
Get products and technologies
Download free trial versions of other products from IBM discussed in this article.
- Participate in the discussion forum.
Learn more about IBM's SOA and SOMA methodology.
Read about Rational Unified Process and IBM Rational Method Composer.
Check out developerWorks blogs and get involved in the developerWorks community.
Tilak Mitra is a Senior Certified Executive IT Architect at IBM. He specializes in SOAs, helping shape IBM's business strategy and direction in SOA. He also works as an SOA subject matter expert, helping clients in their SOA-based business transformations, with a focus on complex and large-scale enterprise architectures. His current focus is on building reusable assets around Composite Business Services (CBS) that have the ability to run on multiple platforms, like the SOA stacks from IBM, SAP, and so on. He lives in sunny South Florida and, when not at work, is engrossed in the games of cricket and table tennis. Tilak earned his Bachelor's in Physics from Presidency College, Calcutta, India, and an integrated Bachelor's and Master's in EE from the Indian Institute of Science, Bangalore, India. Find out more about SOA at Tilak's blog. View Tilak Mitra's profile on LinkedIn or e-mail him with your suggestions at firstname.lastname@example.org.