Configure a Service Integration Bus in a network deployment environment

Cluster configuration

Install and configure Service Integration Bus (SIBus) scenarios across an IBM® WebSphere® Application Server V6.0.x cluster. This informative, step-by-step procedure takes you through each of the configuration phases with script examples.

Guy Barden (barden@uk.ibm.com), SIBUS System Test Technical Lead, IBM, Software Group

Guy Barden has been working in a Web services test environment for the past two years and has been involved in configuring and designing system test scenarios for the Service Integration Bus since its inception. Previously he worked with the Web Services Invocation Framework (WSIF) standards and the Web Services Gateway in WebSphere Studio Application Developer V5.x.



29 March 2005

Introduction

The Service Integration Bus (SIBus) is a logical entity that is created and configured as a post-install operation of IBM® WebSphere® Application Server V6.0 (Application Server), through either the Administration Console or the wsadmin scripting client. This paper provides information on the configuration and use of the SIBus and the Web Services Gateway (WSGW), specifically in a network deployment clustered environment.

The SIBus can be configured in two modes on an Application Server cluster, depending upon the requirements of deployment. As the SIBus is a logical entity based on the physical implementation of a Message Driven Bean (MDB), there is no inherent high availability (HA) or workload management (WLM) functionality. The underlying physical implementation has to be independently configured prior to the SIBus creation.


Overview of cluster configurations

The high availability cluster configuration is the default configuration created when a cluster of application servers is created in a cell. When the SIBus is created, there is only one active messaging engine on one of the cluster servers, and all service requests to cluster members are routed through this single messaging engine. Therefore, in a cluster of n servers, only a request arriving at the server hosting the active messaging engine results in a local message put; a request arriving at any of the other (n-1) servers, whose messaging engines are inactive, results in a remote put to the active engine.

Figure 1. The high availability cluster topology

The advantage of this configuration is that when the active messaging engine fails, the messaging engine on another server becomes active and all remote put actions are routed through the newly active messaging engine. Also, because there is only a single server routing messages to the target service, message sequencing is preserved through the SIBus.

The disadvantage of this configuration is one of performance: there is effectively a messaging bottleneck within the cluster configuration, because (n-1)/n of all service requests are remote puts to the active messaging engine.

The workload management cluster configuration requires additional configuration beyond the default cluster installation prior to the creation of the SIBus. The purpose of this configuration is to remove the dependence on remote put calls to a single messaging engine by explicitly creating an additional messaging engine for each of the servers in the cluster and defining a CoreGroup policy to "assign" each messaging engine to an individual server in the cluster. With n active messaging engines in a cluster of n servers, each service request is processed locally on the server receiving the message rather than being routed to a single active messaging engine.

Figure 2. The workload management cluster topology

The advantage of this configuration is the removal of the messaging engine bottleneck, as each server processes the service requests presented to it. This is crucial to the SIBus, which has a large message processing overhead associated with the transmission of each Web services request.

The disadvantage of this configuration is that, as service requests are presented and processed locally by each server, the sequencing of discrete messages cannot be assured.


Overview of a sample application

The paper is directed at the configuration of the SIBus, so you can use any deployed Web service that publishes its WSDL through a URL. For the purposes of this paper, I use a standard Stock Quote Web service, described by the following WSDL:

http://services.xmethods.net/soap/urn:xmethods-delayed-quotes.wsdl

To access the Web service, either directly (to prove that the service method call is viable) or through the configured SIBus, you can use either a Java Dynamic Invocation Interface (DII) client or the Web Services Explorer view in WebSphere Studio Application Developer (Application Developer). The diagram in Figure 3 shows the target set-up that you can achieve.

Figure 3. The example cluster topology

Example prerequisites

It is beyond the scope of this paper to describe the installation and cluster configuration techniques of Application Server. The following describes the example system set-up:

  • Machine 1:
    • Application Server V6.0 installed
    • Network Deployment Manager profile created. (profileName dm-1-profile, nodeName ${shorthostname}Manager, server dmgr)
    • Cluster created (clusterName was-cluster-1)
    • Managed node profile created. (profileName na-1-profile)
    • Managed node federated into ND Cell (nodeName NDNode1)
    • Managed node server added as cluster member template (serverName NDServer1)
  • Machine 2:
    • Application Server V6.0 installed
    • Managed node profile created (profileName na-2-profile)
    • Managed node federated into ND Cell (nodeName NDNode2)
    • Managed node server added as cluster member (serverName was-cmas-1)
  • Machine 3:
    • Application Server V6.0 installed
    • Managed node profile created (profileName na-3-profile)
    • Managed node federated into ND Cell (nodeName NDNode3)
    • Managed node server added as cluster member (serverName was-cmas-2)
  • Machine 4:
    • DB2 UDB V8.1 installed with network-enabled instance created (Note: The database can be any network-enabled database, but this example uses DB2 UDB.)
  • Machine 5:
    • Web Server with generated plug-in module to IP Spray

Phase 1: Database configuration and planning for SIBus

The SIBus installation and configuration is a post-install exercise that requires a certain amount of planning. Two data stores are required for SIBus operation. The first is for the SDO Repository, which holds the WSDL information of registered services while SIBus-specific WSDL is presented to clients. The second is for the underlying messaging layer, which requires persistence of the internal message data format. On a stand-alone server these data stores are created automatically and their use is transparent, as the server and databases all reside on the same machine. For a network deployed cluster, each of the servers in the cluster needs access to the data stores, so this cannot be configured transparently by the SIBus install process; the administrator needs to configure it prior to installation.

For this example, the process for configuring databases for the different cluster set-ups is described and labelled as either for HA cluster (default configuration) or for the WLM cluster (recommended configuration).

HA cluster database set-up

For the high availability cluster, a single message store database is required so that each of the cluster servers can access it and obtain locks. For this, a blank database me0 is created (as the following shows on Machine 4):

  • db2 create database me0

WLM cluster database set-up

For the workload management cluster, an individual message store is required for each of the servers in the cluster. Application Server automatically handles the table and schema creation, and here you can either set up separate databases for each server or set up a single database with separate schemas for each server (as the following shows on Machine 4):

  • db2 create database me0
  • db2 create database me1
  • db2 create database me2

or

  • db2 create database me0

Note: Individual schemas and tables for each messaging engine are created when the messaging engine is created.

HA and WLM cluster database set-up

For either of the cluster configurations, a SDO Repository data store is required (as the following shows on Machine 4):

  • db2 create database sdodb
  • db2 connect to sdodb
  • db2 create schema sdorep
  • db2 create table sdorep.bytestore (name varchar(250) not null, bytes blob(1G), timestamp1 bigint not null)
  • db2 disconnect sdodb

Now with the databases and tables created, connectivity to them from the Application Server cluster needs to be configured.

  1. On each of the host machines, a common directory structure should be created into which the DB2 UDB client JAR files are placed. This ensures that Application Server can connect to the database through the drivers supplied by the database vendor. In this example, on each of Machines 1, 2, and 3, the files db2jcc.jar, db2jcc_license_cu.jar, and db2jcc_license_cisuz.jar are placed in the directory /opt/db2udb.
  2. Configure the database instance ID in Application Server through the admin console, using the Security > Global Security > JAAS Configuration > J2C Authentication Data panel and creating a new alias that matches the DB2 instance login details.
  3. Configure the JDBC Provider in Application Server using the admin console:
    1. In the Resources > JDBC Providers panel change the scope to Cell and create a new JDBC Provider, selecting DB2, Universal JDBC Driver Provider, Connection Pool Data Source (or XA DataSource) > Next.
    2. Change the classpath entries to reflect the directory structure into which the client DB2 JARs were copied.
    3. Clear the Native Path entry, select OK.
  4. With the JDBC Provider configured, the DataSources for each of the databases have to be created.
    1. For the SDO Repository, select the DataSource panel in the admin console and create a new DataSource. For each of the required fields, the following information should be supplied:
      • Name: SdoRep DataSource (This can be named anything appropriate to the install)
      • JNDI Name: jdbc/com.ibm.ws.sdo.config/SdoRepository (This JNDI name is used by the SDO Repository application, and must have this value)
      • DataSource CMP: Check the box
      • DataStoreHelper: DB2 Universal data store helper
      • Component-Managed authentication: The alias created in step 2
      • Database Name: sdodb (or whatever name the SDO DB2 database has)
      • Driver Type: 4
      • Server name: hostname of the DB2 Server
      • Port Number: Port no. of db2 instance
    2. For both the HA and the WLM cluster configuration, a DataSource needs to be created for each of the message store databases created earlier. The values for each DataSource should be the same as for the SDO Repository DataSource, but with unique names, unique JNDI names, and the matching database name (me0, me1, or me2). For the example, the values are as follows:
      • Name: ClusterME0
      • JNDI Name: jdbc/com.ibm.ws.sib/was-cluster-1.000-ExampleBus (HA cluster, or WLM cluster with a different schema defined for each set of ME tables)
      or
      • Name: ClusterME0 (WLM cluster with a database per ME)
      • JNDI Name: jdbc/com.ibm.ws.sib/was-cluster-1.000-ExampleBus (WLM cluster with a database per ME)
      • Name: ClusterME1 (WLM cluster with a database per ME)
      • JNDI Name: jdbc/com.ibm.ws.sib/was-cluster-1.001-ExampleBus (WLM cluster with a database per ME)
      • Name: ClusterME2 (WLM cluster with a database per ME)
      • JNDI Name: jdbc/com.ibm.ws.sib/was-cluster-1.002-ExampleBus (WLM cluster with a database per ME)
  5. Save the changes.

At this point in the example, the databases, schemas, and tables are created and, in the network deployment cluster, connectivity to the databases and authentication is established. This can be tested using the Test Connection button for each DataSource on each of the created Datasource panels in the admin console.
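
Connectivity can also be checked from wsadmin. The following is a minimal jacl sketch, not a definitive procedure; it assumes the DataSource names used above (SdoRep DataSource and ClusterME0 through ClusterME2 -- adjust the list to your own names) and uses $AdminControl testConnection, which performs the same test as the console button:

# Minimal connectivity check (jacl). Run through wsadmin against the
# deployment manager, for example:
#   wsadmin.sh -conntype SOAP -port 8879 -f testConnections.jacl
# The DataSource names below are the ones used in this example.
foreach dsName [list "SdoRep DataSource" ClusterME0 ClusterME1 ClusterME2] {
	set dsId [$AdminConfig getid "/DataSource:$dsName/"]
	if {$dsId == ""} {
		puts "DataSource $dsName not found - skipping"
		continue
	}
	puts "Testing connection for $dsName ..."
	puts [$AdminControl testConnection $dsId]
}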


Phase 2: Post Application Server installation tasks for SIBus

As has been mentioned, the SIBus needs to be installed and configured after a WebSphere topology has been created. Because the SIBus is a logical entity defined across the topology, it is physically manifested in a number of enterprise applications that must be installed and started.

SDO Repository application

The SDO Repository has to be installed now that the database connectivity has been defined. This application is a special case that needs scope across both the cluster and the Network Deployment manager. This is because scenarios that require access to the SDO database can originate either from Web services requests and the SIBus run time running on the cluster, or from wsadmin configuration requests made through a SOAP connector to the deployment manager server, dmgr.

Installation is through a jacl script that is shipped with Application Server and can be found in the directory ${WAS_INSTALL_ROOT}/bin, and it is called installSDORepository.jacl.

For the HA cluster and WLM cluster set-ups

  1. Install the SDO Repository application to the deployment manager server. The command has the following format:

    ${WAS_INSTALL_ROOT}/bin/wsadmin.sh 
    	-conntype SOAP 
    	-port 8879 
    	-f ${WAS_INSTALL_ROOT}/bin/installSDORepository.jacl 
    	<nodename> 
    	<server>

    For this example, the command is as follows:

    /opt/WebSphere/AppServer/bin/wsadmin.sh 
    	-conntype SOAP 
    	-port 8879 
    	-f /opt/WebSphere/AppServer/bin/installSDORepository.jacl 
    	batmanManager 
    	dmgr
  2. The SDO Repository needs information about the database type that it is working with. The command has the following format:

    ${WAS_INSTALL_ROOT}/bin/wsadmin.sh
    	-conntype SOAP 
    	-port 8879 
    	-f ${WAS_INSTALL_ROOT}/bin/installSDORepository.jacl 
    	-editBackendId <DB_ID>

    A list of supported database identifiers can be found in the following directory:

    ${WAS_INSTALL_ROOT}/util/SDORepository

    For this example, the command is as follows:

    /opt/WebSphere/AppServer/bin/wsadmin.sh 
    	-conntype SOAP 
    	-port 8879 
    	-f /opt/WebSphere/AppServer/bin/installSDORepository.jacl 
    	-editBackendId DB2UDB_V81
  3. Finally, the SDO Repository application has to be installed across the cluster. The command has the following format:

    ${WAS_INSTALL_ROOT}/bin/wsadmin.sh 
    	-conntype SOAP 
    	-port 8879 
    	-f ${WAS_INSTALL_ROOT}/bin/installSDORepository.jacl 
    	-cluster <clustername>

    For this example, the command is as follows:

    /opt/WebSphere/AppServer/bin/wsadmin.sh 
    	-conntype SOAP 
    	-port 8879 
    	-f /opt/WebSphere/AppServer/bin/installSDORepository.jacl 
    	-cluster was-cluster-1

SIBUS applications

The SIBus is a logical entity that has a physical manifestation in the form of a number of enterprise applications, a resource adapter, and an activation specification. As part of the Application Server post-install process, these applications need to be installed and started before any SIBus creation and configuration activities can take place. The application installation is again done through an install jacl script that is shipped with Application Server in the ${WAS_INSTALL_ROOT}/util directory, and it is called sibwsInstall.jacl. The jacl needs to be invoked for each of the physical installation steps of the SIBus, passing different arguments to distinguish the actions.

For the HA and WLM cluster set-ups

  1. Install the Resource Adapter and configure the Activation Specification used by the SIBus MDBs. The install command has the following format:

    ${WAS_INSTALL_ROOT}/bin/wsadmin.sh 
    	-conntype SOAP 
    	-port 8879 
    	-f ${WAS_INSTALL_ROOT}/util/sibwsInstall.jacl 
    	-installRoot ${WAS_INSTALL_ROOT} 
    	-clusterName <clustername> 
    	-nodeName <nodename> 
    	INSTALL_RA

    For this example the command is as follows:

    /opt/WebSphere/AppServer/bin/wsadmin.sh 
    	-conntype SOAP 
    	-port 8879 
    	-f /opt/WebSphere/AppServer/util/sibwsInstall.jacl 
    	-installRoot /opt/WebSphere/AppServer 
    	-clusterName was-cluster-1 
    	-nodeName NDNode1 
    	INSTALL_RA

    Note: Although you are installing to a cluster, you need to specify the node name of one of the nodes in the cluster.

  2. Install the SIBus Enterprise Application. The install command has the following format:

    ${WAS_INSTALL_ROOT}/bin/wsadmin.sh 
    	-conntype SOAP 
    	-port 8879 
    	-f ${WAS_INSTALL_ROOT}/util/sibwsInstall.jacl 
    	-installRoot ${WAS_INSTALL_ROOT} 
    	-clusterName <clustername> 
    	INSTALL

    For this example, the command is as follows:

    /opt/WebSphere/AppServer/bin/wsadmin.sh 
    	-conntype SOAP 
    	-port 8879 
    	-f /opt/WebSphere/AppServer/util/sibwsInstall.jacl 
    	-installRoot /opt/WebSphere/AppServer 
    	-clusterName was-cluster-1 
    	INSTALL

    Note: For the actual enterprise application installation to the cluster, the node name is not required.

  3. Install the SIBus HTTP Channels Enterprise Application. The install command has the following format:

    ${WAS_INSTALL_ROOT}/bin/wsadmin.sh 
    	-conntype SOAP 
    	-port 8879 
    	-f ${WAS_INSTALL_ROOT}/util/sibwsInstall.jacl 
    	-installRoot ${WAS_INSTALL_ROOT} 
    	-clusterName <clustername> 
    	INSTALL_HTTP

    For this example, the command is as follows:

    /opt/WebSphere/AppServer/bin/wsadmin.sh 
    	-conntype SOAP 
    	-port 8879 
    	-f /opt/WebSphere/AppServer/util/sibwsInstall.jacl 
    	-installRoot /opt/WebSphere/AppServer 
    	-clusterName was-cluster-1 
    	INSTALL_HTTP
  4. Install the SIBus JMS Channels Enterprise Application. The install command has the following format:

    ${WAS_INSTALL_ROOT}/bin/wsadmin.sh 
    	-conntype SOAP 
    	-port 8879 
    	-f ${WAS_INSTALL_ROOT}/util/sibwsInstall.jacl 
    	-installRoot ${WAS_INSTALL_ROOT} 
    	-clusterName <clustername> 
    	INSTALL_JMS

    Note: The channels applications are optional and only need to be installed if the channel is to be used. In this example you are only using the HTTP channel, so the JMS Channel is not installed.

This concludes the installation of the physical aspects of the SIBus, and verification of a successful installation can be seen through the Enterprise Applications list in the admin console, where the following applications should be installed and started:

  • SDO Repository
  • Sibws.was-cluster-1
  • Sibwshttp1.was-cluster-1
  • Sibwshttp2.was-cluster-1
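
The same verification can be scripted. The following jacl sketch uses the application names listed above (adjust them if your cluster name differs); it simply lists the installed applications and warns if any of the expected SIBus applications is missing:

# List installed applications and check for the SIBus-related ones.
set installedApps [split [$AdminApp list] "\n"]
foreach appName [list "SDO Repository" "Sibws.was-cluster-1" \
		"Sibwshttp1.was-cluster-1" "Sibwshttp2.was-cluster-1"] {
	if {[lsearch -exact $installedApps $appName] == -1} {
		puts "WARNING: expected application $appName is not installed"
	} else {
		puts "Found $appName"
	}
}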

Phase 3: Post installation configuration task for SIBus

With SIBus applications physically installed to the cluster, the configuration paths for the two types of clusters diverge. The HA cluster is now ready to have an SIBus instance created to work with, but the WLM cluster needs the workload aspects configured.

For WLM cluster only

For workload management to be set up, each server within the cluster needs to be able to process incoming Web services requests locally. This can only be done by creating additional messaging engines for each of the servers in the cluster so that autonomous message processing can occur. Prior to creating the messaging engines, a policy defining the behavior of request processing across the cluster needs to be defined. This is known as a CoreGroup Policy, and its result is to assign each of the messaging engines to an individual server within the cluster.

This is achieved through a jacl script shipped with Application Server in the ${WAS_INSTALL_ROOT}/bin directory, called CreateCoreGroupPolicy.jacl. The jacl script takes a single argument, which is the path to a properties file. A properties file should be created for each of the servers in the cluster, and the jacl script run against each of the properties files. There is no template properties file available with Application Server, so they must be created from scratch. Listing 1 shows a template, with explanations for the entries that need tailoring to your system set-up.

Listing 1. The CoreGroup Policy file template
	CoreGroupName=DefaultCoreGroup	
	PolicyName=<policyName>	# Name unique to the ME policy.
	PolicyType=OneOfNPolicy	
	IsAlivePeriodSec=0
	Failback=true	
	QuorumEnabled=false
	PreferredOnly=true
	NumOfMatchCriteria=3
	Name_0=type
	Value_0=WSAF_SIB
	Name_1=WSAF_BUS
	Value_1=<busName>	# Name of the SIBus that is to be created.
	Name_2=WSAF_SIB_MESSAGING_ENGINE
	Value_2=<meName>	# Name of the messaging engine being pinned.
	NumOfPolicyServers=1
	NodeName=<nodeName>	# The node for the server.
	ServerName=<serverName>	# The server being pinned to.

In this example, three properties files are needed. All should be created on the machine hosting the Deployment Manager, and each should map to a server present in the cluster. The policy files look like the following:

Listing 2. Machine 1 Policy file example
	CoreGroupName=DefaultCoreGroup	
	PolicyName=ME0	
	PolicyType=OneOfNPolicy	
	IsAlivePeriodSec=0
	Failback=true	
	QuorumEnabled=false
	PreferredOnly=true
	NumOfMatchCriteria=3
	Name_0=type
	Value_0=WSAF_SIB
	Name_1=WSAF_BUS
	Value_1=ExampleBus	
	Name_2=WSAF_SIB_MESSAGING_ENGINE
	Value_2=was-cluster-1.000-ExampleBus	
	NumOfPolicyServers=1
	NodeName=NDNode1	
	ServerName=NDServer1
Listing 3. Machine 2 Policy file example
	CoreGroupName=DefaultCoreGroup	
	PolicyName=ME1	
	PolicyType=OneOfNPolicy	
	IsAlivePeriodSec=0
	Failback=true	
	QuorumEnabled=false
	PreferredOnly=true
	NumOfMatchCriteria=3
	Name_0=type
	Value_0=WSAF_SIB
	Name_1=WSAF_BUS
	Value_1=ExampleBus	
	Name_2=WSAF_SIB_MESSAGING_ENGINE
	Value_2=was-cluster-1.001-ExampleBus	
	NumOfPolicyServers=1
	NodeName=NDNode2	
	ServerName=was-cmas-1
Listing 4. Machine 3 Policy file example
	CoreGroupName=DefaultCoreGroup	
	PolicyName=ME2	
	PolicyType=OneOfNPolicy	
	IsAlivePeriodSec=0
	Failback=true	
	QuorumEnabled=false
	PreferredOnly=true
	NumOfMatchCriteria=3
	Name_0=type
	Value_0=WSAF_SIB
	Name_1=WSAF_BUS
	Value_1=ExampleBus	
	Name_2=WSAF_SIB_MESSAGING_ENGINE
	Value_2=was-cluster-1.002-ExampleBus	
	NumOfPolicyServers=1
	NodeName=NDNode3	
	ServerName=was-cmas-2

You will need to create a policy for each of the servers. The CreateCoreGroupPolicy.jacl script should be run once for each of the properties files, using a command of the following format:

${WAS_INSTALL_ROOT}/bin/wsadmin.sh 
	-conntype SOAP 
	-port 8879 
	-f ${WAS_INSTALL_ROOT}/bin/CreateCoreGroupPolicy.jacl 
	<propsFile>
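
For this example, assuming the three properties files from Listings 2, 3, and 4 have been saved on Machine 1 as /tmp/me0.props, /tmp/me1.props, and /tmp/me2.props (the file names and paths are illustrative), the script is run three times, once per file:

/opt/WebSphere/AppServer/bin/wsadmin.sh 
	-conntype SOAP 
	-port 8879 
	-f /opt/WebSphere/AppServer/bin/CreateCoreGroupPolicy.jacl 
	/tmp/me0.props

Repeat the command with /tmp/me1.props and /tmp/me2.props.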

This completes the pre-bus configuration work for the WLM cluster.
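
Before creating the bus, you can confirm that the policies exist. The following jacl sketch is an informal check (it assumes the policy names ME0, ME1, and ME2 from the listings above and the OneOfNPolicy configuration type named in the properties files); it lists the OneOfNPolicy objects in the configuration and prints their names:

# List the core group policies of type OneOfNPolicy and print their names.
set policyIds [$AdminConfig list OneOfNPolicy]
if {$policyIds == ""} {
	puts "No OneOfNPolicy objects found"
} else {
	foreach policyId [split $policyIds "\n"] {
		puts [$AdminConfig showAttribute $policyId name]
	}
}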


Phase 4: SIBus creation and service definition

At this point in the configuration, you have created your example cluster topology -- either the HA or the WLM cluster -- and prepared it for the creation of the logical entities of the SIBus. These tasks can be done through the admin console or through wsadmin and jacl scripts; this paper concentrates on configuration through jacl script. Each of the configuration steps shows a jacl snippet describing the format of the wsadmin command. For this example, a full listing of a jacl script to configure the cluster is included after the steps and is applicable to both the HA and WLM clusters.

  1. First create the SIBus object, with which all subsequent elements are associated. The command has the following format:

    $AdminTask createSIBus -bus <busName>


    For the example, the command is as follows (with error handling):

    if {[catch {set csb [$AdminTask createSIBus -bus ExampleBus]} result]} {
    	puts "Error creating SIBus, failed with $result"
    }
  2. The next step is to add the cluster to the bus as a bus member. The command has the following format:

    $AdminTask addSIBusMember -bus <busname> -cluster <clustername> \
    	-datasourceJndiName <ME datasource jndi name>

    For this example, the command is as follows:

    if {[catch {set csb [$AdminTask addSIBusMember -bus ExampleBus \
    	-cluster was-cluster-1 \
    	-datasourceJndiName jdbc/com.ibm.ws.sib/was-cluster-1.000-ExampleBus]} \
    	result]} {
    	puts "Error adding SIBusMember, failed with $result"
    }
  3. For a WLM cluster only, the additional messaging engines defined in the CoreGroup Policy need to be created. The command has the following format:

    $AdminTask createSIBEngine -bus <busname> -cluster <clustername> \
    	-datasourceJndiName <jndiName>

    For this example, you create two additional engines for your three server cluster. The commands have the following formats:

    if {[catch {set csb [$AdminTask createSIBEngine -bus ExampleBus \
    	-cluster was-cluster-1 \
    	-datasourceJndiName jdbc/com.ibm.ws.sib/was-cluster-1.001-ExampleBus]} \
    	result]} {
    	puts "Error creating SIBEngine, failed with $result"
    }
    
    if {[catch {set csb [$AdminTask createSIBEngine -bus ExampleBus \
    	-cluster was-cluster-1 \
    	-datasourceJndiName jdbc/com.ibm.ws.sib/was-cluster-1.002-ExampleBus]} \
    	result]} {
    	puts "Error creating SIBEngine, failed with $result"
    }


    Note how the JNDI names tie up with the datasource names you created when you were planning the database. If this example were using a single database with multiple schemas, then the JNDI Name would be the same for each messaging engine.
  4. Finally, with the bus members and additional messaging engines created (if in the WLM cluster configuration), the J2C authentication alias created for the database, together with a unique schema, needs to be associated with each messaging engine's data store object. The commands have the following format:

    set schemaId 0
    set schemaName "IBMWSME"
    set authAlias [list authAlias $authAlias]
    set datastoreIDs [$AdminConfig list SIBDatastore]
    foreach datastoreID $datastoreIDs {
        set uniqueSchema "$schemaName$schemaId"
        set schema [list schemaName $uniqueSchema]
        set attributes [list $authAlias \
    		$schema ]
        $AdminConfig modify $datastoreID $attributes
        incr schemaId
    }


    where $authAlias is the authentication alias defined in Phase 1.
  5. Now that the SIBus has been created and configured with bus members and additional messaging engines, the next step is to start the Web services definition within the bus. For a Web service to be routed through the SIBus, an EndpointListener connected to the bus and associated with the transport channel is required. The commands for creating and connecting the endpoint listener have the following format:

    set clusterObj [$AdminConfig getid /ServerCluster:<clusterName>/]
    set epl [$AdminTask createSIBWSEndpointListener $clusterObj \
    	-name <soapchannel> \
    	-urlRoot <soapchannel url> -wsdlUrlRoot <sibws url>]
    $AdminTask connectSIBWSEndpointListener $epl -bus <busname> \
    	-replyDestination <destname>

    where the following is true:

    • <clusterName> is the name of the cluster to deploy the endpoint listener to.
    • <soapchannel> is the name of the SOAP channel to listen on, for example, SOAPHTTPChannel1.
    • <soapchannel url> is the URL for accessing the SOAP channel servlet, for example, http://<hostname>/wsgwsoaphttp1.
    • <hostname> is the hostname of the Web Server serving requests to the cluster.
    • <sibws url> is the URL of the WSDL-serving servlet of the SIBus, for example, http://<hostname>/sibws.
    • <busname> is the name of the SIBus.
    • <destname> is an arbitrary name used to create a destination for reply messages to the endpoint listener.
  6. The next step is to create the gateway for routing requests from clients to the target Web service. To do this, create a gateway instance with the WSDL of the Web service and an arbitrary namespace to associate with the gateway service WSDL. The command has the following format:

    set busObject [$AdminConfig getid /SIBus:<busname>] 
    set args "{name SwapGateway} {wsdlServiceNamespace http://test.wsgw.ibm.com}"
    set wsgw [$AdminConfig create "WSGWInstance" $busObject $args]
    set args {}
    lappend args [list "WSDLLocation " <targetServiceWSDL>]
    $AdminConfig create "SIBWSWSDLLocation" $wsgw $args "defaultProxyWSDLLocation"

    where the following is true:

    • <busname> is the name of the SIBus.
    • <targetServiceWSDL> is the WSDL description URL of the target service.
  7. Now the gateway instance is created, and the Inbound Service should be created and associated with the gateway. The command has the following format:

    set args " -name <serviceName> -requestDestination <reqDest> \
    	-replyDestination <rspDest> -targetService <outboundService> \
    	-targetBus <busName>"
    set swapgw [$AdminTask createWSGWGatewayService $wsgw  $args]

    where the following is true:

    • <serviceName> is the name of the Inbound Service.
    • <reqDest> is the arbitrary name of a destination created for the service.
    • <rspDest> is the arbitrary name of a reply destination created for the service.
    • <outboundService> is the name of the Outbound Service that the request is to be routed to.
    • <busName> is the name of the SIBus.
  8. The next step is to create an Inbound Port to connect the endpointlistener to the Inbound Service. The command has the following format:

    set args " -name <portName> -endpointListener $eplName -cluster $clusterName "
    $AdminTask addSIBWSInboundPort $inService $args

    where the following is true:

    • <portName> is the name of the port to associate with the Inbound Service.
  9. The Inbound side of the SIBus configuration is complete, and the Outbound Service and Port need to be created. For the Outbound Service, the command has the following format:

    set args [list -name $outBoundService -wsdlLocation <targetServiceWSDL> \
    	-destination <outDest>]
    set outService [$AdminTask createSIBWSOutboundService $busObject $args]

    where the following is true:

    • <targetServiceWSDL> is the WSDL description URL of the target service.
    • <outDest> is the arbitrary name of a destination created for the service.
  10. The final piece of this configuration is to create the Outbound Port. The command has the following format:

    set args [list -name <outboundPort> -destination <portDest> -cluster $clusterName]
    $AdminTask addSIBWSOutboundPort $outService $args

    where the following is true:

    • <outboundPort> is the name of the service port defined in the target service WSDL.
    • <portDest> is the arbitrary name of a destination created for the Port.

Configuring the example SIBus elements

The principles of configuring the SIBus elements with jacl script have been addressed. Listing 5 shows the jacl script used to configure this example.

Listing 5. The example configuration jacl
puts "****Configuring the cluster topology****"

# Pass in the hostname of the deployment manager and the
# database authentication alias
set hostName   [lindex $argv 0]
set authAlias [lindex $argv 1]

set clusterName    "was-cluster-1"
set eplName "SOAPHTTPChannel1"
set busName "ExampleBus"
set inBoundPort "StockPort"
set inBoundService "StockInboundService"
set outBoundService "StockOutboundService"
set outBoundPort "net.xmethods.services.stockquote.StockQuotePort"

# these jndi names are required to use the different datasources - which pin
# the MEs to the individual servers using the CoreGroup policies set up in Phase 3
set jndi0 jdbc/com.ibm.ws.sib/was-cluster-1.000-ExampleBus

# FOR WLM CLUSTER ONLY WITH MULTIPLE DATABASES
set jndi1 jdbc/com.ibm.ws.sib/was-cluster-1.001-ExampleBus
set jndi2 jdbc/com.ibm.ws.sib/was-cluster-1.002-ExampleBus
# END OF WLM CLUSTER ONLY

set args "-bus $busName"

puts "Creating bus, $busName"
if {[catch {set csb [$AdminTask createSIBus $args]} result]} { 
  puts "*** createSIBuses $busName FAILED with : $result ***"
  exit
}

puts "Adding cluster $clusterName to Bus..."
set args [list -bus $busName -cluster $clusterName \
	-datasourceJndiName $jndi0]
if {[catch {set csb [$AdminTask addSIBusMember $args]} result]} { 
  puts "*** Add cluster to $busName FAILED with : $result ***"
  exit
}


# FOR WLM CLUSTER ONLY WITH MULTIPLE DATABASES use $jndi1 and $jndi2
# FOR WLM CLUSTER ONLY WITH MULTIPLE SCHEMA use $jndi0 for both 
# -datasourceJndiName options.
puts "Creating additional Messagine Engines..."
set params [list -bus $busName -cluster $clusterName \
	-datasourceJndiName $jndi1]
$AdminTask createSIBEngine $params
set params [list -bus $busName -cluster $clusterName \
	-datasourceJndiName $jndi2]
$AdminTask createSIBEngine $params
# END OF WLM CLUSTER ONLY

puts "Adding Authentication Alias and Schema to datastore used by Bus..."
set schemaId 0
set datastoreIDs [$AdminConfig list SIBDatastore]
set authAlias [list authAlias $authAlias]
foreach datastoreID $datastoreIDs {
        set uniqueSchema "$schemaName$schemaId"
        set schema [list schemaName $uniqueSchema]
        set attributes [list $authAlias \
                        $schema ]
        $AdminConfig modify $datastoreID $attributes
        incr schemaId
}

set busObject [$AdminConfig getid /SIBus:$busName/]
set clusterObject [$AdminConfig list ServerCluster]

puts "create $eplName EndpointListener on cluster"
set args [list -name $eplName \
	-urlRoot http://$hostName:9080/wsgwsoaphttp1/ \
	-wsdlUrlRoot http://$hostName/sibws]
set epl [$AdminTask createSIBWSEndpointListener $clusterObject $args]

puts "connect $eplName EndpointListener to bus, $busName"
set args [list -bus $busName]
$AdminTask connectSIBWSEndpointListener $epl $args

puts "create Outbound Service, $outBoundService"
set args [list -name $outBoundService \
	-wsdlLocation \
	http://services.xmethods.net/soap/urn:xmethods-delayed-quotes.wsdl  \
	-destination OutBoundServiceDestination]
set outService [$AdminTask createSIBWSOutboundService $busObject $args]

puts "create OutboundPort, $outBoundPort"
set args [list -name $outBoundPort \
	-destination OutBoundServiceDestinationPort \
	-cluster $clusterName]
$AdminTask addSIBWSOutboundPort $outService $args

puts "create Inbound Service $inBoundService"
set args [list -name $inBoundService \
	-destination OutBoundServiceDestination \
	-wsdlLocation \
	http://services.xmethods.net/soap/urn:xmethods-delayed-quotes.wsdl ]
set inService [$AdminTask createSIBWSInboundService $busObject $args]

set args [list -endpointListener $eplName -name $inBoundPort \
	-cluster $clusterName]
$AdminTask addSIBWSInboundPort $inService $args

puts "create Gateway Instance StockGateway"

set args "{name StockGateway} \
	{wsdlServiceNamespace http://test.wsgw.ibm.com}"
set wsgw [$AdminConfig create "WSGWInstance" $busObject $args]
set args {}
lappend args [list "WSDLLocation" \
	http://services.xmethods.net/soap/urn:xmethods-delayed-quotes.wsdl]
$AdminConfig create "SIBWSWSDLLocation" $wsgw \
	$args "defaultProxyWSDLLocation"

puts "create Gateway Service StockGW"
set args " -name StockGW -requestDestination StockGWDest \
	-replyDestination StockGWReplyDest \
	-targetService $outBoundService -targetBus $busName "
set swapgw [$AdminTask createWSGWGatewayService $wsgw  $args]
set inService [$AdminConfig getid /SIBWSInboundService:StockGW/]

puts "create Inbound Port StockGWPort for $swapgw" 
set args " -name StockGWPort -endpointListener $eplName \
	-cluster $clusterName "
$AdminTask addSIBWSInboundPort $inService $args
puts "saving..."
$AdminConfig save
puts "script complete"

Phase 5: Testing the configuration

The gateway is now configured to route SOAP requests to the target Web service. The best way to prove this is to make a request to the Web service. In the absence of a specific client, the Web Services Explorer in Application Developer (or Rational® Application Developer) is the easiest way to do this.

  1. In Rational Application Developer switch to the J2EE perspective, and on the toolbar, use the button to start the Web Services Explorer.
  2. In the Web Services Explorer select the WSDL page button, and select WSDL Main link.
  3. In the text box, enter the WSDL URL for your service. It takes the following form:

    http://<hostname>/wsgwsoaphttp1/soaphttpengine/<busname>/ \
    	<inboundservicename>/<inboundportname>?wsdl

    Note: The hostname could be the address of any one of the servers in the cluster or of an IP sprayer. For this example, the URL would be as follows:

    http://batman:9080/wsgwsoaphttp1/soaphttpengine/ExampleBus/ \
    	StockGW/StockGWPort?wsdl


    Then select the Go button.
  4. This returns an operation link of getQuote. Click the link and enter IBM into the textbox.
  5. This should return the IBM share price, indicating that a Web service request has been successfully routed through the gateway instance on the cluster.
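
If the request fails, a quick way to confirm that the Web services objects were actually created on the bus is to list them from wsadmin. The following jacl sketch assumes the SIBWS configuration type names match the task names used in this article (only SIBWSInboundService is used directly above, so treat the other two as assumptions); the exact output format will vary:

# List the SIBus Web services configuration objects created by the script.
puts "Endpoint listeners:"
puts [$AdminConfig list SIBWSEndpointListener]
puts "Inbound services:"
puts [$AdminConfig list SIBWSInboundService]
puts "Outbound services:"
puts [$AdminConfig list SIBWSOutboundService]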

Conclusion

Working through this document, you gained insight into the preparation of a cluster for SIBus deployment. By following the example through each of the phases, you gained hands-on experience of working with the SIBus for Web services, and you are now prepared for subsequent deployments in a test or production environment.
