As customers continue to use IBM® WebSphere® Process Server (hereafter called Process Server) to manage or automate more and more key business processes, it is often no longer practical to contain all of the necessary modules in a single Gold deployment topology. To address this, you can create a new set of Gold clusters, and distribute the modules between the two targets, but this approach requires careful consideration of the way in which the WebSphere Service Integration Bus (hereafter called SIBus) messaging component is used by the SCA runtime, and application modules. This article presents a series of best practice guidelines for configuring the SIBus in an efficient and maintainable fashion that will help your implementation along the path to success.
The golden topology
This article presents an overview of the golden topology, explains SIBus messaging in a multiple gold topology, and shows you how to configure efficient messaging connections in a multiple gold environment.
Described variously as the Gold or Silver topology, or Full Support and Remote Messaging, the majority of IBM® WebSphere® Process Server and WebSphere Enterprise Service Bus customers use a production configuration in which the service integration bus messaging engines are configured in a separate WebSphere cluster from the one that hosts the Process Server or WebSphere ESB modules, as shown in Figure 1.
Figure 1. The Gold topology
For the purposes of the function described in this document, the Gold and Silver topologies are equivalent in that they both have a separate messaging cluster. This document refers to the Gold topology throughout but the observations made in this document are equally applicable to the Silver topology. The difference between Silver and Gold is that in Silver the CEI function is deployed to the application cluster rather than having its own cluster.
In this topology, the application modules are deployed to the application cluster, and there is a single active messaging engine in the messaging cluster, as shown running in MEServer1, with its passive counterpart waiting ready in MEServer2 in the event of a failure of MEServer1.
The linked set of clusters shown in the illustration provides the ability to provision and manage the individual components of the environment independently; for example, increasing the number of servers in the application cluster in order to meet the scalability or throughput requirements of the various application modules that are installed.
Module scalability issues
While the Gold topology provides excellent scalability, there are an increasing number of customers for whom a single set of linked clusters of this type is not sufficient to meet the demands of the growing set of SCA modules that they have developed to fulfill their business requirements.
In some large production scenarios, it is not uncommon for the number of SCA modules to reach one or two hundred. This can be as a result of an overly fine-grained approach to module or service definition, or the customer's objective to maintain a division of code development or administration that matches their existing team structure.
In a customer scenario that includes a large number of SCA modules, it is often the case that the Gold topology environment struggles to meet the non-functional requirements of the project, for example:
- Servers in the application cluster take an unacceptable length of time to start up, which makes administration tasks such as applying maintenance in the production or development environment extremely time consuming.
- The number of applications deployed in the application cluster means that there is insufficient space to run the applications in the available Java™ heap size.
- The service integration bus messaging engine takes a long time to start up or failover, resulting in an unacceptably long downtime or service outage for the period of time in which the ME is not available.
These problems are caused by the cost of initializing each of the modules and the load on the messaging engine (ME), both of which can become prohibitive as the number of applications increases.
Each SCA application causes a number of destinations to be created on the messaging engine, increasing the load on the startup logic for that ME. The number of destinations created depends upon the complexity of the module and the type of the operations used within it, often resulting in six or more destinations per module. The ME's use of the other services in the WebSphere platform means that the length of time it takes to start the messaging engine varies on the order of n², where 'n' is the number of destinations localized to the messaging engine. So as the number of installed SCA applications increases, the time it takes to start or fail over the ME also increases, but at a faster rate.
With a small to medium number of destinations (up to around 300), the scale of the increase is such that the growth in messaging engine startup time is generally acceptable. However, with more than 300 destinations defined, or equivalently more than about 50-60 SCA modules, the n² behavior can become significant, resulting in ME failover times measured in minutes rather than seconds.
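As a rough illustration of this scaling, consider the following sketch. It is a hypothetical model, not actual product timing data: real startup times depend on hardware, fix pack level, and message store contents, but the shape of the curve is the point.

```python
# Illustrative model only: compares relative ME startup cost as SCA
# modules (and hence destinations) accumulate, assuming n^2 growth.

def destinations_for_modules(modules, dests_per_module=6):
    """Each SCA module typically creates six or more destinations."""
    return modules * dests_per_module

def relative_startup_cost(destinations):
    """Startup cost grows on the order of n^2 in the destination count."""
    return destinations ** 2

small = relative_startup_cost(destinations_for_modules(10))   # 60 destinations
large = relative_startup_cost(destinations_for_modules(60))   # 360 destinations

# Six times the modules, but thirty-six times the startup cost:
print(large // small)  # -> 36
```

The example shows why the degradation is easy to miss in a small test environment and only becomes painful as the module count grows past the 50-60 mark described above.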
Resolving scalability concerns with the multiple gold pattern
The easiest way to resolve the problems caused by deploying so many SCA applications is to separate the applications into two or more distinct sets of Gold topology clusters, with each application deployed to only one of the application clusters, and a distinct ME cluster for each of these application clusters, as depicted in Figure 2.
Figure 2. Partitioning applications into two or more copies of the gold topology
By splitting the set of applications across multiple application clusters, we halve the number of applications that must be started on any given server, and thus improve the startup time of the servers. Similarly, because half of the destinations are localized to each of the messaging engines, the startup and failover time of the MEs should be reduced back to acceptable limits.
The design above can be implemented in one of two ways:
- A cell per gold topology
- Multiple gold topology sets of clusters in a single cell
Approach A (multiple cells each containing a copy of the gold topology) guarantees that the modules will function independently of each other, at the expense of the additional administrative overhead required to install, configure, and maintain the new cell infrastructure. Also, this topology cannot easily be used if there is a requirement for communication between modules that are going to be installed in different cells as extra manual configuration is required to permit that behavior.
Approach B (single cell / multiple clusters) avoids most of the additional configuration overhead, but must be implemented carefully in order to ensure that the Process Server functions of each cluster do not interfere with each other; in particular, note the following points:
- The application clusters must have their 'remote SCA destination' location attribute set to the relevant messaging engine cluster, so that the destinations are localized correctly when SCA modules are installed.
- The messaging engines in each ME cluster should be made members of the same SCA.APPLICATION and SCA.SYSTEM buses. In the case of the system bus this will allow modules in one cluster to invoke those in another cluster, while in the application bus it saves on the administrative cost of creating a separate bus for each copy of the gold topology.
Even assuming that the configuration described above is carried out correctly, the connections will generally not be made quite as you would expect. See the following section, How SIBus connections are actually made, for a description of the connection process.
SIBus messaging in a multiple gold topology
How SIBus connections are actually made
In a simplified form, the desired pattern of connections shown in Figure 2 is as follows:
Figure 3. Simplified view of the desired connection pattern
Each of the connections from the SCA module to the SCA.SYSTEM or SCA.APPLICATION buses is controlled by the settings on a JMS ConnectionFactory, a JMS Activation Specification (for JMS message driven beans) or a J2C Resource Adapter Activation Specification (for internal SCA message driven beans).
By default, these admin objects contain a set of properties that instruct the system to connect the application to any messaging engine on the specified bus (system or application). In the case of a single Gold topology in the cell, this is acceptable because there is only one messaging engine to choose from. However, if there is more than one ME in the bus, as is the case where there are multiple copies of the Gold topology in the same cell, then connections will be workload balanced between the available messaging engines.
The workload balancing of connections between multiple messaging engines means that Module1 will connect to ME2 roughly 50 percent of the time in order to send or receive messages on destinations that are localized to ME1, as shown in Figure 4:
Figure 4. The cross connection pattern
The cross-connection pattern illustrated in Figure 4 causes remote queue points (RQP) to be created in the messaging engine. An RQP is the runtime artifact that handles the storage and management of messages that an application sends to a destination that is not hosted by the messaging engine to which it is connected. In the pattern shown in Figure 4, an RQP would be created on ME2 for the destination hosted by ME1 to which messages are transmitted by Module1.
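To see why roughly half of the connections end up crossed, consider a round-robin sketch of the balancing behavior. This is a hypothetical Python model, not the actual WLM implementation (which weighs runtime ordering of requests); the module and ME names are illustrative.

```python
from itertools import cycle

# Sketch of the default behavior: with no target properties set,
# connection requests are spread across every ME in the bus, regardless
# of where the module's destinations are localized.

messaging_engines = ["ME1", "ME2"]      # both members of the same bus
home_me = {"Module1": "ME1"}            # Module1's destinations live on ME1

round_robin = cycle(messaging_engines)
connections = [next(round_robin) for _ in range(100)]  # 100 connection requests

cross_connected = sum(1 for me in connections if me != home_me["Module1"])
print(cross_connected)  # -> 50, i.e. roughly half the connections are remote
```

Every one of those remote connections pays the extra store-and-forward hop through an RQP described above.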
Disadvantages of the cross-connect pattern
The cross-connection pattern is inefficient as it introduces extra latency in the communication between the application and the required destinations as a result of the reliable protocol by which messages are transferred between two messaging engines.
It is difficult to provide a theoretical prediction of the performance degradation caused by this cross-connect pattern; however, it is easy to see that there is a significant overhead introduced by the extra hop required to transmit messages reliably from the ME to which the application is connected (ME2 in Figure 4) to the ME hosting the destination to which the message is being sent (ME1).
It is thus desirable to configure the environment to avoid the cross-connection pattern for performance reasons.
The workload management algorithm used to initiate a connection to a messaging engine means that if no target properties are specified, then the system will essentially round-robin connections to MEs based on the runtime ordering of requests. This means it is difficult or impossible to predict where a connection will be made, as the target of the connection depends on the exact ordering of all the connection requests in a given server.
This complicates manageability of the system, as there is no fixed set of connections that will be created between the servers; for example, as might be viewed by netstat.
The same behavior can cause serviceability issues: two executions of a particular scenario might use completely different connection patterns, and different runtime behavior may be observed depending on whether an application connects directly to the messaging engine that localizes the destination with which it is communicating, or obtains messages from that destination through an intermediate messaging engine.
An example of this would be when a message that has been sent by one application does not arrive at a queue in order to be consumed by a second application. From an external perspective, the message can appear to be lost; in fact, it is safely stored in a remote queue point at the messaging engine to which the first application is connected, and awaiting transmission to the messaging engine that localizes the destination. Without knowing which ME the first application is connected to, it is extremely time consuming to identify where the message is waiting and hence identify how to resolve the problem.
For consistency reasons, it is thus very desirable to mandate the connection paths by setting up the target connection requirements to prevent cross connection as described below.
Secondary code path
Applications that are cross connected to a messaging engine other than the one that hosts the destination with which they are communicating make use of a different (and more complex) code path than those that connect directly to the messaging engine that hosts the destination.
In early versions of Process Server and WebSphere ESB, some customers experienced functional problems when they inadvertently made use of the cross-connect pattern, which affected the flow of messages inside applications. These problems were often accompanied by first failure data capture records (FFDCs) containing class or method names that refer to remote consumers, remote queue points, or guaranteed delivery.
These problems have been fixed by IBM, so it is very important to ensure that the latest levels of service fixes are applied to all the servers in the cell that are members of one or more buses. Details of these fixes are given in Step 1 below, and it is highly recommended that they be applied to all nodes in the environment.
Configuring efficient messaging connections in a multiple gold environment
The simple workaround to the problems experienced when applications cross connect is to configure the necessary properties of the ConnectionFactory or ActivationSpecification objects so that the system is instructed to connect to the appropriate messaging engine.
There are three steps to this process:
Step 1: Apply the relevant fix packs
As well as the configuration changes carried out by the script supplied with this article, you must apply a number of service updates that address the same cross-connection issues in other areas of the Process Server infrastructure, as described below:
- For Process Server v6.0.2, it is strongly recommended that you run a fix pack that includes WebSphere Application Server v220.127.116.11 or above (Process Server v18.104.22.168 or later), as this includes a number of critical fixes for this scenario, and is a prerequisite for one of the iFixes described below.
- JR29484 and its appropriate pre-requisites: SCA target significance
- JR30198: Failed Event Manager target significance
- PK71156: SIBus recovery fixes
Note that after applying JR29484, you must define a WebSphere variable with a name of SCA_TARGET_SIGNIFICANCE and a value of Required or Preferred at the cluster scope of each of the Process Server application clusters, as described in the documentation for JR29484.
Links to further information on these fixes can be found in the Resources section.
Step 2: Create property definitions (Process Server v6.0.2 only)
Important: You should create a backup of the Process Server configuration before making any configuration changes!
In version 6.1 of WebSphere Application Server (and associated stack products) the properties required to configure this connection behavior are predefined, ready for them to be configured by the administrator. If you are using v6.1 products, then proceed directly to Step 3.
The property definitions are not automatically present if the profile was created using a version of the Process Server prior to v22.214.171.124 (specifically, using a version of the Application Server prior to v126.96.36.199). If the profile was created using Process Server v188.8.131.52 or earlier then you must follow these instructions to define the properties, even if you have subsequently upgraded to Process Server v184.108.40.206 or later.
The configuration steps required to set up the property definitions are described under PK54128, a link to which is provided in the Resources section below. The Web page describes how to define the necessary property definitions using the administrative console; however, it is easier and less error-prone to use the JACL scripts "SIBUpdateResourceAdapter.jacl" and "SIBInternalUpdateResourceAdapter.jacl" provided by the Service team to achieve the same configuration.
The Web page also states that it is necessary to configure the properties before creating the profile; however, the scripts presented here do not have that restriction and can be applied to an existing profile without having to recreate any definitions. A link to download the JACL scripts is provided in the Download section.
The scripts must be run once for each cell in which you wish to define the properties, using the following commands (see the documentation provided at the links in the Resources section for full details).
Listing 1. Running the scripts to create the target property definitions
wsadmin -f SIBUpdateResourceAdapter.jacl
wsadmin -f SIBInternalUpdateResourceAdapter.jacl
Step 3: Configure the target properties
Now that the properties have been defined, you must configure them with the values relevant to your environment. This consists of making the following changes to every JMS ConnectionFactory, QueueConnectionFactory, TopicConnectionFactory, ActivationSpecification, and J2C ActivationSpecification defined at cluster scope in the two (or more) application clusters:
- Set the TargetType to BusMember: We want to connect to the messaging engine in the relevant Process Server ME cluster.
- Set the Target to the name of the ME cluster.
- Set the Significance for Connection Factories to "Preferred" or "Required" as appropriate to your application environment (see later sections for assistance in choosing between the two options). For ActivationSpecifications the significance should always be set to "Required".
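Taken together, the per-resource changes amount to the following sketch. This is plain Python standing in for the JACL/wsadmin logic, not the script itself, and the cluster name shown is hypothetical.

```python
# Hypothetical stand-in for the configuration logic: each messaging
# resource at cluster scope is pointed at the matching ME cluster.

def configure_target(resource, me_cluster, cf_significance="Preferred"):
    resource["targetType"] = "BusMember"
    resource["target"] = me_cluster
    if resource["kind"] == "ActivationSpecification":
        # Activation specs always get Required (see the discussion of
        # Preferred vs Required later in this article).
        resource["targetSignificance"] = "Required"
    else:
        resource["targetSignificance"] = cf_significance
    return resource

cf = configure_target({"kind": "ConnectionFactory"}, "AppCluster1.Messaging")
aspec = configure_target({"kind": "ActivationSpecification"}, "AppCluster1.Messaging")

print(cf["targetSignificance"], aspec["targetSignificance"])  # -> Preferred Required
```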
Clearly this can be a very time consuming activity, so a script called "configure_MultipleGold_Messaging.jacl" is provided by this article in the Download section to automatically carry out the work for you.
Additionally, to avoid having to repeat this activity each time a new application is installed or configuration is carried out, the script updates the admin object templates for the specified objects so that any new definitions automatically pick up sensible default values. So when a new application is installed, any ActivationSpecification objects created at the appropriate (for example, cluster) scope will automatically pick up the associated messaging engine cluster as their target, unless they have been deliberately configured differently by the developer or administrator.
What is the difference between Preferred and Required?
The basic behavior provided by the target significance property is as follows:
- Required: Connect to the target specified in the other target properties (for example a bus member). If that target is not available, then fail the connection.
- Preferred: Try to connect to the target specified in the other target properties. If that target is not available then try to connect to any messaging engine that is running in the same bus.
Note that if the administrator sets "Preferred", and the specified target is not available at the time the connection is created then a connection to a different ME will be created. Subsequently, if the original (preferred) target becomes available again, the connection will not be automatically dropped and re-established; the application will remain connected to the non-preferred ME unless that connection is closed for any reason. This behavior may be undesirable for long-lived connections, or in situations where connection pooling is used, which may motivate use of the "Required" target significance.
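The two significance values can be summarized as a small selection sketch (an illustrative Python model of the semantics described above; the ME names are hypothetical):

```python
def select_me(target, significance, running_mes):
    """Sketch of target-significance semantics at connection time."""
    if target in running_mes:
        return target                   # preferred target is up: use it
    if significance == "Required":
        raise ConnectionError("target ME %s is not available" % target)
    # Preferred: fall back to any running ME in the same bus
    if running_mes:
        return running_mes[0]
    raise ConnectionError("no messaging engine available in the bus")

print(select_me("ME1", "Preferred", ["ME1", "ME2"]))  # -> ME1
print(select_me("ME1", "Preferred", ["ME2"]))         # -> ME2 (fallback)
# select_me("ME1", "Required", ["ME2"]) raises instead of falling back
```

Note that the real product makes the choice only when the connection is created; as described above, a Preferred connection that fell back to another ME stays there until it is closed.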
Choosing whether to use Preferred or Required significance
There are two different types of object on which it is possible to set target properties: a Connection Factory, and an Activation Specification.
An Activation Specification provides the necessary configuration details for a message driven bean (MDB) and includes information on where to connect, and from which destination to consume messages. An Activation Specification may be created by the developer/administrator, or automatically by the SCA infrastructure as part of the definition of an SCA module.
Regardless of how the Activation Specification is created, the location of the destination is fixed at configuration time, so we know exactly which bus member an MDB should connect to. Connecting to any other bus member results in the cross-connect pattern described above, as the MDB tries to retrieve messages from one messaging engine via the one to which it happens to be connected.
In the case where the messaging engine hosting the destination is not running, the MDB will not be able to consume messages even if it connects to a different messaging engine, as that ME will still need to contact the unavailable ME to retrieve the messages. So for ActivationSpecifications we should always set the significance to "Required" for best efficiency. (Note that the configure_MultipleGold_Messaging.jacl script always sets ActivationSpecification significance to "Required" regardless of the parameter specified, which only applies to Connection Factory objects.)
A Connection Factory can be used to send or receive messages, and the destination(s) it will use are not known at configuration time.
If the administrator knows that a particular Connection Factory will only be used to consume messages then the same logic applies as above for activation specifications, and a "Required" target should be set for the bus member that localizes the destination.
If the Connection Factory will only be used to send messages, then it is usually desirable to set the significance to "Preferred" in order to have better fault tolerance (the message will be sent if any messaging engine in the bus is available). Two exceptions to this are the cases below, which require a setting of "Required" to be used:
- Manageability / serviceability, as described in the previous sections on those topics.
- Message ordering, or Event Sequencing. If the "Preferred" setting is used then an application might send messages through different paths if failures occur, which makes it impossible to guarantee the ordering of the messages as they arrive at the destination. If message ordering is a requirement then "Required" should always be used.
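The decision rules above can be condensed into a short helper (a hedged summary of the guidance in this section, not code from the supplied script; the parameter names are illustrative):

```python
def recommended_significance(resource_kind, usage, ordering_required=False):
    """usage is 'send', 'consume', or 'both' for Connection Factories."""
    if resource_kind == "ActivationSpecification":
        return "Required"       # the MDB must reach the hosting ME anyway
    if usage == "consume":
        return "Required"       # same reasoning as activation specs
    if ordering_required:
        return "Required"       # a single path preserves message order
    return "Preferred"          # send-only: favor fault tolerance

print(recommended_significance("ConnectionFactory", "send"))   # -> Preferred
print(recommended_significance("ConnectionFactory", "send",
                               ordering_required=True))        # -> Required
```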
How to run the target property configuration script
Important: You should create a backup of the Process Server configuration before making any configuration changes!
The following listing provides a simple example of how you execute the configuration script provided with this article in order to automatically make the necessary configuration changes described above.
Listing 2. Running the scripts to configure the target property definitions
wsadmin -f configure_MultipleGold_Messaging.jacl CORRECT AppCluster1 Required
... where AppCluster1 is the name of the Process Server application cluster that you have previously configured.
Now synchronize the necessary nodes and then restart each of the servers in the application clusters. You do not need to restart the messaging cluster servers if you don't want to.
Once the configuration changes take effect, any existing (or new) applications will only connect to the messaging engine in the associated messaging cluster, which is more efficient, avoiding the disadvantages highlighted above.
Where/when to run the script
The script must be run once for each Process Server application cluster in the system. So if your environment has two instances of the gold topology, you will need to run the script twice, supplying the appropriate cluster name parameter in each case.
If you have other servers in the cell outside of the Process Server clusters, and those servers include definitions for Connection Factory or Activation Specification objects then you should also configure the target properties of those objects in a similar fashion to that described above by following the instructions for non-clustered servers provided in Listing 6.
You must run the script as part of your standard configuration process, before beginning functional or performance testing in order to ensure that your environment actually matches the one that you will be using in production.
Once you have run the script, it will have configured all existing applications correctly, and created appropriate defaults for any new applications that are installed to the Process Server application cluster and create their resources at that scope.
In most cases, the default will be correct for new applications that are deployed. However, to be sure that the correct configuration is set up, consider re-running the script after each batch of new applications is installed, in order to check that the applied default is indeed correct.
When the script is executed, it outputs a summary of the configuration that has been found and what changes have been made, or are recommended. An example of this summary can be found in the Example console output section below.
Resource scope requirements
In order for the script to function correctly, it must be able to automatically determine which Connection Factory and Activation Specification resources are associated with which application/messaging engine clusters. The way it does this is to assume that all the necessary resources are defined at cluster scope in the application cluster, which is normally the case for resources that are automatically created by SCA modules.
If you have resources that are configured at other scopes (for example at the Cell scope), and these resources are used by applications in the Process Server application cluster, then it is strongly recommended that you redefine those resources at the appropriate application cluster scope before running the script.
Important: The script does include an option to manually specify the scope at which resources are defined (see the comment at the top of the script). However, this is a significantly inferior approach for dealing with cell-scoped resources: without taking a great deal of care, the new property values will be overwritten when the script is executed for the second cluster. For this reason, this article does not discuss that option further, and assumes that all resources are defined at the appropriate cluster scope.
Using the configuration script
The script is executed using the following command:
Listing 3. General syntax of the configuration script
wsadmin.sh -f configure_MultipleGold_Messaging.jacl <mode> <appClusterName> <significance>
Note that the optional scopeOverride parameter is not discussed in detail here, as its use does not represent best practice; see the note in the previous section.
Each of the parameters is discussed in detail below.
Table 1. Description of the mode parameter
- VALIDATE: Checks the values currently specified in the configuration, and advises on the suitability of those values, but does not make any configuration changes.
- COMPLETE: Updates the values of any configuration options that have not already been set, but does not modify those values that have already been configured. This option provides a workaround for the case where all resources are defined at cell scope (and there is more than one application cluster); however, this is not discussed here, as defining resources at cell scope is not best practice in this case.
- CORRECT: Updates all values to match the optimum choices as determined by the script, with support from the other parameters entered by the user. This is the option that will be used in most cases.
Table 2. Description of the appClusterName parameter
- Name of application cluster: Insert the name of the Process Server application cluster for your specific environment. This information is used to find the appropriate resource scope, and also to calculate the name of the associated messaging engine cluster.
Table 3. Description of the significance parameter
- Required: Sets a target significance of Required on any Connection Factory objects that are found, as described above.
- Preferred: Sets a target significance of Preferred on any Connection Factory objects that are found, as described above.
Examples of how to execute the script
In the examples shown below, the deployment manager is assumed to be running on the default port on the same host as the session in which the script is being executed. If this is not the case in your environment, then you must supply the -host and -port parameters to enable the admin client to connect to the deployment manager.
Listing 4. Update configuration to Required
wsadmin -f configure_MultipleGold_Messaging.jacl CORRECT env.AppTarget Required
Updates all the messaging resource definitions defined in the cluster "env.AppTarget" to use target significance of Required, overwriting any existing data. This is the most usual pattern of execution for the script.
Listing 5. Validate that the configuration uses Preferred significance for Connection Factories
wsadmin -f configure_MultipleGold_Messaging.jacl VALIDATE env.AppTarget Preferred
Checks the configuration of the messaging resource adapter resources in the "env.AppTarget" cluster, printing the results to the console. No configuration changes are made/saved.
Activation Specifications will be checked to see if they match the calculated home of the relevant destination, and have a Required target (see Choosing whether to use Preferred or Required significance). JMS Connection Factory objects will be checked to see if they are set to BusMember/Preferred with the name of the bus member being that of the associated ME cluster for the app cluster "env.AppTarget".
Important: Notice that when running the script using the VALIDATE option, you will see one or more of the following warning messages when the script exits:
--------------------------------------------------------
No changes have been saved to the configuration.
--------------------------------------------------------

WASX7309W: No "save" was performed before the script "configure_MultipleGold_Messaging.jacl" exited; configuration changes will not be saved.
These messages are confirmations that no updates have been made to the configuration files and can be safely ignored.
Listing 6. Update the configuration of a non-clustered server (all on one line)
wsadmin -f configure_MultipleGold_Messaging.jacl CORRECT server1 Required /Node:myNode01/Server:server1/
In addition to the standard Process Server application clusters, some users may also have one or more non-clustered servers as part of the same cell; for example, to host adapters that cannot be run in a clustered environment.
These non-clustered servers may also have messaging resources defined at their Server scope that must be configured in a similar way to those of the application cluster, which can be achieved using the example shown above.
Activation Specifications configured at this server scope will automatically have the name of the bus member that hosts the destination calculated correctly.
To define the target bus member for JMS ConnectionFactory objects defined at this server scope, the Administrator must manually configure a WebSphere Variable at server scope with the name "JMSCF_TARGET_BUS_MEMBER", and the value being the target bus member name. For example, to target all JMS CF objects at that same non-clustered server, you must use the "node.server" syntax; for example, "myNode01.server1" for the server nominated in the script invocation shown above. If the target bus member is a cluster, then the cluster name is used as the value of the variable.
If the JMSCF_TARGET_BUS_MEMBER variable is not set, then Connection Factory objects will be left with an empty target field, so connections will be workload balanced across the available set of messaging engines at runtime.
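The value format for JMSCF_TARGET_BUS_MEMBER can be sketched as follows (an illustrative helper, not part of the supplied script; the node, server, and cluster names are the examples used above):

```python
def bus_member_name(node=None, server=None, cluster=None):
    """Sketch of the JMSCF_TARGET_BUS_MEMBER value format: 'node.server'
    for a non-clustered server, or just the cluster name for a cluster."""
    if cluster:
        return cluster
    return "%s.%s" % (node, server)

print(bus_member_name(node="myNode01", server="server1"))  # -> myNode01.server1
print(bus_member_name(cluster="mattEnv.Messaging"))        # -> mattEnv.Messaging
```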
Known restrictions and limitations
- ConnectionFactory and ActivationSpecification names (display name, not JNDI name) that contain spaces make it difficult or impossible to parse the list of items. The recommended workaround is to remove the spaces.
- If the resources are defined at Cell scope, then it is impossible for the script to determine which resources should be associated with which cluster. The best practice is to define resources at the appropriate cluster scope.
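Before running the configuration script, you may want to confirm which messaging resources are actually defined at a given cluster scope. A small wsadmin JACL sketch (the cluster name mattEnv.AppTarget is an example from this article's environment):

```jacl
# Sketch: list the messaging resources visible at a given cluster scope,
# so you can confirm they are not defined at Cell scope instead.
set cluster [$AdminConfig getid /ServerCluster:mattEnv.AppTarget/]

# Activation specifications and connection factories at that scope
puts [$AdminConfig list J2CActivationSpec $cluster]
puts [$AdminConfig list J2CConnectionFactory $cluster]
```

Any resource that does not appear in this output, but does appear when listing without a scope argument, is a candidate for being re-created at the correct cluster scope.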
Examples of the script at work
Administration console examples
Prior to executing the steps described in this document, an Activation Specification or Connection Factory will not, by default, have a target set, as shown in the following screenshot of a JMS Activation Specification in the administrative console.
Figure 5. The default (un-configured) values of the target properties in the administration console
Notice that the Target field is empty, while the Target Type and Target Significance fields are at their default values of "Bus member name" and "Preferred". The values for Target Type and Target Significance are not taken into account until you enter a name in the Target field.
Now, run the configuration script against the appropriate application cluster (which you can see at the top of the screenshot shown above).
wsadmin -f configure_MultipleGold_Messaging.jacl CORRECT mattEnv.AppTarget Required
If you log out of the console and log back in again (to force the console to pick up the new configuration changes), then look at the same object, you will see that the script has updated the values, as shown below:
Figure 6. Target properties after configuration by the script
Example console output
The following shows an example console output of the script being run.
WASX7209I: Connected to process "dmgr" on node KADGINCellManager01 using SOAP connector; The type of process is: DeploymentManager
WASX7303I: The following options are passed to the scripting environment and are available as arguments that are stored in the argv variable: "[CORRECT, mattEnv.AppTarget, Required]"
Precondition validation:
 - Found application cluster OK
 - Found associated ME cluster name: mattEnv.Messaging
--------------------------------------------------------
Found Platform Messaging Component SPI Resource Adapter
--------------------------------------------------------
* Found activation spec: sca/SOACoreIVT/ActivationSpec
    busName: SCA.SYSTEM.KADGINCell01.Bus
    destination: sca/SOACoreIVT
    target: mattEnv.Messaging (current config)
    targetType: BusMember (current config)
    targetSignif: Required (current config)
    BusMember: mattEnv.Messaging (calculated)
* Found activation spec: jms/act/adapterServerAS
    busName: SCA.APPLICATION.KADGINCell01.Bus
    destination: myQueue
    target: KADGINNode01.server1 (current config)
    targetType: BusMember (current config)
    targetSignif: Required (current config)
    BusMember: KADGINNode01.server1 (calculated)
* Found activation spec: corespi/topicTest
    busName: SCA.APPLICATION.KADGINCell01.Bus
    destination: Default.Topic.Space
    target: mattEnv.Messaging (current config)
    targetType: BusMember (current config)
    targetSignif: Required (current config)
    durability: NonDurable
* Found an activation spec TEMPLATE on the RA
    - Setting value of targetType to BusMember
    - Setting value of target to mattEnv.Messaging
    - Setting value of targetSignificance to Required
--------------------------------------------------------
Found SIB JMS Resource Adapter
--------------------------------------------------------
* Found activation spec: jms/cei/QueueActivationSpec
    jndiDest: jms/cei/EventQueue
    busName: CommonEventInfrastructure_Bus
    destination: mattEnv.Messaging.CommonEventInfrastructureQueueDestination
    target: mattEnv.Messaging (current config)
    targetType: BusMember (current config)
    targetSignif: Preferred (current config)
    BusMember: mattEnv.Messaging (calculated)
    - WARNING: Target significance is not optimal in current config
    - FIXED: Target significance updated to Required
* Found activation spec: actspec/MattTest
    jndiDest: jms/fred
    busName: SCA.APPLICATION.KADGINCell01.Bus
    destination: adapterQueue
    target: anIncorrectTarget (current config)
    targetType: ME (current config)
    targetSignif: Preferred (current config)
    BusMember: KADGINNode01.server1 (calculated)
    - WARNING: Target name is not optimal in current config
    - FIXED: Target name updated to KADGINNode01.server1
    - WARNING: Target type is not optimal in current config
    - FIXED: Target type updated to BusMember
    - WARNING: Target significance is not optimal in current config
    - FIXED: Target significance updated to Required
* Found activation spec: jms/act/TopicAS
    jndiDest: jms/myTopic
    busName: SCA.APPLICATION.KADGINCell01.Bus
    destination: Default.Topic.Space
    target: mattEnv.Messaging (current config)
    targetType: BusMember (current config)
    targetSignif: Required (current config)
    durability: NonDurable
* Found a JMS CF: jms/sampleCF
    target:  (current config)
    - WARNING: Target name is not optimal in current config
    - FIXED: Target name updated to mattEnv.Messaging
    targetSignif: Preferred (current config)
    - WARNING: Target significance is incorrect
    - FIXED: Target significance updated to Required
    targetType: BusMember (current config)
* Found an activation spec TEMPLATE on the RA
    - Setting value of target to mattEnv.Messaging
    - Setting value of targetSignificance to Required
    - Setting value of targetType to BusMember
* Found a JMS TEMPLATE on the RA
    - Setting value of Target to mattEnv.Messaging
    - Setting value of TargetSignificance to Required
    - Setting value of TargetType to BusMember
* Found a JMS TEMPLATE on the RA
    - Setting value of Target to mattEnv.Messaging
    - Setting value of TargetSignificance to Required
    - Setting value of TargetType to BusMember
* Found a JMS TEMPLATE on the RA
    - Setting value of Target to mattEnv.Messaging
    - Setting value of TargetSignificance to Required
    - Setting value of TargetType to BusMember
--------------------------------------------------------
Save any configuration changes that were made.
--------------------------------------------------------
This article has described a series of steps by which you can ensure that your large-scale WebSphere Process Server environment follows best practice when more than one Gold deployment topology is present in a single WebSphere cell. We discussed the potential pitfalls of the way in which Service Integration Bus messaging is used in this type of environment, and presented a script that automatically configures your messaging resources to conform to best practice.
By following the steps described in this article, you take advantage of the lessons learned working with leading-edge customers who have implemented this type of topology, and help to ensure that your deployment will be a success!
- IBM Redbook: Production Topologies for WebSphere Process Server and WebSphere
This redbook is the primary reference for Process Server topology information (the term Full Support is defined in this redbook).
Clustering WebSphere Process Server V6.0.2, Part 1: Understanding the topology
The terms Gold and Silver topology are defined in this developerWorks article.
Get products and technologies
- Fix packs for WebSphere Process Server v6.0.2
Find information on the latest fix packs for WebSphere Process Server v6.0.2.
Provides the ability to configure target significance for certain internal SCA connections.
Modifies the Failed Event Manager to make use of the SCA target significance property.
Describes how to create the target properties used in this article. It is recommended that you use the JACL scripts linked in the Download section of this article instead.
Contains a series of fixes related to service integration bus recovery behavior.
- Code sample: Target properties configuration script (configure_MultipleGold_Messaging.jacl)
- Code sample: SIBus resource adapter property scripts (SIBUpdateResourceAdapter.jacl, SIBInternalUpdateResourceAdapter.jacl)