

Performance tuning for Java Messaging Service on WebSphere Application Server on z/OS


This article explains how to tune your existing z/OS system to achieve the best performance when running a JMS application over WebSphere Application Server. You will use a JMS single-queue Point-to-Point scenario as the example throughout this article, although most of the tuning tips apply to the JMS Publish-Subscribe case as well.

You will be using WebSphere Application Server V6.0.2 for z/OS and WebSphere MQ V5.3.1 for z/OS (hereafter referred to as WebSphere MQ) as reference products. You will also use JMS message-driven beans (MDBs) because their tuning is closely linked with the server tuning process.

You will learn about the tuning process both when running with the default embedded messaging provider, WebSphere Platform Messaging, and when using the external messaging provider, WebSphere MQ.

This article is targeted at an experienced z/OS user with a good knowledge of WebSphere Application Server (not necessarily on the z/OS platform) and an understanding of the basic concepts of WebSphere MQ (see MP16: WebSphere MQ for z/OS - Capacity planning & tuning).

This article assumes that:

  • You have access to a z/OS system already containing an instance of WebSphere Application Server and WebSphere MQ. The examples in this article refer to a specific installation; consequently, the names of directories, data sets, and processes used in this document might not apply to your system. This article generalizes when possible or warns you of possible discrepancies.
  • You have a single server installation for WebSphere Application Server. If you are using WebSphere Application Server Network Deployment, you should always refer to the installation structure of the node containing the application server where your messaging resources are localized.
  • You have access to an ISPF session and can telnet (or rlogin) to the same system. Telnet access is not mandatory, since you can always use the TSO MVS™ command in your ISPF session to run a UNIX shell script, or write a JCL job that does that for you.

Configuring WebSphere Application Server for WebSphere Platform Messaging

The WebSphere Application Server JMS engine, now called WebSphere Platform Messaging, is 100% Java and provides an implementation of the enterprise service bus (ESB) architecture. The JMS support in WebSphere Platform Messaging is a particular case of the more general ESB functionality. For this reason, you need to perform some extra configuration steps to enable the basic JMS point-to-point scenario to work.

Buses and destinations

Often, programmers regard queues and topics as the simplest entities of a JMS application and as mere containers of variable depth. In reality, the choice and configuration of a destination (the general term for a queue or topic) is in most cases crucial to a well-performing system. This choice is even more important for WebSphere Platform Messaging, whose bus architecture relies heavily on fast and interconnected destinations. This section describes the steps required to create a bus and a destination, and the performance options you can choose along the way.

Let’s suppose you want to create a queue with JNDI name Q1 located in Server1:

  1. First, create a Bus object.
    1. From the WebSphere administrative console left menu, select Service integration => Buses. Click New and give it a name, such as BUS1, as shown in Figure 1.
    2. You now have to make a choice that will affect your performance considerably: whether or not to run a secure bus. If you can live without security, you could gain a speed increase of 10%-30%, depending on the type of workload. Remember that WebSphere Application Server has two other security settings, at the JVM level (Java Security) and at the server level (Global security); a secure bus is by far the most costly option.
    3. Set the High message threshold parameter to a higher number of messages than your workload will exchange at any given time. The default value of 50000 should be acceptable for most cases.
    4. Click OK and save.
      Figure 1. Bus configuration
  2. Assign the server instance to BUS1.
    1. Click on the created BUS1 and select Bus Member => Add.
    2. Select the Server radio button and select the right server instance from the drop-down menu.
    3. Click Next and then Finish. Save it.
  3. In BUS1, define a queue destination object named QD1 located in Server1. A queue destination defines the physical location of a queue and it can host multiple logical queues.
    1. Select the newly created BUS1 and then Destinations on the right panel in the Destination resources category.
    2. Click New and select Queue as destination type. Enter the identifier QD1. Click Next.
    3. Make sure the bus member is the one you defined previously. Click Next => Finish. Save everything.
    4. Inspect the destination you created and make sure that in the Quality of Service area, the Enable producers to override default reliability box is selected. This setting is very important, since you want to be free to choose the reliability setting for your messages rather than use the destination's default.
  4. Create a JMS queue resource Q1 for the Default Provider that belongs to BUS1 and localized in QD1.
    1. From the WebSphere administrative console left menu, click Resources.
    2. Expand the JMS Providers and select Default Messaging.
    3. In the next menu, on the right side under the Destinations header, click on JMS Queue.
    4. Create a new queue, naming it Q1 (JNDI name test/Q1 for example).
    5. Under the Connection section, select the right Bus name to be BUS1. This selection populates the Queue Name drop-down menu.
    6. Select QD1 as queue name. Leave the other parameters as default as shown in Figure 2.
    7. Press OK and save.
Figure 2. Queue configuration

Now, messages sent to Q1 will flow inside the bus BUS1, reaching the destination QD1 on Server1, where they will be stored. With this configuration, the type of Quality of Service (reliability) becomes an attribute of the message and not a setting in the destination. This enables more flexibility in reusing the same destination for different types of services.

The mechanism for choosing the reliability of a message is described in the next section, where you will learn how to tune connection factories.

Connection factories

After populating your bus with destinations and localizing them on specific server instances, you can build the entry point for your user application. A connection factory is the entity that provides a connection to your bus. The characteristics of this connection are crucial for performance and can be set only when the connection factory is created.

  1. From the WebSphere administrative console left menu, click Resources.
  2. Expand the JMS Providers and select Default Messaging.
  3. On the right side, under Connection Factories, you have two possible choices: JMS connection factory and JMS queue connection factory. Performance-wise there is no difference, although you should use the latter if there is a possibility that the client application was developed against the JMS 1.0.2 specification instead of the more general JMS 1.1 specification.
  4. Enter the Name, JNDI name, and Bus name.
  5. Under Advanced messaging, select Read ahead. This setting boosts the performance of your consumer clients.

The JMS specification describes two types of messaging: persistent and nonpersistent. WebSphere Platform Messaging introduces three flavours of nonpersistent messages and two of persistent:

  • Nonpersistent messages
    • Best effort messages are discarded when a messaging engine stops or fails. Messages might also be discarded if a connection used to send them becomes unavailable or as a result of constrained system resources. You can achieve the best performance with this setting. If you can afford to lose messages when your queue is full, then look no further.
    • Express messages are discarded when a messaging engine stops or fails. Messages might also be discarded if a connection used to send them becomes unavailable. Express is usually 20% slower than Best effort, but it has the advantage that you do not lose messages when the queue fills up: the system finds space in the database for the extra messages. Unfortunately, once your queue starts spilling messages to the database, performance drops dramatically and keeps degrading. Express is the default nonpersistent reliability setting.
    • Reliable messages are discarded when a messaging engine stops or fails, which is not very different from the Express Quality of Service. Reliable adds an extra acknowledgment message between server and client to guarantee delivery of the message. Expect a 15%-20% performance hit compared to Express.
  • Persistent messages
    • Reliable messages might be discarded when a messaging engine fails. Each message received by the server is stored in the database, but this is done asynchronously, on a different thread from the one that receives the message. If the server stops before the message has been persisted, the message is lost. Although not 100% safe, this Quality of Service permits efficient use of the database, as writes are done in batches, and several optimizations apply when read and delete operations run concurrently. Reliable is the default for persistent messages.
    • Assured messages will never be discarded. When the producer application returns successfully after the send operation, WebSphere Application Server guarantees the message won’t be lost. You should use assured persistence with transactions. This is the slowest Quality of Service, taking a hit of 30-50% compared to Reliable Persistent.

Clearly, an increase in Quality of Service corresponds to a performance hit. The difference between the bullet-proof Assured persistent and the relaxed Best effort is around a factor of 4, as measured on a zSeries z990 server with a message size of 1 KB. As the message size increases, nonpersistent messaging performance decreases more rapidly than persistent messaging performance.

Through the connection factory creation menu, you can map the two JMS types of messaging to any of the above five types of WebSphere Platform Messaging Quality of Service, as shown in Figure 3.

Figure 3. Quality of Service

After creating the connection factory, select the newly created queue connection factory and click Connection pool properties under Additional Properties. Set the Maximum connections parameter to a number greater than the expected number of client applications connecting to the server.
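
To make the mapping concrete, here is a minimal sketch of a point-to-point producer. It assumes the queue created earlier is bound at test/Q1, that the connection factory is bound at the illustrative JNDI name jms/QCF1 (use whatever names you chose), and that the code runs where those names can be looked up, for example inside the server or in a suitably configured client container. The delivery mode passed on the send call is what the connection factory maps to one of the five reliability levels described above.

    import javax.jms.*;
    import javax.naming.InitialContext;

    public class SimpleSender {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();

            // JNDI names are illustrative; use the names you chose when creating the resources
            QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup("jms/QCF1");
            Queue queue = (Queue) ctx.lookup("test/Q1");

            QueueConnection conn = qcf.createQueueConnection();
            QueueSession session = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueSender sender = session.createSender(queue);

            TextMessage msg = session.createTextMessage("hello");

            // NON_PERSISTENT is mapped by the connection factory to Best effort,
            // Express or Reliable; PERSISTENT maps to Reliable persistent or
            // Assured persistent, as configured in Figure 3.
            sender.send(msg, DeliveryMode.NON_PERSISTENT,
                        Message.DEFAULT_PRIORITY, Message.DEFAULT_TIME_TO_LIVE);

            sender.close();
            session.close();
            conn.close();
        }
    }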

Java heaps

An instance of WebSphere Application Server, with WebSphere Platform Messaging enabled, runs on top of three different processes called regions: the control region (CR), the servant region (SR) and the control region adjunct (CRA). Each region has its own JVM and therefore its independent configuration/tuning:

  • The CR is the heart of WebSphere Application Server and handles all intra-region communications. The CR controls the distribution of incoming messages to the several existent SRs. The CR requires a small amount of code and little heap space to operate. Use a heap size of 256MB or more for the CR.
  • The SR hosts the EJB container and can run as multiple JVMs for scalability. In the messaging world, the SR runs the MDBs. Because the SR is memory-hungry, use a heap size of 512MB or more.
  • There is only one CRA in which the WebSphere Platform Messaging process operates. The CRA is effectively in charge of all the messaging code and requires a substantial heap size. Use a heap size of 512MB or more.

To set the heap size of a region:

  1. From the Servers and Application Servers selection (on the left panel of the WebSphere administrative console) choose your server.
  2. On the Servers right side selection area, under Server Infrastructure, expand Java and Process Management and select Process Definition.
  3. You should see three processes: Adjunct, Control and Servant, corresponding to the CRA, CR, and SR, respectively. Select the process you wish to set the heap size of and then click Java Virtual Machine.
  4. You will see several properties. Change the Maximum Heap Size to the desired value. Again, for a CR process, consider using at least 256 MB. For SR and CRA processes, use at least 512 MB.
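
For reference, the Maximum Heap Size field corresponds to the standard -Xmx JVM option, so, for example, a value of 512 for the SR or CRA is equivalent to starting that region's JVM with -Xmx512m.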

Other tuning tips

The next few sections describe miscellaneous optimizations for WebSphere Application Server:

  • Traces and PMI
  • ORB
  • Thread pools
  • Multiple servant regions
  • z/OS WLM

Traces and PMI

WebSphere Application Server comes with some level of tracing and PMI enabled by default. You should turn off all levels of tracing when performance is critical.

Disable all subsystem component traces of your z/OS system. This is particularly useful when running already at more than 95% of available CPU.

ORB settings

In z/OS, inter-process communications use the object request broker (ORB) service. In cases where the ORB service becomes a bottleneck, it helps to inform the z/OS system that more dedicated system threads are required.

To change the ORB settings:

  1. From the Servers and Application Servers selection on the left panel of the WebSphere administrative console, choose your server instance.
  2. Under Container Services, select ORB Service.
  3. Under Additional Properties, click z/OS additional settings.
  4. On the Workload profile drop-down menu, select LONGWAIT. LONGWAIT can increase the nonpersistent message rate by up to 5%.
  5. To reduce memory utilization, disable ORB local copies by selecting Pass by reference. This is equivalent to adding the following JVM option: -Dcom.ibm.CORBA.iiop.noLocalCopies=1

Thread pools

WebSphere Application Server uses different thread pools for different sets of tasks. For this reason, CPU-intensive and time-consuming activities commonly find themselves short of available threads to execute on, while other thread pools associated with quick, lightweight tasks are full of idle threads. To balance your system for the type of workload you require, vary the settings of the different thread pools; the out-of-the-box values can be inadequate for some configurations.

To set the various thread pools in the WebSphere administrative console:

  1. From the Servers and Application Servers selection on the left panel, choose your server.
  2. Under Communications, select Messaging => Message Listener Service.
  3. In the additional properties, select Thread Pool and set the Maximum Size to the sum of the maxConcurrency values of all your MDBs plus the number of JMS client connections to your server (see the worked example after this list).
  4. From the Servers and Application Servers selection on the left panel, choose your server.
  5. Under Additional Properties, select Thread Pools. There should be already three thread pools defined.
    • Default is generally shared by all container applications. If, for example, you are running multiple MDBs, then make sure you scale up this pool. For a single MDB present in the system, use a minimum size of 30 and a maximum size of 40.
    • SIBFapThreadPool is the service integration bus FAP outbound channel thread pool, and it is used by all JMS applications sending messages to or consuming messages from the server. The optimum value for its maximum size is around 50.
    • WebContainer has no effect on typical JMS applications, unless you have, for example, servlets that use JMS or drive your EJB JMS producers. In that case, make sure its maximum size is larger than 20.
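
As a worked example of the sizing rule in step 3: if you deploy two MDBs with a maxConcurrency of 40 each and expect up to 20 external JMS client connections, the Message Listener Service thread pool Maximum Size should be at least 2 x 40 + 20 = 100.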

Multiple servant regions

Because of the high number of threads involved in a messaging workload, it is not always effective to keep enlarging those thread pools. Thread pools with more than 100 threads usually start to degrade performance, because the system slows down under the excessive synchronization and locking. At that point, if the EJB container becomes too busy, you can run multiple concurrent SRs.

To run a concurrent SR:

  1. From the Servers and Application Servers selection on the left panel, choose your server instance.
  2. Select Java and Process Management => Server Instance.
  3. Select Multiple Instances Enabled and set the Maximum Number of Instances to the desired value. Ideally, choose a value between 5 and 10.

z/OS WLM

During a messaging workload, you should give higher priority to some processes, and z/OS WLM is one way to achieve that. As a general rule for messaging, the CR and the CRA should have a higher priority (STCFAST) than the SR.

Configuring the default provider for WebSphere Platform Messaging

This section covers all those settings that affect the performance of the embedded Java messaging engine. On z/OS systems, the messaging engine runs in its own Java Virtual Machine, often referred to as the CRA. Two important components of the messaging engine run in the CRA: the resource adapter and the Message Store. The resource adapter deals with MDBs; the Message Store is responsible for writing persistent messages to the database. Getting the tuning right on these components often rewards you with big performance gains.

Tune the MDB Activation Specifications

Tuning MDBs is not trivial. An MDB is similar to a stateless session bean, but without a home or remote interface: an application cannot invoke an MDB directly, only indirectly by sending a message to a queue on which the MDB is listening. The MDB lives in the EJB container and as such has to interface with the messaging world through some kind of adapter. On z/OS, the container runs in a different process (the SR) than the messaging engine (the CRA). The code that interfaces the two processes is called the resource adapter (RA, as defined in the JCA 1.5 specification) and lives partly in the SR and partly in the CRA.

To tune the resource adapter for best throughput:

  1. In the WebSphere administrative console, click Resources => Resource Adapters.
  2. Under Resource Adapters, you can find the SIB JMS Resource Adapter used by the MDBs to connect to the messaging engine. This adapter defines an object called activation specification that acts as connection factory for the MDBs.
  3. You can create an activation specification by clicking J2C Activation specifications in the right panel of the SIB JMS Resource Adapter object.
  4. Press New to get a configuration screen where you can select which JNDI name the activation specification refers to. You can leave the JNDI name blank and override that property when you deploy your MDB.

After creating the activation specification, you need to set a few important custom properties, as shown in Figure 4.

Figure 4. Custom properties for the Activation Spec

The two most important properties are the maxBatchSize and the maxConcurrency.

  • maxBatchSize sets the maximum size of the batch of messages that is sent to a single running instance of an MDB. The optimum value varies between 5 and 10.
  • maxConcurrency sets the maximum number of concurrent instances of an MDB present in the container. The higher maxConcurrency is, the more messages MDBs consume in parallel. We found that 40 is an optimum value for intensive messaging workloads.
  • Important: maxConcurrency refers to a single MDB, not to the total number of instances of all MDBs deployed in the EJB container. For this reason, it is dangerous to set it too high. Since each MDB instance runs on a different thread, the number of concurrent MDB instances is more likely to be limited by the container thread pool than by this setting (see the sketch after this list).
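
For reference, here is a minimal sketch of the kind of MDB these settings govern, written in the EJB 2.1 style used by WebSphere Application Server V6. The class name is illustrative, and the bean still needs a deployment descriptor that binds it to the activation specification and to the destination QD1.

    import javax.ejb.MessageDrivenBean;
    import javax.ejb.MessageDrivenContext;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;

    public class SimpleMDB implements MessageDrivenBean, MessageListener {
        private MessageDrivenContext ctx;

        public void setMessageDrivenContext(MessageDrivenContext ctx) {
            this.ctx = ctx;
        }

        public void ejbCreate() {
        }

        public void ejbRemove() {
            ctx = null;
        }

        // Each concurrent MDB instance (up to maxConcurrency) runs onMessage on its
        // own container thread; maxBatchSize controls how many messages are handed
        // to one instance at a time.
        public void onMessage(Message message) {
            try {
                if (message instanceof TextMessage) {
                    String body = ((TextMessage) message).getText();
                    // process the message body here
                }
            } catch (Exception e) {
                // in a real bean, let the container drive rollback and redelivery
            }
        }
    }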

Messaging Engine Data Store

In WebSphere Platform Messaging, a message is first stored in memory, in a structure representing the queue. This cache has a limited size, and the default setting can be insufficient for messaging-intensive workloads.

For nonpersistent messages, the cache size is even more important, since a full queue usually means a loss of messages (Best effort) or a loss of performance (Express and Reliable). You can modify the memory cache size (that is, the queue size) by setting a few properties in the WebSphere administrative console.

To set the queue depths:

  1. Go to Servers => Application Servers and select your server instance.
  2. On the right-side section, select Messaging Engines and then your defined message engine.
  3. Under Additional Properties click Custom properties. Add the following:
    sib.msgstore.cachedDataBufferSize = 40000000
    sib.msgstore.discardableDataBufferSize = 10000000
Figure 5. Message Store Properties

discardableDataBufferSize sets the size in bytes of the in-memory buffer for Best effort nonpersistent messages, while cachedDataBufferSize covers messages of every higher reliability level, including persistent messages. The values above performed well in performance benchmarks with 1 KB messages; a workload with bigger messages might require higher values. Remember that, although these buffers are taken from the Java heap, garbage collection might become an issue when memory becomes scarce.
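
As a rough sizing example, with 1 KB messages a cachedDataBufferSize of 40,000,000 bytes gives room for roughly 40,000 messages in memory before messages start spilling to the data store; scale the buffers accordingly if your messages are larger.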

Another important factor in the usage and tuning of WebSphere Platform Messaging is the mechanism by which messages are written to disk when running with a persistent Quality of Service or when messages spill because a queue is full. WebSphere Platform Messaging uses a database for this purpose, and the default database shipped inside WebSphere Platform Messaging is Cloudscape™. However, it is possible to use a different database.

Choosing the right database for WebSphere Platform Messaging and correctly tuning the database can dramatically improve performance.

To change the default database for the message engine in the WebSphere administrative console:

  1. Create a JDBC resource for the database. In the WebSphere administrative console left panel select Resources => JDBC Provider => New. Configure the new provider:
    • Database type: DB2
    • Provider type: Universal JDBC Driver Provider
    • Implementation type: Connection pool data source
    • Name: your choice
    • Implementation class: com.ibm.db2.jcc.DB2ConnectionPoolDataSource
  2. After creating the JDBC provider, click on it and on the right panel select Data sources. Click New to create a data source. Give it a JNDI name that you will use later on. Make sure you fill in the right values in the DB2 Universal data source properties section. For example, the Database name on our system was DSN810P3 and the Driver type was 2. If not specified, the Server name defaults to localhost. After saving, click Test connection.
  3. Click on the newly created data source and select Connection pool properties under the Additional Properties on the right section. The connection pool parameters of the data source are very important tuning parameters. Set the Maximum connections to a value greater than 50. You can use a bigger number without negative consequences. In our system we had 200.
  4. In the data source panel, also under Additional Properties, make sure that the Custom properties related to traces and trace levels are disabled.
  5. Select Servers => Application servers and then the server instance that you made a member of the bus BUS1.
  6. Under Server messaging select Messaging Engines.
  7. You probably have one entry of the type <host>Node01.<servername>-BUS1. Click on that messaging engine.
  8. On the right side of the following screen, select Data store.
  9. Change the Data source JNDI name to the name you set for the data source in the previous step. You might need to create a new Authentication alias by clicking J2EE Connector Architecture (J2C) authentication data entries. You can set the Schema name to anything if you are using DB2®. In Figure 6, we used IBMWSSIB, which is the default for the Cloudscape database. With Oracle, you must set the Schema name to the user name that you will use to connect to the database. You need to restart the server for the changes to take effect.
Figure 6. Data Store for WebSphere Platform Messaging

Performance measurements showed that DB2 v8.1 is twice as fast as the default Cloudscape. On DB2, DASD I/O times are the limiting factor and any general tuning targeted to improve them will benefit messaging performance. Tuning DB2 on z/OS is an art in itself and plenty of documentation is available.

Configuring WebSphere Application Server for WebSphere MQ

When using WebSphere MQ as an external messaging provider in WebSphere Application Server, you need a few different configuration steps to tune your system for best performance. However, the performance tips reported in Java heaps and Other tuning tips apply to this scenario too.

Creating factories, queues and listener ports

While you can achieve the biggest improvement in performance by properly tuning WebSphere MQ, accepting all the default settings in the configuration of factories and destinations can create several bottlenecks.

To set up your JMS Point-to-Point scenario, you first need to define queue connection factories, queues, and, when using MDBs, listener ports:

  1. In the WebSphere administrative console, expand Resources => JMS Providers and click WebSphere MQ.
  2. In the right section, under Additional Properties, click WebSphere MQ queue connection factories => New and enter the name and the JNDI name. Also enter the Queue manager, Host and Port parameters. If you are running WebSphere MQ on the same system, the queue manager name is enough. The Transport type parameter has a great effect on performance and it is explained in the next section.
  3. Select Enable MQ connection pooling. Also check that the codeset (or CCSID) is the one used in the queue manager definition parameter list. Use 500 for the value.
  4. Press Ok and then Save.
  5. After creating the queue connection factory, click on it and set the Connection pool under Additional Properties. Although WebSphere MQ is internally pooling connections, it is a good idea to set this connection pool to a greater maximum value than the default. As a rule of thumb, the maximum connections value should be the sum of all the client applications attempting to connect at any given time in the busiest scenario, including internal MDBs.
  6. Set the Session Pool in the same way you did for the connection pool of the queue connection factory. This time, you need to count how many clients will be sharing the same connection.
  7. Define a listener port inside the server Messaging => Message Listener Service, as shown in Figure 7. Define the listener port for the application server node, not the deployment manager node.
Figure 7. Message Listener Service

The onMessage method of an MDB executes on a MessageListener thread; these threads come from a pool that is shared and reused across MDB instances and listener ports. You must configure this pool, the Message Listener Service thread pool, to allow enough concurrency to meet your performance requirement. A value of 50 should be enough.

Figure 7 shows where to find this thread pool.

Additionally, each listener port provides a configurable property, Maximum Sessions, which limits the concurrency for that particular listener port. The Maximum Sessions property defaults to 1, which prevents concurrent processing on the listener port. If you wish to process multiple messages simultaneously, you will need to alter this limit. Set it to 50.

The Maximum Messages value is set on the listener port, along with the Maximum Sessions concurrency setting. When Maximum Messages is set greater than the default value of 1, an MDB instance's onMessage method can be called multiple times within the same context. When transaction times are short, this can yield a significant throughput improvement. Set this value to 5.

A listener port is associated with a JMS ConnectionFactory, which itself provides a tunable connection pool. Each live connection maintains a session pool, which is also tunable via the ConnectionFactory’s properties. Each active listener port will use one connection; when a MessageListener thread delivers a message on behalf of a listener port, it will use a session from the session pool that is owned by the listener port's connection.

There is a sequence of pools and limits, where each pool must be sufficient in size to support the pool above. The MessageListener thread pool should be larger than any of the listener port Maximum Session values; how much larger depends on how you want the system to behave under load.

A ConnectionFactory’s connection pool should be larger than the number of listener ports, with capacity to spare for any applications using that ConnectionFactory. The session pool should be large enough to cover the Maximum Sessions value for any listener ports using the connection. So set it to 50, too.
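
As a worked example of this chain: a single listener port with Maximum Sessions set to 50 calls for a MessageListener thread pool somewhat larger than 50, a connection pool with room for that listener port plus any other applications using the same connection factory, and a session pool of at least 50 on the listener port's connection.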

Client vs. bindings transport

Out of the box, WebSphere Application Server is set up to communicate with WebSphere MQ provided you define a CLIENT transport in the JMS factory resources. The presence of the mq.jar and mqjms.jar files inside the $WAS_HOME/lib/WebSphere MQ/java/lib directory guarantees this support.

However, when WebSphere MQ and WebSphere Application Server are located in the same z/OS system, you can use the faster BINDINGS transport that uses a shared memory location instead of TCP/IP calls between WebSphere Application Server and WebSphere MQ.

To enable the BINDINGS transport:

  • Make a few WebSphere MQ native libraries available to WebSphere Application Server by including the libwmqjbind library in the library path of the application server. You can do this by adding the $MQ_HOME/java/lib directory to the WebSphere environment (LIBPATH and java_lib_path) in the WebSphere administrative console. Just make sure you change the LIBPATH for the server node, not the deployment manager node, when running a WebSphere Network Deployment system.
  • Add the right data sets to the CR (V0SR01C) and servant region (V0SR01CS) STEPLIB entries. Check the queue manager job log and identify the full names of the following data sets: SCSQANLE, SCSQAUTH, SCSQMVR1, and SCSQLOAD. In order to make the data sets part of the STEPLIB path of the CR, you need to give authorized program facility (APF) authorization to all four data sets.

Configuring WebSphere MQ

We presume that you already have a queue manager running with a local queue defined in it. That queue should allow PUT and GET and should have a reasonable maximum depth. The queue has to be defined with the SHARE attribute.
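
As an illustration only, a local queue with those characteristics could be defined with an MQSC command along the following lines; the queue name and maximum depth are assumptions to adapt to your own naming and workload:

    DEFINE QLOCAL('WAS.TEST.QUEUE') +
           PUT(ENABLED) GET(ENABLED) +
           SHARE DEFSOPT(SHARED) +
           MAXDEPTH(100000)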

Traces

For best performance, always run with TRACE(G) OFF. This can save as much as 30% of CPU cost.

We found that bringing up the queue manager with traces already switched off can gain you another 5% of CPU, compared to turning the traces off after the queue manager has started. To disable traces, set TRACSTR=NO in the definition parameter dataset for your queue manager (xxxxZPRM) and TRAXSTR=NO in the ….XPRM for the channel initiator.

Manage your messages

Separate short-lived messages from long-lived messages by placing them on different page sets (queues) and in different buffer pools. You should allow enough space in your page sets for the expected peak message capacity.

Process multiple reasonably small messages in one unit of work (that is, within the scope of one commit), but do not usually exceed 50 messages per unit of work.

Combine many small messages into one larger message, particularly where you will need to transmit such messages over channels. In this case, a message size of 30KB would be sensible.

Nonpersistent messages require no log data. NPMSPEED(FAST) channels require no log data for batches consisting entirely of nonpersistent messages, so use different channels for persistent and nonpersistent messaging. Use up to a maximum of four buffer pools of around 50,000 pages; buffer pools of up to 200,000 pages are not unusual. Make sure your system-related messages, long-lived messages, and short-lived messages sit in different buffer pools.

Use nonpersistent local queues when possible. Shared queue performance is significantly influenced by the speed of the Coupling Facility (CF) links employed. Shared nonpersistent queues perform better than local private persistent queues.

Specify a codepage in the queue manager definition parameter dataset (for example, xxxxZPRM), such as QMCCSID=500. The same codepage value has to be set in the definition of the connection factory in WebSphere Application Server (see step 3 above). We noticed that, if the value is not set, performance degrades by around 10%.

WebSphere MQ connections

Every time an application issues MQCONN, WebSphere MQ loads a program module. If this happens frequently, there is a very heavy load on the STEPLIB library concatenation. In this case, it is appropriate to place the SCSQAUTH library in the CSVLLAxx parmlib member LIBRARIES statement and the entire STEPLIB concatenation in the FREEZE statement.

Each conversion from one code page to another requires the loading of the relevant code page conversion table. This is done only once per MQCONN; however, if you have many batch program instances that each process only a few messages, this loading cost and elapsed time can be minimized by including the STEPLIB concatenation in both the LIBRARIES(..) and FREEZE(..) lists.

WebSphere MQ logs

Try to keep as much of the recovery log as possible in the active logs on DASD. If you use devices with built-in data redundancy (for example, Redundant Array of Independent Disks (RAID) devices), you might consider using single active logging.

If you use persistent messages, single logging can increase maximum capacity by 6 - 10% and can also improve response times. To achieve single logging set TWOACTV=NO, TWOARCH=NO and TWOBSDS=NO in the queue manager definition parameter dataset.

Keep at least four active log data sets for dual logging, or two for single logging. Your logs should be large enough that it takes at least 30 minutes to fill a single log. A size of 1,000 cylinders per log is reasonable.

There should be no other data set with significant use on the same pack as an active log: ideally, each of the active logs should be allocated on separate, otherwise low-usage DASD volumes. As a minimum, no two adjacent logs should be on the same volume.

WebSphere MQ channels

A channel initiator processing a large number of persistent messages across many channels might need more than the default 8 adapter TCBs for optimum performance. This is particularly so where the achieved batch size is small, because end-of-batch processing also requires log I/O, and where thin client channels are used. We recommend CHIADAPS(40) and CHIDISPS(25) for such very heavy message workloads.

For nonpersistent messages, choosing NPMSPEED(FAST) improves efficiency, throughput, and response time, but messages can be lost (though never duplicated) in certain error situations. Set BATCHSZ to x and BATCHINT to y where you typically expect x or more messages in y milliseconds and can afford a delay of up to y milliseconds in response time on that channel.
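
For example, if a channel typically carries 50 or more messages every 100 milliseconds and you can tolerate up to 100 milliseconds of extra response time on that channel, BATCHSZ(50) with BATCHINT(100) would be a reasonable starting point.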

Do not allow TCP/IP connections to be kept alive: set CHITCPKEEP(NO). Extra processing is used to keep connections open, and it is not appropriate when your application connects and disconnects very frequently.

Debug performance

Browsing the WebSphere Application Server SR log in the job V0SR01CS is always a good way to make sure no exceptions are thrown by the server while connecting to WebSphere MQ. However, this is not the full story.

Before any message is exchanged, the MDB in the SR holds a connection to the WebSphere MQ queue manager. The MDB is ready to consume a message from the WebSphere MQ queue as soon as the WebSphere Application Server CR notifies it that one is available. But the CR has not yet contacted WebSphere MQ: the CR only contacts WebSphere MQ after the first message arrives in the queue. Therefore, connection problems between WebSphere Application Server and WebSphere MQ will only show up after the test has started. Look for exceptions in the CR V0SR01C job log.

Conclusion

In this article, we explained how to tune your existing z/OS system to achieve the best performance when running a JMS application over WebSphere Application Server. We described the tuning steps for both the embedded messaging engine and WebSphere MQ. We showed several important parameters for the configuration of destinations and connection factories. We highlighted crucial values for several thread pools in the EJB container and in the resource adapter for WebSphere Platform Messaging. We explained the particular architecture of WebSphere Application Server for z/OS and the tuning applicable to it. We listed performance settings for WebSphere MQ channels, queue managers, logs, and connections.

This article described the general tuning steps necessary for an efficient and scalable starting system. To go beyond that, there is no substitute for experience and good design in developing J2EE applications.

