z/OS Connect: A viable alternative to a client connection?

tonySharkey | May 16 2017

In MQ 9.0.1 we introduced the MQ Service Provider for z/OS Connect. This is compatible with MQ for z/OS queue managers that are version 8 or later.

 

A stock check workload serves as an example use of the z/OS Connect feature, but how does this compare with an MQ client performing the same request?

 

To answer this question, we created two configurations:

  • A client able to send HTTP requests to a WAS Liberty server configured for a z/OS Connect ‘2-way’ service (or, to borrow more traditional parlance, a request/reply workload) into a z/OS queue manager whose queues were served by batch server tasks. The partner machine uses curl to drive the HTTP request into the Liberty profile, which then connects to the z/OS queue manager in bindings mode to put the message to a request queue. This request queue is monitored by batch server tasks that get the message, process the data in the request message and generate a reply message on the named reply queue, which is subsequently returned to the partner machine.

 

  • An MQ client application able to connect to a z/OS queue manager via a SVRCONN channel to drive workload on the batch server tasks. The client application is configured to connect to the queue manager, open the request and reply queues, put the request message, get-wait for the reply message, close the queues and disconnect from the queue manager, as in the sketch below.

The MQ client application deliberately acts in this suboptimal manner to simulate the lack of a long-lived persistent connection between the REST API requester and the Liberty server.
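For illustration, the client flow might look like the following sketch using the IBM MQ classes for Java. The queue manager name VTS1 and queues LQ1001/LIXC01 are taken from the configuration shown later; the host name, port and SVRCONN channel name are assumptions.

import com.ibm.mq.MQEnvironment;
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class StockCheckClient {
    public static void main(String[] args) throws Exception {
        MQEnvironment.hostname = "mvs1.example.com";  // assumed host
        MQEnvironment.port     = 1414;                // assumed listener port
        MQEnvironment.channel  = "STOCK.SVRCONN";     // assumed SVRCONN channel

        // One transaction: connect, open both queues, put, get-wait, close, disconnect.
        MQQueueManager qmgr = new MQQueueManager("VTS1");
        MQQueue reqQ = qmgr.accessQueue("LQ1001", CMQC.MQOO_OUTPUT);
        MQQueue repQ = qmgr.accessQueue("LIXC01", CMQC.MQOO_INPUT_SHARED);

        MQMessage request = new MQMessage();
        request.messageType      = CMQC.MQMT_REQUEST;
        request.replyToQueueName = "LIXC01";
        request.writeString("1024");                  // requested reply size in bytes
        reqQ.put(request);

        MQMessage reply = new MQMessage();
        reply.correlationId = request.messageId;      // match the reply to this request
        MQGetMessageOptions gmo = new MQGetMessageOptions();
        gmo.options = CMQC.MQGMO_WAIT;
        gmo.waitInterval = 10000;                     // 10 second get-wait
        repQ.get(reply, gmo);

        reqQ.close();
        repQ.close();
        qmgr.disconnect();
    }
}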

 

Measurement variation:

The measurements were run in a variety of configurations, varying the following:

 

Variable message sizes:

The request message used was typically 50 bytes or less and contained the size of the desired reply message.

The batch servers got the request message, extracted the required reply message size, allocated storage and populated the reply message.

 

The reply messages generated were small (1KB), medium (64KB) and large (512KB).
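To make that server loop concrete, a minimal sketch follows. The article's batch servers are z/OS batch tasks (their implementation language is not stated), so this Java fragment is purely illustrative; it assumes an existing bindings-mode MQQueueManager and uses the imports from the earlier client sketch.

static void serveRequests(MQQueueManager qmgr) throws Exception {
    MQQueue reqQ = qmgr.accessQueue("LQ1001", CMQC.MQOO_INPUT_SHARED);
    MQGetMessageOptions gmo = new MQGetMessageOptions();
    gmo.options = CMQC.MQGMO_WAIT;
    gmo.waitInterval = 30000;                        // assumed server get-wait

    while (true) {
        MQMessage request = new MQMessage();
        reqQ.get(request, gmo);                      // throws MQException (MQRC_NO_MSG_AVAILABLE) when the wait expires

        // The small request message carries the desired reply size.
        int replySize = Integer.parseInt(
                request.readStringOfByteLength(request.getDataLength()).trim());

        MQMessage reply = new MQMessage();
        reply.correlationId = request.messageId;     // let the requester correlate the reply
        reply.write(new byte[replySize]);            // allocate and populate the payload

        MQQueue repQ = qmgr.accessQueue(request.replyToQueueName, CMQC.MQOO_OUTPUT);
        repQ.put(reply);
        repQ.close();
    }
}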

 

Variable number of clients:

Measurements were run with between 10 and 50 requester tasks.

 

Variable number of queues:

When workload was distributed over multiple sets of queues, there was no significant difference in the performance observed. Typically, the transaction rates were not high enough to reach queue limits. As such, this report discusses the performance when using 1 pair of request and reply queues.

 

Configuring the MQ Service Provider for z/OS Connect:

The MQ Service Provider was set up following the instructions in the MQ Knowledge Centre.

In addition, we configured a two-way service in server.xml by adding a number of workloads with specific queues; for example, workload1 used request queue LQ1001 and reply queue LIXC01, as shown in the snippet below:

<zosConnectService id="zosconnMQRR1" invokeURI="/workload1"
  serviceName="workload1" serviceRef="workload1"/>

<jmsConnectionFactory id="mqcf1" jndiName="jms/cf1" connectionManagerRef="cm1">
  <properties.wmqJms transportType="BINDINGS" queueManager="VTS1"/>
</jmsConnectionFactory>

<usr_mqzOSConnectService id="workload1" connectionFactory="jms/cf1"
  destination="jms/wlReqQ1"
  waitInterval="10000"
  receiveTextCCSID="37"
  replyDestination="jms/wlRepQ1"/>

<jmsQueue id="wlReqQ1" jndiName="jms/wlReqQ1">
  <properties.wmqJms baseQueueName="LQ1001" baseQueueManagerName="VTS1"
    targetClient="MQ" CCSID="819"/>
</jmsQueue>

<jmsQueue id="wlRepQ1" jndiName="jms/wlRepQ1">
  <properties.wmqJms baseQueueName="LIXC01" baseQueueManagerName="VTS1"
    targetClient="MQ" CCSID="819"/>
</jmsQueue>
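Once the service is defined, the partner machine drives it with curl; an equivalent request in Java (keeping one language across these sketches) might look like the following. The /workload1 path comes from the invokeURI above; the host, port and payload shape are assumptions.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class InvokeWorkload1 {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://liberty.example.com:9080/workload1")) // assumed host:port
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"size\":\"1024\"}")) // assumed payload shape
                .build();

        // The two-way service puts the body to the request queue and returns the reply.
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}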

 

Specialty processors on z/OS:

The WAS Liberty server hosting the MQ Service Provider for z/OS Connect is a Java-based product and as such is able to exploit zIIP specialty processors. The measurements were monitored via RMF to determine how much of the work could be offloaded to zIIP.

 

The measurements using the WAS Liberty server show costs when:

  • No zIIP processors are available.
  • All eligible workload is offloaded to zIIP. The IIPHONORPRIORITY=NO option in the IEAOPTxx parmlib member ensures that zIIP-eligible work runs on the zIIP unless it is necessary to resolve contention for resources with work that is not zIIP-eligible.

 

The RMF Workload report suggested that 98% of the WAS Liberty costs were zIIP-eligible. In reality, unless the IIPHONORPRIORITY=NO option is specified, some work may be run on general purpose processors if there are insufficient zIIP processors available.

 

Measurements:

The costs reported are primarily for the WAS Liberty server and the MQ channel initiator, with the intent of demonstrating the differences between the two approaches. The MQ queue manager and batch server application costs are largely similar regardless of how the message arrives on the request queue.

 

Small messages

The following table shows the cost of the workload that was run on general purpose processors by the MQ channel initiator or the WAS Liberty server.

 

Transaction Cost in MQ Channel Initiator v WAS Liberty Server

Requesters | Client connection: MQ Channel Initiator | REST API: WAS Liberty Server (no zIIP offload) | REST API: WAS Liberty Server (zIIP offload)
10 | 896 | 8200 | 164
20 | 940 | 8077 | 162
30 | 981 | 7954 | 159
40 | 1010 | 7907 | 158
50 | 1040 | 8180 | 163

 

Note: Costs shown are CPU microseconds per transaction, with measurements run on z13 with 3 dedicated processors running z/OS V2R2.

 

The cost of z/OS Connect when zIIP processors are not available is 8 to 10 times higher than that of the client connection.

 

This is dramatically reduced when zIIPs are available, with the chargeable cost falling to 15-20% of the client connection cost. It should be noted that this does mean any zIIP processors will see increased usage.

 

Achieved Transaction Rate

The transaction rate for the z/OS Connect workload increased from 72 to 150 transactions per second as the number of requesters increased from 10 to 50. Beyond 50 requesters, the throughput continued to increase at a similar rate.

By contrast, the client performance peaked at 2500 transactions per second with 50 clients.

 

Medium messages

Transaction Cost in MQ Channel Initiator v WAS Liberty Server

Requesters | Client connection: MQ Channel Initiator | REST API: WAS Liberty Server (no zIIP offload) | REST API: WAS Liberty Server (zIIP offload)
10 | 920 | 11415 | 342
20 | 962 | 10060 | 302
30 | 1005 | 9338 | 280
40 | 1026 | 9343 | 280
50 | 1037 | 9277 | 278

 

Note: Costs shown are CPU microseconds per transaction, with measurements run on z13 with 3 dedicated processors running z/OS V2R2.

 

The cost of the client connection increased little between the small and medium sized reply messages, whereas the z/OS Connect workload saw an increase of 13-37%, suggesting that the WAS Liberty server is more sensitive to the size of the message payload.

 

Achieved Transaction Rate

The transaction rate for the z/OS Connect workload increased from 65 to 144 transactions per second as the number of requester tasks increased from 10 to 50.

By contrast, the client performance peaked at 1275 transactions per second with 50 clients, at which point it was hitting network limits for this workload due to bandwidth and latency.

 

Large messages

Transaction Cost in MQ Channel Initiator v WAS Liberty Server

Requesters | Client connection: MQ Channel Initiator | REST API: WAS Liberty Server (no zIIP offload) | REST API: WAS Liberty Server (zIIP offload)
10 | 1218 | 19620 | 589
20 | 1233 | 19310 | 579
30 | 1253 | 19420 | 583
40 | 1207 | 19515 | 585
50 | 1256 | 19909 | 597

 

Note: Costs shown are CPU microseconds per transaction, with measurements run on z13 with 3 dedicated processors running z/OS V2R2.

 

As with medium sized messages, the use of large messages with the client application saw only a small cost increase; in this configuration the increase was approximately 300 microseconds per transaction.

 

The z/OS Connect workload cost increased by 8 CPU milliseconds per transaction, which, when offloaded, resulted in a chargeable increase of 250 microseconds per transaction.

 

When offloading all possible work to zIIP, the chargeable cost for the z/OS Connect workload was approximately half that of the client workload.

 

Achieved Transaction Rate

The client connection was rapidly constrained by network bandwidth, even with just 10 clients.

The z/OS Connect workload achieved much closer throughput for large messages and would reach network limits at 90 requester tasks.

 

Does Advanced Message Security have an impact?

Enabling the SPLCAP=YES option on a z/OS queue manager makes a difference to the MQ client measurement regardless of whether a policy is applied to the queues. We observed an increase of approximately 0.3 CPU milliseconds in the cost of each transaction as a result of the client's explicit look-up of a queue policy.

 

When the queue manager was configured to have SPLCAP=YES in the z/OS Connect configuration but no policies were defined, the transaction cost was not affected.

 

If policies were defined, there would be an increase similar to that seen in the MQ for z/OS V901 performance report.

 

Conclusions

There will be situations where z/OS Connect is a viable alternative to an MQ client from a usability perspective – in all measurements, the round trips had sub-second response times. Many modern languages with no native MQ client, such as Node.js, Swift and Go, have rich support for REST API processing.

 

It should be noted that the MQ client possesses certain functionality that z/OS Connect cannot replicate, such as transactional support, which may influence the decision of which configuration to use.

 

The MQ client application was able to process more transactions per second, scaling well until the network bandwidth limits were approached. The z/OS Connect measurements showed a less aggressive increase in throughput but continued to scale with more requester tasks until also hitting network constraints.

 

In an environment where network latency is high, MQ client performance may drop, as there are a number of flows between the client and the server. The REST API may be less affected by network latency, as there are typically fewer flows between the requester and the WAS Liberty server.

 

The costs observed in the client configuration can be significantly reduced if the client is able to connect once, potentially open the queues once, and then process multiple messages before closing the queues and disconnecting. As a guide, more than 70% of the small message cost in the MQ channel initiator relates to MQCONN and MQOPEN – an overhead that rises further when SSL/TLS encryption is used on the channels.
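A sketch of that cheaper pattern follows, reusing the assumed names from the earlier client fragment: the MQCONN and MQOPEN calls are paid once and amortised over many request/reply pairs.

import com.ibm.mq.MQEnvironment;
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class BatchedStockCheckClient {
    public static void main(String[] args) throws Exception {
        MQEnvironment.hostname = "mvs1.example.com";  // assumed, as before
        MQEnvironment.port     = 1414;
        MQEnvironment.channel  = "STOCK.SVRCONN";

        MQQueueManager qmgr = new MQQueueManager("VTS1");               // MQCONN once
        MQQueue reqQ = qmgr.accessQueue("LQ1001", CMQC.MQOO_OUTPUT);    // MQOPEN once per queue
        MQQueue repQ = qmgr.accessQueue("LIXC01", CMQC.MQOO_INPUT_SHARED);

        MQGetMessageOptions gmo = new MQGetMessageOptions();
        gmo.options = CMQC.MQGMO_WAIT;
        gmo.waitInterval = 10000;

        for (int i = 0; i < 1000; i++) {              // amortise connect/open over many transactions
            MQMessage request = new MQMessage();
            request.replyToQueueName = "LIXC01";
            request.writeString("1024");
            reqQ.put(request);

            MQMessage reply = new MQMessage();
            reply.correlationId = request.messageId;  // match this request's reply
            repQ.get(reply, gmo);
        }

        reqQ.close();                                 // MQCLOSE and MQDISC once at the end
        repQ.close();
        qmgr.disconnect();
    }
}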

 

Further cost savings can be made in the MQ client configuration by suppressing the CSQX511I and CSQX512I messages using the “SET SYSTEM EXCLMSG(X511,X512)” command; this saved of the order of 150 microseconds per transaction. To put this into context, with these messages suppressed the medium message transaction cost reduces to 770 microseconds, compared with the REST API cost of 342 microseconds (based on all eligible code running on zIIP).

 

Whilst the cost of the z/OS Connect workload when offloaded to zIIP is attractive compared with the MQ client, it does mean additional load on the zIIPs. For example, the medium message measurements showed a chargeable cost of 342 microseconds per transaction that was not zIIP-eligible, with a further 11703 CPU microseconds that was eligible. If all of that eligible work ran on specialty processors, 90 transactions per second would occupy an entire zIIP processor (90 × 11703 ≈ 1.05 million microseconds, or roughly one engine-second per second). By contrast, with the MQ client running, it would take a rate of 1100 transactions per second to fully utilise a single general-purpose CPU.

 
