
How Does Transaction CPU Behave?

If a customer has Transaction goals[1] - for CICS or IMS - it’s possible to observe the CPU per transaction with RMF. But you have to have the right reporting setup.
This might seem like stating the obvious but it’s worth thinking about: the transaction rate and the CPU consumption have to be for the same work. Now, a Transaction service class doesn’t have a CPU number. Similarly, a Region service class doesn’t have transaction endings. So you have to marry up a pair of service classes:

- A Transaction service class - for the transaction endings.
- A Region service class - for the CPU.
Operationally this might not be what you want to do. Fortunately, you can do this with a pair of report classes instead. There’s another advantage to using report classes: you can probably achieve better granularity - as you can have many more report classes than service classes[2]. So I wrote some code that would only work if the above conditions were met[3]. Unimaginatively, my analysis code is called RTRAN; you feed it sets of Transaction and Region class names. Perhaps I should’ve said you could have e.g. a pair of Transaction report classes and a single Region report class and the arithmetic would still work[4]. But why do we care about CPU per Transaction? There are two reasons I can think of:

- To estimate what a transaction costs.
- To understand how CPU per transaction behaves.
From the title of this post you can tell I think the latter is more interesting. So let’s concentrate on this one. In what follows I used RTRAN to create CSV files to feed into spreadsheets and graphs[5]. Over the course of a week I captured four hills while developing RTRAN. The first thing to note is that CPU per Transaction is not constant, even for the same mix of transactions. This might be a surprise but it makes sense, if you think about it. So let’s think about why this could be.
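As an aside, the core arithmetic is simple enough to sketch. The following Python is a minimal illustration of the pairing calculation - not the real RTRAN code - and the report class names, input row layout, and CSV columns are all assumptions for illustration:

```python
# A minimal sketch of the pairing arithmetic - not the real RTRAN code.
# It assumes you've already extracted per-interval report class data
# (for example from RMF / SMF records) into rows of:
#   (interval, report_class, cpu_seconds, transaction_endings)
# The class names, row layout, and CSV columns are all hypothetical.
import csv
from collections import defaultdict

# Pairs of (Transaction report class, Region report class) to marry up.
PAIRS = [("CICSTRN1", "CICSRGN1"), ("CICSTRN2", "CICSRGN2")]

def cpu_per_transaction(rows):
    """Yield (interval, transaction class, CPU per transaction) per pair."""
    totals = defaultdict(lambda: {"cpu": 0.0, "endings": 0})
    for interval, rclass, cpu_seconds, endings in rows:
        totals[(interval, rclass)]["cpu"] += cpu_seconds
        totals[(interval, rclass)]["endings"] += endings

    for interval in sorted({key[0] for key in totals}):
        for tran_class, region_class in PAIRS:
            endings = totals[(interval, tran_class)]["endings"]
            cpu = totals[(interval, region_class)]["cpu"]
            if endings:  # skip idle intervals rather than divide by zero
                yield interval, tran_class, cpu / endings

def write_csv(rows, path):
    """Write the results out, RTRAN-style, for a spreadsheet to graph."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["interval", "transaction_class", "cpu_per_tran"])
        writer.writerows(cpu_per_transaction(rows))
```

With, say, two Transaction report classes feeding a single Region report class, you’d sum the endings across the Transaction classes before dividing; the division itself is unchanged.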
Two Important Asides
In a nutshell, both these asides amount to “this is not a method to accurately measure the cost of a transaction but rather a way to do useful work in understanding its variability.”
Why CPU Per Transaction Might Vary
Short Term Dynamics
But it’s not just homogeneous variation; Batch can impact CICS, for example. Look at the following graph:

![CPU per transaction versus transaction rate]()

In this case it’s the lower transaction rates that are associated with the higher CPU per transaction. But not all low transaction rate data points show high CPU per transaction. A tiny bit more analysis shows that the outliers are when Production Batch is at its heaviest, competing for processor cache. It’s also the case that these data points are at very high machine utilisation levels, so the “working more to manage the heavy workload” phenomenon might also be in play.
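For the curious, that “tiny bit more analysis” can be as simple as the following hypothetical pandas sketch: flag the intervals where CPU per transaction is high despite a low transaction rate, then check whether they coincide with heavy Production Batch CPU. The file name and column names are made up for illustration:

```python
# A hypothetical sketch, not real RTRAN output: flag low-rate / high-cost
# intervals and cross-check them against Production Batch CPU.
import pandas as pd

df = pd.read_csv("rtran_output.csv")  # interval, tran_rate, cpu_per_tran, batch_cpu

# Define "low rate" and "high cost" relative to this data set's own quartiles.
low_rate = df["tran_rate"] < df["tran_rate"].quantile(0.25)
high_cost = df["cpu_per_tran"] > df["cpu_per_tran"].quantile(0.75)
outliers = df[low_rate & high_cost]

# If the outliers really are batch-driven, their batch CPU should sit
# well above the overall median.
print("Median batch CPU overall:   ", df["batch_cpu"].median())
print("Median batch CPU (outliers):", outliers["batch_cpu"].median())
print(outliers[["interval", "tran_rate", "cpu_per_tran", "batch_cpu"]])
```

Long Term Change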
Well, things do change, and sometimes it’s a noticeable step change, like the introduction of a new version of an application, where the path length might well increase. Or, perhaps, a new release of Middleware[7]. Or, just maybe, because the processor was upgraded[8]. But often CPU per transaction deteriorates gradually, perhaps imperceptibly - for example, as data gets more disorganised.
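Spotting either kind of change lends itself to simple trending. Here’s a hedged sketch, assuming the same hypothetical RTRAN CSV as before: compare a short rolling average of CPU per transaction against a longer baseline. The windows and the 10% threshold are assumptions, not recommendations:

```python
# A sketch of one way to watch for change - the windows, the threshold,
# and the file and column names are all assumptions.
import pandas as pd

df = pd.read_csv("rtran_output.csv", parse_dates=["interval"])
daily = (df.sort_values("interval")
           .set_index("interval")["cpu_per_tran"]
           .resample("D").mean())

recent = daily.rolling(7).mean()     # the last week's behaviour
baseline = daily.rolling(28).mean()  # a longer-term baseline

# A sustained gap suggests a step change (new release, upgrade);
# a slow, steady climb suggests gradual deterioration.
drift = recent / baseline - 1
print(daily[drift > 0.10])
```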
Conclusion

Try to understand “normal” as well as behavioural dynamics, and watch for changes.