Batch Capacity Planning, Part 2 - Memory
MartinPacker
I can't believe it's been almost a week since I wrote Batch Capacity Planning, Part 1 - CPU. Where did the time go?
Re-reading it I'm struck by the overwhelming theme of Batch's unpredictability and lumpiness. This is true of memory, as well, but to a much lesser degree.
Why to a lesser degree? Well, in most systems I look at, memory usage is fairly constant and dominated by big "server" address spaces (such as DB2) or else CICS regions (and their analogue in the IMS world). So the variability of the Batch workload is overshadowed by the constancy of these other address spaces.
I oversimplified there just a tad: the memory footprints of these server address spaces do change from time to time, for example when buffer pools are resized or regions are added. But these changes are step changes and conscious acts of will. Do take them into account, but don't worry about their variability.
But there's an issue anyway with Batch memory analysis: the instrumentation doesn't cleanly attribute all real memory usage to individual workloads.
This has been such a nuisance that my standard Memory graph has to take it into account: I take the Online frames and subtract all the non-workload frame queues - LPA, CSA, SQA and the like - and the Available Frames. These all come from SMF 71. What's left ought to be workload-related memory usage. So I then subtract all the memory usage (R723CPRS-driven) for all the workloads - including Batch and TSO. What's left I call "Other".
For the most part Other is the unattributed remainder - memory the instrumentation doesn't assign to any specific workload.
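The subtraction described above can be sketched as follows. This is a minimal illustration, not my actual reporting code: the parameter names and the numbers are made up, standing in for the SMF 71 frame counts and the SMF 72-3 (R723CPRS-driven) workload figures.

```python
# Hypothetical sketch of the "Other" calculation. Field names are
# illustrative labels, not actual SMF/RMF field names.

def other_frames(online_frames, system_queues, available_frames, workload_frames):
    """Residual frames not attributable to any workload.

    online_frames    - total online real storage frames (from SMF 71)
    system_queues    - non-workload frame queues, e.g. LPA, CSA, SQA (SMF 71)
    available_frames - free frames (SMF 71)
    workload_frames  - per-workload frame counts (SMF 72-3, R723CPRS-driven)
    """
    # Online minus system queues minus free = workload-related memory...
    workload_related = online_frames - sum(system_queues.values()) - available_frames
    # ...and whatever the workloads don't account for is "Other".
    return workload_related - sum(workload_frames.values())

# Example with made-up frame counts:
other = other_frames(
    online_frames=4_000_000,
    system_queues={"LPA": 50_000, "CSA": 40_000, "SQA": 30_000},
    available_frames=1_000_000,
    workload_frames={"DB2": 1_500_000, "CICS": 800_000, "Batch": 300_000, "TSO": 100_000},
)
print(other)  # 180000
```

In this made-up example 180,000 frames are left over as "Other" once the system queues, free frames and all the workloads' usage have been subtracted.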
What I do note is that, generally, there's lots of memory in the Available Frames category - i.e. free - during the Batch Window. So it's almost always true that - from the Memory perspective - Data In Memory (DIM) or Parallelism could be increased.
I find the SMF Type 30 data to be almost entirely useless for memory usage purposes - especially for Batch. About the only thing you can do with it is to use the Virtual Memory fields and pretend that each virtual page allocated is backed by real memory. Which we all know to be a (usually) gross overestimate. So I don't actually do that.
For Batch, though, the biggest user of memory is typically DFSORT. There we have good news: the instrumentation does a nice job of summarising peak memory exploitation - whether Dataspace, Hiperspace or Large Memory Object sorting. Note I say "peak" here. You might be able to do something to turn that into an average, but that would be a little fraught.
All the above has been about actual usage. What about projecting forwards? If you know nothing's going to change the picture is likely to be static. If you think you're going to exploit DIM or Parallelism it's much more difficult, for the same reasons as it was for CPU. But there's an additional reason:
If you were to be very successful at buffering something - especially with only a small number of buffers - then you hold onto the memory for less time. By exploiting DIM well the "area under the curve" could actually go down. Such a case might be using VSAM LSR buffering. I've seen the kinds of speed up where this is likely: 2 million I/Os to under a thousand, for example.
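To make the "area under the curve" point concrete, here's a toy calculation with entirely made-up numbers: a job given more buffer memory runs much faster, so its memory-time product can shrink even though its instantaneous footprint grows.

```python
# Illustrative only: the frame counts and run times are invented,
# as a sketch of the effect described above, not measured data.

def memory_time(frames, minutes):
    """Memory-time product: the 'area under the curve' in frame-minutes."""
    return frames * minutes

before = memory_time(frames=10_000, minutes=120)  # unbuffered: small footprint, 2 hours
after = memory_time(frames=50_000, minutes=10)    # well buffered (e.g. VSAM LSR): 10 minutes
print(before, after)  # 1200000 500000
```

Here a five-fold increase in footprint still cuts the frame-minutes by more than half, because the elapsed time collapses - the kind of effect you'd hope for from a 2-million-I/Os-to-under-a-thousand speedup.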
So, in summary, Memory Capacity Planning for Batch shares the difficulties of CPU. But it also has a few of its own.