The cache and memory affinity optimizations are functions
of the AIX® Dynamic System Optimizer that improve performance by minimizing the amount of data
that crosses affinity domains.
The IBM® Power Systems server divides its processor
and memory units into symmetric multiprocessing (SMP) affinity
domains. An affinity domain is a group of processing units
that have similar memory and cache access times. A processor socket
is an example of an affinity domain. System performance is close to
optimal when the amount of data crossing between the domains is minimized.
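The effect of domain crossings can be illustrated with a minimal cost model. The domain IDs, latencies, and workload below are invented for illustration; they are not AIX measurements.

```python
# Illustrative model (not AIX code): an access costs more when a thread's
# data lives in a remote affinity domain than when it lives in its own.
LOCAL_ACCESS_NS = 10    # hypothetical latency within a thread's own domain
REMOTE_ACCESS_NS = 40   # hypothetical latency when crossing domains

def access_cost(thread_domain, page_domain):
    """Return the modeled latency for one memory access."""
    return LOCAL_ACCESS_NS if thread_domain == page_domain else REMOTE_ACCESS_NS

def total_cost(accesses):
    """Sum modeled latency over (thread_domain, page_domain) access pairs."""
    return sum(access_cost(t, p) for t, p in accesses)

# A workload whose threads mostly touch memory in another domain...
scattered = [(0, 1)] * 90 + [(0, 0)] * 10
# ...versus the same workload after its data is placed in the local domain.
consolidated = [(0, 0)] * 100

print(total_cost(scattered))     # 3700
print(total_cost(consolidated))  # 1000
```

In this toy model, eliminating the cross-domain accesses cuts the total modeled latency by more than two thirds, which is the intuition behind minimizing data that crosses domains.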
Cache affinity
Active System Optimizer (ASO) analyzes
the cache access patterns based on information from the kernel and Performance Monitoring Unit (PMU)
to identify potential improvements in cache affinity by moving threads of workloads closer together.
When this benefit is predicted, ASO uses algorithms to estimate the optimal size of the affinity
domain for the workload, then uses kernel services to restrict the workload to that domain. Accessing
nearby cache locations improves performance compared to accessing cache locations that are farther away.
In AIX version 7.2.5 and later,
multi-threaded single-process workloads and multi-process single-threaded workloads are considered
for the cache affinity optimization.
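The sizing step above can be sketched as follows. This is not ASO's actual algorithm; the domain levels and capacities are hypothetical, and the real optimizer predicts benefit from kernel and PMU data before restricting anything.

```python
# A sketch of choosing the smallest affinity domain that can hold a
# workload's threads, so that cache lines the threads share stay close.
# Hypothetical hierarchy: core, socket, whole system (sizes invented).
DOMAIN_SIZES = {"core": 8, "socket": 32, "system": 128}

def smallest_fitting_domain(num_threads):
    """Pick the tightest domain level whose capacity covers the workload."""
    for level in ("core", "socket", "system"):
        if num_threads <= DOMAIN_SIZES[level]:
            return level
    return None  # workload larger than the machine; leave it unrestricted

print(smallest_fitting_domain(6))    # core
print(smallest_fitting_domain(20))   # socket
print(smallest_fitting_domain(200))  # None
```

The design point the sketch captures is the trade-off ASO navigates: a smaller domain keeps the threads' shared cache lines closer together, but the domain must still be large enough to run all the threads.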
Memory affinity
After a workload is identified and
optimized for cache affinity, ASO monitors the memory access patterns of the process-private memory
of the workload. If the workload might benefit from moving its process-private memory
closer to the current affinity domain, hot pages, that is, frequently accessed memory
allocations, are identified and migrated into that domain by using software tools.
Single-threaded processes are not considered for this optimization because the kernel already
adjusts the affinity of the process-private data when the thread is moved to a new affinity domain.
Only workloads that fit within a single scheduler resource allocation
domain (SRAD) are considered.
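The hot-page step can be sketched in simplified form. The threshold, page addresses, and domain mapping below are assumptions for illustration, not AIX internals; the real criteria come from monitored access patterns.

```python
# A sketch of identifying "hot" pages: pages whose observed access counts
# exceed a threshold become candidates for migration into the workload's
# current affinity domain.
from collections import Counter

HOT_THRESHOLD = 100  # hypothetical cutoff; ASO's real criteria differ

def find_hot_pages(access_trace):
    """Count accesses per page address and return the frequently accessed ones."""
    counts = Counter(access_trace)
    return sorted(page for page, n in counts.items() if n >= HOT_THRESHOLD)

def migrate(hot_pages, page_to_domain, target_domain):
    """Re-home each hot page in the workload's current affinity domain."""
    for page in hot_pages:
        page_to_domain[page] = target_domain
    return page_to_domain

# Page 0x1000 and page 0x3000 are accessed often; page 0x2000 is not.
trace = [0x1000] * 150 + [0x2000] * 30 + [0x3000] * 120
hot = find_hot_pages(trace)
print(hot)  # [4096, 12288]
print(migrate(hot, {0x1000: 1, 0x2000: 1, 0x3000: 2}, 0))
```

Only the frequently accessed pages are moved; rarely touched pages stay where they are, which keeps migration cost proportional to the pages that actually matter for performance.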