Memory considerations

The amount of real and swap memory needed for a zCX instance depends on the memory requirements of the containers and applications that will run inside it. The combined virtual memory requirements of the applications that will run concurrently should not exceed double the amount of real memory provisioned to the zCX instance (maximum ratio of 2:1 virtual to real). You can use Docker and Linux® monitors to determine container and Linux memory use.

Avoid over-defining the memory size, as Linux uses excess memory for file and buffer caches of potentially low-value content. While these buffers can help workloads avoid I/O on a stand-alone system, you should account for them in your sizing to maximize memory being used for the guest.

One option to determine the appropriate memory size is to keep Linux from swapping. Lower the memory size until Linux begins to swap, then increase the size to the next largest increment that does not impact performance.

Dedicated Memory can be assigned to a zCX instance to ensure that the memory is available when the instance starts. For more information on Dedicated Memory, see z/OS MVS Initialization and Tuning Guide.

Calculating real and swap memory allocation for your zCX instance

The total memory available to the zCX instance (both real and swap) must be at least 1 GB greater than the total virtual memory needed for all concurrently running applications.

The appropriate proportion of real to swap memory depends on the amount of real memory provisioned to the zCX instance. For zCX instances with less than 8 GB of real memory, use a 1:1 ratio of real to swap memory. For zCX instances greater than or equal to 8 GB, use a 2:1 ratio of real to swap memory.

For example, if you wish to concurrently run five applications that each require 4 GB of virtual memory, you would want at least 14 GB of real memory and 7 GB of swap memory:

5 applications * 4 GB each = 20 GB
+ 1 GB for zCX = 21 GB total

Applying the general 1:1 real-to-swap rule first yields 11 GB each (21 GB / 2, rounded up). Because that real memory amount is greater than 8 GB, the 2:1 real-to-swap ratio (two-thirds real memory) should be used instead:

21 GB * 2/3 = 14 GB of real memory
21 GB – 14 GB = 7 GB of swap memory
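The sizing rule above can be sketched as a small Python helper (illustrative only; the function name and the 1 GB overhead default are assumptions, not part of zCX):

```python
import math

def size_zcx_memory(app_virtual_gb, zcx_overhead_gb=1):
    """Split the total memory requirement into real and swap per the
    guidance above: apply the general 1:1 rule first; if that yields
    8 GB or more of real memory, use the 2:1 (two-thirds real) ratio."""
    total = app_virtual_gb + zcx_overhead_gb
    real = math.ceil(total / 2)          # general 1:1 rule
    if real >= 8:
        real = math.ceil(total * 2 / 3)  # 2:1 rule for >= 8 GB real
    swap = total - real
    return real, swap

# Five applications at 4 GB each, plus 1 GB for zCX:
print(size_zcx_memory(5 * 4))  # -> (14, 7)
```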

Failure to allocate sufficient real or total memory can cause a zCX instance to run out of memory and reboot. (This would issue message GLZB017I to the zCX instance job log). If this occurs, re-examine the virtual storage requirements of your applications and the subsequent real and swap memory calculations for your zCX instance.

Sizing and other considerations for the z/OS memory storage pool and your zCX instance

Figure 1. z/OS Storage Pools
This shows z/OS storage pools and use based on 2G guest page size, 1M or 4K guest page size, or other address spaces.
Figure 1 illustrates the different z/OS storage pools and use based on page size and preferred (non-reconfigurable) or non-preferred (reconfigurable) storage. Storage is separated into the following pools:
  • 2G fixed
  • 1M and 4K preferred
  • 1M and 4K non-preferred
zCX APAR OA59573 enables use of 2G or 1M fixed pages to back guest memory. Using 2G pages provides the best performance. Both 2G and 1M pages save page table storage compared to 4K pages. A 2G page size for guest memory is not supported when z/OS is a z/VM guest. Neither 2G nor 1M page sizes are supported on zPDT.

The preferred 2G fixed pool size is defined at IPL by the IEASYSxx 2G LFAREA value and cannot be changed after IPL. The remaining memory is split between the preferred and non-preferred 1M and 4K areas.

The non-preferred storage pool size is determined by the IEASYSxx RSU (Reconfigurable Storage Units) value and can be dynamically changed after IPL. The remaining memory determines the size of the preferred 1M and 4K storage pool.

The DISPLAY M=STOR command displays the non-preferred (reconfigurable) and preferred (non-reconfigurable) storage values for the system. The F AXR,IAXDMEM and DISPLAY M=STOR commands display the following metrics:
  • LFAREA-defined total sizes (LFAREA values may be restricted to a subset of the overall available size)
  • In-use allocation
  • Maximum in-use allocation for 1M fixed and 2G pages
  • RSU value

The system uses the IEAOPTxx MCCFXTPR fixed storage threshold control to limit how much of the 1M and 4K pool areas can be fixed at any given time. When the system reaches the threshold, it swaps out address spaces that are using fixed storage in order to protect itself.

More information on z/OS storage pools and page sizes is available in the z/OS MVS Initialization and Tuning Guide and z/OS MVS Initialization and Tuning Reference.

Choosing page sizes for zCX

The chosen page size determines the z/OS storage pool from which the guest memory comes. zCX address spaces are non-swappable and storage containing fixed frames is not reconfigurable. Therefore, a 1M or 4K page size uses storage from the 1M and 4K preferred storage pool and a 2G page size uses storage from the 2G fixed storage pool. The storage cannot come from non-preferred (reconfigurable) storage.

For example, suppose an LPAR has 260 GB of real storage defined as follows:
  • 60 GB of storage defined as 2G pages
  • 30 GB of LFAREA-defined total storage as 1M fixed pages
  • 40 GB of non-preferred (reconfigurable) storage, defined by the RSU parameter
The memory available for zCX would depend on the selected page frame size.
  • For a 2G page size, there would be 60 GB minus what is required for other workloads.
  • For a 1M page size, there would be 30 GB minus what is required for other workloads.
  • For a 4K page size, there would be 160 GB (260 GB minus 60 GB of 2G pages minus 40 GB of RSU storage) minus what is required for other workloads.
The amount of the 160 GB that can be fixed before a pageable storage shortage occurs is controlled by the IEAOPTxx MCCFXTPR parameter.
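The pool arithmetic for the example LPAR above can be made concrete with a minimal Python sketch (the variable names are illustrative, not zCX terms):

```python
# Example LPAR from the text: 260 GB of real storage.
total_gb = 260
lfarea_2g_gb = 60  # 2G fixed pool (IEASYSxx LFAREA)
lfarea_1m_gb = 30  # LFAREA-defined maximum for 1M fixed pages
rsu_gb = 40        # non-preferred (reconfigurable) storage via RSU

# Storage available to back zCX guest memory for each page size,
# before subtracting what other workloads require.
pool_for_page_size = {
    "2G": lfarea_2g_gb,
    # 1M fixed pages are capped by the LFAREA 1M maximum.
    "1M": lfarea_1m_gb,
    # 4K pages come from the preferred 1M/4K pool: everything that
    # is neither 2G fixed nor reconfigurable.
    "4K": total_gb - lfarea_2g_gb - rsu_gb,
}

for size, gb in pool_for_page_size.items():
    print(f"{size} page size: {gb} GB minus other workload use")
```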

You can configure a zCX instance to use a single page size or a range of page sizes, which the system attempts to use in descending size order. Only one page size is used for the guest storage. Select a single page size when you want zCX to use only that size. Selecting multiple page sizes provides higher availability and is useful for sysplex systems that have different memory configurations. If zCX cannot use any of the selected page sizes, it terminates the instance. You can automate message GLZ0024I for notification when zCX cannot use a selected page size.

If you are using a 1M or 4K page size, consider adjusting your IEAOPTxx MCCFXTPR parameter, which sets the percentage of real storage that can be fixed before the system signals a pageable storage shortage (see message IRA405I). Adjustments should depend on how zCX changes the ratio of fixed to available real storage; failure to adjust this threshold appropriately can severely impact the system. The MVS Initialization and Tuning Reference includes more information on IEAOPTxx, as do the Workload Management considerations about limiting real storage usage.

Impact of page sizes

You should plan for the impact of zCX on the z/OS storage pool resources based on the selected page size(s). This includes sysplex failure environments, where a zCX instance may restart on a different system. You may need to add more real memory or related auxiliary storage to the system. You can add more 1M or 4K preferred real storage either by adding more physical memory or by reducing the amount of non-preferred storage. Adding more 2G fixed memory requires an IPL.

zCX protects the system from storage impacts during initialization when using 1M or 4K pages. It queries the system with the SRM SYSEVENT STGTEST API to determine the impact of the guest fixed real memory on system performance and availability. If the impact is critical, zCX terminates; if the impact is minor, initialization continues. This check is not required when using 2G fixed pages, because they are fixed by definition. If zCX instances are assigned Dedicated Memory, IBM recommends using the memory as 2G frames. You can find more information on SYSEVENT STGTEST in z/OS MVS Programming: Authorized Assembler Services Reference SET-WTO, in the section on SYSEVENT STGTEST (Obtain system measurement information).

Regardless of page size, SVC dumps of zCX instances with large memory footprints can put pressure on real and auxiliary storage resources. With APAR OA59573, guest memory is not captured during a stand-alone dump by default.

Summary of frame size options

Consider all systems on which zCX may run.
  • 2G:
    • Choose this option for the best performance and CPU use when you have 2G page space available. The 2G page space must be available on all systems on which zCX may run.
    • Choose this option when taking storage from the 1M and 4K preferred pool would impact the system because that pool would not have enough space available to fix.
  • 2G, 4K:
    • Choose this option for the best performance and highest availability. The 4K pages provide higher availability if 2G pages are unexpectedly unavailable or if zCX temporarily restarts on another system that does not have 2G storage available.
    • A First Fit specification of 2G, 1M, 4K can also be considered, because 1M and 4K pages come from the same preferred storage pool. However, keep in mind that using 1M fixed pages without a large enough LFAREA 1M maximum may impact other users of 1M fixed pages that start after the appliance. Planning for 2G pages with 4K pages as a backup might therefore be the better choice.
  • 1M, 4K:
    • Choose this option when running z/OS as a z/VM guest. Since 2G pages are not supported as a z/VM guest, 1M pages provide the best performance on z/VM.
    • This option makes the storage used by zCX available to other workloads that do not support 2G pages whenever the zCX instance is not running and thus not using that storage.
    • The 4K fallback provides high availability when 1M fixed pages are unavailable due to fragmentation of the preferred 1M and 4K storage pool.
  • 4K:
    • Choose this option for highest compatibility. It has the lowest performance.
    • This option is recommended only while waiting for the next IPL to increase the LFAREA, or when you do not want to dedicate 2G or 1M pages because the zCX instance is not in production or performance sensitive.

Implementing storage limits for zCX instances

You can also limit the amount of real memory that one or more zCX instances can consume by specifying a memory limit (MEMLIMIT) in the WLM resource group associated with zCX instances, as described in the Workload management considerations section in this chapter.

Note, however, that the MEMLIMIT control does not limit the virtual memory used above the bar. Regardless of the page size used, the real storage used by zCX can be limited through WLM resource groups.

Memory demands that reduce available memory for Docker containers

The following demands reduce the memory available to Docker containers in a zCX instance:
  • The memory reserved by zCX for handling Linux kernel crashes
  • The memory reserved by the Linux kernel for its own use
  • The memory used by processes running in the zCX instance in support of Docker
  • The memory used by the Linux kernel to manage disk devices

Linux kernel crash memory reservation

Table 1. Memory reserved for Linux crash handling according to zCX instance memory size

zCX instance memory size     Memory reserved for Linux crash handling
2G to 255G                   512M
256G to 1023G                1G
1024G                        2G

Estimating Linux reserved memory

Use the following formula to estimate how much memory the Linux kernel reserves for its own use.
reserved-in-M = ((18000 * instance-mem-in-G) + 40000) / 1024

Estimating baseline memory usage

zCX instances have processes (Docker daemon, zCX Docker CLI SSH container, and more) that use memory. These processes are always running and their memory cannot be freed for use by other Docker containers. There is no formula for determining how much memory is used for these processes. You can use the Linux free command (free -h) to display the currently available memory for a newly provisioned zCX instance. This provides a starting estimate for the memory available for Docker containers.

Memory Map Area Consideration

zCX instances increase the vm.max_map_count kernel setting from the Linux default of 65530 to 262144. This enables running containers such as Elasticsearch or OpenSearch, which require more memory map areas. When running processes that use more than the default of 65530 memory map areas, the memory allocation for the zCX instance must be 4 GB or higher to account for the increased kernel memory needed to track the larger number of memory map areas.

Using a large number of data/swap disks

As additional data and/or swap disks are added to a zCX instance, the Linux kernel uses more memory to manage these devices. If your zCX instance has many (over 100) data and/or swap devices, you may need to increase your memory allocation to account for the additional memory usage.

Example of evaluating memory of a zCX instance

For a zCX instance provisioned with 4G of memory:
  • 512M of memory is reserved for handling Linux kernel crashes
  • Approximately 110M of memory will be reserved by the Linux kernel for its own use
    • ((18000 * 4) + 40000) / 1024 = 110M
In total, 622M of memory is reserved, leaving 3474M (3.4G) for Linux non-kernel processes.
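The arithmetic above, combining the crash-handling reservation from Table 1 with the kernel-reserve formula, can be sketched in Python (function names are illustrative, not part of zCX):

```python
import math

def crash_reserve_mb(instance_gb):
    """Memory reserved for Linux crash handling (per Table 1)."""
    if instance_gb <= 255:
        return 512
    if instance_gb <= 1023:
        return 1024
    return 2048

def kernel_reserve_mb(instance_gb):
    """Memory the Linux kernel reserves for its own use
    (the formula above, rounded up to the next MB)."""
    return math.ceil((18000 * instance_gb + 40000) / 1024)

def non_kernel_mb(instance_gb):
    """Memory left for Linux non-kernel processes."""
    reserved = crash_reserve_mb(instance_gb) + kernel_reserve_mb(instance_gb)
    return instance_gb * 1024 - reserved

print(non_kernel_mb(4))  # -> 3474 (about 3.4G)
```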
The Linux free command (free -h) can be used in the zCX Docker CLI SSH container to view how much memory is available for Docker containers:
Table 2. Example Linux free command output
           Total    Used    Free    Shared    Buff/cache    Available
Mem:       3.4G     699M    2.3G    5.4M      467M          2.6G
Swap:      997M     0B      997M
In the output, the Available column shows the amount of memory available for Docker containers (2.6G). Therefore, approximately 1.4G of the original 4G specification is used for other purposes and is not available for Docker containers.