IBM Support

Using -Xgc:preferredHeapBase with -Xcompressedrefs

Question & Answer


Question

"Why does the JVM report a native out-of-memory (NOOM) when using compressed references? I am using a 64bit JVM and I clearly have plenty of memory left. How can I resolve this problem?"

Cause

*In this note, Java versions are written as Version.Release.ServiceRelease.FixPack.


For example, Java 7.1.4.6 is the same as Java 7.1 SR4 FP6, which is the same as Java 7 R1 SR4 FP6.

The IBM JVM automatically uses compressed references when the maximum heap size is less than 25GB. This automatic behavior was introduced in Java 6.1.5.0 and Java 7.0.4.0*. Compressed references (CR) decrease the size of Java objects, making better use of available memory space. This better use of space results in improved JVM performance. *(Java 7.0.1.1 and later uses compressed references by default on z/OS)

See Introducing WebSphere Compressed Reference Technology for detailed information on how Compressed References work.

"When using compressed references, the size of the field used in the Java object for the Class Pointer and the Monitor/Lock is 32 bits instead of the 64bits that would be available in non-compressed mode. Because we are using 32 bits to store the location of these, and they are located in native (non-Java heap) memory, they must be allocated in the first 4GB of the address space - the maximum range we can address with the 32 bits." ~IBM Java Development team

If the Java heap itself is small (-Xmx), the JVM may allocate it in the lower 4GB of address space along with the Class Pointers and Monitors/Locks. If these Class Pointers, Monitors/Locks and Java heap (if included) cannot fit in the lower 4GB, a native out of memory (NOOM) will be thrown.

Answer

Why Use Compressed References?

Below the 4GB mark, the JVM does not have to perform any compression/decompression of the address pointer at runtime. Therefore, the best performance will be attained if the Class Pointers, Monitors/Locks and Java heap can all be contained comfortably within the lowest 4GB of the address space.
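The runtime cost mentioned above can be illustrated with a simplified sketch of how a 64-bit address is packed into a 32-bit reference field. The constants and method names here are illustrative assumptions, not the actual JVM implementation:

```java
// Simplified illustration of compressed references (not the actual JVM code).
// With a shift of 0 (everything below the 4GB mark), no arithmetic is needed at all;
// a non-zero shift trades a shift operation per access for a larger addressable range.
public class CompressedRef {
    static final int SHIFT = 3; // 8-byte object alignment: 32 bits + shift 3 => up to 32GB

    static int compress(long address) {
        return (int) (address >>> SHIFT); // store only the significant bits in a 32-bit field
    }

    static long decompress(int ref) {
        return ((long) ref & 0xFFFFFFFFL) << SHIFT; // rebuild the full 64-bit address
    }

    public static void main(String[] args) {
        long addr = 0x100000040L; // an 8-byte-aligned address just above the 4GB mark
        System.out.println(decompress(compress(addr)) == addr); // round-trips correctly
    }
}
```

This is why keeping everything under 4GB is fastest: the shift (and the compress/decompress work with it) disappears entirely.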

Determining Address Location of Java Heap Memory

To verify if the Java heap has memory regions below the 4GB mark, check the "Object Memory" section in the javacore:



Convert the "start" address from the hex value to a GB value. In the example below, 0x000000000F010000=0.23GB which is below the the 4GB (0x0000000100000000) mark.

 

Setting the Preferred Heap Base with -Xgc:preferredHeapBase

Starting with Java 6.1.6.0 and Java 7.0.5.0, the JVM determines whether the Java heap will fit comfortably in the lower 4GB. If it is too large, the JVM automatically allocates the Java heap above the 4GB mark (APAR IV37797).

 
NOTE: On IBM System z platforms (that is, z/OS and Linux on IBM Z), the automatic shift of the heap above the 4GB address space does NOT occur, because on these platforms there is an additional performance penalty associated with higher shift values.

To resolve the issue on these platforms, you can:
1) use -Xmcrs (see the section "Reserving Low-Memory Space with -Xmcrs" to determine an optimal value)
2) use -Xnocompressedrefs (this will require additional heap space and may impact performance)
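The heap-placement decision described above (the post-IV37797 behavior, on platforms where the automatic shift applies) can be sketched roughly as follows. The method name and the fixed low-memory reserve are illustrative assumptions, not the JVM's actual algorithm:

```java
// Rough sketch of the heap-placement decision (illustrative, not the JVM's actual code).
public class HeapPlacement {
    static final long FOUR_GB = 0x100000000L;

    // Returns a suggested heap base: 0 means "allocate low", FOUR_GB means "allocate above 4GB".
    static long preferredHeapBase(long maxHeapBytes, long lowMemoryNeededBytes) {
        if (maxHeapBytes + lowMemoryNeededBytes < FOUR_GB) {
            return 0L; // heap plus Class Pointers/Monitors fit comfortably below 4GB
        }
        return FOUR_GB; // move the heap up, keeping low memory free for 32-bit-addressed artifacts
    }

    public static void main(String[] args) {
        // A 2GB heap plus 512MB of low-memory artifacts fits below 4GB
        System.out.println(Long.toHexString(preferredHeapBase(2L << 30, 512L << 20)));    // 0
        // A ~3.8GB heap does not, so it is placed at the 4GB mark
        System.out.println(Long.toHexString(preferredHeapBase(3900L << 20, 512L << 20))); // 100000000
    }
}
```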


However, in earlier Java 6.1 and Java 7.0 versions (earlier than Java 6.1.6.0 and Java 7.0.5.0), if the Java heap cannot fit in the lower 4GB, a NOOM will occur. To avoid this problem, add the generic JVM argument -Xgc:preferredHeapBase=<address> to ensure the Java heap is allocated above the 4GB address space. This leaves more room for the Class Pointer and Monitor/Lock memory.


  • Example:

    -Xgc:preferredHeapBase=0x100000000

    This locates the Java heap starting at the 4GB mark, leaving the lower 4GB of the address space for the Class Pointers, Monitors/Locks, and other native allocations.

 

Increase Maximum Heap Size to Force Heap Allocation Above the 4GB mark

Another way to ensure that the heap is allocated above the 4GB mark (Java 6.1.6.0 and Java 7.0.5.0 and later) is to set a maximum heap size equal to or greater than 4GB. For example, -Xmx4G ensures that the heap must be allocated above the 4GB mark. This does not work in earlier versions of the JVM, since those versions allowed the heap to straddle the 4GB mark, placing part of the memory above and part below (fixed as part of APAR IV37797).

Further Investigation

If a NOOM is still encountered after setting -Xgc:preferredHeapBase=<address> or -Xmx4G (Java 6.1.6.0 and Java 7.0.5.0 and later), then further investigation is required at the application level. Look to decrease the size and usage of the application's Class Pointers and Monitors/Locks. Additionally, there are some WebSphere Application Server troubleshooting methods that may help reduce the native memory footprint. See: IBM Troubleshooting native memory issues.



Reserving Low-Memory Space with -Xmcrs

If there is still free memory in the system when a Native OutOfMemory (NOOM) occurs, then the problem may be a shortage of memory in the low-memory region (under 4GB). Even if the Java heap is located above this boundary, other data associated with Java objects can be located in the low-memory region.

The OS memory allocator hands out low memory freely, so memory resources below this boundary may run out. Later, when the JVM tries to allocate an artifact that must reside in low memory (because the JVM has reserved only a 32-bit pointer for it), the allocation fails and an OutOfMemoryError is thrown.

Starting in Java 6.0.16.3, Java 6.1.8.3, Java 7.0.8.10, and Java 7.1.2.10, the -Xmcrs parameter allows the JVM to increase the amount of low memory it reserves at startup. With this setting, as long as the low-memory usage by the JVM does not exceed the -Xmcrs value, a NOOM in the low-memory region will be avoided.

To set this parameter, first decide on a reasonable value for your low-memory requirements. A reasonable value is unique to each environment, so there is no general recommendation.
  • -Xmcrs<reasonable_value_for_lower_memory>
  • To determine <reasonable_value_for_lower_memory>, check the javacore for low-memory usage at the time the NOOM occurred. As a quick formula, look at the "NATIVEMEMINFO subcomponent dump routine" section, subtract the "Memory Manager (GC)" value from the "VM" value, and multiply the result by 1.5. In this case:
    (9689267552-8771635584)*1.5 = 1376447952 bytes = 1312.68MB = <reasonable_value_for_lower_memory>. Since memory is generally reserved in 256MB increments, round up to 1536M: -Xmcrs1536M

    From javacore at time of NOOM:
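The arithmetic above can be written out as a small calculation; the VM and GC values are the example numbers from this note:

```java
// Estimate an -Xmcrs value from the NATIVEMEMINFO numbers in a javacore.
public class XmcrsEstimate {
    public static void main(String[] args) {
        long vmBytes = 9689267552L; // "VM" value from NATIVEMEMINFO
        long gcBytes = 8771635584L; // "Memory Manager (GC)" value

        long estimate = (long) ((vmBytes - gcBytes) * 1.5); // 1376447952 bytes (~1312.68MB)

        long chunk = 256L * 1024 * 1024; // round up to the next 256MB boundary
        long rounded = ((estimate + chunk - 1) / chunk) * chunk;

        System.out.println("-Xmcrs" + (rounded / (1024 * 1024)) + "M"); // -Xmcrs1536M
    }
}
```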


Disabling Compressed References with -Xnocompressedrefs

As a last resort, if the native memory still cannot be contained under the 4GB mark, you can set -Xnocompressedrefs as a generic JVM argument. Using -Xnocompressedrefs removes the use of compressed references and therefore removes the lower 4GB memory restriction on the Class Pointers and Monitors/Locks. This will, however, result in a significant increase in Java heap memory requirements; it is not uncommon for 70% more heap space to be required. Due to the increased memory requirements, it is strongly advised that the Java heap size be adjusted to a larger value and that garbage collection be monitored and retuned as required.



Additionally, some benchmarks show a 10-20% relative throughput decrease when disabling compressed references: "Analysis shows that a 64-bit application without CR yields only 80-85% of 32-bit throughput but with CR yields 90-95%. Depending on application requirements, CR can improve performance up to 20% over standard 64-bit." See: ftp://public.dhe.ibm.com/software/webserver/appserv/was/WAS_V7_64-bit_performance.pdf.

Before using -Xnocompressedrefs as a solution, first rule out the possibility of a native memory leak. Because -Xnocompressedrefs allows the native memory to grow unbounded, a native memory leak will lead to process size growth, eventually producing a process that needs to be paged out. The paging incurs performance overhead that will eventually lead to an unstable environment. Therefore, careful consideration is required when selecting -Xnocompressedrefs as a solution.


Memory Map Considerations

The figure below is a generalization of how the JVM handles addresses in each section of the memory map, based on heap size and compressed references (CR). Note that at each stage beyond having all of the Java memory contained below the 4GB mark, there are performance consequences:


No Compressed References Overhead (using -Xnocompressedrefs, or -Xmx > 25GB)
  • increased memory footprint
  • fewer (larger) objects stored on the heap, leading to more frequent GC
  • lower cache and translation lookaside buffer (TLB) utilization

Compressed References Overhead
  • maximum heap address used by the JVM process is below 4GB: none
  • maximum heap address used by the JVM process is above 4GB but below 32GB: compression/decompression of address pointers

 

Getting Assistance From IBM Support
If further assistance will be required from IBM WebSphere Support, please set the following -Xdump parameters in the generic JVM arguments:
  • -Xdump:java+heap+snap:events=systhrow,filter=java/lang/OutOfMemoryError,range=1..4
  • -Xdump:system:events=systhrow,filter=java/lang/OutOfMemoryError,range=1..1

Then restart the JVM and re-create the problem. Once the NOOM is encountered, process the resulting system core with jextract. Send the jextracted core file, heapdump, javacore, snap trace, systemOut.log, native_stderr.log, native_stdout.log and systemErr.log to IBM Support for further analysis.

[{"Line of Business":{"code":"LOB67","label":"IT Automation \u0026 App Modernization"},"Business Unit":{"code":"BU059","label":"IBM Software w\/o TPS"},"Product":{"code":"SSEQTP","label":"WebSphere Application Server"},"ARM Category":[{"code":"a8m50000000CdBBAA0","label":"OutOfMemory-\u003ENative-\u003E32-bit compressed references"}],"ARM Case Number":"","Platform":[{"code":"PF002","label":"AIX"},{"code":"PF016","label":"Linux"},{"code":"PF033","label":"Windows"}],"Version":"7.0.0;8.0.0;8.5.0;8.5.5;9.0.0;9.0.5"}]

Document Information

Modified date:
11 March 2024

UID

swg21660890