Be Careful with HPROF Heapdumps Bigger than 4GB
If you are using a HotSpot-based JVM, producing HPROF heapdumps with more than 4GB of object data, and analyzing those heapdumps with the Memory Analyzer Tool (MAT), then make sure you check the Error Log for any warnings when first loading the dump. We recently discovered that some HotSpot JVMs write an incorrect length field in the HPROF file. MAT will end up reading only part of the heapdump, but other than the warning, there's no sign that you're only looking at a subset of the dump.
You can find more details in the bug report, but basically the original HPROF specification used a four-byte unsigned integer for the length of each "HEAP DUMP" (0x0C) record. The obvious (and observed) problem is that if a heap dump record is larger than 4GB, then the length field overflows and the value written wraps around modulo 2^32. The heap dump record contains all of the objects at the time of the dump, both live and unreachable, including each object's field contents and some metadata.
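
To make the overflow concrete, here is a small sketch (not taken from any JVM or MAT source) of what happens when a body larger than 4GB is recorded in a four-byte length field: a parser reads back the true length modulo 2^32.

public class LengthWrapDemo {
    public static void main(String[] args) {
        long trueLength = 5L * 1024 * 1024 * 1024;   // a hypothetical 5GB HEAP DUMP body
        int storedU4 = (int) trueLength;             // only the low 32 bits fit in the u4 field
        long readBack = storedU4 & 0xFFFFFFFFL;      // what a parser sees (read as unsigned)
        System.out.println(trueLength + " bytes written, " + readBack + " bytes declared");
        // Prints: 5368709120 bytes written, 1073741824 bytes declared (i.e. only 1GB)
    }
}
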
A new "HEAP DUMP SEGMENT" (0x1C) record format was added to the HPROF specification which can split the heapdump into multiple records, and there's no indication that a JVM which uses this alternative has any problems. Currently, the best way to tell if you're affected by this is to show the Error Log view (in the RCP, it's under Views (or Windows) > Error Log) and monitor for strange warnings. These warnings will not show up if you're reloading an already parsed heapdump (you can remove the index files from the heapdump directory to force a reload). A MAT patch has been produced in the referenced bug report to workaround this problem and it's currently being reviewed.
Note that this particular instance of the problem was noticed on JRE version 1.6.0_23.
Update (April 15, 2013): The workaround patch is now available in the MAT nightly builds. The added code is able to infer the correct length in the most common situations.
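
As a hedged illustration of how such an inference might work (my own sketch of one plausible heuristic, not the code in the patch): when the wrapped "HEAP DUMP" record is the last record in the file, the true length must be congruent to the declared u4 value modulo 2^32 and must account for the bytes remaining after the record header, so the missing 4GB multiples can be added back.

public class HeapDumpLengthFix {
    // Hedged sketch, not the code from the MAT patch: recover the true body
    // length of a wrapped HEAP DUMP record that is the last record in the file.
    static long inferTrueLength(long declaredLength, long bytesRemainingAfterHeader) {
        long corrected = declaredLength;
        while (corrected + 0x1_0000_0000L <= bytesRemainingAfterHeader) {
            corrected += 0x1_0000_0000L;  // add back one wrapped 4GB increment
        }
        return corrected;
    }

    public static void main(String[] args) {
        // Example: a 5GB record whose declared length wrapped to 1GB,
        // with 5GB of file content remaining after the record header.
        long declared = 1_073_741_824L;
        long remaining = 5_368_709_120L;
        System.out.println(inferTrueLength(declared, remaining)); // prints 5368709120
    }
}
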