Linux: Understanding total virtual memory usage from a core dump, Part 5
In part 4, I alluded to the fact that the gcore command in gdb actually walks the memory regions itself when writing the core dump. This got me thinking: what if this is different from how the kernel writes the core? The only way to get the kernel to produce a core seems to be to destructively kill the process (e.g. kill -6 or kill -11). By default, IBM Java will catch these signals before the process dies and do its core-processing magic (I don't know if it actually suppresses the default core generation and does its fork-and-kill routine, or if it simply renames the core the OS produces and adds some stuff to it). For a simpler test, I restarted the JVM with the generic argument -Xrs so that it doesn't catch and handle any signals (this is generally not recommended, because then basic things like requesting javacores with signal 3 will completely kill the process instead). Next, I ran gcore followed immediately by kill -11.
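Roughly, that sequence can be scripted as below (a sketch only: the PID handling and file names are my own placeholders, and I simply ran the equivalent commands by hand; gdb's -ex "gcore ..." is one way to drive gcore non-interactively):

import os
import signal
import subprocess
import sys

pid = int(sys.argv[1])  # PID of the JVM started with -Xrs (placeholder: passed on the command line)

# Attach with gdb and let its gcore command walk the memory regions itself,
# writing the snapshot to gcore.dmp in the current directory
subprocess.run(["gdb", "--batch", "--pid", str(pid), "-ex", "gcore gcore.dmp"], check=True)

# Immediately send SIGSEGV (kill -11); because of -Xrs the JVM does not intercept it,
# so the kernel writes its own core file according to coredump_filter
os.kill(pid, signal.SIGSEGV)

Then I analyzed both cores with the same gdbinfofiles.py script from part 4: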
$ gdb --batch --command gdbinfofiles.py java gcore.dmp
Sum memory ranges = 594063360
$ gdb --batch --command gdbinfofiles.py java core
Sum memory ranges = 651829248
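For context, the idea behind gdbinfofiles.py is simply to sum the sizes of all the memory ranges gdb reports for the loaded core. A minimal sketch of such a script (my own approximation; the actual script from part 4 may differ) looks like this, using gdb's embedded Python interpreter:

import re
import gdb  # available only inside gdb's embedded Python interpreter

total = 0
# "info files" prints one line per mapped section, e.g. "0x00400000 - 0x00401000 is load1"
pattern = re.compile(r"(0x[0-9a-fA-F]+)\s*-\s*(0x[0-9a-fA-F]+)\s+is\s+\S+")
for line in gdb.execute("info files", to_string=True).splitlines():
    match = pattern.search(line)
    if match:
        start, end = (int(x, 16) for x in match.groups())
        total += end - start
print("Sum memory ranges = %d" % total)

Either way, the totals above are just the sum of (end - start) over every section gdb reports.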
Therefore, it appears that gcore does grab fewer memory sections than a core produced by the kernel itself — in this test, about 57,765,888 bytes (roughly 55 MB) less in total.
Finally, I tried the same test with 0x7f in coredump_filter, hoping that with a full filter the kernel-produced core would include all or most of the memory sections, but alas, there was no additional difference.
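For reference, setting the filter can be done from a shell with echo 0x7f > /proc/<pid>/coredump_filter before triggering the core; a Python equivalent (with a placeholder PID) is simply:

pid = 12345  # placeholder PID of the target JVM
# 0x7f enables all of the documented coredump_filter bits for this process
with open("/proc/%d/coredump_filter" % pid, "w") as f:
    f.write("0x7f")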