It seems I solve at least half of the problems I encounter using operating system core dumps. I'd probably solve even more if customers took core dumps more often (or if the dumps weren't truncated by ulimits). It's amazing how much information can be extracted from a full address space dump. It is probably the most underutilized diagnostic artifact, particularly in situations where one wouldn't expect a memory dump to help.
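To make the truncation point concrete: on Linux, the RLIMIT_CORE resource limit (what `ulimit -c` controls) caps how many bytes of the core file get written, and a small soft limit silently cuts the dump short. Here's a minimal C sketch of checking and raising that limit from inside a process:

```c
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_CORE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    /* RLIM_INFINITY means no cap; anything smaller truncates the core file. */
    printf("core soft limit: %llu, hard limit: %llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);

    /* Raise the soft limit to the hard limit so dumps aren't silently
       truncated. (Raising the hard limit itself requires privileges.) */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_CORE, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    return 0;
}
```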
However, core dumps are expensive. They can take dozens of seconds to write (during which the process is frozen). They are often massive, so they take a long time to compress and upload, and then a long time to download, decompress, and analyze. These costs, along with the security implications of capturing and transferring all of the raw memory in the process, are why core dumps are underutilized.
So, if you don't know ahead of time that you'll need one, when should you take an operating system core dump?
In my opinion, if the performance and security implications above are acceptable, an operating system core dump should be the last step in every diagnostic procedure. For example, server hung? Do the normal performance MustGather, and then take a core dump.
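One way to take that last-step dump without killing a live server is the classic fork-and-abort trick: the forked child gets a copy-on-write snapshot of the parent's address space, and aborting the child writes the core while the parent keeps running. Here's a sketch assuming Linux semantics (debugger-driven tools such as gcore are another option):

```c
#include <signal.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Snapshot this process's address space without terminating it. */
void dump_core_nondestructively(void) {
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: restore the default SIGABRT handler, then abort so the
           kernel writes a core file of the (copy-on-write) child image. */
        signal(SIGABRT, SIG_DFL);
        abort();
    } else if (pid > 0) {
        /* Parent: wait so the core file is fully written before moving on. */
        waitpid(pid, NULL, 0);
    }
    /* pid < 0: fork failed; nothing was dumped. */
}
```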
But wait, didn't you just read that core dumps are expensive? Yes. So compress the core dump and save it off to a designated location, and only upload it if someone like me asks for it, or if you're otherwise stuck.
This also helps with the security implications: the data isn't lost, and if the core dump does turn out to be required, a secure way of handling or analyzing it (perhaps even remotely) can be worked out at that point.
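Here's a minimal sketch of the compress-and-save-off step, assuming zlib is available (link with -lz); the paths are hypothetical placeholders for your core pattern and holding area:

```c
#include <stdio.h>
#include <zlib.h>

/* Gzip a core file into a holding location so it can be uploaded later,
   but only if it's actually needed. */
int compress_core(const char *src, const char *dst) {
    FILE *in = fopen(src, "rb");
    if (!in) {
        perror("fopen");
        return 1;
    }
    gzFile out = gzopen(dst, "wb");
    if (!out) {
        fclose(in);
        return 1;
    }
    char buf[1 << 16];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, in)) > 0)
        gzwrite(out, buf, (unsigned)n);
    gzclose(out);
    fclose(in);
    return 0;
}

int main(void) {
    /* Hypothetical paths: adjust to your environment. */
    return compress_core("/var/tmp/core.12345", "/secure/cores/core.12345.gz");
}
```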