Troubleshooting
Problem
B2Bi node goes down abruptly in Certified Containers environments.
Symptom
Although the "Enable Heap Dump on Out of Memory" option is set to true on the B2Bi dashboard's "Performance Tuning" page, no heap dump or thread dump is generated when the node goes down.
Cause
When no heap dump or thread dump is auto-generated as a B2Bi node crashes (goes down), it normally means that the noapp JVM never reached its maximum heap size (-Xmx), so the JVM's own out-of-memory handling was never triggered.
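For illustration, the generic JVM flags behind this behavior are shown below. The flag names are standard HotSpot options and the values are placeholders; the exact arguments B2Bi passes to the noapp JVM may differ, so treat this as a sketch of the mechanism, not the B2Bi configuration:

# The JVM writes a heap dump only when it throws OutOfMemoryError,
# i.e., when the heap actually reaches -Xmx. A kernel OOM kill (SIGKILL)
# bypasses this handler entirely, so no dump is produced.
-Xmx8g
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/tmp/dumps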
Possible causes:
a) The asi pod was evicted because it exceeded its ephemeral-storage limit or its memory limit.
b) The noapp JVM consumed too much memory (for example, due to a native memory leak), creating memory pressure on the worker node server and leading to an OOM; the pod's java process was killed by the OS kernel.
Environment
B2Bi Certified Containers environments.
Diagnosing The Problem
For cause #a:
Run the following command:
kubectl get events -n <namespace>
If the event messages show "Evicted", the pod was evicted for memory or ephemeral-storage reasons.
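To narrow the event list to evictions, and to see which limit was exceeded, commands like the following can help (the pod name is a placeholder):

# List only eviction events in the namespace
kubectl get events -n <namespace> --field-selector reason=Evicted
# The pod description shows the eviction message and the exceeded limit
kubectl describe pod <asi-pod-name> -n <namespace>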
For cause #b:
To inspect the worker node server that hosts the B2Bi node in question, run the commands below from the bastion node. Taking OpenShift as an example:
oc debug --as-root node/<nodeName>
chroot /host
cd /home/core
journalctl --since "2026-04-18" &> journal_2026Apr18.txt
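To filter the saved journal down to kernel OOM activity, a simple search such as the following can help:

grep -iE "oom-kill|Out of memory" journal_2026Apr18.txt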
If the journalctl output contains "oom-kill" entries like the ones below, chances are that the noapp JVM was chosen as a victim by the host OS kernel under high memory pressure:
Apr 18 06:52:55 pepecgb2bisus01deploy-w6p4c-worker-southcentralus3-kq9wk kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=crio-c0068d84dae10d0b29427dd37bbac68bb7fbdbcc4384b15014abe8b657626954.scope,mems_allowed=0,global_oom,task_memcg=/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf229f64c_c5b9_4c84_bc79_0db9b952bb6a.slice/crio-c0068d84dae10d0b29427dd37bbac68bb7fbdbcc4384b15014abe8b657626954.scope,task=java,pid=3910243,uid=1000740000
Apr 18 06:52:55 pepecgb2bisus01deploy-w6p4c-worker-southcentralus3-kq9wk kernel: Out of memory: Killed process 3910243 (java) total-vm:44291160kB, anon-rss:35750060kB, file-rss:2304kB, shmem-rss:0kB, UID:1000740000 pgtables:72500kB oom_score_adj:746
Apr 18 06:52:55 pepecgb2bisus01deploy-w6p4c-worker-southcentralus3-kq9wk kubenswrapper[3018]: I0418 06:52:55.090159 3018 oom_watcher_linux.go:83] "Got sys oom event" event={"Pid":3910243,"ProcessName":"java","TimeOfDeath":"2026-04-18T06:52:52.708802756Z","ContainerName":"/","VictimContainerName":"/","Constraint":""}
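The task_memcg value in the oom-kill line embeds the CRI-O container ID, which can be mapped back to a pod from the same node debug shell. A sketch, assuming crictl is available on the node (the ID below is shortened from the log excerpt):

# Identify the container and its owning pod from the container ID
crictl ps -a | grep c0068d84dae1
crictl inspect c0068d84dae1 | grep io.kubernetes.pod.name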
Resolving The Problem
For cause #a: Review the B2Bi performance tuning settings and the asi pod memory / ephemeral-storage request and limit settings, and modify them as appropriate. Add RAM or disk space to the worker node server as needed. Also investigate why the B2Bi node consumes so much memory and/or ephemeral storage.
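Before changing the settings through the B2Bi Helm chart values, it can help to confirm what the asi pod is currently running with; a quick check (the pod name is a placeholder):

kubectl get pod <asi-pod-name> -n <namespace> -o jsonpath='{.spec.containers[*].resources}'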
For cause #b: Review the B2Bi performance tuning settings and the asi pod memory settings. Add RAM to the worker node server if needed. If the root cause turns out to be a native memory leak in B2Bi, open a separate support ticket to troubleshoot it.
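One generic way to gather evidence of a native (off-heap) leak before opening the ticket is the JVM's Native Memory Tracking feature. This is a standard technique on HotSpot-based JVMs, not a documented B2Bi procedure; whether and where the option can be added to the noapp JVM arguments is an assumption, and the flag adds some runtime overhead:

# Hypothetical: add to the noapp JVM startup arguments
-XX:NativeMemoryTracking=summary
# Then sample native memory usage over time inside the asi container
jcmd <noapp-jvm-pid> VM.native_memory summary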
Document Location
Worldwide
Document Information
Modified date:
03 May 2026
UID
ibm17270642