
PI37018: Hung threads can occur when shutting down a container and using eXtremeMemory with indexes.


APAR status

  • Closed as program error.

Error description

  • When shutting down a container while using eXtremeMemory with
    indexes, hung threads such as the following can be seen.
    
    XSThreadPool  W   CWOBJ7853W: Detected a hung thread named "ContainerWorkThreadPool_containerServer1_C-1 : 2" TID:79 RUNNABLE.
    Stack Trace:
    com.ibm.ws.objectgrid.io.offheap.impl.XsOffHeapSetImpl.deleteXsOffHeapSet(Native Method)
    com.ibm.ws.objectgrid.io.offheap.impl.XsOffHeapSetImpl.destroy(XsOffHeapSetImpl.java:127)
    com.ibm.ws.objectgrid.index.OffHeapIndexSet.release(OffHeapIndexSet.java:431)
    com.ibm.websphere.objectgrid.plugins.index.HashIndex$ReleasableConcurrentHashMap.release(HashIndex.java:8213)
    com.ibm.websphere.objectgrid.plugins.index.HashIndex$ReleasableConcurrentHashMap.release(HashIndex.java:8213)
    com.ibm.websphere.objectgrid.plugins.index.HashIndex.destroy(HashIndex.java:7512)
    com.ibm.ws.objectgrid.map.MapListenerHandlerHelper.fireSingleDestroy(MapListenerHandlerHelper.java:111)
    com.ibm.ws.objectgrid.map.MapListenerHandler.fireDestroy(MapListenerHandler.java:598)
    com.ibm.ws.objectgrid.map.BaseMap.destroy(BaseMap.java:9175)
    com.ibm.ws.objectgrid.ObjectGridImpl.destroy(ObjectGridImpl.java:3208)
    com.ibm.ws.objectgrid.catalog.placement.balance.disk.DiskCatalogUtil.destroyGrid(DiskCatalogUtil.java:200)
    com.ibm.ws.objectgrid.server.impl.ShardImpl.destroy(ShardImpl.java:800)
    com.ibm.ws.objectgrid.server.impl.ShardActor.destroy(ShardActor.java:1103)
    com.ibm.ws.objectgrid.server.container.ObjectGridContainerImpl.deactivate(ObjectGridContainerImpl.java:1335)
    com.ibm.ws.objectgrid.server.container.ObjectGridContainerImpl.destroyShard(ObjectGridContainerImpl.java:2386)
    com.ibm.ws.objectgrid.server.container.ObjectGridContainerImpl.doContainerWork(ObjectGridContainerImpl.java:766)
    com.ibm.ws.objectgrid.server.container.ContainerActor$2.run(ContainerActor.java:330)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1177)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    com.ibm.ws.objectgrid.thread.XSThreadPool$Worker.run(XSThreadPool.java:309)

    XSThreadPool  W   CWOBJ7853W: Detected a hung thread named "ContainerWorkThreadPool_containerServer1_C-1 : 6" TID:7d BLOCKED.
    Stack Trace:
    com.ibm.ws.objectgrid.server.container.ObjectGridContainerImpl.doContainerWork(ObjectGridContainerImpl.java:715)
    com.ibm.ws.objectgrid.server.container.ContainerActor$2.run(ContainerActor.java:330)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1177)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    com.ibm.ws.objectgrid.thread.XSThreadPool$Worker.run(XSThreadPool.java:309)

    XSThreadPool  W   CWOBJ7853W: Detected a hung thread named "ContainerWorkThreadPool_containerServer1_C-1 : 3" TID:7a BLOCKED.
    Stack Trace:
    com.ibm.ws.objectgrid.server.container.ObjectGridContainerImpl.destroyShard(ObjectGridContainerImpl.java:2372)
    com.ibm.ws.objectgrid.server.container.ObjectGridContainerImpl.doContainerWork(ObjectGridContainerImpl.java:766)
    com.ibm.ws.objectgrid.server.container.ContainerActor$2.run(ContainerActor.java:330)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1177)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    com.ibm.ws.objectgrid.thread.XSThreadPool$Worker.run(XSThreadPool.java:309)
    

Local fix

  • None.
    

Problem summary

  • ****************************************************************
    * USERS AFFECTED:  Users of eXtreme Scale who use indexes      *
    *                  with eXtremeMemory.                         *
    ****************************************************************
    * PROBLEM DESCRIPTION: Shards take a long time to stop due     *
    *                      to releasing native memory.             *
    ****************************************************************
    * RECOMMENDATION:                                              *
    ****************************************************************
    The index destroy logic was updated to move the release of the
    index memory to a background thread, similar to what was done
    for the main map memory in APAR PI33005. A sketch of this
    background-release pattern is shown below.
    
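    The following is a minimal, hypothetical Java sketch of the
    background-release pattern described above. The class, interface,
    and method names are illustrative assumptions and do not reflect
    the actual eXtreme Scale implementation; it only shows native index
    memory being freed on a background thread so that the destroy path
    on the container work thread returns promptly.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Hypothetical sketch: defer native (off-heap) index memory release
    // to a background thread so shard destroy does not block the
    // container work thread.
    public class IndexMemoryReleaser {

        // Single daemon thread that drains release work in the background.
        private static final ExecutorService RELEASER =
                Executors.newSingleThreadExecutor(r -> {
                    Thread t = new Thread(r, "IndexMemoryReleaser");
                    t.setDaemon(true);
                    return t;
                });

        // Minimal stand-in for an off-heap index holding native memory.
        public interface OffHeapIndex {
            void releaseNativeMemory();   // potentially slow native call
        }

        // Before the fix: the release ran synchronously on the container
        // work thread during shard destroy and could show up as a hung
        // thread while native memory was freed.
        public static void destroySynchronously(OffHeapIndex index) {
            index.releaseNativeMemory();
        }

        // After the fix (as described above): destroy returns immediately
        // and the native memory is freed on the background thread.
        public static void destroyAsynchronously(OffHeapIndex index) {
            RELEASER.submit(index::releaseNativeMemory);
        }
    }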

Problem conclusion

  • An interim fix is available for this APAR upon request.
    

Temporary fix

Comments

APAR Information

  • APAR number

    PI37018

  • Reported component name

    WS EXTREME SCAL

  • Reported component ID

    5724X6702

  • Reported release

    860

  • Status

    CLOSED PER

  • PE

    NoPE

  • HIPER

    NoHIPER

  • Special Attention

    NoSpecatt

  • Submitted date

    2015-03-13

  • Closed date

    2015-03-31

  • Last modified date

    2015-03-31

  • APAR is sysrouted FROM one or more of the following:

  • APAR is sysrouted TO one or more of the following:

Fix information

  • Fixed component name

    WS EXTREME SCAL

  • Fixed component ID

    5724X6702

Applicable component levels

  • R860 PSY

       UP

