The Support Authority: Analyze memory management problems with the Memory Dump Diagnostic for Java (MDD4J)

The Memory Dump Diagnostic for Java™ (MDD4J) tool helps you diagnose memory leaks and other excessive memory consumption problems in applications running in IBM Java Virtual Machines (JVMs). This article introduces you to MDD4J and shows you how to use its sophisticated analysis engine and user interface to peer into the Java heap so you can see which objects are consuming the most memory. This content is part of the IBM WebSphere Developer Technical Journal.


Tomas Znamenacek (tomasz@ca.ibm.com), Software Engineer, IBM  

Tomas Znamenacek is a software engineer on the IBM AIM Support Tooling and Electronic Service Desk team. He joined IBM in 1999. Throughout his career he has worked on core components of WebSphere Commerce, and on logging and problem determination components of WebSphere Application Server. He currently works on the development of IBM Support Assistant and leads the development of the MDD4J tool.



Nick Mitchell (nickm@us.ibm.com), Research Scientist, IBM  

Nick Mitchell has worked with customers and internal development groups on memory leaks and memory bloat issues for many years. He works in Research on tooling and related technologies to assist in reducing bloat in WebSphere applications.



Adriana Valido (adrianav@us.ibm.com), Software Developer, IBM  

Adriana Valido is a member of the WebSphere Serviceability Development Team and a developer for the IBM Guided Activity Assistant, Memory Dump Diagnostic for Java, and other IBM support tools. Before joining the WebSphere Serviceability Development Team, she worked with SAP on the System i® platform and Linux® on Power performance.



Russell Wright (rbwright@us.ibm.com), WebSphere Serviceability Development, IBM

Russell Wright has several years of experience developing and supporting data communications and middleware software including WebSphere Application Server. He currently manages the deployment of troubleshooting tools for the IBM Support Assistant and is a developer for the IBM Guided Activity Assistant.



30 September 2009


In each column, The Support Authority discusses resources, tools, and other elements of IBM® Technical Support that are available for WebSphere® products, plus techniques and new ideas that can further enhance your IBM support experience.

This just in...

As always, we begin with some new items of interest for the WebSphere community at large:

Continue to monitor the various support-related Web sites, as well as this column, for news about other tools as we encounter them.

And now, on to our main topic...


Introduction

This is an introduction to the Memory Dump Diagnostic for Java (MDD4J) troubleshooting tool, which helps you analyze a Java heap to diagnose memory footprint problems. MDD4J's analysis results are provided in reports that summarize how an application uses the Java heap.

There are three primary situations where MDD4J can be helpful:

  • Memory leaks: If an application suffers from java.lang.OutOfMemoryError exceptions, or if verbose garbage collection data shows a growth in memory consumption over time, then MDD4J can pinpoint the data structures -- and the components within those data structures -- that contribute to this growth.
  • Excessive memory consumption: If an application does not scale well or uses more memory than necessary for what it accomplishes, MDD4J can show you high-level pictures of the data structures that allocate excessive amounts of memory. The MDD4J reports show you how the implementation of collections and other types of data modeling can contribute to excessive memory consumption.
  • Regression testing: The differential analysis and data structure views provided by MDD4J can help you track changes in memory consumption as bugs are fixed and new features are added or removed.

MDD4J is available via the IBM Support Assistant. Because memory dump analysis uses a large amount of processor and I/O resources, you can also run it outside of the IBM Support Assistant workbench. Performing a memory dump analysis on a server-class machine, particularly 64-bit, lets you process memory dumps of virtually unlimited size, and it leaves your workstation with plenty of resources so you can perform other tasks.

The MDD4J design goal is to give you enough information to help you identify problems without information overload. It does not provide in-depth, low-level, expert functions to manually examine the entire object graph within the heap. For in-depth heap dump analysis you should use the IBM Monitoring and Diagnostic Tools for Java – Memory Analyzer.


What is a heap dump?

A heap dump, also known as a heap snapshot, is a dump of the references between objects in memory and information about those objects. Heap dumps do not contain information such as variable names, values, or source code.

You need a heap dump before you can perform a heap analysis. There are three ways to generate a heap dump:

  • Automatically: The JVM usually generates a heap dump when it runs out of memory and crashes.
  • Manually: You can send a signal to the JVM asking it to generate a heap dump. (The MDD4J documentation has information on how to generate a heap dump on various platforms.)
  • Programmatically: The IBM SDK for Java includes a class called com.ibm.jvm.Dump. Use its static HeapDump() method to generate a heap dump while your application is running (a minimal sketch follows this list).
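For illustration, the following minimal sketch triggers a heap dump from application code. It assumes the application runs on the IBM SDK for Java (com.ibm.jvm.Dump does not exist on other JVMs); the HeapDumpTrigger class and runWorkload() method are hypothetical names used only for this example:

    // Minimal sketch, IBM SDK for Java only: com.ibm.jvm.Dump is an IBM-specific class.
    import com.ibm.jvm.Dump;

    public class HeapDumpTrigger {
        public static void main(String[] args) {
            runWorkload();      // hypothetical: whatever activity you want captured in the dump
            Dump.HeapDump();    // writes a heap dump file for this JVM (typically a .phd file)
        }

        private static void runWorkload() {
            // application-specific work would go here
        }
    }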

How to analyze heap dumps

MDD4J uses different terms for heap dump files based on when they were taken:

  • Baseline heap dump

    Even though the JVM generates a heap dump when it crashes, you can learn much more about how memory is being used by comparing it to a heap dump taken just after your application has started. This heap dump taken early in the life of the application is called the baseline heap dump and you use it when performing a comparative analysis.

  • Primary heap dump

    A primary heap dump is one you take when problems occur, such as running out of memory or excessive heap usage. This heap dump can be generated automatically when the JVM runs out of memory or crashes. You can generate it programmatically or manually by sending a signal to the JVM. You can perform a single or comparative heap dump analysis on a primary heap dump.

    MDD4J lets you perform two different types of heap dump analysis, depending on the number of heap dumps available:

    • As the name implies, a single heap dump analysis is performed on one heap dump, the primary heap dump. It shows you overall information about the heap including leak suspects and significant data structures.
    • A comparative heap dump analysis takes two heap dumps, baseline and primary, and compares their differences.

During an analysis, MDD4J tries to find the most likely leak suspects. They are called suspects because there is no exact way to identify the cause of high heap usage. Just because an analysis identifies a leak suspect does not necessarily mean that there is a leak. This is often true with cache implementations, which look very much like typical memory leaks.
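For example, an unbounded cache such as the hypothetical one sketched below retains a reference to every value it has ever computed, so in a heap dump it shows the same growth pattern as a genuine leak even though the retention is intentional (the class and method names are invented for this example):

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical cache with no eviction policy: the static map only grows, so MDD4J
    // is likely to report it as a leak suspect even though the retention is by design.
    public class ResultCache {
        private static final Map<String, byte[]> CACHE = new HashMap<String, byte[]>();

        public static synchronized byte[] get(String key) {
            byte[] value = CACHE.get(key);
            if (value == null) {
                value = expensiveLookup(key);
                CACHE.put(key, value);    // entries are never removed
            }
            return value;
        }

        private static byte[] expensiveLookup(String key) {
            return new byte[8 * 1024];    // stand-in for real work
        }
    }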


Launching and understanding MDD4J

To understand fully how to use MDD4J, you need to know the meaning of several terms used to describe memory leaks. Figure 1 illustrates these terms with an example class named MyClass that holds a HashSet containing String objects.

Figure 1. Memory leak terms
  • Leak root: The object that holds a reference to the chain of objects leading to the leak container. If no class objects are found in the owner chain, this is the root object in the memory dump from which the leak container can be reached.
  • Leak container: The object that uniquely owns all leaking objects. In this example, it is a HashMap.
  • Leaking unit: The object type in the data structure of which multiple instances are present. In this example, it is the HashMap$Entry objects inside a HashMap.
  • Owner chain: The chain of objects starting from a leak root object and ending at the leak container object.
  • Contents: The actual data that is responsible for the majority of heap consumption under the leak root.

Other information not shown in Figure 1 but relevant to memory leaks:

  • Reach size: The cumulative size of all the objects reachable from a given object. In this example, all the objects contribute to the reach size of the leak root.
  • Drop in reach size: The difference between the reach size of an object and the reach size of a child of that object. This difference, together with differences in instance counts, is the key to discovering leak suspects.
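To make these terms concrete, here is a hypothetical sketch (the class and field names are invented and do not come from Figure 1) with the terms mapped onto the code in comments:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical mapping of the terms above onto code:
    //   leak root      - the long-lived MyClass instance (or the field that holds it)
    //   owner chain    - MyClass -> registry, the path from the leak root to the container
    //   leak container - the HashMap referenced by the registry field
    //   leaking unit   - the HashMap$Entry instances inside that map
    //   contents       - the String keys and the ArrayList values, which account for
    //                    most of the reach size of the leak root
    public class MyClass {
        private final Map<String, List<String>> registry =
                new HashMap<String, List<String>>();

        public void record(String key, String value) {
            List<String> values = registry.get(key);
            if (values == null) {
                values = new ArrayList<String>();
                registry.put(key, values);   // entries accumulate over the life of the object
            }
            values.add(value);
        }
    }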

MDD4J is launched like any other tool installed in the IBM Support Assistant workbench. Figure 2 shows MDD4J in the workbench after it has been launched and has completed an analysis. (If you export MDD4J from the workbench onto another machine, a batch file or shell script is provided to run the tool.)

In the following sections, you will learn how the analysis results are presented in MDD4J. The information shown is for the primary heap dump.


Evaluating analysis results

The Analysis Summary is the first view you see after an analysis completes. It shows summary information of the analysis results and provides a quick overview of what is in the heap and what is significant in the heap. On this panel, you can identify the components and types that contribute the most to the size of the heap.

Figure 2. MDD4J in the IBM Support Assistant workbench

The Analysis Summary has three sections:

  • Basic Heap Information (Figure 3) shows information about the heap and the heap dump file: the heap size, number of objects, number of object types, heap dump file name, and date the heap dump file was taken.
    Figure 3. Basic Heap Information
  • High-level Summary of the Heap Contents (Figure 4) shows two charts displaying components and types in the heap dump that contribute the most to the size of the heap.
    • The Predefined Components graph shows the relative contribution to the heap size of various groups of objects corresponding to often-used components in Java and WebSphere applications.
    • The Type Breakdown graph shows the relative contribution to heap size of the most heavily used Java classes.
    Figure 4. High-level Summary of the Heap Contents

In the Analysis Summary view you can also access a single-file HTML Report (Figure 2) which provides a summary of the analysis results. The report contains all of the information offered in the different views of MDD4J, without the graphics. You can use this report to quickly share analysis results with your colleagues.

You can also download a portion of the MDD4J user interface (the Data Structures tab, explained later in the article) as a standalone report called a Yeti Report (see Figure 2), and save the resulting .zip archive file. Extract the contents of the archive into an empty directory and open the index.html file in a Web browser. See Making Sense of Large Heaps for more information about Yeti.


Reach size leak suspects

This view is the starting point for identifying memory leaks. The analysis almost always identifies some suspects, but not all of the suspects are true memory leaks. The leak suspects are sorted by their reach size, with the most significant leak suspect shown at the top.

When analyzing an out-of-memory situation, this is the main source of information. When there is excessive memory consumption, this view can provide useful information. The Data Structures view, however, provides more details.

There are two sub-views, both showing leak suspects, but identified at different levels (Figures 5 and 6). In most cases, the top leak suspects will be the same in both views -- but if the top suspects are different, it is important to review the suspects from both views:

  • Aggregated data structure level (Figure 5) works with summarized instances of the same type within the same owner chain. The leak suspects are grouped in tables, one for each leak root. The name of the leak root is listed just above the table and is a hyperlink to a content schematic. A content schematic is a graphical representation of the owner chain under the leak root.
  • Object/Class level (Figure 6) works with actual instances in the original reference graph of the heap. In most cases, the information shown should be the same as in the aggregated data structure level view.

Aggregated data structure leak suspects

There are several tables, one for each leak root, showing the suspects found in the reference tree under the leak root. The leak root is listed above the table (Figure 5) and is a link to a graphical representation of the leak root owner chain, which is also called a content schematic. This graphical representation helps you understand the relationship between the leak root, the leak container(s), and the leaking unit(s) shown in the table. Each row of the table pairs a leak container with a leaking unit, together with the number of leaking unit instances and their cumulative size.

For example, the first row of Figure 5 shows an instance of the EvaluateServlet class as a leak container and an instance of the ArrayList class as a leaking type. Often the leaking unit in one row of the table appears as a leak container in another row of the same table. This means that there are nested container types.

For example, the third row of Figure 5 lists an instance of the class Analyzer holding on to an instance of a HashMap, and in the fourth row that same HashMap is shown containing 1,000 instances of the Analyzer$Chunk class.

Figure 5. Aggregated data level leak suspects

You read the table from top to bottom to interpret the information (Figure 5). In some cases, such as this one, there can be several different leaks. Each numbered item below corresponds to the same-numbered row in the table.

  1. An instance of EvaluateServlet holds on to one instance of an ArrayList (78MB in size).
  2. This ArrayList contains one instance of an Analyzer.
  3. This Analyzer holds on to one instance of a Hashtable.
  4. This Hashtable contains 629 instances of Chunk, which is an inner class of Analyzer.
  5. Each instance of Chunk holds on to an ArrayList.
  6. All of the ArrayLists in the 1,000 instances of Chunk contain 500,000 instances of String with a cumulative size of 76MB.
  7. The next instance of Chunk holds on to one instance of a Hashtable (49MB in size).
  8. This Hashtable contains 629 instances of Chunk.
  9. Each instance of a Chunk holds on to an ArrayList.
  10. All of the ArrayLists in the 1,000 instances of Chunk contain 314,500 instances of String with a cumulative size of 48MB.
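The classes in Figure 5 come from a sample application whose source is not shown here; the sketch below is an invented approximation of that kind of nested ownership, intended only to show how a long-lived servlet field can end up owning hundreds of thousands of Strings:

    import java.util.ArrayList;
    import java.util.Hashtable;
    import java.util.List;

    // Invented approximation of the ownership described above:
    // servlet -> ArrayList -> Analyzer -> Hashtable -> Analyzer$Chunk -> ArrayList<String>
    public class EvaluateServlet /* extends HttpServlet in a real application */ {
        // leak root: the servlet instance lives as long as the application
        private final List<Analyzer> history = new ArrayList<Analyzer>();

        public void service(String request) {
            Analyzer analyzer = new Analyzer();
            analyzer.analyze(request);
            history.add(analyzer);            // results are never discarded
        }
    }

    class Analyzer {
        private final Hashtable<Integer, Chunk> chunks = new Hashtable<Integer, Chunk>();

        void analyze(String request) {
            for (int i = 0; i < 1000; i++) {
                Chunk chunk = new Chunk();             // reported as Analyzer$Chunk in the heap dump
                chunk.tokens.add(request + "-" + i);   // the Strings are the "contents"
                chunks.put(Integer.valueOf(i), chunk);
            }
        }

        static class Chunk {
            final List<String> tokens = new ArrayList<String>();
        }
    }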

Object/Class leak suspects

This is a simpler view than the aggregated data level list of leak suspects. The Object/Class leak suspects table (Figure 6) shows a list of leak roots and their leak containers, together with the cumulative size of the container and the drop in the reach size of all the leaking units. As mentioned earlier, this list should contain the same top leak suspects as shown in the aggregated data structure level view (Figure 5).

Figure 6. Object/Class leak suspects

Figure 6 shows the same two leaks that are shown in the aggregated data structure leak suspect view (Figure 5). The difference in cumulative size occurs because the aggregated data structure level view uses summarized instance counts and the object/class leak suspects view uses actual instance and size counts.

Using leak suspect views

To read the information in both leak suspect views, start with the first row of each table. Because of the complexity of identifying leaks, the suspects near the bottom of the list might not represent a leak at all if their leak container size is less than 5-10% of the heap size (shown in the Analysis Summary view in Figure 2). Such small suspects can usually be ignored.

The top leak suspects are important, and you should look at their content schematics by clicking the link above the table. Then look at the leak suspects listed in the table to try to identify which classes are just containers and which might be the leaking classes. To do this, you need to know the implementation or structure of the application for which the heap dump was generated. Keep in mind that the classes listed as leaking classes might not be responsible for causing the leaks themselves. Most likely, other code is responsible for creating the instances of those classes that have caused the excessive memory usage.

Once you have identified the leaking class, you might want to check the Data Structures view to confirm how much of the heap the leak root or the leaking unit occupies. If neither one is listed in the Data Structures view, the leak might not be a true leak, in which case you should review the top contributors to the heap found on the Data Structures view. (The Data Structures view is described in more detail later.)

Object tables

The Object Tables view contains two sub-views that list all classes and instances of each class in the heap. Use the views in the Object Tables tab to get additional information about the number of specific objects and classes that exist in the heap, or to confirm the size of a leak suspect.

The data in both tables can be sorted by any column by clicking on the column header. Clicking on the column again will flip the sorting order. Figure 7 shows a table sorted by instance count in descending order.

Figure 7. Object table sorted by instance count

Data structures

The Data Structures view (Figure 8) provides additional details on the internal structure of the application's data structures. The visualizations contained within are based on the Yeti technology that was developed by the IBM Research team.

Figure 8. Data Structures view

The information shown in this view can help you understand the shape and interconnection of the data structures within the heap dump. The Yeti visualizations automatically group together the instances of objects into larger cohesive units. This saves you from having to manually navigate around the individual objects and references, and shows you a higher-level picture of what makes the data structures big. The visualizations within help you:

  • Identify the largest data structures in the Java heap.
  • Track changes in the size of these data structures over time.
  • Quickly understand the internal structure of a data structure.
  • Diagnose data modeling problems in the application that contribute to memory bloat.
  • Triage the data structures based on the software components that contribute to their consumption of memory.

The view contains selection tabs on the left and an information frame on the right. There are four selection tabs: Big (Figure 9), Growing (Figure 10), Shrinking (Figure 11), and Steady (Figure 12).

Figure 9. Big data structures
Figure 10. Growing data structures
Figure 11. Shrinking data structures
Figure 12. Steady data structures

When you analyze a single heap dump, only the Big tab is populated. When you analyze two heap dumps, the Growing, Shrinking, and Steady tabs are populated with lists of data structures that changed in size, or that remained steady but contributed significantly to the size of the heap dump. The content schematics show the differential information in red.

The information frame on the right of the Data Structures view (Figure 8) shows either information for the whole heap or for the selected data structure. There are two tabs, Overview and Health Report, showing information for the whole heap dump (Figure 13) or for a data structure selected from Big, Growing, Shrinking, or Steady (Figure 14).

Figure 13. Overview of the entire heap
Figure 14. Overview of one data structure

Figures 15 and 16 show charts on the Health Report tab. In this case, they are showing the health of an instance of EvaluateServlet.

Figure 15. EvaluateServlet health
Figure 16. EvaluateServlet health (continued)

Exploring data structures

The first view you will see is a bar chart showing the most costly data structures in the heap. Names preceded by "statics" indicate data structures rooted under a static field of the given class. The instructions that follow describe how to explore the data structures and apply to the data structures shown on all tabs.

To the right of each bar (Figure 17), the data structure is given a size (and sometimes two sizes). The first number, and the length of the bar, is the total number of bytes reachable from that data structure. Often, data structures overlap; the contents of one data structure are simultaneously part of another. When there is this kind of sharing, you will see a second number, in parentheses.

Figure 17. Data structure sizes

The second number shows how much of the size of the data structure is also reachable from other data structures; in other words, how much of this data structure is shared with others. You can select a data structure to explore its contents in more detail. When you click on a bar in any of the data structure bar charts, the right side of the window updates to display an initial summary of the contents of the selected structure. Figure 18 shows the summary displayed when you click the EvaluateServlet bar shown in Figure 17.

Figure 18. EvaluateServlet summary

In the absence of sharing, the section list would have a single entry that represents the entire data structure. When portions of a data structure are shared with other data structures, the section list contains an entry for each shared portion. Each section is a sub-data structure where all of the objects are uniquely part of that section. In the section list, you will see the distinction between sections that are part of this structure and also part of others (those with "(shared)" at the end), and those that are uniquely part of this structure (those without that suffix).
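As a hypothetical example of sharing (all class names below are invented), two separate data structures might both hold references to the same Session objects; the bytes reachable from those sessions would then appear in the parenthesized, shared portion of both structures:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical sharing: two separate data structures both reach the same Session
    // instances, so the sessions' bytes appear as a "(shared)" section of each structure.
    class SessionCache {
        static final Map<String, Session> BY_ID = new HashMap<String, Session>();
    }

    class AuditLog {
        static final List<Session> RECENT = new ArrayList<Session>();
    }

    class Session {
        // session state would live here
    }

    public class SharingExample {
        public static void register(String id, Session session) {
            SessionCache.BY_ID.put(id, session);
            AuditLog.RECENT.add(session);     // a second path to the same objects
        }
    }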

Exploring content schematics

To learn more about the internal structure of a section, click on the underlined name of a data structure type, or click the data structure link to select the collection structure (Figure 19). You will see a pop-up window that shows the content schematic of the selected section (Figure 20).

Figure 19. Choose a data structure type

Content schematics show concise pictures of what makes a data structure big. A content schematic shows how the data structure uses collections (like HashMap or Vector) to structure its elements. Figures 20 and 21 are examples of content schematics. You read the schematic like a UML or Entity-Relation (ER) diagram, where each node represents either collection infrastructure (for example, how the standard library implements a HashMap) or the contained entities of the target program (for example, the String keys in a map). In both cases, for collections and contained entities, the content schematic hides the implementation details and shows a single number that is the total size of the implementation details of that entity.

Figure 20. EvaluateServlet content schematic

Figure 20 shows the EvaluateServlet content schematic. Some nodes do not show the class name because those instances do not contribute significantly to the size of the root class instance. This content schematic shows how the 500,000 instances of String are allocated.

Figure 21 shows an example of a more complex content schematic.

Figure 21. Complex content schematic

Type graph

If a node in a content schematic seems suspiciously large, you can drill down to a view that shows the implementation details of that entity. To do so, double-click on any node in a content schematic. You will see another pop-up window that shows the type graph of the selected entity. Figure 22 shows the type graph for the Hashtable referenced in the ownership chain under EvaluateServlet (Figure 20). The type graph shows how the classes (the data types) of an entity are interconnected, and shows how each type contributes to the overall size of that entity in the content schematic of the selected data structure.

Figure 22. Hashtable type graph

Field layout view

If a node in a type graph seems suspiciously large, you can double-click on it to drill down to the field layout view (Figure 23). For heap snapshots that support this data (currently only HPROF), this view shows how the fields of the selected class -- and the inheritance hierarchy of that class -- contribute to its instance size. Developers often add many fields to classes without realizing that the cost of each field is multiplied by a large factor (the number of instances) at run time. For example, developers often add fields to cache computations, or add fields that are needed only during the construction phase and not during the usually longer use phase of the application.

Figure 23. Field layout view
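As a hypothetical illustration (the class and fields below are invented), a few seemingly cheap fields can add up: an extra int costs at least 4 bytes per instance and an extra reference 4 to 8 bytes, so with a million live instances the construction-time and cache fields in this sketch account for several megabytes that the field layout view would attribute to the class:

    // Hypothetical class showing per-instance field cost: the size of every field
    // is multiplied by the number of live instances at run time.
    public class Order {
        // needed for the lifetime of the object
        private final long id;
        private final int quantity;

        // needed only while the object is being constructed and validated
        private int parsePosition;       // roughly 4 bytes per instance
        private String rawInputLine;     // a reference per instance, plus the retained String

        // cached value that could be recomputed on demand
        private int cachedHash;          // roughly 4 bytes per instance

        public Order(long id, int quantity, String rawInputLine) {
            this.id = id;
            this.quantity = quantity;
            this.rawInputLine = rawInputLine;
        }
    }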

Running MDD4J outside of IBM Support Assistant

IBM Support Assistant provides a convenient centralized platform from which you can discover, install, execute, and obtain regular updates for a wide variety of problem determination tools, so you should continue to use it to obtain regular updates to MDD4J. However, analyzing very large heap dumps can consume most of the CPU and I/O resources of a standard workstation for a considerable amount of time. In these cases, it might be desirable to perform the analysis on a server-class machine. You can find detailed instructions for running the tool outside of ISA in the MDD4J Help (Figure 24).

Figure 24. Running MDD4J outside of ISA

Conclusion

Now that you have seen the power of the Memory Dump Diagnostic for Java (MDD4J) tool for uncovering memory leaks and discovering other types of memory usage problems, you’re ready to tackle a problem the next time you suspect applications are not properly managing the Java heap. Be sure to explore other tools available in the IBM Support Assistant to further enhance your troubleshooting arsenal.

