Java Technology Community
Here is what I think of Java parameter passing conventions.
At the programmer's level, Java is often said to pass objects by reference and primitives by value. Strictly speaking, Java passes everything by value: for objects, what the callee receives is a copy of the reference, i.e. the heap address of the object, so the object references themselves are actually passed by value. This also means Java saves the space and effort of copying the entire object onto the subroutine linkage channel (for example, stack memory).
By definition, pass by reference means 'a parameter passing convention where the lvalue of the actual parameter (argument) is assigned to the lvalue of the formal parameter.'
When a reference is passed this way, the callee method can manipulate the original object's attributes, invoke the object's methods, and re-allocate, re-assign, or purge the components of a composite object. These operations are visible through the caller's reference, because there is only one object in the heap, pointed to by both references. Re-assigning the formal parameter itself, however, has no effect on the caller's reference.
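The semantics above can be sketched in a few lines (class and method names are illustrative, not from any particular application):

```java
// Sketch of Java's pass-by-value-of-reference semantics.
public class ParamDemo {
    static void mutate(StringBuilder sb) {
        sb.append(" world");            // visible to the caller: same heap object
    }

    static void reassign(StringBuilder sb) {
        sb = new StringBuilder("gone"); // only the local copy of the reference changes
    }

    public static void main(String[] args) {
        StringBuilder s = new StringBuilder("hello");
        mutate(s);
        reassign(s);
        System.out.println(s); // prints "hello world": mutation is visible, reassignment is not
    }
}
```

The mutation travels back to the caller because both references point at one heap object; the reassignment does not, because only the callee's copy of the reference was changed.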
For destroying an object, the C++ way is to 'delete' the object, and the C way is to 'free' the pointer. If passed by reference or by address, both languages have the flexibility of cleaning up an object or a structure from anywhere in the caller-callee chain. Invalidating an object indirectly invalidates other references or pointers cached elsewhere in stack locations, and trying to reuse those references or pointers results in a crash.
This is different in Java. Since there is no explicit freeing of objects, we rely on assigning null to the reference, which is the only way to make an object eligible for cleanup. Even after the callee nullifies its reference, the object lives on through the caller's reference. This means that an object cannot be freed (or marked for freeing) through one reference while a peer reference is alive, and vice versa.
This may be a conscious design decision to eliminate bad references and ensure that every object reference is either null or the address of a valid object. During garbage collection, the memory of unreferenced objects is not really freed back to the system; rather, it is kept in an internal free pool and is still mapped into the process. If it were accessed through stale references, such a bad dangling pointer would actually cause more damage than a crash.
But then how do you clean up an unwanted Java object? Set your object reference to null and wait for a GC to occur? This might not work, because if there is a second reference elsewhere in the stacks and registers, consciously or unknowingly, the object is not collected. Consequently, many of the objects the programmer has explicitly discarded will remain in the heap until the last reference to the object also goes out of scope. This may be sooner, later, or never.
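A minimal sketch of the peer-reference problem (names are illustrative):

```java
// Sketch: nulling one reference does not make the object collectable
// while a peer reference is still alive somewhere.
public class NullingDemo {
    static StringBuilder cached;          // a "hidden" second reference

    static void discard(StringBuilder sb) {
        cached = sb;                      // peer reference taken, perhaps unknowingly
        sb = null;                        // the callee's nulling frees nothing
    }

    public static void main(String[] args) {
        StringBuilder payload = new StringBuilder("big payload");
        discard(payload);
        payload = null;                   // caller discards its reference too
        // The object is STILL reachable through 'cached', so the GC will
        // not reclaim it until that last reference also goes away.
        System.out.println(cached);       // prints "big payload"
    }
}
```

Even after both the caller and the callee have nulled their references, the object survives through the static field, which is exactly how long-lived caches keep "discarded" objects alive.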
Many memory leaks, including the infamous classloader leaks, can be attributed to this 'hidden and under-documented' behavior of Java. And this is the very reason we see more OutOfMemoryErrors than NullPointerExceptions.
Reflection in Java is primarily used to retrieve class information at run time, but it is also a way to generate classes. This happens through the "accessor". When using reflection, the JVM connects the methods of a reflecting object to the object/class being reflected on.
The "accessor" can be of two types:
JNI accessor - a native accessor which requires very little setup.
Bytecode accessor - the "dynamic class generation" aspect of reflection.
The JNI accessor is fast to set up but slow to run: switching from the Java context to the native context is always time-consuming.
The bytecode accessor is slow to set up but fast to run: it needs to build a class and load it through a new classloader, but it is faster later on because the JIT can optimize the generated class for you.
Initially, the JVM uses the JNI accessor for reflection. After a "threshold", these accessors are promoted to bytecode accessors. This is called INFLATION.
Excessive reflection has a drawback: it can lead to a lot of bytecode accessors being created, which in turn means a lot of accessor classes and classloaders - both of which consume native memory to a large extent.
If your app is seen to be creating a lot of Generated*Accessor classes or the classloaders that load them, excessive inflation may be responsible, and the inflation threshold can be tuned.
The default threshold for inflation for IBM Java 5.0 is 15, and it can be controlled through a system property.
If the threshold N is 0 or less, the accessors will never be inflated.
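A small sketch of the kind of code that triggers inflation: repeated reflective calls on the same Method object. The class and method names here are illustrative, and the accessor promotion is JVM-internal behavior, so the program's visible output is the same on any JDK:

```java
import java.lang.reflect.Method;

// Sketch: repeated reflective invocations of the same Method. On IBM
// JDKs, once the inflation threshold (default 15 on IBM Java 5.0) is
// crossed, the JNI accessor is replaced by a generated bytecode accessor.
public class InflationDemo {
    public static String greet() { return "hi"; }

    public static void main(String[] args) throws Exception {
        Method m = InflationDemo.class.getMethod("greet");
        for (int i = 0; i < 20; i++) {   // crosses the default threshold
            m.invoke(null);
        }
        // Later calls go through the (JIT-optimizable) bytecode accessor
        // on JVMs that inflate; the result is the same either way.
        System.out.println(m.invoke(null)); // prints "hi"
    }
}
```

Each distinct Method that gets inflated costs a generated class plus classloader, which is where the native-memory growth described above comes from.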
Make sure you read this excellent dW article on Native Memory
rpalanki
This is an interesting post from the blog of Chris Bailey (IBM Java Service Architect, Java Technology Center), "IBM on troubleshooting Java applications". (Please refer to the bookmarks on the community for the blog link, which has several interesting posts.)
The IBM Extensions for Memory Analyzer (IEMA) v1.1 are now available on
As part of Java Level3 Service, I have seen Java Heap Exhaustion related issues of varying types.
One of the more interesting cases is the case of exhaustion due to LARGE object allocations.
To give an example, I have seen banking-related applications which create objects in the hundreds-of-megabytes range.
In another instance, we saw a Java OOM due to an object allocation of 1 GB!
It is not uncommon for applications to create large objects - but when the customer sees them, how do we let them know the CODE from where these allocations are coming?
This is where the "allocation" event option comes to mind.
The RAS [Reliability, Availability, Serviceability] component of the IBM JDK controls the diagnostic component of the JDK.
The -Xdump option has a special event called "allocation" which enables the user to configure dump agents on an allocation event.
In combination with the "filter" sub-option, this proves handy in filtering allocation requests based on SIZE.
Here is how I used it on a simple HelloWorld program:
Dump a "stack" of the thread that is inducing an object "allocation" event - with the size of the object > 3000 BYTES.
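A hedged reconstruction of such a setup (the -Xdump syntax follows the IBM JDK diagnostics documentation; the class name and exact array size are illustrative). The program would be launched with something like `java "-Xdump:stack:events=allocation,filter=#3000" HelloWorldAlloc`:

```java
// Sketch: a HelloWorld-style program that makes an allocation larger
// than the 3000-byte filter, so an IBM JVM started with
//   -Xdump:stack:events=allocation,filter=#3000
// would print the allocating thread's stack trace.
public class HelloWorldAlloc {
    static char[] allocate() {
        // A char[1704] needs 2 bytes per element plus the array header,
        // i.e. roughly 3408 bytes - above the 3000-byte filter.
        return new char[1704];
    }

    public static void main(String[] args) {
        char[] big = allocate();
        System.out.println("Hello, World! allocated " + big.length + " chars");
    }
}
```

The program itself runs on any JDK; only the stack-dump side effect requires an IBM JVM with the -Xdump agent configured.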
In my testcase, a char array of size 3408 bytes was allocated from the above mentioned stack.
This is a very simple use of the "allocation" event option.
More on this here- http://publib.boulder.ibm.com/infocenter/javasdk/v6r0/topic/com.ibm.java.doc.diagnostics.60/diag/tools/gcpd_stackevents.html
One of the key features of Memory Analyzer is the leak suspects view. Memory Analyzer has capabilities to find leak suspects: large trees / deeply nested trees that contribute to large Java heap usage.
How does MAT decide that something is a leak suspect?
To understand this we need to get familiar with two terms:
1. Shallow size of the object
2. Retained size of the object
Shallow size is the size of the individual object alone, whereas retained size is the total size of the object tree, which also includes its children (objects referenced by this object). Consider A as the root object with outgoing references to B and C (the children of A). B has an incoming reference from A and outgoing references to B1 and B2. Say the size of object A alone is 100; this is its shallow size. The total tree size of A is 140; this is its retained size, as it includes the sizes of its children. Now consider the size of B1 as 1000, making the total size of A 1135. Here A is the biggest consumer tree, and within it B1 is the suspect because it is the biggest consuming child in the A tree.
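The shallow/retained relationship can be sketched as a small computation. The graph mirrors the example above, with assumed shallow sizes for B, C, and B2 (10, 10, and 15) chosen so that the totals match; note that MAT computes retained sizes from a dominator tree to handle shared references, whereas this simple DFS is only correct for a tree:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: for a tree (no shared children), the retained size of a node
// is the sum of the shallow sizes in its subtree.
public class RetainedSize {
    static Map<String, Integer> shallow = new HashMap<>();
    static Map<String, List<String>> children = new HashMap<>();
    static {
        shallow.put("A", 100); shallow.put("B", 10); shallow.put("C", 10);
        shallow.put("B1", 1000); shallow.put("B2", 15);
        children.put("A", List.of("B", "C"));
        children.put("B", List.of("B1", "B2"));
    }

    static int retained(String node) {
        int total = shallow.get(node);              // shallow size of this node
        for (String child : children.getOrDefault(node, List.of())) {
            total += retained(child);               // plus everything it keeps alive
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(retained("A"));  // 1135, as in the example above
        System.out.println(retained("B"));  // 1025: B1 dominates this subtree
    }
}
```

Here A's shallow size (100) is small, but its retained size (1135) is large almost entirely because of B1, which is exactly the shape the leak suspects report looks for.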
A significant drop in the retained sizes shows the accumulation point, and we can view the chain of objects and references which keep the suspect alive. The largest drop in the retained sizes helps provide a quick relative analysis for finding the leak suspects. A significant drop in the retained size can be due to very large objects in the subtree or too many accumulated objects, for example in collections.
Memory Analyzer parses the dump file (heapdump or the process's system dump), compares the shallow and retained sizes of the objects in the tree, and gives you the leak suspects report. This view is the default report generated by Memory Analyzer. The tool has built-in capabilities to look for probable leak suspects - large objects or collections of objects that contribute significantly to the Java heap usage - and displays this information in the form of a pie chart. It reports memory leak suspects and checks for known anti-patterns. Below the pie chart we can find information about the suspects: the objects' memory utilization, number of instances, total memory usage, and owning class. From the same view we can do more interactive analysis of the suspects provided by Memory Analyzer.
A sample view from the memory leak report is shown below.
Read this interesting post by Chris Bailey (Java Service Architect) on the IBM on Troubleshooting Java Applications Blog (https://www.ibm.com/developerworks/mydeveloperworks/blogs/troubleshootingjava/?lang=en_us&ca=dth-mydw)
The general guidance is that an HTTP session should only be used to store the data necessary to maintain state between browser invocations, and that the amount of data stored should be as small as possible. However, we often see that, over several iterations of additional development work and new features being added to a web application, the session sizes have grown as more and more data is stored.
BharathRajBK
Java Data Objects (JDO) is a specification for Java object persistence. It is an interface-based definition of object persistence for the Java language, used to describe database operations (primarily SQL-oriented operations) with the data store.
A few of the characteristics of JDO are as below –
This brings us to some of the advantages of JDO:
Data fetch mechanism by JDO:
JDO fetches data from databases in the form of simple Plain Old Java Objects (POJOs). For every row of data fetched, a POJO is retrieved with the individual column values as the attributes of that POJO.
Hence, for a database read fetching, say, a result set of 100 rows, a collection of 100 POJOs is returned to the application, which is handled through the collection and iteration classes supported in the standard Java API libraries.
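The row-to-POJO idea can be sketched in plain Java (this is an illustration of the mapping, not the JDO API itself; the Customer class and two-column shape are assumptions):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a JDO-style mapping: each fetched row becomes one POJO,
// with the columns as that POJO's attributes.
public class RowMapperSketch {
    // Hypothetical POJO mirroring a two-column table.
    static class Customer {
        final int id;
        final String name;
        Customer(int id, String name) { this.id = id; this.name = name; }
    }

    static List<Customer> mapRows(List<Object[]> rows) {
        List<Customer> result = new ArrayList<>();
        for (Object[] row : rows) {    // one POJO per row
            result.add(new Customer((Integer) row[0], (String) row[1]));
        }
        return result;
    }

    public static void main(String[] args) {
        List<Object[]> rows = new ArrayList<>();
        for (int i = 0; i < 100; i++) rows.add(new Object[] { i, "name" + i });
        List<Customer> pojos = mapRows(rows);
        System.out.println(pojos.size()); // 100 POJOs for 100 rows
    }
}
```

Every fetched row costs one heap object (plus its field values), which is why large result sets translate directly into heap pressure.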
Performance impact to be considered:
There are a few things to note with the above mechanism of data fetch by JDO –
Implementation best practices – prevent memory leaks:
Out of memory error:
Java applications run in a runtime called the Java virtual machine (JVM). Each JVM runs with its own pre-allocated memory, defined by minimum and maximum parameters supplied at runtime startup. Within this runtime memory, a major chunk is allocated to store all objects created by the executing Java program. We call this part of the runtime memory the "heap memory".
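The minimum and maximum heap parameters mentioned above are typically supplied on the command line; a hedged example (the sizes and application name are illustrative):

```shell
# Start the JVM with an initial heap of 256 MB and a ceiling of 1 GB:
# -Xms sets the minimum (initial) heap, -Xmx the maximum.
java -Xms256m -Xmx1024m MyApp
```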
The heap memory gets utilized as the Java application's workloads are executed. Java allocates objects in this data area for all object requests made by the application. If an object cannot be allocated in the heap because not enough memory is available, an allocation failure occurs. Every such allocation failure triggers the garbage collection (GC) algorithm, which sweeps the entire heap and reclaims objects that are no longer referenced (dead objects).
The heap may be of a fixed size, but it can also be expanded when more memory is needed or contracted when it is no longer needed. If a larger heap is needed and cannot be allocated, an OutOfMemoryError is thrown, resulting in an application server crash and the generation of a heapdump file containing details of the contents of the heap when the out-of-memory condition occurred.
Common causes of out of memory pertaining to use of JDO:
Further analysis and troubleshooting techniques with JDO is continued in the next section of this blog titled - "Troubleshooting and memory leak analysis with JDO"
BharathRajBK 2700013SF4 Tags:  objects detection framework troubleshooting java data analysis tools memory leaks 2 Comments 8,291 Views
Although this blog is self-sufficient, and would encourage readers to learn more about using memory analyzer tools, particularly with respect to heap dump analysis to detect memory leaks, it is advisable to read the previous blog in this series, titled "Build Enterprise applications with JDO - best practices to avoid memory leaks", to get a better understanding of the causes of memory-related issues in today's enterprise applications using JDO.
This blog essentially continues from the previous blog mentioned above with the troubleshooting section as below -
Verbose gc analysis:
Java writes all the information pertaining to the garbage collection process to a log file called native_stderr.log (hereafter called the verbose GC log). This file is placed in the same directory as the SystemOut logs of the application. It contains information such as the heap occupancy when the allocation failure occurred, the time taken to complete each of the stages of the GC cycle such as mark, sweep, and compact, the free heap memory available after collection, and so on.
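Verbose GC logging is enabled with launch options; a hedged sketch (the application name is illustrative; -Xverbosegclog is an IBM JDK option, per the IBM diagnostics documentation):

```shell
# -verbose:gc writes GC events to stderr (which WebSphere captures in
# native_stderr.log).
java -verbose:gc MyApp

# On IBM JDKs, -Xverbosegclog redirects the verbose GC output to a
# dedicated file instead.
java -Xverbosegclog:verbosegc.log MyApp
```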
Analyzing verbose gc logs helps understand the pattern of JVM heap utilization. With this information, one can determine whether a memory leak is occurring or not by observing a continuous growth of heap memory over time until exhaustion.
If that pattern is not seen, and instead a sawtooth pattern is observed, where memory is consumed and released periodically, then the out-of-memory error may have occurred due to insufficient heap memory. If gencon is the GC policy being used, then the sizes of the nursery / tenure areas within the heap need to be set appropriately, so as to avoid out-of-memory errors.
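The nursery sizing mentioned above is done with launch options; a hedged sketch (option names per the IBM JDK documentation, sizes and application name illustrative):

```shell
# Select the generational concurrent GC policy and size the nursery:
# -Xmn fixes the nursery size; the remainder of -Xmx is available to
# the tenure area.
java -Xgcpolicy:gencon -Xmn256m -Xmx1024m MyApp
```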
The figure below shows a sample application's memory utilization pattern, graphically obtained after running a parser tool called the Pattern Modeling and Analysis Tool for Java Garbage Collector (PMAT) on the verbose GC log.
The recommended approach taken while troubleshooting a memory leak is by analyzing heap dumps. Heap dumps are automatically generated when the JVM runs out of heap memory and crashes as a result. They can also be taken manually by using application server administration scripts.
A heap dump file consists of all the objects that were residing in the heap when the out-of-memory error occurred. Analyzing a sequence of heap dumps taken at regular intervals helps in understanding the sizes and numbers of objects in memory. This, in turn, helps in identifying which class / module is leaking memory.
Analyzing heap dumps may reveal that the application code is leaking memory (mostly due to not closing resources), that a framework on which the application is built is leaking memory, or even that there may be a bug in the application server due to which memory is leaking. Narrowing down to this level, and then tracing to the exact class / module where the code leaks memory, must be done very carefully, with a proper understanding of the application and its behavior over the end-to-end request flow, so as to arrive at the right problem areas.
Below are sample screenshots of heapdump analysis pertaining to a JDO-based application memory leak. This data is obtained by running a tool on the heapdump files which parses the entire file to search for leak suspects and displays them in a readable format for ease of analysis and problem identification. This tool is the Memory Analyzer Tool (MAT).
Using the Memory Analyzer Tool (MAT), it is very easy to understand the objects that are lying in the heap and their memory sizes at the point in time when the heap dump was taken. A sample listing is provided in the image below -
Using this data, it is convenient to identify those objects which grow in size as the application is used more and more. The way we pinpoint objects that are continuously growing in size is by taking multiple heap dump snapshots at contiguous intervals of time and then using MAT to understand their size growth patterns.
The API library in JDO framework can be used to configure the behavior of JDO during database transactions.
Query results Cache:
The query results cache holds strong references to the retrieved objects. If a query returns too many objects, it can lead to an OutOfMemoryError. Hence, it is advised to use a "paging" mechanism while retrieving result sets from the database.
The paging mechanism of retrieving objects from the database helps in retrieving only a certain number of records at a time instead of retrieving all the records in a single shot. The results retrieved through this mechanism can be displayed to the end user, along with a link in the application which says "next few records" / "next page".
When the end user wishes to see more records for the executed query, this link is clicked, which in turn retrieves a few more records from the database and displays them on the second page. If the user wishes to see all the records, this process is repeated until all the records for the executed query have been retrieved.
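The paging described above can be sketched in plain Java (an illustration of the idea, not the JDO API itself; in JDO, Query.setRange(fromIncl, toExcl) plays this role against the database):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: serve results one fixed-size page at a time instead of
// materializing the whole result set at once.
public class PagingSketch {
    static <T> List<T> page(List<T> all, int pageIndex, int pageSize) {
        int from = Math.min(pageIndex * pageSize, all.size());
        int to = Math.min(from + pageSize, all.size());
        return all.subList(from, to);   // only this slice is shown to the user
    }

    public static void main(String[] args) {
        List<Integer> rows = new ArrayList<>();
        for (int i = 0; i < 95; i++) rows.add(i);
        System.out.println(page(rows, 0, 20).size()); // first page: 20 rows
        System.out.println(page(rows, 4, 20).size()); // last page: 15 rows
    }
}
```

With real JDO paging, each "next page" click issues a new bounded query, so at most one page's worth of POJOs is strongly referenced at a time.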
The advantages of using this approach are many –
Ensure best practices are adopted while developing any application, both in structuring your code and in ensuring that there are no open resources / handles left unclosed. Developers should ensure that optimized techniques and methods are adopted while building functionality into the application.
Profiling the application periodically, at significant stages of application development, will help in determining its "code wellness", preparing the application to behave optimally under significant user load and preventing unexpected memory leaks.