Java applications in their purest form are highly portable. The idea of having a compiler generate hardware-neutral files goes a long way toward guaranteeing application portability. Java accomplishes this with what is now known as byte-code. Every Java compiler generates "object files" whose analogous assembler language is byte-code. To run the application, a Java Virtual Machine (JVM) interprets the byte-code generated by the Java compiler. Since byte-code is generated consistently by compilers regardless of the development platform, the porting effort centers on the JVM. JVMs are not platform neutral: the JVM's whole job is to interpret the byte-code and execute the program on a given hardware platform. It is the JVM that forces you to "port" your application.
Porting Java-based applications can be as simple as running verification tests on each operating system your application executes on. For most applications, that is all that is required. But for some, there are problems. To explain why, we must dig deeper into the JVM to better understand how it can cause "porting" problems. Most people look at the JVM as a black box whose role in life is to take byte-code and execute the interpreted program on any given operating system. This is an accurate description, and for the most part it is how Java developers should look at the JVM.
The JVM is a complex application. The simplest JVM would take byte-code as input, interpret it, and execute the application. Such a simple JVM has the smallest amount of operating system-dependent code and is therefore perhaps the most portable of JVMs. Simple JVMs, however, have significant performance limitations: the application can run only as fast as the computer can interpret and execute the byte-code. To solve application performance problems, bigger and faster computers must be used, but buying larger computers to alleviate performance problems is not cost effective. This inherent performance problem led to the creation of the Just-In-Time compiler (JIT). The JIT's job is to optimize the byte-code at run time. It does this by analyzing the application, determining which classes are used most frequently, and compiling their byte-code into a more native form for future use. The more native form executes faster and is compiled only once instead of being interpreted each time the class is required.
Adding a JIT to a JVM greatly increases the complexity of the JVM. Not only must the JVM interpret the byte-codes, but it must also mediate between classes that are interpreted and classes that the JIT has optimized. The JIT takes the byte-codes and performs optimization techniques that produce faster code, at the risk of being too aggressive and generating invalid code.
Now that we have a better understanding of what goes on inside a JVM, let's focus on what to do if your application produces incorrect results. The first step is to turn off the JIT. As mentioned earlier, the JIT adds significant complexity to the JVM. To turn off the JIT on AIX, set the environment variable JAVA_COMPILER as follows:
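One common form of this setting is shown below (the exact value accepted may vary by JVM level, and the application class name is hypothetical):

```shell
# Disable the JIT for the IBM JVM on AIX
export JAVA_COMPILER=NONE
# then rerun the application, e.g.:
# java MyApp
```

With the JIT disabled, the JVM falls back to pure interpretation, which is slower but removes the JIT from the picture for diagnosis.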
By disabling the JIT, you have reduced the problem to this: "Does the JVM on AIX interpret the byte-code correctly?" If the problem persists, submit a PMR to report the problem to IBM; chances are it will be resolved. If disabling the JIT resolves the issue, then the problem must lie somewhere in the JVM/JIT interaction. Since the JVM is complex, it's best to work with IBM's service organization to analyze the problem. Like most code optimizers, the JIT is architected so that various components handle specific optimization techniques and tasks. Each component can be turned on and off, and the service team will guide you through the process of determining which component of the JIT is having the problem. In some cases, the application will retain good performance with just a portion of the JIT disabled, and you--as an application supplier--can choose to release your product with portions of the JIT disabled if time is of the essence. Regardless, if turning off a portion of the JIT resolves your problem, the service team will pursue the JIT problem until it is resolved.
Java is great! But what about my legacy code that I still need to use? Do I have to convert my legacy code to Java before I can use it in my Java application? No. The Java Native Interface (JNI) was born because application developers required access to legacy code, higher-performing code, or third-party application code that wasn't available as Java code. It should be obvious that if your Java application uses JNI, your application will have to be ported: the native side of JNI is operating system-specific and must be built for each operating system your application is slated to run on.
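To illustrate why JNI forces a per-platform build, here is a sketch of a JNI build flow on AIX from that era (the class, source file, and header path are hypothetical):

```shell
# Compile the Java class that declares native methods
javac JniLib.java
# Generate the C/C++ header for those native methods
javah -jni JniLib
# Build the AIX shared library from the (hypothetical) C++ implementation;
# -qmkshrobj creates a shared object without runtime linking
xlC_r -qmkshrobj -o libjniLib.so jniLib.cpp -I$JAVA_HOME/include
```

The javac and Java steps are platform neutral; only the final link step changes per operating system, which is exactly where the porting work concentrates.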
JNI uses shared libraries as the interface. The actual Java code remains the same; the porting issue arises when the shared library must be created. On AIX, shared libraries can be created either with or without runtime linking. Runtime linking is a way to create shared libraries whose external references are resolved at the time of execution. To implement runtime linking on AIX, the application must be linked with the appropriate linker/loader flags. The java command on AIX is not linked with the runtime flag; therefore, all shared libraries used with JNI must have all of their external references resolved. If a shared library is not built correctly, the System.loadLibrary call will fail. To verify that a shared library is built correctly, use the dump command. For example:
dump -HTv libjniLib.so:

                        ***Loader Section***
                  Loader Header Information
VERSION#     #SYMtableENT     #RELOCent     LENidSTR
0x00000001   0x00000010       0x00000017    0x00000061

#IMPfilID    OFFidSTR         LENstrTBL     OFFstrTBL
0x00000004   0x000002b4       0x00000100    0x00000315

                        ***Import File Strings***
INDEX  PATH                                           BASE        MEMBER
0      /usr/vacpp/lib:/usr/lib/threads:/usr/lib:/lib
1                                                     libC_r.a    shr.o
2                                                     libc_r.a    shr.o
3                                                     libC_r.a    shr2.o

                        ***Loader Symbol Table Information***
[Index]  Value       Scn    IMEX  Sclass  Type    IMPid             Name
         0x00000000  undef  IMP   RW      EXTref  libC_r.a(shr.o)   cout
         0x00000000  undef  IMP   DS      EXTref  libC_r.a(shr.o)   __ls__7ostreamFPCc
         0x00000000  undef  IMP   DS      EXTref  libC_r.a(shr.o)   __ls__7ostreamFPCv
         0x00000000  undef  IMP   DS      EXTref  libC_r.a(shr.o)   __ls__7ostreamFi
         0x00000000  undef  IMP   DS      EXTref  libC_r.a(shr.o)   endl__FR7ostream
         0x00000000  undef  IMP   DS      EXTref  libc_r.a(shr.o)   sbrk
         0x00000000  undef  IMP   DS      EXTref  libC_r.a(shr2.o)  __vn__FUl
         0x00000000  .data  EXP   RW      SECdef  [noIMid]          dataPnt
         0x00000004  .data  EXP   RW      SECdef  [noIMid]          curAddr
         0x00000008  .data  EXP   RW      SECdef  [noIMid]          initAddr
         0x0000000c  .data  EXP   DS      SECdef  [noIMid]          __priority0x80000000
         0x00000018  .data  EXP   DS      SECdef  [noIMid]          reportMemory__Fv
         0x00000024  .data  EXP   DS      SECdef  [noIMid]          __ls__7ostreamFPFR7ostream_R7ostream
         0x00000030  .data  EXP   DS      SECdef  [noIMid]          allocateChunk__Fv
         0x0000003c  .data  EXP   DS      SECdef  [noIMid]          Java_jniLib_memAllocateMemory
         0x00000048  .data  EXP   DS      SECdef  [noIMid]          Java_jniLib_memHighWaterMark
The dump command lists the shared library's dependencies (see the section of the output labeled "Import File Strings") as well as the external references and where each reference is resolved (see the section "Loader Symbol Table Information"). The "Loader Symbol Table Information" section lists each external reference required to properly load the shared library, i.e. libjniLib.so. The columns of interest in this section are the IMPid column and the Name column. The IMPid column lists where the external symbol shown in the Name column is resolved from. Runtime-enabled shared libraries will have [..] in the IMPid column. Attempting to load a shared library with such a symbol resolution will not work on AIX; Java will simply state that the library could not be loaded.
Applications that use Java and native code together must consider how AIX handles memory. Executables on AIX use data segments for malloc'd memory. A data segment is a region of memory that is 256 MB large. By default, an executable will have one data segment in which both data and stack are stored. The number of data segments an application can use is controlled by the linker/loader option:
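The option in question is the -bmaxdata linker flag, where the number of segments appears as the high-order hex digit of the value; a sketch of its use (the file names are hypothetical):

```shell
# Link with N data segments; 0xN0000000 reserves N x 256 MB (N = 1..8)
xlC_r -o myapp myapp.o -bmaxdata:0xN0000000
```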
where N is the number of data segments to use, with a maximum of 8 for 32-bit applications.
This means a 32-bit application can allocate up to 2 GB of memory via malloc/new-type mechanisms. By default, the java command for Java 1.2.2 is linked with five data segments and Java 1.3.0 with eight.
Memory-related issues arise in applications that allocate memory within native code. Java allows you to specify the minimum and maximum heap sizes Java will consume by using the -ms and -mx options, respectively. The native code will be executing as part of the Java command and will be limited to the number of data segments the Java command was linked with. The problem arises when the Java application specifies a maximum heap size that doesn't take into account the amount of memory the native code requires. Hypothetically, let's consider a Java application that specifies a minimum and maximum heap size of one GB. When the Java application starts, the JVM will grab one GB of memory to be managed, leaving just 256 MB for a Java 1.2.2 based application. If the native code requires more than 256 MB of memory, a memory allocation error will occur when the application attempts to surpass the remaining amount of memory, i.e. >256 MB. The application programmer is then left perplexed, confused, and asking: "Why can't I allocate this memory? My system has plenty of RAM and paging space." The real issue is related to the number of data segments and not necessarily to the amount of paging space or RAM on the computer.
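For instance, a sketch of leaving native headroom on Java 1.2.2 (five segments, 1.25 GB total; the application name and sizes are hypothetical):

```shell
# A 768 MB maximum heap leaves roughly 512 MB of the 1.25 GB
# address space for native (JNI) allocations
java -ms256m -mx768m MyApp
```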
These types of memory-related issues also arise when a native-based application instantiates a JVM to execute Java code. Invoking a JVM from within native code is not mainstream; however, the same issues must be considered. If the instantiated JVM is given startup parameters for minimum and maximum heap sizes that exceed what the application's data segments can provide, the application could experience strange behavior later on.
How do you resolve memory-related issues? The easiest way is to sit down and consider the overall memory requirements for your application. For example:
TotalMemory = NativeMemoryRequirements + JavaMemoryRequirements
NumberOfDataSegmentsRequired = (TotalMemory + 255 MB) / 256 MB
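Plugging in hypothetical numbers makes the rounding concrete:

```shell
# Hypothetical requirements: 300 MB native + 512 MB Java heap
TOTAL_MB=$((300 + 512))                  # 812 MB
SEGMENTS=$(( (TOTAL_MB + 255) / 256 ))   # integer division rounds up to whole segments
echo $SEGMENTS
```

which yields 4, i.e. four 256 MB data segments.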
The NumberOfDataSegmentsRequired is a rough estimate of how many data segments your application needs. For native-based applications, you would link your application using the -bmaxdata option using NumberOfDataSegmentsRequired as the N value (see above). For existing applications, you can use the following environmental variable to override the number of data segments used by the application:
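The variable in question is LDR_CNTRL with its MAXDATA setting, which overrides the linked-in value at load time; a sketch (the application name is hypothetical):

```shell
# Override the data-segment count at load time (N = 1..8)
export LDR_CNTRL=MAXDATA=0xN0000000
# java MyApp
```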
again where N is the NumberOfDataSegmentsRequired.
To further illustrate how memory grows when using native code, consider the following chart. The chart lays out the first address (data address) beyond the data base of a Java based application using JNI to allocate memory. The Java application was started using various minimum and maximum heap sizes to illustrate where the data address is in relation to the number of data segments, i.e. maximum memory allocable.
The application allocates a large array in Java, uses JNI to query the data address, allocates a large array in C++ and queries the data address again. The bottom blue line, for example, shows the data address just above 0x40000000. The second data point is querying the data address after the allocation of the large Java array. The data address doesn't change because Java has already allocated the memory and simply manages the memory internally. The third data point is after a JNI call to some C++ code that allocates a large array. As you can see, the data address increased, meaning that the overall amount of memory the application is consuming has suddenly increased. If the application continues to allocate memory in the JNI code, the data address continues to rise and would eventually fail once the data address attempts to go past the maximum amount of memory the application is permitted to allocate based upon the -bmaxdata value.
Java programs have two stacks: one for Java code and one for C code. The Java command line option -oss sets the Java code stack size, and the -ss option sets the C code stack size. The -ss option is the more interesting of the two for Java portability because it relates to the C code, i.e. the underlying implementation of the JVM. For every thread a Java application creates, an implementation-specific thread is created. For AIX, the implementation-specific thread is a pthread. pthreads have various attributes/behaviors that can be set on a per-thread basis; one of these is the size of the stack used by the thread. The -ss option allows you to specify that stack size.
Why should you be interested in setting the stack size using the -ss option? If the underlying implementation is recursive or has a deep calling tree, you might run into a scenario where you run out of stack. When you run out of stack, your application will crash! Unfortunately, there isn't a method to determine how large the stack should be. In general, the default value is adequate. But in rare cases, you'll need to know how to determine if the stack size should be increased.
Before we can determine if the stack size needs to be increased, you must first make sure you have a valid core file. On AIX, when an application terminates abnormally (it crashes), an image of the application is dumped to a file named core. By default, AIX is set up to generate a smaller than normal core file to prevent file systems from being filled up. A valid core file is guaranteed if the full core file setting is enabled. To enable a full core you must execute the following command as superuser/root:
chdev -l sys0 -a fullcore='true'
With AIX set up correctly, rerun your application to generate a valid core file. Once you have a valid core file, you'll want to run the debugger, dbx, against the Java command as follows:
dbx <path to java command>/java core
dbx will read the core file and you should see something like this:
01: Type 'help' for help.
02: reading symbolic information …
03: [using memory image in core.20010221_0930AM]
04: Segmentation fault in java. at 0x31f80880 ($t17)
05: 0x31f80880 (???) 91e10020      stw   r15,0x20(r1)
(dbx)
Line 04 indicates that the application terminated as a result of a segmentation fault. At the very end of line 04, you see ($t17). $t17 indicates that thread 17 was the thread that received the segmentation fault. We can now retrieve information regarding thread 17 by executing the dbx command:
(dbx) thread info 17
Here is a sample output:
>$t17  run  running  28563  k  no  sys
general:
    pthread addr = 0x3096812c    size = 0x18c
    vp addr      = 0x3096a004    size = 0x284
    thread errno = 2
    start pc     = 0x300400e8
    joinable     = no
    pthread_t    = 1011
scheduler:
    kernel =
    user   = 1 (other)
event:
    event  = 0x0
    cancel = enabled, deferred, not pending
stack storage:
    base  = 0x30ca2000    size = 0x40000
    limit = 0x30ce2208
    sp    = 0x30ca1fd8    <- Below the base address
The lines of most interest are in the "stack storage" section. Here we can see the base, limit, and size of the stack, along with where the stack pointer (sp) currently is. The stack pointer should never be below the base value. If it is, the stack size for the C code should be increased using the -ss option.
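If the diagnosis does show the stack pointer below the base, the C stack can be enlarged at startup; a sketch (the application name and size are hypothetical):

```shell
# Give each thread a 2 MB native (C) stack instead of the default
java -ss2m MyApp
```

Increase the value incrementally and retest, since every thread pays the cost of the larger stack.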
Java is portable. However, inconsistent behavior of Java-based applications leaves the impression that Java is not portable. The real issue is not Java, but the implementation of the JVM. Throughout this article, various operating system-specific and JVM-specific issues have been explored to uncover potential problems that arise while porting Java applications to AIX.
- AIX and UNIX: The developerWorks AIX and UNIX zone hosts hundreds of informative articles and introductory, intermediate, and advanced tutorials.
- Technical events and webcasts: Stay current with developerWorks technical events and webcasts.
- IBM trial software: Build your next development project with software for download directly from developerWorks.
- Participate in the AIX and UNIX forums and developerWorks blogs, and get involved in the developerWorks community.
Brad Cobb works as a Senior Technical Consultant for the IBM Solutions Enablement Group, where he assists independent software vendors in enabling their applications on pSeries servers. Brad has over ten years of experience porting applications to AIX, during which he has worked with a wide variety of application types. He has filed patent applications, published articles, and spoken at developer conferences. You can reach Brad at firstname.lastname@example.org.