I have a synchronized block. Inside the block, I call a method which creates a PDF document. The problem is that the thread which calls this method waits for a long time, causing the other threads to hang. How can I prevent this? Can I use some sort of timer, so that if one thread is taking too long to return from the method, we can interrupt that thread and continue with the other threads? Please suggest a solution.
Thanks
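One way to approach this is to run the slow call on a worker thread and bound the wait with a timeout, cancelling the task if it takes too long. Below is a minimal sketch; createPdf is a hypothetical stand-in for the actual PDF-creation method:

```java
import java.util.concurrent.*;

public class PdfWithTimeout {
    private static final ExecutorService pool = Executors.newFixedThreadPool(4);

    // Hypothetical stand-in for the real PDF-generation method.
    static String createPdf(String name) {
        return "pdf:" + name;
    }

    // Runs createPdf on a worker thread and gives up after the timeout,
    // so the calling thread (and any lock it holds) is not blocked forever.
    static String createPdfWithTimeout(String name, long timeoutMs) {
        Future<String> task = pool.submit(() -> createPdf(name));
        try {
            return task.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            task.cancel(true);   // interrupts the worker if it is blocked
            return null;         // caller decides how to handle the timeout
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(createPdfWithTimeout("report", 1000));
        pool.shutdown();
    }
}
```

Note that cancel(true) only delivers an interrupt; the PDF library must respond to interruption for the worker to actually stop. Also, keep the timed wait outside the synchronized block (or replace the block with a ReentrantLock and a timed tryLock), otherwise the other threads will still queue up behind the lock.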
|
There are many instances within the Linux kernel where the native APIs have been upgraded to newer and higher levels, yet kernel support is retained for both levels for backward compatibility.
Consider a scenario where the application uses the older calling format, but the machine on which the compilation happens has both levels installed.
We would probably resolve the call with dlsym, where handle is the handle to the numa library returned by dlopen and numa_node_to_cpus is the function to be accessed, and where BUF and cpubuf are declared as follows. Let's get into the Linux kernel at this point: there are two versions available for this call.
|
MEMBER SPOTLIGHT
In his own words, Hemanth is the kind of person who admits when he doesn't know something, and who won't stop there, but rather hunts it down; he loves to sit for long hours learning new things and trying to do something revolutionary. He believes in the quote "If you can dream it, you can do it". Though he is a computer science graduate, he is interested in art and philosophy, and also does some translations on Launchpad.
Q. Tell us a bit about yourself & your accomplishments.
As a kid, I always imagined that computers were devices which could answer any question we have and solve any problem we have in life! No wonder this curiosity put me on a high adrenaline rush that is still making me explore the extraordinary field of computers. Defining oneself is the hardest thing to do; I asked my friends, and most of them said I'm just a computer geek. Relative terms are always confusing; I would define myself as a person who always likes to face challenges in life and has always tried to do something different. Completing my bachelor's in computer science, I was more adept than the normal curriculum required and was more interested in doing new things on my own. Accomplishments indeed is a very relative term! Below are a few noticeable ones:
- Won the top speaker award at IBM Develothon 2010.
- First prize in a state-level technical paper presentation conducted by Infosys.
- Gave more than 50 ideas to Google Wave; a few did get implemented.
- 201st best contributor worldwide on Ubuntu Brainstorm.
- Took part in the first-ever open-web P2PU course from Mozilla.
- Gold medal in a state-level debate competition.
- One of the top 100 selected members in the Adobe Flex boot camp.
- Mentor approval for GSoC for an idea called "Ants and Algorithm".
Q. You were announced as the Best Speaker at IBM Develothon 2010, Bangalore. How was the experience?
This was the third time in a row I attended the IBM developerWorks Unconference as a speaker, and I have seen it evolving. This time there was pre-talk voting, where the audience decided whether they would like to hear someone speak on a particular topic. Crossing that hurdle was in itself a tough task, especially amongst many on-the-spot entries. It's always tricky to read the crowd mentality: sometimes overwhelming preparation comes to nothing, and sometimes an extempore talk wins the crowd! It was quite an epic victory, and I was all excited [as I mostly am when it comes to speaking in front of a big crowd]. After the pre-talk voting I had already updated my social networks, saying, "Selected to be one of the speakers at IBM Develothon, wish me luck". Then it was for the audience to select who was the best of the day amongst the lot. It was a roller-coaster ride, seeing so many people agreeing with my thoughts. It made me feel elated, and I got a great opportunity to network during the yummy lunch. I have blogged about it in detail here: http://www.h3manth.com/content/ants-journey-planet-called-ibm
Q. Do you have any favorite technology-related websites/books that you can recommend to others?
Websites: {techcrunch, slashdot, news.ycombinator, reddit, mashable, readwriteweb}.com
Books: {founders, coders}@work, Design Patterns: Elements of Reusable Object-Oriented Software, Surely You're Joking, Mr. Feynman!, and if you have read all that, for lifetime reading there is "The Art of Computer Programming" (TAOCP) for all of us :).
Q. What is the best advice you have ever been given?
The best advice my mother once gave me was: as humans, everyone will have good and bad qualities; learn to see only the good qualities in them. I thought it was just a cliché, but when I tried to apply it, I saw a different world altogether!
Q. How do you find developerWorks useful?
The very first article I read on developerWorks was in the Linux section; it was a happy read. My developerWorks is drawing my attention these days, with many renowned geeks blogging on their expertise. It is indeed a great repository of good stuff. It would be even nicer if there were live interaction with everyone; a simple chat application would be fun.
Q. Your blog has a vast amount of great content. What advice would you give a blogger trying to find new stories and produce great new content every day?
First and foremost, write about everything you do that is in unison with your blog's genre, have a deadline, be self-motivated, and never worry about how others will react. As everyone is unique, we must find our own way and add a variety of flavours to our writing. My philosophy is to just write about what I have tried in a particular topic, and not to worry about driving traffic to the blog or generating comments; when something is interesting and new, people will indeed like it. But after publishing a new post it is important that at least a few of your social media friends know about it; sharing is life. Blogger's block happens to every blogger; a few things I have followed are to let the mind relax in a cool room, start brainstorming on a particular topic, and look for resources: if someone has already done it, try to think of better ways of doing it.
Q. How do you wish to see your career shape up in future?
Having completed my degree in 2009, with some start-up experience behind me, I am eager to apply my potential to the fullest. Moving on, I'm confident enough to rely on my technical and articulation skills to achieve full satisfaction in whatever I get to work on.
Thanks for the candid responses, Hemanth!
To network with Hemanth visit hemanth.hm
|
Member: Ramana V Polavarapu
Q. Tell us about yourself & your role with IBM.
Right from the beginning, I have been extremely fond of mathematics. Consequently, I planned to get into Mathematics, Physics and Chemistry (MPC) in intermediate (11th and 12th grades). However, I was discouraged from getting into MPC because of my visual impairment, as there was no provision for the college to support me with the lab work. Of course, I was disappointed, and I decided to go into Commerce, Economics and Civics (CEC), since I believed that Economics would certainly have math in it. I completed my bachelor's degree in Economics. Then, I looked at a management career through XLRI, IRMA and IIM. All three institutes claimed that they did not have a provision for a visually impaired dude! So I decided to do a master's degree in Economics at the University of Hyderabad. Later, I spent one year at ISI. Then, I secured my Ph.D. in Economics from the University of California at Davis. I joined the University of Colorado at Denver as a faculty member and continued in that position for the next seven years. During my stint at the University of Colorado, I realized that everyone in the field of Information Technology was making more money than I was with my university job! I switched careers and went into computer science. For the last eleven years, I have been in this industry. In conclusion, I did not really choose the path that I have been walking; in other words, I am not one of those heroes that you find in the writings of Ayn Rand! At the same time, I wholeheartedly tell you that I have no regrets at all.
Current job profile: For the last year, I have been with the Java Technology Center (JTC), working on the Apache open source project called Harmony (the alternative class library). Once again, I am planning to head back to Research because of one more interesting opportunity with MIOP.
Q. What has your career journey been like? How did you get started?
A.
To some extent, I answered this question in the introduction. In IT, I commenced my career with a bunch of e-commerce projects, using application servers like BroadVision and ATG Dynamo. Then, I joined SAP Labs, and for the next five years I worked on various aspects of NetWeaver. Roughly three years ago, I got into IBM Research.
Q. You were once an economist; how challenging is the role of an IBM researcher for you?
A. I did not face any problem in terms of research methodology, but I did have challenges developing a research agenda in the area of computers. While I was with the research division, I focused on service sciences, software engineering, mobile computing and social networks.
Q. How do you keep yourself updated on technologies? Are there any favorite websites, books, journals or blogs you follow?
A. I read lots of books. Currently, I am reading Domain-Driven Design; I strongly recommend it. I also follow blogs. My favorite ones are developerWorks, TechCrunch and Harvard Business Review.
Q. Is there anyone that you look up to and model yourself on?
A. It changes from time to time. Currently, my hero is Kent Beck.
Q. How do you find developerWorks/My developerWorks useful?
A. For over six years, I have been working only in Java. In my view, there is no better source than developerWorks. However, I am still figuring out My developerWorks.
Q. What is your advice to future aspirants?
A. Keep nurturing the love for technology. Money alone cannot make us happy.
Q. What motivates you?
A. The previous answer makes sense here too. I use Java to solve my day-to-day problems. For example, I downloaded a book called "Tips from the Trenches" today. The copyright notice was on every page, and it was getting irritating, so I wrote a simple Java program with a hint of regular expressions to clean it up. Coding is fun. I do not know how long it will last, but I have been enjoying coding for the last 25 years.
|
Computers may keep getting faster and more powerful, yet many organizations are still hampered by the inconsistent performance of their systems. While today's systems are capable of processing most transactions within a matter of milliseconds, there is still a percentage of transactions that take an order of magnitude longer to complete, because systems temporarily slow down from extraneous or internal housekeeping operations that tax system resources. For a growing number of organizations today, such unpredictability is disruptive, costly and unacceptable. A trading desk at a brokerage firm cannot ensure the integrity of its transactions if some are slowed by a systems bottleneck. Financial services organizations are under pressure to ensure that both front-office and middle-office transactions not only execute at blazing speed, but are also consistently fast across the board – or else face the scrutiny of regulatory agencies. That is why real-time processing with determinism – the ability to deliver predictable, consistent results – is becoming a necessity. Now, thanks to new developments in the market, real-time processing capabilities are available through standardized software solutions that don't require massive investments in skills or additional hardware. WebSphere Real Time offers a fast and deterministic version of the world's best-known development language. Real-Time Java-based applications can be deployed with minimal impact on current configurations – with no need to re-learn a special-purpose language. WebSphere Real Time now makes it simple and affordable for organizations of all types to build out real-time operations with predictable performance, and enables organizations to redirect critical resources to core business requirements rather than expending time and money supporting custom, low-latency systems.
Programmers no longer have to rely on languages such as C, C++ or Ada 95 for real-time programming. This Java platform now represents a viable alternative environment for highly deterministic, distributed, real-time applications in critical systems, ranging from command and control, weapons, industrial automation and financial systems to telecommunication infrastructures. Financial services firms have been buying the latest and greatest technology for years, in an effort to manage an explosion of data, support complex transactions, meet stringent regulatory requirements, and compete in fast-changing markets. Functions such as volume-weighted average pricing (VWAP), derivatives pricing, and pre- and post-trade short-running analytic programs can benefit from real-time processing. In this industry, the ability to analyze and leverage the latest and freshest data means competitive advantage. To meet this challenge, firms are embracing high-performance trading and analytics systems. Enterprises leveraging real-time capabilities can respond more quickly than their competition to new information and changing market conditions. Running their time-sensitive, mission-critical applications on real-time Linux, Java, and WebSphere middleware not only reduces process dispatch latencies, but also gives enterprises the time advantage they need to reduce the risk of financial losses and retain leadership in their markets. While financial services organizations are seeing the initial benefits of real-time Linux and Java, there are numerous advantages for other industries, from government to healthcare to manufacturing. Real-time processing will bring predictability to applications such as real-time product simulations, language translation, and audio/video streaming, among others. For many industries, the time is NOW to move.
|
The last two decades, I would say, have seen a revolution in the way people think about computer programming. This revolution emanated from the development and adoption of moderately to extremely rich programming languages. Richness spans from something as simple as platform independence to something as complex as dynamic typing. Irrespective of the amount of wealth hidden in them, these programming languages have often mandated a robust runtime environment to make effective use of that wealth. Undoubtedly, Java has led this bandwagon, and the Java Runtime Environment (JRE) has been constantly maturing to support this phenomenal programming language.
Java gets most of its applause for the fact that it is platform-independent. Java code, when compiled, is transformed into something called bytecode. Bytecode is a sequence of software instructions (the Java Virtual Machine has its own well-defined and universal instruction set, just as a hardware platform does). The Java methods that you write are visible to the JRE as bytecode. It is this universally accepted bytecode that imparts platform independence to Java. The JRE has a stack-based interpreter that processes one bytecode instruction at a time. Bytecode instructions can be as simple as an add and as complex as a tableswitch, an instruction that represents a switch statement. While executing a method, the interpreter loops through its bytecode, triggering computations. The interpreter is an abstract execution engine that wraps around the physical machine: hardware bundled with the operating system.
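To make this concrete, here is a trivial method together with (approximately) the stack-based instructions the disassembler javap -c reports for its bytecode:

```java
public class Bytecode {
    // Compiling this method with javac and disassembling with `javap -c`
    // yields a handful of stack-based instructions, roughly:
    //   iload_0   // push the first int argument
    //   iload_1   // push the second int argument
    //   iadd      // pop both, push their sum
    //   ireturn   // return the top of the stack
    static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        System.out.println(add(2, 3));
    }
}
```

The interpreter described above would execute exactly that instruction sequence, one instruction at a time, each invocation.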
The interpreter hence forms an additional layer of abstraction between the application and the underlying physical machine. Other than platform independence, this layer also allows the JRE to effectively exercise runtime control over the execution. I shall delve deeper into this in a future post. For now, it is important to realize that this layer introduces an operational delay and decreases the throughput of the application. This is a side effect of interpretation. The best approach to work around this side effect is dynamic compilation, or Just-In-Time (JIT) compilation.
Just-In-Time (dynamic) compilation transforms method bytecode into machine code. The compilation is called dynamic because it happens during application execution, unlike the classical static compilation of C/C++ programs. The unit of JIT compilation is a method, not an entire class! JIT compilation picks up methods at runtime and translates their bytecode into machine instructions. All subsequent invocations of such a method result not in bytecode interpretation but in machine-level execution (just like your statically compiled C/C++ program). The additional interpreter layer is hence peeled off! This leads to a large improvement in method execution times and increases application throughput. But there is a certain cost involved in JIT compilation.
JIT compilation happens in parallel with application execution. So, is it an overhead to the application? Not exactly. JIT compilation is an investment, the return on which is execution speed, gained by peeling off the interpreter layer. The best candidates for JIT compilation are methods which have been extensively interpreted: they have reached a stage where they deserve to be promoted to machine code. A small investment in the form of compilation will lead to a huge return in the form of increased execution speed. But there is a risk involved here. What if a JIT-compiled method is never used again? Will our investment not go in vain? Yes, it will. So the JIT compiler should make every attempt to maximize the returns on this investment. This is possible simply by compiling the right methods, at the right time, in the right way! This is precisely what the IBM J9 Just-In-Time compiler achieves. I plan to discuss how it does so in subsequent posts.
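As a small illustration (a sketch, not tied to any particular JVM's thresholds), the method below is invoked often enough to become an obvious candidate for promotion to machine code; running it with a verbose JIT option, where the runtime provides one, lets you watch the compiler pick it up:

```java
public class HotMethod {
    // A small method that becomes "hot" after many invocations and is
    // therefore a natural candidate for JIT compilation.
    static long square(long x) {
        return x * x;
    }

    public static void main(String[] args) {
        long sum = 0;
        // Invoke the method far past typical invocation-count thresholds;
        // run with a verbose JIT flag (for example -Xjit:verbose on IBM J9)
        // to see the compilation being logged.
        for (long i = 0; i < 1_000_000; i++) {
            sum += square(i);
        }
        System.out.println(sum);
    }
}
```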
|
The amount of memory available to the Java heap and native heap of a Java process is limited by the operating system and hardware. A 32-bit Java process has a 4 GB process address space, shared by the Java heap, the native heap and the operating system. 32-bit Java processes therefore have a maximum possible heap size which varies according to the operating system and platform used. On AIX the maximum Java heap size possible is 3.25 GB (though the advised maximum is 2.5 GB, allowing sufficient space for the native heap), whereas on Windows the maximum available is 1.8 GB [more on Java heap sizing in the next post]. 64-bit processes do not have this limit, and addressability runs into terabytes. It is common for many enterprise applications to have large Java heaps (we have seen applications with Java heap requirements of over 100 GB), and 64-bit Java allows massive Java heaps (benchmarks have been released with heaps up to 200 GB). However, the ability to use more memory is not "free". 64-bit applications also require more memory, as Java object references and internal pointers are larger: the same Java application running on a 64-bit Java runtime may have 70% more footprint compared to running on a 32-bit runtime. 64-bit applications can also perform slower, as more data is manipulated and cache performance is reduced (as the data is larger, the processor cache is less effective); they can be up to 20% slower. A 64-bit JVM is only recommended if a Java heap much greater than 2 GB is required, or if the application uses computationally intensive algorithms for statistics, encryption and so on that need high-precision support. (The IBM Just-In-Time compiler takes advantage of 64-bit capabilities: it generates machine code that uses 64-bit instruction extensions and high-performance computational features, and it leverages the extra registers to reduce register spills and memory loads and stores.) There have also been major improvements in 64-bit Java performance with the compressed pointers technology.
(More about this in another post.) 32-bit versus 64-bit runtimes bring another interesting consideration: scaling. When considering application scaling there are two choices: monolithic scaling with a small number of 64-bit JVMs (scaling up), or horizontal scaling with many clustered 32-bit JVMs (scaling out). The advantage of monolithic scaling is that more data / larger datasets can be cached, with less administration and management overhead; the flip side is the reduced performance that comes with increased size. Horizontal scaling provides process isolation (and resilience) with better performance, though there is an administration overhead.
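The heap ceiling in effect for a given process can be observed from inside the JVM itself. A minimal sketch, just reporting the configured maximum:

```java
public class HeapLimits {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // Upper bound the JVM will grow the Java heap to (set with -Xmx).
        long maxHeap = rt.maxMemory();
        // On a 32-bit JVM this is capped well below 4 GB by the process
        // address space; a 64-bit JVM can report far larger values.
        System.out.println("Max heap: " + (maxHeap >> 20) + " MB");
    }
}
```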
|
Let me start with something I said in my previous post: "The Just-In-Time compilation activity is an investment. To maximize the returns on this investment (RoI), the JIT needs to compile the right methods, at the right time, in the right manner". We can deduce a lot from this statement. * *
Why is JIT compilation an "investment"? Just-In-Time compilation is NOT the compilation of Java code to bytecode, which is achieved by the javac compiler. JIT compilation is the conversion of bytecode to machine instructions. The phrase Just-In-Time signifies the need-based and dynamic nature of this compilation: it happens during application execution, and extensively utilized methods are its candidates. The following expenses are incurred by the JIT compilation of a method:
1. CPU time, which could otherwise be utilized by the application - the JIT is a major CPU contender within the JRE, next to the application threads and the divine Garbage Collector!
2. A permanent chunk of memory to store the machine code - the compiled machine code resides on the native heap, not in the data segment as it would for statically compiled code. This increases the memory footprint of the JRE.
3. A temporary chunk of memory to be used as work-memory during compilation.
While the third expense is usually insignificant, the first and second can be of significant magnitude, considering the large number of methods that can get JIT compiled in business applications. These expenses would appear to hurt application execution. But the prime objective of the JIT compilation philosophy is to impart performance boosts on extensively used (hot) regions of the application code. It is hence apt to treat JIT compilation as an investment, with the performance boost being the return. * * What is precisely meant by "maximizing the RoI"? With the current understanding, one may be tempted to assert that JIT compiling the entire application would, theoretically, yield the best performance. A more curious user of the JRE may know her application code very well and point out potentially hot regions in the code, which she may want JIT compiled unconditionally! But trust me, this is against the philosophy of dynamic compilation. The IBM J9 JIT compiler is a testimony to the wonderful concept of dynamic compilation. It follows a simple rule: "Limit your investment based on the current hotness (significance) of a method". The amount of CPU and memory spent on JIT compilation should, ideally, be a function of the hotness of the method. This means we make a genuine assumption that a method which has attained a particular degree of hotness may turn hotter in the near future, and we may now start investing, or make a new investment (discussed soon), in its JIT compilation. This controlled form of investing CPU and memory in JIT compilation ensures an effective use of its end product - the machine code - which is precisely what "maximizing the RoI" means. On the other hand, aggressive and unconditional JIT compilation can waste valuable CPU and memory and is never recommended. Just think what would happen if you JIT compiled a hundred thousand methods and your application never used them again!
This risk always exists, but it tends to zero with the approach followed in the IBM J9 JRE. * * How do we select the "right methods"? In this context, the right methods are the hot spots in the application code: methods which are extensively utilized and have a high relative significance. These methods are the candidates for JIT compilation. The J9 JIT uses two approaches to identify these hot spots:
1. Invocation counters: used only in the early stages of tracking, and definitely the more expensive approach. Each invocation of a method increments a counter; when a particular threshold is hit, the method deserves to be JIT compiled.
2. Sampling: a cheaper approach where application threads are periodically sampled. It tracks the increase in the relative significance of a method and selects methods for recompilation (new investments!).
* * What do we actually mean by the "right manner"? This question brings us to one of the hallmarks of the J9 JIT compiler. The JIT compiler performs a lot of code optimization on the method under compilation. A wide range of optimization algorithms work on the method, including classical compiler optimizations, optimizations specific to object-oriented languages and to Java, and platform-specific optimizations. Optimizations can involve anywhere from simple to massive computation, and they are the real expense points in the compilation process. It is hence sensible to group the optimizations into categories such as, for example, low-cost, moderate-cost, high-cost and very expensive. We then define various optimization levels and attribute to each level a set of optimizations: lower levels get the low- to moderate-cost optimizations, and the higher levels may include the very expensive ones. I stated earlier, "Limit your investment based on the current hotness (significance) of a method". This limiting is achieved through the optimization levels.
When a method first becomes eligible for JIT compilation, through the invocation-count technique, we choose to compile it at a low optimization level. As the hotness (significance) of the method keeps increasing, we keep investing more by recompiling the method at higher levels of optimization. This is what I meant by "compiling in the right manner". This adaptive compilation strategy is responsible for "maximizing the RoI" through incremental investment of CPU/memory, ensuring an effective utilization of the end product (machine code) at each increment before moving to the next. Of course, the number of optimization levels is small. * * I plan to revisit optimization levels in the next post.
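The counter-based promotion described above can be caricatured in a few lines of Java. This is a toy model only, with an arbitrary threshold chosen for illustration; a real JIT keeps these counters inside the VM, not in application code:

```java
import java.util.HashMap;
import java.util.Map;

public class CounterModel {
    // Toy model: promote a method one "optimization level" each time its
    // invocation counter crosses a multiple of the threshold.
    static final int THRESHOLD = 1000;
    static final Map<String, Integer> counts = new HashMap<>();
    static final Map<String, Integer> optLevel = new HashMap<>();

    // Record one invocation of `method`.
    static void invoked(String method) {
        int c = counts.merge(method, 1, Integer::sum);
        if (c % THRESHOLD == 0) {
            optLevel.merge(method, 1, Integer::sum);   // "recompile" hotter
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 2500; i++) invoked("hot()");
        invoked("cold()");
        System.out.println(optLevel.getOrDefault("hot()", 0));   // promoted twice
        System.out.println(optLevel.getOrDefault("cold()", 0));  // never promoted
    }
}
```

The point of the toy is the shape of the policy, not the numbers: investment (higher optimization levels) follows demonstrated hotness, so a method invoked once never costs more than its interpretation.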
|
Why do we want to solve problems quickly?
- It means a lower Total Cost of Ownership (TCO) for us and our customers.
- Problem determination and resolution has become a daunting task, as more of today's solutions involve complex collections of products and applications deployed in heterogeneous environments.
- Developing and deploying new solutions gets delayed by maintenance of diverse existing systems.
- 25-50% of time is spent in problem determination and resolution.
- The skills needed to do manual cross-product problem determination are scarce and expensive.
In this age of complex, integrated systems and short deployment cycles, the ability to respond to business demands in a timely manner is becoming critical. Proactive problem determination, quick and easy access to relevant information, and reduced turnaround times on interactions with support organizations are key: client "self assist" is the buzzword. Java developers frequently encounter runtime problems during the development, migration and post-production stages, and spend a significant amount of time diagnosing and resolving those issues. Quick turnaround for problem determination is a key focus area.
In recent times a wide array of new tooling has emerged from the IBM Java Technology Center and WebSphere Serviceability teams that enables developers to debug Java Runtime issues in a convenient way. A key focus area for IBM is to make the user experience with these tools a convenient one: tools should be easy to obtain, available from a single source and easy to update. The IBM Support Assistant (www.ibm.com/software/support/isa) is a free local workbench that includes rich features and serviceability tools for quick resolution of problems. All the Java tools delivered through the IBM Support Assistant provide these capabilities:
- "Visualization" Provide different graphical views for a diagnostic data input (a view of increased java heap usage over time for example)
- "Analysis" Analysis reports based on the data analyzed (analysis of the increased heap usage over time indicating a memory leak for example)
- "Recommendations" Recommendations and suggestions to resolve the observed problem (Analysis of heapdumps to identify the cause of memory leak for example)
Some cool Java Runtime tools to check out (all available in the IBM Support Assistant):
- Garbage Collection and Memory Visualizer
The Garbage Collection and Memory Visualizer is available as a plug-in to IBM Support Assistant (ISA). It analyzes verbose GC output to provide plots, summaries and recommendations. The tool profiles heap usage, heap sizes, pause times and many other properties. The flexibility of comparing multiple logs in the same plots, and many views on the data (reports, graphs, tables), are available. This is a powerful tool to debug performance bottlenecks in a Java application due to garbage collection.
- Memory Analysis Tool (MAT)
Memory leaks in Java are a consequence of non-obvious programming errors, and debugging memory leaks is not an exact science. The Eclipse Memory Analyzer is a fast and feature-rich Java heap analyzer that helps you find memory leaks and reduce memory consumption. The Memory Analyzer was developed to analyze productive heap dumps with hundreds of millions of objects. Once the heap dump is parsed, you can re-open it instantly, immediately get the retained size of single objects, and quickly approximate the retained size of a set of objects.
- The IBM® Monitoring and Diagnostic Tools for Java™ – Health Center™
Health Center is a lightweight monitoring tool (with a performance overhead of not more than 2-3%) which allows running Java™ Virtual Machines to be observed and health-checked. The Health Center gives insight into general system health, and into application and garbage collection activity. Developers, performance engineers and practitioners can use the Health Center tool to quickly identify performance bottlenecks, which is especially helpful in an agile development environment. It is designed to attach to a running Java process to explore what it is doing, how it is behaving, and what you could do to make it happier. More information on these tools can be found at: http://www.ibm.com/developerworks/java/jdk/tools/index.html
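To try the Garbage Collection and Memory Visualizer you need a verbose GC log to feed it. A small sketch that generates some GC activity (the flag names mentioned in the comments are the commonly used ones; check your runtime's documentation):

```java
import java.util.ArrayList;
import java.util.List;

public class GcActivity {
    public static void main(String[] args) {
        // Churn the heap so the collector has work to do. Run with
        // -verbose:gc (or a log option such as -Xverbosegclog:gc.log on
        // IBM runtimes) and feed the resulting log to the Garbage
        // Collection and Memory Visualizer.
        List<byte[]> survivors = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) {
            byte[] block = new byte[10_240];          // short-lived garbage
            if (i % 100 == 0) survivors.add(block);   // a few long-lived objects
        }
        System.out.println("Retained blocks: " + survivors.size());
    }
}
```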
*Java is a registered trademark of Oracle and/or its affiliates.
|
Here is what I think of Java garbage collection:
In Java programs, the use of pointers is forbidden by virtue of a design strategy or a security policy. Without pointers, functions cannot access objects across stack frames, among many other limitations. The inability to pass objects to and from functions would limit the scope of a programming language at large. To remedy this, in Java, user-defined objects are inherently passed by address (termed a reference), in contrast to C and C++ where passing arguments by their addresses is a volitional choice.
Conventionally, when arguments are passed by value, what the callee receives is an isolated copy of the passed object. In C, when passed by address, the callee can manipulate the caller's arguments. In C++ the same applies, along with call by reference. User objects are normally created on the stack. In the case of producer functions, where the function generates and returns an object, the allocation has to be made on the heap (locally created objects cannot be returned from a function, as that causes a dangling reference). Such cases are not that frequent, so one can manually free the objects that were 'newed'. The two modes of creating user objects are:
Class obj; => object and handle created on the stack.
Class *obj = new Class(); => object on the heap, reference on the stack.
In Java, without pointers, the language semantics do not allow the above flexibility, and we have only one way to create objects – either everything on the stack or everything on the heap, not both. Creating all objects on the stack is a bad choice, since objects whose life span exceeds the defining method would be destroyed when the frame is popped off on the function's return, essentially forbidding methods from returning generated objects and making Java an incomplete language. As a workaround, all objects are created on the heap. Now, as a matter of fact, it is difficult for a programmer to delete all the objects he 'new'ed – which are quite many, rather most of them.
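The producer-function case above can be sketched directly in Java (class and method names here are illustrative, not from the post): because every object lives on the heap, a method can safely return an object it created – the stack frame pops, but the object survives.

```java
// Point is a hypothetical value class used only for illustration.
class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
}

public class Producer {
    // A "producer" method: the Point is allocated on the heap, and only the
    // reference lives in this frame. Returning it copies the reference out;
    // the object outlives the popped frame, so no dangling reference arises.
    static Point origin() {
        return new Point(0, 0);
    }

    public static void main(String[] args) {
        Point p = origin();
        System.out.println(p.x + "," + p.y); // prints 0,0
    }
}
```

In C++, by contrast, returning the address of a local stack object would be exactly the dangling-reference bug the paragraph describes.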
Hence the garbage and hence the collector.
In a non-Java programming paradigm, it is like allocating memory at arbitrary heap locations and later scanning the entire virtual memory to clean up the filth.
Garbage collection is not a Java feature. It is a compromise – a consequence of refraining from pointers, a skillful attempt to mend a defect, an unchecked Sun heredity, and an unbridled software hypothesis which we have carried and dragged all the way along.
|
Here is what I think of Java parameter passing conventions.
At the programmer's level, Java is said to pass objects by reference and primitives by value. This means that for objects, what the callee receives is the heap address of the object; the object references themselves are actually passed by value. This also means Java saves space and effort by not copying the entire object onto the subroutine linkage channel (for example, stack memory).
By definition, pass by reference means 'a parameter passing convention where the lvalue of the actual parameter (argument) is assigned to the lvalue of the formal parameter.'
When passed by reference, the callee method can manipulate the original object’s attributes, can invoke the methods of the object, and can re-new, re-assign and purge the components of a composite object thus passed. These operations affect the original reference of the caller, because we have only one object on the heap, which is pointed to by both of these references.
For destroying an object, the C++ way is to 'delete' the object, and the C way is to 'free' the pointer. If passed by reference or address, both these languages have the flexibility of cleaning the object or a structure from anywhere in the caller-callee chain. The invalidation of an object indirectly invalidates other references or pointers cached elsewhere in the stack locations, and trying to reuse those references or pointers results in a crash.
This is different in Java. Since there is no explicit freeing of objects, we rely on null assignment to the reference, which is the only way to initiate an object cleanup. Even after the callee nullifies its reference, the object lives on through the caller's reference. This means that an object cannot be freed (or marked for freeing) through an assignee reference while a peer reference is alive, and vice versa.
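A minimal sketch of this behavior (class and method names are mine, not from the post): the callee can mutate the shared heap object, but nullifying its own copy of the reference has no effect on the caller.

```java
public class ReferenceDemo {
    static void purge(StringBuilder sb) {
        sb.append(" touched"); // mutates the one shared heap object: caller sees this
        sb = null;             // only rebinds the callee's local copy: caller unaffected
    }

    public static void main(String[] args) {
        StringBuilder s = new StringBuilder("original");
        purge(s);
        System.out.println(s); // prints: original touched
        // 's' is still a live reference, so the object cannot be collected here
    }
}
```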
This may be a conscious design to eliminate bad references and make sure that all object references are either null or a valid object's address. This is because, under garbage collection, the memory of unreferenced objects is not really freed back to the system; rather it is kept in an internal free pool and is still mapped into the process, and if accessed through stale references, such a bad dangling pointer would actually cause more damage than a crash.
But then how does one clean up an unwanted Java object? Set the object reference to null and wait for a GC to occur? That might not work, because if there is a second reference elsewhere in the stacks and registers, consciously or unknowingly, the object is not collected. Consequently, many of the objects the programmer has explicitly discarded will lie remnant in the heap until the last reference to the object also goes out of scope. This may be sooner or later – or never.
Many memory leaks, including the infamous classloader leaks, can be attributed to this 'hidden and under-documented' behavior of Java. And this is the very reason we see more OutOfMemoryErrors than NullPointerExceptions.
|
Here is what I think about the virtual methods in java.
In Java, by design and specification, all non-static, non-private, non-final, non-constructor methods are virtual. This means the selection of the method to be invoked at a call site depends on the actual (runtime) type of the invoker object (receiver), rather than its declared (static, compile-time) type.
In the case of C++, this is true only when the invoker object is declared as a pointer type and the method is declared explicitly as 'virtual'. If either of these is false, then the method is always resolved (identified and selected) to the definition in the defining class of the declared type of the invoker.
In contrast, in Java, since there are no pointers, there is no flexibility for methods to exhibit virtual and non-virtual behavior based on the declaration mode – there is only one way to cite objects, that is, through references. Moreover, in JRE implementations, a Java object loses its connection with the declaring type and gets associated with the defining class. At this point it is imperative that the virtual keyword be removed and all normal methods be designated virtual.
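The dispatch rule can be seen in a few lines (Base/Derived are illustrative names): any non-static, non-private, non-final Java method dispatches on the runtime type of the receiver, with no 'virtual' keyword anywhere.

```java
class Base {
    String name() { return "Base"; }
}

class Derived extends Base {
    @Override
    String name() { return "Derived"; }
}

public class VirtualDemo {
    public static void main(String[] args) {
        Base b = new Derived();       // declared type Base, runtime type Derived
        System.out.println(b.name()); // prints: Derived
    }
}
```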
But how often does a program really require the virtual property? Very rarely. What percentage of virtual methods exercise this feature in a meaningful manner? Less than 5%. Even in those cases where multiple subclasses are designed and methods redefined, an efficient programmer will go for an interface (or abstract class) for the base class, which means the base method is pure virtual (abstract), not virtual.
This precisely means that a normal, concrete Java method (designed to be virtual) actually utilizing its virtual-ness is the rarest of possibilities.
Implementing virtual methods is easy in JREs, but their presence makes the execution engine incapable of pre-linking the method call site, potentially slowing down performance. In practice, method resolution has to wait until execution reaches the call site. Dynamic compilers devirtualize methods to an extent, by tracing the source of the invoker object in the neighborhood of the call site, but this does not really alleviate the problem, and adds its own additional computation overhead. One of the potential challenges for a JIT today is the inability to perform inter-procedural analysis and compress the code any further, owing to the extremely delayed method resolutions. A powerful technique called ahead-of-time compilation is rendered ineffective because of the inability to resolve methods in advance.
The decision to qualify all methods as virtual was not a well-thought-out design, but an unanticipated side effect – an accidental by-product or unexpected misfire of the pointer-less design.
|
You can create your own exceptions in Java. Keep the following points in mind when writing your own exception classes:
- All exceptions must be a child of Throwable.
- If you want to write a checked exception that is automatically enforced by the Handle or Declare Rule, you need to extend the Exception class.
- If you want to write a runtime exception, you need to extend the RuntimeException class.
We can define our own Exception class as below:
class MyException extends Exception{ }
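A small sketch of the checked/unchecked distinction above (class names are illustrative): the compiler forces callers of a checked exception to handle or declare it, while a runtime exception carries no such obligation.

```java
class MyCheckedException extends Exception {
    MyCheckedException(String msg) { super(msg); }
}

class MyRuntimeException extends RuntimeException {
    MyRuntimeException(String msg) { super(msg); }
}

public class ExceptionKinds {
    // Checked: the throws clause is mandatory (Handle or Declare Rule).
    static void checkedThrower() throws MyCheckedException {
        throw new MyCheckedException("checked");
    }

    // Unchecked: no throws clause required by the compiler.
    static void runtimeThrower() {
        throw new MyRuntimeException("unchecked");
    }

    public static void main(String[] args) {
        try {
            checkedThrower();
        } catch (MyCheckedException e) {
            System.out.println("caught " + e.getMessage());
        }
        try {
            runtimeThrower();
        } catch (MyRuntimeException e) {
            System.out.println("caught " + e.getMessage());
        }
    }
}
```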
|
You just need to extend the Exception class to create your own exception class; these are considered checked exceptions. The following InsufficientFundsException class is a user-defined exception that extends the Exception class, making it a checked exception. An exception class is like any other class, containing useful fields and methods.
Example:
// File Name InsufficientFundsException.java
import java.io.*;

public class InsufficientFundsException extends Exception {
    private double amount;

    public InsufficientFundsException(double amount) {
        this.amount = amount;
    }

    public double getAmount() {
        return amount;
    }
}
|
To demonstrate the use of our user-defined exception, the following CheckingAccount class contains a withdraw() method that throws an InsufficientFundsException.
// File Name CheckingAccount.java
import java.io.*;

public class CheckingAccount {
    private double balance;
    private int number;

    public CheckingAccount(int number) {
        this.number = number;
    }

    public void deposit(double amount) {
        balance += amount;
    }

    public void withdraw(double amount) throws InsufficientFundsException {
        if (amount <= balance) {
            balance -= amount;
        } else {
            double needs = amount - balance;
            throw new InsufficientFundsException(needs);
        }
    }

    public double getBalance() {
        return balance;
    }

    public int getNumber() {
        return number;
    }
}
|
The following BankDemo program demonstrates invoking the deposit() and withdraw() methods of CheckingAccount.
// File Name BankDemo.java
public class BankDemo {
    public static void main(String[] args) {
        CheckingAccount c = new CheckingAccount(101);
        System.out.println("Depositing $500...");
        c.deposit(500.00);
        try {
            System.out.println("\nWithdrawing $100...");
            c.withdraw(100.00);
            System.out.println("\nWithdrawing $600...");
            c.withdraw(600.00);
        } catch (InsufficientFundsException e) {
            System.out.println("Sorry, but you are short $" + e.getAmount());
            e.printStackTrace();
        }
    }
}
|
Compile all three files above and run BankDemo; this produces the following result:
Depositing $500...

Withdrawing $100...

Withdrawing $600...
Sorry, but you are short $200.0
InsufficientFundsException
        at CheckingAccount.withdraw(CheckingAccount.java:25)
        at BankDemo.main(BankDemo.java:13)
To follow me and know more about programming stuff,
Facebook : http://www.facebook.com/code2learn
Website : http://www.code2learn.com
|
Since it's Java technology week, I'll start with a post on learning Java. Karel helps not only in learning and understanding Java concepts, but also in building logic through the puzzles it offers: the more puzzles you solve using this robot, the sharper your logic and problem-solving ability becomes. Karel the Robot is a robot simulator that affords a gentle introduction to computer programming. Users write Karel programs and feed them to the simulator to watch them execute. By solving Karel problems you will build your logic and enjoy programming.
Karel is a robot. It has its own world, known as the grid. The grid has avenues (columns) and streets (rows). Karel is placed at 1st street and 1st avenue, i.e. (1,1) if denoted in Cartesian coordinates. This is Karel's world.
Karel has only four main functions:
1. move() – makes Karel move one step ahead.
2. putBeeper() – puts down a beeper.
3. pickBeeper() – picks up a beeper.
4. turnLeft() – makes Karel turn left in place.
It is similar to Java, so it is helpful for those who are learning Java. It also supports object-oriented programming, i.e. you don't need to define a function every time you want to use it; you can write the function once inside a class and then call it by creating an object of that class.
To download Karel, use the download link. To install it, download an assignment from the assignments link.
After downloading both the assignment and the Eclipse IDE, unrar them, open Eclipse.exe, and then click on the Import button. Use the browser to find the Assignment-1 folder. When you do so, Eclipse will load the starter project and display its name in the Package Explorer window. Then click on the small triangle and start solving the problems.
Now lets do a simple Karel program :
Q.we will solve a problem from assignment-1 ie CollectNewspaper(the documentation for the problem is available in, handouts-assignment 1)
clip of problem (we have to get the beeper):
Code is:
import stanford.karel.*;

public class CollectNewspaperKarel extends Karel {

    public void run() {
        move();
        turnRight();
        move();
        turnLeft();
        move();
        pickBeeper();
    }

    public void turnRight() {
        turnLeft();
        turnLeft();
        turnLeft();
    }
}
Karel's running interface is shown in the simulator window.
To learn Karel, download the book: www.stanford.edu/class/cs106a/book/karel-the-robot-learns-java.pdf
To follow me and know more about programming stuff,
Facebook : http://www.facebook.com/code2learn
Website : http://www.code2learn.com
|
Over the next few days I will post on classloading, classloaders, and debugging (the more you do, the more you'll learn).

The JVM spec defines LOADING like this (take a deep breath!): "Loading refers to the process of finding the binary form of a class or interface type with a particular name, perhaps by computing it on the fly, but more typically by retrieving a binary representation previously computed from source code by a compiler and constructing, from that binary form, a Class object to represent the class or interface. The binary format of a class or interface is normally the class file format."

If almost everything in Java is an object, then there should be a class where this object came from. A simpler chicken-and-egg problem, you say :)

One of the most oft-talked-about aspects of Java programming is classloading (I will use the terms 'classloading' and 'loading' interchangeably – both describe the same act). If you have a huge application, with classloading you probably have one less thing to worry about.

Classloading in Java is performed by an entity called the classloader. A classloader takes in the raw bytes from a .class file – whether from the standard Java APIs provided with the SDK or from any other custom classes required by your class file – and makes the bytecodes available so that your .class file can eventually run.

Classloading is a blanket term used to describe the process of reading a .class file and executing it. Behind the scenes, the classloader does much more than just "loading". We have three main stages: LOADING -> LINKING -> INITIALIZING.

LOADING: The very first step – read the .class bytes into the JVM process memory.

LINKING: All the "under the hood" stuff a JVM does for the class to mean something to it.
This stage involves three mini-steps:
i) verifying the bytecodes of the class for a "lot" of things – VERIFICATION
ii) building up JVM-internal data structures to store the class entities in a way the JVM understands – PREPARATION (method table strikes a chord?)
iii) the previous step built up the data structures with relative offsets and symbolic references; in this step we "resolve" them – RESOLUTION

INITIALIZING: The revered <clinit> for the class is run, and the static fields of the class are finally set to user-defined values (if any). Dear programmer... we have take-off! Go ahead and create an instance of this class.

That is the JVM's perspective. What about the one who initiates all this – the classloader? Some background on the "much loved" classloader (let me abbreviate it as CL).

All CLs, except the primordial/bootstrap CL, must have a parent – which makes them abide by the "Parent Delegation Model". What does the delegation model look like?

Bootstrap
|
Extension
|
Application
|
User-defined CL1, User-defined CL2

Bootstrap CL looks for classes in the bootclasspath (sdk/jre/lib/rt.jar, vm.jar etc.): -Xbootclasspath:...
Extension CL looks for classes in the jre/lib/ext directory of the SDK.
Application CL looks for classes on $CLASSPATH (on *nix machines) / %CLASSPATH% (on Windows), or in directories/classes pointed to by the -classpath option.
User-defined CLs look for classes on, well, the user-defined class path(s) (think of the URLClassLoader...).

What is the need for a delegation model?
1) With parent delegation, you "always" ask your parent to try and load the class.
2) The higher you go, the more the classloader is trusted by the JVM (there is a *caveat, let me dwell on that later).
3) The lower you come, the more "places" your classloader is allowed/designed to scout for classes to load.
4) Security. Period. (Probably the crux of the above points.)

With this kind of delegation, you make one thing sure – you are loading the right class.
Classic example: say you have your own implementation of the java.lang.String class. Place it in the classpath and run your app, say HelloWorld, with a custom classloader MyClassloader to load the String class.

Without parent delegation: you could probably force the loading of your own implementation rather than the more trusted one that comes with the SDK, in which case the chances of goofing up and crashing the JVM rise to 99-100% (I just made this up, but you get the point, right?).

With parent delegation: sit back and enjoy while the JVM goes on with its usual business of asking bootstrap to load it from the RIGHT jar.

*Caveat: You could append/prepend custom JARs to the bootclasspath – thereby circumventing the "security" part of the CL and forcing the JVM to somehow load your own version of the standard Java APIs. We do not recommend this at all – I myself use it only for debug purposes. Reason? Remember the 99-100%... I do not want to fall into as obvious a pit as that. :)

Do not lose heart. If none of the three JVM-provided CLs does the job for you, you could (and sometimes should) write your own classloader. There are tons of material on the web on how and why you should write a custom classloader. The gist of the "why" reasons:

1) With a custom classloader you can have many paths from which to load a class – even from the web (applets).
2) In an enterprise app, you SHOULD have your own CL. This way, you need not put all the jars on the classpath, plus you have the option of UNLOADING classes when they are not needed. (More on this later.)
3) Create namespaces – have multiple copies of the same class across different classloaders – because the JVM recognizes your class by the class name + classloader name.

I will post more on classloaders, classloading, EXCEPTIONS (who hasn't seen a NoClassDefFoundError / ClassNotFoundException in his programming life?) and much more in the coming days.

Happy Java week..
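The delegation chain described above can be observed from any application class; this sketch (class name is mine) walks from the loader of the current class up toward the bootstrap loader, which the Java API represents as null.

```java
public class LoaderChain {
    public static void main(String[] args) {
        // Start at the classloader that loaded this application class
        ClassLoader cl = LoaderChain.class.getClassLoader();
        while (cl != null) {
            System.out.println(cl.getClass().getName());
            cl = cl.getParent(); // walk up the parent delegation chain
        }
        // The bootstrap CL has no Java-level object, so the chain ends in null
        System.out.println("null (bootstrap)");
    }
}
```

The exact loader names vary by JDK release (e.g. AppClassLoader and ExtClassLoader on Java 8, a platform loader on later JDKs).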
|
Locking is one of the key aspects of Java application programming and helps protect the integrity of shared data in a multi-threaded application. Locking, however, can also cause problems if improperly used. Some of the common problems associated with it:
- An incorrect sequence of acquiring multiple locks often results in deadlocks.
- Excessive locking can cause high-CPU issues.
- Improper/frequent lock acquires and prolonged lock holds can cause performance issues.
Let’s try to understand the basic terms pertaining to locking in the IBM J9 Java Virtual Machine, which is based on modified “Tasuki locks”. First, let's look at locking at the Java layer. The Java programming language provides two basic synchronization idioms: synchronized methods and synchronized statements. A method can be declared as synchronized, in which case the particular Java object on which the method is called will be locked. Similarly, synchronized() statements can be used to lock any Java object. Now let's take a look at synchronization at the native level. In JNI applications, synchronization is achieved through the MonitorEnter() and MonitorExit() functions on the object:
(*env)->MonitorEnter(env, obj);
... code ...
(*env)->MonitorExit(env, obj);
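The two Java-level idioms can be sketched as follows (class and counter names are mine): a synchronized method locks the receiver object, while a synchronized block locks whatever object is named as the monitor.

```java
public class CounterDemo {
    int methodCount;                        // guarded by 'this'
    int blockCount;                         // guarded by 'lock'
    private final Object lock = new Object();

    // Synchronized method: the monitor is the receiver object ('this')
    public synchronized void incrementViaMethod() { methodCount++; }

    // Synchronized statement: the monitor is an explicitly chosen object
    public void incrementViaBlock() {
        synchronized (lock) { blockCount++; }
    }

    public static void main(String[] args) throws InterruptedException {
        CounterDemo c = new CounterDemo();
        Runnable r = () -> {
            for (int i = 0; i < 10_000; i++) {
                c.incrementViaMethod();
                c.incrementViaBlock();
            }
        };
        Thread t1 = new Thread(r), t2 = new Thread(r);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Both counters end at 20000 because each update was mutually excluded
        System.out.println(c.methodCount + " " + c.blockCount);
    }
}
```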
The Java object lock operates in two modes. To avoid having a native-level monitor on every object, the IBM J9 JVM usually uses an n-bit field called the 'lock word' (n may vary from release to release) in the Java object to indicate that the item is locked. One bit of the lock word indicates the lock mode. Most of the time, a piece of code will transit the locked section without contention; this guardian flag is therefore enough to protect that piece of code. This is called a flat monitor/lock. Whenever a flat lock is acquired on an object, the lock word is updated with the thread id. When a lock is found to be busy (i.e., the n-bit lock word already holds some thread id information), the JVM directs the thread into a three-tier spin loop (spinning increases efficiency by avoiding expensive context switches). Depending on the platform and Java release, there could be a CPU pause or thread yield as well.
if (object.lockWord.flat) {
    for (int k = 0; k < spin3; k++) {
        for (int j = 0; j < spin2; j++) {
            if (tryAcquireLock_with_objectflagupdate())
                return;
            CPU_pause();
            for (int i = 0; i < spin1; i++) {
                ; /* do nothing */
            }
        }
        thread.yield();
    }
}
If the thread exhausts the three-tier spin, it is made to sleep until the lock is freed. The lock will now be considered a contended lock (for it was busy for a considerable period) and will be inflated once it is freed. As soon as the lock is freed, a native-level JVM system monitor structure is created and the object's lock word is updated with the address of the native structure, along with the indicator bit set to 1 (indicating the object is inflated). This is called an inflated monitor/lock. On a contended lock, multiple threads could be waiting. The n-bit lock word is not capable of holding all the required information about waiting threads, entering threads and so on, so the JVM creates a structure to hold this information during inflation and updates the object. The JVM system monitor structure holds the operating system mutex information, the owner thread id, the ids of the threads waiting on the lock, and other related information. The waiting thread is woken up and given access to the JVM system monitor (the owner thread id is updated to reflect that this monitor is busy). All subsequent lock acquires happen in inflated mode through a similar three-tier spin on the JVM system monitor. If a thread fails to acquire the JVM system monitor before spin exhaustion, it is directed to OS routines for further lock operations. Locks are also inflated automatically whenever wait/notify is performed, as the object's lock word is not large enough to hold waiting threads and related information. When there is no contention, the lock is deflated.
In subsequent blog posts, I will take you through the changes to the locking mechanism in R626 and how to identify and interpret locking-related issues.
|
The ORB, a.k.a. object request broker, is the Java implementation of the OMG's CORBA (Common Object Request Broker Architecture) specification, which enables remote objects to be used like local objects. It also lets objects communicate with each other independently of the platform and languages used to implement those objects.
The ORB is implemented as part of the Java virtual machine and plays an integral part in the application server runtime environment. In client/server communication, the ORB primarily provides the following functionality:
1) Provide a framework for clients to locate remote objects on the server and invoke requests on them.
2) Manage the connections.
3) Manage the request and response messages to/from the remote Java objects.
4) Marshal/demarshal the messages sent over the wire, based on the CDR (Common Data Representation).

Provide a framework for clients to locate remote objects and invoke requests: For a client to successfully locate the remote object on the server, the remote object needs to be "registered to a naming registry" and "exported". The "bind" call ensures that the remote object is registered to a naming registry, and "javax.rmi.PortableRemoteObject.exportObject()" ensures that the remote object is exported, i.e. it is ready to be invoked. Once the client does a look-up, "javax.rmi.PortableRemoteObject.narrow()" has to be called on the reference returned by the look-up. The object returned by "javax.rmi.PortableRemoteObject.narrow()" is used to invoke the remote object.

Now let's look at what happens internally in the ORB on a javax.rmi.PortableRemoteObject.exportObject():
1) The tie and stub classes associated with the remote implementation are loaded.
2) The tie-class-to-remote-implementation and tie-class-to-stub mappings are cached for faster future look-up.

The "javax.rmi.PortableRemoteObject.narrow()" is fairly simple: it just checks whether the object returned by the look-up can be cast to the remote interface, i.e. the object should extend javax.rmi.CORBA.Stub and implement the remote interface. The reference returned by javax.rmi.PortableRemoteObject.narrow() is used to invoke the remote function. The stub and tie play a key role in the remote communication.
The reference returned by javax.rmi.PortableRemoteObject.narrow() is actually a stub. As the stub implements the "remote interface", the implemented remote functions invoke the ORB to send the data to the server and receive the response from the server. At the server side, the data is received by the ORB. The ORB identifies the corresponding tie, and the tie invokes the remote function implementation. Once the reply is ready, the tie passes the response data to the ORB, and subsequently the data is sent to the stub at the client side. In the next blog, we will explore how the ORB manages the connections.
|
In today’s complex and integrated environments, the ability to quickly determine the “health” of an application assumes high importance. The IBM Health Center is a light-weight profiling tool that provides a comprehensive view of the “health” of various subsystems for any Java-based application.

Developers, performance engineers and practitioners can use the IBM Health Center tool to quickly identify performance bottlenecks, which is especially helpful in an agile development environment. It is designed to attach to a running Java process to explore what it is doing, how it is behaving, and what you could do to make it happier.

The IBM Health Center attempts to answer some of the following common questions that developers, performance engineers, service personnel and WebSphere administrators often ask:
- What is my Java application doing?
- Why is it doing that?
- Why is my application going so slowly?
- How can I make it go faster?
- Is my application scaling well?
- Is our algorithm sensible?
- Do we need to tune the Java Virtual Machine?
- Is my configuration sensible?
- Is the system stable?
- Have I got a memory leak?
- Is the application about to crash?
Classes, Environment, Garbage Collection, Locking and Method Profiling, Native Memory, and I/O are the areas of JVM activity that can be viewed, and recommendations received on, from the Health Center tool. The Health Center summary indicates whether there are potential problems with a particular subsystem (shown in red), tuning suggestions for optimizing the subsystem further (shown in amber), or a healthy subsystem with no problems (shown in green).

The IBM Health Center not only provides visualization into the various subsystems of the JVM from a performance and reliability perspective but also provides a set of recommendations that can be applied to alleviate the identified problems. The performance overhead of the tool is negligible, and it can be deployed on production systems.

The IBM Health Center is available via the IBM Support Assistant framework (www.ibm.com/software/support/isa/).
The latest version of Health Center (version 1.3) comes with some very useful new features. Options are now available in Health Center to generate dumps (heap dump, system dump and Java dump) at runtime for more detailed analysis. For example, if the Health Center reports increasing heap usage over time and indicates a possible memory leak, you can generate a heap dump for further offline analysis.

Health Center 1.3 also supports enabling native trace points and disabling subsystems selectively, so that data collection can be configured to monitor only the areas of interest.

An interesting YouTube video providing a quick introduction to the IBM Health Center: http://www.youtube.com/watch?v=5Tcktcl0qxs

The next couple of blog posts on this subject will cover the steps to install and launch the IBM Health Center and give a detailed description of the various subsystems it profiles and provides recommendations on.
|
Last time, I ended on the Parent Delegation model and the reasons why you should have a custom classloader for your application. Let me dwell on this a bit.

In the delegation model, all custom classloaders sit beneath the application classloader. I mentioned that when a classload is requested of a classloader, it should ask its parent to try and load it first. Well, the guys who wrote all this magic (a.k.a. classloading) thought: "how do we improve this process of 'parent delegation'?" The answer to that is the classloader cache.

Every classloader has its own cache, which holds the "defined class structure" of every successful classload it has performed. So let me amend the parent delegation definition a bit:

"When a classload is requested of a CL, the CL should first look for the class structure in its cache. If it finds one, use it. Else, delegate to the parent... (and they can live happily ever after, until a CNFE surfaces!)"

How does it help? Well, the very first time for any class, parent delegation is a must. If the parents fail to load the class, the request comes back down to the current CL itself. If it finds the class, it stores it in its cache. So the next time a classload is requested, and say you find it in the cache, you can use it – because you are assured that a parent delegation 'had' occurred sometime in the past for this class. All is well.

So it's handy to remember this when you write your own classloader. Lookup sequence: CACHE (this.findLoadedClass()) >>> PARENT (super.loadClass()) >>> DISK (this.findClass()).

Sample code:

protected Class loadClass(String name, boolean resolve) throws ClassNotFoundException {
    Class c = findLoadedClass(name); // check if the class is already loaded (cache)
    if (c == null) {
        try {
            c = super.loadClass(name); // ask the parent classloader
        } catch (ClassNotFoundException e) {
            // parent failed; fall through and try to load it ourselves
        }
        if (c == null) {
            c = findClass(name); // else load from disk
        }
    }
    return c;
}
So the classloader cache certainly pips up the speed of the classloading sequence a bit (not to be confused with the Shared Class Cache, which I hope to blog about in a few days' time). To lessen the tax on the system, the JVM also employs what is called lazy loading – meaning a classload is initiated only at the following times:
i) creation of the first object of that class
ii) creation of the first instance of any of the subclasses of this class
iii) when any of this class's static fields is initialized

This way you are assured of classloads of only the required classes. Saves space, saves time. (Its opposite number is eager classloading – the recursive loading of all classes referenced in the application – which I believe is used in real-time applications.)

A few things on class unloading: it comes into effect ONLY if there's a custom classloader in your application, because classes loaded by the other three classloaders are never unloaded. A class and its classloader are linked through JVM-internal structures, so only when a classloader goes out of scope for the JVM will that classloader's classes be unloaded. (For more on this, you should attend the webinars on "Understanding Java Memory Management" and "Debugging Classloader Memory Leaks in the WebSphere Application Server" as part of the Java Week.)

CNFE, NCDFE and the other usual suspects are coming soon... and also some of the debugging techniques we use to find "'em goons".

Happy Java Week!
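Lazy initialization can be seen directly in a few lines (class names are illustrative): a class's <clinit> runs only when the class is first actively used, not when the enclosing program starts.

```java
public class LazyDemo {
    static class Heavy {
        // Runs only when Heavy is first actively used (lazy initialization)
        static { System.out.println("Heavy initialized"); }
        static int value = 42; // deliberately non-final: reading it triggers <clinit>
    }

    public static void main(String[] args) {
        System.out.println("before first use"); // Heavy is NOT yet initialized here
        System.out.println(Heavy.value);        // first active use: <clinit> runs, then 42 prints
    }
}
```

Note that a compile-time constant (a static final primitive) would be inlined by javac and would not trigger initialization, hence the non-final field in this sketch.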
|
This is a continuation of the blog post by Rajeev which provided an introduction to the IBM Health Center – please read more there. Health Center consists of two parts: an agent that needs to be enabled in the running Java application, and an Eclipse-based client that comes with IBM Support Assistant and needs to be connected to the agent. The Health Center client is supported on Windows and Linux x86 operating systems, whereas the agent is available for all IBM-supported platforms except HP and Solaris. The agent is shipped with the IBM JDK from 5 SR9 and JDK 6 SR3 onwards; the latest agent can be downloaded from the IBM Support Assistant. The IBM Health Center can be installed from the IBM Support Assistant (ISA). Follow the YouTube link (http://www.youtube.com/watch?v=6WjE9U0jvEk) to learn about:
1. How to install IBM Support Assistant
2. How to install the Health Center client
3. How to launch the Health Center client from IBM Support Assistant
To launch a Java application with the Health Center agent enabled, provide -Xhealthcenter as a Java runtime parameter at the start of the application. For Java 5.0 SR9 and earlier or Java 6.0 SR4 and earlier, provide -agentlib:healthcenter -Xtrace:output=healthcenter.out as Java runtime parameters before the start of the application. To launch the Health Center client from IBM Support Assistant, click on the Analyze Problem tab on the home page of IBM Support Assistant, select Health Center and click Launch. Click Next to get the connection dialog box. Specify the hostname, port number and basic authentication (if required) of the machine where the agent is running, and click Next. Once the client is successfully connected, you will see the hostname and port number on which the Health Center agent is started. Click Finish to proceed to the Status Summary page.
|