Significant time is expended on diagnostic data collection for the various problem scenarios customers encounter, in many situations leading to prolonged downtime of production systems. Gathering the right set of data in a minimal number of iterations from a failing application scenario is a common challenge for IBM Service teams, as the data collection process is largely manual and error prone. The challenge is aggravated because several sets of data need to be collected for common problem scenarios, and IT teams are often unaware of what is required.
IBM Serviceability teams have addressed these challenges by automating the data collection process, thereby reducing cycle times and improving turnaround times for problem resolution.
There are automated data collectors available for different IBM products as a part of the IBM Support Assistant framework. Some of them include Information Management Collectors, Rational Collectors, WebSphere Collectors and Lotus Collectors which provide an automated way of collecting product specific logs.
In this poster we introduce the Java Diagnostics Collector, which gathers documentation and diagnostic data associated with Java Virtual Machine problems. It is started automatically when the Java Virtual Machine detects a problem and produces dumps. Some of the key features of the Java Diagnostics Collector are:
• Runs automatically when a running Java Virtual Machine detects a problem.
• At JVM start-up, runs a diagnostic configuration check and warns the user if any environment setting required for diagnostics collection is disabled.
• Searches for data artifacts (system dumps, javadumps, heapdumps, etc.) from the event.
• Gathers any application-specific logs specified by the user.
• Gathers the diagnostic data files produced by the Java Virtual Machine and generates a single zip file containing the diagnostics from the event.
|
This is a continuation of the blog post by Rajeev, which provided an introduction to the IBM Health Center (please read more here), and of my blog post on installing and launching Health Center (please read more here).

Subsystems: The Status Summary page lists the various subsystems profiled by the Health Center: Classes, Environment, Garbage Collection, I/O, Locking, Native Memory and Method Profiling. This page provides a summary of the "health" of each subsystem. The summary indicates any potential problems with a particular subsystem (shown in red), tuning suggestions for optimizing the subsystem further (shown in amber), and healthy subsystems with no problems (shown in green).

Classes Subsystem: Health Center provides class loading information, showing exactly when a class was loaded into the Java Virtual Machine (JVM) and whether the class was cached in the Shared Classes Cache. This helps users determine whether their application is being affected by excessive class loading.

Environment Subsystem: Health Center uses an 'environment perspective' to provide details of the Java version, Java classpath, boot classpath, environment variables and system properties. This is particularly useful for identifying problems on remote systems, or on systems where you cannot access the configuration details. If the Health Center detects a misconfigured application, it provides recommendations on how to fix it. For example, a debug trace option introduced during the development cycle may have been left enabled when moving to production, with possible performance implications; the Environment subsystem will point out such a scenario.

Garbage Collection Subsystem: Garbage collection is often the most significant performance bottleneck for Java applications, and tuning GC correctly can deliver significant performance gains.
Health Center identifies where garbage collection is causing performance problems and suggests more appropriate command line options.

I/O Subsystem: Health Center uses an 'I/O perspective' to monitor application input/output (I/O) tasks as they run. Users can use this perspective to monitor how many files are open and which ones, and to help solve problems such as the application failing to close files.

Locking Subsystem: Health Center records all locking activity and identifies the objects with the most contention. Health Center analyses this information and uses it to provide guidance about whether synchronization is impacting performance.

Native Memory Subsystem: The Native Memory subsystem gives a view of process virtual memory and process physical memory plotted against time, with the JVM start time as the reference.

Profiling Subsystem: The Health Center uses a sampling method profiler to diagnose applications showing high CPU usage, giving full call stack information for all sampled methods. Health Center works without recompilation or bytecode instrumentation and shows where the application is spending its time.
|
In an earlier post, I discussed the locking mechanism in the IBM J9 VM in Java 5 and Java 6. In this blog post, let us look at the changes introduced in the new locking mechanism, R626.

We have seen that a thread spins in the following scenarios:
- When a flat lock is busy
- When an inflated lock is busy
- When a native-level lock is busy
As mentioned earlier, spinning increases efficiency because it avoids expensive context switches and the need to immediately fall back on OS calls to manage locking. However, CPU consumption is not free. If the system is already busy, spinning can eat CPU cycles that could otherwise be used elsewhere. For locks that are held for a longer time, spinning does not make sense (say the thread spins for a period X, while the lock is typically held for X+Y).
The following features are introduced to address the above:

Eliminate multiple spins

This feature disables excessive spinning. Consider an object monitor in inflated mode. When a thread fails to acquire that lock in inflated mode with a spin, it spins again on the JVM system monitor. R626 disables this secondary spinning on JVM system monitors. The -Xthr:secondarySpinForObjectMonitors option disables this new behaviour and re-enables the spin on JVM system monitors.
Adaptive spinning

This feature decides whether spinning is beneficial for a given lock, and disables it where it is not necessary. It is enabled by default from IBM Java R626 onwards.

The feature works by sampling lock acquisition information whenever there is contention on the lock. It collects data on hold times (the period for which the lock is held) and slow percentages (the percentage of lock acquires that made the thread wait, out of the total lock acquires).

Based on heuristic thresholds for these values obtained in our labs, the J9 JVM will disable spinning on a particular lock if spinning is deemed unnecessary. Once spinning on a lock is disabled, it stays disabled for the rest of the application run, and sampling on that lock is disabled too.

The -Xthr:adaptSpin and -Xthr:noAdaptSpin options enable and disable this feature.
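The behaviour these options influence can be explored with a simple contended lock. The sketch below is a hypothetical demo, not tied to any particular workload: two threads repeatedly acquire the same monitor, producing exactly the kind of contention that adaptive spinning samples. On an IBM J9 VM it could be launched with -Xthr:adaptSpin or -Xthr:noAdaptSpin to compare behaviour; on other JVMs those options are not available.

```java
// Hypothetical demo: two threads contending on a single monitor.
public class ContentionDemo {
    private static long counter = 0;
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                synchronized (lock) {   // heavily contended monitor: a spin candidate
                    counter++;
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Both increments happen under the lock, so no updates are lost.
        System.out.println(counter); // prints 200000
    }
}
```

On J9, comparing wall-clock times of `java -Xthr:adaptSpin ContentionDemo` and `java -Xthr:noAdaptSpin ContentionDemo` under load gives a feel for when spinning pays off.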
|
JAX is a common term for the JAX-B and JAX-WS technologies, which are the basic building blocks of web services. A web service is a distributed web application that uses open XML-based standards and transport protocols to exchange data between remote devices. Starting with Java 6, these fundamental web service classes are bundled within the JDK itself. The main intention is to support web service development without requiring an application container such as Tomcat or WebSphere. The JDK also bundles a lightweight web service container for hosting web services during testing.
1) What is JAXB?
JAXB stands for Java API for XML Binding. JAXB makes it easier to transform and access XML data from within Java, and to create XML from the corresponding Java objects. For JAXB, the JDK provides APIs (javax.xml.bind), compiler tools (xjc and schemagen) and a framework implementation that automates the mapping between a) in-memory Java objects and XML, and b) XML and in-memory Java objects. a) is commonly called JAXB marshalling, while b) is commonly called JAXB unmarshalling. JAXB marshalling gives a client application the ability to convert a JAXB-derived Java object tree into XML form; the marshalling process can be compared to object serialization, where a Java object is converted into a network-friendly representation. JAXB unmarshalling gives a client application the ability to convert XML data into JAXB-derived Java objects; the unmarshalling process can be compared to object de-serialization, where object bytes transmitted over the network are converted back into Java objects.
JAXB provides an efficient and standard way of mapping between XML and Java objects. Compared to XML parsing using DOM or SAX, JAXB has a smaller memory footprint: it creates objects on demand and thus uses memory efficiently.
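As a rough sketch of marshalling and unmarshalling, consider the following self-contained example. The Person type is hypothetical, and the example assumes a JDK (Java 6 to 10) that still bundles javax.xml.bind, or the JAXB API jar on the classpath of newer JDKs:

```java
import javax.xml.bind.JAXBContext;
import javax.xml.bind.annotation.XmlRootElement;
import java.io.StringReader;
import java.io.StringWriter;

public class JaxbDemo {

    // Hypothetical JAXB-annotated type; public fields are mapped by default.
    @XmlRootElement
    public static class Person {
        public String name;
        public int age;
    }

    public static void main(String[] args) throws Exception {
        JAXBContext ctx = JAXBContext.newInstance(Person.class);

        // Marshal: in-memory Java object -> XML (comparable to serialization).
        Person p = new Person();
        p.name = "Alice";
        p.age = 30;
        StringWriter out = new StringWriter();
        ctx.createMarshaller().marshal(p, out);
        String xml = out.toString();

        // Unmarshal: XML -> in-memory Java object (comparable to de-serialization).
        Person back = (Person) ctx.createUnmarshaller()
                                  .unmarshal(new StringReader(xml));
        System.out.println(back.name + " " + back.age);
    }
}
```

Running the program round-trips the object through XML and prints the recovered field values.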
2) What are the JAXB compiler tools?
The JAXB compilers generate the JAXB artifacts that are essential at runtime for marshalling and unmarshalling. i) xjc: generates fully annotated Java classes from an XML schema file. Usage: xjc [-options ...] <schema file>
ii) schemagen: generates a schema file from JAXB-annotated Java classes. Usage: schemagen [-options ...] [java source files]
3) What is JAX-WS technology? JAX-WS stands for Java API for XML Web Services, and the technology helps build client-server web services. Under the hood, the JAX-WS implementation uses JAXB for the XML-to-Java and Java-to-XML conversion needed for web service communication. Older web service implementations were based on Remote Procedure Call (RPC), which uses RMI underneath. JAX-WS hides all SOAP operations from the web service developer, who does not need in-depth knowledge of SOAP unless a problem arises that requires debugging.
Because JAX-WS is supported through open standards, a JAX-WS (Java) client can access any other compatible service provider, whether it is a Java service or a .NET service. This is feasible because JAX-WS uses technologies defined by the World Wide Web Consortium (W3C): HTTP, SOAP, and the Web Services Description Language (WSDL). WSDL specifies an XML format for describing a service as a set of endpoints operating on messages.
4) What JAX-WS tooling does the JDK supply?
Like the JAX-RPC tooling, the JAX-WS tooling provides tools to help with bottom-up (wsgen) and top-down (wsimport) development approaches.
i) wsgen (with JAX-RPC this was called java2wsdl): the wsgen tool generates web service artifacts from an annotated Java class. From the annotated class it produces the Java classes required to build the WSDL. Usage: wsgen [-options] [Java service class (SEI)]
ii) wsimport (with JAX-RPC this was called wsdl2java): the wsimport tool generates the artifacts needed to write a Java web service client from a WSDL (such as one built via wsgen). Usage: wsimport [-options] [WSDL file]
|
Understanding DNS Hostname Resolution

Determining the IP address of a machine from its hostname is termed hostname resolution. Every computer (host) has a name, and the hostname should correspond to an IP address mapping stored in the local hosts file or in a database on a DNS server. Translating (resolving) machine and domain names into the numbers actually used on the Internet is the business of machines that offer the Domain Name Service.

Definition of DNS: the Domain Name System is a hierarchical naming system for computers, services or any resources connected to the Internet or a private network. It translates meaningful domain names into IP addresses. An often-used analogy is that the Domain Name System serves as the phone book of the Internet, translating human-friendly computer hostnames into IP addresses.

Methods of hostname resolution:
- Local hostname: the value returned by the hostname command, or the name configured for the computer; this name is compared with the destination hostname.
- Hosts file: a local file that maps hostnames to IP addresses.
- DNS server: a server that is queried to resolve hostnames to IP addresses.

Hostname resolution on Windows: a) If the destination address is the same as the local address, the local IP address is returned and the hostname resolution process stops. b) Otherwise, the "DNS Client Resolver Cache" is checked for the required hostname; if it is found there, the IP address is returned, else the DNS server is queried to get the IP address.

Hostname resolution on Linux: suppose your application needs to know the IP address of a particular computer.
The application requiring this information asks the 'resolver' on your Linux PC to provide it. a) The resolver queries the local hosts file (/etc/hosts) and/or the domain name servers it knows about (the exact behaviour of the resolver is determined by /etc/host.conf). b) If the answer is found in the hosts file, it is returned; if a domain name server is specified, your PC queries that machine. If the DNS machine already knows the IP number for the required name, it returns it; if it does not, it queries other name servers across the Internet to find the information. The name server then passes this information back to the requesting resolver, which gives it to the requesting application.

DNS Client Resolver Cache: the "DNS Client Resolver Cache" is a dynamically constructed RAM-based table consisting of the entries in the hosts file and the hostnames that the OS has previously resolved successfully through DNS. Responses received from DNS queries are cached for a certain period of time, which is configurable through the "Time to Live" (TTL) parameter.

Managing the DNS resolver cache: the following security properties control the DNS resolver cache in Java.
- networkaddress.cache.ttl: specified in java.security to indicate the caching policy for successful name lookups from the name service. The value is an integer giving the number of seconds to cache a successful lookup.
- networkaddress.cache.negative.ttl: specified in java.security to indicate the caching policy for unsuccessful name lookups from the name service. The value is an integer giving the number of seconds to cache the failure. A value of 0 indicates never cache; a value of -1 indicates cache forever.
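As a small illustration, these java.security properties can also be set programmatically through java.security.Security before any lookup is performed. The values below are arbitrary examples, not recommendations:

```java
import java.security.Security;

public class DnsCacheConfig {
    public static void main(String[] args) {
        // Cache successful lookups for 30 seconds...
        Security.setProperty("networkaddress.cache.ttl", "30");
        // ...and never cache failed lookups.
        Security.setProperty("networkaddress.cache.negative.ttl", "0");

        // Read the value back to confirm the setting took effect.
        System.out.println(Security.getProperty("networkaddress.cache.ttl"));
    }
}
```

Note that these properties must be set before the first name lookup; once InetAddress has started caching, changes may not take effect.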
Starting with Java 6, the default value of "networkaddress.cache.ttl" changed to 30, which can be confirmed with the following program (note that sun.net.InetAddressCachePolicy is an internal API):

public class Foo {
    public static void main(String[] args) {
        System.out.println(sun.net.InetAddressCachePolicy.get());
    }
}

Viewing and flushing the DNS cache:
- To view the contents of the "DNS Client Resolver Cache" on Windows, issue ipconfig /displaydns from a command prompt.
- To flush the DNS cache contents on Windows, issue ipconfig /flushdns from a command prompt.
- To flush the DNS cache contents on Linux, restart the nscd daemon, which manages the DNS cache on Linux, with the command `/etc/init.d/nscd restart`.

Hosts file: a common way to resolve a hostname to an IP address is a locally stored text file containing IP-address-to-hostname mappings. The file resides in /etc/hosts on UNIX platforms and in the systemroot/System32/Drivers/Etc folder on Windows.

Hostname resolution in Java: the InetAddress class provides methods to resolve a hostname to an IP address and vice versa. The following methods can be used for hostname resolution:
- getAddress(): returns the raw IP address of this object.
- getAllByName(String host): given the name of a host, returns an array of its IP addresses.
- getByAddress(byte[] addr): returns an InetAddress object, given the raw IP address.
- getByAddress(String host, byte[] addr): creates an InetAddress based on the provided hostname and IP address.
- getByName(String host): determines the IP address of a host, given the host's name.
- getCanonicalHostName(): gets the fully qualified domain name for this IP address.
- getHostAddress(): returns the IP address string in textual form.
- getHostName(): gets the hostname for this IP address.
- getLocalHost(): returns the local host.
A simple program exercising a few of the above methods:

import java.net.*;

class InetAddressTest {
    public static void main(String[] args) throws UnknownHostException {
        InetAddress address = InetAddress.getLocalHost();
        System.out.println(address);
        address = InetAddress.getByName("<Any URL Address>");
        System.out.println(address);
        InetAddress[] sw = InetAddress.getAllByName("<Any URL Address>");
        for (int i = 0; i < sw.length; i++)
            System.out.println(sw[i]);
        System.out.println("getCanonicalHostName(): " + address.getCanonicalHostName());
    }
}
|
This is a continuation of the blog post by Rajeev, which provided an introduction to the IBM Health Center (please read more here).

Health Center consists of two parts: an agent that must be enabled in the running Java application, and an Eclipse-based client, which comes with IBM Support Assistant and connects to the agent. The Health Center client is supported on Windows and Linux x86 operating systems, whereas the agent is available for all IBM supported platforms except HP and Solaris. The agent ships with IBM JDK 5 from SR9 and JDK 6 from SR3 onwards; the latest agent can be downloaded from the IBM Support Assistant.

The IBM Health Center can be installed from the IBM Support Assistant (ISA). Follow the YouTube link (http://www.youtube.com/watch?v=6WjE9U0jvEk) to learn: 1. How to install IBM Support Assistant; 2. How to install the Health Center client; 3. How to launch the Health Center client from IBM Support Assistant.

To launch a Java application with the Health Center agent enabled, provide -Xhealthcenter as a Java runtime parameter at the start of the application. For Java 5.0 SR9 and earlier, or Java 6.0 SR4 and earlier, provide -agentlib:healthcenter -Xtrace:output=healthcenter.out as Java runtime parameters before starting the application.

To launch the Health Center client from IBM Support Assistant, click the Analyze Problem tab on the IBM Support Assistant home page, select Health Center and click Launch. Click Next to get the connection dialog box, then specify the hostname, port number and basic authentication details (if required) of the machine where the agent is running, and click Next. Once the client has connected successfully, you will see the hostname and port number on which the Health Center agent is running. Click Finish to proceed to the Status Summary page.
|
Last time, I ended on the parent delegation model and the reasons why you might want a custom classloader for your application. Let me dwell on this a bit.

In the delegation model, all custom classloaders sit beneath the application classloader. I mentioned that when a classload is requested of a classloader, it should ask its parent to try to load it first. Well, the folks who wrote all this magic (a.k.a. classloading) thought: "how do we improve this process of parent delegation?" The answer is the classloader cache.

Every classloader has its own cache, which holds the defined class structure of every successful classload it has performed. So let me amend the parent delegation definition a bit: "when a classload is requested of a CL, the CL should first look for the class structure in its cache. If it finds one, use it; else, delegate to the parent" (and they can live happily ever after, until a CNFE surfaces!).

How does it help? Well, the very first time for any class, parent delegation is a must. If the parents fail to load the class, the request comes back down to the current CL itself. If it finds the class, it stores it in its cache. So the next time a classload is requested and you find the class in the cache, you can use it, because you are assured that parent delegation already occurred sometime in the past for this class. All is well.

So it's handy to remember this when you write your own classloader. Lookup sequence: CACHE (this.findLoadedClass()) >>> PARENT (super.loadClass()) >>> DISK (this.findClass()).

Sample code:

protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
    Class<?> c = findLoadedClass(name); // check whether the class is already loaded
    if (c == null) {
        try {
            c = super.loadClass(name, resolve); // ask the parent classloader
        } catch (ClassNotFoundException e) {
            // fall through and try to load from disk ourselves
        }
        if (c == null) {
            c = findClass(name); // else load from disk
        }
    }
    return c;
}
So, the classloader cache certainly speeds up the classloading sequence a bit (not to be confused with the Shared Class Cache, which I hope to blog about in a few days' time). To lessen the tax on the system, the JVM also employs what is called lazy loading, meaning a classload is initiated only at the following times: i) creation of the first object of that class; ii) creation of the first instance of any of the subclasses of that class; iii) initialization of any of that class's static fields. This way you are assured of classloads of only the required classes. Saves space, saves time. (Its opposite number is eager classloading: recursive loading of all classes referenced in the application, which I believe is used in real-time applications.)

A few things on class unloading: it comes into effect ONLY if there's a custom classloader in your application, because classes loaded by the other three classloaders are never unloaded. A class and its classloader are linked through JVM internal structures, so only when a classloader goes out of scope for the JVM will that classloader's classes be unloaded. (For more on this, you should attend the webinars on "Understanding Java Memory Management" and "Debugging Classloader Memory Leaks in the WebSphere Application Server" as part of the Java Week.)

CNFE, NCDFE and the other usual suspects are coming soon, along with some of the debugging techniques we use to find 'em goons. Happy Java Week!
|
In today's complex and integrated environments, the ability to quickly determine the "health" of an application assumes high importance. The IBM Health Center is a lightweight profiling tool that provides a comprehensive view of the "health" of the various subsystems of any Java based application.
Developers, performance engineers and practitioners can use the IBM Health Center tool to quickly identify performance bottlenecks, which is especially helpful in an agile development environment. It is designed to attach to a running Java process to explore what it is doing, how it is behaving, and what you could do to make it happier.
The IBM Health Center attempts to answer some of the following common questions that developers, performance engineers, service personnel and WebSphere administrators often ask:
• What is my Java application doing?
• Why is it doing that?
• Why is my application going so slowly?
• How can I make it go faster?
• Is my application scaling well?
• Is our algorithm sensible?
• Do we need to tune the Java Virtual Machine?
• Is my configuration sensible?
• Is the system stable?
• Have I got a memory leak?
• Is the application about to crash?
Classes, Environment, Garbage Collection, Locking, Method Profiling, Native Memory and I/O are the areas of JVM activity that can be viewed, and on which recommendations can be received, from the Health Center tool. The Health Center summary will indicate any potential problems with a particular subsystem (shown in red), tuning suggestions for optimizing the subsystem further (shown in amber), and healthy subsystems with no problems (shown in green).
The IBM Health Center not only provides visualization of the various subsystems of the JVM from a performance and reliability perspective, but also provides a set of recommendations that can be applied to alleviate the identified problems. The performance overhead of the tool is negligible, so it can be deployed on production systems.

The IBM Health Center is available via the IBM Support Assistant framework (www.ibm.com/software/support/isa/).
The latest version of Health Center (version 1.3) comes with some very useful new features. Options are now available in Health Center to generate dumps (heap dump, system dump and Java dump) at runtime for more detailed analysis. For example, if the Health Center reports increasing heap usage over time and indicates a possible memory leak, you can generate a heap dump for further offline analysis. Health Center 1.3 also supports enabling native trace points, and the ability to disable subsystems selectively so that data collection monitors only the areas of interest.
An interesting YouTube video providing a quick introduction to the IBM Health Center: http://www.youtube.com/watch?v=5Tcktcl0qxs

The next couple of blog posts on this subject will cover the steps to install and launch the IBM Health Center, and give a detailed description of the various subsystems it profiles and provides recommendations on.
|
The ORB, a.k.a. Object Request Broker, is the Java implementation of the OMG's CORBA (Common Object Request Broker Architecture) specification, which enables remote objects to be used like local objects. It also lets objects communicate with each other independently of the platform and languages used to implement those objects.
The ORB is implemented as part of the Java virtual machine and plays an integral part in the application server runtime environment. In client/server communication, the ORB primarily provides the following functionality: 1) a framework for clients to locate remote objects on the server and invoke requests on them; 2) connection management; 3) management of the request and response messages to and from the remote Java objects; 4) marshalling/demarshalling of the messages sent over the wire, based on the CDR (Common Data Representation).

Providing a framework for clients to locate remote objects and invoke requests: for a client to successfully locate a remote object on the server, the remote object needs to be registered with a naming registry and exported. The "bind" call ensures that the remote object is registered with a naming registry, and javax.rmi.PortableRemoteObject.exportObject() ensures that the remote object is exported, i.e. that it is ready to be invoked. Once the client does a look-up, javax.rmi.PortableRemoteObject.narrow() has to be called on the reference returned by the look-up, and the object returned by narrow() is used to invoke the remote object.

Now let's look at what happens internally in the ORB on a javax.rmi.PortableRemoteObject.exportObject() call: 1) the tie and stub classes associated with the remote implementation are loaded; 2) the tie-class-to-remote-implementation and tie-class-to-stub-class mappings are cached for faster future look-up. The javax.rmi.PortableRemoteObject.narrow() call is fairly simple: it just checks whether the object returned by the look-up can be cast to the remote interface, i.e. the object should extend javax.rmi.CORBA.Stub and implement the remote interface. The stub and tie play a key role in the remote communication.
The reference returned by javax.rmi.PortableRemoteObject.narrow() is actually a stub. Because the stub implements the remote interface, its implementations of the remote functions invoke the ORB to send the data to the server and receive the response. At the server side, the data is received by the ORB, which identifies the corresponding tie; the tie then invokes the remote function implementation. Once the reply is ready, the tie passes the response data to the ORB, and subsequently the data is sent back to the stub at the client side. In the next blog, we will explore how the ORB manages connections.
|
Since it's Java technology week, I'll start with a post on learning Java. Karel helps not only in learning and understanding Java concepts, but also in building logic through the puzzles it offers: the more puzzles you solve with this robot, the sharper your logic and problem-solving ability become. Karel the Robot is a robot simulator that affords a gentle introduction to computer programming. Users write Karel programs and feed them to the simulator to watch them execute. By solving Karel problems you will build your logic and enjoy programming.
Karel is a robot. It has its own world, known as the grid. The grid has avenues (columns) and streets (rows). Karel is placed at 1st street and 1st avenue, i.e. (1,1) if denoted in Cartesian coordinates. This is Karel's world.
Karel has only four main functions:
1. move() - makes Karel move one step ahead.
2. putBeeper() - puts down a beeper.
3. pickBeeper() - picks up a beeper.
4. turnLeft() - makes Karel turn left in place.
Karel is similar to Java, so it is helpful for those who are learning Java. It also supports object-oriented programming: you don't need to define a function every time you want to use it; instead, you can write the function once inside a class, then call it by creating an object of that class.
To download Karel, follow the download link. To install it, download an assignment from the assignments link.
After downloading both the assignment and the Eclipse IDE, unpack the archive and open eclipse.exe, then click on the Import button.
Then use the browser to find the Assignment-1 folder. When you do so, Eclipse will load the starter project and display its name in the Package Explorer window. Then click on the small triangle and start solving the problem.
Now let's do a simple Karel program. We will solve a problem from Assignment 1, CollectNewspaper (the documentation for the problem is available in the handouts for Assignment 1). The goal of the problem is to get the beeper. The code is:
import stanford.karel.*;

public class CollectNewspaperKarel extends Karel {

    public void run() {
        move();
        turnRight();
        move();
        turnLeft();
        move();
        pickBeeper();
    }

    public void turnRight() {
        turnLeft();
        turnLeft();
        turnLeft();
    }
}

Karel's running interface is shown below:
To learn Karel, download the book: www.stanford.edu/class/cs106a/book/karel-the-robot-learns-java.pdf
To follow me and learn more about programming:
Facebook: http://www.facebook.com/code2learn
Website: http://www.code2learn.com
|
Here is what I think about virtual methods in Java.
In Java, by design and specification, all non-static, non-private, non-final methods (other than constructors) are virtual. This means that the selection of the method to be invoked at a call site depends on the actual (runtime) type of the receiver object, rather than on its declared (static, compile-time) type.
In C++, this is true only when the invoker object is accessed through a pointer and the method is explicitly declared 'virtual'. If either condition is false, the call is always resolved (the definition identified and selected) to the definition in the class of the declared type of the invoker.
In contrast, in Java there are no pointers, so there is no flexibility for methods to exhibit virtual and non-virtual behaviour based on the declaration mode: there is only one way to refer to objects, through references. Moreover, in JRE implementations, a Java object loses its connection with the declaring type and becomes associated with its defining class. Given this, it makes sense to drop the virtual keyword and designate all normal methods as virtual.
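A minimal sketch of this behaviour, using hypothetical Animal/Dog types purely for illustration:

```java
// Virtual dispatch: the runtime type of the receiver selects the method.
class Animal {
    String speak() { return "generic"; }
}

class Dog extends Animal {
    @Override
    String speak() { return "woof"; }
}

public class VirtualDemo {
    public static void main(String[] args) {
        Animal a = new Dog();          // declared type Animal, runtime type Dog
        System.out.println(a.speak()); // prints "woof": Dog.speak() is selected
    }
}
```

Even though the variable is declared as Animal, the overriding Dog.speak() runs, which is exactly the virtual behaviour described above.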
But how often does a program really require the virtual property? Very rarely. What percentage of virtual methods exercise this feature in a meaningful manner? Arguably less than 5%. Even in those cases where multiple subclasses are designed and methods redefined, an efficient programmer will use an interface (or abstract class) for the base type, which means the base method is pure virtual (abstract), not merely virtual. This means that a normal, concrete Java method (designed to be virtual) actually utilizing its virtual-ness is quite rare.
Implementing virtual methods is easy in JREs, but their presence makes the execution engine incapable of pre-linking method call sites, potentially slowing down performance. In practice, method resolution has to wait until execution reaches the call site. Dynamic compilers devirtualize methods to an extent, by tracing the source of the invoker object in the neighborhood of the call site, but this does not really eliminate the problem and adds its own computational overhead. One of the standing challenges for JITs today is the inability to perform inter-procedural analysis and compress the code any further, owing to the extremely delayed method resolution. The powerful technique of ahead-of-time compilation is rendered less effective by the inability to resolve methods in advance.
The decision to qualify all methods as virtual was not a well-thought-out design, but an unanticipated side effect: an accidental by-product of the pointer-less design.
|
Here is what I think of Java parameter passing conventions.
At the programmer's level, Java is said to pass objects by reference and primitives by value. What this means is that the callee receives the heap address of the object: the object references themselves are actually passed by value. This also means Java saves the space and effort of copying the entire object onto the subroutine linkage channel (for example, stack memory).
By definition, pass by reference means 'a parameter passing convention where the lvalue of the actual parameter (argument) is assigned to the lvalue of the formal parameter.'
When passed by reference, the callee method can manipulate the original object's attributes, invoke the object's methods, and re-new, re-assign and purge the components of a composite object thus passed. These operations affect the caller's original reference, because there is only one object in the heap, pointed to by both of these references.
For destroying an object, the C++ way is to 'delete' the object, and the C way is to 'free' the pointer. If passed by reference or address, both these languages have the flexibility of cleaning the object or a structure from anywhere in the caller-callee chain. The invalidation of an object indirectly invalidates other references or pointers cached elsewhere in the stack locations, and trying to reuse those references or pointers results in a crash.
This is different in Java. Since there is no explicit freeing of objects, we rely on assigning null to a reference, the only way to make an object eligible for cleanup. Even after the callee nullifies its reference, the object lives on through the caller's reference. This means an object cannot be freed (or marked for freeing) through one reference while a peer reference is alive, and vice versa.
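A minimal demonstration of the behavior just described (the class and method names are mine): the callee can mutate the shared object, but reassigning or nulling its copy of the reference has no effect on the caller.

```java
import java.util.ArrayList;
import java.util.List;

public class NullifyDemo {
    // Only the reference value (a heap address) is passed, by value.
    static void discard(List<String> list) {
        list.add("touched");   // mutates the shared object: visible to the caller
        list = null;           // reassigns only the local copy: invisible to the caller
    }

    public static void main(String[] args) {
        List<String> mine = new ArrayList<>();
        discard(mine);
        System.out.println(mine);   // the caller's reference is still non-null
    }
}
```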
This may be a conscious design choice to eliminate bad references and ensure that every object reference is either null or a valid object's address. During garbage collection, the memory of unreferenced objects is not really freed back to the system; it is kept in an internal free pool and remains mapped into the process, so if it were accessed through stale references, such a bad dangling pointer would actually cause more damage than a crash.
But then how do you clean up an unwanted Java object? Set your reference to null and wait for a GC to occur? That might not work, because if there is a second reference elsewhere in the stacks or registers, consciously placed or not, the object is not collected. Consequently, many of the objects the programmer has explicitly discarded will remain in the heap until the last reference to the object also goes out of scope. This may happen sooner, later, or never.
Many memory leaks, including the infamous classloader leaks, can be attributed to this hidden and under-documented behavior of Java. And this is the very reason we see more OutOfMemoryErrors than NullPointerExceptions.
|
Here is what I think of java garbage collection:
In Java programs, the use of pointers is forbidden by virtue of a design strategy, or a security policy. Without pointers, functions cannot access objects across stack frames, among many other limitations, and the inability to pass objects to and from functions would limit the scope of a programming language at large. To remedy this, in Java, user-defined objects are inherently passed by address (termed a reference), in contrast to C and C++, where passing arguments by address is a volitional choice.
Conventionally, when arguments are passed by value, what the callee receives is an isolated copy of the passed object. In C, when passed by address, the callee can manipulate the caller's arguments; in C++ the same applies, along with call by reference. User objects are normally created on the stack. In the case of producer functions, where the function generates and returns an object, the allocation has to be made in the heap (a locally created object cannot be returned from a function, as that would leave a dangling reference). Such cases are infrequent, so one can manually free the objects that were 'newed'. The two modes of creating user objects are:
Class obj;                  // object and handle created on the stack
Class *obj = new Class();   // object in the heap, pointer on the stack
In Java, without pointers, the language semantics do not allow the above flexibility: there is only one way to create objects, either everything on the stack or everything in the heap, not both. Creating all objects on the stack would be a bad choice, since objects whose life span exceeds the defining method would be destroyed when the frame is popped on the function's return, essentially forbidding methods from returning the objects they create and leaving Java an incomplete language. As a workaround, all objects are created in the heap. Now, as a matter of fact, it is difficult for a programmer to delete all the objects he 'newed', which are quite many, indeed most of them.
Hence the garbage and hence the collector.
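As an aside, the heap-only model is also what lets a Java producer method safely return an object it created, which stack allocation would forbid. A minimal sketch (the class and method names are mine):

```java
public class ProducerDemo {
    // The StringBuilder is allocated on the heap; when this frame is
    // popped, the object survives and the returned reference stays valid,
    // unlike a pointer to a stack-local in C/C++.
    static StringBuilder makeGreeting(String name) {
        StringBuilder sb = new StringBuilder("Hello, ");
        sb.append(name);
        return sb;   // no dangling reference possible
    }

    public static void main(String[] args) {
        System.out.println(makeGreeting("world")); // prints Hello, world
    }
}
```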
In a non-java programming paradigm, it is like allocating memory at arbitrary heap locations, and later scanning the entire virtual memory to clean up the filth.
Garbage collection is not a Java feature. It is a compromise. A consequence of refraining from pointers. A skillful attempt to mend a defect. An unchecked Sun heredity and an unbridled software hypothesis that we have carried and dragged all the way along.
|
Why do we want to solve problems quickly?
- It means a lower Total Cost of Ownership (TCO) for us and our customers.
- Problem determination and resolution has become a daunting task, as more of today's solutions involve complex collections of products and applications deployed in heterogeneous environments.
- Developing and deploying new solutions gets delayed by the maintenance of diverse existing systems.
- 25-50% of time is spent in problem determination and resolution.
- The skills needed to do manual cross-product problem determination are scarce and expensive.
In this age of complex, integrated systems and short deployment cycles, the ability to respond to business demands in a timely manner is becoming critical. Pro-active problem determination, quick and easy access to relevant information, and reduced turnaround times on interactions with support organizations are key; client "self assist" is the buzzword. Java developers frequently encounter runtime problems during the development, migration and post-production stages and spend a significant amount of time diagnosing and resolving those issues. Quick turnaround for problem determination is a key focus area.
In recent times, a wide array of new tooling has emerged from the IBM Java Technology Center and WebSphere Serviceability teams that enables developers to debug Java runtime issues in a convenient way. A key focus area for IBM is to make the user experience with these tools a convenient one: tools should be easy to obtain, available from a single source and easy to update. The IBM Support Assistant (www.ibm.com/software/support/isa) is a free local workbench that includes rich features and serviceability tools for quick resolution of problems. All the Java tools delivered through the IBM Support Assistant provide the capabilities of:
- "Visualization": different graphical views of a diagnostic data input (for example, a view of increasing Java heap usage over time)
- "Analysis": analysis reports based on the data analyzed (for example, analysis of increasing heap usage over time indicating a memory leak)
- "Recommendations": recommendations and suggestions to resolve the observed problem (for example, analysis of heapdumps to identify the cause of a memory leak)
Some cool Java runtime tools to check out (all available through IBM Support Assistant):
- Garbage Collection and Memory Visualizer
The Garbage Collection and Memory Visualizer is available as a plug-in to IBM Support Assistant (ISA). It analyzes verbose GC output to provide plots, summaries and recommendations. The tool profiles heap usage, heap sizes, pause times and many other properties. It offers the flexibility of comparing multiple logs in the same plots, and provides many views on the data (reports, graphs, tables). This is a powerful tool for debugging performance bottlenecks in a Java application caused by garbage collection.
- Memory Analyzer Tool (MAT)
Memory leaks in Java are a consequence of non-obvious programming errors, and debugging memory leaks is not an exact science. The Eclipse Memory Analyzer is a fast and feature-rich Java heap analyzer that helps you find memory leaks and reduce memory consumption. The Memory Analyzer was developed to analyze production heap dumps with hundreds of millions of objects. Once the heap dump is parsed, you can re-open it instantly, immediately get the retained size of single objects, and quickly approximate the retained size of a set of objects.
- The IBM® Monitoring and Diagnostic Tools for Java™ – Health Center™
Health Center is a lightweight monitoring tool (with a performance overhead of no more than 2-3%) which allows running Java™ Virtual Machines to be observed and health-checked. Health Center provides insight into general system health, application activity and garbage collection activity. Developers, performance engineers and practitioners can use it to quickly identify performance bottlenecks, which is especially helpful in an agile development environment. It is designed to attach to a running Java process to explore what it is doing, how it is behaving, and what you could do to make it happier.
More information on these tools can be found at: http://www.ibm.com/developerworks/java/jdk/tools/index.html
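To make the memory-leak discussion above concrete, here is a contrived (and deliberately bounded) sketch of the classic pattern a heap analyzer like MAT surfaces: a static collection that keeps "discarded" objects reachable from a GC root. All names here are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

public class LeakDemo {
    // The static cache holds strong references forever, so entries the
    // application has logically discarded remain reachable and are
    // never garbage collected.
    static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest(int i) {
        byte[] buffer = new byte[1024];
        CACHE.add(buffer);   // the "forgotten" reference: classic leak pattern
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            handleRequest(i);
        }
        // In a heap dump, a dominator-tree view would show CACHE
        // retaining roughly 1 MB the application no longer uses.
        System.out.println(CACHE.size() + " buffers retained");
    }
}
```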
*Java is a registered trademark of Oracle Inc.
|
Let me start with something I said in my previous post: "The Just-In-Time compilation activity is an investment. To maximize the returns on this investment (RoI), the JIT needs to compile the right methods, at the right time, in the right manner". We can deduce a lot of understanding from this statement.
Why is JIT compilation an "investment"? Just-In-Time compilation is NOT the compilation of Java code to bytecode, which is achieved by the javac compiler. JIT compilation is the conversion of bytecode to machine instructions. The phrase Just-In-Time signifies the need-based, dynamic nature of this compilation: it happens during application execution, and extensively utilized methods are its candidates. The following expenses are incurred by the JIT compilation of a method:
1. CPU time, which could otherwise be utilized by the application - the JIT is a major CPU contender within the JRE, next to the application threads and the divine Garbage Collector!
2. A permanent chunk of memory to store the machine code - the compiled code resides on the heap, not in the data segment as it would for statically compiled code. This increases the memory footprint of the JRE.
3. A temporary chunk of memory used as work-memory during compilation.
While the third expense is usually insignificant, the first and second can be of significant magnitude, considering the large number of methods that can get JIT compiled in business applications. These expenses should, apparently, affect application execution. But the prime objective of the JIT compilation philosophy is imparting a performance boost to extensively used (hot) regions of the application code. It is hence imperative to treat JIT compilation as an investment, the performance boost being the returns.

What is precisely meant by "maximizing the RoI"? With the current understanding, one may attempt to assert that JIT compiling the entire application would, theoretically, yield the best performance. A more curious user of the JRE may know her application code very well and point out potentially hot regions in the code, which she may want JIT compiled unconditionally! But trust me, this is against the philosophy of dynamic compilation. The IBM J9 JIT compiler is a testimony to the wonderful concept of dynamic compilation. It follows a simple rule: "Limit your investment based on the current hotness (significance) of a method". The amount of CPU and memory spent in JIT compilation should, ideally, be a function of the hotness of the method. This means we are making a genuine assumption that a method which has attained a particular degree of hotness may turn hotter in the near future. We may then start investing, or make a new investment (discussed soon), in its JIT compilation. This controlled form of investing CPU and memory in JIT compilation ensures an effective use of its end product, the machine code, which is precisely what "maximizing the RoI" means. On the other hand, aggressive and unconditional JIT compilation can waste valuable CPU and memory and is never recommended. Just think what would happen if you JIT compiled a hundred thousand methods and your application never used them again!
This risk always exists, but it tends to zero in the approach followed in the IBM J9 JRE.

How do we select the "right methods"? In this context, the right methods are the hot spots in the application code: methods which are extensively utilized and have a high relative significance. These methods are candidates for JIT compilation. The J9 JIT uses two approaches to identify these hot spots:
1. Invocation counters: used only in the early stages of tracking, and definitely expensive. Each invocation of a method increments a counter; when a particular threshold is hit, the method deserves to be JIT compiled.
2. Sampling: a cheaper approach where application threads are periodically sampled. It tracks the increase in the relative significance of a method and selects methods for recompilation (new investments!).

What do we actually mean by the "right manner"? This question brings us to one of the hallmarks of the J9 JIT compiler. The JIT compiler performs a lot of code optimization on the method under compilation. A wide range of optimization algorithms work on the method, including classical compiler optimizations, optimizations specific to object-oriented languages and Java, and platform-specific optimizations. Optimizations can involve anywhere from simple to massive computation and are the real expense points in the compilation process. It is imperative to categorize the optimizations into, for example, low-cost, moderate-cost, high-cost and very expensive. We then define optimization levels and attribute each level with a set of optimizations: lower levels carry low-to-moderate-cost optimizations, while higher levels include the very expensive ones. I earlier stated, "Limit your investment based on the current hotness (significance) of a method". This limiting is achieved through the optimization levels.
When a method first becomes eligible for JIT compilation, through the invocation count technique, we choose to compile it at a low optimization level. As the hotness (significance) of the method keeps increasing, we keep investing more by recompiling the method at higher levels of optimization. This is what I meant by "compiling in the right manner". This adaptive compilation strategy is responsible for "maximizing the RoI" through a process of incremental investment of CPU and memory, ensuring an effective utilization of the end product (machine code) at each increment before moving to the next. Of course, the number of optimization levels is small.

I plan to revisit optimization levels in the next post.
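The counter-then-promote idea described above can be sketched as a toy model. To be clear, the thresholds, names and structure here are entirely invented; the real J9 heuristics (sampling, queues, asynchronous compilation) are far richer.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of tiered JIT selection: count invocations per method, and
// promote a method to the next optimization level each time its count
// crosses the next threshold (an incremental "new investment").
public class TieredJitModel {
    static final int[] THRESHOLDS = {100, 1_000, 10_000}; // invented numbers

    final Map<String, Integer> invocations = new HashMap<>();
    final Map<String, Integer> optLevel = new HashMap<>();  // 0 = interpreted

    // Called on every invocation of 'method'; returns its current opt level.
    int record(String method) {
        int count = invocations.merge(method, 1, Integer::sum);
        int level = optLevel.getOrDefault(method, 0);
        if (level < THRESHOLDS.length && count >= THRESHOLDS[level]) {
            level++;                      // a real JIT would recompile here
            optLevel.put(method, level);
        }
        return level;
    }
}
```

A method invoked fewer than 100 times stays interpreted (level 0); its 100th call promotes it to level 1, its 1,000th to level 2, and so on, mirroring the "limit your investment by hotness" rule.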
|
The amount of memory available to the Java heap and native heap of a Java process is limited by the operating system and hardware. A 32-bit Java process has a 4 GB address space shared by the Java heap, the native heap and the operating system. 32-bit Java processes have a maximum possible heap size which varies by operating system and platform: on AIX the maximum Java heap possible is 3.25 GB (though the advised maximum is 2.5 GB, allowing sufficient space for the native heap), whereas on Windows the maximum available is 1.8 GB [more on Java heap sizing in the next post]. 64-bit processes do not have this limit: addressability runs to terabytes. It is common for enterprise applications to have large Java heaps (we have seen applications with Java heap requirements of over 100 GB), and 64-bit Java allows massive heaps (benchmarks have been released with heaps of up to 200 GB). However, the ability to use more memory is not "free". 64-bit applications require more memory, as Java object references and internal pointers are larger; the same Java application running on a 64-bit Java runtime may have 70% more footprint than on a 32-bit runtime. 64-bit applications also run slower, as more data is manipulated and processor caches become less effective (the data being larger); they can be up to 20% slower. A 64-bit JVM is only recommended if a Java heap much greater than 2 GB is required, or if the application uses computationally intensive algorithms for statistics, encryption and the like that need high-precision support. (The IBM Just-In-Time compiler takes advantage of 64-bit capabilities: it generates machine code that exploits 64-bit instruction extensions and high-performance computational features, and it leverages the extra registers to reduce register spills and memory loads and stores.) There have been major improvements in 64-bit Java performance with the compressed pointers technology.
(More about this in another post.) 32-bit versus 64-bit runtimes bring another interesting consideration: scaling. When considering application scaling there are two choices: monolithic scaling with a small number of 64-bit JVMs (scaling up), or horizontal scaling with many clustered 32-bit JVMs (scaling out). The advantage of monolithic scaling is that more data and larger datasets can be cached, with less administration and management overhead; the flip side is reduced performance with increased size. Horizontal scaling provides process isolation (and resilience) with better performance, though there is an administration overhead.
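A quick way to see what a given runtime actually grants your process is the standard java.lang.Runtime API; the figures printed will of course vary by platform, bitness and any -Xmx setting in effect, so no expected output is shown.

```java
public class HeapReport {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long maxHeap = rt.maxMemory();     // upper bound the heap may grow to
        long committed = rt.totalMemory(); // heap currently committed
        long free = rt.freeMemory();       // unused part of the committed heap
        System.out.printf("max=%d MB committed=%d MB free=%d MB%n",
                maxHeap >> 20, committed >> 20, free >> 20);
        // On a 32-bit JVM, maxHeap is capped well below 4 GB by the
        // constraints discussed above; on 64-bit it can be set far higher.
    }
}
```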
|
The last two decades, I would say, have experienced a revolution in the way people think about computer programming. This revolution emanated out of the development and adoption of, moderately to extremely, rich programming languages. Richness spans from something as simple as platform-independence, to something as complex as dynamic typing. Irrespective of the amount of wealth hidden in them, these programming languages have often mandated a robust runtime environment to bolster an effective use of their wealth. Undoubtedly, Java has led this bandwagon and the Java Runtime Environment (JRE) has been constantly maturing to support this phenomenal programming language.
Java gets most of its applause for the fact that it is platform-independent. Java code, when compiled, is transformed into something called bytecode. Bytecode is a sequence of software instructions (the Java Virtual Machine has its own well-defined, universal instruction set, just as a hardware platform does). The Java methods that you write are visible to the JRE as bytecode, and it is this universally accepted bytecode that gives Java its platform-independence. The JRE has a stack-based interpreter that processes one bytecode instruction at a time. Bytecode instructions can be as simple as an add and as complex as a tableswitch, an instruction that represents a switch statement. While executing a method, the interpreter loops through its bytecode, triggering computations. The interpreter is an abstract execution engine that wraps around the physical machine: hardware bundled with the operating system.
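For a feel of what bytecode looks like, here is a trivial method together with (as a comment) roughly the stack-based instructions javac emits for it, which the interpreter executes one at a time:

```java
public class ByteCodeDemo {
    // 'javap -c ByteCodeDemo' shows for this method approximately:
    //   iload_0        // push the first int argument onto the operand stack
    //   iload_1        // push the second int argument
    //   iadd           // pop both, push their sum
    //   ireturn        // return the int on top of the stack
    static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        System.out.println(add(2, 3)); // prints 5
    }
}
```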
The interpreter hence forms an additional layer of abstraction between the application and the underlying physical machine. Other than platform-independence, this layer also allows the JRE to effectively exercise runtime control over the execution. I shall delve deeper into this in the future. For now, it is important to realize that this layer introduces an operational delay and decreases the throughput of the application. This is a side effect of interpretation. The best approach to work around it is dynamic, or Just-In-Time (JIT), compilation.
Just-In-Time (dynamic) compilation transforms method bytecode into machine code. The compilation is called dynamic because it happens during application execution, unlike the classical static compilation of C/C++ programs. The unit of JIT compilation is a method, not an entire class! JIT compilation picks up methods at runtime and translates their bytecode into machine instructions. All subsequent invocations of such a method result not in bytecode interpretation but in machine-level execution (just like your statically compiled C/C++ program). The additional interpreter layer is hence peeled off! This leads to a large improvement in method execution times and increases application throughput. But there is a certain cost involved in JIT compilation.
JIT compilation happens in parallel with application execution. So, is it an overhead to the application? Not exactly. JIT compilation is an investment, the return on which is the execution speed gained by peeling off the interpreter layer. The best candidates for JIT compilation are methods which have been extensively interpreted; they have reached a stage where they deserve to be promoted to machine code. A small investment in the form of compilation leads to a huge return in the form of increased execution speed. But there is a risk involved here. What if a JIT compiled method is never used again? Will our investment not go in vain? Yes, it will. So the JIT compiler should make every attempt to maximize the returns on this investment. This is possible simply by compiling the right methods, at the right time, in the right way! This is precisely what the IBM J9 Just-In-Time compiler achieves, and I plan to discuss how it does so in subsequent posts.
|
Computers may keep getting faster and more powerful, yet many organizations are still hampered by the inconsistent performance of their systems. While today's systems are capable of processing most transactions within a matter of milliseconds, a percentage of these transactions still take an order of magnitude longer to complete, because systems temporarily slow down from extraneous or internal housekeeping operations that tax system resources. For a growing number of organizations today, such unpredictability is disruptive, costly and unacceptable. A trading desk at a brokerage firm cannot ensure the integrity of its transactions if some are slowed by a systems bottleneck. Financial services organizations are under pressure to assure that both front-office and middle-office transactions not only execute at blazing speed, but are consistently fast across the board – or else face the scrutiny of regulatory agencies. That is why real-time processing with determinism – the ability to deliver predictable, consistent results – is becoming a necessity. Now, thanks to new developments in the market, real-time processing capabilities are available through standardized software solutions that don't require massive investments in skills or additional hardware. WebSphere Real Time offers a fast and deterministic version of the world's best-known development language. Real-time Java-based applications can be deployed with minimal impact on current configurations, with no need to re-learn a special-purpose language. WebSphere Real Time now makes it simple and affordable for organizations of all types to build out real-time operations with predictable performance, enabling them to redirect critical resources to core business requirements rather than expending time and money supporting custom, low-latency systems.
Programmers no longer have to rely on languages such as C, C++ or Ada 95 for real-time programming. The Java platform now represents a viable alternative for highly deterministic, distributed, real-time applications in critical systems, ranging from command and control, weapons, industrial automation and financial systems to telecommunication infrastructures. Financial services firms have been buying the latest and greatest technology for years, in an effort to manage an explosion of data, support complex transactions, meet stringent regulatory requirements, and compete in fast-changing markets. Functions such as volume-weighted average pricing (VWAP), derivatives pricing, and pre- and post-trade short-running analytic programs can benefit from real-time. In this industry, the ability to analyze and leverage the latest and freshest data means competitive advantage. To meet this challenge, firms are embracing high-performance trading and analytics systems. Enterprises leveraging real-time capabilities can respond more quickly than their competition to new information and changing market conditions. Running their time-sensitive, mission-critical applications on real-time Linux, Java, and WebSphere middleware not only reduces process dispatch latencies, but also gives enterprises the time advantage they need to reduce the risk of financial losses and retain leadership in their markets. While financial services organizations are seeing the initial benefits of real-time Linux and Java, there are numerous advantages for other industries, from government to healthcare to manufacturing. Real-time processing will bring predictability to applications such as real-time product simulations, language translation, and audio/video streaming, among others. For many industries, the time is NOW to move.
|
Member :: Ramana V Polavarapu
Q. Tell us about yourself & your role with IBM.
A. Right from the beginning, I have been extremely fond of mathematics. Consequently, I planned to get into Mathematics, Physics and Chemistry (MPC) in intermediate (11th and 12th grades). However, I was discouraged from getting into MPC because of my visual impairment, as there was no provision for the college to support me with the lab work. Of course, I was disappointed, and I decided to go for Commerce, Economics and Civics (CEC), since I believed that Economics would certainly have math in it. I completed my bachelor's degree in Economics. Then, I looked at a management career through XLRI, IRMA and IIM. All three institutes claimed that they had no provision for a visually impaired dude! So I went for a master's degree in Economics from the University of Hyderabad. Later, I spent one year at ISI. Then, I secured my Ph.D. in Economics from the University of California at Davis. I joined the University of Colorado at Denver as a faculty member and continued in that position for the next seven years. During my stint at the University of Colorado, I realized that everyone in Information Technology was making more money than I was with my university job! I switched careers and went into computer science. For the last eleven years, I have been with this industry. In conclusion, I did not really choose the path that I have been walking; in other words, I am not one of those heroes you find in the writings of Ayn Rand! At the same time, I can wholeheartedly tell you that I have no regrets at all. Current job profile: For the last year, I have been with the Java Technology Center (JTC), working on the Apache open-source project called Harmony (the alternative class library). Once again, I am planning to head back to Research because of one more interesting opportunity with MIOP.
Q. What has your career journey been like? How did you get started?
A.
To some extent, I answered this question in the introduction. In IT, I commenced my career with a bunch of e-commerce projects, using application servers like BroadVision and ATG Dynamo. Then I joined SAP Labs, and for the next five years I worked on various aspects of NetWeaver. Roughly three years ago, I got into IBM Research.
Q. You were once an economist; how challenging is the role of an IBM researcher for you?
A. I did not face any problem in terms of research methodology, but I did have challenges in developing a research agenda in the area of computers. While I was with the research division, I focused on service sciences, software engineering, mobile computing and social networks.
Q. How do you keep yourself updated on technologies? Are there any favorite websites, books, journals or blogs you follow?
A. I read lots of books. Currently, I am reading Domain-Driven Design; I strongly recommend it. I also follow blogs. My favorites are developerWorks, TechCrunch and Harvard Business Review.
Q. Is there anyone that you look up to and model yourself on?
A. It changes from time to time. Currently, my hero is Kent Beck.
Q. How do you find developerWorks/My developerWorks useful?
A. For over six years, I have been working only in Java. In my view, there is no better source than developerWorks. However, I am still figuring out My developerWorks.
Q. What is your advice to future aspirants?
A. Keep nurturing the love for technology. Money alone cannot make us happy.
Q. What motivates you?
A. The previous answer makes sense here too. I use Java to solve my day-to-day problems. For example, I downloaded a book called "Tips from the Trenches" today. The copyright stuff was there on every page, and it was getting irritating, so I wrote a simple Java program with a hint of regular expressions to clean it up. Coding is fun. I do not know how long it will last, but I have been enjoying coding for the last 25 years.
|
M E M B E R S P O T L I G H T
In his own words, Hemanth is the kind of person who admits he doesn't know something if he really doesn't know it, and who will not stop there but rather hunt it down; he loves to sit for long hours learning new things and trying to do something revolutionary. He believes in the quote "If you can dream it, you can do it". Though he is a computer science graduate, he is interested in art and philosophy. He also does some translations on Launchpad.
Q. Tell us a bit about yourself & your accomplishments.
As a kid, I always imagined that computers were devices that could answer any question and solve any problem we have in life! No wonder this curiosity put me on a high adrenaline rush that is still making me explore the extraordinary field of computers. Defining oneself is the hardest thing to do; I asked my friends, and most of them said I'm just a computer geek. Relative terms are always confusing, but I would define myself as a person who always likes to face challenges in life and has always tried to do something different. After completing my bachelor's in computer science, I was more adept than the normal curriculum required and was more interested in doing new things on my own. Accomplishments, indeed, is a very relative term! Below are a few noticeable ones:
- Won the top speaker award at IBM Develothon 2010.
- First prize in a state-level technical paper presentation conducted by Infosys.
- Gave more than 50 ideas to Google Wave; a few did get implemented.
- 201st best contributor worldwide on Ubuntu Brainstorm.
- Took part in the first ever open-web P2PU course from Mozilla.
- Gold medal in a state-level debate competition.
- One of the top 100 members selected for the Adobe Flex boot camp.
- Mentor approval for GSoC for an idea called "Ants and Algorithm".
Q. You were announced as the Best Speaker at IBM Develothon 2010, Bangalore. How was the experience?
A. This was the third time in a row I attended the IBM developerWorks unconference as a speaker, and I have seen it evolve. This time there was pre-talk voting, where the audience decided whether they would like to hear someone speak on a particular topic. Getting past that was in itself a tough task, especially amongst many on-the-spot entries. It's always tricky to gauge the crowd's mentality; sometimes overwhelming preparation is just void, and sometimes an extempore wins the crowd! It was quite an epic victory, and I was all excited [as I usually am when it comes to speaking in front of a large crowd]. After the pre-talk voting I had already updated my social networks, saying, "I've been selected to be one of the speakers at IBM Develothon, wish me luck." Then it was for the audience to select the best speaker of the day amongst the lot. It was a roller-coaster ride, seeing so many people agreeing with my thoughts. It made me feel elated, and I got a great opportunity to network during the yummy lunchtime. I have blogged about it in detail here: http://www.h3manth.com/content/ants-journey-planet-called-ibm
Q. Do you have any favorite technology-related websites/books that you can recommend to others?
A. Websites: {techcrunch, slashdot, news.ycombinator, reddit, mashable, readwriteweb}.com. Books: {Founders, Coders} at Work; Design Patterns: Elements of Reusable Object-Oriented Software; Surely You're Joking, Mr. Feynman!; and if you have read all of those, for lifetime reading there is The Art of Computer Programming (TAOCP) for all of us :).
Q. What is the best advice you have ever been given?
A. The best advice my mother once gave me: as humans, everyone has good and bad qualities; learn to see only the good qualities in them. I thought it was just a cliche, but when I tried to apply it, I saw a different world altogether!
Q. How do you find developerWorks useful?
A. The very first article I read on developerWorks was in the Linux section; it was a happy read. My developerWorks has been drawing my attention these days, with many renowned geeks blogging on their areas of expertise. It is indeed a great repository of good stuff. It would be even nicer if there were live interaction with everyone; a simple chat application would be fun.
Q. Your blog has a vast amount of great content. What advice would you give a blogger trying to find new stories and produce great new content every day?
A. First and foremost, write about everything you do that is in unison with your blog's genre; have a deadline, be self-motivated, and never worry about how others will react. As everyone is unique, we must find our own way and add a variety of flavours to all our writing. My philosophy is to just write about what I have tried on a particular topic, and not to worry about driving traffic to the blog or generating comments; when something is interesting and new, people will indeed like it. Still, after publishing a new post it is important that at least a few of your social-media friends know about it; sharing is life. Blogger's block indeed happens to every blogger. A few things I have followed: let the mind relax in a cool room, start brainstorming on a particular topic, and look for resources; if someone has already done it, try to think of better ways of doing it.
Q. How do you wish to see your career shape up in the future?
A. Having completed my degree in 2009, and with some start-up experience behind me, I am eager to apply my potential to the fullest. Moving on, I'm confident enough to rely on my technical and articulation skills to achieve full satisfaction in whatever I get to work on.
Thanks for the candid responses, Hemanth!
To network with Hemanth, visit hemanth.hm
|