I thought I'd write about WebSphere because it's my area of expertise, and I can't help but be excited about our new Application Server and how it is going to help our clients. I will be sharing some of its features, because they have really evolved to meet our clients' demands.
The application server market has evolved over the past decade, with many advances in the architectural framework allowing for increasingly mission-critical and rapidly scaling applications. A teen in a centennial company, IBM WebSphere Application Server (WAS) has travelled a challenging but highly exciting path of innovation in this space. Everyone is talking about trends such as cloud computing, data-center virtualization, mobile computing and green IT, which are driving new requirements: today's enterprises demand an application server that is intelligent, fast, scalable, and optimized for cloud and mobile, for developing mission-critical applications. IBM WebSphere Application Server V8.5.5 Liberty profile not only provides all of these elements, but also helps reduce the total cost of ownership while increasing developer productivity. Now you can see why I'm excited!
The previous release, WebSphere Application Server V8.5, provided significant enhancements in terms of developer experience and high-end resiliency. For developers requiring a quick and simple development runtime, the Liberty profile was introduced as a great option for building web applications that do not require the full Java Platform, Enterprise Edition (Java EE) environment of traditional enterprise application server profiles. WebSphere Application Server V8.5.5 continued this focus with key enhancements to the Liberty profile capabilities and qualities of service, enabling clients and partners to build a strong foundation and reduce cost. For developers and businesses that need a lightweight and simple development environment, the Liberty profile supports programming model additions such as EJB Lite, CDI (Contexts and Dependency Injection), web services, Java Message Service, and message-driven beans. It also comes with developer tool options to match one's project development needs.
If you work with SMB clients, WebSphere provides a family of fit-for-purpose application servers to meet the application demands at every level. There is great news for clients looking at developing and deploying faster, lightweight, secure web and mobile applications, by way of the recently introduced IBM WebSphere Application Server Liberty Core, a lightweight development and production runtime for web applications. Designed for applications that do not require a full Java EE stack, WebSphere Application Server Liberty Core is certified to the Java EE 6 Web Profile specification. It helps you respond to enterprise and market needs more quickly by enabling less complex and more rapid development and deployment of web and mobile applications using fewer resources. The new Liberty Core is extremely lightweight and fast, with a small footprint (about 50 MB) and quick start-up (under 5 seconds)! It is easily packaged for deployment, and is extensible through Liberty feature SPIs (System Programming Interfaces). Did you also know that the Liberty profile can run on an Android phone? It can even run on a Raspberry Pi, which can be used to remotely control a toy car over a network! Sounds interesting, doesn't it?
Readers interested in trying out the free trial versions of the product can visit the following link: http://wasdev.net
[Padmakumar (PK) Nambiar is the Lead Manager for WebSphere Application Server development in India. PK can be reached at firstname.lastname@example.org]
We live in a world surrounded by devices now; it could be the alarm clock that wakes you up in the morning or the mobile phone you always carry. These devices form an important part of our lives. Computing once meant only big enterprise servers, then slowly moved towards personal computers, and now usage of tablets and smartphones keeps growing. Most businesses are looking at extending their business capabilities to mobile devices. To achieve this, they are building applications for mobile and also interacting with several other devices that consume, route or produce information.
So it's now time to connect everything together. People are connecting things because there are business opportunities in doing so. Nowadays we are connecting several things to the enterprise: temperature sensors, medical devices, power meters, and so on. Some of them have been connected in the past using hard wiring; now we are seeing increasing use of networks such as cellphone infrastructure. We have started to think: if people can talk, why not the devices?
There are several types of applications that can now run on a mobile device. Some are web based and some are native applications. Now we even have hybrid applications, i.e., native applications that use the browser interface to deliver content. A product from IBM that can help address mobile needs is IBM Worklight, which can be used to develop hybrid applications. IBM Worklight provides an open, comprehensive and advanced mobile application platform for smartphones and tablets, helping organizations of all sizes to efficiently develop, connect, run and manage HTML5, hybrid and native applications. Mobile messaging can be used in a hybrid application to connect to the enterprise messaging backbone.
HTTP has been the standard for web applications for several years now, but you need more than just HTTP. Mobile and the Internet of Things bring additional challenges, like emitting information from one to many, listening to events, and pushing information. To address this, IBM now provides reliable mobile access to the enterprise, underpinned by the MQ Telemetry Transport (MQTT) protocol. The product that addresses these messaging needs is IBM WebSphere MQ. It has evolved a lot and now provides capabilities to move messages, interact between services, move files, and integrate with mobile applications and even sensors. MQTT plays an important role in mobile messaging: it provides reliable transport, push notification, and a publish/subscribe paradigm, and it also conserves battery and bandwidth. IBM WebSphere MQ Telemetry provides mission-critical connectivity and intelligence by delivering relevant near real-time data to intelligent decision-making assets.
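The publish/subscribe paradigm that MQTT is built on can be sketched in a few lines. To be clear, this is not the MQTT protocol or any WebSphere MQ API; it is a minimal in-memory model (the topic name and handlers are invented for illustration) showing how one published message fans out to every subscriber of a topic. A real client would use an MQTT client library and a broker instead of a local map.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class PubSubDemo {
    // Topic name -> list of subscriber callbacks
    static final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    static void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    static void publish(String topic, String payload) {
        // A single publish fans out to every subscriber of the topic
        for (Consumer<String> h : subscribers.getOrDefault(topic, List.of())) {
            h.accept(payload);
        }
    }

    public static void main(String[] args) {
        subscribe("sensors/temperature", p -> System.out.println("dashboard got: " + p));
        subscribe("sensors/temperature", p -> System.out.println("alerting got: " + p));
        publish("sensors/temperature", "21.5C");
    }
}
```

The design point this illustrates: the publisher never knows who the subscribers are, which is exactly what lets one sensor feed many consumers over MQTT.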
Something interesting for developers to think about now: how are mobile applications different from desktop or server applications? How do applications targeted at small sensors interact with the enterprise?
Disclaimer: Each posting on this site is the view of its author and does not
necessarily represent IBM’s positions, strategies or opinions. I do not
guarantee correctness of the opinions or content or sample code
presented here. Use it at your own risk.
Srihari Kulkarni, Staff Software Engineer at IBM India Software Labs, Bangalore, works as a Level 3 Support Engineer providing solutions to PMRs raised by WebSphere MQ customers. Prior to his role as a Level 3 engineer, he worked on WebSphere MQ as technical lead for JMS testing as well as pre-GA and fixpack testing. He has also co-authored an IBM Redbook and posts messaging-related content on the Messaging Blog on developerWorks.
Q. What has been the most challenging part of your job? The PMRs, of course. Each PMR is different from the others, and every PMR requires an in-depth study of the customer's setup and the code.
Q. What are your interests other than work? Recently, I have started toying around with programming for mobile phones. Apart from that, I like photography.
Q. You have been involved with IBM WebSphere MQ for quite some time now. How do you see the progression, both for the technology and your personal growth? Ever since I started working with WebSphere MQ, I've seen it evolve over many releases to offer customers immense capabilities in the messaging arena. MQ's interoperability with other products is very impressive too, and is only growing year to year. As a technology, MQ will continue to provide core reliable messaging capabilities with more and more features. I also anticipate seeing more products spin off from MQ to offer customized capabilities (like MQ FTE, AMS, etc.).
Q. Recently you did a virtual session for the My developerWorks community. How did you find the experience? The experience was a different one for me. I've mostly been used to giving sessions to a closed group of audience, unlike this one where the webinar was open to all. It is always nice to have new experiences. I do hope more and more people get accustomed to the concept of webinars, as they can be useful minus the cost.
Q. How do you keep yourself skilled on emerging technologies? Well, for one, I read a lot on the internet, and any site I find interesting, I quickly subscribe to its feed. Plus, there are a few colleagues of mine whom I interact with often to talk about such things.
Q. Are you following IMPACT 2011? Could you tell our readers a little about the latest buzz at the event, something they would be looking forward to? Yes, I am following IMPACT 2011. As with previous IMPACT conferences, do watch out for important announcements coming during the conference.
Q. Any advice you would want to give our members? My developerWorks is an excellent extension of IBM developerWorks. All I'd say is: if you are using any of IBM's products, it pays to be on My developerWorks. Keep your participation active and you'll see the rewards coming back to you.
Guru Mantra by Annbuvel C Natarajan on 'WebSphere Application Server 7.0 Overview'. We will highlight new features introduced in WebSphere Application Server Version 7.0, with focus on: simplification introduced for developers with Java EE 5 certification, performance improvements in the newer version, new topologies introduced by Intelligent Management, enhanced runtime provisioning, new fine-grained security features, properties-based file configuration, and packaging improvements with Business Level Applications, along with scripting and messaging enhancements.
Webinar on 'WebSphere Application Server 7.0 Overview'
WHEN: December 07, 2010. Start Time: 3:00 P.M. End Time: 4:00 P.M.
WEB-CONFERENCE LOGIN DETAILS (with audiocast streaming available): https://apps.lotuslive.com/meetings/join?id=7619301 OR https://apps.lotuslive.com/meetings/join Conference ID: 7619301
TELE-CONFERENCE LOGIN DETAILS
Dial-In Number: 44088778 (Bangalore, Chennai, Hyderabad, Kolkata, Mumbai, New Delhi, Pune) Meeting ID: 92378741
The meeting ID will be activated 10 minutes prior to the session.
Speaker: Annbuvel C Natarajan. Connect with Annbuvel on My developerWorks! Annbuvel is based in Bangalore. He has been associated with WebSphere Lab Services as a performance expert on the WebSphere family of application servers. He focuses on infrastructure setup and maintenance, performance tuning, and handling critical situations at customer locations for all platforms except System z. He has deep skills in WAS 6.1/7, WPS, and WVE.
Check out the learning resources shared by Annbuvel:
WebSphere Application Server: http://en.wordpress.com/tag/websphere-application-server/
There have been a number of teams across IBM who have independently started to use Agile and Lean approaches to increase the effectiveness of their development process. Here are a few agile practices that teams may look at to improve their agile scoring:
Have a document for the release which mentions the high-level themes and focus areas. It should contain: which stakeholders you have spoken to, and what their needs are.
User stories: this is a way to express your requirements and to communicate requirements within the team. Which feature do they need to address? What business value will the stakeholders get if we give them the solution? Stories with high business value are prioritized.
Teams may have similar artifacts like line items, use cases, tasks, etc.
Self-organizing team: the team itself commits and gives estimates for the use cases in the iteration (and not the team lead or management). The team decides the best practices to follow and adheres to them. Work distribution is done by the team itself, and not by management. The team decides how to act on blockers and tries to solve the problem; if it is not able to, the scrum master takes it to the management.
If you follow agile, you must have iterations for your release. Daily Scrum is a must. The whole team is involved in planning and execution (Dev, Test, ID, Support) and works together. It's basically the way you structure the team and work together.
Prioritized Backlog: a prioritized backlog allows you to choose what you want to do in a particular iteration. This is done at the beginning of the iteration. You can have dynamic backlogs which get updated over time. Prioritization has to be done mostly by the product owner, ideally in concurrence with stakeholders; others can add to the product backlog. Track your team velocity and use it for your planning: based on the team velocity of the previous iteration, you can replan your work. Fix the duration of each iteration, and do not stretch your iteration. A use case can be considered complete if unit test, FVT, required ID and demos are done (IBM-defined criteria). Each iteration is fully tested: unit test, FVT, and system test for the new capabilities are included. Testing should be brought in early in the test life cycle.
Stability / High Quality: at the end of each iteration, use cases are completely done and there should be a low number of open defects. It's a good practice to gather feedback during or at the end of each iteration from stakeholders. Stakeholders can also be internal to the project. The key is to take the feedback and act on it.
Reflections (MUST): there should be a specific time allocated to this at the end of each iteration. You discuss the lessons learned and make a note of the action items. This has to be done with the entire team. Act on the action items, and carry the good practices into the next iteration. The point of reflections is to come up with good practices and improvement areas and inculcate them in the next iteration.
Daily Scrum: don't do it just for the heck of it; it should bring out some value. It should be limited to three common questions: What did you do yesterday? What are you going to do today? Is anything blocking you?
Continuous Integration: some questions to be addressed are: how often are you integrating builds, and is it automated? There should be a dedicated machine, and a build should be triggered as soon as a check-in happens. If a build break happens, a notification goes to the person who checked in last. The build should also run automated unit tests.
Test-Driven Development: write test cases before the code is in place, using mock objects where needed. It is not a must, but there is quite a lot of focus on TDD. TDD ensures good quality of the product; if there are any design flaws, TDD brings them out early. Also review and use analysis tools to uncover potential flaws.
Peer reviews: informal peer reviews to remove defects.
Inspections: conducted for critical parts of the code. It's a team of 3-4 people, who might be internal to the team, or external if they are aware of the product and can do justice to the inspection. There is formal training given on how to do inspections.
Reviewer/Committer: you use a reviewer/committer check-in process to remove defects as early as possible in the development cycle.
Pair programming: developers work in pairs during coding, creating, reviewing and fixing defects in the code as it is entered.
Sustainable Pace: the workload is a steady, continuous pace for the team, not a crunch only at the end of the iteration.
Customer testing: invite customers to test the product, and see how they use it and how they test. We get to know their scenarios and add them to our test bucket, which can help reduce PMRs at a later stage. If you are aware of such scenarios at the time of collecting the requirements, it's always a good practice to include them in your testing cycle; you can collect these from stakeholders. You can also go to a customer location and do the testing in their environment and setup.
Value Stream Maps: this involves breaking down the processes (dev/test) and optimizing them. The team sits together and performs this task. This is basically for the removal of waste in the product.
Usage of effective tools: use effective tools for iteration planning, defect workflow, build setup, and so on.
Binary Logging and usage of binaryLog command in WebSphere Liberty Server
IBM WebSphere Application Server V8.5.5 enhances the High Performance Extensible Logging (HPEL) feature that was first introduced in V8.0. As the name suggests, HPEL was created to significantly outperform the existing application server logging and tracing mechanism, plus provide a powerful way to extend log and trace entries with extra fields. Beyond that, HPEL provides a number of ease of use improvements for working with log and trace content. Enabling HPEL is quick and easy, and does not require any code changes to your applications.
Binary logging is a high performance log and trace facility based on the full profile High Performance Extensible Logging (HPEL) technology.
Binary logging provides a convenient mechanism for storing and accessing log, trace, System.err, and System.out information produced by the application server or your applications. It is an alternative to the default log and trace facility, which provides the JVM logs and diagnostic trace files commonly named messages.log and trace.log.
In order to view binary logs in a human-readable format, we need to use a command called binaryLog, which is an extension of the logViewer command in WebSphere Application Server. But before that, I would like to give a brief introduction to the binary log concept and how it differs from normal 'basic' logging.
Introducing Binary Logging
Binary Logging is a new log and trace content storage and access system for WebSphere Liberty Server. Binary Logging stores log and trace content produced by the existing log and trace APIs, in particular the java.util.logging API.
Binary logging provides a log data repository and a trace data repository. See the following figure to understand how applications and the application server store log and trace information.(Figure 1)
Log data repository
The log data repository is a storage facility for log records. Log data is typically intended to be reviewed by administrators. This includes any information applications or the server write to System.out, System.err, OSGi logging service at level LOG_INFO or higher (including LOG_INFO, LOG_WARNING, and LOG_ERROR), or java.util.logging at level Detail or higher (including Detail, Config, Info, Audit, Warning, Severe, Fatal, and any custom levels at level Detail or higher).
Trace data repository
The trace data repository is a storage facility for trace records. Trace data is typically intended for use by application programmers or by the WebSphere® Application Server support team. This includes any information applications or the server write to the OSGi logging service at level LOG_DEBUG or java.util.logging at levels below level Detail (including Fine, Finer, Finest, and any custom levels below level Detail).
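As a rough sketch of this split, the following standalone java.util.logging example partitions records the way the two repositories do. It is not WebSphere code: the handler and the CONFIG threshold are stand-ins I chose for illustration, since the Detail and Audit levels are WebSphere extensions rather than standard JUL levels.

```java
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class RepoDemo {
    public static void main(String[] args) {
        Logger logger = Logger.getLogger("com.example.app");
        logger.setUseParentHandlers(false);
        logger.setLevel(Level.ALL);

        StringBuilder logRepo = new StringBuilder();
        StringBuilder traceRepo = new StringBuilder();
        logger.addHandler(new Handler() {
            @Override
            public void publish(LogRecord r) {
                // CONFIG and above stands in for WebSphere's "Detail or higher"
                // threshold: those records go to the log repository, while
                // finer records go to the trace repository.
                if (r.getLevel().intValue() >= Level.CONFIG.intValue()) {
                    logRepo.append(r.getLevel()).append(": ").append(r.getMessage()).append('\n');
                } else {
                    traceRepo.append(r.getLevel()).append(": ").append(r.getMessage()).append('\n');
                }
            }
            @Override
            public void flush() {}
            @Override
            public void close() {}
        });

        logger.info("Server started");       // lands in the "log" repository
        logger.fine("Entering doWork()");    // lands in the "trace" repository
        System.out.print("LOG>\n" + logRepo + "TRACE>\n" + traceRepo);
    }
}
```

Running it shows the INFO record under LOG> and the FINE record under TRACE>, mirroring the log/trace separation described above.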
Now, I would like to come to the topic of how to administer binary logs. As we know, we do not have text logs after enabling binary logging, so the challenge is to view the binary logs whenever required.
For that, the binaryLog command has been introduced. All content (logs and traces) can be accessed using one easy-to-use command (binaryLog), avoiding any possible confusion over which file to access for certain content.
binaryLog is an easy-to-use command-line tool provided for users to work with the log data and trace data repositories. It provides filtering and formatting options that make finding important content in the log data and trace data repositories easy. For example, a user might filter any errors or warnings, then filter all log and trace entries that occurred within 10 seconds of a key error message on the same thread.
When problems occur on your production application servers, it can be a time consuming task to discover and isolate the most important content in your log and trace files. Imagine that somewhere in your server's log files an error occurred on one of your systems.
Traditionally, you would have three options for finding problems in your log files:
Look at each log manually, perhaps grep-ing for " E " or " W " or "Error" or "Exception," and so on.
Write scripts to do the above searches.
Use tools to mine your logs for warnings, errors, and exceptions.
Binary logging stores its log and trace data efficiently in binary repositories, which you can then access, filter, and format using the binaryLog command-line tool. To better understand the usage of binaryLog, I have used a sample application which produces log records of various levels with custom messages.
In its simplest form, you can output all content from your logs and trace using this simple command from your bin directory (output shown in Figure 2):
./binaryLog view server1
where 'server1' is my server name. The command syntax is as follows:
The value of options is different based on the value of action.
Figure 2: Output from binaryLog command
Now, suppose I have various instances of logs stored in the log directory, but I want to view the logs of a specific instance only. First of all, I need to list all instances so that I can choose the one I am interested in. (Shown in Figure 3)
The command would be: ./binaryLog listInstances server1
Figure 3: Output from binaryLog command
As you can see, I have five instances for the "server1" server, but I am only interested in viewing logs for the latest instance. (Output shown in Figure 4) So I can use the command:
./binaryLog view server1 --includeInstance=latest
Figure 4: Output from binaryLog command
Also, if we need to view logs for any of the listed instances, we can simply use the same command with the corresponding instance ID. (Shown in Figure 5)
Similarly, I can filter logs on the basis of thread ID. (Shown in Figure 8)
So the command will look like this,
./binaryLog view server1 --includeThread=0000001c
Figure 8: Output from binaryLog command
Now I would like to say something about the new features of binary logging, which can be explored when viewing logs in the advanced format. As a tip, viewing your logs in the advanced format is also a great way to get the full logger name from a log entry (it appears as the 'source' field in log/trace records); this is helpful if you want to enable more granular tracing for a particular logger. Similarly, you can see the name of the thread that logged/traced each entry in the advanced format. (Shown in Figure 9)
Let's discuss the third important feature of the binaryLog command: the ability to view logs in real time.
View log or trace data in real time
When developing or troubleshooting applications, it's often helpful to be able to monitor the log output in real time. Perhaps you are aware of a problem and are trying to get it to reoccur, or you want to watch for a specific event as part of testing your application. At one point or another, you've probably found a need to fire up a shell window and execute a tail -f SystemOut.log to monitor your log output. (Shown in Figure 12)
Binary Logging can make this type of task easy by enabling you to monitor a log repository for new events without having to monitor multiple log files or worry about files rolling over, interfering with your ability to tail them. Try it out by simply invoking the binaryLog tool with the -monitor option:
./binaryLog view server1 –monitor
Figure 12: Output from binaryLog command
The monitor feature can be combined with filtering options to watch for specific events. Imagine you need to monitor a verbose logging application for critical error messages that pertain to a specific username that is part of a message. You could do this with something like the following to see all new severe or fatal log events, without the need for grep or other tooling. In the example below, I am using the monitor option while filtering logs based on logger. (Shown in Figure 13)
So, this is something really magical, and WebSphere administrators can easily appreciate the usability and simplicity of binary logging as well as the binaryLog command. In critical situations in production environments, where an administrator typically spends most of his time looking for the correct logs that can actually help in troubleshooting the problem, tools like the binaryLog command are a boon. Also, for developers who do not want to wade through the whole chunk of logs and are only interested in the logs of their own application running on one of the JVMs, binaryLog filtering is a must.
Performance tuning is really not a new topic these days, and I am happy to see that many people are aware of the importance of performance. I hope many of you are already familiar with the various tools that WebSphere Application Server (WAS) provides for performance monitoring, tuning and problem determination. In this session, I would like to highlight some new tools, and might also mention some existing tools.
The following are the key sub-systems you should look at to identify the cause of any performance problem:
Looking at overall System Utilization - (Hardware perspective)
JVM Performance (Java SE Perspective)
WAS Resource Performance (Java EE Perspective)
Application Performance. (App Perspective)
Overall System Utilization
Use the tools provided by the respective OS for CPU, memory and I/O monitoring.
Use of Application Performance Diagnostics (APD Lite)**
Details are attached below.
In this session, I am not focusing on all the tools mentioned above. I am assuming that you are familiar with basic system monitoring, JVM monitoring, and PMI and TPV.
Following two might be new to some of you.
Use of ISA-5 (IBM Support Assistant 5 )
Use of Application Performance Diagnostics (APD Lite)
IBM Support Assistant (ISA) Version 5
You might be familiar with the Eclipse-based ISA version 4. The latest, V4.1, provides access to several different serviceability tools which can assist you in many areas of problem diagnosis, such as Java troubleshooting, product configuration analysis, log analysis, and more.
Over time, it was observed that system diagnostic artifacts have increased in size, and we see users hitting the limitations of their laptop and desktop systems, especially with significantly larger artifacts like the multi-GB heapdumps you may get from 64-bit systems. At the same time, web technology has advanced significantly, and there is a greater desire to centralize activities associated with problem determination. ISA 5 is a web-based solution which can be hosted as a 'Team Server' and shared by multiple users.
More information is available at following link (look for ISA 5 Beta 3)
Timed operations is a feature newly added in V8.5.5 which helps WebSphere Application Server administrators see if any operations in their application servers are running slower than expected. Currently, timed operations track the duration of JDBC operations executed in the application server. In the future, this could be enhanced by adding tracking for EJBs, servlets and many more operations. The timed operations feature works by logging messages along with generating periodic reports.
In order to enable timed operations, we need to add timedOperations-1.0 feature to server.xml.
Figure 1 : To enable timed operation.
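For reference, enabling the feature in server.xml looks roughly like the following sketch. Only the timedOperations-1.0 feature name comes from this article; the featureManager wrapper is the standard Liberty convention, so merge it with your server's existing configuration:

```xml
<server description="sample Liberty server">
    <featureManager>
        <!-- Enables the timed operations support described above -->
        <feature>timedOperations-1.0</feature>
    </featureManager>
</server>
```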
When we enable timed operations, it logs warning messages to console.log and messages.log, where messages.log contains more detailed information, including the timestamps, thread IDs and component names that are not present in the console.log file.
Figure 2: snapshot of messages.log when timed operations feature is enabled.
A log message is created by timed operations in messages.log, logging a warning message for a SQL query whose execution time has grown from 0.091 ms to 0.169 ms. This feature is driven by setting the expected time for an operation (e.g., a SQL query) in the configuration file, which timed operations uses to analyze whether the operation is faster or slower than expected.
Timed operation feature reports
In addition to logging warning messages for JDBC operations, the timed operations feature has the ability to create a report in the messages.log file listing the operations that took the longest time to execute. The frequency of the report is once per day (24 hours).
Configuring the generation of timed operation report
We can disable the timed operations report by explicitly setting enableReport to false; by default, this property is enabled, providing logging details for the 10 longest timed operations, grouped by type and sorted within each group by expected duration. We can also change the frequency of report generation by setting the reportFrequency attribute.
Figure 3: Setting the time operations property in server.xml
[ Figure 3: This configuration will lead to a warning message being logged when the number of timed operations exceeds 15000. ]
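Putting those attributes together, the configuration element might look something like this sketch. The attribute names enableReport and reportFrequency come from the text above; the element name timedOperation and the maxNumberTimedOperations attribute are my assumption about the Liberty configuration schema, so verify them against the product documentation:

```xml
<!-- Hypothetical sketch: confirm element/attribute names and units
     against the Liberty schema before using. -->
<timedOperation enableReport="true"
                reportFrequency="1"
                maxNumberTimedOperations="15000"/>
```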
You can also use the server dump command to get a full report of all timed operations in the messages.log file, grouped by type, and sorted within each group by expected duration.
The timed operations feature eases the life of WebSphere Application Server administrators by reporting when operations are not performing as expected.
From WAS V8.5 onwards, a new feature has been added whereby, in the event of a crash or a failed messaging engine with no backup available, it is possible to recover the configuration data of the messaging engine. This helps recover the messages in the message store (which can be either a filestore or a database). This blog provides a practical example of this scenario.
1. The configuration consists of a bus, Bus1, which has an application server, server1, as the bus member. This bus member hosts a queue destination named queue1, which currently has 10 messages. To simulate a failure situation, delete the bus member.
2. To recover the configuration, run the recoverMEConfig command, providing the bus name, message store details, and the node and server names.
As a Level 3 engineer, I have come across many issues where customers report observing a JfapConnectionBrokenException in their logs. So today I decided to write about what exactly this exception indicates and the possible scenarios in which you would encounter it.
JfapConnectionBrokenException is observed in the following two scenarios -
a) When there is a network connection failure between the client and server.
b) When the server/client is busy and does not respond to the request sent by the peer.
Basically, when the connection between the server and client is idle for some time (5 minutes by default), the socket times out, which triggers the server/client to send a special request called a heartbeat to the peer to check if the connection is still valid. If the peer does not respond within the stipulated timeframe (7 seconds by default), a JfapHeartbeatTimeoutException is thrown to indicate that the heartbeat request has timed out. Eventually, the connection is invalidated and closed, logging a JfapConnectionBrokenException.
One thing to look out for in such a situation is the load on the server/client that was not able to respond to the peer within 7 seconds. There might be a situation where the server cannot respond within 7 seconds but, given extra time, would have avoided the disconnection. In such scenarios, increasing the timeout period to 60 seconds, or at most 2 minutes, can act as a workaround, providing some relief.
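To make the sequence concrete, here is a small standalone Java sketch of the pattern. It is purely illustrative and is not the JFAP channel code: peerAlive, the "pong" reply and the lambda peers are invented for the example. A heartbeat is sent, and if the peer does not answer within the timeout, the connection is treated as broken.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class HeartbeatDemo {
    // Send a "heartbeat" to the peer and wait up to timeoutSeconds for a reply.
    static boolean peerAlive(Callable<String> peer, long timeoutSeconds) {
        ExecutorService ex = Executors.newSingleThreadExecutor();
        try {
            Future<String> reply = ex.submit(peer);                // heartbeat request
            return "pong".equals(reply.get(timeoutSeconds, TimeUnit.SECONDS));
        } catch (Exception timeoutOrFailure) {
            return false;                                          // treat as a broken connection
        } finally {
            ex.shutdownNow();
        }
    }

    public static void main(String[] args) {
        // A responsive peer answers within the timeout: connection stays up.
        System.out.println(peerAlive(() -> "pong", 7));
        // An overloaded peer never answers in time: connection is marked broken.
        System.out.println(peerAlive(() -> { Thread.sleep(10_000); return "pong"; }, 1));
    }
}
```

The second call is the analogue of the busy-server scenario above: raising timeoutSeconds is the moral equivalent of increasing the heartbeat timeout property.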
To increase the heartbeat timeout property, add the corresponding entry in the sib.properties file in the properties subdirectory of the profile directory.
For more information on tuning the heartbeat properties, refer to the following link:
What is Performance Monitoring Infrastructure (PMI)in WebSphere
Application Server? How PMI data
is useful for tuning WebSphere Application Server?
PMI (Performance Monitoring Infrastructure), as the name suggests, is responsible for collecting performance data of a WebSphere Application Server environment, which can be used to identify and fix performance bottlenecks in the application server. Basically, PMI is a server-side component that collects performance metrics of a running application server. Some example metrics:
RequestCount: Total number of requests that a servlet processed.
FreePoolSize: The number of free connections in the pool.
WaitingThreadCount: The number of threads that are currently waiting for a connection.
Performance metrics of the application server are collected in such a way that there is minimum performance impact, ensuring efficient operation of WebSphere Application Server with minimum overhead. The metrics are classified into four levels; each level is designated to collect a certain amount of data, which is a subset of the next level. Using the Custom level, one can select exactly which performance metrics need to be collected. Each module exposes a different set of performance metrics; one can refer to the information center for the available metrics of a particular module and start collecting data based on the requirement.
For example, the information center page for the JDBC connection pool module lists ConnectionPool metrics such as CloseCount and AllocateCount, along with the description and the overhead involved in collecting each one. Based on the requirement, the user can enable collection of the relevant metrics:
CloseCount: The total number of connections closed (per connection pool).
AllocateCount: The total number of connections allocated (per connection pool).
CreateCount: The total number of connections created (per connection pool).
Also, users always have the option to enable or disable collection of performance metrics based on their requirements. PMI is responsible for collecting the data; once the data is collected, there are multiple ways in which you can view it:
1) Using TPV (Tivoli Performance Viewer): out-of-the-box monitoring, easy to use. I will discuss TPV in more detail in the next blog.
2) Using JMX client programs.
3) Using the performance servlet application.
Once PMI data is available, we need to analyze it and identify whether tuning is required. Consider the data exposed by ConnectionPool, which includes WaitingThreadCount (the number of threads that are currently waiting for a connection) and FreePoolSize (the number of free connections available in the pool). For a given connection pool, if WaitingThreadCount is slowly increasing and FreePoolSize is slowly decreasing, it indicates that the pool size needs to be increased or adjusted in order to improve throughput.
Similarly, there are several other metrics made available as part of PMI, using which an application server can be efficiently tuned to give better performance.
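The heuristic described above, WaitingThreadCount trending up while FreePoolSize trends down, can be sketched in plain Java. The class and method names here are purely illustrative and not part of any PMI API; the samples would come from TPV, a JMX client, or the performance servlet:

```java
// PoolTrendCheck.java - illustrative heuristic over sampled PMI values.
// Not a PMI API; the sample arrays stand in for values read from TPV or JMX.
public class PoolTrendCheck {

    // True when waiting threads are rising and free connections are falling,
    // i.e. the connection pool is likely undersized.
    public static boolean poolLikelyUndersized(int[] waitingThreadCount, int[] freePoolSize) {
        return isIncreasing(waitingThreadCount) && isDecreasing(freePoolSize);
    }

    static boolean isIncreasing(int[] samples) {
        for (int i = 1; i < samples.length; i++) {
            if (samples[i] < samples[i - 1]) return false;   // any dip breaks the trend
        }
        return samples.length > 1 && samples[samples.length - 1] > samples[0];
    }

    static boolean isDecreasing(int[] samples) {
        for (int i = 1; i < samples.length; i++) {
            if (samples[i] > samples[i - 1]) return false;   // any rise breaks the trend
        }
        return samples.length > 1 && samples[samples.length - 1] < samples[0];
    }

    public static void main(String[] args) {
        int[] waiting = {0, 2, 5, 9};   // WaitingThreadCount samples over time
        int[] free    = {10, 7, 3, 1};  // FreePoolSize samples over time
        System.out.println("Pool likely undersized: " + poolLikelyUndersized(waiting, free));
    }
}
```

In practice you would feed this a sliding window of samples and only act when the trend holds over several intervals.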
Disclaimer: The postings on this site are my own and don't
necessarily represent IBM's positions, strategies or opinions.
The much-awaited simple, lightweight Liberty application server profile is now available with the free download of WebSphere Application Server for Developers v8.5.
This new profile provides a very fast, highly composable, lightweight, easy-to-use application server for the development environment. As a developer, you also have the option of installing the WebSphere Developer Tools for Eclipse v8.5 from the Eclipse Marketplace.
So what makes Liberty stand apart?
- Incredibly fast, with less than 5-second startup time
- Lightweight runtime footprint, occupying less than 60MB for most applications
- Small download size of less than 50MB
The Liberty profile currently supports Java web applications, OSGi applications, and mobile applications. It is not only designed to improve the development experience but can also be run in production. Go ahead and download the new Liberty profile and share your experiences, feedback and comments as a response to this blog.
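The composability comes from a small server.xml in which you enable only the features your application needs. A minimal sketch (feature and attribute names assumed from the 8.5 documentation; check your release) looks like this:

```xml
<!-- server.xml - minimal Liberty configuration sketch; names assumed, verify per release -->
<server description="development server">
    <!-- Enable only the features the application needs; unused runtime stays unloaded -->
    <featureManager>
        <feature>jsp-2.2</feature>
    </featureManager>
    <!-- HTTP listener for the web application -->
    <httpEndpoint id="defaultHttpEndpoint" httpPort="9080"/>
</server>
```

Adding or removing a feature line is all it takes to grow or shrink the runtime footprint.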
As a J2EE developer/administrator, you might have heard about global transactions or UserTransaction (UT), the mechanism J2EE prescribes for building transactional enterprise applications. In this blog, I will only discuss the relationship it has with J2C/database connections.
When a connection is requested under a global transaction, by default the connection has affinity to that transaction. This means that a connection reserved under a certain transaction is visible only to that transaction, even if it is closed explicitly by the application. This state remains until the enclosing transaction ends (either commits or rolls back).
You might wonder what benefit there is in keeping a connection unused even though it has been explicitly closed. That is a valid question. But there is another school of applications that access the database multiple times to complete their business logic within a single transaction. In those scenarios, this behavior acts as a connection cache keyed by transaction context, so it is relatively faster to fetch a connection from the transaction's cache than to get it from the pool. It is exactly for this reason that WebSphere takes this approach, to boost application performance.
Another interesting question you might ask is: "In my application I don't use UserTransaction; should I really care about it?" The answer is "YES". In the absence of a UT, WebSphere transparently provides what is known as Local Transaction Containment (LTC). An LTC is created by the container before executing application components such as the service method of a servlet or the business method of an EJB. So, if you acquire a connection in a servlet service method, the connection is by default associated with the LTC. Even if you close the connection explicitly using the con.close() method, the connection is not freed until the service method completes. Sometimes this may cause a leak in the connection pool and finally make other requests wait for a connection.
How to avoid such an anomaly?
It is simple. Change the connection sharing mode to Unshareable (by default it is Shareable). You can find this setting under the <resource-ref> element of the module-level deployment descriptor in your application (web.xml, ejb-jar.xml). Configure <res-sharing-scope> to suit your needs:
<resource-ref>
    <description></description>
    <res-ref-name>jdbc/ERWWDataSourceV5</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
    <res-sharing-scope>Unshareable</res-sharing-scope> <!-- your change goes here -->
</resource-ref>
How do I decide? First, analyze whether your application asks for a connection multiple times within a UT or LTC. If yes, then go with the default behavior; that is probably the optimal setting for your application. Otherwise, switch to the Unshareable behavior as explained above.
Well, in the past we have seen many customers who tuned their connection pool improperly and got into trouble. So in this blog I will discuss a few important connection pool parameters that help in improving application performance.
1) Set the min/max pool size parameters with caution.
Setting a very high value for maxPoolSize will allow more requests to go to the database. If the database is not capable of handling the higher load, congestion will start building in the connection pool and eventually the system will get stuck. The only way to recover from this situation is to restart the application server process.
So, be aware of the ramifications of changing the max pool size. A maxPoolSize of 50 will do for most workloads. This setting should ideally be less than the thread pool size, given that only a fraction of web request processing time is spent in JDBC while the rest is used for executing business logic.
Though this itself is not a rule, you can try and tune this parameter to get better performance without getting into any trouble.
The minPoolSize parameter ensures that at least that many connections are maintained in the pool. Since database connections are very expensive, it is always wise to close idle connections, whose resources could otherwise be used by other processes. You can do this by setting the unused timeout, which is the time after which an idle connection is reclaimed.
When connections sit in the free pool for a long period of time, there is always a chance that they get dropped at the network layer (thanks to firewalls and other network-level settings) without the knowledge of the pool. Later, when such a connection is put to use, it throws the famous StaleConnectionException. To avoid this error, adjust agedTimeout so that a connection that has been in the free pool for a very long time is reclaimed. This setting overrides the minimum pool size, in that it can make the pool size fall below minPoolSize.
2) Tune the prepared statement cache size. This is another important parameter; tuned correctly, it can enhance application performance by a modest 20%-30%. It causes the most-used prepared statement objects to be cached so that they need not be recreated every time the application requests them. Since the objects are cached, it cuts into heap space. Another important thing to note is that the cache is maintained per connection; so if you have a pool size of m and a cache size of n, the total cache size becomes m x n. Monitor the queries being executed against the database from this server and count the queries that are executed frequently; this will help you in tuning the size of the cache.
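The m x n point is worth a quick worked example: a pool of 50 connections, each caching up to 10 statements, can hold 500 prepared statement objects on the heap. A trivial sketch of the arithmetic (class name is illustrative, not a WebSphere API):

```java
// StatementCacheMath.java - worked example of total cached statements (m x n).
public class StatementCacheMath {

    // Upper bound on cached prepared statement objects across the whole pool.
    public static int totalCachedStatements(int poolSize, int cacheSizePerConnection) {
        return poolSize * cacheSizePerConnection;
    }

    public static void main(String[] args) {
        // 50 connections x 10 statements each:
        System.out.println(totalCachedStatements(50, 10) + " statements may be cached in total");
    }
}
```

When sizing the cache, multiply your candidate cache size by maxPoolSize and check that the resulting heap cost is acceptable.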
The Impact 2011 conference on May 31st at the Grand Hyatt, Mumbai covers several interesting topics. There are going to be interesting discussions around WebSphere Application Server and its new release, version 8.0. WebSphere Application Server v8.0 introduces a number of features and enhancements towards building an agile business environment and extending the reach of web applications to mobile devices.
Here are some quick highlights that I could draw on what's new in WAS v8 from a system management perspective.
Installation Manager and Centralized Installation Manager
A completely new experience for installing products that results in faster time to value and lower operational costs. It brings a single user experience across WebSphere, WebSphere components and various other IBM products. Effectively, a single IBM Installation Manager can manage the product life cycle for any IM-based products from WebSphere, Rational, etc. The newer install and maintenance technology allows you to install the desired level of service in one pass, without needing to install the GA product first and then apply fix packs and/or interim fixes as a separate step. The Centralized Installation Manager from V7 is also re-based on the new Installation Manager, which helps large enterprises lower their operational costs by leaps and bounds.
Recovering and Moving Nodes
It has never been easier to recover a damaged node, with the new WebSphere V8 addNode -asExistingNode option that allows you to recover a damaged node using the dmgr's master configuration. This option is also faster, as it does not require taking a backup and restoring the dmgr's configuration, which is quite important if you are managing large cell deployments.
Monitored Directory Deployment
The feature that I love! Application deployment is now just a matter of a file copy operation. To install an application, all you need to do is copy it to a directory; to uninstall, just remove it from the directory. This holds true for applications such as EAR, WAR, EJB JAR and SAR files. There is also an enhanced properties-file-based configuration format for easier editing of application deployment options.
Apart from this, there are also new SDK management commands (managesdk and wsadmin tasks) introduced in WAS v8, with the goal of having a common API across all supported platforms.
Here are a few quick links that you can browse through:
Here is the URL for this bookmark: www.websphereusergroup.org/southasia. The Global WebSphere Community launched a new website recently. The site helps you connect with WebSphere users and gives you access to lots of content relating to WebSphere. You might also find the WebSphere Messaging and Connectivity User Group South Asia relevant. Join now!
In CICS Transaction Server, data transfer between CICS client and server applications, or between CICS server applications, is possible by using a Commarea. Until CICS Transaction Server 3.1 and TXSeries 7.1, the only solution for inter-program data transfer was to use a Commarea. The maximum data that can be sent over a Commarea is 32KB. When CICS Transaction Server was developed, 32KB was more than sufficient. But as enterprise software evolved, the need for higher-volume data transfer in CICS became essential. Consider a banking scenario where customer details and customer photographs need to be uploaded from a branch system to create an account. Since these operations involve large data transfers, CICS developers cannot use a Commarea for writing such applications. They have had to use various programming techniques to circumvent the 32KB limitation; one such technique is to store the details in a Temporary Storage Queue (TSQ) and pass the TSQ name in the Commarea to the backend application.
The solution to this 32KB limitation on Commarea is implemented as Channels and Containers, available from CICS Transaction Server Version 3.1 and, in distributed CICS, from TXSeries 7.1.
A container is a named block of data that can be passed to a subsequent program or transaction. It may be easiest to think of a container as a named Commarea. There is no CICS-enforced limit on the physical size of a single container; you are limited only by the available user storage in the CICS address space.
A channel is a collection, or group, of containers, which can be used to pass data between programs or transactions. Channels and Commareas are mutually exclusive; that is, you may use one technique or the other for passing data, but not both on the same command.
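As a sketch of how the channel and container commands fit together, in CICS command-level pseudocode (container, channel, program and variable names are invented for illustration; consult the CICS Application Programming Reference for exact syntax):

```
* Caller: put customer data and a photo into containers on one channel,
* then LINK to the backend program with the channel (no 32KB limit applies).
EXEC CICS PUT CONTAINER('CUSTDATA') CHANNEL('ACCTCHAN')
          FROM(cust-record) FLENGTH(len-of-record)
EXEC CICS PUT CONTAINER('CUSTPHOTO') CHANNEL('ACCTCHAN')
          FROM(photo-bytes) FLENGTH(len-of-photo)
EXEC CICS LINK PROGRAM('ACCTPGM') CHANNEL('ACCTCHAN')

* Backend program: read the containers from its current channel.
EXEC CICS GET CONTAINER('CUSTDATA') INTO(cust-record) FLENGTH(len)
EXEC CICS GET CONTAINER('CUSTPHOTO') INTO(photo-bytes) FLENGTH(len)
```

Compare this with the Commarea approach, where both records would have to fit into a single 32KB area or be staged through a TSQ.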
The places in a CICS application where a Commarea is used for data transfer are mentioned below. In each of them, Channels and Containers can be used instead of the Commarea to provide higher-volume data transfer.
1) Commarea can be used to transfer data between programs. To transfer data between two programs within a region, a Commarea can be used with the EXEC CICS LINK or EXEC CICS XCTL APIs. To transfer data between two programs across regions, a Commarea can be used with the Distributed Program Link (DPL) facility.
2) Commarea can be used to transfer data between tasks. A Commarea can be passed between two tasks by use of the EXEC CICS START TRANSID FROM API.
3) Commarea can be used to transfer data between CICS clients and servers. A Commarea can be used to transfer data between CICS client and CICS server applications by using the External Call Interface (ECI) facility.
2) After installation is complete, do not start the clusters. Ensure that the installed directories have bcguser and bcggroup as their owners, and give them 777 permissions as well.
Command for changing owner :
chown -R bcguser:bcggroup <folder name>
Command to change the permissions :
chmod -R 777 <folder>
The above commands need to be given for bcghub-dmgr and
2) Before installing on Machine B, create the common folder in the same path as it exists on Machine A. The nodeagent startup gives errors soon after installation completes if it doesn't find the common folder. So it is better to have the common folder mounted before installing Document Manager on Machine B.
The Cream of Clustering (mounting process):
Machine A :
Open the command prompt and run the following commands,
1) service nfs start (to start the NFS daemon)
2) exportfs -o sync -o rw *:<complete file path>
3) exportfs → shows the paths exported
To start the NFS service on SUSE Linux:
rcnfsserver → shows the directories ready to mount
If exportfs -o sync -o rw *:<complete file path> doesn't work, then use the GUI for the NFS server.
NFS server will be present in YAST.
The share will be removed if the daemon is stopped. When the daemon is restarted, you need to repeat all these steps.
If you want to keep it permanently, you need to make an entry in the /etc/exports file. To make an entry, append the following line:
<complete file path> *(rw,sync)
Note: this is required only if you are using the command line for exporting; if the entry is made through the NFS server GUI, it adds the entry to /etc/exports automatically.
On Machine B :
Create the folder structure to map the Machine A folder to Machine B, in the same path as it exists on Machine A.
Open the command prompt and run the following commands,
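A typical sequence to mount the directory exported from Machine A over NFS would look like the following (the host name and paths are placeholders, assuming the export configured earlier on Machine A):

```
service nfs start                                        # start the NFS client services
mount -t nfs <MachineA-host>:<complete file path> <complete file path>
mount                                                    # verify the share is mounted
```

To make the mount survive a reboot, an equivalent entry can be added to /etc/fstab on Machine B.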
In 1999, information architect Darcy DiNucci wrote in her article "Fragmented Future" that the Web as we knew it was a fleeting thing, in only its embryonic stage. It was then that the term "Web 2.0" was coined. Tim O'Reilly, the founder of O'Reilly Media, hosted the Web 2.0 Conference in 2004 to acknowledge the second generation of the Internet. However, Tim Berners-Lee, the founder of the World Wide Web, contested Web 2.0 as "new" web technology and instead referred to it as "a piece of jargon", since the web had always been meant to have Web 2.0 values in the first place.
And since then, the times have only been changing. Web 2.0 has found widespread reassurance in the ever-rising popularity of Internet giants such as Google Search, Wikipedia, Blogger, Twitter, Digg, Reddit, Del.icio.us, Facebook, etc. The effect of Web 2.0 is evident in the popular Wikipedia vs. Britannica comparison: while the 2007 edition of the Encyclopedia Britannica boasted an author count of 4,411, most of whom contributed single articles, Wikipedia had an actively contributing population of some 700,000!
The altruistic contributions of the general Internet junta even led to TIME magazine declaring "You" the "Person of the Year" for 2006. The article called Web 2.0 a massive social experiment that saw the Internet world working for nothing and beating the pros at their own game.
Which Doors Open?
Previously, dissemination of information was done in a strict publisher-subscriber model: a source creates information, which is processed into a presentable form and dispatched to the reader in the form of newsletters, mailers, etc. With the room for interaction and the audience count being limited, the scope for monopoly of information was high. Talking about new products meant reaching out via a well-defined process.
Web 2.0 changed this: it meant a user base all over the world uploading, editing and managing information every second of the day! Somehow, raw data about the war in Iraq, or Tharoor's tweets on his daily routine and other inanities, became preferred over traditional news. People were informed of the rain on the other side of their walls via tweets, and information on fire brigade dispatches was shared on Twitter; in Bangalore, people tweeted from inside Carlton Towers while it burned. Reddit users popularized technologies and practices, and had fun while doing it.
The new face of Evangelisation
Over the last few years, more and more companies have found the blogosphere, RSS, etc. a rewarding way of evangelising products: a way of throwing open the gates for constructive feedback that alpha testing could only have dreamt of. Evan Williams' Blogger and Twitter projects are two such ways of reaching out to customers and beyond.
Social media and Web 2.0 now form one of the more powerful kinds of media, and it only makes sense to leverage this platform to speak about products and features and share content with customers, both present and prospective. Del.icio.us, Digg, IBM myDeveloperWorks, Reddit, Twitter and Facebook are perfect avenues to evangelise.
IBM's myDeveloperWorks is one of the premier social networking sites to connect with technology enthusiasts, blog about new trends and be informed on a variety of events by joining the different groups on myDw.
Del.icio.us allows people to bookmark any page anywhere on the Internet and has thus brought about what we now know as "tag folksonomy". Del.icio.us has a following among many technical enthusiasts and is one of the main wings of the social network. Digg and Reddit are content discovery sites where links can be posted and readers can upvote or downvote them.
All of these avenues present a good platform for evangelizing over social media, and due to their popularity they already possess a huge user base. We at IBM TXSeries for Multiplatforms have been active on Twitter for the past year, distributing facts and tips, sharing links and addressing queries. This has been a good platform to notify customers of new features and releases and to provide general information.
Evangelize your product the Web 2.0 way today, for better interaction with customers, direct feedback and for building a following!
This document discusses administration of a Liberty profile cluster using a Java JMX client. Two machines with distinct names and IP addresses are used for the cluster setup. As a prerequisite to this article, one needs to set up a Liberty profile cluster. The cluster used in this article is named TestLibertyCluster, with two instances, Mem01 and Mem02.
Setting up Liberty Cluster
One can refer to the WebSphere Application Server Liberty documentation to set up a Liberty cluster.
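Before connecting to Liberty, it helps to see the standard javax.management client pattern on its own. The sketch below queries the local platform MBean server; against a Liberty server you would instead obtain an MBeanServerConnection via JMXConnectorFactory.connect(new JMXServiceURL(...)) using Liberty's REST connector, whose URL and client JAR are IBM-specific and not shown here:

```java
// JmxSketch.java - standard JMX query pattern, demonstrated against the local
// platform MBean server. For a Liberty cluster, swap in a JMXConnector built
// from Liberty's REST connector service URL; the query code stays the same.
import java.lang.management.ManagementFactory;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;

public class JmxSketch {

    // Reads the used heap size through the generic MBean interface.
    public static long usedHeapBytes() {
        try {
            MBeanServerConnection mbsc = ManagementFactory.getPlatformMBeanServer();
            ObjectName memory = new ObjectName("java.lang:type=Memory");
            CompositeData heap = (CompositeData) mbsc.getAttribute(memory, "HeapMemoryUsage");
            return (Long) heap.get("used");
        } catch (Exception e) {
            throw new RuntimeException("JMX query failed", e);
        }
    }

    public static void main(String[] args) {
        System.out.println("Used heap: " + usedHeapBytes() + " bytes");
    }
}
```

Against a cluster, you would run the same getAttribute/invoke calls on each member's connection (Mem01, Mem02) to administer them remotely.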
With the deprecation of Load Balancer for IPv4, Load Balancer for IPv4 and IPv6 is the product replacing it. As with any product migration, there are bound to be questions. This blog is to help with migration questions that you might have. In this introductory blog, I will talk about the new features that have been added in Load Balancer for IPv4 and IPv6, and touch upon the differences, at a high level, that you would need to consider during migration.
A few features were available in Load Balancer for IPv4, but not in Load Balancer for IPv6. To help customers with migration, a few important features have been added to Load Balancer for IPv6 in 8.5.5. These are:
1. Network Address Translation
2. Colocation on AIX and Linux
3. Content Based Router
4. Site Selector
Network Address Translation (NAT)
This is similar to what is available in Load Balancer for IPv4 in most ways. This forwarding mechanism is used to balance servers in a subnet different from that of the Load Balancer. It is in addition to encapsulation, which can also be used in scenarios where the servers are in a different subnet. In terms of performance, encapsulation is recommended, because with NAT the return path also passes through the load balancer, while with encapsulation the return path goes directly to the client.
The difference is however in the level at which NAT is enabled. In Load Balancer for IPv4, NAT is enabled at port level, while in Load Balancer for IPv6, it is available at server level. This will enable a more granular level control in the forwarding mechanism.
Also, Network Address Port Translation (NAPT) in Load Balancer for IPv6 is implemented by leveraging the port translation features available by default in the OS. Port translation is configured on the backend servers using OS features such as iptables (on Linux), netsh (on Windows), or ipfilter (on AIX, HP-UX, Solaris).
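As an illustration of the kind of OS-level port translation a backend server might carry on Linux, an iptables rule redirecting inbound port 80 to a listener on 8080 could look like the following (the ports are illustrative; the actual rules depend on your Load Balancer configuration and documentation):

```
# Redirect traffic arriving on port 80 to the application listening on port 8080
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
```

Equivalent translations would be configured with netsh on Windows or ipfilter on AIX, HP-UX and Solaris.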
Colocation on AIX & Linux
Colocation is available in Load Balancer for IPv6 on the AIX and Linux platforms. Even in Load Balancer for IPv4, colocation was not implemented on Windows, and this continues. The functionality in Load Balancer for IPv6 is similar to its predecessor. However, extreme caution needs to be exercised when implementing this feature in a production environment, since it can affect performance.
Content Based Router
This feature is similar to what it is in Load Balancer for IPv4. However, SNMP and remote administration are not available. This is because, today, there are advanced third-party tools available for SNMP, and remote administration comes by default with the OS. Caching Proxy is a prerequisite product for this feature.
Site Selector
This feature is similar to what is available in Load Balancer for IPv4. Again, remote administration is not available, and use of the remote administration features that come with the OS is recommended.
Load Balancer for IPv6 performs best on Linux, followed by AIX and then Windows. Before migration, it is recommended that you evaluate the performance as well.
In all the commands, Load Balancer for IPv4 uses : as the delimiter, but Load Balancer for IPv6 uses @ as the delimiter. This is because Load Balancer for IPv6 supports IPv6, where the IP address itself is delimited by the : character.
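For example, a server add command that used the : delimiter in Load Balancer for IPv4 would use @ in Load Balancer for IPv6 (the dscontrol operands below are placeholders; verify the exact syntax against your release's command reference):

```
# Load Balancer for IPv4 (: delimiter)
dscontrol server add mycluster:80:serverA

# Load Balancer for IPv6 (@ delimiter, freeing : for IPv6 addresses)
dscontrol server add mycluster@80@serverA
```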
Got more questions on migration? Post them here!
WebSphere Application Server version 8.5 ships the Intelligent Management pack, which offers application server virtualization, resource management, performance visualization, health management, and application edition management, collectively referred to as dynamic operations. Earlier, these dynamic operations were part of the WebSphere Virtual Enterprise product; now they are rolled into the WebSphere Application Server version 8.5 Network Deployment edition.
You can use Intelligent Management offerings to enhance operational efficiency by deploying dynamic operations, service high-volume transactional workloads with linear scalability and high availability, and manage large scale, continuously available application server environment. By configuring application infrastructure virtualization with Intelligent Management, you can pool together resources to accommodate the fluctuations of workload in your environment and increase the quality of service.
Application infrastructure virtualization
Typically, applications and resources are statically bound to a specific server. Some applications might experience periodic increases in load, during which they can become unavailable, even if the peak lasts only a short time; you must build your IT infrastructure to accommodate these peaks, which is a very costly IT investment.
In the virtualized dynamic operations environment of Intelligent Management, this static relationship is replaced with a dynamic one: applications or resources are loosely coupled to server instances by deploying applications to dynamic clusters, which can expand and contract depending on the workload in your environment. You can separate applications from the physical infrastructure on which they are hosted. Workloads can then be dynamically placed and migrated across a pool of application server resources, allowing the infrastructure to adapt and respond to business needs. Requests are prioritized and intelligently routed to serve the most critical applications and users.
Benefits of Application infrastructure virtualization
Improved management of software and applications: Management using automated services and operational policies, less error-prone.
Allocation of software resources: Dynamic reallocation of resources can be based on load among applications.
Increased number of applications: More applications can run in a virtualized application environment than in a static configuration.
Reduced configuration complexity: Loosened coupling reduces the overall complexity and provides for a better, more usable environment.
The On Demand Router (ODR) is a specialized Java-based proxy server that classifies and dispatches incoming requests across the application server environment. The ODR classifies incoming HTTP and SIP requests and then works in conjunction with other Intelligent Management "decision makers" to route the workload. Requests are prioritized and routed based on administrator-defined rules, called service policies, which specify application response time goals.
The dynamic clusters function of Intelligent Management can automatically scale the number of running cluster members up and down as needed to meet response time goals for your users. You can leverage overload protection to limit the rate at which the ODR forwards traffic to application servers, to prevent heap exhaustion, CPU exhaustion, or both.
Application edition management
Application edition management enables interruption-free production application deployments: the ability to upgrade an application while maintaining continuous availability. In other words, users of the application experience no loss of service during the upgrade.
Using this feature, you can validate and deploy a new edition of an application in your production environment and upgrade your applications without incurring user outages. You can also run multiple editions of a single application concurrently, directing different users to different editions, as the On-Demand Router (ODR) maintains not only traditional application state (for example, HTTP session) affinity, but also application version affinity.
Intelligent Management provides a health management feature to monitor the status of your application servers, as well as sense and respond to problem areas before an outage occurs. You can manage the health of your application serving environment with a policy-driven approach that enables specific actions to occur when monitored criteria are met.
For example, if a memory leak is detected, a replacement application server instance can be started and workload directed to it, the affected application server can be placed into maintenance mode, and the administrative or operations staff can be notified automatically.
A split brain scenario refers to a case where the HAManager's connectivity between servers is broken, resulting in two isolated parts of a core group believing they are the only servers running. In this scenario, the HAManager tries to ensure all services are running.
For the Service Integration Bus, the HAManager will instruct any messaging engine that needs to be running to start, even though they may already be running in another server.
The HAManager does not know they are running in another server because of the split brain. Normally, the new messaging engine incarnation (INC2) instructed to start will not be able to, because the original incarnation (INC1) is still running and holds a lock on the database, preventing any other incarnation from starting. If, however, the network outage also caused INC1's database connection to break, INC1 loses its lock, and INC2 is then able to start successfully and update the table to indicate that INC2 is now the owner. After the network is restored, the HAManager realizes that there are two messaging engine incarnations running and instructs INC2 to stop.
When INC2 releases its lock on the database, INC1, which has been attempting to re-obtain the lock, is able to connect and finds that the incarnation ID has changed.
One topology for the split brain scenario:
There is one cluster with two servers on different nodes, and the two nodes are on different machines.
The cluster has been added as a bus member of the SIBus using a data store.
The messaging engine is initially started on server1 on Node1 (Machine1).
Suppose that, for some reason, a network failure occurs between Machine1 and Machine2:
Now the HAManager on Node2 is not aware of the status of the started ME instance on server1, and the ME instance on server2 keeps retrying to obtain the lock for the active ME.
[6/21/13 5:12:11:401 EDT] 0000002c SibMessage I [BackPortingBus:myCluster.000-BackPortingBus] CWSID0056I: Connection to database is successful
[6/21/13 5:12:11:447 EDT] 0000002d SibMessage I [BackPortingBus:myCluster.000-BackPortingBus] CWSIS1599I: The messaging engine, ME_UUID=CE69594529AF745A, INC_UUID=29852985660157B6, is attempting to obtain ownership on the data store.
[6/21/13 5:12:16:450 EDT] 0000002d SibMessage I [BackPortingBus:myCluster.000-BackPortingBus] CWSIS1593I: The messaging engine, ME_UUID=CE69594529AF745A, INC_UUID=29852985660157B6, has failed to gain an initial lock on the data store.
[6/21/13 5:12:16:452 EDT] 0000002d SibMessage I [BackPortingBus:myCluster.000-BackPortingBus] CWSIS1599I: The messaging engine, ME_UUID=CE69594529AF745A, INC_UUID=29852985660157B6, is attempting to obtain ownership on the data store.
[6/21/13 6:18:56:780 EDT] 0000003b SibMessage I [BackPortingBus:myCluster.000-BackPortingBus] CWSID0016I: Messaging engine myCluster.000-BackPortingBus is in state Joined.
[6/21/13 6:18:56:782 EDT] 0000003b SibMessage E [BackPortingBus:myCluster.000-BackPortingBus] CWSID0054E: Messaging engine myCluster.000-BackPortingBus is not enabled, as the re-enable count 5 exceeded.
Once it reaches the maximum retry count for re-enabling (5 by default), the ME instance is disabled and needs manual intervention to be enabled again.
Once the network is re-established between Machine1 and Machine2, the HAManager on server2 is notified about the existing lock held by server1's ME instance.
The following scenarios may occur:
a) The network is re-established before the maximum retry count is reached:
In this situation, the ME instance on server2 retries for the current attempted cycle, moves to the Joined state followed by the enabled state, and then stops retrying, as the ME instance on server1 already holds the lock.
b) The network is re-established after the maximum retry count is reached:
In this situation, the ME instance on server2 is disabled after the maximum retries and needs manual intervention (a restart of the server) to be enabled again.
c) The active ME instance loses its lock before the network is re-established (this may happen for several reasons, such as a server crash, loss of the data store connection, or a server shutdown):
In this situation, due to failover, the ME instance on server2 obtains the lock and becomes active with a new incarnation ID.