As a J2EE developer or administrator, you have probably heard of global transactions and UserTransaction (UT), the contract J2EE prescribes for building transactional enterprise applications. In this blog, I will discuss only the relationship it has with J2C and database connections.
When a connection is requested under a global transaction, by default the connection has affinity to that transaction. This means that a connection reserved under a certain transaction is visible only to that transaction, even if the application closes it explicitly. This state remains until the enclosing transaction ends (commits or rolls back).
You might wonder what benefit there is in keeping a connection reserved even though it has been explicitly closed. That is a valid question. But there is another class of applications that access the database multiple times to complete their business logic within a single transaction. In those scenarios, this behavior acts as a connection cache keyed by the transaction context, and fetching a connection from that cache is faster than getting one from the pool. It is exactly for this reason that WebSphere takes this approach to boost application performance.
Another interesting question you might ask is: "My application doesn't use UserTransaction; should I really care about this?" The answer is yes. In the absence of a UT, WebSphere transparently provides what is known as Local Transaction Containment (LTC). The container creates an LTC before executing application components such as the service method of a servlet or a business method of an EJB. So, if you acquire a connection in a servlet service method, the connection is by default associated with the LTC. Even if you close the connection explicitly with con.close(), it is not freed until the service method completes. This can effectively leak connections from the pool and eventually make other requests wait for a connection.
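To make the containment behavior concrete, here is a toy model in plain Java. This is not WebSphere code; all class and method names are invented for illustration. The point it demonstrates: under the default sharing semantics, close() does not return the physical connection to the pool; only the end of the enclosing scope (the LTC or transaction) does.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy illustration of Local Transaction Containment (LTC):
// a "closed" connection is not returned to the free pool until
// the enclosing scope (e.g. the servlet service method) completes.
class ToyPool {
    private final Deque<String> free = new ArrayDeque<>();
    private final List<String> heldByScope = new ArrayList<>();

    ToyPool(int size) {
        for (int i = 0; i < size; i++) free.push("conn-" + i);
    }

    String acquire() {
        String c = free.pop();
        heldByScope.add(c);   // connection is pinned to the current scope
        return c;
    }

    void close(String c) {
        // Under shareable semantics this is effectively a no-op:
        // the connection stays reserved for the scope that acquired it.
    }

    void scopeEnd() {
        // Only when the LTC (or global transaction) ends do the
        // held connections really go back to the free pool.
        free.addAll(heldByScope);
        heldByScope.clear();
    }

    int freeCount() { return free.size(); }
}
```

With a pool of two connections, acquiring one and then "closing" it leaves only one connection free; the second only returns when scopeEnd() runs.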
How do you avoid such an anomaly?
It is simple: change the connection sharing mode to Unshareable (the default is Shareable). You can find this setting in the <resource-ref> element of the module-level deployment descriptor in your application (web.xml, ejb-jar.xml), and configure <res-sharing-scope> to suit your needs:

<resource-ref>
  <description></description>
  <res-ref-name>jdbc/ERWWDataSourceV5</res-ref-name>
  <res-type>javax.sql.DataSource</res-type>
  <res-auth>Container</res-auth>
  <res-sharing-scope>Shareable</res-sharing-scope> <!-- your change must go here -->
</resource-ref>
How do you decide? First, analyze whether your application asks for a connection multiple times within a UT or LTC. If yes, go with the default behavior; that is probably the optimal setting for your application. Otherwise, switch to Unshareable as explained above.
The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions.
Well, in the past we have seen many customers who tuned their connection pool improperly and got into trouble. So, in this blog I will discuss a few important connection pool parameters that help improve application performance.
1) Set the min/max pool size parameters with caution.
Setting a very high value for maxPoolSize allows more requests to reach the database. If the database cannot handle the load, congestion builds up in the connection pool and the system eventually gets stuck; the only way to recover from this situation is to restart the application server process.
So, be aware of the ramifications of changing the maximum pool size. A maxPoolSize of 50 will do for most workloads. Ideally, this setting should be less than the thread pool size, given that only a fraction of a web request's processing time is spent in JDBC while the rest goes to executing the business logic.
Though this is not a hard rule, you can experiment with this parameter to get better performance without getting into trouble.
The minPoolSize parameter ensures that at least that many connections are maintained in the pool. Since database connections are expensive, it is always wise to close idle connections so that the resources they hold can be used by other processes. You can do this by setting a connection timeout, the time after which an idle connection is reclaimed.
When connections sit in the free pool for a long time, there is always a chance that they get dropped at the network layer (thanks to firewalls and other network-level settings) without the pool's knowledge. Later, when such a connection is put to use, it throws the famous StaleConnectionException. To avoid this error, adjust agedTimeout so that a connection that has been idle (i.e., in the free pool) for a very long time is reclaimed. This setting overrides the minimum pool size, in that it can make the pool fall below minPoolSize.
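The congestion risk described above can be sketched in a few lines of plain Java. This is a simplification, not the WebSphere pool implementation: maxPoolSize is modeled as a semaphore with that many permits, so once all permits are out, further requests wait, which is exactly where congestion builds when the database is slow.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Minimal sketch of a bounded connection pool: maxPoolSize is
// modeled as a semaphore holding that many permits.
class BoundedPool {
    private final Semaphore permits;

    BoundedPool(int maxPoolSize) {
        this.permits = new Semaphore(maxPoolSize);
    }

    // Returns true if a connection could be reserved within the
    // timeout; false models a request stuck waiting on the pool.
    boolean tryAcquire(long timeoutMillis) throws InterruptedException {
        return permits.tryAcquire(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    void release() {
        permits.release();
    }
}
```

With a maxPoolSize of 2, the first two requests succeed immediately, the third waits until one of the first two releases its connection.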
2) Tune the prepared statement cache size.
This is another important parameter which, tuned correctly, can improve application performance by a modest 20%-30%. It causes the most-used prepared statement objects to be cached so that they need not be recreated every time the application requests them. Since the objects are cached, it cuts into heap space. Another important thing to note is that the cache is maintained per connection: with a pool size of m and a cache size of n, the total cache size becomes m x n. Monitor the queries this server executes against the database and count the ones that run frequently; this will help you size the cache.
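The m x n arithmetic above makes the heap cost easy to estimate. The numbers below are assumptions for illustration, not measurements:

```java
// Rough sizing of the per-connection prepared statement cache.
// The cache is kept per connection, so with a pool of m connections
// and a cache of n statements each, m * n statement objects can
// live on the heap at once.
final class StatementCacheSizing {
    static long totalCachedStatements(int poolSize, int cachePerConnection) {
        return (long) poolSize * cachePerConnection;
    }

    // bytesPerStatement is an assumed average footprint, supplied
    // by the caller; real footprints vary by driver and query.
    static long estimatedHeapBytes(int poolSize, int cachePerConnection,
                                   long bytesPerStatement) {
        return totalCachedStatements(poolSize, cachePerConnection) * bytesPerStatement;
    }
}
```

For example, a pool of 50 connections with 10 cached statements each holds 500 statement objects; at an assumed 4KB apiece that is about 2MB of heap.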
There are lots of issues reported by customers on connecting to the SIBus and interacting with it. Most often these are configuration issues that could easily have been prevented if the customers had a better understanding of how the JMS connection works.
If you are looking for a place to understand how a JMS connection works in WebSphere Application Server, then you are at the right place. I am not going to describe the various fields in the connection factory settings, since you can get those details in the WebSphere Application Server Information Center. Instead, I am going to tell you what exactly happens behind the scenes and how the connection is established.
While configuring the connection factory, there is a field called Provider endpoints. Although it is not a mandatory field, it is the one responsible for gaining entry into the bus. This field accepts comma-separated values, each made up of three colon-separated parts in the form <hostname>:<port>:<bootstrapmessagingtype>. Each part has a default value; together the defaults equate to localhost:7276:BootstrapBasicMessaging. If you do not give any values in this field, the defaults are used, which may not make any sense if your application is connecting from an external machine or a J2SE environment.
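How such a triplet can be filled in with the documented defaults is easy to sketch in Java. This is illustrative parsing code, not the actual SIB client implementation:

```java
// Parse one provider endpoint of the form
// <hostname>:<port>:<bootstrapmessagingtype>, applying the
// documented defaults (localhost:7276:BootstrapBasicMessaging)
// for any part that is missing or empty.
final class ProviderEndpoint {
    final String host;
    final int port;
    final String chain;

    ProviderEndpoint(String spec) {
        String[] parts = spec.split(":", -1);
        this.host  = parts.length > 0 && !parts[0].isEmpty()
                   ? parts[0] : "localhost";
        this.port  = parts.length > 1 && !parts[1].isEmpty()
                   ? Integer.parseInt(parts[1]) : 7276;
        this.chain = parts.length > 2 && !parts[2].isEmpty()
                   ? parts[2] : "BootstrapBasicMessaging";
    }
}
```

So "myhost:7286:BootstrapSecureMessaging" is taken as written, while an empty value falls back entirely to the defaults, which only work when the client runs on the same machine as the bootstrap server.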
After the application looks up the resource and tries to create a connection, the first thing that happens is that it looks at the provider endpoint addresses. It tries to connect to the first endpoint specified and moves on to the next if it does not succeed, until the list is exhausted. This is called the bootstrap process: the application tries to connect to a bootstrap server using the details specified in the provider endpoint. The bootstrap server can be any server that has the SIB service enabled; it is not necessary for the bootstrap server to have a messaging engine running. This connection gains you entry into the bus, but since the bus is just a logical entity, the job is not finished at this step. It is now the responsibility of the bootstrap server to identify a server with a running messaging engine on which to create a connection. The bootstrap server looks for a messaging engine that meets all the criteria specified in the connection factory resource. Once it finds a server that meets these conditions, it creates a connection and returns it to the client.
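The try-each-endpoint-in-order behavior can be sketched like this. The connect step is stubbed out as a predicate; in reality it is the SIB client's bootstrap call:

```java
import java.util.List;
import java.util.function.Predicate;

// Walk the comma-separated provider endpoint list in order and
// return the first endpoint that accepts a bootstrap connection,
// or null if the whole list is exhausted.
final class Bootstrap {
    static String firstReachable(List<String> endpoints,
                                 Predicate<String> canConnect) {
        for (String ep : endpoints) {
            if (canConnect.test(ep)) {
                return ep;      // bootstrap succeeded at this endpoint
            }
        }
        return null;            // list exhausted: bootstrap fails
    }
}
```

With a primary and a backup endpoint configured, a client transparently falls through to the backup when the primary is unreachable.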
A common misconception about the Provider endpoints property: most users are under the impression that the provider endpoints are nothing but properties of the messaging engine they are targeting. In fact, the connection factory has a sophisticated mechanism to target a messaging engine, by means of the target name, target scope, and so on, which determines which messaging engine to connect to. To be clear, the provider endpoint just gives you an entry point for connecting to a messaging engine. The ideal way to configure the provider endpoints is to give two comma-separated bootstrap servers, one being the primary and the other a backup in case the primary becomes inaccessible.
IBM announced the release of WebSphere Application Server V8 at the 2011 IMPACT conference. A peek into what is in the offing:
Built upon the IBM Java Virtual Machine (JVM), the Application Server provides the foundation for the WebSphere portfolio, including IBM Workload Deployer, WebSphere Virtual Enterprise, WebSphere Compute Grid, and the WebSphere eXtreme Scale and DataPower XC10 appliance. The WebSphere Application Server family for Version 8.0 continues to provide offerings to fit your needs ranging from lightweight developer desktop environments to highly complex and highly available enterprise environments.
WebSphere Application Server for Developers - A no-charge edition for the developer desktop
WebSphere Application Server Hypervisor Edition - Optimized to run instantly in server virtualization environments
WebSphere Application Server Network Deployment - Offers advanced performance and management capabilities
WebSphere Application Server for z/OS - Takes full advantage of the z/OS Sysplex
WebSphere Application Server Express - Lower-cost, ready-to-go solution for web applications
WebSphere Application Server Community Edition - Open source based, small-footprint offering with no upfront acquisition costs
WebSphere Application Server Version 8.0 focuses on three primary goals:
Speeding your delivery of applications and services
Enhancing operational efficiency and reliability
Expanding the security and administrative control of the server
Some of the key features in V8:
Speeds the development and test lifecycle by providing “self-service” access to consistent topologies and patterns through the WebSphere Application Server Hypervisor Edition and the IBM Workload Deployer.
Faster edit, compile, and debug cycles through a new feature that lets you use a monitored directory. Application contents, including complete applications and/or modules, can be moved into or out of the monitored directory.
Rich set of feature packs for programming model support
Java EE 6 greatly expands the developer value first introduced in Java EE 5, which was a core programming model in Version 7
Rapidly assemble & deploy applications to WebSphere Application Server environments using IBM Assembly and Deploy Tools for WebSphere Administration.
Faster time to value & lower operational costs through new install & maintenance technology using IBM Installation Manager
Administrative enhancement in Version 8.0 permits the movement of a node from one machine to another, even if the two machines are different platforms
A new “in-place” bundle update capability for OSGi applications to rapidly extend applications to meet new business requirements with reduced downtime
Migrate WebSphere environments faster and with less risk using enhanced configuration migration tooling
In CICS Transaction Server, data transfer between CICS client and server applications, or between CICS server applications, is possible by using the Commarea. Until CICS Transaction Server 3.1 and TXSeries 7.1, the Commarea was the only solution for inter-program data transfer. The maximum data that can be sent over a Commarea is 32KB. When CICS Transaction Server was developed, 32KB was more than sufficient, but as enterprise software evolved, the need for larger data transfers in CICS became essential. Consider a banking scenario where customer details and customer photographs must be uploaded from a branch system to create an account. Since these operations involve large data transfers, CICS developers cannot use the Commarea for such applications, and they have had to use various programming techniques to circumvent the 32KB limitation. One such technique is to store the details in a Temporary Storage Queue (TSQ) and pass the TSQ name in the Commarea to the backend application.
The solution to this 32KB limitation on the Commarea is implemented as Channels and Containers from CICS Transaction Server Version 3.1, and in Distributed CICS from TXSeries 7.1.
A Container is a named block of data that can be passed to a subsequent program or transaction. It may be easiest to think of a Container as a named Commarea. There is no CICS-enforced limit on the physical size of a single Container; you are limited only by the available user storage in the CICS address space. A Channel is a collection, or group, of Containers that can be used to pass data between programs or transactions. Channels and Commareas are mutually exclusive: you may use one technique or the other for passing data, but not both on the same command.
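A toy Java model of the idea (invented classes, not the real CICS or JCICS API): a Commarea is a single block capped at 32KB, while a Channel is a named group of Containers with no CICS-imposed size limit.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the Commarea: one data block, hard-capped at 32KB.
final class Commarea {
    static final int MAX_BYTES = 32 * 1024;
    private final byte[] data;

    Commarea(byte[] payload) {
        if (payload.length > MAX_BYTES) {
            throw new IllegalArgumentException("Commarea limited to 32KB");
        }
        this.data = payload;
    }

    int length() { return data.length; }
}

// Toy model of a Channel: a group of named Containers, each a
// named block of data with no CICS-enforced size limit.
final class Channel {
    private final Map<String, byte[]> containers = new HashMap<>();

    void putContainer(String name, byte[] data) { containers.put(name, data); }
    byte[] getContainer(String name) { return containers.get(name); }
}
```

In the banking scenario above, a 1MB customer photograph cannot travel in the Commarea, but it fits without complaint in a Container such as a hypothetical "CUSTPHOTO".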
Some places in a CICS application where the Commarea is used for data transfer are mentioned below. In each of them, Channels and Containers can be used instead of the Commarea to transfer larger amounts of data.
1) Commarea can be used to transfer data between programs.
To transfer data between two programs within a region, the Commarea can be used with the EXEC CICS LINK or EXEC CICS XCTL APIs. To transfer data between two programs across regions, the Commarea can be used with the Distributed Program Link (DPL) facility.
2) Commarea can be used to transfer data between tasks.
A Commarea can be passed between two tasks by using the FROM option of the EXEC CICS START TRANSID API.
3) Commarea can be used to transfer data between CICS clients and servers.
The Commarea can be used to transfer data between CICS client and CICS server applications by using the External Call Interface (ECI) facility.