Here are a few things I learned today:
When using the DB2 JCC (Java Common Connector) JDBC driver, you can enable tracing by appending the following to your JDBC URI: :traceFile=c:\jcc.log;TraceLevel=TRACE_ALL;
(See here for some tips on reading the trace and here for other trace levels you can use)
You can also specify a schema in the jdbc:uri using :currentSchema=MySchema. Here's an example JDBC uri showing both in use for easy cut and pasting: jdbc:db2://192.168.0.2:50000/TestDB:currentSchema=MySchema;traceFile=C:/jcc.log;TraceLevel=TRACE_ALL;
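As a quick sketch of how these URL-suffix properties compose (build_db2_url is my own helper, not part of the JCC driver):

```python
# Sketch: assembling a DB2 JCC URL with optional properties appended after the
# database name. The property syntax is ":key=value;key=value;" - note the
# leading colon and the trailing semicolon on each property.
def build_db2_url(host, port, database, **props):
    url = "jdbc:db2://%s:%d/%s" % (host, port, database)
    if props:
        # kwargs preserve insertion order on Python 3.7+
        url += ":" + "".join("%s=%s;" % kv for kv in props.items())
    return url

url = build_db2_url("192.168.0.2", 50000, "TestDB",
                    currentSchema="MySchema",
                    traceFile="C:/jcc.log",
                    TraceLevel="TRACE_ALL")
```

This reproduces the example URI above exactly, which makes it easy to keep the trace settings in one place and strip them out for production.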
Next, on to Apache ddlutils. When using this library with DB2, be aware that when reading a model from DB2, the provided DB2ModelReader sets neither a default schema nor a default catalogue pattern. The default behaviour in ddlutils is to read out the existing database before writing it back along with the new tables provided in your Torque-formatted XML; combined with a null default schema, this results in ddlutils reading out not only your user tables (desired behaviour) but also the DB2 catalogue (undesirable!). This caused me a headache because the ddlutils model couldn't handle a default DB2 9.7 install in its entirety.
You can work around this in 2 ways:
1) Set the property alterdatabase="false" on your <writeSchemaToDatabase> tag (this prevents the model being read).
2) Update the source code to include setting the default schema to something other than null.
Finally, a little nugget on Liferay in conjunction with ddlutils. Liferay uses some database tables whose names end in an underscore. Unfortunately, underscore is the single-character wildcard in SQL, so when ddlutils systematically reads out the entire database by inspection, a metadata lookup for contact_ (where the table name is treated as a pattern) can return fields from both the contact_ table and the contacts table, which then causes other grief. I suspect the underscores were used to prevent clashes with user tables or reserved words, but unfortunately clashes are exactly what they ended up causing.
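You can see the wildcard behaviour with any SQL engine; here's a quick demonstration using SQLite, whose LIKE follows the same convention:

```python
import sqlite3

# An unescaped underscore in a LIKE pattern matches any single character,
# so a pattern intended to name the table "contact_" also matches "contacts".
# Escaping the underscore restores an exact match.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
unescaped = cur.execute("SELECT 'contacts' LIKE 'contact_'").fetchone()[0]
escaped = cur.execute("SELECT 'contacts' LIKE 'contact\\_' ESCAPE '\\'").fetchone()[0]
exact = cur.execute("SELECT 'contact_' LIKE 'contact\\_' ESCAPE '\\'").fetchone()[0]
```

Here unescaped is 1 (the accidental match), while the escaped pattern only matches the literal contact_ name.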
In this post, I'll describe how to setup ActiveMQ as a JMS provider in WebSphere Application Server 7. We used ActiveMQ 5.2 with WAS 7.0.7 and had 1 topic connection factory (JNDI name TopicConnectionFactory) and a single topic (JNDI name jms/systemMessageTopic). Our ActiveMQ Message Broker was running locally on port 61620.
Before you start, take a look at your application and do the following - particularly if you're coming from an application server other than WebSphere:
- Remove any jndi.properties files, as we will be configuring JNDI within WAS
- Remove any ActiveMQ JARs from your webapp's lib directory, as we will add these to the WAS extension class loader
First, you need to add the ActiveMQ JAR files to <WAS_ROOT>/lib/ext. Do not use the activemq-all JAR, as this bundles the JMS classes, which will interfere with WebSphere's own; add the individual ActiveMQ libraries instead.
You should stop/start your application server at this point so the libraries get loaded.
Next, in the WAS admin console navigation tree, click "Resources -> JMS -> JMS Providers". In the content pane click "New" and set the attributes as follows:
- Name: ActiveMQ
- External initial context factory: org.apache.activemq.jndi.ActiveMQWASInitialContextFactory
- External provider URL: tcp://localhost:61620 (our broker's address)
(nb. as you've added the ActiveMQ JARs to the lib/ext directory, you do not need to explicitly fill in the class path in the JMS provider.)
Once you've clicked OK to create the provider, from the same JMS Providers screen, click on the new "ActiveMQ" provider and then click on "Custom Properties". In this screen you need to create entries that will be used to initialise the ActiveMQInitialContextFactory. You need to create java.lang.String properties: one which lists your connection factories, and one property for each of your topics/queues. For our setup, we added:
- java.naming.connectionFactoryNames = TopicConnectionFactory (if you have other types of connection factory, you should supply these here too, comma separated)
- java.naming.topic.jms.systemMessageTopic = jms/systemMessageTopic (if you have queues, use java.naming.queue here)
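The custom-property naming convention, as I understand it, can be sketched as a small helper (jms_provider_properties is my own name, not an ActiveMQ API):

```python
# Sketch of the ActiveMQInitialContextFactory property naming convention:
# one comma-separated list of connection factory names, plus one
# "java.naming.topic.<lookup name>" (or java.naming.queue.<lookup name>)
# entry per destination, mapping a lookup name to the destination name.
def jms_provider_properties(factories, topics=None, queues=None):
    props = {"java.naming.connectionFactoryNames": ",".join(factories)}
    for lookup, physical in (topics or {}).items():
        props["java.naming.topic." + lookup] = physical
    for lookup, physical in (queues or {}).items():
        props["java.naming.queue." + lookup] = physical
    return props

props = jms_provider_properties(
    ["TopicConnectionFactory"],
    topics={"jms.systemMessageTopic": "jms/systemMessageTopic"})
```

For our setup this yields exactly the two custom properties listed above.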
Finally, you need to configure your connection factories and topics/queues. This is done from "Resources -> JMS -> Connection Factories", "Resources -> JMS -> Topic Connection Factories", "Resources -> JMS -> Queue Connection Factories" and "Resources -> JMS -> Queues", "Resources -> JMS -> Topics".
For our setup we created a Topic Connection Factory with the following properties (select ActiveMQ as the provider as the first step):
- Name: Topic Connection Factory
- JNDI name: TopicConnectionFactory
- External JNDI name: TopicConnectionFactory
and a Topic with the following properties:
- Name: System Message Topic
- JNDI name: jms/systemMessageTopic
- External JNDI name:
One other tool you might find useful is the WAS dumpNameSpace utility, which can be found in <WAS_PROFILE>/bin - as you'd expect, it dumps out the JNDI namespace so you can see if everything looks right. For more information on this, see the Information Center
This week I've been working with AIX 6.1 and WAS 7.0.7 Network Deployment. The cluster topology is as follows:
- 1 Power6 blade running the Deployment Manager and a managed Web Server (IHS 7.0.7 + Plugin)
- 4 LPARs with 8 x 3.5GHz cores and 16GB RAM (on a Power 750) as nodes, with 1 Application Server instance per node
Before I get into the detail, I'd like to point out that I hadn't used AIX for a while, and found this developerWorks article
really useful to remind me of all the commands that I'd forgotten.
The aim of the game this week has been throughput; the test involves a 25-step HTTP interaction delivered by JMeter
with no pauses and very little ramp up. It's not a "real world" test at all and the goal was to identify the optimum
throughput point for the application beyond which latency becomes unacceptable.
Before we started tuning, nmon
was showing the CPUs as busy, but with a higher than expected proportion of system time and noticeable context switching, suggesting that threads were spinning waiting for resources. I must stress that the tuning below is not
a replacement for performance analysis of your application to understand where programmatic improvements can be introduced to minimise locking (see page 16 of this
for some app dev guidance). Further, as with any performance tuning, application of this tuning may deliver a throughput enhancement, but equally it may move the point of contention to elsewhere in the stack. In short, I'm not claiming that I
have a one-size-fits-all set of magic settings that will act as the
silver bullet to all your performance problems, but I wanted to share
what's worked for me this week along with why I used the settings that I did. I'll also collate the information that is documented in a variety of places into a single resource for you and cover:
- AIX environment variables
- IBM HTTP Server configuration
- WebSphere web server plug-in configuration
The WAS Info Center
has plenty of information on AIX OS setup, but not much that seems to be pertinent to our issue. There is good information however, in both the AIX 6.1 Info Center
and chapter 4.5 of this Redbook
- you should bookmark both. The AIX Info Center has a set of recommended environment variables, and the Redbook builds on that with network settings which I confess I didn't use this week as we've not been network bound. Below is the list of environment variables from the AIX Info Center plus a few extras from some expert colleagues:
- AIXTHREAD_COND_DEBUG = OFF
- AIXTHREAD_COND_RWLOCK = OFF
- AIXTHREAD_MUTEX_DEBUG = OFF
- AIXTHREAD_MUTEX_FAST = ON
- AIXTHREAD_SCOPE = S
- SPINLOOPTIME = 4000
- YIELDLOOPTIME = 20
Chapter 4.7 of the Redbook details creating a shared rc.was file so that you can apply these settings to all of your servers, but I simply applied them in Servers -> Server Types -> Application Servers -> <server name> -> Java and Process Management -> Process definition -> Environment Entries
as shown below.
IBM HTTP Server configuration
After reading a number of Technotes and support articles, I finally decided that this Technote
was the most useful resource for improving the out-of-the-box configuration of the multi-processing module in IBM HTTP Server. As each IHS server process has its own copy of the WebSphere plugin, under the relentless load we were generating this seemed like a potential area of weakness whereby different plugin instances could have different views of the availability of the Application Servers, so we settled on the configuration used in example 1 of the Technote
which has a single server process:
httpd.conf
If you're running on a UNIX-based platform, you may need to have ulimit -s 512 in the session which starts IHS.
Note that using a single server process in this way means that if that process exits unexpectedly, you potentially lose 2000 in-flight connections.
Web server plug-in configuration
Combining the wisdom from both of these two Technotes (Understanding plugin load balancing
and Recommended configuration values
) I ended up making the following changes to the default plugin configuration:
- Set all but one of your servers' LoadBalanceWeight to 20 and the remaining server to 19. This prevents the plugin from reducing the weights by finding a common denominator, and results in the weights getting reset less frequently.
- If you're using session affinity (who isn't?) then ensure that you set IgnoreAffinityRequests=false on your ServerCluster entry. This works around a known limitation of the plugin which can result in skewed weighting when using round-robin distribution and session affinity.
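To see why the 19 matters, here is a quick sketch (my own illustration; I believe the plugin normalises the weights by their greatest common divisor):

```python
from math import gcd
from functools import reduce

# If every server has the same weight, dividing by the common factor
# collapses them all to 1, so the weights run out and get reset frequently.
# One weight of 19 forces the common divisor to 1 and the weights stay put.
def normalised(weights):
    d = reduce(gcd, weights)
    return [w // d for w in weights]

equal = normalised([20, 20, 20, 20])      # collapses
staggered = normalised([20, 20, 20, 19])  # unchanged
```

With equal weights the effective budget per reset cycle is tiny; with the staggered weights each server gets its full 19-20 requests per cycle.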
Prior to making these changes we were seeing servers marked down by the plugin, which consequently resulted in the other servers bearing more load. These changes prevented servers from being marked down and gave a smoother workload distribution amongst the cluster.
plugin-cfg.xml
<ServerCluster CloneSeparatorChange="false" GetDWLMTable="true" IgnoreAffinityRequests="false"
LoadBalance="Round Robin" Name="myCluster" PostBufferSize="64" PostSizeLimit="-1" RemoveSpecialHeaders="true" RetryInterval="60">
<Server CloneID="14uddvvpt" ConnectTimeout="5" ExtendedHandshake="false" LoadBalanceWeight="20"
MaxConnections="-1" Name="node01Server01" ServerIOTimeout="60" WaitForContinue="false"/>
<Server CloneID="14udeeloa" ConnectTimeout="5" ExtendedHandshake="false" LoadBalanceWeight="19"
MaxConnections="-1" Name="node01Server02" ServerIOTimeout="60" WaitForContinue="false"/>
</ServerCluster>
If you’re deploying a large EAR file which includes a lot of
EJBs, it is possible for the EJB deployment process to fail when you’re
installing or updating an enterprise application. If this happens you’ll
see in the admin console:
ADMA5018I: The EJBDeploy program
is running on file <temp EAR>.
JVMDUMP006I Processing dump event “systhrow”, detail
“java/lang/OutOfMemoryError” – please wait.
ADMA5008E: The EJBDeploy program failed on file <temp EAR>.
Exception com.ibm.etools.ejbdeploy.EJBDeploymentException: Error
What you need to know is that the EJB deployment step of
enterprise application installation is handled by a specific component
called ejbdeploy which uses its own separate JVM instance. Therefore,
to fix this error you need to increase the heap size for the ejbdeploy
process. On Windows you need to edit ejbdeploy.bat and on UNIX,
ejbdeploy.sh. Both can be found in:
Inside this file, look for $JAVA_CMD, and then find the
-Xms (minimum heap size) and -Xmx (maximum heap size) parameters (which
both default to 256m). I was able to overcome my issue by increasing these values.
WebSphere Application Server runs on top of IBM’s Java Virtual Machine, so one of the ways of tuning WebSphere performance is to tune the JVM. IBM publishes the excellent Diagnostic Guides for Java, which will give you a thorough overview of the JVM and great debugging, troubleshooting and tuning advice.
IBM Support Assistant (ISA), IBM's general-purpose problem determination tool, also offers some really cool graphical tools to assist with JVM tuning. In this post, I'm going to quickly run through which ISA plugins I like to use.
If you don't have ISA installed yet, you can download the latest version here; for our purposes you'll need the Workbench version, not the Lite version.
Be aware that the default heap size currently configured for IBM Support Assistant is not all that useful for diagnosing large amounts of JVM data, so I strongly recommend you increase the heap available to ISA. You can do this by editing the jvm.properties file found in <ISA_INSTALL_DIR>/rcp/eclipse/plugins/com.ibm.rcp.j2se.win32.x86_<version>. Simply change the -Xmx256m line to something larger, such as -Xmx1024m.
Once installed and with your heap increased, click Update -> Find New... -> Tools Add-ons as shown below:
On the next screen expand JVM-based Tools, and then select the plugins whose names start with "IBM Monitoring and Diagnostic Tools for Java" as shown below:
At a high-level, these tools will enable you to:
IBM Monitoring and Diagnostic Tools for Java – Dump Analyser
Perform problem determination on JVM crashes or hangs caused by OutOfMemoryErrors, application code deadlocks and unexpected signals.
IBM Monitoring and Diagnostic Tools for Java – Garbage Collection and Memory Visualiser
Visually inspect and improve JVM performance by understanding memory usage and garbage collection behaviour.
IBM Monitoring and Diagnostic Tools for Java – Health Center
Connect to a running JVM and perform real-time problem determination on garbage collection, i/o activity, locking, memory usage and more.
IBM Monitoring and Diagnostic Tools for Java – Memory Analyzer
Inspect heapdumps from IBM JVMs to identify memory leaks and understand application memory usage.
If you have a JVM system heapdump on WebSphere, before you can use the excellent tools provided in IBM Support Assistant, you'll need to use the jextract command to pre-process the dump file. I've recently been doing this on 64-bit WebSphere Application Server 7.0 on AIX, and I wanted to share with you the solution to the following exception when running jextract:
Exception in thread "main" java.lang.NegativeArraySizeException
This is a known issue which has already been fixed in the JVM, and shipped in SR8 FP1 for the Java SDK, which is provided with a later WAS 7.0 fix pack.
To solve this you should apply the Java SDK fixpack
to your WAS installation. Remember that the support policy
for WAS is that any WAS 7.0 fix pack can upgrade to the latest Java SDK release packaged for WAS, so you DON'T need to fix pack your entire WAS install to that level - you can fix pack only the Java SDK.
The only downside is that jextract is specific to each JDK version, so you won't be able to jextract your SR7 dump file with SR8, you will have to reproduce the issue against SR8.
As a final tip, remember that if you're running 64-bit WebSphere 7, you'll need to use the -J-Xcompressedrefs command line option with jextract.
Earlier in the year, I worked with an application that used the EJB3 timer service to perform a number of frequently polled application tasks. As we were porting the application to WebSphere from another application server, we ran into a few issues along the way, particularly with EJB3 timers and timeouts either not firing at all, or not firing as frequently as expected.
Here are the key things I learned about the EJB3 timer service and WebSphere:
- Timers are persistent and managed by the EJB container. Once created, timers will survive a server restart. This means you need to be careful about how and when you create your timers.
- The default WAS Timer Service poll interval is 300 seconds (five minutes). If you require your timers to tick more frequently you'll need to lower the poll interval.
- By default, WAS uses a single thread to service EJB timer tasks. This means WAS can handle one concurrent timer task. If you require more concurrently active timers, you'll need to increase the number of timer threads.
- WAS ships two very useful utilities that can help you diagnose strange timer service behaviour: findEJBTimers shows you all timers for a given server and cancelEJBTimers enables you to cancel either a specific timer or all timers registered to a given server.
For this particular application we achieved success by reducing the poll interval from 300 to 10 and increasing the number of timer threads from 1 to 10.
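As a toy illustration of why the poll interval matters (this is my own simplified model, not the container's actual algorithm; a real container may also catch up missed expirations):

```python
# A timer can only be noticed when the container polls, so a 10-second
# timer serviced by a 300-second poll fires (in this simple model) at
# most once per poll cycle.
def firings(timer_period, poll_interval, horizon):
    fired = 0
    next_due = timer_period
    for t in range(poll_interval, horizon + 1, poll_interval):
        if next_due <= t:
            fired += 1                      # one firing per poll here
            next_due = t + timer_period
    return fired

slow = firings(timer_period=10, poll_interval=300, horizon=3600)
fast = firings(timer_period=10, poll_interval=10, horizon=3600)
```

Over an hour, the 300-second poll yields only 12 firings of a 10-second timer, while the 10-second poll yields the expected 360.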
To edit the EJB timer service settings
, first navigate to the application server you want to modify, and then in the "Container Settings" section, expand "EJB Container Settings" and then click "EJB timer service settings". You will then see this screen:
Here are 13 (a baker's dozen) useful command-line options for the IBM Java virtual machine. They're presented here for you in a format that is easy to cut and paste, and in no particular order. Unless specified, all options are for the Java 6 virtual machine.
Enable verbose garbage collection output. This output can be loaded into the (superb) Garbage Collection and Memory Visualiser
in the IBM Support Assistant to analyse garbage collector performance. The -Xverbosegclog
line above specifies a date/time-stamped file to write to (the default behaviour is to write to stderr), and configures a rolling log with 5 files, each containing 20000 entries.
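From memory, the full option being described looks something like the one below; treat the filename tokens and exact syntax as an assumption and check them against the Java Diagnostics Guide:

```
-Xverbosegclog:verbosegc.%Y%m%d.%H%M%S.%pid.txt,5,20000
```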
-Xgcpolicy:gencon -Xmn<x> -Xmo<y> -Xms<z> -Xmx<z>
Enable the generational concurrent garbage collection policy
. The (excellent) Java Diagnostics Guide
says "A generational garbage collection strategy is well suited to an application that creates many short-lived objects, as is typical of many transactional applications."; in my experience this policy, when correctly tuned, can provide a good throughput boost to most Java applications. With gencon, the heap is split into two areas, so as well as the well-known -Xms and -Xmx parameters there are parameters to control the size of the tenured space (-Xmo) and nursery space (-Xmn). For more advanced scenarios, you can specify both minimum and maximum sizes for the tenured space (-Xmos/-Xmox) and nursery space (-Xmns/-Xmnx).
-Xdisableexplicitgc turns System.gc() into a no-op. Useful when the Garbage Collection and Memory Visualiser
tool is telling you that someone is calling System.gc() and you (quite rightly) want to prevent this impacting your application. Remember, the VM is much smarter than you are when it comes to determining when to trigger garbage collection.
Disables compaction on System.gc() so that compaction only occurs when the compaction triggers are met. If you have classes which call System.gc() (or you're using RMI) this can help to minimise the impact. This option is useless if you're using -Xdisableexplicitgc.
Specifies a method trace that will generate a stack trace to the console whenever System.gc() is called. The numerical arguments in the trigger state that a stack trace will be generated from the fifth occurrence of System.gc() and for the next ten invocations. Use this syntax when you want to find out who has been calling System.gc()
or vary the method and action
to suit your needs.
Prevents excessive garbage collection caused when using RMI, for Java versions prior to Java 6.0. Garbage collection is important when using RMI to ensure the clearance of reference objects, but this is done with a System.gc(), which forces compaction, and the default intervals prior to Java 6 are unrealistically short. The values above set the intervals to hourly, which is the same as the default behaviour on Java 6.
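For reference, the interval settings being described here are (I believe) the standard RMI distributed garbage collection properties; hourly is 3600000 milliseconds:

```
-Dsun.rmi.dgc.client.gcInterval=3600000
-Dsun.rmi.dgc.server.gcInterval=3600000
```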
Configures a dump agent which generates a core dump when the JVM receives a SIGQUIT signal (kill -3). The request options ensure that unreachable objects are removed from the heap, and that the heap will be usable with IBM's (brilliant) Memory Analysis Tool
. Make sure you have your ulimits set appropriately, and if you're running on AIX ensure you enable full core dumps
if you are working with large heaps. Also, core dumps need to be processed with jextract
which can be found in the jre/bin
directory before they can be loaded into the Memory Analysis Tool.
Configures a dump agent which generates a core dump when the JVM encounters an OutOfMemoryError exception.
Modify the default SIGQUIT (kill -3) behaviour of the IBM JVM to more closely match Sun JVM behaviour. By default, the IBM JVM produces a highly useful javacore on receipt of a SIGQUIT, the option above generates a basic stack dump
for the current thread into a named file. I don't see the point of this one personally (javacores are awesome), but I have been asked how to achieve this so here's the method.
Prints out the configuration of all enabled dump agents. -Xdump flags are processed left to right, so make sure this is the last one on the command line!
Enable the IBM Health Center
agent on a given port. If you're using a version prior to Java 5 SR 9 or Java 6 SR 5, the syntax is a little different
. Specifying a port is useful when you are running more than one VM on a single box as the default behaviour is to enumerate ports starting from 1972.
Outputs the various heap and stack sizes in use by the JVM on startup. Very useful when you're capturing a number of runs that you plan to compare.
If you are using Java 6 on a 64 bit operating system, you should definitely
be using this flag. This enables compressed references
, which makes the JVM store all references as 32-bit values, often delivering a performance increase over previous JVM versions on 64-bit architectures. There is a limit of 25 GB
maximum heap size when using compressed references.
I've been working this week with WAS 7.0 64-bit on AIX and have been having trouble with losing session affinity. The observable result of this from a user perspective was being dumped back to the application login screen when moving between screens as a logged-in user. My cluster had memory to memory (M2M) session replication enabled and as such, the GetDWLMTable attribute of my <ServerCluster> element in plugin-cfg.xml was set to true.
To troubleshoot this, I increased the plugin's LogLevel from its default value of Error to Trace in plugin-cfg.xml:
(If you're using WAS 7, it's worth checking the paths contained in the Name attribute of the Log element and in the PluginInstallRoot property, in case they're incorrect:)
<Log LogLevel="Trace" Name="/usr/IBM/HTTPServer/Plugins/logs/webserver1/http_plugin.log"/>
<Property Name="PluginInstallRoot" Value="/usr/IBM/HTTPServer/Plugins/"/>
Checking the trace revealed the following:
[Mon Jul 25 17:26:16 2011] 007d00a2 00003738 - TRACE: ws_common: websphereHandleSessionAffinity: Checking the JSESSIONID in cookie: 0057OI-hCbGcu1eGnRnHw3j-lOY:-H819K1:-K20AFL
[Mon Jul 25 17:26:16 2011] 007d00a2 00003738 - TRACE: ws_server_group: serverGroupFindClone: Searching primary server group for match
[Mon Jul 25 17:26:16 2011] 007d00a2 00003738 - TRACE: ws_server_group: serverGroupFindClone: Comparing curCloneID '-H819K1' to server clone id '16576a77e'
[Mon Jul 25 17:26:16 2011] 007d00a2 00003738 - TRACE: ws_server_group: serverGroupFindClone: Comparing curCloneID '-K20AFL' to server clone id '16576a77e'
[Mon Jul 25 17:26:16 2011] 007d00a2 00003738 - TRACE: ws_server_group: serverGroupFindClone: No match in primaryservers, are any available ?
[Mon Jul 25 17:26:16 2011] 007d00a2 00003738 - TRACE: ws_common: websphereHandleSessionAffinity: Affinity server not available, retrying another server
[Mon Jul 25 17:26:16 2011] 007d00a2 00003738 - TRACE: ws_common: websphereHandleSessionAffinity: Original affinity server was not available. Failover occurred. Added $WSFO : TRUE header
By default, WebSphere uses a JSESSIONID
cookie to maintain session affinity within a cluster. A number of pieces of information are encoded into the JSESSIONID
, as written up by Doug Breaux
Initially I was really confused as the clone ids stored in the JSESSIONID bore no resemblance to the CloneID attribute of the <Server> elements in my plugin-cfg.xml. Then I remembered that when using session replication, JSESSIONID contains a PartitionID rather than a CloneID, and as Doug notes, PartitionIDs are mapped to CloneIDs by the WebSphere HAManager, with no easy way of discovering the mapping that I know of.
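Based on the colon-separated format visible in the trace above, pulling the trailing ids out of a JSESSIONID can be sketched like this (my own helper, and the exact cookie format is version-dependent, so treat it as illustrative):

```python
# The plugin trace shows the JSESSIONID as "<cache/session id>:<id>:<id>..."
# where each trailing segment is a clone id (or, with M2M replication, a
# partition id). Splitting on ':' and dropping the first segment gives the
# ids the plugin compares against each <Server> CloneID.
def clone_ids(jsessionid):
    return jsessionid.split(":")[1:]

ids = clone_ids("0057OI-hCbGcu1eGnRnHw3j-lOY:-H819K1:-K20AFL")
```

For the cookie in the trace this yields the two ids the plugin was (unsuccessfully) trying to match.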
When I spoke to a colleague, he suggested that this behaviour may be caused by a known bug:
Sure enough, applying that iFix resolved my problem. The bug appears to affect WAS 7 prior to fixpack 17 and WAS 6.1 before fixpack 37, so I hope this article helps you diagnose and resolve quickly if you've been seeing the same symptom.
I spend a fair amount of time creating environments for IBM Business
Partners to test their solutions and therefore I install WAS frequently.
To reduce the effort required and time taken, I've investigated a
number of methods of automation, and in this post I'd like to share what I
now consider to be best practice: using response files with the Centralized Installation Manager to deploy entire cells. Combining this approach with use of Customized Installation Packages (CIP)
offers a repeatable and highly efficient method of installing WebSphere Application Server with minimal human intervention.
Centralized Installation Manager
The Centralized Installation Manager (CIM) was a new feature introduced in WebSphere Application Server (WAS) version 7 aimed at system administrators that greatly simplifies the task of installing WAS and any associated fix packs or ifixes onto an estate of hardware. In short, the CIM has an associated repository of product binaries (GA code, fix packs, ifixes, CIPs) and uses operating system level commands to install these binaries onto remote target machines. The CIM is an attractive option for installers as no software or agent needs to be installed on the remote targets for the CIM to perform an installation. The CIM can deploy WAS 7 binaries and fix packs/ifixes for both WAS 6.1 and 7, meaning you can maintain mixed-version cells. The CIM can install onto three targets in parallel, so it can be a big time saver over performing manual installations. It can also download fix packs directly from the IBM Support web site. I'm not going to go into any further detail about installing, configuring or using the CIM as these topics are already well covered by (in order of depth) IBM Education Assistant
, the InfoCenter
, and this Redpaper
I wanted to write specifically about use of response files with the Centralized Installation Manager. This is a technique I use to deploy new WebSphere clusters for my business partners using CIPs
I've created that comprise WAS 7 and a specific fix pack. To achieve this I use a stand-alone Deployment Manager which includes the Centralized Installation Manager component and a CIM repository stocked with WAS product binaries and my CIPs, created using IBM WebSphere Installation Factory. The trick is using a response file with the CIM to control the type of profile that gets created when the product is installed. To suit my needs I have a response file that creates a Deployment Manager profile, and another that creates a Custom profile and then federates that profile to a named Deployment Manager. This enables me to use my CIM Deployment Manager to create a new Deployment Manager (and therefore cell), and then edit my Custom profile response file to specify the newly created Deployment Manager for federation, as shown below.
Here are the steps I use in more detail:
- Add each host of my target hardware to the Centralized Installation Manager using the Add installation target option.
- Use the Centralized Installation Manager to Install with response file a WebSphere Application Server binary (GA or CIP) onto your target Deployment Manager using a Deployment Manager response file that creates a new management profile.
- Log on to the newly installed Deployment Manager and start it using startManager. This step is vital as the Custom response file used to create the nodes expects to federate the profile with the Deployment Manager at profile creation time. If you don't start your Deployment Manager, your nodes won't federate!
- Edit your Custom response file and set the -OPT PROF_dmgrHost option to the IP/hostname of your newly installed Deployment Manager.
- Use the Centralized Installation Manager to Install with response file a WebSphere Application Server binary onto all remaining hosts using a Custom response file that creates a new custom profile and federates each new node to the Deployment Manager specified by the PROF_dmgrHost option.
If you've never installed WAS using a response file before, take a look at the sample response file that ships with the product - it's very well documented. You'll find the sample in the WAS
directory of your installation media and if you need more information than supplied in the file, check out the InfoCenter
. Alternatively, you're welcome to use my samples as a starting point for your own customisation (please note: all comments have been removed for brevity and these samples were created for the AIX operating system; check carefully against the sample response file for your target OS for additional required options).
Sample Deployment Manager response file
# General settings
# Create a CIM in the new DM
# Profile settings
This response file installs the product and then creates a new Deployment Manager profile named DMgr01 with administrative security enabled; most other options are left at their default settings. I also create a new CIM on the target DM in case I need to install more nodes unexpectedly.
Sample Custom profile response file
# General settings
# Profile settings
-OPT PROF_dmgrHost="<target DM>"
This response file installs the product and then creates a new Custom profile named AppSrv01; most other options are left at their default settings. The PROF_federateLater setting of false makes the profile automatically federate with the Deployment Manager specified by PROF_dmgrHost.
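As a sketch only (the option names below are reproduced from memory of the WAS 7 sample response files, so verify each one against the sample for your version and OS), a minimal Custom profile response file along these lines worked for me:

```
-OPT silentInstallLicenseAcceptance="true"
-OPT installType="installNew"
-OPT createProfile="true"
-OPT profileType="custom"
-OPT PROF_profileName="AppSrv01"
-OPT PROF_federateLater="false"
-OPT PROF_dmgrHost="<target DM>"
-OPT PROF_dmgrAdminUserName="<DM admin user>"
-OPT PROF_dmgrAdminPassword="<DM admin password>"
```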
THE SMALL PRINT: The above assumes that you've fulfilled all the product installation requirements before starting to perform installations from the Centralized Installation Manager. The CIM requires that the operating system clocks on all targets are within 5 minutes of each other, so use of NTP is strongly recommended. When installing a CIP using a response file, the CIM has limited visibility of the install; if something goes wrong during installation (e.g. you forgot to start the new DM!), the CIM Installations in Progress view may continue to read "Installation in progress" - the default timeout for an installation is two hours.
Just a quick one here, but I recently encountered a situation where we received a java.lang.OutOfMemoryError whilst developing a script to perform an automated installation of a large EAR file (>100MB) using wsadmin.
In a standalone application server (default profile), wsadmin
creates its own JVM, and this is used for application unpack and installation. My initial thought was to use the -javaoption
parameter of wsadmin
to pass in my own -Xmx
value to increase the heap size. Unfortunately, wsadmin
is written in such a way that when it creates its JVM, your -javaoption
parameter is passed in on the command line before
its own internal PERF_JVM_OPTIONS
parameter which specifies -Xmx
. When the JVM receives two -Xmx
arguments, the latter takes precedence (although, when I checked this with a colleague in our Java Technology Centre, he said that the JVM sent him an email and he flipped a coin!), so to work around this you must do one of two things:
- Edit wsadmin so that javaOption (this is the variable inside wsadmin that the -javaoption parameter is stored into) occurs after PERF_JVM_OPTIONS, and pass -javaoption into wsadmin setting -Xmx. This is documented here.
- Update the setting of PERF_JVM_OPTIONS directly in wsadmin.
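The "latter -Xmx wins" rule can be illustrated with a small sketch (my own model of the JVM's left-to-right option processing):

```python
# When a command line carries two -Xmx arguments, the last one wins.
# This is why an -Xmx passed via -javaoption is overridden by the -Xmx
# that wsadmin appends afterwards from PERF_JVM_OPTIONS.
def effective_xmx(args):
    winner = None
    for arg in args:
        if arg.startswith("-Xmx"):
            winner = arg[len("-Xmx"):]
    return winner

mx = effective_xmx(["-Xmx1024m", "-Dscript=install.py", "-Xmx256m"])
```

Here the 1024m passed first is silently discarded, exactly the behaviour we hit with wsadmin.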
As we were trying to develop a script package for an IBM Workload Deployer virtual system pattern that would run with no manual intervention, we used the second option and modified wsadmin directly.
In a clustered environment, wsadmin always connects to the Deployment Manager, and application unpack and installation occurs on the nodeagent of each node. This meant that in our clustered install script, before we installed the application to the cluster, we had to loop through all nodeagents, set the Process definition -> Java virtual machine properties to increase the heap size, and then perform a restart for the new settings to take effect. We were using the excellent wsadminlib.py as an accelerator and were able to use the listNodes function to fully automate this process.
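A minimal sketch of the heap bump, in wsadmin Jython; set_nodeagent_max_heap is my own hypothetical helper, and I've made the scripting object a parameter so the logic can be exercised with a stub outside wsadmin (inside wsadmin you'd pass the global AdminConfig):

```python
def set_nodeagent_max_heap(adminConfig, node_name, max_heap_mb):
    # Locate the nodeagent server on the given node, then its JVM settings object
    server_id = adminConfig.getid('/Node:%s/Server:nodeagent/' % node_name)
    jvm = adminConfig.list('JavaVirtualMachine', server_id).splitlines()[0]
    # Set the maximum heap; the nodeagent must be restarted for this to take effect
    adminConfig.modify(jvm, [['maximumHeapSize', str(max_heap_mb)]])
    return jvm
```

After looping over the nodes returned by wsadminlib's listNodes, you would call AdminConfig.save() and then restart each nodeagent.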
The final tweak we had to make was handling the case where a custom node has been cloned to create a new node as part of a running virtual system pattern. In this instance, we were using the advanced options of IBM Workload Deployer to create our cluster for us, and this meant that our script package was invoked after the node had been created and federated into the cell. In a clone operation, here's what happens when you're using IWD to create your cluster:
- IWD creates the new custom node virtual machine and starts it.
- Once the virtual machine has started and WAS has been installed, the new custom node is federated to the Deployment Manager and the IWD cluster creation scripts create the specified number of new cluster members (servers) on the new node.
- Once the cluster members are created, the Deployment Manager pushes the application over to the nodeagent on the new custom node.
- As the nodeagent still has its default heap size, the application install causes the nodeagent to OoM.
- Our script package executes on the new custom node and increases the nodeagent heap size.
At this point, to bring the new custom node fully into service we had to ssh in, stop the nodeagent (kill -9 if stopNode doesn't work for you) and restart it. Once the nodeagent is restarted, the Deployment Manager pushes out the application again (no OoM this time!) and the application is started on the new servers.
Another quick post relating to developing script packages for IBM Workload Deployer, but also applicable to anyone using wsadmin. If one of your commands in a wsadmin script takes a long time to complete, you might receive a timeout from wsadmin which looks something like this:
org.apache.soap.SOAPException: [SOAPException: faultCode=SOAP-ENV:Client; msg=Read timed out; targetException=java.net.SocketTimeoutException: Read timed out]
What is happening here is you're running into com.ibm.SOAP.requestTimeout, a property which is defined in <PROFILE_ROOT>/properties/soap.client.props and defaults to 180 seconds. The "easy" fix is to edit the soap.client.props file directly, but if you're trying to create a self-contained script package or working with a clustered environment, this is a messy solution. Unfortunately, the answer is not as simple as passing -javaoption "-Dcom.ibm.SOAP.requestTimeout=1800" into wsadmin, but there is a lovely neat solution:
- Copy the soap.client.props file from <PROFILE_ROOT>/properties and give it a new name such as mysoap.client.props.
- Edit mysoap.client.props and update the value of com.ibm.SOAP.requestTimeout as required.
- Create a new Java properties file soap_override.props and enter the following line:
- Pass soap_override.props into wsadmin using the -p option: wsadmin -p soap_override.props...
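The property line for step 3 appears to have been lost from this post; if memory serves, the usual WAS mechanism is to point com.ibm.SOAP.ConfigURL at your custom file, so soap_override.props would contain something like the following (the path is illustrative, substitute your real profile location):

```properties
com.ibm.SOAP.ConfigURL=file:<PROFILE_ROOT>/properties/mysoap.client.props
```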
This will cause your mysoap.client.props to be loaded in preference to the version shipped with the product, whilst enabling you to keep your scripts self-contained and not reliant on any nasty bits of sed.
As I'm sure you know by now, the installation of WebSphere Application Server (WAS) version 8 has moved to the common IBM installation platform, IBM Installation Manager. IBM Installation Manager is also used by WebSphere Application Server version 7 for the purposes of installing Feature Packs, but as the process is different to the main product install I thought I'd write a quick cheat sheet.
- Download and install IBM Installation Manager.
- Start IBM Installation Manager.
- Click File -> Preferences and add the following two repositories (you will need to authenticate with your IBM ID):
- Import your existing WAS installation (specify the location of $WAS_HOME)
- Click Install to add Feature Packs (WAS processes must be stopped for installation)
Installed Feature Packs are delivered as profile templates into the $WAS_HOME/profileTemplates directory. Installing a Feature Pack does not make it active; to add the functionality you need to augment your existing profile using the newly installed profile template. This augmentation is performed using the manageProfiles tool. Here's an example of augmenting a standalone WAS (default) profile with the JPA Feature Pack:
manageprofiles.sh -augment -profileName AppSrv01 -templatePath $WAS_HOME/profileTemplates/JPA/default.jpafep
A very quick note today to let you know about a cool feature which isn't in the IBM Workload Deployer (IWD) InfoCenter yet. When you're building a virtual system pattern and using script packages to customise an IBM-provided virtual machine image, you may want to let the pattern deployer parameterise your script package by supplying a password. Obviously, when you're asking the user for a password, you want their input obscured and a confirmation input to ensure the password entered is correct, right?
The great news is that IWD supports this right out of the box. When you specify your input variables in the keys section of cbscript.json, any scriptkeyname matching the pattern /.*password.*/ (I confess, I like Perl; for the non-techies, that's anything including the word "password" in lowercase) will automatically be given the desired treatment.
Here's the GUI output for the following snippet of cbscript.json:
As a reminder, if you set scriptvalue, that sets a non-editable value into the associated key. Setting scriptdefaultvalue sets a user-editable default for the associated key.
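The original snippet didn't survive formatting, so here's a hedged reconstruction of what a keys section might look like; the field spellings are taken from the names used above, the key names are made up, and you should check both against your IWD build. The database_password key would get the masked, confirm-twice treatment, while database_user would be a plain editable field:

```json
{
  "keys": [
    { "scriptkeyname": "database_user", "scriptdefaultvalue": "db2inst1" },
    { "scriptkeyname": "database_password", "scriptdefaultvalue": "" }
  ]
}
```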
Having a go with the scripting interface to WebSphere Application Server is one of those tasks that has been "on my list" for a long time. Last week I had a need to create a simple script to silently install a JEE application as part of an IWD script package.
Here's a small piece of Jython that I came up with:
# import os module so we can get environment variables
import os
contextRoot = '/myapp'
appName = 'myApp'
# get IWD environment variables CELL_NAME and NODE_NAME, or fall back to sensible defaults
cell = os.getenv('CELL_NAME') or 'CloudBurstCell_1'
node = os.getenv('NODE_NAME') or 'CloudBurstNode_1'
# install the application, then save the configuration
AdminApp.install('/path/to/myApp.war', ['-contextroot', contextRoot, '-appname', appName, '-MapWebModToVH', '[[".*", ".*", "default_host"]]'])
AdminConfig.save()
# get the ApplicationManager object and use it to start the application
appMan = AdminControl.queryNames('cell='+cell+',node='+node+',type=ApplicationManager,process=server1,*')
AdminControl.invoke(appMan, 'startApplication', appName)
Assuming you saved the script into a file called installApp.jy, you would execute it from the command line using wsadmin as follows:
<WAS_PROFILE_ROOT>/wsadmin.sh -lang jython -f <path_to>/installApp.jy
The script will only work for a standalone application server in its current state, but hopefully it will help you out and provide a good starting point for installing to a cluster.
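For the cluster case, a hedged sketch along the same lines; install_to_cluster is my own helper, the paths and cluster name are made up, and the AdminApp scripting object is a parameter so the function can be tried with a stub outside wsadmin (inside wsadmin, pass the global AdminApp):

```python
def install_to_cluster(adminApp, ear_path, app_name, cell_name, cluster_name):
    # Map every module in the application to the target cluster
    target = 'WebSphere:cell=%s,cluster=%s' % (cell_name, cluster_name)
    options = ['-appname', app_name,
               '-MapModulesToServers', [['.*', '.*', target]]]
    adminApp.install(ear_path, options)
    return target

# Inside wsadmin you would call something like:
#   install_to_cluster(AdminApp, '/path/to/myApp.ear', 'myApp', cell, 'MyCluster')
#   AdminConfig.save()
```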