IBM Support

Making your life easier: Installing/configuring a TEPS on HACMP

Technical Blog Post



In this new blog post I am going to cover installing and configuring the TEPS on an HACMP cluster, and what to check to be sure it was configured correctly. Obviously, if our TEPS starts and fails over without any problem, that is a great indication that all went well :-)))) but it is still interesting to know which files are modified, in case, for example, we later have to change the hostname, or we find files referencing the physical host name that make us wonder whether the TEPS was configured correctly.


When we want to install and configure the TEPS in an HACMP cluster, we have to follow these steps:


1. Make sure that the resource group is online and that the shared file system, DB2 and the virtual IP are available on clusternode1.

2. Run the installation script to install the TEPS on the shared file system.

3. Add the line KFW_INTERFACE_cnps_HOST=<virtualhostname> in ..../cq/original/lnxenv.



In case we get this error when starting the TEPS (normally on ITM 6.3):

(54217590.0013-1:ctserverorb.cpp,1620,"CTServerORB::initORB") KFW1004E A fatal condition was encountered during startup
(54217590.0014-1:ctserverorb.cpp,1621,"CTServerORB::initORB") CORBA exception during initialization. nRetCode: 1
(54217590.0015-1:ctserverorb.cpp,1623,"CTServerORB::initORB") EXCEPTION: CORBA::BAD_PARAM
(54217590.0016-1:ctrashelper.cpp,91,"RAS_CORBA_SystemException") EXCEPTION: CORBA System Exception has occurred
(54217590.0017-1:ctrashelper.cpp,93,"RAS_CORBA_SystemException") Name: CORBA::BAD_PARAM
(54217590.0018-1:ctrashelper.cpp,94,"RAS_CORBA_SystemException") Minor: 1330446344
(54217590.0019-1:ctrashelper.cpp,95,"RAS_CORBA_SystemException") Completed: NO
(54217590.001A-1:ctserver.cpp,1031,"CTServer::shutdown") KFW1048I ********* Shutdown initiated ***********
(54217591.0000-1:ctserver.cpp,1122,"CTServer::exitProcess") KFW1049E Process exit code: 1, CORBA exception caught.

we should:

1. Delete the KFW_INTERFACE_cnps_HOST property from $CANDLEHOME/<archi>/cq/original/lnxenv, $CANDLEHOME/<archi>/cq/original/lnxenv_template and cq.ini.

2. In $CANDLEHOME/config/cq.ini, add the KFW_INTERFACE_cnps_PROXY_HOST=<virtualhostname> variable.

3. Edit $CANDLEHOME/config/tep.jnlpt and $CANDLEHOME/config/component.jnlpt and replace $HOST$ with <virtualhostname> (so the TEP client can connect to the TEPS).

4. Reconfigure TEPS (itmcmd config -A cq) and restart TEPS.
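As an illustrative sketch of steps 1 and 2 above, the same edits can be applied as follows. This runs against a throwaway copy of cq.ini so it is safe to try; on a real system the file is $CANDLEHOME/config/cq.ini, and "tepsvirt" is a placeholder for the virtual hostname:

```shell
# Sketch only: apply the cq.ini edits to a scratch copy, not the real
# $CANDLEHOME/config/cq.ini. "tepsvirt" is a placeholder virtual hostname.
work=$(mktemp -d)
cat > "$work/cq.ini" <<'EOF'
# other TEPS settings ...
KFW_INTERFACE_cnps_HOST=clusternode1
EOF

# 1. Delete the KFW_INTERFACE_cnps_HOST property (grep -v into a new file
#    rather than sed -i, since AIX sed has no -i option)
grep -v '^KFW_INTERFACE_cnps_HOST=' "$work/cq.ini" > "$work/cq.ini.new" \
  && mv "$work/cq.ini.new" "$work/cq.ini"

# 2. Add the proxy-host variable instead
echo 'KFW_INTERFACE_cnps_PROXY_HOST=tepsvirt' >> "$work/cq.ini"

cat "$work/cq.ini"
```

The same grep-and-replace pattern works for lnxenv and lnxenv_template.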


This allows the TEPS to create a socket on the real machine name while telling clients to connect using the virtual host name. The virtual host name needs to be defined in a DNS entry or in the /etc/hosts file of the clients so that they can find and connect to that host name.
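For example, a quick pre-flight check that the virtual name is defined could look like the following. This is only a sketch run against a scratch hosts-style file; on a real node you would check /etc/hosts or DNS, and 192.0.2.10 / tepsvirt are placeholder values:

```shell
# Sketch: check that a hosts-style entry exists for the virtual name.
# A scratch file stands in for /etc/hosts; the address and names are
# placeholder values.
hostsfile=$(mktemp)
printf '192.0.2.10\ttepsvirt.example.com\ttepsvirt\n' > "$hostsfile"

if grep -qw 'tepsvirt' "$hostsfile"; then
    echo "tepsvirt is defined"
else
    echo "tepsvirt missing: clients will not find the TEPS" >&2
fi
```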


4. Configure the TEPS to connect to the monitoring server (using the virtual hostname in case the HUB server is also clustered).

5. Back up the original folder.


6. Run the <current_hostname> <virtual_hostname> script located in CANDLEHOME/<archi>/iw/scripts, which is going to modify the files serverindex.xml, server.xml and


The output of this script will be similar to:


./ A B

Buildfile: /opt/IBM/TEPS/ITM/aix536/iw/scripts/exportImport.xml

[wsadmin] WASX7357I: By request, this scripting client is not connected
to any server process. Certain configuration and application operations
will be available in local mode.
[wsadmin] TEPSEWASBundle loaded.
[wsadmin] Executing ValidateRelease.jacl script
[wsadmin] List of Node is ITMNode(cells/ITMCell/nodes/ITMNode|

[echo] Was Home is /opt/IBM/TEPS/ITM/aix536/iw
[echo] User Install Root is /opt/IBM/TEPS/ITM/aix536/iw/profiles/ITMProfile
[echo] Config Root is /opt/IBM/TEPS/ITM/aix536/iw/profiles/ITMProfile/config
[echo] Cell name is ITMCell
[echo] Node name is ITMNode

[echo] Modifying serverindex.xml from A to B
[echo] Modifying server.xml from A to B
[echo] Modifying from A to B

Total time: 12 seconds






NOTE: Changing the ServerName in <CANDLEHOME>/<platform>/iu/ihs/conf/httpd.conf to the cluster name is only required if we are using an external web server; if we are using the IHS embedded with ITM, it is not needed.
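If an external web server is in use, the change amounts to rewriting the ServerName directive. A sketch against a scratch copy of httpd.conf (the real file is <CANDLEHOME>/<platform>/iu/ihs/conf/httpd.conf, and "clustername" is a placeholder):

```shell
# Sketch only: rewrite ServerName in a scratch httpd.conf copy.
# "clustername" is a placeholder for the actual cluster name.
conf=$(mktemp)
echo 'ServerName clusternode1' > "$conf"

sed 's/^ServerName .*/ServerName clustername/' "$conf" > "$conf.new" \
  && mv "$conf.new" "$conf"

cat "$conf"   # -> ServerName clustername
```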


7. Configure the portal server database on the shared file system in the database settings of the portal server, start the TEPS, and check that the TEP can connect to it.

8. Stop the TEPS and remove its autostart: the /etc/rc.itm1 file has to be removed so that the TEPS is not started automatically at reboot.

9. Start the TEPS again to make sure there is no problem.

10. Now we have to configure the TEPS on clusternode2: move the resource group to clusternode2.

11. Install the GSKIT locally on clusternode2.

12. Start the TEPS manually on clusternode2 and test that it is working correctly.



Finally, we have to add the TEPS to the cluster's resource group so that failover works correctly and the TEPS is under the cluster's control (for this task we may need help from the OS team):



1. Create a pair of start/stop scripts in the same local directory on each cluster node, for example:

Start script:

#!/bin/sh
export CANDLEHOME=/sharedisk/IBM/ITM
$CANDLEHOME/bin/itmcmd agent start cq

Stop script:

#!/bin/sh
export CANDLEHOME=/sharedisk/IBM/ITM
$CANDLEHOME/bin/itmcmd agent stop cq


2. Use smit hacmp to create an application server resource for the TEPS.
3. Make sure the resource group is offline, and add the application server resource to the basic cluster resource group with DB2, the shared filesystem and the IP.
4. Synchronize the cluster.
5. Bring the cluster resource group online.
6. Use HACMP monitoring to monitor the kfwservices process.


Now the failover should be working correctly.


Summarizing: In order to be sure that our TEPS is configured correctly we have to check that:


1. The variable KFW_INTERFACE_cnps_HOST=<virtualhostname> (or, in the error case above, KFW_INTERFACE_cnps_PROXY_HOST) is set in ..../cq/original/lnxenv or cq.ini.

2. The serverindex.xml, server.xml and files were modified with the clustername.

3. In the tep.jnlp file, the <codebase>, <title> and "" should contain the clustername.

4. In the k**_resources.jar.jnlp files in the aixXXX/cw directory, the <codebase> must contain the clustername.


NOTE: since ITM 6.2.3 FP3, the tep.jnlp and k*_resources.jar.jnlp files should be modified automatically.

In any case, keep in mind that if any of these four points is not right, the TEPS may fail to start, failover will not work, or the TEP browser or Java Web Start client will not be able to connect to the TEPS.
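The checks above lend themselves to a small grep loop. This is only an illustrative sketch run against stub files; on a real system you would point it at $CANDLEHOME/config and the aixXXX/cw directory, and "tepsvirt" and the port are placeholder values:

```shell
# Illustrative sketch of the summary checks; stub files stand in for the
# real cq.ini and tep.jnlp, and "tepsvirt" is a placeholder cluster name.
tmp=$(mktemp -d)
cluster=tepsvirt

echo "KFW_INTERFACE_cnps_HOST=$cluster" > "$tmp/cq.ini"
echo "<codebase>http://$cluster:15200/</codebase>" > "$tmp/tep.jnlp"

for f in "$tmp/cq.ini" "$tmp/tep.jnlp"; do
    if grep -q "$cluster" "$f"; then
        echo "OK: $(basename "$f") references $cluster"
    else
        echo "CHECK: $(basename "$f") does not reference $cluster"
    fi
done
```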



Thanks for reading, Fran.




