Creating a multi-instance queue manager for WebSphere MQ on UNIX with auto client reconnect

Learn how to use WebSphere MQ V7 to create a multi-instance queue manager on UNIX and then run a sample program to check client connectivity.

Geetha Rh Murthy (gemurthy@in.ibm.com), Software Tester, WebSphere Message Broker Test Team, IBM

Geetha Rh Murthy is a Software Tester on the WebSphere Message Broker Test Team at the IBM Software Lab in Bangalore, India. She has five years of testing experience with the WebSphere MQ Test Team and now works with the WebSphere Message Broker Test Team. She has a Bachelor of Electronics and Communications Engineering degree. You can contact Geetha at gemurthy@in.ibm.com.



Chaitra Sampige (csampige@in.ibm.com), Software Tester, WebSphere MQ Test Team, IBM

Chaitra Sampige is a Software Tester working on the WebSphere MQ Test Team at the IBM Software Lab in Bangalore, India. She has four years of testing experience with the WebSphere MQ Test Team and also handles WebSphere TXSeries and WebSphere MQ interoperability testing. She is a Certified Software Test Engineer (CSTE) and has a Bachelor of Engineering degree in Information Science from Visvesvaraya Technological University in India. You can contact Chaitra at csampige@in.ibm.com.



20 April 2011


Introduction

IBM® WebSphere® MQ V7 can help you increase messaging availability without requiring specialist skills or additional hardware. It provides automatic failover via multi-instance queue managers in the event of an unplanned outage, or controlled switchover for planned outages such as applying software maintenance.

With this new availability option, the messages and data for a multi-instance queue manager are held on networked storage accessed via a network file system (NFS) protocol, such as NFS V4. You can then define and start multiple instances of this queue manager on different machines, with one active instance and one standby instance. The active queue manager instance processes messages and accepts connections from applications and other queue managers. It holds a lock on the queue manager data to ensure that there is only one active instance of the queue manager. The standby queue manager instance periodically checks whether the active queue manager instance is still running. If the active queue manager instance fails or is no longer connected, the standby instance acquires the lock on the queue manager data as soon as it is released, performs queue manager restart processing, and becomes the active queue manager instance.
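
The rest of this article walks through the setup in detail. At a glance, and using the example paths and names from the sections that follow, the WebSphere MQ commands involved are:

    # Server 1: create the queue manager with its data and logs on the shared NFS file system
    crtmqm -ld /HA/logs -md /HA/qmgrs -q QM1
    # Server 1: display the configuration command that must be run on the second server
    dspmqinf -o command QM1
    # Server 2: run the generated addmqinf command, then start an instance on each server
    strmqm -x QM1        # the first server to run this becomes the active instance
    strmqm -x QM1        # the second server becomes the standby instance
    # Either server: check which instance is active and which is standby
    dspmq -x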

Here is an illustration of the multi-instance queue manager and client auto-reconnect:

Figure 1.1
Figure 1.2
Figure 1.3

Prerequisites

  • Set up WebSphere MQ V7.0.1 on both the server and the client machines according to the guidelines and instructions in the information center.
  • Both machines should have mqm and mqtest users belonging to the mqm group.
  • The user ID and group ID of the mqm and mqtest users should be the same on both machines (a sketch for creating such users follows this list). For example:
  • Machine1:
    • id mqm: uid=301(mqm), gid=301(mqm)
    • id mqtest: uid=501(mqtest), gid=301(mqm)
  • Machine2:
    • id mqm: uid=301(mqm), gid=301(mqm)
    • id mqtest: uid=501(mqtest), gid=301(mqm)
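
Here is a minimal sketch of one way to create matching users with the UID and GID values shown above. The generic groupadd and useradd commands work on HP-UX, Solaris, and Linux; AIX uses mkgroup and mkuser instead, so adjust to your platform:

    # Run as root on each machine, using the same numeric IDs everywhere
    groupadd -g 301 mqm
    useradd -u 301 -g mqm mqm
    useradd -u 501 -g mqm mqtest
    # Confirm that both machines report identical IDs
    id mqm
    id mqtest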

Setting up NFS on HP-UX

In this example, NFS server = hpate1, exported directory = /HA, and NFS client = hostile.

NFS server configuration on HP-UX

  1. Log in to the server machine as root.
  2. Edit the file /etc/rc.config.d/nfsconf to change the values for NFS_SERVER and START_MOUNTD to 1:
    #more /etc/rc.config.d/nfsconf
    NFS_SERVER=1
    START_MOUNTD=1
  3. Start the nfs.server script:
    /sbin/init.d/nfs.server start
  4. Edit /etc/exports to add an entry for each directory that is to be exported:
    # more /etc/exports
    /HA
    #
  5. Force the NFS daemon nfsd to reread /etc/exports:
    #/usr/sbin/exportfs -a
  6. Verify the NFS setup using showmount -e:
    # showmount -e
    export list for hpate1:
    HA (everyone)
    #
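
The entry above exports /HA to every host. As a hedged refinement (option names follow the HP-UX exports(4) format; substitute your own client name), you can restrict the export to the NFS client and allow root access from it, which is useful later when queue manager directories are created and chown'ed as root over the mount:

    # /etc/exports -- example entry restricted to the client "hostile"
    /HA -access=hostile,root=hostile
    # Re-export after editing the file
    /usr/sbin/exportfs -a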

NFS client configuration on HP-UX

  1. Log in as root.
  2. Check that the mount-point directory on the NFS client machine is either empty or doesn't exist.
  3. Create the directory if it doesn't exist:
    #mkdir /HA
  4. Add an entry to /etc/fstab so that the file system is mounted automatically at boot time (a sketch with explicit NFS options follows this procedure):
    nfs_server:/nfs_server_dir /client_dir  nfs defaults 0 0
    # more /etc/fstab
    hpate1:/HA /HA nfs defaults 0 0
  5. Mount the remote file system:
    #/usr/sbin/mount -a
  6. Verify the NFS setup:
    # mount -v
    hpate1:/HA on /HA type nfs rsize=32768,wsize=32768,NFSv4,dev=4000004 
        on Tue Aug  3 14:15:18 2010
    #
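
For queue manager data, the shared file system should be mounted with hard (not soft) semantics so that I/O is retried rather than failed during short network interruptions. Here is a hedged example of an /etc/fstab entry with explicit options; exact option names vary by HP-UX release, so check mount_nfs(1M) on your system:

    # /etc/fstab -- example entry with explicit NFS options
    hpate1:/HA /HA nfs hard,intr,vers=4 0 0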

Setting up NFS on AIX

In this example, NFS server = axion, exported directory = /HA, and NFS client = hurlqc.

NFS Server configuration on AIX

  1. Log in as root.
  2. Enter smitty mknfsexp on the command line and specify the directory that has to be exported:
    #smitty mknfsexp
    
    Pathname of directory to export                   [/HA]
    Anonymous UID                                     [-2]
    Public filesystem?                                no
    * Export directory now, system restart, or both?  Both
    Pathname of alternate exports file                []
    Allow access by NFS versions                      []
    External name of directory (NFS V4 access only)   []
    Referral locations (NFS V4 access only)           []
    Replica locations                                 []
    Ensure primary hostname in replica list           Yes
    Allow delegations?                                No
    Scatter                                           None
    * Security method 1                               [sys,krb5p,krb5i,krb5,dh]
    * Mode to export directory                        Read-write
    Hostname list. If exported read-mostly            []
    Hosts and netgroups allowed client access         []
    Hosts allowed root access                         []
    Security method 2                                 []
    Mode to export directory                          []

    If there were no problems, you should see an "OK".

  3. Verify the server setup:
    # showmount -e
    export list for axion:
    /HA (everyone)
    #
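
If you prefer the command line to SMIT, the same export can be created with mknfsexp, the command behind the SMIT panel above. This is a sketch; confirm the flags with the mknfsexp man page on your AIX level:

    # Export /HA read-write, both now and at system restart
    /usr/sbin/mknfsexp -d /HA -t rw -B
    # Verify the export
    showmount -e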

NFS Client configuration on AIX

  1. Log in as root.
  2. Check that the mount-point directory on the NFS client machine is either empty or doesn't exist.
  3. Create the directory if it doesn't exist:
    #mkdir /HA
  4. Enter smitty mknfsmnt and fill in the entry fields:
    #smitty mknfsmnt
    
    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.
    [TOP]                                                [Entry Fields]
    * Pathname of mount point                            [/HA]
    * Pathname of remote directory                       [/HA]
    * Host where remote directory resides                [axion]
      Mount type name                                    []
    * Security method                                    [sys]
    * Mount now, add entry to /etc/filesystems or both?  Both
    * /etc/filesystems entry will mount the directory    Yes
         on system restart.
    * Mode for this NFS file system                      Read-write
    * Attempt mount in foreground or background          Background
    * Number of times to attempt mount                   []
    * Buffer size for read                               []
    * Buffer size for writes                             []
    [MORE...26]
    
    F1=Help         F2=Refresh        F3=Cancel          F4=List
    F5=Reset        F6=Command        F7=Edit            F8=Image
    F9=Shell        F10=Exit          Enter=Do

    If you received an "OK" message, the mount succeeded, and you should be able to see and access the mounted file system.
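
As with the export, the SMIT panel is a front end to a command, in this case mknfsmnt. A command-line sketch of the equivalent mount follows; the flags shown are illustrative, so confirm them with the mknfsmnt man page:

    # Mount axion:/HA on /HA read-write and record it in /etc/filesystems
    /usr/sbin/mknfsmnt -f /HA -d /HA -h axion -t rw -B
    # Verify
    mount | grep HA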

Setting up NFS on Solaris

In this example, NFS server = stallion.in.ibm.com, exported directory = /HA, and NFS client = saigon.in.ibm.com.

NFS Server configuration on Solaris

  1. Log in as root on the server machine.
  2. Check that the directory to be exported is either empty or doesn't exist.
  3. Create the directory if it doesn't exist and set the permissions:
    #mkdir /HA
    #chmod 777 /HA
  4. Edit /etc/dfs/dfstab to add an entry for sharing the HA directory with NFS clients:
    # more /etc/dfs/dfstab
    share -F nfs -o rw /HA
  5. Start the NFS server:
    #/etc/init.d/nfs.server start
  6. Verify the setup using the showmount command:
    #showmount -e
    export list for stallion:
    /HA (everyone)
    #
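
The dfstab entry above shares /HA read-write with every host. As a hedged refinement using standard share_nfs options (substitute your own client name), you can restrict the share to the NFS client and grant it root access, which matters later when the queue manager directories are created and chown'ed as root over the mount:

    # /etc/dfs/dfstab -- example entry restricted to the client saigon
    share -F nfs -o rw=saigon.in.ibm.com,root=saigon.in.ibm.com /HA
    # Re-share everything listed in dfstab
    shareall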

NFS client configuration on Solaris

  1. Log in as root on the client machine.
  2. Create the directory and give it the appropriate permissions:
    #mkdir /HA
    #chmod 777 /HA
  3. Mount the remote file system:
    #mount -F nfs stallion.in.ibm.com:/HA /HA
  4. Verify the setup using the mount -v command:
    #mount -v
    stallion.in.ibm.com:/HA on /HA type nfs remote/read/write/setuid/devices/xattr/dev=
        5280002 on Mon Aug 23 15:00:50  2010
    #
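
The mount command above does not survive a reboot. Here is a hedged example of the matching /etc/vfstab entry, with the fields in the order described in vfstab(4): device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, and mount options:

    # /etc/vfstab -- example entry so /HA is mounted automatically at boot
    stallion.in.ibm.com:/HA  -  /HA  nfs  -  yes  rw,hard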

Executing amqmfsck to verify that the shared file system meets the requirements for multi-instance queue managers

In this example: Server1 = stallion.in.ibm.com and Server2 = saigon.in.ibm.com.
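
The checks below use a subdirectory /HA/mqdata on the shared file system. If it does not already exist, create it first; this is a minimal sketch, run as root on either server, matching the ownership and permissions used later for the queue manager directories:

    # Create the shared test directory and give the mqm group access
    mkdir /HA/mqdata
    chown mqm:mqm /HA/mqdata
    chmod 775 /HA/mqdata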

  1. Execute amqmfsck with no options to check the basic locking:
    su - mqtest
    export PATH=/opt/mqm/bin:$PATH
    
    On Server1:
    $ amqmfsck /HA/mqdata
    The tests on the directory completed successfully.
    
    On Server2:
    $ amqmfsck /HA/mqdata
    The tests on the directory completed successfully.
  2. Execute amqmfsck with the -c option on both machines to test concurrent writing to the directory:
    On Server1:
    $ amqmfsck -c /HA/mqdata
    Start a second copy of this program with the same parameters on another server. 
    Writing to test file. 
    This will normally complete within about 60 seconds.
    .................
    The tests on the directory completed successfully.
    
    On Server2:
    
    $ amqmfsck -c /HA/mqdata
    Start a second copy of this program with the same parameters on another server.
    Writing to test file. 
    This will normally complete within about 60 seconds.
    .................
    The tests on the directory completed successfully.
  3. Execute amqmfsck with the -w option on both machines simultaneously to test waiting for and releasing a lock on the directory:
    On Server1:
    $ amqmfsck -wv /HA/mqdata
    System call: stat("/HA/mqdata",&statbuf)
    System call: statvfs("/HA/mqdata")
    System call: fd = open("/HA/mqdata/amqmfsck.lkw",O_CREAT|O_RDWR,0666)
    System call: fchmod(fd,0666)
    System call: fstat(fd,&statbuf)
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    Start a second copy of this program with the same parameters on another server.
    File lock acquired.
    Press Enter or terminate this process to release the lock.
    
    On Server2:
    
    $ amqmfsck -wv /HA/mqdata
    System call: stat("/HA/mqdata",&statbuf)
    System call: statvfs("/HA/mqdata")
    System call: fd = open("/HA/mqdata/amqmfsck.lkw",O_CREAT|O_RDWR,0666)
    System call: fchmod(fd,0666)
    System call: fstat(fd,&statbuf)
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    Waiting for the file lock.
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    Waiting for the file lock.
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    Waiting for the file lock.
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    Waiting for the file lock.
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    Waiting for the file lock.
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    Waiting for the file lock.
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    Waiting for the file lock.
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    Waiting for the file lock.
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    File lock acquired.
    Press Enter or terminate this process to release the lock.
    
    System call: close(fd)
    File lock released.
    
    The tests on the directory completed successfully.

Setting up a multi-instance queue manager

In this example, Server1 = stallion.in.ibm.com and Server2 = saigon.in.ibm.com.

Server 1

  1. On the shared file system (with /HA as the current directory), create the logs and qmgrs directories and give the mqm user and group ownership and access:
    # mkdir logs
    # mkdir qmgrs
    # chown -R mqm:mqm /HA
    # chmod -R ug+rwx /HA
  2. Create the queue manager:
    # crtmqm -ld /HA/logs -md /HA/qmgrs -q QM1
    WebSphere MQ queue manager created.
    Directory '/HA/qmgrs/QM1' created.
    Creating or replacing default objects for QM1.
    Default objects statistics : 65 created. 0 replaced. 0 failed.
    Completing setup.
    Setup completed.
    #
  3. Display the queue manager configuration details on Server 1:
    #  dspmqinf -o command QM1
  4. Copy the output of the above command; you will run it on Server 2 in the next section (an alternative that pipes it over ssh is sketched after this step). The output will be in the following format:
    addmqinf -s QueueManager -v Name=QM1 -v Directory=QM1 -v Prefix=/var/mqm -v 
        DataPath=/HA/qmgrs/QM1
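
As a hedged alternative to copying and pasting by hand (assuming ssh access from Server 1 to Server 2 and that /opt/mqm/bin is on the PATH of the remote shell), the generated command can be piped straight to the second server:

    # On Server 1: generate the addmqinf command and execute it on Server 2
    dspmqinf -o command QM1 | ssh root@saigon.in.ibm.com sh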

Server 2

  1. Run the addmqinf command that you saved in Step 4 of the Server 1 procedure:
    # addmqinf -s QueueManager -v Name=QM1 -v Directory=QM1 -v Prefix=/var/mqm 
        -v DataPath=/HA/qmgrs/QM1
    WebSphere MQ configuration information added.
    #
  2. Back on Server 1, start the active instance of the queue manager:
    # strmqm -x QM1
    WebSphere MQ queue manager 'QM1' starting.
    5 log records accessed on queue manager 'QM1' during the log replay phase.
    Log replay for queue manager 'QM1' complete.
    Transaction manager state recovered for queue manager 'QM1'.
    WebSphere MQ queue manager 'QM1' started.
    #
  3. On Server 2, start the standby instance of the queue manager:
    # strmqm -x QM1
    WebSphere MQ queue manager QM1 starting.
    A standby instance of queue manager QM1 has been started. 
    The active instance is running elsewhere.
    #
  4. Verify the setup using dspmq -x:
    On Server1 (stallion)
    # dspmq -x
    QMNAME(QM1) STATUS(Running)
        INSTANCE(stallion) MODE(Active)
        INSTANCE(saigon) MODE(Standby)
    #
    
    On Server2 (saigon)
    # dspmq -x
    QMNAME(QM1) STATUS(Running as standby)
        INSTANCE(stallion) MODE(Active)
        INSTANCE(saigon) MODE(Standby)
    #

Creating a client auto-reconnect setup

In this example, Server1 = lins.in.ibm.com and Server2 = gtstress42.in.ibm.com. On Server 1:

  1. Create a local queue called Q with defpsist(yes).
  2. Create a SVRCONN channel called CHL.
  3. Start a listener on port 9898:
    [root@lins ~]# runmqsc QM1
    5724-H72 
    (C) Copyright IBM Corp. 1994, 2009.  ALL RIGHTS RESERVED.
    Starting MQSC for queue manager QM1.
    
    def ql(Q) defpsist(yes)
        1 : def ql(Q) defpsist(yes)
    AMQ8006: WebSphere MQ queue created.
    define channel(CHL) chltype(SVRCONN) trptype(tcp) MCAUSER('mqm') replace
        2 : define channel(CHL) chltype(SVRCONN) trptype(tcp) MCAUSER('mqm') replace
    AMQ8014: WebSphere MQ channel created.
    end
    
    [root@lins ~]# runmqlsr -m QM1 -t tcp -p 9898 &
    [1] 26866
    [root@lins ~]# 5724-H72 
    (C) Copyright IBM Corp. 1994, 2009.  ALL RIGHTS RESERVED.
  4. Set the MQSERVER environment variable on Server 1 so that the client samples can reach either instance (a quick connectivity check is sketched after this procedure):
    export MQSERVER=<channel_name>/TCP/'<server1_address(port),server2_address(port)>'
    
    For example: export MQSERVER=CHL/TCP/'9.122.163.105(9898),9.122.163.77(9898)'
  5. On Server 2, start a listener at port 9898:
    [root@gtstress42 ~]# runmqlsr -m QM1 -t tcp -p 9898 &
    [1] 24467
    [root@gtstress42 ~]# 5724-H72 
    (C) Copyright IBM Corp. 1994, 2009.  ALL RIGHTS RESERVED.
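
Before running the HA samples, it is worth confirming that an ordinary client connection works. Here is a minimal check using the standard client samples amqsputc and amqsgetc, assuming MQSERVER is set as above and the samples directory is on the PATH:

    # Put one test message to Q over the client channel, then get it back
    echo "connectivity test" | amqsputc Q QM1
    amqsgetc Q QM1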

Executing the client auto-reconnect samples

Server 1

  1. Invoke the amqsphac sample program:
    [root@lins ~]# amqsphac Q QM1
    Sample AMQSPHAC start
    target queue is Q
    message < Message 1 >
    message < Message 2 >
    message < Message 3 >
    message < Message 4 >
    message < Message 5 >
    message < Message 6 >
    message < Message 7 >
    message < Message 8 >
    message < Message 9 >
    message < Message 10 >
  2. In another window on Server 1, end the queue manager with the -is option so that it switches over to the standby instance:
    Server 1(new session):
    
    [root@lins ~]# endmqm -is QM1
    WebSphere MQ queue manager 'QM1' ending.
    WebSphere MQ queue manager 'QM1' ended, permitting switchover to a standby instance.
  3. Verify that a switchover has occurred:
    On Server2:
    
    [root@gtstress42 ~]# dspmq -x -o standby
    QMNAME(QM1)         STANDBY(Permitted)
        INSTANCE(gtstress42.in.ibm.com) MODE(Active)
  4. The client connection breaks, and the sample automatically reconnects to the instance that has become active:
    On Server 1
    16:12:28 : EVENT : Connection Reconnecting (Delay: 57ms)
    10/06/2010 04:12:35 PM AMQ9508: Program cannot connect to the queue manager.
    10/06/2010 04:12:35 PM AMQ9999: Channel program ended abnormally.
    16:12:37 : EVENT : Connection Reconnecting (Delay: 0ms)
    10/06/2010 04:12:37 PM AMQ9508: Program cannot connect to the queue manager.
    10/06/2010 04:12:37 PM AMQ9999: Channel program ended abnormally.
    16:12:37 : EVENT : Connection Reconnected
    16:12:38 : EVENT : Connection Broken
    message < Message 11 >
    message < Message 12 >
    message < Message 13 >
    message < Message 14 >
    message < Message 15 >
    message < Message 16 >
    message < Message 17 >
    message < Message 18 >
    message < Message 19 >
    message < Message 20 >
    message < Message 21 >
    message < Message 22 >
  5. Run the sample program amqsghac on Server 1 to get the messages:
    [root@lins ~]# amqsghac Q QM1
    Sample AMQSGHAC start
    10/06/2010 04:14:33 PM AMQ9508: Program cannot connect to the queue manager.
    10/06/2010 04:14:33 PM AMQ9999: Channel program ended abnormally.
    message < Message 1 >
    message < Message 2 >
    message < Message 3 >
    message < Message 4 >
    message < Message 5 >
    message < Message 6 >
    message < Message 7 >
    message < Message 8 >
    message < Message 9 >
    message < Message 10 >
    message < Message 11 >
    message < Message 12 >
    message < Message 13 >
    message < Message 14 >
    message < Message 15 >
    message < Message 16 >
    message < Message 17 >
    message < Message 18 >
    message < Message 19 >
    message < Message 20 >
    message < Message 21 >
    message < Message 22 >

Conclusion

This article showed you how to set up a multi-instance queue manager on several UNIX platforms, including AIX, HP-UX, and Solaris, and how to run sample programs to verify client auto-reconnection.

Acknowledgements

The authors would like to thank Umamahesh Ponnuswamy of the IBM WebSphere MQ Level-3 Support Team for reviewing this article for technical accuracy.
