
Reconfiguring multi-instance queue managers for the high availability configuration

Product Documentation


Abstract

IBM Intelligent Operations Center (IOC) 1.6 configures multi-instance queue managers and brokers to support a high availability messaging solution. The configuration of this capability as shipped with IBM Intelligent Operations Center 1.6 is not ideal. This document discusses reconfiguration options that customers should implement to support a more robust failover solution.

Content

IBM Intelligent Operations Center can be installed in two ways. The first, called a standard topology, consists of four (4) servers. Products based on WebSphere Application Server are minimally configured as a 'cluster of one'. Operational personnel can then create additional cluster members, enabling limited horizontal scaling and improving some aspects of availability.

The second implementation, called the high availability topology, consists of eight (8) servers. Simplistically (but not entirely accurately), you can think of a set of 4 servers serving as standby servers for a set of 4 primary servers. This offers a more robust portfolio of failover capabilities.

The optional semantic model server is not addressed or affected by this document.


The problem with the default configuration

Multi-instance queue managers require shared storage to store logs and message data. This shared storage must be available to all of the multi-instance queue managers. In IBM Intelligent Operations Center 1.6, the primary queue managers are hosted on the primary IBM Intelligent Operations Center analytics server (analytics server 1) and the standby queue managers are hosted on the standby IBM Intelligent Operations Center analytics server (analytics server 2).

Where to host the shared storage is problematic since the IBM Intelligent Operations Center installation process is unaware of the customer environment or the preferred shared storage implementations. To provide a working reference implementation, the IBM Intelligent Operations Center 1.6 installation process provides a Network File System (NFS) implementation with the primary IBM Intelligent Operations Center analytics server as the file system host. Obviously, this is not an optimal solution: losing the primary server (which hosts the exported NFS directory) renders the messaging component of the standby server inoperable as well.

Multi-instance queue managers should be reconfigured to conform to local shared storage practices. Two options are described in this document:

  1. Standalone or external NFS server
  2. Storage Area Network (SAN) storage



Planning Considerations

Carefully consider whether you want to install the default high availability implementation of the WebSphere Message Queue (WMQ) and WebSphere Message Broker (WMB) components.

Note: By default, the high availability components are installed.

Pros:
• Basic high availability functionality out of the box
• Suitable for prototype or demonstration platforms

Cons:
• If the default implementation is installed, conversion to an external NFS or SAN implementation will require manual remediation steps

Neutral:
• High availability messaging may never be required (Note: If high availability messaging is not installed, some IBM Intelligent Operations Center tools may need to be modified, namely the platform control tool and the system verification test tool)

Table 1: Considerations for installing the default high availability messaging components

Illustration 1: IBM Intelligent Operations Center command line installer options




WebSphere Message Queue and WebSphere Message Broker High Availability Installation

If you are using the IBM Intelligent Operations Center command line installer (iop.ha.install.sh), it is possible to alter the usual installation process to exclude the creation and configuration of the WebSphere Message Queue (WMQ) and WebSphere Message Broker (WMB) artifacts:

• This includes the default NFS configuration.

• This scenario only affects the configuration of the WMQ/WMB artifacts. The base messaging products are still installed on the servers.

This is one of the scenarios depicted in Illustration 1: IBM Intelligent Operations Center command line installer options.

This scenario is useful when:

• WebSphere Message Queue and WebSphere Message Broker artifacts are not required by the application.

• You do not plan to use the default NFS implementation, but rather an alternative NFS or SAN implementation.

Note: If you are using an alternative NFS implementation, the NFS server must support NFS v4 or higher. NFS v3 or lower is not supported by WebSphere Message Queue for failover. For more details, refer to Testing statement for IBM MQ multi-instance queue manager file systems.
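A quick way to confirm which protocol version an existing mount negotiated, assuming the default WMQ.MOUNT.DIR value of /opt/ibm/ioc/shared/wmq, is the nfsstat command on the NFS client:

# Run on the NFS client hosting a standby queue manager instance.
# nfsstat -m lists each NFS mount with its negotiated options;
# look for vers=4 in the flags (vers=3 is not supported for failover).
nfsstat -m | grep -A 1 /opt/ibm/ioc/shared/wmq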


Instructions


If you are performing a new installation, you do not want to use the default NFS implementation (with the NFS export on analytics server 1), and you want to go directly to a SAN implementation or to a standalone NFS server (for example, an NFS server external to the IOC system), unmanage the default NFS implementation by running the iop.unmanage.wmb.sh script. The unmanage action changes the WMQ/WMB NFS-specific tasks to Ready so that they never run; for installation purposes, the SAN directory or the standalone NFS directory is then just another directory. All of these steps are performed on the installation server (analytics server 1).

Partially run the command line installer

Run the following command line installer (iop.ha.install.sh) steps:

• 1 - Validate installation media checksums

• 2 - Copy templates to topology directory

• 3 - Create topology keystore

• 4 - Parameterize all topologies


Run the unmanage script

Run the iop.unmanage.wmb.sh script. This script includes the iop.managefacet.sh script and performs the following functions:

• Backs up the original topology files.

• For each referenced topology, changes the topology Status attributes for all components associated with the facet mq_ha to “Ignore” and saves the changes to a temporary file. The topologies referenced are in the following files:

◦ iop.ha.coreinst.xml

◦ iop.ha.coreconfig.xml

• Renames the temporary topology files to the original file names.

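To spot-check the result, the topology files can be inspected directly; this assumes the files are in the /install_home/ioc16/topology directory:

# Each component associated with the mq_ha facet should now carry a
# Status of "Ignore" in both topology files.
cd /install_home/ioc16/topology
grep Ignore iop.ha.coreinst.xml iop.ha.coreconfig.xml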

Continue with the command line installer

Continue with the remainder of the installation steps in iop.ha.install.sh.



Option A: Implementing a standalone NFS Server

There are three scenarios to consider when using high availability messaging and a standalone NFS server:

1. Intelligent Operations Center was installed with the default NFS/WMQ/WMB configuration.

2. Intelligent Operations Center was installed, but the NFS/WMQ/WMB artifacts were unmanaged.

3. Intelligent Operations Center is not yet installed, and the implementation choice is to replace the default NFS/WMQ/WMB configuration with a standalone NFS implementation.

Options 2 and 3 are similar in that no existing NFS/WMQ/WMB artifacts have to be removed.

Option 1:
• Remove existing NFS/WMQ/WMB resources. See "Backing out existing NFS/WMQ/WMB Resources."
• Configure users. See "Configuring users."
• Run the wmqextnfs topology. See "Install the wmqextnfs topology."

Option 2:
• Configure users. See "Configuring users."
• Run the wmqextnfs topology. See "Install the wmqextnfs topology."

Option 3:
• Run the Intelligent Operations Center installation but unmanage the NFS/WMQ/WMB HA components. See "WebSphere Message Queue and WebSphere Message Broker High Availability Installation."
• Run the wmqextnfs topology. See "Install the wmqextnfs topology."

Backing out existing NFS/WMQ/WMB Resources


Delete existing Message Broker Artifacts

This step assumes that the active server is the Intelligent Operations Center primary analytics server (analytics server 1) and its broker is the currently active instance.


On the primary analytics server (analytics server 1)

As the mqm user run:

# Load the WebSphere Message Broker command environment
source /opt/IBM/mqsi/8.0.0.1/bin/mqsiprofile
# Stop the broker, then delete it
mqsistop IOC_BROKER
mqsideletebroker IOC_BROKER

On the standby server (analytics server 2)

As the mqm user run:

# Load the WebSphere Message Broker command environment
source /opt/IBM/mqsi/8.0.0.1/bin/mqsiprofile
# Remove the standby broker instance definition
mqsiremovebrokerinstance IOC_BROKER

Delete existing Message Queue Artifacts


On the primary analytics server (analytics server 1) – ensure the queue manager is stopped

As the mqm user run:

# End the queue manager immediately (-i)
endmqm -i IOC.MB.QM

On the standby server (analytics server 2) – remove queue manager configuration references

As the mqm user run:

# Remove the queue manager configuration information from this server
rmvmqinf IOC.MB.QM

On the primary analytics server (analytics server 1) – delete the queue manager

As the mqm user run:

# Delete the queue manager and its data
dltmqm IOC.MB.QM

Delete NFS Configuration Artifacts


On the standby server (analytics server 2 - NFS-mounting)

As user root (or account with appropriate privileges) run:

# Unmount the shared WMQ directory and remove the mount point
umount /opt/ibm/ioc/shared/wmq
rm -rf /opt/ibm/ioc/shared/wmq

Note: Adjust the mount point if it does not match the WMQ.MOUNT.DIR value in the iop.ha.properties file.

On the primary analytics server (analytics server 1 - NFS-exporting)

As user root (or account with appropriate privileges) run:

# Unmount and remove the shared WMQ directory, then stop the NFS service
umount /opt/ibm/ioc/shared/wmq
rm -rf /opt/ibm/ioc/shared/wmq
/etc/init.d/nfs stop

Note: Adjust the mount point if it does not match the WMQ.MOUNT.DIR value in the iop.ha.properties file.

Edit the /etc/exports file and delete the export statement, which looks something like:

$exportedFolder *(rw,sync,no_wdelay,fsid=0)

Edit the /etc/fstab file and delete the previous mount entry for /opt/ibm/ioc/shared/wmq to avoid duplicate mount errors.
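For reference, the fstab entry being removed will look something like the following hypothetical example; anaserver1 stands in for the actual host name of analytics server 1, and the options will match whatever the original installation generated:

# Hypothetical /etc/fstab entry on analytics server 2 (NFS-mounting side).
anaserver1:/ /opt/ibm/ioc/shared/wmq nfs4 hard,intr 0 0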


Configuring users


Configure users on the NFS server

The Intelligent Operations Center NFS implementation requires consistent user and group information across the NFS server and the NFS clients. Add the user mqm and the groups mqm and mqbrkrs to the NFS server. The iop.ha.wmqextnfs.nfscfg.sh script, shown below, can be used to make this change; it is in the /samples directory on the installation server. Alternatively, extract the three (3) commands and run them directly on the NFS server.

#!/bin/bash
#-----------------------------------------------------------------------------
# Sample of creating users for WMQ/WMB multi-instance queue managers at
# .. 'external NFS' server
#-----------------------------------------------------------------------------

typeset -i rc=0
typeset -i grc=0

# Exit code 9 from groupadd/useradd means the group or user already exists.
groupadd -g 778 mqbrkrs > /dev/null 2>&1
rc=$?
if [[ $rc == 0 || $rc == 9 ]]; then
    echo "group mqbrkrs added or already exists"
else
    echo "groupadd mqbrkrs RC: $rc"
    grc=$rc
fi

groupadd -g 777 mqm > /dev/null 2>&1
rc=$?
if [[ $rc == 0 || $rc == 9 ]]; then
    echo "group mqm added or already exists"
else
    echo "groupadd mqm RC: $rc"
    grc=$rc
fi

useradd -u 777 -g mqm -G mqbrkrs mqm > /dev/null 2>&1
rc=$?
if [[ $rc == 0 || $rc == 9 ]]; then
    echo "user mqm added or already exists"
else
    echo "useradd mqm RC: $rc"
    grc=$rc
fi

if [[ $grc -ne 0 ]]; then
    echo "One or more commands failed"
    exit $grc
fi
exit 0
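After the script runs, it is worth confirming that the identities match across machines; the numeric IDs below assume the sample script above was used unchanged:

# Run on the NFS server and on both analytics servers; the uid/gid values
# must match everywhere for NFS v4 identity mapping to work correctly.
id mqm
# Expected output: uid=777(mqm) gid=777(mqm) groups=777(mqm),778(mqbrkrs)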


Configure the rpcidmapd daemon on the NFS server

The rpcidmapd daemon must be running on your NFS server. Run the following command as the root user to verify that it is configured to start at the appropriate runlevels:

chkconfig --list

If the rpcidmapd daemon is not started or configured to start, run the following commands as a root user to configure and start the daemon:

chkconfig --level 235 rpcidmapd on
service rpcidmapd restart
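Because NFS v4 maps file owners by name, the identity-mapping domain should also agree across machines. This check is a suggestion rather than part of the shipped procedure:

# The Domain value in /etc/idmapd.conf should be identical on the NFS
# server and on both analytics servers.
grep '^Domain' /etc/idmapd.conf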


Install the wmqextnfs topology


Update the wmqextnfs topology parameters

On the installation server (analytics server 1), change to the installation topology directory (/install_home/ioc16/topology) and edit the iop.ha.wmqextnfs.properties file.

Modify the server connection properties for the external NFS server (a server that is not part of the Intelligent Operations Center system). Update the HOST, ACCOUNT, and ACCOUNT.PWD properties as appropriate for your environment.

#--------------------------------------------------------------
# External NFS server configuration
#--------------------------------------------------------------
NFS.1.HOST = nfsdev.wma.ibm.com
NFS.1.ACCOUNT = root
NFS.1.ACCOUNT.PWD = Pas5W0rd
NFS.1.SSH_PORT = 22
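Before running the topology, it can save time to verify that the connection properties actually work. The host and port below are the sample values from the properties file above:

# Run from the installation server (analytics server 1); a password prompt
# followed by "connected" confirms the NFS.1.* values are usable.
ssh -p 22 root@nfsdev.wma.ibm.com 'echo connected'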

Run the wmqextnfs installation

The installation of the wmqextnfs topology is done in several steps, presented as a menu by the iop.ha.wmqextnfs.sh script:

[root@ba7ana1 bin]# ./iop.ha.wmqextnfs.sh -p topology_password

** Intelligent Operations Center external network file system **
** configuration **

1 - Prepare - Copy template to topology directory
2 - Prepare - Parameterize topology
3 - Prepare - Encrypt topology
4 - Install - Setup the external network file system

To run the installation:

  1. Log on to the installation server as the root user.
  2. Navigate to the /install_home/ioc16/bin directory.
  3. Run the ./ioc-env.sh command.
  4. Run the iop.ha.wmqextnfs.sh script, selecting steps 1-4 in order.

Note: Steps 1-3 take approximately 1 minute each; step 4 takes approximately 7 minutes.

The iop.ha.wmqextnfs.sh script is in the /install_home/ioc16/bin directory on the installation server. The script must be run with the -p parameter specifying the topology password defined when IBM Intelligent Operations Center was installed. For example: iop.ha.wmqextnfs.sh -p password.


Test the configuration


Run the following on the primary server (analytics server 1)

As the mqm user, start the active queue manager instance and the broker:

# Start the queue manager, permitting standby instances (-x)
strmqm -x IOC.MB.QM
# Load the broker environment and start the broker
source /opt/IBM/mqsi/8.0.0.1/bin/mqsiprofile
mqsistart IOC_BROKER

Run the following on the standby server (analytics server 2)

As the mqm user, start the standby queue manager instance and the broker:

# Start the queue manager with -x; because the active instance is already
# running on analytics server 1, this instance becomes the standby
strmqm -x IOC.MB.QM
# Load the broker environment and start the broker
source /opt/IBM/mqsi/8.0.0.1/bin/mqsiprofile
mqsistart IOC_BROKER
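With both instances started, the pairing can be confirmed from either server using dspmq with the -x option, which reports per-instance status:

# Run as the mqm user on either analytics server.
dspmq -x -m IOC.MB.QM
# One instance should report MODE(Active) and the other MODE(Standby).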

Verify the multi-instance queue manager

Follow directions in the WebSphere MQ product documentation to test the multi-instance queue manager.
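One way to exercise failover, sketched here as a suggestion rather than as the product-documented procedure, is to end the active instance with the switchover option and watch the standby take over:

# As the mqm user on the server hosting the active instance: end it
# immediately (-i) and transfer control to the standby instance (-s).
endmqm -is IOC.MB.QM

# Then, from either server, confirm the former standby is now active.
dspmq -x -m IOC.MB.QM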



Option B: Storage Area Network

The following procedure is used to configure an IBM SAN. Use this as a reference for your own SAN implementation. This option is only applicable if you have already installed IBM Intelligent Operations Center and want to convert from the default NFS shared directory implementation to a SAN shared directory implementation. If Intelligent Operations Center was already installed with the default NFS shared directory (located on analytics server 1), then do not run the iop.unmanage.wmb.sh script described above. Instead, follow the instructions below.

Step 1 – Delete existing Message Broker Artifacts

Same as "Delete existing Message Broker Artifacts."


Step 2 – Delete existing Message Queue Artifacts

Same as "Delete existing Message Queue Artifacts."


Step 3 – Delete NFS Configuration Artifacts

Same as "Delete NFS Configuration Artifacts."


Step 4 – Configure SAN

1. Run the tar -xzvf IBM_XIV_Host_Attachment_Kit_1.10.0-b1221_for_RHEL_5_RHEL_6_x86-64_portable.tar.gz command.


2. Run the yum install device-mapper command.
3. Run the service multipathd start command.
4. Run the chkcongi multipathd on command.
5. Reboot the system after LUN has been configured.
6. Run the xiv_attach tool and follow the fiber channel steps.
7. Run the multipath -ll command to view the LUNs
8. Log in as root, open the disk utility, and format the disk to ext4. Once formatted, mount the drive
9. Should see use% increasing by placing a file on mount point.
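If the SAN mount should be restored automatically at boot, an /etc/fstab entry along the following lines can be added; the multipath alias mpatha is hypothetical and must match what the multipath -ll command reports, and the mount point must match the sharedFolder value used in Step 5:

# Hypothetical /etc/fstab entry for the SAN LUN.
/dev/mapper/mpatha /opt/ibm/ioc/shared/wmq ext4 defaults 0 0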

Step 5 – Configure Message Queue and Message Broker artifacts

On the IBM Intelligent Operations Center installation server, run the following steps to reconfigure the Message Queue and Message Broker instances.

1. Edit the iop.ha.coreconfig.xml file and do the following for the mq_ha and broker_ha elements.


    a) Change the value of the Status attribute to 'New'.

b) Check that the value of the sharedFolder element is consistent with the SAN mount point.


2. Run the following commands to reconfigure IBM Intelligent Operations Center.

    ./ba.sh doAction -t iop.ha.coreconfig -c mq_ha -action install -p topology-password

    ./ba.sh doAction -t iop.ha.coreconfig -c broker_ha -action install -p topology-password
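For reference, the entries edited in step 1 might look like the following fragment. The layout is hypothetical; the exact element structure in iop.ha.coreconfig.xml may differ, but the Status attribute and the sharedFolder element are the values that matter:

<!-- Hypothetical fragment; attribute and element names are taken from the
     shipped topology files, but the surrounding structure may differ. -->
<component facet="mq_ha" Status="New">
    <sharedFolder>/opt/ibm/ioc/shared/wmq</sharedFolder>
</component>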



Step 6 – Test Configuration

Same as "Test the configuration."



Appendices

Glossary

Term/Acronym: Meaning/Use

HA: High Availability.

NFS: Network File System. One of several types of 'shared storage' implementations. For IBM Intelligent Operations Center, only NFS Version 4 is supported.

WMB: WebSphere Message Broker.

WMQ: WebSphere Message Queue.

iop.managefacet.sh: Helper script, included by iop.unmanage.wmb.sh, that changes the Status attributes of topology facets.

[{"Product":{"code":"SS3NGB","label":"IBM Intelligent Operations Center"},"Business Unit":{"code":"BU059","label":"IBM Software w\/o TPS"},"Component":"--","Platform":[{"code":"PF016","label":"Linux"}],"Version":"1.6;1.6.0.1;1.6.0.2;1.6.0.3","Edition":"","Line of Business":{"code":"LOB59","label":"Sustainability Software"}}]

Document Information

Modified date:
17 June 2018

UID

swg27040402