Tuesday, October 13, 2015

WMQ High Availability (Multi-Instance) Setup

In this article I will describe how to set up HA queue managers using the multi-instance feature on RHEL. Please note that multi-instance support was added in WMQ v7, so it is not applicable to earlier MQ versions. Also, both MQ machines participating in the HA configuration must be running the same MQ version.

Environment:
                  Operating System:  Red Hat Enterprise Linux 6
                  Shared Storage:    NFS v4
                  WMQ:               v7.5


Note that you must have 'root' level access to perform the steps.

Assumption:
                We have two WMQ servers (mq.server1 and mq.server2) and one NFS server (nfs.server). We will use the user ID 'mqm' for MQ administration.

Now let us make sure that all the servers (the NFS server and both MQ servers) have the correct configuration.

1) Verify NFS Server:
      Let us assume that the share 'nfsshare' has been mounted on the NFS server and that we will use /nfsshare/MQ for storing the data and logs of our queue managers.

Log on to the NFS server with root access.
Run the command:
cat  /etc/exports

It should display the following entry:
/nfsshare *(rw,no_root_squash,sync,no_wdelay)

If not, update /etc/exports to include the above entry.
Run the command "showmount -e". It should list /nfsshare in the export list (the hostname shown will be that of your NFS server).
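If you had to edit /etc/exports, you can re-export and re-check without a reboot; a minimal sketch, assuming the hostname nfs.server used in this article:
exportfs -ra
showmount -e
# Expected output, roughly:
# Export list for nfs.server:
# /nfsshare *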
If the user mqm doesn't exist on the NFS server, you can use the commands below to create it and set adequate permissions (as you are logged on as root):
chmod -R 775 /nfsshare            
groupadd -g 501 mqm
useradd -g 501 -d /home/mqm -s /bin/bash -u 495 mqm

* Here I have chosen group ID 501 for the group mqm and user ID 495 for the user mqm. You can use different values, but they must match the corresponding mqm user and group on the NFS client machines (the MQ servers).

Set an appropriate password for mqm using the 'passwd' command.

Make sure that the /nfsshare/MQ/data and /nfsshare/MQ/log directories exist and have the correct ownership:
mkdir  /nfsshare/MQ
mkdir  /nfsshare/MQ/data
mkdir  /nfsshare/MQ/log
chown -R mqm:mqm /nfsshare/MQ

Run the command "id mqm" and verify that the user and group IDs match the values chosen above.
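With the IDs used in this article, the output should look roughly like this (illustrative; secondary groups may differ on your system):
uid=495(mqm) gid=501(mqm) groups=501(mqm)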


Now disable the iptables (IPv4) firewall:
service iptables stop
chkconfig iptables off

Set the NFSv4 ID-mapping domain (it must be the same on the NFS server and both MQ servers):
echo "Domain = local.domain" >> /etc/idmapd.conf

Run the commands below as well to enable and start the NFS services:
chkconfig nfs on
service rpcidmapd restart
chkconfig rpcidmapd on
service nfs start

Run the command "chkconfig --list iptables" and verify that iptables is off for all runlevels.
Run the command "cat /etc/idmapd.conf" and verify that the Domain entry added above is present in the output.
Run the command "chkconfig --list rpcidmapd" and verify that the service is enabled.
Run the command "service rpcidmapd status" and verify that the service is running.

Run the command "chkconfig --list nfs" and verify that the service is enabled.

Run the command "service nfs status" (or "/etc/init.d/nfs status") and verify that the NFS daemons are running.

Verify that /nfsshare has permissions 775 and that the 'MQ' directory inside it is owned by mqm:mqm.
Verify that the 'data' and 'log' directories have been created inside the 'MQ' directory and also have permissions 775 and mqm:mqm ownership.
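A quick way to check this (paths as used in this article):
ls -ld /nfsshare /nfsshare/MQ
ls -l /nfsshare/MQ
# Expect drwxrwxr-x (775) on these directories, with /nfsshare/MQ
# and its data and log subdirectories owned by mqm:mqm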

If all the above tests are positive, the NFS server is correctly set up for WMQ. Now reboot the machine using the command below:

/sbin/shutdown -r 0

After the NFS server has rebooted, it is ready to use.

2) Verify MQ Servers (NFS Clients):
Make sure that /nfsshare/MQ from the NFS server is mounted on /MQ on each WMQ server and that adequate permissions have been set up.
Log in as root on each MQ server. Validate the following on both WMQ servers and correct the configuration if anything is wrong.

Run the command "id mqm" and verify that the user and group IDs match those on the NFS server.
Make sure that the directory '/MQ' exists on the WMQ servers and mount /nfsshare/MQ on it (a one-off mount command is sketched below); make the mount persistent by adding it to /etc/fstab.
mkdir /MQ
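For a one-off mount before the /etc/fstab entry is in place, something like the following should work, assuming the export path and hostname used in this article:
mount -t nfs4 nfs.server:/nfsshare/MQ /MQ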

Also, run the commands below:
service iptables stop
chkconfig iptables off 
service netfs stop
chkconfig netfs on
echo "umask 0002" >> /var/mqm/.bashrc 
chown -R mqm:mqm /var/mqm   

Run the command "cat /etc/fstab" and verify that an entry for the NFS mount on /MQ is present; if not, add one.
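A minimal example entry, assuming the hostname and paths used in this article (adjust the mount options to suit your environment):
nfs.server:/nfsshare/MQ  /MQ  nfs4  rw,hard,intr  0 0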
Make sure that netfs is started, and restart rpcidmapd and nfslock after adding the domain to /etc/idmapd.conf:
service netfs start 
echo "Domain = local.domain" >> /etc/idmapd.conf
service rpcidmapd restart
service nfslock restart


Run the command "df -k" and verify that it displays the NFS mount on /MQ.
Check the ownership and permissions of /MQ; it should be owned by mqm:mqm with permissions 775.
Verify that the 'data' and 'log' directories are visible inside /MQ and also have permissions 775 and ownership mqm:mqm.
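A quick way to check this from an MQ server (paths as used in this article):
df -k /MQ
ls -ld /MQ
ls -l /MQ
# Expect /MQ to be mounted from nfs.server, with the data and log
# directories shown as drwxrwxr-x (775) and owned by mqm:mqm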

You can also see the full mount information using the command "mount -v".

Run the command "chkconfig --list iptables" and verify that iptables is off for all runlevels.

Run the command "chkconfig --list netfs" and verify that the service is enabled.
Run the command "service netfs status" (or "/etc/init.d/netfs status") and verify that /MQ is listed among the active NFS mount points.
Run the command "cat /var/mqm/.bashrc" and verify that it contains the "umask 0002" entry added above.
Run the command "cat /etc/idmapd.conf" and verify that it contains the Domain entry added above.
Run the command "chkconfig --list rpcidmapd" and verify that the service is enabled.
Run the command "service rpcidmapd status" (or "/etc/init.d/rpcidmapd status") and verify that the service is running.
If all these tests are positive, your WMQ servers are set up correctly to use the NFS mount.



3) Create HA Queue Managers:
 Now you can create the queue managers, pointing their data and log paths to the shared storage.

On mq.server1:
                 crtmqm -ld /MQ/log -md /MQ/data TEST_QM

Then generate the addmqinf command that will be needed on the second server:
dspmqinf -o command TEST_QM

Copy the output of the above command and save it (for example, in a text editor). The output will be in the following format:
addmqinf -s QueueManager -v Name=TEST_QM -v Directory=TEST_QM -v Prefix=/var/mqm -v  
    DataPath=...

On mq.server2:
Paste and run the addmqinf command that was saved in the previous step:
# addmqinf -s QueueManager -v Name=TEST_QM -v Directory=TEST_QM -v Prefix=/var/mqm -v DataPath=...
WebSphere MQ configuration information added.
#

Now you can start the multi-instance queue manager using the -x option on both servers. Whichever instance is started first becomes the active instance; the other becomes the standby.
strmqm -x TEST_QM
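To confirm which instance is active and which is standby, you can run "dspmq -x" on either server; the output will look something like the lines below (hostnames are illustrative):
dspmq -x
# QMNAME(TEST_QM)                  STATUS(Running)
#     INSTANCE(mq.server1) MODE(Active)
#     INSTANCE(mq.server2) MODE(Standby)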

You can switch the active instance between the servers using the supplied commands (a sketch follows below). Please see the WMQ Information Center for more information.
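For example, a manual switchover is typically done with "endmqm -s", which ends the active instance while allowing the standby to take over; a sketch of the usual sequence:
endmqm -s TEST_QM       # run on the currently active server; standby takes over
strmqm -x TEST_QM       # optionally restart this server as the new standby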
