WebSphere MQ Clusters, queue manager versions and release migration.
Anthony Beardsmore
This subject is another from the ‘Frequently asked questions’ bag for WMQ Clusters. It comes in various sub-categories, and I’ll try to tackle them all together here.
The first thing to understand is that WebSphere MQ Clustering doesn’t try to be too smart – anything cluster-related that a particular repository process doesn’t understand, it simply ignores. That might sound philosophically closed-minded – but it’s great news for an administrator trying to deal with an estate of hundreds of queue managers in a cluster. Keeping them all at the same level would be an impossible task, so communication between cluster queue managers works on a lowest common denominator basis. Administration messages include all the information which the sender knows about, and if the recipient expected more, it fills in the blanks with defaults. Any excess information is discarded.
In theory, then, any WMQ queue manager will happily participate in a cluster with any other version from the day clusters were introduced (although of course, if a release is out of support you do so at your own risk). However, there are a few more things to think about.
If a Full Repository (FR) doesn’t understand some fields of a structure, say a new property of a queue, or a whole new object type (for example Topic objects when they were introduced in Version 7), it won’t be able to store and forward the new properties or objects. For this reason, we always recommend upgrading FRs first if at all possible. Don’t worry if you can’t do this for some reason – the cluster will still work fine, but you need to be careful not to make use of any new cluster function until you do get round to it. If you do try to make use of new features, bear in mind the above principle – WMQ will ignore what it doesn’t understand, so things might look as though they’re working… until you realise that that new workload balancing parameter isn’t having the effect you wanted!
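As a quick sanity check before exploiting new function, you can ask each queue manager what it knows about its fellow cluster members. A minimal MQSC sketch (run via runmqsc against one of your queue managers; the exact output fields vary by release):

```
DISPLAY CLUSQMGR(*) VERSION QMTYPE
```

VERSION shows the level of MQ each cluster queue manager is running, and QMTYPE identifies which ones are Full Repositories (REPOS) – useful for confirming that the FRs are at least at the level of any feature you plan to use.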
One specific situation in which you might deliberately continue with FRs at a lower release than PRs is when the FRs are hosted on z/OS. At the time of writing the highest level released on this platform is 7.1, while WMQ for Distributed is at version 7.5. In this particular case, there is no concern with allowing the FRs to remain at 7.1, as there are no changes to the structures exchanged between repositories in these two releases. This is because the majority of the changes in 7.5 simply relate to the repackaging of the FTE (now Managed File Transfer) and AMS components. Even the one cluster-related change, allowing the use of multiple cluster transmit queues in 7.5, can safely be exploited in this configuration as all changes are local to the individual queue manager, and not flowed through the Full Repositories.
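To illustrate the point, the multiple cluster transmission queue behaviour in 7.5 is switched on with a purely local queue manager attribute – nothing flows to the repositories. A minimal MQSC sketch:

```
ALTER QMGR DEFCLXQ(CHANNEL)
```

With DEFCLXQ(CHANNEL), each cluster-sender channel gets its own dynamically defined transmission queue rather than sharing SYSTEM.CLUSTER.TRANSMIT.QUEUE; setting the attribute back to SCTQ restores the original behaviour.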
When it comes to migrating an individual queue manager in a cluster, whether it’s a Full Repository or not, the process is much the same as for any other migration: the only additional consideration is that you might want to ‘SUSPEND’ the queue manager from the cluster before taking it down, which gives other queue managers a hint to avoid sending work to cluster queues hosted here. However, bear in mind that any message sent to this particular queue manager by name, or sent with BIND_ON_OPEN, will have to wait on the transmission queues until it becomes available again. Don’t forget to take a backup as a rollback strategy (and if you do have to use it, remember that you need to REFRESH the queue manager in the cluster after restoring). On z/OS, make sure that you have any backwards migration PTFs in place.
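The suspend step, and the refresh that is only needed if you actually restore from backup, look like this in MQSC (the cluster name ‘APPCLUS’ is hypothetical – substitute your own):

```
* Before taking the queue manager down:
SUSPEND QMGR CLUSTER('APPCLUS')

* Only if you have restored from a backup, once the queue manager is back up:
REFRESH CLUSTER('APPCLUS') REPOS(NO)
```

REPOS(NO) is the appropriate form on a Partial Repository; a clean, successful migration needs no REFRESH at all.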
After starting the queue manager on the new product version, remember to RESUME the queue manager in the cluster; no further action is required – no REFRESH CLUSTER, for example, is needed as part of a clean upgrade.
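The corresponding resume step, again with a hypothetical cluster name:

```
RESUME QMGR CLUSTER('APPCLUS')
```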
For your Full Repositories, always take them down and bring them back up one at a time (so that there is always one available for business as usual). Don’t worry too much about the fact that this means you will only have one for a period – this is why you had two in the first place (and in the worst case the cluster can continue processing for weeks without any FRs present – only first-time MQOPENs and new or altered object definitions will be affected).
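Before taking one FR down, you can confirm the other is visible and its channels are running – a sketch in MQSC, assuming nothing about your naming:

```
DISPLAY CLUSQMGR(*) QMTYPE STATUS
```

An entry with QMTYPE(REPOS) and a running channel status indicates an available Full Repository, so it is safe to proceed with the other one.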
As always, see Knowledge Center for more best practice information on this and similar clustering topics (in particular http
and for general version migration information http