Testing statement for IBM MQ multi-instance queue manager file systems

Question & Answer


Question

Which environments have IBM used to test multi-instance queue managers?

Answer

This document is not a support statement. The support statement can be found in IBM MQ's support position on Virtualization, low-level hardware, file systems on networks and high availability. This document describes the testing that IBM has conducted on network file systems for use with the IBM MQ multi-instance queue manager feature.

To validate an environment that IBM has not tested, please follow the guidance in Testing a shared file system for compatibility with WebSphere MQ Multi-instance Queue Managers.

To use the multi-instance queue manager feature of IBM MQ, you need a shared file system on networked storage, such as a NAS, or a cluster file system, such as IBM's General Parallel File System (GPFS). You can use a SAN as the storage infrastructure for the shared file system.

It can be advantageous to use a cluster file system, such as GPFS, in preference to a standard network file system, such as NFS. Cluster file systems differ in that both the server and client parts of the solution are usually provided by the same vendor, often making problem diagnosis and resolution quicker.

There are three fundamental requirements that a shared file system must meet to work reliably with IBM MQ (the first two are illustrated in the sketch after this list):
  1. Data write integrity. Data write integrity is sometimes called "Write through to disk on flush". The queue manager must be able to synchronize with data being successfully committed to the physical device. In a transactional system, you need to be sure that some writes have been safely committed before continuing with other processing, and that the ordering of writes across multiple files is honored.
  2. Guaranteed exclusive access to files. In order to synchronize multiple queue managers, there needs to be a mechanism for a queue manager to obtain an exclusive lock on a file.
  3. Release locks on failure. If a queue manager fails, or if there is a communication failure with the file system, files locked by the queue manager need to be unlocked and made available to other processes without waiting for the queue manager to be reconnected to the file system. Modern file systems, such as NFS v4, use leased locks to detect failures and then release locks following a failure. Older file systems, such as NFS v3 that do not have a reliable mechanism to release locks after a failure, must not be used with multi-instance queue managers.
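For illustration only, the following minimal C sketch (not IBM-supplied code; the file path is an example) shows the two POSIX mechanisms that requirements 1 and 2 depend on: taking an advisory exclusive lock with fcntl(), and forcing written data through to stable storage with fsync(). A file system is suitable only if these calls behave correctly over the network.

    /* Minimal sketch: advisory exclusive locking and write-through.
       Not IBM-supplied code; /MQHA/testfile is an example path. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/MQHA/testfile", O_CREAT | O_RDWR, 0644);
        if (fd < 0) { perror("open"); return 1; }

        /* Requirement 2: an exclusive (write) lock on the whole file.
           The lock is advisory: it constrains only processes that also
           use fcntl() locking, which is the scheme IBM MQ relies on. */
        struct flock fl;
        memset(&fl, 0, sizeof(fl));
        fl.l_type = F_WRLCK;          /* exclusive lock */
        fl.l_whence = SEEK_SET;       /* l_start = l_len = 0: whole file */
        if (fcntl(fd, F_SETLK, &fl) == -1) { perror("fcntl"); return 1; }

        /* Requirement 1: fsync() must not return until the data has been
           committed to the physical device, not merely to a cache. */
        const char msg[] = "committed before continuing\n";
        if (write(fd, msg, sizeof(msg) - 1) == -1) { perror("write"); return 1; }
        if (fsync(fd) == -1) { perror("fsync"); return 1; }

        close(fd);                    /* closing the file releases the lock */
        return 0;
    }

Requirement 3 cannot be demonstrated by a short program: it is a property of the file system itself, such as the lease-based locking in NFS v4.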

If a shared file system does not meet these requirements, the queue manager data and logs might get corrupted when using the shared file system in a multi-instance queue manager configuration. This might result in a failure to start IBM MQ, and possible data loss.

On operating systems other than Microsoft Windows, IBM MQ provides a tool called amqmfsck to assist with checking the suitability of networked storage for use with multi-instance queue managers. The tool can verify the basic configuration of the networked storage, such as access permissions, and can also help to verify the second and third requirements above. It cannot check that data write integrity is maintained, because it cannot observe whether data is being safely committed to disk as opposed to being held in a cache.
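For example, the checks might be run as follows, where the directory /MQHA/qmdata is an example path on the shared file system. Full instructions are in Testing a shared file system for compatibility with WebSphere MQ Multi-instance Queue Managers.

    amqmfsck /MQHA/qmdata        (checks basic file locking and permissions)
    amqmfsck -w /MQHA/qmdata     (tests waiting for and releasing locks; run on two machines at the same time)
    amqmfsck -c /MQHA/qmdata     (tests concurrent writes to a file; run on two machines at the same time)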

Multi-instance queue managers do not work with mandatory file locking. The NFS support provided by some NAS devices enforces mandatory file locking. Although this is permitted by the NFS v4 specification, multi-instance queue managers were designed to use the less restrictive advisory file locking scheme and are not compatible with mandatory file locking. IBM has encountered mandatory file locking only with NAS devices from the EMC Celerra family. Please note that the version of amqmfsck supplied with IBM MQ 7.0.1 does not test for mandatory file locking, although later versions do.

The following file systems are known not to work because they do not meet IBM MQ's technical requirements:
  • Network File System (NFS) version 3 - does not provide lease-based file locking.
  • Red Hat Global File System (GFS, or GFS1) - does not provide the correct locking semantics.
  • Oracle Cluster File System version 2 (OCFS2) - does not provide the correct locking semantics in version 1.4.
  • Oracle ASM Cluster File System (ACFS) - does not provide the correct locking semantics.
  • Gluster V3.x, V4.x, and V5.x, using the Gluster Native (FUSE) client.
  • Azure Files, using a Linux CIFS client (tested February 2020) [1]
Notes:
  1. The Linux CIFS client is used on Linux nodes running in the Red Hat OpenShift Container Platform.

Some file systems have been tested with IBM MQ by the file system vendor rather than by IBM. Please note that vendor-affirmed test results have not been validated by IBM.
 
The following file systems have been tested by IBM and have been found to meet IBM MQ's technical requirements:
  • IBM AIX 5.3 TL10 NFS v4 server [1, 2, 3, 4, 5, 7]
  • IBM General Parallel File System 3.2.1
  • IBM General Parallel File System 3.4.0
  • IBM i5/OS NetServer V6R1
  • IBM System Storage N series Data ONTAP 7.3.2 NFS v4 server [1, 2, 3, 4, 5, 7]
  • Microsoft Windows 8 [6, 11]
  • Microsoft Windows Server 2008 [6, 11]
  • Microsoft Windows Server 2012 [6, 11]
  • PortWorx 2.5.5 [10]
  • Red Hat Enterprise Linux 5.3 NFS v4 server [1, 2, 3, 4, 5, 7]
  • Red Hat Enterprise Linux 6.5 NFS v4 server [1, 2, 3, 4, 5, 7]
  • Red Hat Global File System 2 (GFS2)
  • Red Hat OpenShift Container Storage 4.2 (CephFS) [9]
  • Red Hat OpenShift Container Storage 4.3 (CephFS) [9]
  • SUSE Linux Enterprise Server 10 NFS v4 server [1, 2, 3, 4, 5, 7]
  • Veritas Storage Foundation V5.0 MP3 RP3 Cluster File System
  • Veritas Storage Foundation V5.1 SP1 Cluster File System
  • AWS EFS - subject to locking considerations [8]
Notes:
  1. Multi-instance queue managers on IBM AIX 5.3 TL6 to TL9 using NFS v4 require AIX APAR IZ29559 (or equivalent for the specific technology level).
  2. NFS v4 has been found not to work with IBM i.
  3. NFS v4 was tested using the following mount options 'rw,bg,hard,intr,vers=4,sec=sys' and the following export options 'rw,sync,no_wdelay,fsid=0'. An example export and mount invocation follows these notes.
  4. SUSE Linux Enterprise Server V10 Update 3 introduced a suspected problem in the NFS v4 server which prevents correct operation of multi-instance queue managers. The problem was rectified in kernel level 2.6.16.60-0.60.1.
  5. Multi-instance queue managers on Solaris using NFS v4 require Solaris 10 with patch 147440-13 (SPARC) or patch 147441-13 (x86-64). This patch supersedes IDR 145513 revision 3 and patch 147268-01 (SPARC), and IDR 145514 revision 3 and patch 147269-01 (x86-64) which are no longer supported.
  6. If using a Windows cluster file system, a failover of the cluster file system will trigger an IBM MQ multi-instance queue manager failover, because Windows returns a file system error to IBM MQ. To avoid this, use SMB 3.0 or later with the 'Continuous Availability' option. SMB 3.0 became available in Microsoft Windows Server 2012.
  7. It has been found that server delegation must be disabled to prevent I/O errors under certain conditions with NFS v4. Mount the file system with 'nfsv4delegation=NONE' to disable server delegation.
  8. A description of locking considerations, which should be reviewed before deploying IBM MQ with AWS EFS, is available from https://www.ibm.com/support/pages/ibm-mq-considerations-efs-aws.
  9. See also IBM MQ: Considerations for OpenShift Container Storage 4.2 and 4.3 (CephFS).
  10. See also IBM MQ: Considerations for PortWorx 2.5.5.
  11. Windows multi-instance testing was performed using CIFS shares, as required by the Requirements for shared file systems on Microsoft Windows.
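As an illustration of note 3, an NFS v4 configuration using those options might look as follows. The exported directory /mqha, the server name nfsserver, and the client mount point /MQHA are hypothetical; because fsid=0 makes the exported directory the root of the NFS v4 pseudo-file system, the client mounts 'nfsserver:/'.

    # /etc/exports on the NFS v4 server (example path; export options from note 3)
    /mqha *(rw,sync,no_wdelay,fsid=0)

    # Mount command on each queue manager node (mount options from note 3)
    mount -t nfs -o rw,bg,hard,intr,vers=4,sec=sys nfsserver:/ /MQHA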

[{"Business Unit":{"code":"BU059","label":"IBM Software w\/o TPS"},"Product":{"code":"SSYHRD","label":"IBM MQ"},"ARM Category":[{"code":"a8m0z00000008NKAAY","label":"Components and Features-\u003EHigh Availability (HA)-\u003EMulti Instance Queue Managers"}],"ARM Case Number":"","Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"All Version(s)","Line of Business":{"code":"LOB67","label":"IT Automation \u0026 App Modernization"}}]

Product Synonym

IBM MQ; WebSphere MQ; WMQ; MQ

Document Information

Modified date:
18 March 2024

UID

swg21433474