[Linux]

Configuring and tuning the operating system on Linux

Use this topic when you are configuring IBM® MQ on Linux® systems.

Note: The information in this topic principally concerns global kernel tuning parameters, and applies to all Linux systems. The exception is the section Configuring the users who start IBM MQ, which is specific to individual users.

Shell interpreter

Ensure that the /bin/sh shell is a valid shell interpreter compatible with the Bourne shell; otherwise, the post-installation configuration of IBM MQ does not complete successfully. If the shell was not installed by using RPM, you might see a prerequisites failure for the /bin/sh shell when you try to install IBM MQ. The failure occurs because the RPM tables do not recognize that a valid shell interpreter is installed. If the failure occurs, you can reinstall the /bin/sh shell by using RPM, or specify the RPM option --nodeps to disable dependency checking during the installation of IBM MQ.
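As a quick sanity check (a minimal sketch; the interpreter that /bin/sh resolves to depends on your distribution), you can confirm that /bin/sh exists and executes a simple POSIX construct:

```shell
#!/bin/sh
# Print the interpreter that /bin/sh resolves to (for example dash or bash).
readlink -f /bin/sh

# Run a simple POSIX construct; a Bourne-compatible shell prints "ok".
/bin/sh -c 'x=ok; if [ "$x" = "ok" ]; then echo "$x"; fi'
```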
Note: The --dbpath option is not supported when installing IBM MQ on Linux.

Swap space

During periods of high load, IBM MQ can use virtual memory (swap space). If virtual memory becomes full, IBM MQ processes can fail or become unstable, affecting the system.

To prevent this situation, your IBM MQ administrator should ensure that the system has been allocated enough virtual memory, as specified in the operating system guidelines.
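One way to see how much swap space is configured (a sketch; the /proc/meminfo layout is Linux-specific, and the fallback message is illustrative):

```shell
#!/bin/sh
# Report the configured swap space from /proc/meminfo.
# Fall back to a placeholder message on systems without /proc/meminfo.
if [ -r /proc/meminfo ]; then
    grep '^SwapTotal' /proc/meminfo
else
    echo "SwapTotal: unknown"
fi
```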

System V IPC kernel configuration

IBM MQ uses System V IPC resources, in particular shared memory. However, a limited number of semaphores are also used.

The minimum configuration for IBM MQ for these resources is as follows:
Table 1. Minimum tunable kernel parameter values

Name     Kernel-name          Value       Increase  Description
shmmni   kernel.shmmni        4096        Yes       Maximum number of shared memory segments
shmmax   kernel.shmmax        268435456   No        Maximum size of a shared memory segment (bytes)
shmall   kernel.shmall        2097152     Yes       Maximum amount of shared memory (pages)
semmsl   kernel.sem           32          No        Maximum number of semaphores permitted per set
semmns   kernel.sem           4096        Yes       Maximum number of semaphores
semopm   kernel.sem           32          No        Maximum number of operations in a single semop call
semmni   kernel.sem           128         Yes       Maximum number of semaphore sets
thrmax   kernel.threads-max   32768       Yes       Maximum number of threads
pidmax   kernel.pid_max       32768       Yes       Maximum number of process identifiers
Notes:
  1. These values are sufficient to run two moderate sized queue managers on the system. If you intend to run more than two queue managers, or the queue managers are to process a significant workload, you might need to increase the values displayed as Yes in the Increase column.
  2. The kernel.sem values are contained within a single kernel parameter that holds the four values in the order semmsl, semmns, semopm, semmni.
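The four kernel.sem fields can be split out as follows (a minimal sketch using a hard-coded sample string that matches the minimum values above; on a live system you would read the value from sysctl kernel.sem):

```shell
#!/bin/sh
# Split the four kernel.sem fields into their named parameters.
# The sample string matches the minimum values from Table 1.
sem="32 4096 32 128"
set -- $sem
echo "semmsl=$1 semmns=$2 semopm=$3 semmni=$4"
```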
To view the current value of the parameter, log on as a user with root authority and type:

sysctl Kernel-name
To add or alter these values, log on as a user with root authority. Open the file /etc/sysctl.conf with a text editor, then add or change the following entries to your chosen values:

kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.shmmax = 268435456
kernel.sem = 32 4096 32 128
Then save and close the file.

To load these sysctl values immediately, enter the command sysctl -p.

If you do not issue the sysctl -p command, the new values are loaded when the system is rebooted.
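A quick way to compare a current value with the corresponding minimum from Table 1 (a sketch; the helper name and output format are illustrative):

```shell
#!/bin/sh
# Compare a tunable's current value with its minimum from Table 1.
meets_minimum() {
    # $1 = parameter name, $2 = current value, $3 = required minimum
    if [ "$2" -ge "$3" ]; then
        echo "$1: $2 (ok, minimum $3)"
    else
        echo "$1: $2 (below minimum $3)"
    fi
}

# Check shmmni against its minimum, if /proc/sys is available.
if [ -r /proc/sys/kernel/shmmni ]; then
    meets_minimum kernel.shmmni "$(cat /proc/sys/kernel/shmmni)" 4096
fi
```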

By default, the Linux kernel has a maximum process identifier that is also used for threads, and this might limit the number of threads that can be created.

The operating system reports an error when the system lacks the necessary resources to create another thread, or when the system-imposed limit on the total number of threads in a process {PTHREAD_THREADS_MAX} would be exceeded.

For more information on kernel.threads-max and kernel.pid_max, see Resource shortage in IBM MQ queue manager when running a large number of clients.

Setting RemoveIPC on IBM MQ

Attention: Leaving the RemoveIPC setting at its default value of Yes in the login manager configuration files (logind.conf and logind.conf.d) might cause IBM MQ-owned IPC resources to be removed outside the control of IBM MQ.

You should set the value to No. For more information on RemoveIPC, see the logind.conf man page.

TCP/IP configuration

If you want to use keepalive for IBM MQ channels, you can configure the operation of TCP keepalive by using the following kernel parameters:

net.ipv4.tcp_keepalive_intvl
net.ipv4.tcp_keepalive_probes
net.ipv4.tcp_keepalive_time
See Using the TCP/IP SO_KEEPALIVE option for further information.

To view the current value of a parameter, log on as a user with root authority and type sysctl Kernel-name.

To add or alter these values, log on as a user with root authority. Open the file /etc/sysctl.conf with a text editor, then add or change entries for these parameters, set to your chosen values.
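For example, the entries in /etc/sysctl.conf might look like the following (the values shown are purely illustrative assumptions, not recommendations; choose values that match your network requirements):

```ini
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 5
```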

To load these sysctl values immediately, enter the command sysctl -p.

If you do not issue the sysctl -p command, the new values are loaded when the system is rebooted.

RDQM - configuring resource limits and environment variables

For replicated data queue managers (RDQMs), configure the nproc and nofile values for the mqm user in /etc/security/limits.conf. Alternatively, set the LimitNOFILE and LimitNPROC variables in the Pacemaker systemd service unit file for RDQM, named rdqm.conf. If the resource limits (nproc and/or nofile) are configured in both limits.conf and rdqm.conf, the higher of the configured limits is used by the RDQM queue manager.

You can use rdqm.conf to configure other resource limits (for example, stack size) and environment variables.

Note that the rdqm.conf file is read only when the queue manager is started automatically by Pacemaker. This might be at system startup, or when the queue manager fails over to the node where the rdqm.conf file exists. If the queue manager is started manually with the strmqm command, it inherits the environment where strmqm is run.

The following steps create a sample configuration in rdqm.conf:
  1. Log in as root on the RDQM node.
  2. Create the directory /etc/systemd/system/pacemaker.service.d.
  3. Create the file rdqm.conf in that directory. The rdqm.conf file contains the required environment variables and resource limits in the following format:
    [Service] 
    Environment="MQ_ENV_VAR=1" 
    LimitNOFILE=65536 
    LimitNPROC=32768 
    LimitSTACK=16777216

    For more details on configuring the systemd unit file, consult your operating system documentation.

  4. Restart the pacemaker service:
    systemctl daemon-reload 
    systemctl restart pacemaker.service
    Any RDQM queue managers running on this node move to another node while pacemaker is restarted.
  5. Repeat the procedure on the other two RDQM nodes so that the same configuration is used by the RDQM queue manager when it fails over or switches over to other nodes.
Note: You should use qm.ini attributes in preference to environment variables to control queue manager behavior because the qm.ini file is replicated between RDQM nodes.

RDQM - configuring the kernel console log level

The DRBD kernel module (kmod-drbd) can sometimes write many messages at the KERN_ERR (3) log level. To prevent these messages being copied to the system console, which can cause significant processing delays affecting the entire system, reduce the first number of the kernel.printk parameter to 3. For more information about kernel message priorities, see https://www.kernel.org/doc/html/latest/core-api/printk-basics.html.

To view the current value of the parameter, log on as a user with root authority and type sysctl kernel.printk.

To add or alter this value, log on as a user with root authority. Open the file /etc/sysctl.conf with a text editor, then add or change the following entry to your chosen value:

kernel.printk = 3 4 1 7

To load these sysctl values immediately, enter the command sysctl -p. If you do not issue the sysctl -p command, the new values are loaded when the system is rebooted.

32-bit support on 64-bit Linux platforms

Some 64-bit Linux distributions no longer support 32-bit applications by default. For details of affected platforms, and guidance on enabling 32-bit applications to run on these platforms, see Hardware and software requirements on Linux systems.

Configuring the users who start IBM MQ

You must make the configuration changes described in Maximum open files and Maximum processes for all users who start IBM MQ. This usually includes the mqm user ID, but the same changes must be made for any other user IDs who start queue managers.

For queue managers started with systemd, specify equivalent NOFILE and NPROC values in the unit file that contains the queue manager service configuration.
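For example, the service configuration for such a unit might include limit settings like the following (the values are illustrative assumptions; the unit name and values depend on your configuration):

```ini
[Service]
LimitNOFILE=10240
LimitNPROC=4096
```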

Maximum open files

The maximum number of open file handles in the system is controlled by the parameter fs.file-max.

The minimum value for this parameter for a system with two moderate sized queue managers is 524288.
Note: If the operating system default is higher, you should leave the higher setting, or consult your operating system provider.

You are likely to need a higher value if you intend to run more than two queue managers, or the queue managers are to process a significant workload.

To view the current value of a parameter, log on as a user with root authority, and type sysctl fs.file-max.

To add or alter these values, log on as a user with root authority. Open the file /etc/sysctl.conf with a text editor, then add or change the following entry to your chosen value:

fs.file-max = 524288
Then save and close the file.

To load these sysctl values immediately, enter the command sysctl -p.

If you do not issue the sysctl -p command, the new values are loaded when the system is rebooted.

If you are using a pluggable security module such as PAM (Pluggable Authentication Modules), ensure that this module does not unduly restrict the number of open files for the mqm user. To report the maximum number of open file descriptors per process for the mqm user, log in as the mqm user and enter the following command:

ulimit -n
For a standard IBM MQ queue manager, set the nofile value for the mqm user to 10240 or more. To set the maximum number of open file descriptors for processes running under the mqm user, add the following information to the /etc/security/limits.conf file:

mqm       hard  nofile     10240
mqm       soft  nofile     10240
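A small check along these lines can warn when the effective limit is below the recommended value (a sketch; the 10240 threshold comes from the text above, and the helper name and output format are illustrative):

```shell
#!/bin/sh
# Warn if the current soft limit on open file descriptors is below a minimum.
check_nofile_limit() {
    # $1 = current nofile value, $2 = required minimum
    if [ "$1" -lt "$2" ]; then
        echo "WARN: nofile=$1 is below the recommended minimum of $2"
    else
        echo "OK: nofile=$1"
    fi
}

# Check the limit for the current user (run as mqm to check the mqm user).
check_nofile_limit "$(ulimit -n)" 10240
```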

The pluggable security module limits are not applied to queue managers started with systemd. To start an IBM MQ queue manager with systemd, set LimitNOFILE to 10240 or more in the unit file that contains the queue manager service configuration.

For instructions on how to configure nofile for RDQM queue managers, see RDQM - configuring resource limits and environment variables.

Maximum processes

A running IBM MQ queue manager consists of a number of threaded programs. Each connected application increases the number of threads running in the queue manager processes. It is normal for an operating system to limit the maximum number of processes that a user runs. The limit prevents operating system failures due to an individual user or subsystem creating too many processes. You must ensure that the maximum number of processes that the mqm user is allowed to run is sufficient. The number of processes must include the number of channels and applications that connect to the queue manager.

The following calculation is useful when determining the number of processes for the mqm user:
nproc = 2048 + clientConnections * 4 + qmgrChannels * 4 +
    localBindingConnections
where:
  • clientConnections is the maximum number of connections from clients on other machines connecting to queue managers on this machine.
  • qmgrChannels is the maximum number of running channels (as opposed to channel definitions) to other queue managers. This includes cluster channels, sender/receiver channels, and so on.
  • localBindingConnections is the maximum number of local binding connections; this does not include application threads.
The following assumptions are made in this algorithm:
  • 2048 is a large enough contingency to cover the queue manager threads. This might need to be increased if a lot of other applications are running.
  • When setting nproc, take into account the maximum number of applications, connections, channels and queue managers that might be run on the machine in the future.
  • This algorithm takes a pessimistic view and the actual nproc needed might be slightly lower for later versions of IBM MQ and fastpath channels.
  • On Linux, each thread is implemented as a light-weight process (LWP) and each LWP is counted as one process against nproc.
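The calculation above can be sketched as a small script (the workload numbers used here are illustrative assumptions, not recommendations):

```shell
#!/bin/sh
# Estimate the nproc value for the mqm user by using the formula above:
# nproc = 2048 + clientConnections * 4 + qmgrChannels * 4 + localBindingConnections
estimate_nproc() {
    # $1 = clientConnections, $2 = qmgrChannels, $3 = localBindingConnections
    echo $((2048 + $1 * 4 + $2 * 4 + $3))
}

# Example workload: 100 client connections, 50 channels, 10 local bindings.
estimate_nproc 100 50 10
```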
You can use the PAM_limits security module to control the number of processes that users run. You can configure the maximum number of processes for the mqm user as follows:

mqm       hard  nproc      4096
mqm       soft  nproc      4096
For more details on how to configure the PAM_limits security module, enter the following command:

man limits.conf

The pluggable security module limits are not applied to queue managers started with systemd. To start an IBM MQ queue manager with systemd, set LimitNPROC to a suitable value in the unit file that contains the queue manager service configuration.

For instructions on how to configure nproc for RDQM queue managers, see RDQM - configuring resource limits and environment variables.

You can check your system configuration using the mqconfig command.

For more information on configuring your system, see How to configure AIX® and Linux systems for IBM MQ.