Changing required node settings
Some services that run on IBM® Cloud Pak for Data require specific settings on the nodes in
the cluster. To ensure that the cluster has the required settings for these services, an operating
system administrator with root privileges must review and adjust the settings on
the appropriate nodes in the cluster.
Required permissions
To adjust these settings, you must be an operating system administrator with
root privileges.
Load balancer timeout settings
To prevent connections from being closed before processes complete, you might need to adjust the timeout settings on your load balancer node. The recommended timeout is at least 5 minutes.
In some situations, you might need to set the timeout even higher. For more information, see Processes time out before completing.
This setting is required if you plan to install the Watson™ Knowledge Catalog service. It is also recommended if you are working with large data sets or if you have slower network speeds.
The following steps assume that you are using HAProxy. If you are using a different load balancer, see the documentation for your load balancer.
- On the load balancer node, check the HAProxy timeout settings in the /etc/haproxy/haproxy.cfg file. The recommended values are at least:

  ```
  timeout client 300s
  timeout server 300s
  ```

- If the timeout values are less than 300 seconds (5 minutes), update the values:
  - To change the `timeout client` setting, enter the following command:

    ```
    sed -i -e "/timeout client/s/ [0-9].*/ 5m/" /etc/haproxy/haproxy.cfg
    ```

  - To change the `timeout server` setting, enter the following command:

    ```
    sed -i -e "/timeout server/s/ [0-9].*/ 5m/" /etc/haproxy/haproxy.cfg
    ```

- Run the following command to apply the changes that you made to the HAProxy configuration:

  ```
  systemctl restart haproxy
  ```
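The check-and-update procedure above can be sketched as one small script. This is a sketch, not the product procedure: `bump_haproxy_timeouts` is a hypothetical helper, demonstrated here on a temporary file instead of /etc/haproxy/haproxy.cfg.

```shell
#!/bin/bash
# Hypothetical helper: raise "timeout client" and "timeout server" to 5m
# when the configured value is below 300 seconds.
bump_haproxy_timeouts() {
  local cfg="$1" kw cur secs
  for kw in client server; do
    cur=$(awk -v kw="$kw" '$1 == "timeout" && $2 == kw { print $3 }' "$cfg")
    case "$cur" in
      *m|*h) continue ;;           # already expressed in minutes or hours
    esac
    secs=${cur%s}                  # strip an optional trailing "s"
    if [ -n "$secs" ] && [ "$secs" -lt 300 ]; then
      sed -i -e "/timeout $kw/s/ [0-9].*/ 5m/" "$cfg"
    fi
  done
}

# Demonstrate on a throwaway config file.
demo=$(mktemp)
printf 'timeout client 50s\ntimeout server 50s\n' > "$demo"
bump_haproxy_timeouts "$demo"
cat "$demo"
```

On a real load balancer node you would pass /etc/haproxy/haproxy.cfg and then restart HAProxy as described above.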
CRI-O container settings
To ensure that services can run correctly, you must adjust the maximum number of processes and the maximum number of open files in the CRI-O container settings.
These settings are required if you are using the CRI-O container runtime.
For Red Hat® OpenShift® version 4.3
On Red Hat OpenShift version 4.3, Machine Config
Pools manage your cluster of nodes and their corresponding Machine Configs
(machineconfig). To change a setting in the crio.conf file, you
can create a new machineconfig containing only the crio.conf
file.
- Make sure that you have `python3` installed.
- On any worker node in the cluster, extract the current crio.conf settings to /tmp/crio.conf. For example:

  ```
  python3 -c "import sys, urllib.parse; print(urllib.parse.unquote(sys.argv[1]))" $(oc get $(oc get mc -o name --sort-by=.metadata.creationTimestamp | grep rendered-worker | tail -n1) -o jsonpath='{.spec.config.storage.files[?(@.path=="/etc/crio/crio.conf")].contents.source}' | awk -F" " '{print $1}' | sed 's/data:,//g') > /tmp/crio.conf
  ```

- On the worker node, check the maximum number of open files setting by running the following command:

  ```
  ulimit -n
  ```

  The recommended value is at least 66560. If the `ulimit` value is less than 66560, edit or add the following entry in the `[crio.runtime]` section of the /tmp/crio.conf file:

  ```
  default_ulimits = [
  	"nofile=66560:66560"
  ]
  ```

- On the same worker node, check the maximum number of processes setting by running the following command:

  ```
  ulimit -u
  ```

  The recommended value is at least 12288. If the `ulimit` value is less than 12288, edit or add the following entry in the `[crio.runtime]` section of the /tmp/crio.conf file:

  ```
  pids_limit = 12288
  ```

- Verify that /tmp/crio.conf looks like the following example:

  ```
  [crio]
  root = "/var/lib/containers/storage"
  runroot = "/var/run/containers/storage"
  storage_driver = "overlay"
  storage_option = [
  	"overlay.override_kernel_check=1",
  ]

  [crio.api]
  listen = "/var/run/crio/crio.sock"
  stream_address = ""
  stream_port = "10010"
  file_locking = true

  [crio.runtime]
  runtime = "/usr/bin/runc"
  runtime_untrusted_workload = ""
  default_workload_trust = "trusted"
  no_pivot = false
  conmon = "/usr/libexec/crio/conmon"
  conmon_env = [
  	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
  ]
  selinux = true
  seccomp_profile = "/etc/crio/seccomp.json"
  apparmor_profile = "crio-default"
  cgroup_manager = "systemd"
  hooks_dir_path = "/usr/share/containers/oci/hooks.d"
  default_mounts = [
  	"/usr/share/rhel/secrets:/run/secrets",
  ]
  default_ulimits = [
  	"nofile=66560:66560"
  ]
  pids_limit = 12288
  enable_shared_pid_namespace = false
  log_size_max = 52428800

  [crio.image]
  default_transport = "docker://"
  pause_image = "docker.io/openshift/origin-pod:v3.11"
  pause_command = "/usr/bin/pod"
  signature_policy = ""
  image_volumes = "mkdir"
  insecure_registries = [
  	""
  ]
  registries = [
  	"docker.io"
  ]

  [crio.network]
  network_dir = "/etc/cni/net.d/"
  plugin_dir = "/opt/cni/bin"
  ```

- Save the contents of the /tmp/crio.conf file to a local variable by running the following command:

  ```
  crio_conf=$(cat /tmp/crio.conf | python3 -c "import sys, urllib.parse; print(urllib.parse.quote(''.join(sys.stdin.readlines())))")
  ```

  Important: Verify that the `crio_conf` variable is not empty.

- Create a new machineconfig, with the name 51-worker-cp4d-crio-conf, by running the following command:

  ```
  cat << EOF > /tmp/51-worker-cp4d-crio-conf.yaml
  apiVersion: machineconfiguration.openshift.io/v1
  kind: MachineConfig
  metadata:
    labels:
      machineconfiguration.openshift.io/role: worker
    name: 51-worker-cp4d-crio-conf
  spec:
    config:
      ignition:
        version: 2.2.0
      storage:
        files:
        - contents:
            source: data:,${crio_conf}
          filesystem: root
          mode: 0644
          path: /etc/crio/crio.conf
  EOF
  ```

- Apply the new machineconfig to the cluster by running the following command:

  ```
  oc create -f /tmp/51-worker-cp4d-crio-conf.yaml
  ```
For more information, see Changing Ignition Configs after installation.
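The extraction and re-embedding steps above hinge on percent-encoding the file into a `data:,` URL. The round trip can be checked in isolation with the same python3 one-liners that the procedure uses; the sample `crio_conf_sample` string below is illustrative, not the real file.

```shell
#!/bin/bash
# Illustrative stand-in for the real /tmp/crio.conf contents.
crio_conf_sample='[crio.runtime]
pids_limit = 12288'

# Encode the way the machineconfig step does (before data:,${crio_conf}).
encoded=$(printf '%s' "$crio_conf_sample" | python3 -c "import sys, urllib.parse; print(urllib.parse.quote(''.join(sys.stdin.readlines())))")

# Decode the way the extraction step does.
decoded=$(python3 -c "import sys, urllib.parse; print(urllib.parse.unquote(sys.argv[1]))" "$encoded")

[ "$decoded" = "$crio_conf_sample" ] && echo "lossless round trip"
```

If the round trip is not lossless, or if `encoded` is empty, the machineconfig would write a corrupt or empty /etc/crio/crio.conf, which is why the procedure tells you to verify that the variable is not empty.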
For Red Hat OpenShift version 3.11
On each compute node in the cluster, perform the following steps.
- Check the maximum number of open files setting by running the following command:

  ```
  ulimit -n
  ```

  The recommended value is at least 66560. If the `ulimit` value is less than 66560, edit or add the following entry in the `[crio.runtime]` section of the /etc/crio/crio.conf file:

  ```
  default_ulimits = [
  	"nofile=66560:66560"
  ]
  ```

- Check the maximum number of processes setting by running the following command:

  ```
  ulimit -u
  ```

  The recommended value is at least 12288. If the `ulimit` value is less than 12288, edit or add the following entry in the `[crio.runtime]` section of the /etc/crio/crio.conf file:

  ```
  pids_limit = 12288
  ```

- Verify that /etc/crio/crio.conf is similar to the following example:

  ```
  [crio]
  root = "/var/lib/containers/storage"
  runroot = "/var/run/containers/storage"
  storage_driver = "overlay"
  storage_option = [
  	"overlay.override_kernel_check=1",
  ]

  [crio.api]
  listen = "/var/run/crio/crio.sock"
  stream_address = ""
  stream_port = "10010"
  file_locking = true

  [crio.runtime]
  runtime = "/usr/bin/runc"
  runtime_untrusted_workload = ""
  default_workload_trust = "trusted"
  no_pivot = false
  conmon = "/usr/libexec/crio/conmon"
  conmon_env = [
  	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
  ]
  selinux = true
  seccomp_profile = "/etc/crio/seccomp.json"
  apparmor_profile = "crio-default"
  cgroup_manager = "systemd"
  hooks_dir_path = "/usr/share/containers/oci/hooks.d"
  default_mounts = [
  	"/usr/share/rhel/secrets:/run/secrets",
  ]
  default_ulimits = [
  	"nofile=66560:66560"
  ]
  pids_limit = 12288
  enable_shared_pid_namespace = false
  log_size_max = 52428800

  [crio.image]
  default_transport = "docker://"
  pause_image = "docker.io/openshift/origin-pod:v3.11"
  pause_command = "/usr/bin/pod"
  signature_policy = ""
  image_volumes = "mkdir"
  insecure_registries = [
  	""
  ]
  registries = [
  	"docker.io"
  ]

  [crio.network]
  network_dir = "/etc/cni/net.d/"
  plugin_dir = "/opt/cni/bin"
  ```

- Run the following command to apply the changes that you made to the /etc/crio/crio.conf file:

  ```
  systemctl restart crio
  ```
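Both checks above can be run in one pass. A sketch, where `check_limit` is a hypothetical helper that compares the current shell limit against the recommended minimum:

```shell
#!/bin/bash
# Hypothetical helper: compare a current ulimit value against a recommended
# minimum and report the result.
check_limit() {
  local flag="$1" min="$2" cur
  cur=$(ulimit "$flag")
  if [ "$cur" = "unlimited" ] || [ "$cur" -ge "$min" ]; then
    echo "ulimit $flag = $cur (meets recommended minimum $min)"
  else
    echo "ulimit $flag = $cur (below recommended minimum $min)"
  fi
}

check_limit -n 66560   # maximum number of open files
check_limit -u 12288   # maximum number of processes
```

Any limit reported as below the minimum points at the corresponding crio.conf entry that needs to be added or edited.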
Docker container settings
To ensure that services can run correctly, you must adjust the maximum number of processes and the maximum number of open files in the Docker container settings.
These settings are required if you are using the Docker container runtime on Red Hat OpenShift version 3.11.
On each compute node in the cluster, perform the following steps.
- Check the maximum number of open files setting by running the following command:

  ```
  ulimit -n
  ```

  The recommended value is at least 66560. If the `ulimit` value is less than 66560, edit or append the following setting in the `OPTIONS` line in the /etc/sysconfig/docker file:

  ```
  OPTIONS=' --default-ulimit nofile=66560'
  ```

- Check the maximum number of processes setting by running the following command:

  ```
  ulimit -u
  ```

  The recommended value is at least 12288. If the `ulimit` value is less than 12288, edit or append the following setting in the `OPTIONS` line in the /etc/sysconfig/docker file:

  ```
  OPTIONS=' --default-pids-limit=12288'
  ```

- Run the following command to apply the changes that you made to the /etc/sysconfig/docker file:

  ```
  systemctl restart docker
  ```
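Because both flags land on the same `OPTIONS` line, it helps to append them idempotently. A sketch, shown on a temporary file rather than the real /etc/sysconfig/docker (`add_docker_option` is a hypothetical helper):

```shell
#!/bin/bash
# Hypothetical helper: append a flag inside the single-quoted OPTIONS value
# unless the flag is already present.
add_docker_option() {
  local file="$1" flag="$2"
  grep -q -- "$flag" "$file" || sed -i "/^OPTIONS=/ s/'\$/ $flag'/" "$file"
}

demo=$(mktemp)
echo "OPTIONS='--log-driver=journald'" > "$demo"
add_docker_option "$demo" "--default-ulimit nofile=66560"
add_docker_option "$demo" "--default-pids-limit=12288"
cat "$demo"
```

Running the helper a second time with the same flag leaves the file unchanged, so it is safe to re-run on nodes that were already updated.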
Kernel parameter settings
To ensure that certain microservices can run correctly, you must verify the kernel parameters.
These settings are required for all deployments; however, the appropriate values depend on the machine RAM size and the OS page size. The following steps assume that you have worker nodes with 64 GB of RAM on an x86 platform with a 4 KB OS page size. If the worker nodes have 128 GB of RAM each, you must double the kernel.shm* values.
- Virtual memory limit (`vm.max_map_count`)
- Message limits (`kernel.msgmax`, `kernel.msgmnb`, and `kernel.msgmni`)
- Shared memory limits (`kernel.shmmax`, `kernel.shmall`, and `kernel.shmmni`)

  The following settings are recommended:
  - `kernel.shmmni`: 256 * <size of RAM in GB>
  - `kernel.shmmax`: <size of RAM in bytes>
  - `kernel.shmall`: 2 * <size of RAM in the default OS system page size>

  The default OS system page size on Power Systems is 64 KB. Take this OS page size into account when you set the value for `kernel.shmall`. For more information, see Modifying kernel parameters (Linux®) in Kernel parameters for Db2® database server installation (Linux and UNIX).
- Semaphore limits (`kernel.sem`)

  As of Red Hat Enterprise Linux version 7.8 and Red Hat Enterprise Linux version 8.1, the `kernel.shmmni` and `kernel.msgmni` settings, and the `semmni` value in `kernel.sem`, are capped at 32768. Values larger than 32768 are not applied, and the default values are used instead. The default value for `shmmni` is 4096. The default value for `msgmni` is 32000. The default value for `semmni` is 128. Although you can apply values larger than 32768 by using the boot parameter `ipcmni_extend`, the values are still capped to 32768 internally by Red Hat Enterprise Linux. For more information, see On RHEL servers, changing the semaphore value fails with a message "setting key "kernel.sem": Numerical result out of range".
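The shared memory formulas above can be turned into a quick calculation. A sketch (`shm_settings` is a hypothetical helper; RAM size in GB and page size in bytes are the inputs):

```shell
#!/bin/bash
# Hypothetical helper: print the recommended kernel.shm* values for a node
# with the given RAM size (GB) and OS page size (bytes, default 4 KB).
shm_settings() {
  local ram_gb="$1" page_size="${2:-4096}"
  local ram_bytes=$(( ram_gb * 1024 * 1024 * 1024 ))
  echo "kernel.shmmni = $(( 256 * ram_gb ))"            # 256 * <RAM in GB>
  echo "kernel.shmmax = $ram_bytes"                     # <RAM in bytes>
  echo "kernel.shmall = $(( 2 * ram_bytes / page_size ))" # 2 * <RAM in pages>
}

shm_settings 64          # 64 GB worker node, 4 KB x86 page size
shm_settings 64 65536    # 64 GB worker node, 64 KB Power Systems page size
```

For a 64 GB node with 4 KB pages this yields shmmni 16384, shmmax 68719476736, and shmall 33554432, which match the values used in the Node Tuning Operator profile later in this section; with 64 KB Power Systems pages, shmall drops to 2097152.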
For Red Hat OpenShift version 4.3
On Red Hat OpenShift, you can use the Node Tuning Operator to manage node-level profiles. For more information, see Using the Node Tuning Operator.
- Create a YAML file, 42-cp4d.yaml, with the following content. If your current settings are less than the recommendations, adjust the settings in your YAML file. This step assumes that you have worker nodes with 64 GB of RAM.

  ```yaml
  apiVersion: tuned.openshift.io/v1
  kind: Tuned
  metadata:
    name: cp4d-wkc-ipc
    namespace: openshift-cluster-node-tuning-operator
  spec:
    profile:
    - name: cp4d-wkc-ipc
      data: |
        [main]
        summary=Tune IPC Kernel parameters on OpenShift Worker Nodes running WKC Pods
        [sysctl]
        kernel.shmall = 33554432
        kernel.shmmax = 68719476736
        kernel.shmmni = 16384
        kernel.sem = 250 1024000 100 16384
        kernel.msgmax = 65536
        kernel.msgmnb = 65536
        kernel.msgmni = 32768
        vm.max_map_count = 262144
    recommend:
    - match:
      - label: node-role.kubernetes.io/worker
      priority: 10
      profile: cp4d-wkc-ipc
  ```

  IBM Power Systems: On Power Systems, create the 42-cp4d.yaml file with the following content instead. Adjust `kernel.sem` if required.

  ```yaml
  apiVersion: tuned.openshift.io/v1
  kind: Tuned
  metadata:
    name: cp4d-wkc-ipc
    namespace: openshift-cluster-node-tuning-operator
  spec:
    profile:
    - name: cp4d-wkc-ipc
      data: |
        [main]
        summary=Tune IPC Kernel parameters on OpenShift Worker Nodes running WKC Pods
        [sysctl]
        kernel.shmall = 2097152
        kernel.shmmax = 68719476736
        kernel.shmmni = 16384
        kernel.sem = 250 1024000 100 16384
        kernel.msgmax = 65536
        kernel.msgmnb = 65536
        kernel.msgmni = 32768
        vm.max_map_count = 262144
    recommend:
    - match:
      - label: node-role.kubernetes.io/worker
      priority: 10
      profile: cp4d-wkc-ipc
  ```

- Run the following command to apply the changes:

  ```
  oc create -f 42-cp4d.yaml
  ```
For Red Hat OpenShift version 3.11
On each compute node in the cluster, perform the following steps.
- Create a conf file for the Cloud Pak for Data kernel settings in the /etc/sysctl.d directory.

  The files are processed in alphabetical order. To ensure that the settings from the Cloud Pak for Data conf file are retained after the sysctl files are processed, give the conf file a name that places it at the end of the queue. For example, name the file /etc/sysctl.d/42-cp4d.conf.

- Check the virtual memory limit setting by running the following command:

  ```
  sysctl -a 2>/dev/null | grep vm.max | grep -v next_id
  ```

  The recommended value is at least:

  ```
  vm.max_map_count = 262144
  ```

  If the `vm.max_map_count` value is less than 262144, add the following entry in the /etc/sysctl.d/42-cp4d.conf file:

  ```
  vm.max_map_count = 262144
  ```

- Check the message limit settings by running the following command:

  ```
  sysctl -a 2>/dev/null | grep kernel.msg | grep -v next_id
  ```

  The recommended values are at least:

  ```
  kernel.msgmax = 65536
  kernel.msgmnb = 65536
  kernel.msgmni = 32768
  ```

  For each value that is less than the recommendation, add the corresponding entry in the /etc/sysctl.d/42-cp4d.conf file:
  - If the `kernel.msgmax` value is less than 65536, add `kernel.msgmax = 65536`.
  - If the `kernel.msgmnb` value is less than 65536, add `kernel.msgmnb = 65536`.
  - If the `kernel.msgmni` value is less than 32768, add `kernel.msgmni = 32768`.

- Check the shared memory limit settings by running the following command:

  ```
  sysctl -a 2>/dev/null | grep kernel.shm | grep -v next_id | grep -v shm_rmid_forced
  ```

  The recommended values are at least:

  ```
  kernel.shmmax = 68719476736
  kernel.shmall = 33554432
  kernel.shmmni = 16384
  ```

  For each value that is less than the recommendation, add the corresponding entry in the /etc/sysctl.d/42-cp4d.conf file:
  - If the `kernel.shmmax` value is less than 68719476736, add `kernel.shmmax = 68719476736`.
  - If the `kernel.shmall` value is less than 33554432, add `kernel.shmall = 33554432`.
  - If the `kernel.shmmni` value is less than 16384, add `kernel.shmmni = 16384`.

- Check the semaphore limit settings by running the following command:

  ```
  sysctl -a 2>/dev/null | grep kernel.sem | grep -v next_id
  ```

  The recommended values are at least:

  ```
  kernel.sem = 250 1024000 100 16384
  ```

  Specifically:
  - The max semaphores per array must be at least 250.
  - The max semaphores system wide must be at least 1024000.
  - The max ops per semop call must be at least 100.
  - The max number of arrays must be at least 16384.

  If any of the semaphore limit settings are less than the minimum requirements, add the following entry in the /etc/sysctl.d/42-cp4d.conf file:

  ```
  kernel.sem = 250 1024000 100 16384
  ```

- Run the following command to apply the changes that you made to the /etc/sysctl.d/42-cp4d.conf file:

  ```
  sysctl -p /etc/sysctl.d/42-cp4d.conf
  ```
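For a node whose current values are all below the recommendations, the individual additions above amount to a single conf file. A sketch that writes it to a temporary path; on a real node the target is /etc/sysctl.d/42-cp4d.conf, and you would add only the entries whose current values are too low.

```shell
#!/bin/bash
# Write all recommended minimums in one step. TARGET is a temp file here;
# on a compute node it would be /etc/sysctl.d/42-cp4d.conf.
TARGET=$(mktemp)
cat > "$TARGET" << 'EOF'
vm.max_map_count = 262144
kernel.msgmax = 65536
kernel.msgmnb = 65536
kernel.msgmni = 32768
kernel.shmmax = 68719476736
kernel.shmall = 33554432
kernel.shmmni = 16384
kernel.sem = 250 1024000 100 16384
EOF

cat "$TARGET"
# On the node, apply the file with: sysctl -p /etc/sysctl.d/42-cp4d.conf
```

Remember that the shm values shown assume 64 GB of RAM and a 4 KB page size; scale them as described at the start of this section for other configurations.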
Power SMT settings
On Power Systems, you must also change the simultaneous multithreading (SMT) settings for small core (Kernel-based Virtual Machine capable) and big core (PowerVM® capable) systems. Kernel-based Virtual Machine (KVM) capable systems include the LC922, IC922, and AC922.
- Create a YAML file, smt.yaml, with the following content:
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-smt spec: kernelArguments: - 'slub_max_order=0' config: ignition: version: 2.2.0 storage: files: - path: /usr/local/bin/powersmt overwrite: true mode: 0700 filesystem: root contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKZXhwb3J0IFBBVEg9L3Jvb3QvLmxvY2FsL2Jpbjovcm9vdC9iaW46L3NiaW46L2JpbjovdXNyL2xvY2FsL3NiaW46L3Vzci9sb2NhbC9iaW46L3Vzci9zYmluOi91c3IvYmluCmV4cG9ydCBLVUJFQ09ORklHPS92YXIvbGliL2t1YmVsZXQva3ViZWNvbmZpZwpDT1JFUFM9JCgvYmluL2xzY3B1IHwgL2Jpbi9hd2sgLUY6ICcgJDEgfiAvXkNvcmVcKHNcKSBwZXIgc29ja2V0JC8ge3ByaW50ICQyfSd8L2Jpbi94YXJncykKU09DS0VUUz0kKC9iaW4vbHNjcHUgfCAvYmluL2F3ayAtRjogJyAkMSB+IC9eU29ja2V0XChzXCkkLyB7cHJpbnQgJDJ9J3wvYmluL3hhcmdzKQpsZXQgVE9UQUxDT1JFUz0kQ09SRVBTKiRTT0NLRVRTCk1BWFRIUkVBRFM9JCgvYmluL2xzY3B1IHwgL2Jpbi9hd2sgLUY6ICcgJDEgfiAvXkNQVVwoc1wpJC8ge3ByaW50ICQyfSd8L2Jpbi94YXJncykKbGV0IE1BWFNNVD0kTUFYVEhSRUFEUy8kVE9UQUxDT1JFUwpDVVJSRU5UU01UPSQoL2Jpbi9sc2NwdSB8IC9iaW4vYXdrIC1GOiAnICQxIH4gL15UaHJlYWRcKHNcKSBwZXIgY29yZSQvIHtwcmludCAkMn0nfC9iaW4veGFyZ3MpCgp3aGlsZSA6CmRvCiAgSVNOT0RFREVHUkFERUQ9JCgvYmluL29jIGdldCBub2RlICRIT1NUTkFNRSAtbyB5YW1sIHwvYmluL2dyZXAgbWFjaGluZWNvbmZpZ3VyYXRpb24ub3BlbnNoaWZ0LmlvL3JlYXNvbiB8L2Jpbi9ncmVwICJ1bmV4cGVjdGVkIG9uLWRpc2sgc3RhdGUgdmFsaWRhdGluZyIpCiAgU01UTEFCRUw9JCgvYmluL29jIGdldCBub2RlICRIT1NUTkFNRSAtTCBTTVQgLS1uby1oZWFkZXJzIHwvYmluL2F3ayAne3ByaW50ICQ2fScpCiAgaWYgW1sgLW4gJFNNVExBQkVMIF1dCiAgICB0aGVuCiAgICAgIGNhc2UgJFNNVExBQkVMIGluCiAgICAgICAgMSkgVEFSR0VUU01UPTEKICAgICAgOzsKICAgICAgICAyKSBUQVJHRVRTTVQ9MgogICAgICA7OwogICAgICAgIDQpIFRBUkdFVFNNVD00CiAgICAgIDs7CiAgICAgICAgOCkgVEFSR0VUU01UPTgKICAgICAgOzsKICAgICAgICAqKSBUQVJHRVRTTVQ9JENVUlJFTlRTTVQgOyBlY2hvICJTTVQgdmFsdWUgbXVzdCBiZSAxLCAyLCA0LCBvciA4IGFuZCBzbWFsbGVyIHRoYW4gTWF4aW11bSBTTVQuIgogICAgICA7OwogICAgICBlc2FjCiAgICBlbHNlCiAgICAgIFRBUkdFVFNNVD0kTUFYU01UCiAgZmkKCiAgaWYgW1sgLW4gJElTTk9ERURFR1JBREVEIF1dCiAgICB0aGVuCi
AgICAgIHRvdWNoIC9ydW4vbWFjaGluZS1jb25maWctZGFlbW9uLWZvcmNlCiAgZmkKCiAgQ1VSUkVOVFNNVD0kKC9iaW4vbHNjcHUgfCAvYmluL2F3ayAtRjogJyAkMSB+IC9eVGhyZWFkXChzXCkgcGVyIGNvcmUkLyB7cHJpbnQgJDJ9J3wvYmluL3hhcmdzKQoKICBpZiBbWyAkQ1VSUkVOVFNNVCAtbmUgJFRBUkdFVFNNVCBdXQogICAgdGhlbgogICAgICBJTklUT05USFJFQUQ9MAogICAgICBJTklUT0ZGVEhSRUFEPSRUQVJHRVRTTVQKICAgICAgaWYgW1sgJE1BWFNNVCAtZ2UgJFRBUkdFVFNNVCBdXQogICAgICAgIHRoZW4KICAgICAgICAgIHdoaWxlIFtbICRJTklUT05USFJFQUQgLWx0ICRNQVhUSFJFQURTIF1dCiAgICAgICAgICBkbwogICAgICAgICAgICBPTlRIUkVBRD0kSU5JVE9OVEhSRUFECiAgICAgICAgICAgIE9GRlRIUkVBRD0kSU5JVE9GRlRIUkVBRAoKICAgICAgICAgICAgd2hpbGUgW1sgJE9OVEhSRUFEIC1sdCAkT0ZGVEhSRUFEIF1dCiAgICAgICAgICAgIGRvCiAgICAgICAgICAgICAgL2Jpbi9lY2hvIDEgPiAvc3lzL2RldmljZXMvc3lzdGVtL2NwdS9jcHUkT05USFJFQUQvb25saW5lCiAgICAgICAgICAgICAgbGV0IE9OVEhSRUFEPSRPTlRIUkVBRCsxCiAgICAgICAgICAgIGRvbmUKICAgICAgICAgICAgbGV0IElOSVRPTlRIUkVBRD0kSU5JVE9OVEhSRUFEKyRNQVhTTVQKICAgICAgICAgICAgd2hpbGUgW1sgJE9GRlRIUkVBRCAtbHQgJElOSVRPTlRIUkVBRCBdXQogICAgICAgICAgICBkbwogICAgICAgICAgICAgIC9iaW4vZWNobyAwID4gL3N5cy9kZXZpY2VzL3N5c3RlbS9jcHUvY3B1JE9GRlRIUkVBRC9vbmxpbmUKICAgICAgICAgICAgICBsZXQgT0ZGVEhSRUFEPSRPRkZUSFJFQUQrMQogICAgICAgICAgICBkb25lCiAgICAgICAgICAgIGxldCBJTklUT0ZGVEhSRUFEPSRJTklUT0ZGVEhSRUFEKyRNQVhTTVQKICAgICAgICAgIGRvbmUKICAgICAgICBlbHNlCiAgICAgICAgICBlY2hvICJUYXJnZXQgU01UIG11c3QgYmUgc21hbGxlciBvciBlcXVhbCB0aGFuIE1heGltdW0gU01UIHN1cHBvcnRlZCIKICAgICAgZmkKICBmaQogIC9iaW4vc2xlZXAgMzAKZG9uZQo= - path: /etc/systemd/system/smt.service overwrite: true mode: 0644 filesystem: root contents: source: 
data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKZXhwb3J0IFBBVEg9L3Jvb3QvLmxvY2FsL2Jpbjovcm9vdC9iaW46L3NiaW46L2JpbjovdXNyL2xvY2FsL3NiaW46L3Vzci9sb2NhbC9iaW46L3Vzci9zYmluOi91c3IvYmluCmV4cG9ydCBLVUJFQ09ORklHPS92YXIvbGliL2t1YmVsZXQva3ViZWNvbmZpZwpDT1JFUFM9JCgvYmluL2xzY3B1IHwgL2Jpbi9hd2sgLUY6ICcgJDEgfiAvXkNvcmVcKHNcKSBwZXIgc29ja2V0JC8ge3ByaW50ICQyfSd8L2Jpbi94YXJncykKU09DS0VUUz0kKC9iaW4vbHNjcHUgfCAvYmluL2F3ayAtRjogJyAkMSB+IC9eU29ja2V0XChzXCkkLyB7cHJpbnQgJDJ9J3wvYmluL3hhcmdzKQpsZXQgVE9UQUxDT1JFUz0kQ09SRVBTKiRTT0NLRVRTCk1BWFRIUkVBRFM9JCgvYmluL2xzY3B1IHwgL2Jpbi9hd2sgLUY6ICcgJDEgfiAvXkNQVVwoc1wpJC8ge3ByaW50ICQyfSd8L2Jpbi94YXJncykKbGV0IE1BWFNNVD0kTUFYVEhSRUFEUy8kVE9UQUxDT1JFUwpDVVJSRU5UU01UPSQoL2Jpbi9sc2NwdSB8IC9iaW4vYXdrIC1GOiAnICQxIH4gL15UaHJlYWRcKHNcKSBwZXIgY29yZSQvIHtwcmludCAkMn0nfC9iaW4veGFyZ3MpCgp3aGlsZSA6CmRvCiAgSVNOT0RFREVHUkFERUQ9JCgvYmluL29jIGdldCBub2RlICRIT1NUTkFNRSAtbyB5YW1sIHwvYmluL2dyZXAgbWFjaGluZWNvbmZpZ3VyYXRpb24ub3BlbnNoaWZ0LmlvL3JlYXNvbiB8L2Jpbi9ncmVwICJ1bmV4cGVjdGVkIG9uLWRpc2sgc3RhdGUgdmFsaWRhdGluZyIpCiAgU01UTEFCRUw9JCgvYmluL29jIGdldCBub2RlICRIT1NUTkFNRSAtTCBTTVQgLS1uby1oZWFkZXJzIHwvYmluL2F3ayAne3ByaW50ICQ2fScpCiAgaWYgW1sgLW4gJFNNVExBQkVMIF1dCiAgICB0aGVuCiAgICAgIGNhc2UgJFNNVExBQkVMIGluCiAgICAgICAgMSkgVEFSR0VUU01UPTEKICAgICAgOzsKICAgICAgICAyKSBUQVJHRVRTTVQ9MgogICAgICA7OwogICAgICAgIDQpIFRBUkdFVFNNVD00CiAgICAgIDs7CiAgICAgICAgOCkgVEFSR0VUU01UPTgKICAgICAgOzsKICAgICAgICAqKSBUQVJHRVRTTVQ9JENVUlJFTlRTTVQgOyBlY2hvICJTTVQgdmFsdWUgbXVzdCBiZSAxLCAyLCA0LCBvciA4IGFuZCBzbWFsbGVyIHRoYW4gTWF4aW11bSBTTVQuIgogICAgICA7OwogICAgICBlc2FjCiAgICBlbHNlCiAgICAgIFRBUkdFVFNNVD0kTUFYU01UCiAgZmkKCiAgaWYgW1sgLW4gJElTTk9ERURFR1JBREVEIF1dCiAgICB0aGVuCiAgICAgIHRvdWNoIC9ydW4vbWFjaGluZS1jb25maWctZGFlbW9uLWZvcmNlCiAgZmkKCiAgQ1VSUkVOVFNNVD0kKC9iaW4vbHNjcHUgfCAvYmluL2F3ayAtRjogJyAkMSB+IC9eVGhyZWFkXChzXCkgcGVyIGNvcmUkLyB7cHJpbnQgJDJ9J3wvYmluL3hhcmdzKQoKICBpZiBbWyAkQ1VSUkVOVFNNVCAtbmUgJFRBUkdFVFNNVCBdXQogICAgdGhlbgogICAgICBJTklUT05USFJFQUQ9MAogICAgICBJTklUT0ZGVEhSRUFEPSRUQVJHRVRTTVQKICAgICAgaWY
gW1sgJE1BWFNNVCAtZ2UgJFRBUkdFVFNNVCBdXQogICAgICAgIHRoZW4KICAgICAgICAgIC91c3IvYmluL3N5c3RlbWN0bCBzdG9wIGNyaW8KICAgICAgICAgIHdoaWxlIFtbICRJTklUT05USFJFQUQgLWx0ICRNQVhUSFJFQURTIF1dCiAgICAgICAgICBkbwogICAgICAgICAgICBPTlRIUkVBRD0kSU5JVE9OVEhSRUFECiAgICAgICAgICAgIE9GRlRIUkVBRD0kSU5JVE9GRlRIUkVBRAoKICAgICAgICAgICAgd2hpbGUgW1sgJE9OVEhSRUFEIC1sdCAkT0ZGVEhSRUFEIF1dCiAgICAgICAgICAgIGRvCiAgICAgICAgICAgICAgL2Jpbi9lY2hvIDEgPiAvc3lzL2RldmljZXMvc3lzdGVtL2NwdS9jcHUkT05USFJFQUQvb25saW5lCiAgICAgICAgICAgICAgbGV0IE9OVEhSRUFEPSRPTlRIUkVBRCsxCiAgICAgICAgICAgIGRvbmUKICAgICAgICAgICAgbGV0IElOSVRPTlRIUkVBRD0kSU5JVE9OVEhSRUFEKyRNQVhTTVQKICAgICAgICAgICAgd2hpbGUgW1sgJE9GRlRIUkVBRCAtbHQgJElOSVRPTlRIUkVBRCBdXQogICAgICAgICAgICBkbwogICAgICAgICAgICAgIC9iaW4vZWNobyAwID4gL3N5cy9kZXZpY2VzL3N5c3RlbS9jcHUvY3B1JE9GRlRIUkVBRC9vbmxpbmUKICAgICAgICAgICAgICBsZXQgT0ZGVEhSRUFEPSRPRkZUSFJFQUQrMQogICAgICAgICAgICBkb25lCiAgICAgICAgICAgIGxldCBJTklUT0ZGVEhSRUFEPSRJTklUT0ZGVEhSRUFEKyRNQVhTTVQKICAgICAgICAgIGRvbmUKICAgICAgICAgIC91c3IvYmluL3N5c3RlbWN0bCBzdGFydCBjcmlvCiAgICAgICAgZWxzZQogICAgICAgICAgZWNobyAiVGFyZ2V0IFNNVCBtdXN0IGJlIHNtYWxsZXIgb3IgZXF1YWwgdGhhbiBNYXhpbXVtIFNNVCBzdXBwb3J0ZWQiCiAgICAgIGZpCiAgZmkKICAvYmluL3NsZWVwIDMwCmRvbmUK systemd: units: - name: smt.service enabled: true - Run the following command to apply the
changes:
  ```
  oc create -f smt.yaml
  ```

  Your worker nodes perform a rolling reboot to update the kernel boot command line parameters. When the rolling reboot is complete, perform the remaining steps.

- On all PowerVM capable worker nodes (for example, nodes on L922, E950, and E980 systems), run the following command:

  ```
  oc label node <node> SMT=4
  ```

- On all KVM capable worker nodes (for example, nodes on LC922, IC922, and AC922 systems), run the following command:

  ```
  oc label node <node> SMT=2
  ```
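The embedded powersmt script derives the maximum supported SMT level from `lscpu` output as total hardware threads divided by total cores (cores per socket times sockets); the `SMT` node label must not exceed this maximum. A standalone sketch of that calculation, fed simulated `lscpu` output (`max_smt` is a hypothetical helper; on a node you would pipe real `lscpu` into it):

```shell
#!/bin/bash
# Hypothetical helper: maximum SMT = CPU(s) / (Core(s) per socket * Socket(s)).
max_smt() {
  awk -F: '
    $1 == "Core(s) per socket" { cores   = $2 }
    $1 == "Socket(s)"          { sockets = $2 }
    $1 == "CPU(s)"             { cpus    = $2 }
    END { print cpus / (cores * sockets) }'
}

# Simulated lscpu output for a 2-socket, 16-cores-per-socket, SMT-4 machine.
printf 'CPU(s):             128\nCore(s) per socket: 16\nSocket(s):          2\n' | max_smt   # prints 4
```

On such a machine, labeling a node with `SMT=8` would be rejected by the script's validation, while `SMT=1`, `SMT=2`, or `SMT=4` would be applied.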