Changing required node settings

Some services that run on IBM® Cloud Pak for Data require specific settings on the nodes in the cluster. To ensure that the cluster has the required settings for these services, an operating system administrator with root privileges must review and adjust the settings on the appropriate nodes in the cluster.

Required permissions

To adjust these settings, you must be an operating system administrator with root privileges.

Load balancer timeout settings

To prevent connections from being closed before processes complete, you might need to adjust the timeout settings on your load balancer node. The recommended timeout is at least 5 minutes.

In some situations, you might need to set the timeout even higher. For more information, see Processes time out before completing.

This setting is required if you plan to install the Watson™ Knowledge Catalog service. It is also recommended if you are working with large data sets or if you have slower network speeds.

The following steps assume that you are using HAProxy. If you are using a different load balancer, see the documentation for your load balancer.

  1. On the load balancer node, check the HAProxy timeout settings in the /etc/haproxy/haproxy.cfg file.
    The recommended values are at least:
    timeout client          300s 
    timeout server          300s 
  2. If the timeout values are less than 300 seconds (5 minutes), update the values:
    • To change the timeout client setting, enter the following command:
      sed -i -e "/timeout client/s/ [0-9].*/ 5m/" /etc/haproxy/haproxy.cfg
    • To change the timeout server setting, enter the following command:
      sed -i -e "/timeout server/s/ [0-9].*/ 5m/" /etc/haproxy/haproxy.cfg
  3. Run the following command to apply the changes that you made to the HAProxy configuration:
    systemctl restart haproxy
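
After the restart, you can confirm that the new timeouts are active. The following check is a minimal sketch that assumes the default HAProxy configuration path:

    grep -E "timeout (client|server)" /etc/haproxy/haproxy.cfg
    systemctl is-active haproxy

Both timeout lines should now show 5m, and the service should report active.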

CRI-O container settings

To ensure that services can run correctly, you must adjust the maximum number of processes and the maximum number of open files in the CRI-O container settings.

These settings are required if you are using the CRI-O container runtime.

For Red Hat® OpenShift® version 4.3

On Red Hat OpenShift version 4.3, Machine Config Pools manage your cluster of nodes and their corresponding Machine Configs (machineconfig). To change a setting in the crio.conf file, you can create a new machineconfig containing only the crio.conf file.

  1. Make sure that you have python3 installed.
  2. On any worker node in the cluster, extract the current crio.conf settings to /tmp/crio.conf.

    For example:

    python3 -c "import sys, urllib.parse; print(urllib.parse.unquote(sys.argv[1]))" $(oc get $(oc get mc -o name --sort-by=.metadata.creationTimestamp | grep rendered-worker| tail -n1 ) -o jsonpath='{.spec.config.storage.files[?(@.path=="/etc/crio/crio.conf")].contents.source}' | awk -F" " '{print $1}' | sed 's/data:,//g')  > /tmp/crio.conf
  3. On the worker node, check the maximum number of open files setting by running the following command:
    ulimit -n

    The recommended value is at least 66560.

    1. If the ulimit value is less than 66560, edit or add the following entry in the [crio.runtime] section of the /tmp/crio.conf file:
      default_ulimits = [
              "nofile=66560:66560"
      ]
  4. On the same worker node, check the maximum number of processes setting by running the following command:
    ulimit -u

    The recommended value is at least 12288.

    1. If the ulimit value is less than 12288, edit or add the following entry in the [crio.runtime] section of the /tmp/crio.conf file:
      pids_limit = 12288
  5. Verify that /tmp/crio.conf looks like the following example:
    [crio]
    root = "/var/lib/containers/storage"
    runroot = "/var/run/containers/storage"
    storage_driver = "overlay"
    storage_option = [
            "overlay.override_kernel_check=1",
       ]
    
    [crio.api]
    listen = "/var/run/crio/crio.sock"
    stream_address = ""
    stream_port = "10010"
    file_locking = true
    
    [crio.runtime]
    runtime = "/usr/bin/runc"
    runtime_untrusted_workload = ""
    default_workload_trust = "trusted"
    no_pivot = false
    conmon = "/usr/libexec/crio/conmon"
    conmon_env = [
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
       ]
    selinux = true
    seccomp_profile = "/etc/crio/seccomp.json"
    apparmor_profile = "crio-default"
    cgroup_manager = "systemd"
    hooks_dir_path = "/usr/share/containers/oci/hooks.d"
    default_mounts = [
            "/usr/share/rhel/secrets:/run/secrets",
       ]
    default_ulimits = [
            "nofile=66560:66560"
    ]
    pids_limit = 12288
    enable_shared_pid_namespace = false
    log_size_max = 52428800
    
    [crio.image]
    default_transport = "docker://"
    pause_image = "docker.io/openshift/origin-pod:v3.11"
    pause_command = "/usr/bin/pod"
    signature_policy = ""
    image_volumes = "mkdir"
    insecure_registries = [
    ""
    ]
    registries = [
    "docker.io"
    ]
    
    [crio.network]
    network_dir = "/etc/cni/net.d/"
    plugin_dir = "/opt/cni/bin"
  6. Save the contents of the /tmp/crio.conf file in a local variable by running the following command:
    crio_conf=$(cat /tmp/crio.conf | python3 -c "import sys, urllib.parse; print(urllib.parse.quote(''.join(sys.stdin.readlines())))")
    Important: Verify that the crio_conf variable is not empty.
  7. Create a new machineconfig, with the name 51-worker-cp4d-crio-conf, by using the following command:
    cat << EOF > /tmp/51-worker-cp4d-crio-conf.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
     labels:
       machineconfiguration.openshift.io/role: worker
     name: 51-worker-cp4d-crio-conf
    spec:
     config:
       ignition:
         version: 2.2.0
       storage:
         files:
         - contents:
             source: data:,${crio_conf}
           filesystem: root
           mode: 0644
           path: /etc/crio/crio.conf
    EOF
  8. Apply the new machineconfig to the cluster by running the following command:
    oc create -f /tmp/51-worker-cp4d-crio-conf.yaml
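
    Applying the machineconfig triggers the Machine Config Operator to drain and restart the worker nodes one at a time. You can watch the rollout with the following commands; this is a minimal sketch, and the exact column output depends on your oc client version:

    oc get mc | grep 51-worker-cp4d-crio-conf
    oc get machineconfigpool worker

    When the worker pool reports UPDATED=True and UPDATING=False, the new crio.conf is in place on all worker nodes.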

For more information, see Changing Ignition Configs after installation.

For Red Hat OpenShift version 3.11

On each compute node in the cluster, perform the following steps.

  1. Check the maximum number of open files setting by running the following command:
    ulimit -n

    The recommended value is at least 66560.

    1. If the ulimit value is less than 66560, edit or add the following entry in the [crio.runtime] section of the /etc/crio/crio.conf file:
      default_ulimits = [
              "nofile=66560:66560"
      ]
  2. Check the maximum number of processes setting by running the following command:
    ulimit -u

    The recommended value is at least 12288.

    1. If the ulimit value is less than 12288, edit or add the following entry in the [crio.runtime] section of the /etc/crio/crio.conf file:
      pids_limit = 12288
  3. Verify that /etc/crio/crio.conf is similar to the following example:
    [crio]
    root = "/var/lib/containers/storage"
    runroot = "/var/run/containers/storage"
    storage_driver = "overlay"
    storage_option = [
            "overlay.override_kernel_check=1",
       ]
    
    [crio.api]
    listen = "/var/run/crio/crio.sock"
    stream_address = ""
    stream_port = "10010"
    file_locking = true
    
    [crio.runtime]
    runtime = "/usr/bin/runc"
    runtime_untrusted_workload = ""
    default_workload_trust = "trusted"
    no_pivot = false
    conmon = "/usr/libexec/crio/conmon"
    conmon_env = [
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
       ]
    selinux = true
    seccomp_profile = "/etc/crio/seccomp.json"
    apparmor_profile = "crio-default"
    cgroup_manager = "systemd"
    hooks_dir_path = "/usr/share/containers/oci/hooks.d"
    default_mounts = [
            "/usr/share/rhel/secrets:/run/secrets",
       ]
    default_ulimits = [
            "nofile=66560:66560"
    ]
    pids_limit = 12288
    enable_shared_pid_namespace = false
    log_size_max = 52428800
    
    [crio.image]
    default_transport = "docker://"
    pause_image = "docker.io/openshift/origin-pod:v3.11"
    pause_command = "/usr/bin/pod"
    signature_policy = ""
    image_volumes = "mkdir"
    insecure_registries = [
    ""
    ]
    registries = [
    "docker.io"
    ]
    
    [crio.network]
    network_dir = "/etc/cni/net.d/"
    plugin_dir = "/opt/cni/bin"
  4. Run the following command to apply the changes that you made to the /etc/crio/crio.conf file:
    systemctl restart crio
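
After the restart, you can confirm the settings on the compute node. A minimal check:

    grep -A 2 "default_ulimits" /etc/crio/crio.conf
    grep "pids_limit" /etc/crio/crio.conf
    systemctl is-active crio

The output should include nofile=66560:66560, pids_limit = 12288, and active.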

Docker container settings

To ensure that services can run correctly, you must adjust the maximum number of processes and the maximum number of open files in the Docker container settings.

These settings are required if you are using the Docker container runtime on Red Hat OpenShift version 3.11.

Note: Docker is not supported for Red Hat OpenShift version 4.3.

On each compute node in the cluster, perform the following steps.

  1. Check the maximum number of open files setting by running the following command:
    ulimit -n

    The recommended value is at least 66560.

    1. If the ulimit value is less than 66560, add the --default-ulimit option to the OPTIONS line in the /etc/sysconfig/docker file:
      OPTIONS=' --default-ulimit nofile=66560'
  2. Check the maximum number of processes setting by running the following command:
    ulimit -u

    The recommended value is at least 12288.

    1. If the ulimit value is less than 12288, add the --default-pids-limit option to the OPTIONS line in the /etc/sysconfig/docker file:
      OPTIONS=' --default-pids-limit=12288'
      If both options are required, specify them on the same OPTIONS line, for example, OPTIONS=' --default-ulimit nofile=66560 --default-pids-limit=12288'.
  3. Run the following command to apply the changes that you made to the /etc/sysconfig/docker file:
    systemctl restart docker
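
After the restart, you can confirm that the new defaults apply to containers by starting a test container and checking its limits. This is a sketch; the image name is only an example, so substitute any image that is available on the node:

    docker run --rm registry.access.redhat.com/ubi7/ubi sh -c 'ulimit -n; cat /sys/fs/cgroup/pids/pids.max'

The first line of output should be 66560, and the second should be 12288. The pids.max path assumes cgroup v1, which Red Hat Enterprise Linux 7 uses.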

Kernel parameter settings

To ensure that certain microservices can run correctly, you must verify the kernel parameters. These settings are required for all deployments; however, the appropriate values depend on the amount of RAM in the machine and on the OS page size. The following steps assume that you have worker nodes with 64 GB of RAM on an x86 platform with a 4 KB OS page size. If the worker nodes have 128 GB of RAM each, you must double the kernel.shm* values. A worked example of the shared memory calculation follows the list below.

  • Virtual memory limit (vm.max_map_count)
  • Message limits (kernel.msgmax, kernel.msgmnb, and kernel.msgmni)
  • Shared memory limits (kernel.shmmax, kernel.shmall, and kernel.shmmni)
    The following settings are recommended:
    • kernel.shmmni: 256 * <size of RAM in GB>
    • kernel.shmmax: <size of RAM in bytes>
    • kernel.shmall: 2 * <size of RAM in the default OS system page size>

    The default OS page size on Power Systems is 64 KB. Take this page size into account when you set the value for kernel.shmall. For more information, see Modifying kernel parameters (Linux®) in Kernel parameters for Db2® database server installation (Linux and UNIX).

  • Semaphore limits (kernel.sem)

    As of Red Hat Enterprise Linux version 7.8 and Red Hat Enterprise Linux version 8.1, the kernel.shmmni, kernel.msgmni, and kernel.semmni (the fourth field of kernel.sem) settings are capped at 32768. Larger values are not applied, and the default values are used instead: 4096 for shmmni, 32000 for msgmni, and 128 for semmni. Although you can pass values larger than 32768 by using the ipcmni_extend boot parameter, Red Hat Enterprise Linux still caps the values at 32768 internally. For more information, see On RHEL servers, changing the semaphore value fails with a message "setting key "kernel.sem": Numerical result out of range".
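
As a worked example, the following sketch computes the recommended shared memory values for a 64 GB worker node. The values that it prints for a 4 KB page size match the x86 settings used in the steps below, and changing PAGE_SIZE to 65536 yields the kernel.shmall value used on Power Systems:

    RAM_GB=64
    PAGE_SIZE=4096                                           # 4 KB page size on x86
    RAM_BYTES=$(( RAM_GB * 1024 * 1024 * 1024 ))
    echo "kernel.shmmni = $(( 256 * RAM_GB ))"               # 16384
    echo "kernel.shmmax = ${RAM_BYTES}"                      # 68719476736
    echo "kernel.shmall = $(( 2 * RAM_BYTES / PAGE_SIZE ))"  # 33554432 (2097152 with 64 KB pages)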

For Red Hat OpenShift version 4.3

On Red Hat OpenShift, you can use the Node Tuning Operator to manage node-level profiles. For more information, see Using the Node Tuning Operator.

Note: The following steps affect all services and all worker nodes on the cluster. You might need to manage node-level profiles for each worker node in the cluster based on the services that are installed. You can limit node tuning to specific nodes. For more information, see Managing nodes.
  1. Create a YAML file, 42-cp4d.yaml, with the following content. If your current settings are larger than these recommendations, adjust the values in your YAML file accordingly. This step assumes that you have worker nodes with 64 GB of RAM.
    apiVersion: tuned.openshift.io/v1
    kind: Tuned
    metadata:
      name: cp4d-wkc-ipc
      namespace: openshift-cluster-node-tuning-operator
    spec:
      profile:
      - name: cp4d-wkc-ipc
        data: |
          [main]
          summary=Tune IPC Kernel parameters on OpenShift Worker Nodes running WKC Pods
          [sysctl]
          kernel.shmall = 33554432
          kernel.shmmax = 68719476736
          kernel.shmmni = 16384
          kernel.sem = 250 1024000 100 16384
          kernel.msgmax = 65536
          kernel.msgmnb = 65536
          kernel.msgmni = 32768
          vm.max_map_count = 262144
      recommend:
      - match:
        - label: node-role.kubernetes.io/worker
        priority: 10
        profile: cp4d-wkc-ipc

    IBM Power Systems: On Power Systems, create the 42-cp4d.yaml file with the following content instead. Note the smaller kernel.shmall value, which reflects the 64 KB page size on Power Systems. Adjust kernel.sem if required.

    apiVersion: tuned.openshift.io/v1
    kind: Tuned
    metadata:
      name: cp4d-wkc-ipc
      namespace: openshift-cluster-node-tuning-operator
    spec:
      profile:
      - name: cp4d-wkc-ipc
        data: |
          [main]
          summary=Tune IPC Kernel parameters on OpenShift Worker Nodes running WKC Pods
          [sysctl]
          kernel.shmall = 2097152
          kernel.shmmax = 68719476736
          kernel.shmmni = 16384
          kernel.sem = 250 1024000 100 16384
          kernel.msgmax = 65536
          kernel.msgmnb = 65536
          kernel.msgmni = 32768
          vm.max_map_count = 262144
      recommend:
      - match:
        - label: node-role.kubernetes.io/worker
        priority: 10
        profile: cp4d-wkc-ipc
  2. Run the following command to apply the changes:
    oc create -f 42-cp4d.yaml
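
After the Tuned custom resource is created, the Node Tuning Operator applies the profile to the matching worker nodes. You can spot-check one node with the following commands; this is a minimal sketch, and <node> is a placeholder for a worker node name:

    oc get tuned -n openshift-cluster-node-tuning-operator
    oc debug node/<node> -- chroot /host sysctl kernel.shmmni kernel.shmall kernel.sem vm.max_map_count

The sysctl output should match the values in the profile.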

For Red Hat OpenShift version 3.11

On each compute node in the cluster, perform the following steps.

  1. Create a conf file for the Cloud Pak for Data kernel settings in the /etc/sysctl.d directory.

    The files are processed in alphabetical order. To ensure that the settings from the Cloud Pak for Data conf file are retained after the sysctl files are processed, give the conf file a name that ensures that it is processed at the end. For example, name the file /etc/sysctl.d/42-cp4d.conf.

  2. Check the virtual memory limit setting by running the following command:
    sysctl -a 2>/dev/null | grep vm.max | grep -v next_id
    The recommended value is at least:
    vm.max_map_count = 262144
    1. If the vm.max_map_count value is less than 262144, add the following entry in the /etc/sysctl.d/42-cp4d.conf file:
      vm.max_map_count = 262144
  3. Check the message limit settings by running the following command:
    sysctl -a 2>/dev/null | grep kernel.msg | grep -v next_id
    The recommended values are at least:
    kernel.msgmax = 65536
    kernel.msgmnb = 65536
    kernel.msgmni = 32768
    1. If the kernel.msgmax value is less than 65536, add the following entry in the /etc/sysctl.d/42-cp4d.conf file:
      kernel.msgmax = 65536
    2. If the kernel.msgmnb value is less than 65536, add the following entry in the /etc/sysctl.d/42-cp4d.conf file:
      kernel.msgmnb = 65536
    3. If the kernel.msgmni value is less than 32768, add the following entry in the /etc/sysctl.d/42-cp4d.conf file:
      kernel.msgmni = 32768
  4. Check the shared memory limit settings by running the following command:
    sysctl -a 2>/dev/null | grep kernel.shm | grep -v next_id | grep -v shm_rmid_forced
    The recommended values are at least:
    kernel.shmmax = 68719476736
    kernel.shmall = 33554432
    kernel.shmmni = 16384
    1. If the kernel.shmmax value is less than 68719476736, add the following entry in the /etc/sysctl.d/42-cp4d.conf file:
      kernel.shmmax = 68719476736
    2. If the kernel.shmall value is less than 33554432, add the following entry in the /etc/sysctl.d/42-cp4d.conf file:
      kernel.shmall = 33554432
    3. If the kernel.shmmni value is less than 16384, add the following entry in the /etc/sysctl.d/42-cp4d.conf file:
      kernel.shmmni = 16384
  5. Check the semaphore limit settings by running the following command:
    sysctl -a 2>/dev/null | grep kernel.sem | grep -v next_id
    The recommended values are at least:
    kernel.sem = 250 1024000 100 16384
    Specifically:
    • The max semaphores per array must be at least 250.
    • The max semaphores system wide must be at least 1024000.
    • The max ops per semop call must be at least 100.
    • The max number of arrays must be at least 16384.
    1. If any of the semaphore limit settings are less than the minimum requirements, add the following entry in the /etc/sysctl.d/42-cp4d.conf file:
      kernel.sem = 250 1024000 100 16384
  6. Run the following command to apply the changes that you made to the /etc/sysctl.d/42-cp4d.conf file:
    sysctl -p /etc/sysctl.d/42-cp4d.conf
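
For reference, a /etc/sysctl.d/42-cp4d.conf file that applies all of the preceding recommendations for a 64 GB x86 compute node looks like the following example. Include only the entries whose current values are below the recommendations:

    vm.max_map_count = 262144
    kernel.msgmax = 65536
    kernel.msgmnb = 65536
    kernel.msgmni = 32768
    kernel.shmmax = 68719476736
    kernel.shmall = 33554432
    kernel.shmmni = 16384
    kernel.sem = 250 1024000 100 16384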

Power SMT settings

On Power Systems, you must also change the simultaneous multithreading (SMT) settings for small core (Kernel-based Virtual Machine capable) and big core (PowerVM® capable) systems.
Note: PowerVM capable systems include L922, E950, and E980. Kernel-based Virtual Machine (KVM) capable systems include LC922, IC922, and AC922.

  1. Create a YAML file, smt.yaml, with the following content:
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: 99-worker-smt
    spec:
      kernelArguments:
        - 'slub_max_order=0'
      config:
        ignition:
          version: 2.2.0
        storage:
          files:
            - path: /usr/local/bin/powersmt
              overwrite: true
              mode: 0700
              filesystem: root
              contents:
                source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKZXhwb3J0IFBBVEg9L3Jvb3QvLmxvY2FsL2Jpbjovcm9vdC9iaW46L3NiaW46L2JpbjovdXNyL2xvY2FsL3NiaW46L3Vzci9sb2NhbC9iaW46L3Vzci9zYmluOi91c3IvYmluCmV4cG9ydCBLVUJFQ09ORklHPS92YXIvbGliL2t1YmVsZXQva3ViZWNvbmZpZwpDT1JFUFM9JCgvYmluL2xzY3B1IHwgL2Jpbi9hd2sgLUY6ICcgJDEgfiAvXkNvcmVcKHNcKSBwZXIgc29ja2V0JC8ge3ByaW50ICQyfSd8L2Jpbi94YXJncykKU09DS0VUUz0kKC9iaW4vbHNjcHUgfCAvYmluL2F3ayAtRjogJyAkMSB+IC9eU29ja2V0XChzXCkkLyB7cHJpbnQgJDJ9J3wvYmluL3hhcmdzKQpsZXQgVE9UQUxDT1JFUz0kQ09SRVBTKiRTT0NLRVRTCk1BWFRIUkVBRFM9JCgvYmluL2xzY3B1IHwgL2Jpbi9hd2sgLUY6ICcgJDEgfiAvXkNQVVwoc1wpJC8ge3ByaW50ICQyfSd8L2Jpbi94YXJncykKbGV0IE1BWFNNVD0kTUFYVEhSRUFEUy8kVE9UQUxDT1JFUwpDVVJSRU5UU01UPSQoL2Jpbi9sc2NwdSB8IC9iaW4vYXdrIC1GOiAnICQxIH4gL15UaHJlYWRcKHNcKSBwZXIgY29yZSQvIHtwcmludCAkMn0nfC9iaW4veGFyZ3MpCgp3aGlsZSA6CmRvCiAgSVNOT0RFREVHUkFERUQ9JCgvYmluL29jIGdldCBub2RlICRIT1NUTkFNRSAtbyB5YW1sIHwvYmluL2dyZXAgbWFjaGluZWNvbmZpZ3VyYXRpb24ub3BlbnNoaWZ0LmlvL3JlYXNvbiB8L2Jpbi9ncmVwICJ1bmV4cGVjdGVkIG9uLWRpc2sgc3RhdGUgdmFsaWRhdGluZyIpCiAgU01UTEFCRUw9JCgvYmluL29jIGdldCBub2RlICRIT1NUTkFNRSAtTCBTTVQgLS1uby1oZWFkZXJzIHwvYmluL2F3ayAne3ByaW50ICQ2fScpCiAgaWYgW1sgLW4gJFNNVExBQkVMIF1dCiAgICB0aGVuCiAgICAgIGNhc2UgJFNNVExBQkVMIGluCiAgICAgICAgMSkgVEFSR0VUU01UPTEKICAgICAgOzsKICAgICAgICAyKSBUQVJHRVRTTVQ9MgogICAgICA7OwogICAgICAgIDQpIFRBUkdFVFNNVD00CiAgICAgIDs7CiAgICAgICAgOCkgVEFSR0VUU01UPTgKICAgICAgOzsKICAgICAgICAqKSBUQVJHRVRTTVQ9JENVUlJFTlRTTVQgOyBlY2hvICJTTVQgdmFsdWUgbXVzdCBiZSAxLCAyLCA0LCBvciA4IGFuZCBzbWFsbGVyIHRoYW4gTWF4aW11bSBTTVQuIgogICAgICA7OwogICAgICBlc2FjCiAgICBlbHNlCiAgICAgIFRBUkdFVFNNVD0kTUFYU01UCiAgZmkKCiAgaWYgW1sgLW4gJElTTk9ERURFR1JBREVEIF1dCiAgICB0aGVuCiAgICAgIHRvdWNoIC9ydW4vbWFjaGluZS1jb25maWctZGFlbW9uLWZvcmNlCiAgZmkKCiAgQ1VSUkVOVFNNVD0kKC9iaW4vbHNjcHUgfCAvYmluL2F3ayAtRjogJyAkMSB+IC9eVGhyZWFkXChzXCkgcGVyIGNvcmUkLyB7cHJpbnQgJDJ9J3wvYmluL3hhcmdzKQoKICBpZiBbWyAkQ1VSUkVOVFNNVCAtbmUgJFRBUkdFVFNNVCBdXQogICAgdGhlbgogICAgICBJTklUT05USFJFQUQ9MAogICAgICBJTklUT0ZGVEhSRUFEPSRUQVJHRVRTTVQKICAgICAgaWYgW1sgJE1BWFNNVCAtZ2UgJFRBUkdFVFNNVCBdXQogICAgICAgIHRoZW4KICAgICAgICAgIHdoaWxlIFtbICRJTklUT05USFJFQUQgLWx0ICRNQVhUSFJFQURTIF1dCiAgICAgICAgICBkbwogICAgICAgICAgICBPTlRIUkVBRD0kSU5JVE9OVEhSRUFECiAgICAgICAgICAgIE9GRlRIUkVBRD0kSU5JVE9GRlRIUkVBRAoKICAgICAgICAgICAgd2hpbGUgW1sgJE9OVEhSRUFEIC1sdCAkT0ZGVEhSRUFEIF1dCiAgICAgICAgICAgIGRvCiAgICAgICAgICAgICAgL2Jpbi9lY2hvIDEgPiAvc3lzL2RldmljZXMvc3lzdGVtL2NwdS9jcHUkT05USFJFQUQvb25saW5lCiAgICAgICAgICAgICAgbGV0IE9OVEhSRUFEPSRPTlRIUkVBRCsxCiAgICAgICAgICAgIGRvbmUKICAgICAgICAgICAgbGV0IElOSVRPTlRIUkVBRD0kSU5JVE9OVEhSRUFEKyRNQVhTTVQKICAgICAgICAgICAgd2hpbGUgW1sgJE9GRlRIUkVBRCAtbHQgJElOSVRPTlRIUkVBRCBdXQogICAgICAgICAgICBkbwogICAgICAgICAgICAgIC9iaW4vZWNobyAwID4gL3N5cy9kZXZpY2VzL3N5c3RlbS9jcHUvY3B1JE9GRlRIUkVBRC9vbmxpbmUKICAgICAgICAgICAgICBsZXQgT0ZGVEhSRUFEPSRPRkZUSFJFQUQrMQogICAgICAgICAgICBkb25lCiAgICAgICAgICAgIGxldCBJTklUT0ZGVEhSRUFEPSRJTklUT0ZGVEhSRUFEKyRNQVhTTVQKICAgICAgICAgIGRvbmUKICAgICAgICBlbHNlCiAgICAgICAgICBlY2hvICJUYXJnZXQgU01UIG11c3QgYmUgc21hbGxlciBvciBlcXVhbCB0aGFuIE1heGltdW0gU01UIHN1cHBvcnRlZCIKICAgICAgZmkKICBmaQogIC9iaW4vc2xlZXAgMzAKZG9uZQo=
            - path: /etc/systemd/system/smt.service
              overwrite: true
              mode: 0644
              filesystem: root
              contents:
                source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKZXhwb3J0IFBBVEg9L3Jvb3QvLmxvY2FsL2Jpbjovcm9vdC9iaW46L3NiaW46L2JpbjovdXNyL2xvY2FsL3NiaW46L3Vzci9sb2NhbC9iaW46L3Vzci9zYmluOi91c3IvYmluCmV4cG9ydCBLVUJFQ09ORklHPS92YXIvbGliL2t1YmVsZXQva3ViZWNvbmZpZwpDT1JFUFM9JCgvYmluL2xzY3B1IHwgL2Jpbi9hd2sgLUY6ICcgJDEgfiAvXkNvcmVcKHNcKSBwZXIgc29ja2V0JC8ge3ByaW50ICQyfSd8L2Jpbi94YXJncykKU09DS0VUUz0kKC9iaW4vbHNjcHUgfCAvYmluL2F3ayAtRjogJyAkMSB+IC9eU29ja2V0XChzXCkkLyB7cHJpbnQgJDJ9J3wvYmluL3hhcmdzKQpsZXQgVE9UQUxDT1JFUz0kQ09SRVBTKiRTT0NLRVRTCk1BWFRIUkVBRFM9JCgvYmluL2xzY3B1IHwgL2Jpbi9hd2sgLUY6ICcgJDEgfiAvXkNQVVwoc1wpJC8ge3ByaW50ICQyfSd8L2Jpbi94YXJncykKbGV0IE1BWFNNVD0kTUFYVEhSRUFEUy8kVE9UQUxDT1JFUwpDVVJSRU5UU01UPSQoL2Jpbi9sc2NwdSB8IC9iaW4vYXdrIC1GOiAnICQxIH4gL15UaHJlYWRcKHNcKSBwZXIgY29yZSQvIHtwcmludCAkMn0nfC9iaW4veGFyZ3MpCgp3aGlsZSA6CmRvCiAgSVNOT0RFREVHUkFERUQ9JCgvYmluL29jIGdldCBub2RlICRIT1NUTkFNRSAtbyB5YW1sIHwvYmluL2dyZXAgbWFjaGluZWNvbmZpZ3VyYXRpb24ub3BlbnNoaWZ0LmlvL3JlYXNvbiB8L2Jpbi9ncmVwICJ1bmV4cGVjdGVkIG9uLWRpc2sgc3RhdGUgdmFsaWRhdGluZyIpCiAgU01UTEFCRUw9JCgvYmluL29jIGdldCBub2RlICRIT1NUTkFNRSAtTCBTTVQgLS1uby1oZWFkZXJzIHwvYmluL2F3ayAne3ByaW50ICQ2fScpCiAgaWYgW1sgLW4gJFNNVExBQkVMIF1dCiAgICB0aGVuCiAgICAgIGNhc2UgJFNNVExBQkVMIGluCiAgICAgICAgMSkgVEFSR0VUU01UPTEKICAgICAgOzsKICAgICAgICAyKSBUQVJHRVRTTVQ9MgogICAgICA7OwogICAgICAgIDQpIFRBUkdFVFNNVD00CiAgICAgIDs7CiAgICAgICAgOCkgVEFSR0VUU01UPTgKICAgICAgOzsKICAgICAgICAqKSBUQVJHRVRTTVQ9JENVUlJFTlRTTVQgOyBlY2hvICJTTVQgdmFsdWUgbXVzdCBiZSAxLCAyLCA0LCBvciA4IGFuZCBzbWFsbGVyIHRoYW4gTWF4aW11bSBTTVQuIgogICAgICA7OwogICAgICBlc2FjCiAgICBlbHNlCiAgICAgIFRBUkdFVFNNVD0kTUFYU01UCiAgZmkKCiAgaWYgW1sgLW4gJElTTk9ERURFR1JBREVEIF1dCiAgICB0aGVuCiAgICAgIHRvdWNoIC9ydW4vbWFjaGluZS1jb25maWctZGFlbW9uLWZvcmNlCiAgZmkKCiAgQ1VSUkVOVFNNVD0kKC9iaW4vbHNjcHUgfCAvYmluL2F3ayAtRjogJyAkMSB+IC9eVGhyZWFkXChzXCkgcGVyIGNvcmUkLyB7cHJpbnQgJDJ9J3wvYmluL3hhcmdzKQoKICBpZiBbWyAkQ1VSUkVOVFNNVCAtbmUgJFRBUkdFVFNNVCBdXQogICAgdGhlbgogICAgICBJTklUT05USFJFQUQ9MAogICAgICBJTklUT0ZGVEhSRUFEPSRUQVJHRVRTTVQKICAgICAgaWYgW1sgJE1BWFNNVCAtZ2UgJFRBUkdFVFNNVCBdXQogICAgICAgIHRoZW4KICAgICAgICAgIC91c3IvYmluL3N5c3RlbWN0bCBzdG9wIGNyaW8KICAgICAgICAgIHdoaWxlIFtbICRJTklUT05USFJFQUQgLWx0ICRNQVhUSFJFQURTIF1dCiAgICAgICAgICBkbwogICAgICAgICAgICBPTlRIUkVBRD0kSU5JVE9OVEhSRUFECiAgICAgICAgICAgIE9GRlRIUkVBRD0kSU5JVE9GRlRIUkVBRAoKICAgICAgICAgICAgd2hpbGUgW1sgJE9OVEhSRUFEIC1sdCAkT0ZGVEhSRUFEIF1dCiAgICAgICAgICAgIGRvCiAgICAgICAgICAgICAgL2Jpbi9lY2hvIDEgPiAvc3lzL2RldmljZXMvc3lzdGVtL2NwdS9jcHUkT05USFJFQUQvb25saW5lCiAgICAgICAgICAgICAgbGV0IE9OVEhSRUFEPSRPTlRIUkVBRCsxCiAgICAgICAgICAgIGRvbmUKICAgICAgICAgICAgbGV0IElOSVRPTlRIUkVBRD0kSU5JVE9OVEhSRUFEKyRNQVhTTVQKICAgICAgICAgICAgd2hpbGUgW1sgJE9GRlRIUkVBRCAtbHQgJElOSVRPTlRIUkVBRCBdXQogICAgICAgICAgICBkbwogICAgICAgICAgICAgIC9iaW4vZWNobyAwID4gL3N5cy9kZXZpY2VzL3N5c3RlbS9jcHUvY3B1JE9GRlRIUkVBRC9vbmxpbmUKICAgICAgICAgICAgICBsZXQgT0ZGVEhSRUFEPSRPRkZUSFJFQUQrMQogICAgICAgICAgICBkb25lCiAgICAgICAgICAgIGxldCBJTklUT0ZGVEhSRUFEPSRJTklUT0ZGVEhSRUFEKyRNQVhTTVQKICAgICAgICAgIGRvbmUKICAgICAgICAgIC91c3IvYmluL3N5c3RlbWN0bCBzdGFydCBjcmlvCiAgICAgICAgZWxzZQogICAgICAgICAgZWNobyAiVGFyZ2V0IFNNVCBtdXN0IGJlIHNtYWxsZXIgb3IgZXF1YWwgdGhhbiBNYXhpbXVtIFNNVCBzdXBwb3J0ZWQiCiAgICAgIGZpCiAgZmkKICAvYmluL3NsZWVwIDMwCmRvbmUK
        systemd:
          units:
            - name: smt.service
              enabled: true
  2. Run the following command to apply the changes:
    oc create -f smt.yaml

    Your worker nodes perform a rolling reboot to update the kernel boot command-line parameters. When the rolling reboot is complete, perform the remaining steps.

  3. On all PowerVM capable worker nodes (for example, nodes on L922, E950, and E980), run the following command:
    oc label node <node> SMT=4
  4. On all KVM capable worker nodes (for example, nodes on LC922, IC922, and AC922), run the following command:
    oc label node <node> SMT=2
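
To confirm that the smt.service applied the requested mode on a node, you can check the active thread count. This is a minimal check; <node> is a placeholder for a worker node name:

    oc get node <node> -L SMT --no-headers
    oc debug node/<node> -- chroot /host lscpu | grep 'Thread(s) per core'

On a node that is labeled SMT=4, lscpu should report 4 threads per core.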