Configuring node scoring for the scheduling service

You can adjust the node scoring configuration for the scheduling service if you want to have more control over where the IBM Cloud Pak® for Data scheduling service schedules pods.

Who needs to complete this task?
A cluster administrator must complete this task.
When do you need to complete this task?
This task applies only if you installed the scheduling service.

Complete this task if you want to override the default pod scheduling behavior of the scheduling service.

Important: The node scoring configuration affects all instances of Cloud Pak for Data on the cluster.

Before you begin

Best practice: You can run many of the commands in this task exactly as written if you set up environment variables for your installation. For instructions, see Setting up installation environment variables.

Ensure that you source the environment variables before you run the commands in this task.

About this task

The scheduling service is based on the default Kubernetes scheduler.

When the Kubernetes scheduler needs to schedule a pod, it uses node scoring to determine which node to schedule the pod on. The Kubernetes scheduler includes several plug-ins. Each plug-in has a weight that factors into the node score. The pod is scheduled on the node with the highest score.

Kubernetes scheduler plug-in         Default weight
TaintToleration                      3
InterPodAffinity                     2
NodeAffinity                         2
PodTopologySpread                    2
ImageLocality                        1
NodeResourcesBalancedAllocation      1
NodeResourcesFit                     1
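
To illustrate how the weights combine, a node's final score is the weighted sum of its per-plug-in scores. The plug-in scores in this example are hypothetical; how Kubernetes normalizes each plug-in's score is internal to the scheduler:

    score = 3*TaintToleration + 2*InterPodAffinity + 2*NodeAffinity
            + 2*PodTopologySpread + 1*ImageLocality
            + 1*NodeResourcesBalancedAllocation + 1*NodeResourcesFit

    Example: plug-in scores of 100, 80, 80, 100, 90, 50, and 50 give
    3*100 + 2*80 + 2*80 + 2*100 + 1*90 + 1*50 + 1*50 = 1010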

The Cloud Pak for Data scheduling service includes a parameter called nodePreference. The service compares node scores that are based on the following plug-ins:

  • TaintToleration
  • InterPodAffinity
  • NodeAffinity
  • PodTopologySpread
  • ImageLocality

If the score for two nodes is the same, the nodePreference parameter acts as a tiebreaker.
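
For example (with hypothetical values), suppose nodes A and B receive identical totals from the five plug-ins above. With the default nodePreference of LessCPURequest, the tiebreak compares allocated CPU:

    Node A plug-in total = Node B plug-in total    (tie)
    Node A allocated CPU (by pod requests) = 35%
    Node B allocated CPU (by pod requests) = 60%
    Result: the pod is scheduled on node A (less allocated CPU)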

By default, the Cloud Pak for Data scheduling service gives more weight to nodes with the least allocated CPU (LessCPURequest). This setting causes pods to spread out across nodes.

You can configure the nodePreference parameter to use a different dimension or to use multiple dimensions. The configuration that you use determines whether pods are spread out across nodes or whether the pods are packed on a subset of the nodes.

Settings that spread pods out across nodes
Option            Dimension
LessCPURequest    Nodes with less allocated CPU (more available CPU) score higher. The available CPU is determined by pod requests.
LessMemRequest    Nodes with less allocated memory (more available memory) score higher. The available memory is determined by pod requests.
LessCPULimit      Nodes with less allocated CPU (more available CPU) score higher. The available CPU is determined by pod limits.
LessMemLimit      Nodes with less allocated memory (more available memory) score higher. The available memory is determined by pod limits.

You can also combine options. For example:

Options                          Dimension
LessCPURequest LessMemRequest    The score for each node is based on the average of the available CPU and available memory. The available resources are determined by pod requests.
LessCPULimit LessMemLimit        The score for each node is based on the average of the available CPU and available memory. The available resources are determined by pod limits.
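
For example (with hypothetical percentages), with LessCPURequest LessMemRequest, each node's tiebreak value is the average of its available CPU and available memory:

    Node A: 70% CPU available, 50% memory available -> (70 + 50) / 2 = 60
    Node B: 30% CPU available, 50% memory available -> (30 + 50) / 2 = 40
    Result: node A scores higher, so pods continue to spread toward node A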

Settings that pack pods on a subset of the nodes
Option            Dimension
MoreCPURequest    Nodes with more allocated CPU score higher. The allocated CPU is determined by pod requests.
MoreMemRequest    Nodes with more allocated memory score higher. The allocated memory is determined by pod requests.
MoreCPULimit      Nodes with more allocated CPU score higher. The allocated CPU is determined by pod limits.
MoreMemLimit      Nodes with more allocated memory score higher. The allocated memory is determined by pod limits.

You can also combine options. For example:

Options                          Dimension
MoreCPURequest MoreMemRequest    The score for each node is based on the average of the allocated CPU and allocated memory. The allocated resources are determined by pod requests.
MoreCPULimit MoreMemLimit        The score for each node is based on the average of the allocated CPU and allocated memory. The allocated resources are determined by pod limits.
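
To decide which dimension fits your workloads, it can help to check how much CPU and memory is currently allocated on each node by requests and limits. The Allocated resources section of the node description lists both. For example (replace <node-name> with one of your node names):

    oc get nodes
    oc describe node <node-name> | grep -A 10 "Allocated resources"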

Procedure

To configure node scoring for the Cloud Pak for Data scheduling service:

  1. Log in to Red Hat® OpenShift® Container Platform as a user with sufficient permissions to complete the task.
    oc login ${OCP_URL}
  2. Set your editor to nano:
    export EDITOR=nano
  3. Open the ibm-cpd-scheduler-scheduler ConfigMap in the editor:
    oc edit cm ibm-cpd-scheduler-scheduler \
    --namespace=${PROJECT_SCHEDULING_SERVICE}
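
    Tip: To review the current scheduler configuration before you edit it, you can print the ConfigMap without opening an editor:

    oc get cm ibm-cpd-scheduler-scheduler \
    --namespace=${PROJECT_SCHEDULING_SERVICE} \
    -o yaml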
  4. Update the nodePreference parameter.

    Delimit multiple options with a space. For example: LessCPURequest LessMemRequest

    ...
    apiVersion: v1
    data:
      scheduler.yaml: |-
        ...
        # Configure node preference, valid values are:
        # LessCPURequest LessMemRequest LessCPULimit LessMemLimit
        # MoreCPURequest MoreMemRequest MoreCPULimit MoreMemLimit
        # To configure more than one value, separate the values with a space, like "nodePreference: LessCPURequest LessMemRequest"
        nodePreference: LessCPURequest
        ...
  5. Press Ctrl+O to save your changes.
  6. Press Ctrl+X to exit the editor.
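
When you exit the editor, the oc edit command applies the updated ConfigMap. To confirm that your change was saved, you can read the value back. For example:

    oc get cm ibm-cpd-scheduler-scheduler \
    --namespace=${PROJECT_SCHEDULING_SERVICE} \
    -o jsonpath='{.data.scheduler\.yaml}' | grep nodePreference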