topologySpreadConstraints parameter
Use the topologySpreadConstraints parameter to define a list of Topology Spread Constraint definitions that spread pods evenly across nodes.

Defining this parameter helps control how pods are spread across your cluster among failure domains such as regions, zones, nodes, and other user-defined topology domains. If server configurations are defined, the Operator pulls from this predefined set of constraints and integrates them into the pod specification of the managed workload.
The following YAML snippet is a sample schema of topologySpreadConstraints:
```yaml
spec:
  topologySpreadConstraints:
    - name: constraint1
      maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: sip
      matchLabelKeys:
        - type
      minDomains: 3
    - name: constraint2
      maxSkew: 2
      topologyKey: kubernetes.io/hostname
      whenUnsatisfiable: ScheduleAnyway
```
The following table describes the properties of the topologySpreadConstraints parameter of SIPEnvironment. For more information, see Pod Topology Spread Constraints.
| Property | Default value | Value type | Required | Description |
|---|---|---|---|---|
| name | | string | Yes | Specify the name of the Topology Spread Constraint. |
| maxSkew | | integer | Yes | Specify the degree to which pods might be unevenly distributed. This is a mandatory property and its value must be greater than zero. |
| minDomains | | integer | No | Specify the minimum number of eligible domains. |
| topologyKey | | string | Yes | Specify the key of node labels. |
| whenUnsatisfiable | | string | Yes | Specify this property to determine how a pod is handled if it does not satisfy the spread constraint. |
| labelSelector | | string | No | Specify this property to find matching pods. |
| matchLabelKeys | | string | No | Specify the list of pod label keys to select the pods over which spreading is calculated. |
| nodeAffinityPolicy | | string | No | Specify this property to determine how the nodeAffinity or nodeSelector of a pod is treated when calculating the pod topology spread skew. Allowed values are Honor or Ignore. |
| nodeTaintsPolicy | | string | No | Specify this property to determine how node taints are treated when calculating the pod topology spread skew. Allowed values are Honor or Ignore. |
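As an illustration of the two node policy properties, a constraint that sets them explicitly might look like the following sketch. The constraint name and the Honor and Ignore values are illustrative choices, not defaults.

```yaml
spec:
  topologySpreadConstraints:
    - name: constraint3              # illustrative name, not from the examples above
      maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: sip
      nodeAffinityPolicy: Honor      # only nodes matching the pod's nodeAffinity/nodeSelector count toward skew
      nodeTaintsPolicy: Ignore       # tainted nodes are still included when calculating skew
```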
Examples to call Topology Spread Constraints from servers
Example 1
The following example demonstrates how to define topologySpreadConstraints and use it so that the pods from ApiSupplies and the pods from ApiDemands each spread evenly and independently, by using matchLabelKeys.
1. Define topologySpreadConstraints in the SIPEnvironment custom resource.

   ```yaml
   spec:
     topologySpreadConstraints:
       - name: constraint1
         maxSkew: 1
         topologyKey: topology.kubernetes.io/zone
         whenUnsatisfiable: ScheduleAnyway
         labelSelector:
           matchLabels:
             app: sip
         matchLabelKeys:
           - type
   ```

2. From the individual server or service group custom resource, call the topologies that are defined in step 1.

   ```yaml
   appServers:
     - active: true
       names:
         - ApiSupplies
       podLabels:
         app: sip
         type: appserver1
       topology:
         - [constraint1]
     - active: true
       names:
         - ApiDemands
       podLabels:
         app: sip
         type: appserver2
       topology:
         - [constraint1]
   ```
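Why the two servers spread independently: in Kubernetes, each key in matchLabelKeys is resolved against the incoming pod's own labels and merged into the constraint's labelSelector. Assuming the Operator copies constraint1 into each pod specification unchanged, the effective selection for an ApiSupplies pod is a sketch like the following, so its skew is computed only over pods that share its type value.

```yaml
# Effective selector for an ApiSupplies pod (sketch):
# matchLabelKeys: [type] is resolved from the pod's own labels,
# so only pods carrying both labels below are counted toward skew.
labelSelector:
  matchLabels:
    app: sip
    type: appserver1   # ApiDemands pods (type: appserver2) are counted separately
```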
Example 2
The following example demonstrates how to define topologySpreadConstraints and use it so that all the pods from ApiSupplies and ApiDemands spread together as one group.
1. Define topologySpreadConstraints in the SIPEnvironment custom resource.

   ```yaml
   spec:
     topologySpreadConstraints:
       - name: constraint1
         maxSkew: 1
         topologyKey: topology.kubernetes.io/zone
         whenUnsatisfiable: ScheduleAnyway
         labelSelector:
           matchLabels:
             app: sip
   ```

2. From the individual server or service group custom resource, call the topologies that are defined in step 1.

   ```yaml
   appServers:
     - active: true
       names:
         - ApiSupplies
       podLabels:
         app: sip
         type: appserver1
       topology:
         - [constraint1]
     - active: true
       names:
         - ApiDemands
       podLabels:
         app: sip
         type: appserver2
       topology:
         - [constraint1]
   ```
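In this case the constraint omits matchLabelKeys, so the labelSelector matches every pod with the app: sip label regardless of its type label. A sketch of the effective selection:

```yaml
# Skew is computed over all pods matching this selector (sketch),
# so ApiSupplies and ApiDemands pods are counted together and
# spread across zones as a single group.
labelSelector:
  matchLabels:
    app: sip
```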