
Pod Affinity Rules

Pods that communicate heavily or share data may perform best if co-located, which is a form of affinity. For greater fault tolerance, you may want Pods to be as separate as possible, which is anti-affinity. The scheduler applies these settings based on the labels of Pods that are already running. As a result, the scheduler must interrogate each node and track the labels of running Pods; clusters larger than several hundred nodes may see significant performance loss. Pod affinity rules use the following operators (a manifest sketch follows the list):

  • In
  • NotIn
  • Exists
  • DoesNotExist
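
As a concrete sketch, a podAffinity term using the In operator might look like the following; the app=cache label, the container image, and the topology key are assumptions for illustration, not from the course text:

```yaml
# Hypothetical Pod that asks to be co-located with Pods labeled app=cache.
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In        # also NotIn, Exists, DoesNotExist
            values:
            - cache
        topologyKey: kubernetes.io/hostname
  containers:
  - name: main
    image: nginx
```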

Node Affinity

  • requiredDuringSchedulingIgnoredDuringExecution Means that the Pod will not be scheduled on a node unless the rule is met. If the rule later becomes false (for example, because labels change), the Pod will continue to run; as the name says, the rule is ignored during execution. This can be seen as a hard rule. See the first sketch after this list.

  • preferredDuringSchedulingIgnoredDuringExecution Will choose a node with the desired setting before those without. If no properly labeled nodes are available, the Pod will be scheduled anyway. This is a soft setting, which declares a preference instead of a requirement.

  • podAffinity Causes the scheduler to try to schedule matching Pods together.

  • podAntiAffinity Causes the scheduler to keep matching Pods on different nodes.

  • topologyKey Allows a general grouping of Pod deployments. Affinity (or the inverse, anti-affinity) will try to schedule onto nodes that have the declared topology key and are running Pods with a particular label (see the second sketch after this list). The topologyKey could be any legal label key, with some important considerations:

    • If using requiredDuringScheduling together with the LimitPodHardAntiAffinityTopology admission controller, the topologyKey must be set to kubernetes.io/hostname.
    • If using preferredDuringScheduling, an empty topologyKey is interpreted as all topologies, that is, the combination of kubernetes.io/hostname, topology.kubernetes.io/zone and topology.kubernetes.io/region.
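
To make the hard/soft distinction concrete, here is a minimal nodeAffinity sketch combining both fields; the disktype=ssd requirement, the zone preference, and the image are illustrative assumptions:

```yaml
# Hypothetical Pod: the required rule is a hard constraint,
# the preferred rule is only a soft preference.
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard rule
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
      preferredDuringSchedulingIgnoredDuringExecution:  # soft preference
      - weight: 1
        preference:
          matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - us-east-1a
  containers:
  - name: main
    image: nginx
```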
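
And a podAntiAffinity sketch that prefers to spread Pods labeled app=web across zones, using topologyKey as the grouping domain; the app=web label and the weight value are assumptions for illustration:

```yaml
# Hypothetical Pod: prefer zones not already running another app=web Pod.
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-anti-affinity
  labels:
    app: web
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - web
          topologyKey: topology.kubernetes.io/zone
  containers:
  - name: main
    image: nginx
```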