Pod Affinity Rules

Pods which may communicate a lot or share data may operate best if co-located, which would be a form of affinity. For greater fault tolerance, you may want Pods to be as separate as possible, which would be anti-affinity.
These settings are used by the scheduler based on the labels of Pods that are already running. As a result, the scheduler must interrogate each node and track the labels of running Pods. Clusters larger than several hundred nodes may see significant performance loss. Pod affinity rules use the following operators (see the sketch after this list):
- `In`
- `NotIn`
- `Exists`
- `DoesNotExist`
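
These operators are used inside a `labelSelector`'s `matchExpressions`. A minimal sketch, with hypothetical label keys and values:

```yaml
# Fragment of a labelSelector, as it would appear inside an affinity
# term (see the full manifests at the end of this section).
matchExpressions:
- key: app
  operator: In        # the label's value must be one of the listed values
  values:
  - web
  - api
- key: tier
  operator: Exists    # the label key must be present; any value matches
```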

As with node affinity, the following fields define how the rules are applied; complete example manifests follow the list.
- `requiredDuringSchedulingIgnoredDuringExecution`: The Pod will not be scheduled on a node unless the rule is true. If the rule later becomes false, the Pod will continue to run. This could be seen as a hard rule.
- `preferredDuringSchedulingIgnoredDuringExecution`: The scheduler will choose a node with the desired setting before those without. If no properly-labeled nodes are available, the Pod will execute anyway. This is more of a soft setting, which declares a preference instead of a requirement.
- `podAffinity`: The scheduler will try to schedule Pods together.
- `podAntiAffinity`: Causes the scheduler to keep Pods on different nodes.
- `topologyKey`: Allows a general grouping of Pod deployments. Affinity (or the inverse, anti-affinity) will try to run on nodes that have the declared topology key and are running Pods with a particular label. The topologyKey could be any legal key, with some important considerations:
  - If using `requiredDuringScheduling` and the admission controller `LimitPodHardAntiAffinityTopology` setting, the topologyKey must be set to `kubernetes.io/hostname`.
  - If using `preferredDuringScheduling`, an empty topologyKey is assumed to be all, or the combination of `kubernetes.io/hostname`, `topology.kubernetes.io/zone`, and `topology.kubernetes.io/region`.
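
Putting the fields together, here is a minimal sketch of a Pod that uses the hard rule to co-locate with Pods labeled `app: cache` on the same node. The Pod name, labels, and image are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod                  # hypothetical name
spec:
  affinity:
    podAffinity:
      # Hard rule: the Pod is not scheduled unless some node in the
      # topology already runs a Pod matching the selector.
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - cache
        topologyKey: kubernetes.io/hostname   # co-locate on the same node
  containers:
  - name: web
    image: nginx
```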
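
The soft, anti-affinity variant declares a preference instead. This sketch asks the scheduler to keep replicas labeled `app: web` apart across zones, again with hypothetical names; the Pod still runs if no node satisfies the preference:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-replica              # hypothetical name
  labels:
    app: web
spec:
  affinity:
    podAntiAffinity:
      # Soft rule: a weighted preference (weights range from 1 to 100).
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - web
          topologyKey: topology.kubernetes.io/zone   # spread across zones
  containers:
  - name: web
    image: nginx
```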