Pod Affinity Rules
Pods that communicate heavily or share data may operate best if co-located, which would be a form of affinity. For greater fault tolerance, you may want Pods to be as separate as possible, which would be anti-affinity.
These settings are used by the scheduler based on the labels of Pods that are already running. As a result, the scheduler must interrogate each node and track the labels of running Pods; clusters larger than several hundred nodes may see a significant scheduling slowdown. Pod affinity rules use the following operators:
- `In`
- `NotIn`
- `Exists`
- `DoesNotExist`
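As a minimal sketch of how these operators appear in a manifest (the Pod name, label, and image below are hypothetical, not from the course), a Pod can ask to be co-located with Pods labeled `app: cache`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server              # hypothetical name
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In        # could also be NotIn, Exists, or DoesNotExist
            values:
            - cache
        topologyKey: kubernetes.io/hostname
  containers:
  - name: web
    image: nginx                # illustrative image
```

Note that `Exists` and `DoesNotExist` only test for the presence of the label key, so they take no `values` list.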
The following settings determine how the scheduler applies these rules:
- `requiredDuringSchedulingIgnoredDuringExecution` - Means that the Pod will not be scheduled on a node unless the following operator is true. If the operator changes to become false in the future, the Pod will continue to run. This could be seen as a hard rule.
- `preferredDuringSchedulingIgnoredDuringExecution` - Will choose a node with the desired setting before those without. If no properly-labeled nodes are available, the Pod will execute anyway. This is more of a soft setting, which declares a preference instead of a requirement.
- `podAffinity` - The scheduler will try to schedule Pods together.
- `podAntiAffinity` - Would cause the scheduler to keep Pods on different nodes (see the sketch after this list).
- `topologyKey` - Allows a general grouping of Pod deployments. Affinity (or the inverse anti-affinity) will try to run on nodes with the declared topology key and running Pods with a particular label. The `topologyKey` could be any legal key, with some important considerations:
  - If using `requiredDuringScheduling` and the admission controller `LimitPodHardAntiAffinityTopology` setting, the `topologyKey` must be set to `kubernetes.io/hostname`.
  - If using `preferredDuringScheduling`, an empty `topologyKey` is assumed to be all, or the combination of `kubernetes.io/hostname`, `topology.kubernetes.io/zone` and `topology.kubernetes.io/region`.
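Tying these fields together, below is a minimal, hypothetical Deployment (all names and the image are placeholders) that uses `podAntiAffinity` as a soft rule to spread its replicas across nodes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend                # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      affinity:
        podAntiAffinity:
          # Soft rule: prefer different nodes, but schedule anyway if none qualify.
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - frontend
              topologyKey: kubernetes.io/hostname
      containers:
      - name: frontend
        image: nginx            # illustrative image
```

Switching to `requiredDuringSchedulingIgnoredDuringExecution` (which takes the term directly, without `weight` or the `podAffinityTerm` wrapper) would make the spread a hard rule, at the cost of leaving extra replicas Pending when no node qualifies.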