Run `kubectl describe pod <pod-name>` to look up event information, which can be used to analyze the cause:

```shell
$ kubectl describe pod tikv-0
...
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  3m (x106 over 33m)  default-scheduler  0/4 nodes are available: 1 node(s) had no available volume zone, 2 Insufficient cpu, 3 Insufficient memory.
```
Run the following command to check the node's resources:

```shell
kubectl describe node <node-name>
```

Pay attention to the following fields in the output:
- `Allocatable`: all the resources that the node can allocate to Pods.
- `Allocated resources`: the resources that have already been allocated (the sum of the Requests of all Pods on the node).

The resources left on the node are `Allocatable` minus `Allocated resources`. If that amount is less than the Pod's Request, the node does not have enough resources to accommodate the Pod, so the scheduler filters the node out in the Predicates stage and the Pod is not scheduled to it.

Scheduling is also restricted by affinity rules (a combined sketch follows the taint output below):

- `nodeAffinity`: affinity to nodes. You can think of it as an enhanced version of `nodeSelector`; it limits the Pod to nodes that meet certain conditions.
- `podAffinity`: affinity to Pods. It schedules related Pods to the same node, or to nodes in the same availability zone.
- `podAntiAffinity`: anti-affinity to Pods. It prevents Pods of the same type from being scheduled to the same place, in order to avoid a single point of failure. For example, you can schedule the Pods that provide DNS service for the cluster to different nodes, so that the failure of a single node does not crash the DNS service and interrupt the business.

A Pod can also stay in Pending because of node taints. Run `kubectl describe node <node-name>` to query the existing taints on a node, as shown below:

```shell
$ kubectl describe nodes host1
...
Taints: special=true:NoSchedule
...
```
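As referenced above, the following is a minimal, hypothetical sketch that combines resource requests with affinity rules in a single Pod template. The Deployment name, the `app: web` and `disktype: ssd` labels, and the request values are illustrative assumptions, not values taken from this guide:

```yaml
# Sketch: requests, nodeAffinity, and podAntiAffinity in one Pod template.
# All names and label values below are assumptions for illustration only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        resources:
          requests:          # compared by the scheduler against Allocatable minus Allocated resources
            cpu: 500m
            memory: 256Mi
      affinity:
        nodeAffinity:        # only schedule to nodes labeled disktype=ssd
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: disktype
                operator: In
                values: ["ssd"]
        podAntiAffinity:     # spread the replicas across different nodes
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: web
            topologyKey: kubernetes.io/hostname
```

If such `required` rules cannot be satisfied by any node, the Pod stays in Pending; prefer `preferredDuringSchedulingIgnoredDuringExecution` when the constraint is only a soft preference.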
If the taint is not needed, remove the taint `special` found above:

```shell
kubectl taint nodes host1 special-
```
If the taint is required and the Pod needs to run on the tainted node, add a toleration to the Pod. The following uses a Deployment (`nginx`) as an example to describe how to add a toleration. Edit the Deployment `nginx`:

```shell
kubectl edit deployment nginx
```
Add the toleration under the `spec` field in the `template` section. The following adds a toleration for the existing taint `special`:

```yaml
tolerations:
- key: "special"
  operator: "Equal"
  value: "true"
  effect: "NoSchedule"
```
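For orientation, here is a sketch of where the `tolerations` block sits in the Deployment manifest; everything other than the tolerations block mirrors a plain nginx Deployment and is shown only as an assumed context:

```yaml
# Sketch: the tolerations block belongs to the Pod spec inside the template.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      tolerations:            # allows scheduling onto nodes tainted special=true:NoSchedule
      - key: "special"
        operator: "Equal"
        value: "true"
        effect: "NoSchedule"
      containers:
      - name: nginx
        image: nginx
```

Note that a toleration only allows the Pod onto the tainted node; it does not force scheduling there. Combine it with `nodeAffinity` or a `nodeSelector` if the Pod must land on that specific node.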

There may also be a bug in kube-scheduler that causes Pods to remain in the Pending status; you can solve the issue by upgrading kube-scheduler. Also check whether kube-scheduler is running properly, and restart the scheduler if it is not.

Taints can be added to a node manually, as shown below:

```shell
$ kubectl taint node host1 special=true:NoSchedule
node "host1" tainted
```
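For reference, a taint added this way is stored in the node object. A sketch of the relevant fragment of `kubectl get node host1 -o yaml` after the command above:

```yaml
# Sketch: how the taint appears in the node's spec
spec:
  taints:
  - key: special
    value: "true"
    effect: NoSchedule
```

While this taint is present, only Pods whose tolerations match its key, value, and effect can be scheduled onto the node.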
Taints can also be added automatically. When a node is marked as unschedulable (for example, with `kubectl cordon <node-name>`), Kubernetes adds the taint `node.kubernetes.io/unschedulable` to the node. In addition, Kubernetes supports the `TaintNodesByCondition` feature. With this feature, the controller manager checks the conditions reported by the node when the node does not run properly. If a condition is met, the corresponding taint is added automatically.
For example, if the condition `OutOfDisk=True` is met, the taint `node.kubernetes.io/out-of-disk` is added to the node.
Conditions and corresponding taints:

```
Condition           Value    Taints
---------           -----    ------
OutOfDisk           True     node.kubernetes.io/out-of-disk
Ready               False    node.kubernetes.io/not-ready
Ready               Unknown  node.kubernetes.io/unreachable
MemoryPressure      True     node.kubernetes.io/memory-pressure
PIDPressure         True     node.kubernetes.io/pid-pressure
DiskPressure        True     node.kubernetes.io/disk-pressure
NetworkUnavailable  True     node.kubernetes.io/network-unavailable
```
The meanings of these conditions are as follows:

- `OutOfDisk` is `True`: the node is out of storage space.
- `Ready` is `False`: the node is unhealthy.
- `Ready` is `Unknown`: the node is unreachable. If a node does not report to controller-manager within the time defined by `node-monitor-grace-period` (40s by default), it is marked as `Unknown`.
- `MemoryPressure` is `True`: the node has little available memory.
- `PIDPressure` is `True`: the node has too many processes running and is running out of PIDs.
- `DiskPressure` is `True`: the node has little available storage space.
- `NetworkUnavailable` is `True`: the node cannot communicate with other Pods because its network is not properly configured.

In addition, when a cluster runs on a cloud provider and a node has not yet been initialized by the cloud controller manager, the taint `node.cloudprovider.kubernetes.io/uninitialized` is added to the node. After the node is successfully initialized, the taint is automatically removed. This prevents Pods from being scheduled to an uninitialized node.
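If a workload should keep running for a while on a node that carries one of the automatically added taints above, it can declare tolerations with `tolerationSeconds`. The sketch below is illustrative only; the 300-second value mirrors the defaults that the DefaultTolerationSeconds admission plugin normally injects for the not-ready and unreachable taints:

```yaml
# Sketch: tolerate the automatically added not-ready/unreachable taints for up to
# 5 minutes before the Pod is evicted. Values are illustrative.
tolerations:
- key: "node.kubernetes.io/not-ready"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 300
- key: "node.kubernetes.io/unreachable"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 300
```

`tolerationSeconds` only applies to taints with the `NoExecute` effect; for `NoSchedule` taints such as `node.kubernetes.io/memory-pressure`, use a plain toleration without it.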