Cluster Autoscaler (CA) ensures that all pods in the cluster can be scheduled regardless of the actual load, whereas node auto scaling based on monitoring metrics does not take pods into consideration. As a result, nodes without pods might be added, or nodes running system-critical pods such as kube-dns might be deleted. Kubernetes discourages the latter mechanism. In short, these two modes conflict and should not be enabled at the same time.
When CA is enabled, the cluster creates a launch configuration based on the configuration of the selected node and binds an auto scaling group to it. The cluster then performs scale-in/out within this bound auto scaling group. CVM instances added during scale-out are automatically added to the cluster. Nodes that are automatically scaled in/out are billed on a pay-as-you-go basis. For more information about auto scaling groups, see Auto Scaling (AS).
No. CA only scales in the nodes within the auto scaling group. Nodes that are added on the TKE Console are not added to the auto scaling group.
No. We do not recommend making any modifications on the AS Console.
When creating an auto scaling group, you need to select an existing node in the cluster as a reference for creating the launch configuration. The reference node configuration includes:
Based on the service level and type, you can create multiple auto scaling groups and set a different label on each. Nodes scaled out from a group carry that group's label, which lets you classify services and schedule them onto the appropriate nodes.
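As a sketch of how a workload is pinned to nodes from one such group, suppose a group is configured to apply the label `service-tier: backend` to its nodes (the label name, Deployment name, and image here are illustrative, not from the TKE documentation). A `nodeSelector` then restricts scheduling to those nodes:

```yaml
# Hypothetical Deployment scheduled only onto nodes scaled out from the
# auto scaling group that applies the label service-tier: backend.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend-api
  template:
    metadata:
      labels:
        app: backend-api
    spec:
      nodeSelector:
        service-tier: backend   # must match the label set on the auto scaling group's nodes
      containers:
      - name: app
        image: nginx:1.25
```

If no node with a matching label is available, the pending pods will in turn trigger CA to scale out the corresponding group.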
Each Tencent Cloud user is provided with a quota of 30 pay-as-you-go CVM instances in each availability zone. You can submit a ticket to apply for more instances for your auto scaling group.
For more information about the quotas, see CVM Instance Quantity and Quota for your current availability zone. In addition, there is a maximum limit of 200 instances for Auto Scaling; you can submit a ticket to apply for a higher quota.
Since pods will be rescheduled when a node is scaled in, scale-in can be performed only if the service can tolerate rescheduling and short-term interruption. We recommend using a PodDisruptionBudget (PDB). A PDB specifies the minimum number or percentage of replicas in a pod set that must remain available at all times. With a PDB, application deployers can ensure that cluster operations that actively remove pods never terminate too many at once, which helps prevent data loss, service interruption, or unacceptable service degradation.
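As a minimal sketch, a PDB that keeps at least two replicas of a hypothetical application labeled `app: web` available during node drains might look like this (the name and labels are illustrative; `policy/v1` assumes Kubernetes 1.21 or later, while older clusters use `policy/v1beta1`):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2        # alternatively maxUnavailable, or a percentage such as "50%"
  selector:
    matchLabels:
      app: web           # must match the labels of the pods to protect
```

With this budget in place, CA will not drain a node if evicting its pods would drop the number of available `app: web` replicas below two.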
Every 10 seconds.
It generally takes less than 10 minutes. For more information, see Auto Scaling.
Please check the following:
```
# You can set the following information in the annotations of the node:
kubectl annotate node <nodename> cluster-autoscaler.kubernetes.io/scale-down-disabled=true
```
You can query the scaling activity of an auto scaling group in the AS Console, and view K8s events on the following three resources: