Viewing Node Pool Scaling Logs

Last updated: 2024-12-23 15:16:18

Overview

This document describes how to view the scaling records of node pools, which can help you:
Track business traffic changes and configure node pools to meet demand more efficiently.
Review expenditures to manage costs more effectively.
Identify the causes of scaling failures to manage risks. For example, a scale-out may fail because all resources in a region are sold out.
Note:
When multiple node pools exist, Cluster Autoscaler (CA) selects an appropriate node pool for scaling. Global scaling records can be obtained from CA events.
If you are interested only in the scaling records of a specific node pool and not in CA behavior, go to the node pool details page to view that node pool's scaling records.

Prerequisites

You have created an available node pool. For more information, please see Creating a Node Pool.
You have opened the Node Pool List page. For more information, please see Viewing a Node Pool.

Directions

Viewing global scaling records

The community open-source component CA records each scaling activity as a Kubernetes event attached to the relevant pod or node. However, Kubernetes events are retained in the backend for only 1 hour by default. To query and review the scaling records of a node pool later, we recommend that you enable event persistence to store Kubernetes events durably.
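
If you only need to inspect activity within that 1-hour window, you can read the CA events directly from the API server. The following is a minimal sketch using standard kubectl field selectors; the cluster-autoscaler-status ConfigMap is maintained by the open-source component, and its presence and format may vary by CA version.

# List recent Cluster Autoscaler events across all namespaces, newest last
kubectl get events -A --field-selector source=cluster-autoscaler --sort-by=.lastTimestamp

# The component also summarizes its current state in a ConfigMap in kube-system
kubectl describe configmap cluster-autoscaler-status -n kube-system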

Enabling event persistence

1. Log in to the TKE console.
2. Choose Cluster OPS > Feature Management in the left sidebar to go to the Feature Management page.
3. At the top of the Feature Management page, select a region. Click Set next to the cluster for which you want to enable event persistence.
4. In the Configure Features window that appears, click Edit next to the Event Storage feature.
5. Select Enable Event Storage and choose the logset and log topic for event persistence.
6. Click OK.

Querying event persistence

1. Log in to the CLS console.
2. Click Search and Analyze in the left sidebar to go to the Search and Analyze management page.
3. At the top of the Search and Analyze page, select a region and select the event persistence logset and log topic that you want to view.
4. Enter event.source.component:cluster-autoscaler in the search box and click Search and Analyze.
5. Configure data columns in Column Settings on the right to display the desired columns. To narrow the results to one event type, add the Reason value to the search. For example, if you only want to view scale-out events, search for TriggeredScaleUp.
6. The query result lists the scaling logs, including the scale-out logs of all node pools.

Search guide

Refer to the following to build a more detailed scaling activity query:
For CA scaling events, the Reason field may take any of the following values: TriggeredScaleUp, NotTriggerScaleUp, ScaledUpGroup, FailedToScaleUpGroup, ScaleDown, ScaleDownFailed, and ScaleDownEmpty. For more information, see Detailed Field Description.
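
For example, building on the query in step 4, you can combine the component filter with a Reason value. The event.reason field name below is an assumption based on the persisted event schema; verify the actual field names in your log topic before relying on them.

Scale-out triggers only:
event.source.component:cluster-autoscaler AND event.reason:TriggeredScaleUp

Failed scale-downs:
event.source.component:cluster-autoscaler AND event.reason:ScaleDownFailed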

Querying scaling logs of a specific node pool

1. Log in to the TKE console and click Cluster in the left sidebar.
2. On the Cluster Management page, click the desired cluster ID to open the Deployment page.
3. In the left sidebar, choose Node Management > Node Pool to open the Node Pool List page.
4. On the Node Pool List page, click the ID of the desired node pool.
5. On the node pool details page, click the Scaling Logs tab at the top.

The scaling log fields are as follows:
Activity ID: ID of a scaling activity.
Status: status of a scaling activity.
Description: description of a scaling activity, including the number of nodes added or removed.
Activity Cause: causes for triggering a scaling activity.
Failure Cause: if a scaling activity fails, this column displays the causes of failure.
Start Time: time when a scaling activity starts, accurate to the second.
End Time: time when a scaling activity ends, accurate to the second.
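
Each regular node pool is backed by an Auto Scaling group, so the same activity records can also be pulled programmatically. The following is a minimal sketch using the Tencent Cloud CLI and the AS API DescribeAutoScalingActivities; asg-xxxxxxxx is a placeholder for your node pool's scaling group ID, and the filter name is assumed from the AS API reference, so verify it against the current documentation.

# Query recent scaling activities of the node pool's underlying scaling group
# (asg-xxxxxxxx is a placeholder; the auto-scaling-group-id filter name is an assumption)
tccli as DescribeAutoScalingActivities --Filters '[{"Name":"auto-scaling-group-id","Values":["asg-xxxxxxxx"]}]'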

References

For more information on the features and operations of node pools, please see the related node pool documents.
