Tencent Cloud

Tencent Kubernetes Engine


Declarative Operation Practice

Last updated: 2024-06-27 11:09:15

Operations Supported by Kubectl

MachineSet
  Creating a native node pool: kubectl create -f machineset-demo.yaml
  Viewing the list of native node pools: kubectl get machineset
  Viewing the details of a native node pool: kubectl describe ms machineset-name
  Deleting a native node pool: kubectl delete ms machineset-name
  Scaling out a native node pool: kubectl scale --replicas=3 machineset/machineset-name

Machine
  Viewing the list of native nodes: kubectl get machine
  Viewing the details of a native node: kubectl describe ma machine-name
  Deleting a native node: kubectl delete ma machine-name

HealthCheckPolicy
  Creating a fault self-healing rule: kubectl create -f demo-HealthCheckPolicy.yaml
  Viewing the list of fault self-healing rules: kubectl get HealthCheckPolicy
  Viewing the details of a fault self-healing rule: kubectl describe HealthCheckPolicy HealthCheckPolicy-name
  Deleting a fault self-healing rule: kubectl delete HealthCheckPolicy HealthCheckPolicy-name


Using CRD via YAML

MachineSet

For the parameter settings of a native node pool, refer to the Description of Parameters for Creating Native Nodes.
apiVersion: node.tke.cloud.tencent.com/v1beta1
kind: MachineSet
spec:
  autoRepair: false                # Fault self-healing switch
  displayName: test
  healthCheckPolicyName: ""        # Self-healing rule name
  instanceTypes:                   # Instance type specification
  - S5.MEDIUM2
  replicas: 1                      # Node quantity
  scaling:                         # Auto-scaling policy
    createPolicy: ZonePriority
    maxReplicas: 1
  subnetIDs:                       # Node pool subnet
  - subnet-nnwwb64w
  template:
    metadata:
      annotations:
        node.tke.cloud.tencent.com/machine-cloud-tags: '[{"tagKey":"xxx","tagValue":"xxx"}]' # Tencent Cloud tag
    spec:
      displayName: tke-np-mpam3v4b-worker    # Custom display name
      metadata:
        annotations:
          annotation-key1: annotation-value1 # Custom annotations
        labels:
          label-test-key: label-test-value   # Custom labels
      providerSpec:
        type: Native
        value:
          dataDisks:                         # Data disk parameters
          - deleteWithInstance: true
            diskID: ""
            diskSize: 50
            diskType: CloudPremium
            fileSystem: ext4
            mountTarget: /var/lib/containerd
          instanceChargeType: PostpaidByHour # Node billing mode
          keyIDs:                            # SSH keys for node login
          - skey-xxx
          lifecycle:                         # Custom scripts
            postInit: echo "after node init"
            preInit: echo "before node init"
          management:                        # Management parameters, including kubelet/kernel/nameserver/hostname
          securityGroupIDs:                  # Security group configuration
          - sg-xxxxx
          systemDisk:                        # System disk configuration
            diskSize: 50
            diskType: CloudPremium
          runtimeRootDir: /var/lib/containerd
          taints:                            # Taints (optional)
          - effect: NoExecute
            key: taint-key2
            value: value2
      type: Native
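If you generate node pool manifests programmatically, note that kubectl also accepts JSON manifests. The following is a minimal sketch that builds a MachineSet skeleton as a Python dict; the instance type and subnet are the placeholder values from the YAML above and must be replaced with values valid in your own VPC and region.

```python
import json

# Minimal MachineSet manifest as a dict. Field names mirror the YAML above;
# subnet-nnwwb64w and S5.MEDIUM2 are placeholders, not real resources.
machineset = {
    "apiVersion": "node.tke.cloud.tencent.com/v1beta1",
    "kind": "MachineSet",
    "spec": {
        "autoRepair": False,              # fault self-healing switch
        "displayName": "test",
        "instanceTypes": ["S5.MEDIUM2"],
        "replicas": 1,                    # desired node count
        "subnetIDs": ["subnet-nnwwb64w"],
        "template": {
            "spec": {
                "providerSpec": {"type": "Native", "value": {}},
            }
        },
    },
}

# Serialize to JSON; `kubectl create -f machineset-demo.json` accepts this
# just like the YAML form.
manifest = json.dumps(machineset, indent=2)
print(manifest)
```

This avoids a YAML library dependency entirely, since Kubernetes treats YAML and JSON manifests interchangeably.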

Kubectl Operation Demo

MachineSet

1. Run the kubectl create -f machineset-demo.yaml command to create a MachineSet based on the preceding YAML file.

2. Run the kubectl get machineset command to view the status of the MachineSet np-pjrlok3w. At this time, the corresponding node pool already exists in the console, and its node is being created.


3. Run the kubectl describe machineset np-pjrlok3w command to view the description of the MachineSet np-pjrlok3w.

4. Run the kubectl scale --replicas=2 machineset/np-pjrlok3w command to scale the node pool to two nodes.

5. Run the kubectl delete ms np-pjrlok3w command to delete the node pool.
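The scale step above can also be expressed as a JSON merge patch on the MachineSet, which is convenient in scripts. A sketch, assuming (as in the YAML earlier) that the replica count lives at spec.replicas; np-pjrlok3w is the demo node pool name:

```python
import json
import shlex

# JSON merge patch that sets the desired replica count of the node pool.
patch = {"spec": {"replicas": 2}}
patch_json = json.dumps(patch)

# Equivalent kubectl invocation; run this against a live cluster.
cmd = f"kubectl patch machineset np-pjrlok3w --type=merge -p {shlex.quote(patch_json)}"
print(cmd)
```

`kubectl scale` and a merge patch on spec.replicas produce the same declarative change; the controller reconciles the node count either way.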


Machine

1. Run the kubectl get machine command to view the machine list. At this time, the corresponding node already exists in the console.




2. Run the kubectl describe ma np-14024r66-nv8bk command to view the description of the machine np-14024r66-nv8bk.

3. Run the kubectl delete ma np-14024r66-nv8bk command to delete the node.
Note:
If you delete a node directly without adjusting the expected number of nodes in the node pool, the node pool detects that the actual node count no longer matches the declared count, creates a new node, and adds it to the pool. To delete a node permanently, proceed as follows:
1. Run the kubectl scale --replicas=1 machineset/np-xxxxx command to adjust the expected number of nodes.
2. Run the kubectl delete machine np-xxxxxx-dtjhd command to delete the corresponding node.
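The two-step deletion above can be wrapped in a small helper. This is a sketch using subprocess; the dry_run guard only builds the commands (so it can be exercised without a cluster), and the node pool and machine names below are the same placeholders as in the note.

```python
import subprocess

def safe_delete_machine(machineset: str, machine: str, new_replicas: int,
                        dry_run: bool = True) -> list[list[str]]:
    """Scale the MachineSet's desired replica count down first, then delete
    the machine, so the node pool controller does not recreate the node."""
    cmds = [
        ["kubectl", "scale", f"--replicas={new_replicas}", f"machineset/{machineset}"],
        ["kubectl", "delete", "machine", machine],
    ]
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)  # stop at the first failure
    return cmds

# Placeholder names from the note above; set dry_run=False on a live cluster.
commands = safe_delete_machine("np-xxxxx", "np-xxxxxx-dtjhd", new_replicas=1)
for c in commands:
    print(" ".join(c))
```

Ordering matters: scaling down first records the new desired state, so the subsequent delete is not treated as drift to repair.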
