Tencent Kubernetes Engine

Memory Compression
Last updated: 2024-06-14 16:28:43
This document describes how to activate and use the memory compression feature based on native nodes.

Environment preparations

The memory compression feature requires the native node image to run the latest kernel version (5.4.241-19-0017). You can update the kernel in either of the following ways:

Adding Native Nodes

1. Log in to the Tencent Kubernetes Engine Console, and choose Cluster from the left navigation bar.
2. In the cluster list, click on the desired cluster ID to access its details page.
3. Choose Node Management > Worker Nodes, select the Node Pool tab, and click Create.
4. Select native nodes, and click Create.
5. In Advanced Settings on the Create Node Pool page, find the Annotations field and set "node.tke.cloud.tencent.com/beta-image = wujing".

6. Click Create node pool.
Note:
By default, the image with the latest kernel version (5.4.241-19-0017) will be installed on the native nodes added to this node pool.

Existing Native Nodes

The kernel of existing native nodes can be updated using an RPM package. To obtain it, contact us through Submit a Ticket.

Kernel Version Verification

You can run the kubectl get nodes -o wide command to verify that the node's KERNEL-VERSION has been updated to the latest version 5.4.241-19-0017.1_plus.
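As a quick sanity check, the reported KERNEL-VERSION string can be compared against the required release in a small shell snippet. This is a minimal sketch; the sample version string is taken from the example above, and the helper name is illustrative:

```shell
# Minimal sketch: verify that a KERNEL-VERSION string starts with the
# required release (5.4.241-19-0017). The sample input below matches
# the example output described above.
required="5.4.241-19-0017"

check_kernel() {
  case "$1" in
    "$required"*) echo "ok" ;;
    *) echo "needs update" ;;
  esac
}

check_kernel "5.4.241-19-0017.1_plus"   # prints "ok"
```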


Installing the QosAgent Component

1. Log in to the Tencent Kubernetes Engine Console, and choose Cluster from the left navigation bar.
2. In the cluster list, click on the desired cluster ID to access its details page.
3. Choose Component Management from the left-side menu, and click Create on the Component Management page.
4. On the New Component Management page, select QoS Agent and check Memory Compression in the parameter configurations.

5. Click OK.
6. On the New Component Management page, click Complete to install the component.
Note:
QosAgent v1.1.5 and later versions support memory compression. If the component is already installed in your cluster, perform the following steps:
1. On the cluster's Component Management page, find the successfully deployed QosAgent and click Upgrade on the right.
2. After the upgrade, click Update Configuration and select Memory Compression.
3. Click Complete.

Selecting Nodes to Enable Memory Compression

To allow a gradual (gray) rollout, QosAgent does not enable the kernel configuration required for memory compression on all native nodes by default. You must use a NodeQOS object to specify the nodes on which memory compression can be enabled.

Deploying the NodeQOS Object

1. Deploy the NodeQOS object. Use spec.selector.matchLabels to specify on which nodes to enable memory compression, as shown in the following example:
apiVersion: ensurance.crane.io/v1alpha1
kind: NodeQOS
metadata:
  name: compression
spec:
  selector:
    matchLabels:
      compression: enable
  memoryCompression:
    enable: true
2. Label the node to associate the node with NodeQOS. Perform the following steps:
2.1 Log in to the Tencent Kubernetes Engine Console, and choose Cluster from the left navigation bar.
2.2 In the cluster list, click on the desired cluster ID to access its details page.
2.3 Choose Node Management > Worker Nodes, select the Node Pool tab, and click Edit for the target node pool.
2.4 On the Adjust Node Pool Configuration page, add the label and check Apply this update to existing nodes. In this example, the label is compression: enable.
2.5 Click OK.
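If you manage the cluster from the command line, the NodeQOS manifest and node label above can also be applied with kubectl. The filename and node name below are placeholders:

```shell
# Apply the NodeQOS manifest (the filename is illustrative) and confirm it was created.
kubectl apply -f nodeqos-compression.yaml
kubectl get -f nodeqos-compression.yaml

# Label a node so that it matches the NodeQOS selector above.
kubectl label node <node-name> compression=enable
```

Note that labels set directly on a node may be reconciled by the node pool configuration, so the console flow above is the durable way to set them.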

Validation of Effectiveness

After memory compression is enabled on a node, run the following command to get the node's YAML configuration and check the node's annotation to confirm that memory compression is enabled:
kubectl get node <nodename> -o yaml | grep "gocrane.io/memory-compression"
After logging into the node, check zram, swap, and kernel parameters in turn to confirm that memory compression is correctly enabled. The following is an example:
# Confirm that the zram device is initialized.
# zramctl
NAME       ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 lzo-rle       3.6G   4K   74B   12K       2 [SWAP]

# Confirm the swap settings.
# free -h
              total        used        free      shared  buff/cache   available
Mem:          3.6Gi       441Mi       134Mi       5.0Mi       3.0Gi       2.9Gi
Swap:         3.6Gi       0.0Ki       3.6Gi

# Confirm the kernel parameter for swappiness.
# sysctl vm.force_swappiness
vm.force_swappiness = 1

Selecting Services to Enable Memory Compression

Deploying the PodQOS Object

1. Deploy the PodQOS object. Enable memory compression on specific pods using spec.labelSelector.matchLabels, as shown in the following example:
apiVersion: ensurance.crane.io/v1alpha1
kind: PodQOS
metadata:
  name: memorycompression
spec:
  labelSelector:
    matchLabels:
      compression: enable
  resourceQOS:
    memoryQOS:
      memoryCompression:
        compressionLevel: 1
        enable: true
Note:
compressionLevel specifies the compression level. Valid values range from 1 to 4, corresponding to the lz4, lzo-rle, lz4hc, and zstd algorithms, in order of increasing compression ratio and increasing performance overhead.
2. Create a workload matching the labelSelector in PodQOS, as shown in the following example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: memory-stress
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: memory-stress
  template:
    metadata:
      labels:
        app: memory-stress
        compression: enable
    spec:
      containers:
      - command:
        - bash
        - -c
        - "apt update && apt install -yq stress && stress --vm-keep --vm 2 --vm-hang 0"
        image: ccr.ccs.tencentyun.com/ccs-dev/ubuntu-base:20.04
        name: memory-stress
        resources:
          limits:
            cpu: 500m
            memory: 1Gi
          requests:
            cpu: 100m
            memory: 100M
      restartPolicy: Always
Note:
All containers in a pod must have a memory limit.
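The compressionLevel-to-algorithm mapping described in the note above can be sketched as a small shell helper. This is illustrative only; the kernel selects the actual algorithm, and the helper name is hypothetical:

```shell
# Map a PodQOS compressionLevel (1-4) to the zram algorithm it selects,
# per the note above: higher levels compress better but cost more CPU.
algo_for_level() {
  case "$1" in
    1) echo "lz4" ;;
    2) echo "lzo-rle" ;;
    3) echo "lz4hc" ;;
    4) echo "zstd" ;;
    *) echo "invalid level: $1" >&2; return 1 ;;
  esac
}

algo_for_level 1   # prints "lz4"
```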

Validation of Effectiveness

Verify the memory compression feature through pod annotation (gocrane.io/memory-compression), process memory usage, zram or swap usage, and cgroup memory usage, ensuring that memory compression has been correctly enabled for the pod.
# QosAgent will set an annotation for the pod with memory compression enabled.
kubectl get pods -l app=memory-stress -o jsonpath='{.items[0].metadata.annotations.gocrane\.io/memory-compression}'

# zramctl
NAME       ALGORITHM DISKSIZE DATA  COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 lzo-rle       3.6G 163M 913.9K  1.5M       2 [SWAP]

# free -h
              total        used        free      shared  buff/cache   available
Mem:          3.6Gi       1.4Gi       562Mi       5.0Mi       1.7Gi       1.9Gi
Swap:         3.6Gi       163Mi       3.4Gi

# Check memory.zram.{raw_in_bytes,usage_in_bytes} in the pod's memory cgroup (usually under /sys/fs/cgroup/memory/) to see the amount of memory compressed in the pod and its size after compression; the difference is the memory saved.
cat memory.zram.{raw_in_bytes,usage_in_bytes}
170659840
934001

# Calculate the difference to obtain the amount of memory saved. In this example, about 162MiB (170MB) of memory was saved.
cat memory.zram.{raw_in_bytes,usage_in_bytes} | awk 'NR==1{raw=$1} NR==2{compressed=$1} END{ print raw - compressed }'
169725839
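The same two cgroup values also give the compression ratio. A small awk sketch, using the byte counts from the example above:

```shell
# Sketch: compute the compression ratio and the saving in MiB from the
# two cgroup values above (memory.zram.raw_in_bytes and usage_in_bytes).
raw=170659840        # bytes of pod memory that were compressed
compressed=934001    # bytes those pages occupy after compression

awk -v r="$raw" -v c="$compressed" 'BEGIN {
  printf "ratio: %.1f:1\n", r / c
  printf "saved: %.1f MiB\n", (r - c) / 1048576
}'
```

For this example the output is a ratio of 182.7:1 and a saving of 161.9 MiB. The ratio is this high only because the stress workload's pages are trivially compressible; real workloads will compress far less.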


