Cluster DNS Troubleshooting

Last updated: 2024-12-13 14:48:39

Troubleshooting Approaches

1. Make sure that the cluster DNS runs normally

DNS resolution in containers goes through the cluster DNS (usually CoreDNS). First, make sure that the cluster DNS is running normally. You can find the cluster IP of the DNS service in the --cluster-dns startup parameter of kubelet:
$ ps -ef | grep kubelet
... /usr/bin/kubelet --cluster-dns=172.16.14.217 ...
Find the DNS Service:
$ kubectl get svc -n kube-system | grep 172.16.14.217
kube-dns ClusterIP 172.16.14.217 <none> 53/TCP,53/UDP 47d
Check the endpoints:
$ kubectl -n kube-system describe svc kube-dns | grep -i endpoints
Endpoints: 172.16.0.156:53,172.16.0.167:53
Endpoints: 172.16.0.156:53,172.16.0.167:53
Check whether the endpoint Pods are normal:
$ kubectl -n kube-system get pod -o wide | grep 172.16.0.156
kube-dns-898dbbfc6-hvwlr 3/3 Running 0 8d 172.16.0.156 10.0.0.3
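If the DNS Pod is running but resolution still fails, its logs may show errors. A minimal sketch, reusing the Pod name from the output above (container names vary by deployment: kube-dns typically runs kubedns, dnsmasq, and sidecar containers, while CoreDNS runs a single coredns container):
# List the containers in the DNS Pod
$ kubectl -n kube-system get pod kube-dns-898dbbfc6-hvwlr -o jsonpath='{.spec.containers[*].name}'
# Tail recent logs from one of them
$ kubectl -n kube-system logs kube-dns-898dbbfc6-hvwlr -c kubedns --tail=50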

2. Make sure that the Pod can communicate with the cluster DNS

Check whether the Pod can connect to the cluster DNS. You can run the telnet command in the Pod to test port 53 of the DNS:
# Cluster IP for connecting to the DNS Service
$ telnet 172.16.14.217 53
Note:
If there are no testing tools such as telnet in the container, you can use nsenter to enter netns for packet capturing and use telnet on the host for testing.
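A minimal sketch of that nsenter approach, assuming a Docker runtime and a hypothetical <container-id> placeholder (with containerd, crictl inspect can report the PID instead):
# Find the container's PID on the host
$ PID=$(docker inspect -f '{{.State.Pid}}' <container-id>)
# Run telnet from the host inside the container's network namespace
$ nsenter -t $PID -n telnet 172.16.14.217 53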
If the network is found to be disconnected, check the following network settings:
Check the security group settings of the node; the container IP range of the cluster must be allowed.
Check the firewall rules and inspect iptables.
Check whether kube-proxy is running normally; example checks follow this list. The cluster DNS IP is a cluster IP, which is forwarded through the iptables or IPVS rules generated by kube-proxy.
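The following is a sketch for confirming that kube-proxy has generated forwarding rules for the DNS cluster IP (substitute your own cluster IP; the ipvsadm check applies only to IPVS mode, and whether kube-proxy runs as a DaemonSet Pod or a systemd service depends on the cluster):
# Look for kube-proxy Pods (if it is deployed as a DaemonSet)
$ kubectl -n kube-system get pod -o wide | grep kube-proxy
# In iptables mode: look for NAT rules that reference the DNS cluster IP
$ iptables -t nat -S | grep 172.16.14.217
# In IPVS mode: look for the virtual server entry instead
$ ipvsadm -Ln | grep 172.16.14.217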

3. Capture packets

If the cluster DNS runs normally and the Pod can communicate with the cluster DNS, capture packets for further checks. If the problem is easy to reproduce, you can use nsenter to enter the container's netns and capture its packets:
# Write the packets to a file for offline analysis
$ tcpdump -i any port 53 -w dns.pcap
# Or print them directly
$ tcpdump -i any port 53 -nn -tttt
If the cause still cannot be identified, capture packets at multiple points along the request path and compare them, for example: the client Pod's container, the host's cbr0 bridge, the host's primary ENI (eth0), and then the same points in reverse on the node hosting the CoreDNS Pod (primary ENI, cbr0 bridge, container). Wait for the problem to recur and locate the point where the packets are lost. A sketch of such a multi-point capture follows.
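This assumes eth0 is the primary ENI on both nodes and <client-pod-ip> is a placeholder for the source Pod's IP:
# On the node hosting the client Pod
$ tcpdump -i eth0 port 53 -w client-node.pcap
# At the same time, on the node hosting the CoreDNS Pod
$ tcpdump -i eth0 port 53 -w coredns-node.pcap
# After reproducing the problem, check which hop last saw the query
$ tcpdump -r client-node.pcap -nn | grep <client-pod-ip>
$ tcpdump -r coredns-node.pcap -nn | grep <client-pod-ip>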

Issues and Causes

Latency of five seconds

If a DNS query often takes five seconds to return a result, the packets are usually lost because of kernel conntrack conflicts. The root cause is a race condition in the conntrack module: when netfilter performs NAT, some packets are discarded due to resource competition.
It can occur when multiple threads or processes concurrently send UDP packets with the same 5-tuple through the same socket.
Both glibc and musl (Alpine Linux's libc) use "parallel query", i.e., multiple query requests (typically A and AAAA) are sent concurrently, which tends to trigger the conflict and get requests discarded.
As IPVS also uses conntrack, switching kube-proxy to IPVS mode does not avoid this problem.
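To confirm the symptom, one hedged reproduction is to time repeated lookups from inside an affected Pod (assuming nslookup is available in the image) and watch for outliers of roughly five seconds:
# Run inside the Pod; ~5 s outliers point to the conntrack race
$ for i in $(seq 1 20); do time nslookup kubernetes.default.svc.cluster.local >/dev/null; done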

Workaround

Use a local DNS cache. The container's DNS requests are sent to a local DNS cache service (dnsmasq, nscd, etc.), so they need no DNAT and trigger no conntrack conflicts. In addition, the cluster DNS service will no longer be a performance bottleneck. You can use a local DNS cache in two ways (a sketch of the second follows this list):
Each container comes with a DNS cache service.
Each node runs a DNS cache service, and all containers use the DNS cache of the node as their nameserver.
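The following is a minimal sketch of the second approach, assuming a node-local DNS cache is already listening on the link-local address 169.254.20.10 (the convention used by NodeLocal DNSCache); the address and search domains are assumptions to adapt to your cluster:
$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: dns-cache-demo
spec:
  dnsPolicy: None            # bypass the default cluster DNS path
  dnsConfig:
    nameservers:
    - 169.254.20.10          # assumed node-local DNS cache address
    searches:
    - default.svc.cluster.local
    - svc.cluster.local
    - cluster.local
    options:
    - name: ndots
      value: "5"
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
EOF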

Timeout when resolving an external domain name

Possible reasons (a direct test against the upstream DNS follows this list):
The upstream DNS fails.
An ACL or firewall in front of the upstream DNS blocks the packets.
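Querying the upstream DNS directly from a node bypasses the cluster DNS path and helps isolate these causes; a sketch, with <upstream-dns-ip> and example.com as placeholders:
# A short timeout and a single try make failures obvious
$ dig @<upstream-dns-ip> example.com +time=2 +tries=1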

Timeout of all resolutions

If cluster Pods fail to resolve both Services and external domain names, there is generally a problem with the communication between the Pods and the cluster DNS. Possible reasons (a test to tell them apart follows this list):
The node firewall does not allow the cluster IP range; as a result, a Pod on a different node from the cluster DNS cannot reach it, and DNS requests are never received.
kube-proxy is abnormal.
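One way to separate the two causes is to query a CoreDNS Pod IP directly (which bypasses the cluster IP and therefore kube-proxy) and compare the result with a query to the cluster IP. Run this from inside an affected Pod, or via nsenter into its netns, reusing the IPs from the steps above:
# Succeeds against the Pod IP but not the cluster IP => suspect kube-proxy
$ dig @172.16.0.156 kubernetes.default.svc.cluster.local
$ dig @172.16.14.217 kubernetes.default.svc.cluster.local
# Fails against the Pod IP too (from a different node) => suspect the firewall / security group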
