Setting `replicas` to create multiple copies of an application improves its fault tolerance, but it does not by itself make the application highly available. This document describes best practices for deploying highly available applications; you can combine the following approaches as appropriate for your situation.

First check the value of `replicas`. If it is 1, a single point of failure is unavoidable. Even if it is greater than 1, a single point of failure remains when all replicas are scheduled onto the same node. To prevent this, use Pod anti-affinity:

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: k8s-app
          operator: In
          values:
          - kube-dns
      topologyKey: kubernetes.io/hostname
```
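If a hard requirement is too restrictive, a softer sketch uses `preferredDuringSchedulingIgnoredDuringExecution` instead (reusing the same `kube-dns` labels from the example above); note that `weight` is only valid in this preferred form:

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: k8s-app
            operator: In
            values:
            - kube-dns
        topologyKey: kubernetes.io/hostname
```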
`preferredDuringSchedulingIgnoredDuringExecution` tells the scheduler to satisfy the anti-affinity condition on a best-effort basis: when no node satisfies it, the Pod can still be scheduled somewhere. The `topologyKey` `kubernetes.io/hostname` prevents Pods from landing on the same node. If you have stronger requirements, such as avoiding nodes in the same availability zone to achieve multi-zone active-active deployment, use `failure-domain.beta.kubernetes.io/zone`. In most cases all nodes of a cluster are in the same region; if nodes span regions, latency is high even over dedicated lines. If you do need to avoid scheduling to nodes in the same region, use `failure-domain.beta.kubernetes.io/region`.

You can also use `topologySpreadConstraints` to spread Pods and improve service availability. With `maxSkew: 1` and `whenUnsatisfiable: DoNotSchedule`, Pods must be spread evenly across the topology domains (the Pod counts per domain may differ by at most 1), and scheduling fails otherwise:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: nginx
    qcloud-app: nginx
  name: nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: nginx
      qcloud-app: nginx
  template:
    metadata:
      labels:
        k8s-app: nginx
        qcloud-app: nginx
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        whenUnsatisfiable: DoNotSchedule
        topologyKey: topology.kubernetes.io/region
        labelSelector:
          matchLabels:
            k8s-app: nginx
      containers:
      - image: nginx
        name: nginx
        resources:
          limits:
            cpu: 500m
            memory: 1Gi
          requests:
            cpu: 250m
            memory: 256Mi
      dnsPolicy: ClusterFirst
```
If you prefer spreading but do not want scheduling to fail when the constraint cannot be met, use `whenUnsatisfiable: ScheduleAnyway`:

```yaml
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    whenUnsatisfiable: ScheduleAnyway
    topologyKey: topology.kubernetes.io/region
    labelSelector:
      matchLabels:
        k8s-app: nginx
```
To spread Pods across availability zones instead, set `topologyKey` to `topology.kubernetes.io/zone`:

```yaml
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        k8s-app: nginx
```
Multiple constraints can also be combined, for example spreading Pods across zones and across nodes at the same time:

```yaml
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    whenUnsatisfiable: ScheduleAnyway
    topologyKey: topology.kubernetes.io/zone
    labelSelector:
      matchLabels:
        k8s-app: nginx
  - maxSkew: 1
    whenUnsatisfiable: ScheduleAnyway
    topologyKey: kubernetes.io/hostname
    labelSelector:
      matchLabels:
        k8s-app: nginx
```


The following example combines required node affinity, which restricts Pods to nodes carrying a placement-set label, with preferred Pod anti-affinity, which spreads replicas across those nodes:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: "placement-set-uniq"
          operator: In
          values:
          - "rack1"
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx
        topologyKey: kubernetes.io/hostname
```
A PodDisruptionBudget (PDB) limits how many Pods may be taken down by voluntary disruptions such as node drains. This example requires at least 2 zookeeper Pods to remain available:

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: zookeeper
```
Alternatively, cap the number of Pods that may be unavailable at any time:

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: zookeeper
```
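Both `minAvailable` and `maxUnavailable` also accept a percentage of the desired replica count rather than an absolute number. A minimal sketch (reusing the `zk-pdb` name and `zookeeper` label from the examples above; the 10% figure is an arbitrary illustration):

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  maxUnavailable: "10%"   # illustrative value; percentages apply to the desired replica count
  selector:
    matchLabels:
      app: zookeeper
```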
A Service forwards traffic to a list of Pod `IP:Port` endpoints. kube-proxy updates each node's forwarding rules (iptables/ipvs) from the Service's Endpoint list, but these updates are not instantaneous. If a newly started Pod is added to the Endpoint list before it is fully ready, kube-proxy may already have synced the forwarding rules, and a request forwarded to that Pod will be refused because the Pod cannot yet handle it. To avoid this, configure a `readinessProbe` so that a Pod is added to the Endpoint `IP:Port` list only once it can actually serve requests; kube-proxy then updates the node's forwarding rules, and even a request forwarded to the new Pod immediately afterward is handled normally. Similarly, the `preStop` hook below sleeps before shutdown, giving kube-proxy time to remove a terminating Pod from the forwarding rules and thus avoiding dropped connections during rolling updates:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      component: nginx
  template:
    metadata:
      labels:
        component: nginx
    spec:
      containers:
      - name: nginx
        image: "nginx"
        ports:
        - name: http
          hostPort: 80
          containerPort: 80
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /healthz
            port: 80
            httpHeaders:
            - name: X-Custom-Header
              value: Awesome
          initialDelaySeconds: 15
          timeoutSeconds: 1
        lifecycle:
          preStop:
            exec:
              command: ["/bin/bash", "-c", "sleep 30"]
```
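One caveat: the `preStop` sleep counts against the Pod's termination grace period, which defaults to 30 seconds, the same as the sleep above, leaving no time for the application's own shutdown. A sketch that raises it (the value 45 is an assumption; tune it to your workload's shutdown time):

```yaml
spec:
  terminationGracePeriodSeconds: 45   # assumption: leaves ~15s for shutdown after the 30s preStop sleep
```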