
Using GooseFSx on TKE Containers
Last updated: 2025-07-17 17:16:40

Overview

Kubernetes provides the PersistentVolume (PV) and PersistentVolumeClaim (PVC) abstractions for mounting and using storage. For details, refer to the Kubernetes official documentation.
PV (PersistentVolume): a PV encapsulates storage resources and describes a persistent storage volume in a container cluster. PVs are cluster-level resources.
PVC (PersistentVolumeClaim): a PVC requests storage resources and consumes PV resources in the container cluster. If no matching PV exists in the cluster, a PV and its underlying storage can be created dynamically. By associating a PVC with a Pod, the Pod can use the storage resources.
StorageClass describes the class of PVs and PVCs in a container cluster. PVs can be created automatically based on a StorageClass, which reduces the work of creating and maintaining PVs. A StorageClass must be specified when PVCs/PVs are created dynamically.
Data Accelerator GooseFSx supports two persistent volume (PV) types: Local PV and CSI PV.
Local PV (Local Persistent Volume): Kubernetes directly uses the GooseFSx directory mounted on the host machine to persistently store container data.
CSI PV: Kubernetes uses the CSI (Container Storage Interface) protocol to dynamically mount GooseFSx for persistent storage of container data.
Because the GooseFSx shared directory is already mounted on the container cluster's host machines, Local PV is more direct and more efficient. Local PV is recommended.

Prerequisites

A container cluster has been created, such as a TKE (Tencent Kubernetes Engine) cluster or a self-built Kubernetes cluster.
The container cluster and the GooseFSx instance are in the same VPC and the same subnet.
The operating system of the container cluster's host machines is compatible with GooseFSx. See Client Specifications and Limits.
The container cluster's host machines have mounted the GooseFSx shared directory (a quick verification sketch follows this list). See Managing GooseFSx Instances.
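A minimal sketch for verifying the mount on each host, assuming the example mount directory /goosefsx/x-c60-ow1j60r9-proxy used later in this document (substitute your own path):
mount | grep goosefsx                  # the GooseFSx shared directory should appear in the mount list
df -h /goosefsx/x-c60-ow1j60r9-proxy   # confirm the mount is online and shows the purchased capacity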

Use Limits

1. GooseFSx does not currently support TKE super nodes. Use a TKE node pool to achieve dynamic scaling.
2. GooseFSx does not currently support dynamically creating PVs based on a StorageClass.

Local PV Operation Steps

1. Sample yaml File for Defining a PV Persistent Volume local_goosefsx_PV.yaml

Note:
Replace the local path with the mount directory of GooseFSx on the host machine.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: csi-goosefsx-local-pv
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    # Replace the path here with the host mounting path of GooseFSx, then delete this reminder
    path: /goosefsx/x-c60-ow1j60r9-proxy
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values:
                - amd64
Key parameters are described as follows:
name: csi-goosefsx-local-pv
  Defines the persistent volume name. Modify it to match your environment.
accessModes: - ReadWriteMany
  Defines the access mode. ReadWriteMany means the volume can be mounted read-write by multiple nodes.
storage: 10Gi
  Defines the storage capacity; 10Gi indicates 10 GiB. This parameter does not limit the capacity provided by the file system. The actual storage capacity is the purchased GooseFSx capacity and scales dynamically as GooseFSx is expanded. For example, if the purchased GooseFSx capacity is 4.5 TiB, the actual capacity is 4.5 TiB (not 10 GiB); after expanding GooseFSx to 9 TiB, it becomes 9 TiB.
volumeMode: Filesystem
  Defines the persistent volume mode as a file system.
persistentVolumeReclaimPolicy: Delete
  Defines the reclaim policy as Delete.
storageClassName: local-storage
  Assigns the persistent volume to the class "local-storage". The persistent volume claim must belong to the same class, and the name must match the "local-storage" class defined in the StorageClass file.
local: path: /goosefsx/x-c60-ow1j60r9-proxy
  Defines the container's storage as the host directory /goosefsx/x-c60-ow1j60r9-proxy, which is the GooseFSx mount directory on the host machine. Modify it to match your environment.
nodeAffinity
  Defines node affinity.

2. Sample yaml File for Defining a PVC Persistent Volume Claim local_goosefsx_PVC.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-goosefsx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-storage
Key parameters are described as follows:
name: local-goosefsx-pvc
  Defines the persistent volume claim name. Modify it to match your environment.
accessModes: - ReadWriteMany
  Defines the access mode. ReadWriteMany means the volume can be mounted read-write by multiple nodes.
resources: requests: storage: 10Gi
  Defines the requested storage capacity; 10Gi indicates 10 GiB. This parameter does not limit the capacity provided by the file system. The actual storage capacity is the purchased GooseFSx capacity and scales dynamically as GooseFSx is expanded. For example, if the purchased GooseFSx capacity is 4.5 TiB, the actual capacity is 4.5 TiB (not 10 GiB); after expanding GooseFSx to 9 TiB, it becomes 9 TiB.
storageClassName: local-storage
  Assigns the persistent volume claim to the class "local-storage". The persistent volume must belong to the same class, and the name must match the "local-storage" class defined in the StorageClass file.

3. Sample yaml File for Defining StorageClass Storage Class local_goosefsx_StorageClass.yaml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Key parameters are described as follows:
name: local-storage
  Defines a storage class named "local-storage", which the PV and PVC yaml files reference.
provisioner: kubernetes.io/no-provisioner
  Defines the provisioner for the persistent volume. GooseFSx PVs are simple enough that auto-creating them through a StorageClass provisioner is not required, so no provisioner is used.
volumeBindingMode: WaitForFirstConsumer
  Defines the volume binding mode. This mode delays PersistentVolume binding and provisioning until a Pod that uses the PersistentVolumeClaim is created.

4. Running Commands to Create the StorageClass, PV, and PVC

Run the following command to create the StorageClass:
kubectl apply -f local_goosefsx_StorageClass.yaml
Run the following command to create the PV:
kubectl apply -f local_goosefsx_PV.yaml
Run the following command to create the PVC:
kubectl apply -f local_goosefsx_PVC.yaml
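To confirm that the objects were created, you can run the following commands (a minimal check; note that with volumeBindingMode: WaitForFirstConsumer the PVC stays in the Pending state until the Pod in step 5 is created):
kubectl get sc local-storage
kubectl get pv csi-goosefsx-local-pv
kubectl get pvc local-goosefsx-pvc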




5. Deploying the Pod and Mounting the PVC

A sample yaml file, local_goosefsx_Pod.yaml, for a Pod that mounts this PVC is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: local-goosefsx-dp
  name: local-goosefsx-dp
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: local-goosefsx-pod
  template:
    metadata:
      labels:
        k8s-app: local-goosefsx-pod
    spec:
      containers:
        - image: nginx
          name: local-goosefsx-pod
          volumeMounts:
            - mountPath: /local-goosefsx
              name: local-goosefsx-pv
      volumes:
        - name: local-goosefsx-pv
          persistentVolumeClaim:
            claimName: local-goosefsx-pvc
Deploy the Pod:
kubectl apply -f local_goosefsx_Pod.yaml
Check whether the Pod is in the Ready state:
kubectl get pod
Log in to the Pod and check whether the mount point is correct and online (replace the Pod name below with your actual Pod name):
kubectl exec -ti local-goosefsx-dp-7fb9b9f877-fcttx -- /bin/sh
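Inside the Pod, a minimal check of the mount (assuming standard shell utilities in the image; /local-goosefsx is the mountPath defined in the Deployment above):
df -h /local-goosefsx                    # the GooseFSx-backed volume should be listed
echo ok > /local-goosefsx/testfile       # verify that the volume is writable
cat /local-goosefsx/testfile && rm /local-goosefsx/testfile   # verify reads, then clean up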

CSI PV Operation Steps

PV and PVC are created statically; no StorageClass is required.
In addition, three yaml files need to be defined for CSI. The CSI driver code is already built into the TKE image, so no extra development is needed.

1. Defining a PV yaml File

Sample YAML file for defining PV pv.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: csi-goosefsx-pv
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  csi:
    driver: com.tencent.cloud.csi.goosefsx
    volumeHandle: csi-goosefsx-pv
  storageClassName: ""  # an empty class keeps the default StorageClass from claiming this statically created PV

2. Defining a PVC yaml File

Sample YAML file for defining PVC pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-goosefsx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  volumeName: csi-goosefsx-pv  # binds this claim statically to the PV defined in pv.yaml
  storageClassName: ""


3. Defining a yaml File for the CSI Driver

Sample YAML file for defining CSI driver csi-driver.yaml:
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: com.tencent.cloud.csi.goosefsx
spec:
  attachRequired: false
  podInfoOnMount: false
  fsGroupPolicy: File


4. Defining a CSI node yaml File

Sample YAML file for defining CSI node csi-node.yaml:
Note:
Replace the fileSystemId with the file system ID for host mounting.
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-goosefsx-node
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: csi-goosefsx-node
  template:
    metadata:
      labels:
        app: csi-goosefsx-node
    spec:
      serviceAccount: csi-goosefsx-node
      priorityClassName: system-node-critical
      hostNetwork: true
      hostPID: true
      containers:
        - name: driver-registrar
          image: ccr.ccs.tencentyun.com/tkeimages/csi-node-driver-registrar:v2.0.1
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "rm -rf /registration/com.tencent.cloud.csi.goosefsx /registration/com.tencent.cloud.csi.goosefsx-reg.sock"]
          args:
            - "--v=5"
            - "--csi-address=$(ADDRESS)"
            - "--kubelet-registration-path=/var/lib/kubelet/plugins/com.tencent.cloud.csi.goosefsx/csi.sock"
          env:
            - name: ADDRESS
              value: /plugin/csi.sock
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: plugin-dir
              mountPath: /plugin
            - name: registration-dir
              mountPath: /registration
        - name: goosefsx
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          image: ccr.ccs.tencentyun.com/qcloud_goosefsx/goosefsx-csi:v1.0.1
          args:
            - "--v=5"
            - "--logtostderr=true"
            - "--nodeID=$(NODE_ID)"
            - "--endpoint=$(CSI_ENDPOINT)"
            # Replace the fileSystemId here with the host mounting file system ID, then delete this reminder
            - "--filesystemId=x-c60-s1bz66l4"
          env:
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CSI_ENDPOINT
              value: unix://plugin/csi.sock
          volumeMounts:
            - name: plugin-dir
              mountPath: /plugin
            - name: goosefsx-mount-dir
              mountPath: /goosefsx
              mountPropagation: "Bidirectional"
            - name: pods-mount-dir
              mountPath: /var/lib/kubelet/pods
              mountPropagation: "Bidirectional"
      volumes:
        - name: plugin-dir
          hostPath:
            path: /var/lib/kubelet/plugins/com.tencent.cloud.csi.goosefsx
            type: DirectoryOrCreate
        - name: registration-dir
          hostPath:
            path: /var/lib/kubelet/plugins_registry
            type: Directory
        - name: pods-mount-dir
          hostPath:
            path: /var/lib/kubelet/pods
            type: Directory
        - name: goosefsx-mount-dir
          hostPath:
            path: /goosefsx
            type: Directory


5. Defining a yaml File for CSI RBAC

Sample YAML file for defining CSI RBAC csi-rbac.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-goosefsx-node
  namespace: kube-system

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-goosefsx-node
rules:
  - apiGroups: [""]
    resources: ["persistentvolumeclaims/status"]
    verbs: ["patch", "update"]
  - apiGroups: [""]
    resources: ["configmaps", "events", "persistentvolumes", "persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: [""]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-goosefsx-node
subjects:
  - kind: ServiceAccount
    name: csi-goosefsx-node
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: csi-goosefsx-node
  apiGroup: rbac.authorization.k8s.io


6. Running Commands to Create CSI, PV, and PVC

Run the following commands to configure the RBAC, driver, and node objects:
kubectl apply -f csi-rbac.yaml
kubectl apply -f csi-driver.yaml
kubectl apply -f csi-node.yaml
Run the following command to check whether the DaemonSet is running normally:
kubectl get ds -n kube-system
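You can also filter the CSI node Pods by the app label defined in csi-node.yaml and confirm they are all Running (a minimal check):
kubectl get pods -n kube-system -l app=csi-goosefsx-node -o wide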
Run the following commands to create the PV and PVC:
kubectl apply -f pv.yaml
kubectl apply -f pvc.yaml
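Because the PVC references the PV by volumeName, both should report the Bound status shortly after creation; a quick check:
kubectl get pv csi-goosefsx-pv
kubectl get pvc csi-goosefsx-pvc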

7. Deploying the Pod and Mounting the PVC

Mount the PVC to the Pod with the pod.yaml file as follows:
Note:
Replace the claimName with the actual PVC name, that is, the name defined in the PVC yaml file (in the sample pvc.yaml, name: csi-goosefsx-pvc).
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: csi-goosefsx-pod
  name: csi-goosefsx-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: csi-goosefsx-pod
  template:
    metadata:
      labels:
        k8s-app: csi-goosefsx-pod
    spec:
      containers:
        - image: nginx
          name: csi-goosefsx-pod
          volumeMounts:
            - mountPath: /csi-goosefsx
              name: csi-goosefsx
      volumes:
        - name: csi-goosefsx
          persistentVolumeClaim:
            claimName: csi-goosefsx-pvc

Deploy the Pod:
kubectl apply -f pod.yaml
Check whether the Pod is in the Ready state:
kubectl get pod
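As with the Local PV steps, you can log in to the Pod and verify the mount (a minimal check; replace <pod-name> with the actual Pod name from kubectl get pod; /csi-goosefsx is the mountPath defined above):
kubectl exec -ti <pod-name> -- df -h /csi-goosefsx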

