# Extract the GooseFS client from the GooseFS Docker image
$ id=$(docker create goosefs/goosefs:v1.2.0)
$ docker cp $id:/opt/alluxio/client/goosefs-1.2.0-client.jar - > goosefs-1.2.0-client.jar
$ docker rm -v $id 1>/dev/null
# Copy the client JAR into the Spark jars directory
$ cp goosefs-1.2.0-client.jar /path/to/spark-2.4.8-bin-hadoop2.7/jars
# Rebuild the Spark Docker image
$ docker build -t spark-goosefs:2.4.8 -f kubernetes/dockerfiles/spark/Dockerfile .
# View the built Docker image
$ docker image ls
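Before rebuilding the image, it can help to confirm that the client JAR actually landed in the Spark jars directory. A minimal sketch, assuming a SPARK_HOME variable pointing at your Spark distribution (the default path below is a placeholder):

```shell
#!/bin/sh
# Sanity check: confirm the GooseFS client JAR is in the Spark jars directory
# before building the image. SPARK_HOME is an assumed placeholder path.
SPARK_HOME="${SPARK_HOME:-/path/to/spark-2.4.8-bin-hadoop2.7}"
JAR="$SPARK_HOME/jars/goosefs-1.2.0-client.jar"
if [ -f "$JAR" ]; then
    echo "client jar present: $JAR"
else
    echo "client jar missing: $JAR"
fi
```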

# Use a sub-account key or temporary key to complete the configuration for better security. When authorizing a sub-account, grant only the operations and resources it needs.
$ goosefs ns create spark-cosntest cosn://goosefs-test-125000000/ --secret fs.cosn.userinfo.secretId=********************************** --secret fs.cosn.userinfo.secretKey=********************************** --attribute fs.cosn.bucket.region=ap-xxxx
# Add a test data file
$ goosefs fs copyFromLocal LICENSE /spark-cosntest
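Since the secret key appears on the command line (and in shell history), one option is to keep the credentials in environment variables and have a small wrapper assemble the command. A sketch, assuming hypothetical COS_SECRET_ID / COS_SECRET_KEY variable names; the command is echoed rather than executed, since this sketch has no GooseFS cluster to talk to:

```shell
#!/bin/sh
# Sketch: read sub-account keys from environment variables instead of pasting
# them inline. COS_SECRET_ID / COS_SECRET_KEY are assumed variable names; the
# PLACEHOLDER values stand in for real credentials.
SECRET_ID="${COS_SECRET_ID:-PLACEHOLDER_ID}"
SECRET_KEY="${COS_SECRET_KEY:-PLACEHOLDER_KEY}"
# Echo the command instead of running it, so the sketch works without a cluster.
echo goosefs ns create spark-cosntest cosn://goosefs-test-125000000/ \
    --secret fs.cosn.userinfo.secretId="$SECRET_ID" \
    --secret fs.cosn.userinfo.secretKey="$SECRET_KEY" \
    --attribute fs.cosn.bucket.region=ap-xxxx
```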
$ kubectl create serviceaccount spark
$ kubectl create clusterrolebinding spark-role --clusterrole=edit \
    --serviceaccount=default:spark --namespace=default
$ bin/spark-submit \
    --master k8s://http://127.0.0.1:8001 \
    --deploy-mode cluster \
    --name spark-goosefs \
    --class org.apache.spark.examples.JavaWordCount \
    --conf spark.executor.instances=2 \
    --conf spark.kubernetes.container.image=spark-goosefs:2.4.8 \
    --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
    --conf spark.hadoop.fs.gfs.impl=com.qcloud.cos.goosefs.hadoop.GooseFileSystem \
    --conf spark.driver.extraClassPath=local:///opt/spark/jars/goosefs-1.2.0-client.jar \
    local:///opt/spark/examples/jars/spark-examples_2.11-2.4.8.jar \
    gfs://172.16.64.32:9200/spark-cosntest/LICENSE
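The API server address and image tag in this command are environment-specific, so a small wrapper that assembles the flags from overridable variables can make the submission reusable across clusters. A sketch with assumed defaults mirroring the values used in this guide (the command is echoed rather than submitted):

```shell
#!/bin/sh
# Sketch: build the spark-submit invocation from overridable variables.
# K8S_MASTER and IMAGE defaults mirror the values used in this guide;
# adjust them per cluster.
K8S_MASTER="${K8S_MASTER:-k8s://http://127.0.0.1:8001}"
IMAGE="${IMAGE:-spark-goosefs:2.4.8}"
SUBMIT_ARGS="--master $K8S_MASTER --deploy-mode cluster --name spark-goosefs --conf spark.kubernetes.container.image=$IMAGE"
# Echo instead of executing, since no cluster is reachable from this sketch.
echo spark-submit $SUBMIT_ARGS
```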

Run kubectl logs spark-goosefs-1646905692480-driver to view the job execution result.
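JavaWordCount prints one `word: count` pair per line to the driver's stdout, so the result can be filtered out of the otherwise verbose driver log. A sketch, where a here-doc with made-up sample counts stands in for the live `kubectl logs` stream:

```shell
#!/bin/sh
# Sketch: filter word-count output lines out of the driver log. In a real run
# the input would come from `kubectl logs spark-goosefs-1646905692480-driver`;
# here a here-doc simulates the log stream with made-up sample counts.
RESULT=$(cat <<'EOF' | grep -E '^[A-Za-z]+: [0-9]+$'
22/03/10 09:08:12 INFO DAGScheduler: Job 0 finished
License: 12
Apache: 7
EOF
)
echo "$RESULT"
```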