| Purchase Step | Configuration Item | Description | Example |
| --- | --- | --- | --- |
| Software configuration | Region | The physical data center where the cluster is deployed. Note: Once the cluster is created, the region cannot be changed, so choose carefully. | Beijing, Shanghai, Guangzhou, Nanjing, Chengdu, and Silicon Valley |
| | Cluster type | EMR on CVM supports multiple cluster types, with Hadoop as the default. | Hadoop and StarRocks |
| | Product version | The bundled components and their versions vary across product versions. | EMR-V2.7.0 includes Hadoop 2.8.5 and Spark 3.2.1. |
| | Deployment components | Optional components that can be customized and combined based on your needs. | Hive-2.3.9 and Impala-3.4.1. |
| Region and hardware configuration | Billing mode | Billing mode for cluster deployment. | Pay-as-you-go |
| | Availability zone (AZ) and network configuration | AZ and cluster network settings. Note: Once the cluster is created, the AZ cannot be directly changed, so choose carefully. | Guangzhou Zone 7. |
| | Secure login | Network access control settings for nodes, with a security group firewall feature. | Create a security group. |
| | Node configuration | Select the appropriate model configuration for each node type based on business requirements. For more details, see Business Evaluation. | Enable high availability for node deployment. |
| Basic configuration | Associated project | Assigns the current cluster to a project group. Note: Once the cluster is created, the associated project cannot be modified. | |
| | Cluster name | The name of the cluster, which is customizable. | EMR-7sx2aqmu |
| | Login method | Custom password setup and key association. SSH keys are used only for quick access through the EMR-UI. | Password. |
| Confirm configuration | Configuration list | Confirm that the deployment information is correct. | Select the terms of service and click Buy Now. |
Log in to a cluster node, switch to the hadoop user, and submit the built-in SparkPi example to YARN:

[root@172 ~]# su hadoop
[hadoop@172 root]$ cd /usr/local/service/spark
[hadoop@172 spark]$ /usr/local/service/spark/bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode cluster \
  --proxy-user hadoop \
  --driver-memory 1g \
  --executor-memory 1g \
  --executor-cores 1 \
  /usr/local/service/spark/examples/jars/spark-examples*.jar \
  10
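For context, the SparkPi example estimates π by Monte Carlo sampling: it draws random points in the unit square and counts the fraction that fall inside the quarter circle. A minimal single-machine sketch of the same idea in plain Python (without Spark; the function name and sample count are illustrative, not part of the Spark example's API):

```python
import random

def estimate_pi(num_samples: int, seed: int = 42) -> float:
    """Estimate pi as 4 * (points inside the unit quarter circle / total points)."""
    rng = random.Random(seed)  # seeded for reproducibility
    inside = 0
    for _ in range(num_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / num_samples

print(estimate_pi(100_000))
```

The final argument `10` in the spark-submit command above is the number of partitions SparkPi distributes this sampling over; more samples (or partitions) tighten the estimate.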