Tencent Cloud

Elastic MapReduce


Scaling Group Configuration

Last updated: 2026-03-20 17:25:55
A scaling group combines scaling rules with scaling specifications. When a rule is triggered, the scaling group automatically scales out or in according to the rule's preset strategy. If no rules or specifications are configured, no scaling operation is triggered.
A cluster supports up to 10 scaling groups. Groups are independent of each other and can perform scaling operations simultaneously. Within a single group, scaling follows the rule of "first triggered, first executed; if triggered at the same time, executed in order of rule priority".
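The execution order described above can be sketched as follows. This is an illustrative Python model, not an EMR API: the `ScalingRule` class and its field names are hypothetical, and priority is assumed to be a number where 1 is highest.

```python
from dataclasses import dataclass

@dataclass
class ScalingRule:
    name: str
    priority: int        # hypothetical: lower number = higher priority (1 is highest)
    triggered_at: float  # epoch seconds at which the rule's condition fired

def execution_order(triggered_rules):
    """Order rules the way a scaling group executes them:
    earlier trigger time first; on a tie, higher priority first."""
    return sorted(triggered_rules, key=lambda r: (r.triggered_at, r.priority))

rules = [
    ScalingRule("scale-out-on-memory", priority=2, triggered_at=100.0),
    ScalingRule("scale-out-on-cpu",    priority=1, triggered_at=100.0),
    ScalingRule("scale-in-at-night",   priority=3, triggered_at=90.0),
]
for r in execution_order(rules):
    print(r.name)
# "scale-in-at-night" fired earlier, so it runs first; the two
# simultaneously triggered rules then run in priority order.
```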

Scaling Group Settings

Name: Scaling group names must be unique within the same cluster.
Maximum number of nodes: The upper limit on the number of nodes in the scaling group; no scale-out occurs once the limit is reached.
Minimum number of nodes: The lower limit on the number of nodes in the scaling group; no scale-in occurs once the limit is reached. Note: when edited later, this value cannot be less than the number of elastic nodes currently in the scaling group.
Payment Method:
- Pay-as-you-go billing: when the scale-out rule is triggered, all added nodes are pay-as-you-go.
- Spot instance priority: when the scale-out rule is triggered, spot instances are added first to supplement computing power; if spot instance capacity is insufficient, pay-as-you-go nodes make up the difference.
- The HOST resource type supports both pay-as-you-go and spot instance billing; MNode and POD resources support only pay-as-you-go billing.
- Because spot capacity is limited, spot instances may be repossessed. We recommend pay-as-you-go billing to guarantee cluster computing power and avoid failures when supplementing capacity.
- Minimum proportion of pay-as-you-go nodes: the minimum share of pay-as-you-go nodes in a single scale-out. For example, if 10 nodes are to be added and the minimum proportion is 20%, at least 2 pay-as-you-go nodes are added when the scale-out rule is triggered, and the remaining 8 are requested as spot instances; if fewer than 8 spot instances are available, pay-as-you-go nodes make up the difference.
Node Label: By default, scaled-out resources are placed in the Default label. If a label is set, scaled-out resources are placed in the specified label, and when the scale-in rule is triggered, resources with that label are scaled in. Takes effect only with capacity scheduling.
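The pay-as-you-go/spot split described under Payment Method can be sketched as follows. This is a minimal illustration of the arithmetic in the 10-node / 20% example above; the function names are hypothetical, not part of any EMR API.

```python
import math

def split_scale_out(total_nodes, min_paygo_ratio):
    """Split a scale-out request between pay-as-you-go and spot nodes.
    At least ceil(total * ratio) nodes are pay-as-you-go; the rest are
    requested as spot instances first."""
    paygo = math.ceil(total_nodes * min_paygo_ratio)
    spot = total_nodes - paygo
    return paygo, spot

def fulfill(total_nodes, min_paygo_ratio, spot_available):
    """If spot capacity falls short, pay-as-you-go nodes make up the difference."""
    paygo, spot_wanted = split_scale_out(total_nodes, min_paygo_ratio)
    spot = min(spot_wanted, spot_available)
    paygo += spot_wanted - spot   # shortfall covered by pay-as-you-go
    return paygo, spot

# The example from the table: 10 nodes, 20% minimum pay-as-you-go.
print(split_scale_out(10, 0.20))   # (2, 8)
print(fulfill(10, 0.20, 5))        # only 5 spot nodes available -> (5, 5)
```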

Scaling Specifications Management

Scaling specifications define the node specifications used for elastic scale-out. To keep cluster load scaling linearly, ensure that all scaling specifications have the same CPU and memory.
1. Click Add Specifications. On the "Add Task Specifications" page, select the resource type, model specifications, and other information to configure the specification.
2. You can add and delete nodes in the scaling specifications and adjust specification priority as needed. Priority runs from high to low (1 > 2 > 3 > 4 > 5).
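The priority-ordered fallback in step 2 can be sketched as follows. This is an illustrative Python model under the assumption that a lower priority number is tried first and the next specification is used when the current one has no available capacity; the function and field names (and the instance types in the example) are hypothetical.

```python
def choose_specification(specs, has_capacity):
    """Try scaling specifications in priority order (1 is highest);
    fall back to the next specification when capacity is unavailable."""
    for spec in sorted(specs, key=lambda s: s["priority"]):
        if has_capacity(spec["instance_type"]):
            return spec
    return None  # no specification can currently be fulfilled

specs = [
    {"priority": 2, "instance_type": "TYPE-B"},
    {"priority": 1, "instance_type": "TYPE-A"},
]

# Stub availability check: pretend TYPE-A is sold out.
chosen = choose_specification(specs, has_capacity=lambda t: t != "TYPE-A")
print(chosen)  # falls back to the priority-2 specification
```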

Scaling Rule Management

For more information on how to set scaling rules and execution principles, see Scaling Rule Management.
