Tencent Cloud

Elastic MapReduce


Overview

Last updated: 2026-03-20 17:22:15

Feature Introduction

When the cluster load changes with business needs, you can configure scaling rules in Elastic MapReduce (EMR) to automatically add or remove the compute resources of task nodes. This enables a quick response to changes in computing demand while saving costs. Auto scaling supports two scaling policies: load-based scaling and time-based scaling. Load-based scaling applies to multiple cluster types, such as Hadoop, StarRocks, and RSS clusters.

Must-Knows

1. Auto scaling is disabled by default. Three scaling categories are available: custom scaling, scaling group scaling, and managed scaling; only one can be selected at a time.
2. Custom scaling supports two types of policies: load-based scaling and time-based scaling. Choose the policy that matches your business needs, or combine both in mixed scaling rules. Rules are executed in the order they are triggered; if multiple rules are triggered at the same time, they are executed in priority order.
3. Scaling group scaling allows scaling policies in multiple scaling groups to execute simultaneously without interfering with one another. Within a single scaling group, rules are executed in the order they are triggered; if multiple rules are triggered at the same time, they are executed in priority order. Scaling groups can also scale based on node labels.
Note:
Node labels take effect only under capacity scheduling. It is recommended to map node labels and queues one to one.
4. Both custom scaling and scaling group scaling support parallel scaling. When this feature is enabled, scaling rules within a single scaling group are triggered in parallel, and rule priority and the cooldown period no longer take effect. Parallel scaling is currently available via an allowlist; to use it, submit a ticket to request activation.
5. Managed scaling supports only the HOST resource type. Custom scaling supports both HOST and POD resource types, with only one selectable at a time. If you switch the resource type, the resource specifications and instance deployment methods set for the original resource type are retained, but they become invalid and will not be triggered or executed. Nodes already scaled out are also retained and are not scaled in unless a scale-in rule is triggered. POD resources are currently available via an allowlist; to use them, submit a ticket.
6. Instance deployment policies support two modes: pay-as-you-go and spot instances preferred. POD resources support only pay-as-you-go deployment.
7. When the resource type is HOST, both custom scaling and scaling group scaling support cross-availability-zone (AZ) scale-out. An elastic scale-out can specify multiple AZs to ensure sufficient scale-out resources.
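
The rule-ordering behavior described above (trigger order, priority as a tie-breaker, and a cooldown that suppresses re-firing) can be sketched as a small simulation. This is an illustrative assumption-labeled model only: the `ScalingRule` class, `pick_rule` function, and the "lower number = higher priority" convention are invented for this sketch and are not part of the EMR API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Callable, List, Optional

# Illustrative sketch only: names and structure are assumptions, not the EMR API.
# It models the documented behavior: when several rules are triggered at the
# same evaluation, priority decides; a cooldown suppresses re-firing.

@dataclass
class ScalingRule:
    name: str
    priority: int                           # assumption: lower number = higher priority
    cooldown: timedelta
    triggered: Callable[[datetime], bool]   # load- or time-based condition
    last_fired: Optional[datetime] = None

def pick_rule(rules: List[ScalingRule], now: datetime) -> Optional[ScalingRule]:
    """Return the single rule to execute at this evaluation tick, or None."""
    candidates = [
        r for r in rules
        if r.triggered(now)
        and (r.last_fired is None or now - r.last_fired >= r.cooldown)
    ]
    if not candidates:
        return None
    # Simultaneously triggered rules are resolved by priority order.
    winner = min(candidates, key=lambda r: r.priority)
    winner.last_fired = now
    return winner
```

For example, if a time-based rule for the morning peak and a load-based rule both trigger in the same tick, the higher-priority rule wins, and it cannot fire again until its cooldown elapses. Note that with parallel scaling enabled, this sequential model no longer applies, since priority and cooldown do not take effect.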

Scenarios

1. The business computation load curve shows clear peaks and troughs.
2. For businesses with regular, periodic changes, scale-out can be scheduled for fixed time periods to supplement cluster computing power, meeting business needs while saving costs.
3. To ensure important jobs complete on time, nodes need to be scaled out based on specific load metrics during a given time period.
4. When overlapping jobs run concurrently, resources can be scaled out according to resource pool division to meet multiple business requirements.
5. Auto scaling offers a diverse range of elastic combinations for your selection; more scenarios await your exploration. For rule policies in specific scenarios, see the Custom Scaling Practical Tutorial.
