| Parameter Settings | Description |
| --- | --- |
| Min Node Count | The minimum number of AS task nodes retained in the cluster when the automatic scale-in policy is triggered. |
| Max Node Count | The maximum number of AS task nodes retained in the cluster when the automatic scale-out policy is triggered. The cumulative number of nodes scaled out across single or multiple specifications cannot exceed this value. |
| Release All | Releases all nodes scaled out by auto-scaling with one click. Nodes not created by auto-scaling are not affected. |
| Release Spot Instances | Releases only the spot instance nodes scaled out by auto-scaling with one click. Non-spot instance nodes are not affected. |
| Release Pay-As-You-Go Instances | Releases the pay-as-you-go instance nodes scaled out by auto-scaling with one click. Pay-as-you-go nodes not created by auto-scaling are not affected. |
| Allow Graceful Scale-in | Disabled by default, in which case no scale-in rule applies the graceful scale-in policy. When this cluster-level switch is enabled, graceful scale-in takes effect for any individual rule that also has it enabled. |
| Resource Type | The HOST resource type supports both pay-as-you-go and spot instance billing. The POD resource type supports only pay-as-you-go billing and can only be used to deploy the NodeManager role of YARN. |
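The constraints above can be checked before submitting a configuration. The following is a minimal sketch; the function and field names (`validate_scaling_settings`, `min_node_count`, `resource_type`, etc.) are illustrative assumptions, not the actual EMR API.

```python
# Hypothetical sketch: validate a cluster-level auto-scaling configuration
# against the constraints described in the table above.

def validate_scaling_settings(settings: dict) -> list[str]:
    """Return a list of validation errors (an empty list means valid)."""
    errors = []
    if settings["min_node_count"] < 0:
        errors.append("min_node_count must be >= 0")
    # The cumulative scaled-out node count is capped by the max node count,
    # so the max must be at least the min.
    if settings["max_node_count"] < settings["min_node_count"]:
        errors.append("max_node_count must be >= min_node_count")
    # The POD resource type only supports pay-as-you-go billing.
    if settings["resource_type"] == "POD" and settings["billing_type"] != "PAY_AS_YOU_GO":
        errors.append("POD resource type supports only PAY_AS_YOU_GO billing")
    return errors

settings = {
    "min_node_count": 2,
    "max_node_count": 10,
    "resource_type": "HOST",   # HOST or POD
    "billing_type": "SPOT",    # PAY_AS_YOU_GO or SPOT (HOST only)
}
print(validate_scaling_settings(settings))  # → []
```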
| Configuration Item | Description |
| --- | --- |
| Rule Type | Scale Out / Scale In |
| Policy Type | By load |
| Rule Name | The name of the scaling rule. Scaling rule names within the same cluster must be unique (across both scale-out and scale-in rules). |
| Validity | The time range during which the load-based scaling rule can be triggered. Unlimited is selected by default; custom time periods are also supported. |
| Load Type | Supports YARN or Trino load metrics. Trino load-based scaling is supported only for clusters deployed with the Trino component in EMR-V2.7.0 and EMR-V3.40 or later versions. |
| Statistical Rule | Sets one or more threshold-trigger rules based on the selected cluster load metrics. Up to 5 statistical rules can be set, and rules can be aggregated by subqueue.<br>Rule: Specifies the queue and load metric, and sets the condition for triggering the threshold.<br>Statistical Period: The rule counts one trigger when the selected load metric, aggregated over a statistical period by the selected dimension (average, maximum, or minimum), reaches the threshold. Three statistical periods are currently supported: 300, 600, and 900 seconds.<br>Repeat Count: The number of times the aggregated load metric must reach the threshold. When the repeat count is reached, the cluster AS action is triggered. |
| Scale-out / Scale-in Mode | Three modes are available: Node, Memory, and Core; their values must be non-zero integers. When Core or Memory is selected, the number of nodes to add during scale-out is calculated to maximize computing power, and the minimum number of nodes to release during scale-in is calculated to keep the business running normally. Nodes are released in reverse chronological order (newest first), and at least 1 node is released. |
| Scale-out Service | By default, the scaled-out nodes inherit the cluster-level component configuration and belong to the default configuration group for that node type. To adjust the component configuration of scaled-out nodes, specify custom configuration settings. |
| Node Label | By default, scaled-out resources without a label are placed in the Default Label. After a label is set, scaled-out resources are placed in the specified label. |
| Resource Supplement Retry | During peak hours, a scale-out may fail to reach the target node count because resources are insufficient. When this policy is enabled, the system automatically retries applying for resources (provided the configured specifications have sufficient stock) until the target count is reached or approached. Consider enabling this configuration if insufficient resources often cause auto-scaling to fall short of expectations. Note that a triggered retry may extend the auto-scaling time, so evaluate the impact of this policy on your business. |
| Cooldown Period | The interval (0 to 43,200 seconds) that must elapse after the current rule executes successfully before the next auto-scaling action can be initiated. |
| Graceful Scale-in | When graceful scale-in is enabled and a scale-in action is triggered while a node is executing tasks, the node is not released immediately; it waits for the tasks to complete within a custom time period before scaling in. If the tasks are not finished by the end of that period, the scale-in proceeds anyway. |
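The Statistical Rule row above combines a statistical period, an aggregation dimension, and a repeat count. The sketch below illustrates one plausible evaluation of that combination; the function name, the assumption that threshold breaches must be consecutive, and the sample data are all illustrative, not the actual EMR implementation.

```python
# Illustrative sketch of how a load-based rule's statistical period,
# aggregation dimension, and repeat count might be evaluated together.

from statistics import mean

def rule_triggered(samples_per_period, threshold, aggregation="avg", repeat_count=3):
    """samples_per_period: one inner list of metric samples per statistical
    period (e.g. one period = 300 s of a YARN queue metric).
    Fires only when the aggregated value reaches the threshold in
    repeat_count consecutive periods (consecutiveness is an assumption)."""
    agg = {"avg": mean, "max": max, "min": min}[aggregation]
    consecutive = 0
    for period in samples_per_period:
        if agg(period) >= threshold:
            consecutive += 1
            if consecutive >= repeat_count:
                return True   # trigger the cluster AS action
        else:
            consecutive = 0   # a miss resets the count
    return False

# Example: the average breaches 80 in 3 consecutive 300 s periods.
periods = [[85, 90], [82, 88], [81, 83]]
print(rule_triggered(periods, threshold=80, repeat_count=3))  # → True
```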
| Configuration Item | Description |
| --- | --- |
| Rule Type | Scale Out / Scale In |
| Policy Type | By time |
| Rule Name | The name of the scaling rule. Scaling rule names within the same cluster must be unique (across both scale-out and scale-in rules). |
| Execution Type | Once: Triggers a scaling action at a specific time, accurate to the minute.<br>Recurring: Triggers a scaling action daily, weekly, or monthly at a specific time or within a time period.<br>Execution Time: The specific time at which scaling actions are executed each day.<br>Validity: The time range during which a recurring rule is repeatedly executed. |
| Scale-out / Scale-in Mode | Three modes are available: Node, Memory, and Core; their values must be non-zero integers. When Core or Memory is selected, the number of nodes to add during scale-out is calculated to maximize computing power, and the minimum number of nodes to release during scale-in is calculated to keep the business running normally. Nodes are released in reverse chronological order (newest first), and at least 1 node is released. |
| Scale-out Service | By default, the scaled-out nodes inherit the cluster-level component configuration and belong to the default configuration group for that node type. To adjust the component configuration of scaled-out nodes, specify custom configuration settings. |
| Node Label | By default, scaled-out resources without a label are placed in the Default Label. After a label is set, scaled-out resources are placed in the specified label. |
| Resource Supplement Retry | During peak hours, a scale-out may fail to reach the target node count because resources are insufficient. When this policy is enabled, the system automatically retries applying for resources (provided the configured specifications have sufficient stock) until the target count is reached or approached. Consider enabling this configuration if insufficient resources often cause auto-scaling to fall short of expectations. Note that a triggered retry may extend the auto-scaling time, so evaluate the impact of this policy on your business. |
| Retry Time After Expiration | If an AS action cannot be executed at the specified time, setting a retry time after expiration allows the system to attempt execution periodically within that time range until the AS conditions are met. |
| Cooldown Period | The interval (0 to 43,200 seconds) that must elapse after the current rule executes successfully before the next auto-scaling action can be initiated. |
| Scheduled Termination | Specifies the usage duration of the scaled-out resources; while it is in effect, this batch of nodes is not affected by scale-in rules. "Unlimited" is selected by default; custom durations of 1 to 24 hours (integer) are supported. Use case: supplementing computing power during fixed time periods within a day while keeping this batch of resources immune to other scale-in rules. |
| Graceful Scale-in | When graceful scale-in is enabled and a scale-in action is triggered while a node is executing tasks, the node is not released immediately; it waits for the tasks to complete within a custom time period before scaling in. If the tasks are not finished by the end of that period, the scale-in proceeds anyway. |
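The interaction between Execution Time and Retry Time After Expiration can be sketched as follows. This is a hedged illustration under assumed semantics (a daily recurring rule whose missed execution keeps retrying inside a fixed window); the function name and parameters are not part of the EMR product.

```python
# Hedged sketch: whether a daily time-based rule with a retry window
# should fire at a given moment (all names are illustrative).

from datetime import datetime, time, timedelta

def should_fire(now: datetime, exec_time: time, retry_minutes: int = 0) -> bool:
    """A daily recurring rule fires at exec_time; if execution is blocked
    (e.g. by insufficient resources), the 'Retry Time After Expiration'
    setting keeps the rule eligible for retry_minutes past that time."""
    scheduled = datetime.combine(now.date(), exec_time)
    window_end = scheduled + timedelta(minutes=retry_minutes)
    return scheduled <= now <= window_end

# Rule scheduled daily at 08:00 with a 30-minute retry window:
print(should_fire(datetime(2024, 5, 1, 8, 10), time(8, 0), retry_minutes=30))  # → True
print(should_fire(datetime(2024, 5, 1, 9, 0), time(8, 0), retry_minutes=30))   # → False
```

Within the window the scheduler would attempt execution periodically; once `now` passes `window_end`, the action for that day is abandoned.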