Tencent Cloud

Elastic MapReduce


Repairing Disks

Last updated: 2026-03-20 17:18:25

Overview

The EMR console automatically monitors local disk replacement events. After a disk is replaced, you can initialize the new disk in the console yourself.
Note:
After you receive a faulty disk notification from CVM and repair or replace the physical disk as instructed in the notification, the Disk Repair operation becomes available in the EMR console.
When a disk is replaced, all data on the disk is lost. Back up the data on the disk before replacement.
After a physical disk is repaired or replaced on CVM, if you have already manually initialized and mounted the disk, you do not need to perform Disk Repair in the EMR console.
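If you initialize and mount the replacement disk manually, the steps look roughly like the sketch below. The device name (/dev/vdb), filesystem type (ext4), and mount point (/data) are assumptions; substitute the values for your node. The script only prints the commands so you can review them before running them as root.

```shell
#!/bin/sh
# Assumed values -- replace with your node's actual device and mount point.
DEVICE=${DEVICE:-/dev/vdb}
FS_TYPE=${FS_TYPE:-ext4}
MOUNT_POINT=${MOUNT_POINT:-/data}

# Build the command list instead of executing it, so it can be reviewed first.
CMDS=$(cat <<EOF
mkfs -t $FS_TYPE $DEVICE
mkdir -p $MOUNT_POINT
mount -t $FS_TYPE $DEVICE $MOUNT_POINT
echo "$DEVICE $MOUNT_POINT $FS_TYPE defaults,noatime 0 0" >> /etc/fstab
EOF
)
echo "$CMDS"
```

Appending to /etc/fstab ensures the mount survives a reboot; mounting without a matching fstab entry is a common cause of "disk missing after restart" issues.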

Directions

1. Log in to the EMR console and click the ID/Name of the corresponding cluster in the cluster list.
2. On the cluster details page, choose Cluster Resource > Resource Management and then repair the disk on the node where the disk is replaced.
3. If you have already repaired the disk manually, select Manually fixed in the Repair Disk pop-up window to skip the repair and avoid repeating the operation. Otherwise, the system performs the disk repair by default.
4. During the automated repair, the console restarts services or performs other operations on the current node, so services and the node are unavailable during that time. We recommend performing the repair during off-peak business hours.
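After the console finishes, a quick way to confirm the repaired disk is mounted again is to check /proc/mounts. A minimal sketch (the /data mount point is an assumption; use the mount points configured on your node):

```shell
#!/bin/sh
# Return success if the given mount point appears in /proc/mounts.
is_mounted() {
  awk -v mp="$1" '$2 == mp { found = 1 } END { exit !found }' /proc/mounts
}

# Example: check an assumed EMR data mount point.
if is_mounted /data; then
  echo "/data is mounted"
else
  echo "/data is NOT mounted -- the disk may still need initialization"
fi
```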

Kudu Service Recovery

Note:
This section applies when the node running the KuduServer service has multiple local disks and one or more of them were repaired with the EMR disk repair feature.
Due to a limitation of Kudu's fs_data_dirs mechanism, all data directories on the KuduServer node must be empty for KuduServer to start normally after one or more disks are formatted. Confirm that these directories are not used by any business other than Kudu.
Scenario: Under Cluster Service in the EMR console, the health status of the KuduServer on the node where the disk was replaced shows as unavailable:
Data consistency check and recovery:
1.1 Ensure that the directories described below are used exclusively by Kudu. If a directory is used for any other purpose, move that data to a directory not configured in fs_data_dirs before proceeding with the following steps. To find the specific directories, view the /usr/local/service/kudu/conf/tserver.gflags file:
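The data directories can be read out of the gflags file with a short sketch like the one below. The sample file stands in for the real /usr/local/service/kudu/conf/tserver.gflags; the flag values shown are examples, not necessarily your node's configuration.

```shell
#!/bin/sh
# Stand-in for /usr/local/service/kudu/conf/tserver.gflags on a real node.
GFLAGS_FILE=$(mktemp)
cat > "$GFLAGS_FILE" <<'EOF'
--fs_wal_dir=/data/emr/kudu/tserver
--fs_data_dirs=/data/emr/kudu/tserver,/data1/emr/kudu/tserver
EOF

# Extract the fs_data_dirs value and print one directory per line.
DATA_DIRS=$(sed -n 's/^--fs_data_dirs=//p' "$GFLAGS_FILE" | tr ',' '\n')
echo "$DATA_DIRS"
rm -f "$GFLAGS_FILE"
```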


1.2 Log in to the node with the abnormal local disk and view the log file /data/emr/kudu/log/kudu-tserver.INFO:

Run the following commands as the root user to remove inconsistent data:
rm -rf /data/emr/kudu/tserver/*
rm -rf /data1/emr/kudu/tserver/*
The commands assume that /data/emr/kudu/tserver/ and /data1/emr/kudu/tserver/ are configured in fs_data_dirs. For more information, view /usr/local/service/kudu/conf/tserver.gflags.
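The removal step can also be scripted over the whole fs_data_dirs list. A hedged sketch with a dry-run guard (the directory list is the example from above; take the real list from tserver.gflags on your node, and only set DRY_RUN=0 after confirming nothing else uses these directories):

```shell
#!/bin/sh
# Example directory list -- on a real node, derive this from the
# --fs_data_dirs line in /usr/local/service/kudu/conf/tserver.gflags.
DATA_DIRS="/data/emr/kudu/tserver /data1/emr/kudu/tserver"
DRY_RUN=${DRY_RUN:-1}   # default: print only, remove nothing

clean_data_dirs() {
  for dir in $DATA_DIRS; do
    if [ "$DRY_RUN" -eq 1 ]; then
      echo "would run: rm -rf $dir/*"
    else
      rm -rf "${dir:?}"/*   # ${dir:?} aborts if dir is unexpectedly empty
    fi
  done
}

clean_data_dirs
```

The `${dir:?}` expansion is a small safety net: if the variable were ever empty, the command would abort instead of expanding to `rm -rf /*`.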
1.3 Observe the service status of KuduServer in the console.
Note:
If you encounter any issues, submit a ticket.
