Tencent Cloud

Elastic MapReduce


M Node Accessing CHDFS

Last updated: 2026-01-14 15:26:39

Overview

If you need an M node to access the high-performance file systems provided by Cloud Object Storage (COS) with metadata acceleration enabled or by Cloud HDFS (CHDFS), specify the target file systems and mount points in the Elastic MapReduce (EMR) console. The system automatically generates a CHDFS permission group and binds it to the configured file systems and mount points, which gives the M node access to those file systems and mount points.
Metadata acceleration is a high-performance file system feature provided by COS. It leverages the metadata management capabilities of CHDFS at the underlying layer, allowing you to access COS with file system semantics. For details, see Metadata Acceleration Overview.
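To illustrate what "file system semantics" means in practice: a metadata-accelerated bucket can be addressed like an HDFS directory tree, including directory listings and renames. This is a minimal sketch; the `ofs://` address below is a hypothetical placeholder, not a real mount point.

```shell
# Hypothetical file system address; substitute the address shown for
# your mount point in the CHDFS console.
FS=ofs://examplebucket-125xxxxxxx.chdfs.ap-guangzhou.myqcloud.com

# Listing and rename behave like HDFS directory operations rather than
# per-object key copies, because metadata is managed by CHDFS underneath.
hadoop fs -ls "$FS/"
hadoop fs -mv "$FS/logs/2024-01-01" "$FS/archive/2024-01-01"
```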
Note:
Only existing CHDFS file systems and mount points can be bound. To create a CHDFS file system or mount point, go to the CHDFS console.

Directions

1. Log in to the EMR console, and click the ID or name of the target cluster in the cluster list to go to the Cluster Overview page.
2. On the Cluster Overview page, choose Instance Information > Authorization Information, and click Settings next to M Node Accessing CHDFS.
3. In the Settings pop-up window, bind file systems and mount points. You can add, delete, and modify bindings.
4. Select a file system first, and then select one or more mount points under it.
Note:
You can select file systems and mount points from multiple buckets.
You can unbind file systems and mount points from system-generated permission groups.
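After the binding takes effect, access can be verified from the M node with standard Hadoop filesystem commands. This is a minimal sketch; the `ofs://` address and file names below are hypothetical placeholders.

```shell
# Run on the M node after binding completes.
# Hypothetical mount point address; use the address shown in the
# CHDFS console for the mount point you bound.
MP=ofs://f4xxxxxx-xxxx.chdfs.ap-guangzhou.myqcloud.com

hadoop fs -ls "$MP/"                         # list the mount point root
hadoop fs -mkdir "$MP/emr-test"              # create a test directory
hadoop fs -put ./local-file.txt "$MP/emr-test/"   # upload a local file
```

If the permission group was bound correctly, these commands succeed; a permission error usually means the mount point is not yet bound in the Settings pop-up window.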
