Tencent Cloud WeData

HDFS Data Source

Last updated: 2024-11-01 17:00:28

HDFS Single Table Write Node Configuration

1. On the Data Integration page, click Real-time Sync in the left sidebar.
2. At the top of the Real-time Sync page, select Single Table Sync to create a new sync task (in either form or canvas mode) and enter the configuration page.
3. In the write section on the left, click and select the HDFS node, then configure the node information.



4. Configure the parameters by referring to the descriptions below.
Data Destination: Select an available HDFS data source in the current project.
File Path: The path in the file system. The path supports '*' as a wildcard; when a wildcard is specified, all matching files are traversed.
File Type: HDFS supports four file types: txt, orc, parquet, and csv.
txt: the TextFile file format.
orc: the ORCFile file format.
parquet: the standard Parquet file format.
csv: a plain-text file format (a logical two-dimensional table).
Advanced Settings: Configure additional parameters as needed by your business.
5. Preview the data fields, map them to the fields in the read node configuration, and click Save.
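The '*' wildcard in File Path expands to every matching file under the path. As an illustrative sketch only (the actual matching is performed by the sync engine against HDFS, and these paths are hypothetical), glob-style matching behaves like this:

```python
from fnmatch import fnmatchcase

# Hypothetical HDFS paths; in practice the listing comes from the HDFS data source.
paths = [
    "/user/wedata/logs/2024-10-01.txt",
    "/user/wedata/logs/2024-10-02.txt",
    "/user/wedata/logs/archive/2024-09-30.txt",
]

# A File Path with a '*' wildcard traverses all matching files.
pattern = "/user/wedata/logs/2024-10-*.txt"
matched = [p for p in paths if fnmatchcase(p, pattern)]
print(matched)  # only the two October files directly under /user/wedata/logs
```

Without a wildcard, the File Path refers to a single file or directory.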

HDFS Log Collection Write Node Configuration

Data Destination: Select an available HDFS data source in the current project.
File Path: The path in the file system. The path supports '*' as a wildcard; when a wildcard is specified, all matching files are traversed.
File Type: HDFS supports four file types: txt, orc, parquet, and csv.
Advanced Settings (optional): Configure additional parameters as needed by your business.
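Among the file types, csv stores records as a plain-text logical two-dimensional table: one line per record, fields separated by a delimiter. A minimal local sketch of that layout using Python's csv module (illustrative only; the field names and rows are hypothetical, and in practice the sync engine writes the file to HDFS):

```python
import csv
import io

# Hypothetical field names and rows as mapped from the read node.
fields = ["id", "name", "event_time"]
rows = [
    [1, "alice", "2024-10-01 08:00:00"],
    [2, "bob", "2024-10-01 08:05:00"],
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(fields)   # header line
writer.writerows(rows)    # one line per record
print(buf.getvalue())
```

Binary columnar formats such as orc and parquet store the same logical table with schema and compression built in, which is why they are usually preferred for large analytical datasets.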

