Tencent Cloud

Data Transfer Service


Canal Demo Description (Canal ProtoBuf/Canal JSON)

Last updated: 2024-09-20 10:09:47

Feature Description

Data synced to Kafka via DTS can be written in the Canal format, using the ProtoBuf or JSON serialization protocol, for compatibility with the open-source Canal tool. When configuring a DTS sync task, choose the Canal ProtoBuf or Canal JSON data format, then adapt your business logic based on the Consumer Demo to consume the data.

If you want to learn more about Canal, see the Canal documentation.

Scheme Comparison

| Feature | DTS Sync to Kafka Scheme | Canal Sync Scheme |
|---|---|---|
| Data Type | Full + incremental | Incremental only |
| Data Format | Canal ProtoBuf, Canal JSON | ProtoBuf, JSON |
| Cost | Purchase cloud resources; after the initial configuration, basically no subsequent maintenance is required. | Customers deploy and maintain the service themselves. |

Canal JSON Format Compatibility Statement

Users can consume data with the consumption program from an existing Canal scheme. When consuming data in the Canal JSON format in the DTS scheme, the field names are consistent with those in the Canal scheme's JSON format; only the following differences need to be noted.
1. Fields of binary-related types in the source database (binary, varbinary, blob, tinyblob, mediumblob, longblob, and geometry) are converted into a HexString when synced to the target. Be aware of this when consuming data.
2. Fields of the timestamp type in the source database are converted to UTC (e.g., 2021-05-17 07:22:42 +00:00) when synced to the target. Take the timezone information into account when parsing and converting.
3. The Canal scheme's JSON format defines the sqlType field, which represents the SQL data type in Java Database Connectivity (JDBC). Since Canal is implemented in Java while DTS is implemented in Golang, this field is left empty in the Canal JSON format provided by DTS.
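The differences above can be handled when parsing each message. The following is a minimal Python sketch: the message body follows the Canal JSON layout, but the database, table, column names, and values are illustrative, and sqlType is left empty per the DTS difference noted above.

```python
import json
from datetime import datetime, timezone

# A sample Canal JSON message (hypothetical values) as it might arrive
# from the DTS Kafka topic. Note the empty sqlType field in the DTS variant.
raw = '''{
  "database": "testdb",
  "table": "orders",
  "type": "INSERT",
  "isDdl": false,
  "ts": 1621236162000,
  "sqlType": {},
  "mysqlType": {"id": "int", "payload": "blob", "created_at": "timestamp"},
  "data": [{"id": "1", "payload": "48656c6c6f", "created_at": "2021-05-17 07:22:42"}]
}'''

msg = json.loads(raw)
for row in msg["data"]:
    # Difference 1: binary-typed columns arrive as a HexString -> restore raw bytes.
    payload = bytes.fromhex(row["payload"])
    # Difference 2: timestamp columns arrive in UTC -> attach tzinfo explicitly
    # before any further timezone conversion.
    created_at = datetime.strptime(
        row["created_at"], "%Y-%m-%d %H:%M:%S"
    ).replace(tzinfo=timezone.utc)
    print(payload, created_at.isoformat())
```

The same two conversions apply regardless of which Kafka client library delivers the message bytes.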

Canal ProtoBuf Format Compatibility Statement

To consume data in the Canal ProtoBuf format, you must use the protocol file provided by DTS, because it incorporates features such as the full sync logic and is included in the Consumer Demo. Therefore, use the Consumer Demo provided by DTS and adapt your own business logic based on it to consume the data.
When consuming data in the Canal ProtoBuf format provided by DTS, the field names are consistent with the ProtoBuf format provided by the Canal scheme; only the following differences need to be noted.
1. Fields of binary-related types in the source database (binary, varbinary, blob, tinyblob, mediumblob, longblob, and geometry) are converted into a HexString when synced to the target. Be aware of this when consuming data.
2. Fields of the timestamp type in the source database are converted to UTC (e.g., 2021-05-17 07:22:42 +00:00) when synced to the target. Take the timezone information into account when parsing and converting.
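Once a ProtoBuf row has been decoded into column name/value pairs by the Consumer Demo, the two differences above can be handled with small helpers. A Python sketch (the column values and the UTC+8 target offset are illustrative assumptions, not part of the DTS protocol):

```python
from datetime import datetime, timezone, timedelta

def decode_binary(hex_string: str) -> bytes:
    """Difference 1: binary-typed columns arrive as a HexString; restore raw bytes."""
    return bytes.fromhex(hex_string)

def to_local_time(ts: str, utc_offset_hours: int) -> str:
    """Difference 2: timestamp columns arrive in UTC; shift to the business timezone."""
    dt = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
    local = dt.astimezone(timezone(timedelta(hours=utc_offset_hours)))
    return local.strftime("%Y-%m-%d %H:%M:%S")

print(decode_binary("48656c6c6f"))               # raw bytes of the stored value
print(to_local_time("2021-05-17 07:22:42", 8))   # UTC -> UTC+8: 2021-05-17 15:22:42
```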
