Consuming MongoDB Data

Last updated: 2024-09-11 14:22:51

Overview

In data subscription (Kafka Edition, where the current Kafka server version is v2.6.0), you can consume the subscribed data with a Kafka client v0.11 or later, which is available at DOWNLOAD. This document provides client consumption demos so that you can quickly test the data consumption process and understand how the data is parsed.

Note

The demo only prints out the consumed data and does not contain any usage instructions. You need to write your own data processing logic based on the demo. You can also use Kafka clients in other languages to consume and parse data.
Currently, subscribed data can be consumed from Kafka only over the Tencent Cloud private network, not the public network. In addition, the subscribed database instance and the data consumer must be in the same region.
The Kafka built into DTS data subscription has an upper limit on the size of a single message it can process. If a single row of data in the source database exceeds 5 MB, the subscription task may report an error.
To ensure that data can be rewritten from where the task was paused, DTS adopts a checkpoint mechanism for data subscription. Specifically, when messages are written to Kafka topics, a checkpoint message is inserted every 10 seconds to mark the data sync offset. When the task is resumed after an interruption, data can be rewritten starting from the checkpointed offset. The consumer commits a consumption offset every time it encounters a checkpoint message, so that the consumption offset is updated promptly.
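As an illustration of this pattern, the following minimal Java sketch polls the subscription topic with auto-commit disabled and commits the consumption offset whenever a checkpoint message is seen. It is not the shipped demo: the isCheckpointMessage() helper is a hypothetical placeholder (the downloaded demos contain the actual protocol handling that distinguishes checkpoint messages from data messages), and the SASL authentication settings needed to reach the DTS Kafka brokers are omitted here.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class CheckpointCommitSketch {
    // Hypothetical placeholder: the real check is implemented in the downloaded demo.
    static boolean isCheckpointMessage(ConsumerRecord<byte[], byte[]> record) {
        return false;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "<private network access address>");
        props.put("group.id", "<consumer group name>");
        props.put("enable.auto.commit", "false"); // commit manually at checkpoints
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        // SASL settings for the consumer group account and password are omitted in this sketch.

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("<subscription topic>"));
            while (true) {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<byte[], byte[]> record : records) {
                    if (isCheckpointMessage(record)) {
                        consumer.commitSync(); // advance the consumption offset at each checkpoint
                    } else {
                        // hand the data message to your own processing logic
                    }
                }
            }
        }
    }
}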

Downloading Consumption Demos

Currently, MongoDB data subscription only supports the JSON format. The following demos already contain the JSON protocol file, so you don't need to download it separately.
The consumption demo uses the native changeStream format; for details, see the MongoDB Manual. An illustrative event shape is sketched below.
Demos are provided in Go, Java, and Python, each with a JSON demo download.
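For orientation, the sketch below parses a changeStream-style insert event with Jackson (jackson-databind is assumed to be on the classpath). The sample payload is purely illustrative and follows the field names of the native changeStream format (operationType, ns, documentKey, fullDocument); the exact fields delivered by a DTS subscription should be checked against the consumed messages and the MongoDB Manual.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ChangeStreamParseSketch {
    public static void main(String[] args) throws Exception {
        // Illustrative payload shaped like a native changeStream insert event.
        String payload = "{"
                + "\"operationType\": \"insert\","
                + "\"ns\": {\"db\": \"testdb\", \"coll\": \"orders\"},"
                + "\"documentKey\": {\"_id\": \"650f0c...\"},"
                + "\"fullDocument\": {\"_id\": \"650f0c...\", \"amount\": 42}"
                + "}";

        JsonNode event = new ObjectMapper().readTree(payload);
        System.out.println("operation: " + event.get("operationType").asText());
        System.out.println("namespace: " + event.get("ns").get("db").asText()
                + "." + event.get("ns").get("coll").asText());
        System.out.println("document:  " + event.get("fullDocument"));
    }
}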

Directions for the Java Demo

Compiling environment: Maven and JDK 8. You can choose any package management tool you prefer; the following takes Maven as an example. Runtime environment: Tencent Cloud CVM (which can access the private network address of the Kafka server only if it is in the same region as the subscribed instance) with JRE 8 installed. Follow the steps below:
1. Create a data subscription task (NewDTS) as instructed in Creating Data Subscription to TencentDB for MariaDB.
2. Create one or multiple consumer groups as instructed in Creating Consumer Group.
3. Download the Java demo and decompress it.
4. Access the decompressed directory. The Maven model and pom.xml files are placed there for your use as needed. Package the project with Maven by running mvn clean package.
5. Run the demo. After packaging the project with Maven, go to the generated target folder and run the following command:
java -jar consumerDemo-json-1.0-SNAPSHOT.jar --brokers xxx --topic xxx --group xxx --user xxx --password xxx --trans2sql --trans2canal
brokers is the private network access address for data subscription to Kafka, and topic is the subscription topic, both of which can be viewed on the Subscription details page as instructed in Viewing Subscription Details.
group, user, and password are the name, account, and password of the consumer group, which can be viewed on the Consumption Management page as instructed in Managing Consumer Group.
trans2sql indicates whether to enable conversion to SQL statements. In the Java demo, the conversion is enabled if this parameter is carried.
trans2canal indicates whether to print the data in the Canal format. The conversion is enabled if this parameter is carried. Currently, this parameter only applies to data in the JSON format.
A sketch of how these flags map onto standard Kafka client settings is provided after the steps below.
6. Observe the consumption.
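For reference, the command-line flags above correspond to standard Kafka client settings roughly as in the sketch below. This is an assumed mapping, not the demo's actual source code: in particular, passing the consumer group account and password straight into a SASL/PLAIN JAAS configuration is an assumption here and should be verified against the downloaded demo.

import java.util.Properties;

public class DtsConsumerConfigSketch {
    // Assumed mapping from the demo's CLI flags to Kafka consumer properties.
    static Properties buildProps(String brokers, String group, String user, String password) {
        Properties props = new Properties();
        props.put("bootstrap.servers", brokers);           // --brokers
        props.put("group.id", group);                      // --group
        props.put("security.protocol", "SASL_PLAINTEXT");  // assumption: SASL over the private network
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"" + user + "\" password=\"" + password + "\";"); // --user / --password
        props.put("enable.auto.commit", "false");          // commit at checkpoint messages instead
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        return props;
    }
}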



Directions for the Golang Demo

Compiling environment: Go 1.12 or later, with the Go module environment configured. Runtime environment: Tencent Cloud CVM (which can access the private network address of the Kafka server only if it is in the same region as the subscribed instance). Follow the steps below:
1. Create a data subscription task (NewDTS) as instructed in Creating MySQL or TDSQL for MySQL Data Subscription.
2. Create one or multiple consumer groups as instructed in Creating Consumer Group.
3. Download the Go demo and decompress it.
4. Access the decompressed directory and run go build -o subscribe ./main/main.go to generate the executable file subscribe.
5. Run ./subscribe --brokers=xxx --topic=xxx --group=xxx --user=xxx --password=xxx --trans2sql=true.
brokers is the private network access address for data subscription to Kafka, and topic is the subscription topic, both of which can be viewed on the Subscription details page as instructed in Viewing Subscription Details.
group, user, and password are the name, account, and password of the consumer group, which can be viewed on the Consumption Management page as instructed in Managing Consumer Group.
trans2sql indicates whether to enable conversion to SQL statements.
6. Observe the consumption.



Directions for the Python3 Demo

Compiling and runtime environment: Tencent Cloud CVM (which can access the private network address of the Kafka server only if it is in the same region as the subscribed instance) with Python 3 and pip3 installed (pip3 is used to install dependency packages). Use pip3 to install the dependency packages:
pip install flag
pip install kafka-python
Follow the steps below:
1. Create a data subscription task (NewDTS) as instructed in Creating MySQL or TDSQL for MySQL Data Subscription.
2. Create one or multiple consumer groups as instructed in Creating Consumer Group.
3. Download the Python3 demo and decompress it.
4. Run python main.py --brokers=xxx --topic=xxx --group=xxx --user=xxx --password=xxx --trans2sql=1.
brokers is the private network access address for data subscription to Kafka, and topic is the subscription topic, both of which can be viewed on the Subscription details page as instructed in Viewing Subscription Details.
group, user, and password are the name, account, and password of the consumer group, which can be viewed on the Consumption Management page as instructed in Managing Consumer Group.
trans2sql indicates whether to enable conversion to SQL statements.
5. Observe the consumption.


