Tencent Cloud
Data Transfer Service

Creating Data Subscription for MongoDB

Last updated: 2024-09-11 14:22:29
This document describes how to create a data subscription task in DTS for TencentDB for MongoDB.

Version Description

Currently, data subscription is supported only for TencentDB for MongoDB 3.6, 4.0, 4.2, and 4.4.
TencentDB for MongoDB 3.6 only supports collection-level subscription.

Prerequisites

You have prepared a TencentDB instance to be subscribed to, and the database version meets the requirements. For more information, see Databases Supported by Data Subscription.
We recommend that you create a read-only account in the source instance with the following syntax. You can also do this in the TencentDB for MongoDB console.
# Create an instance-level read-only account
use admin
db.createUser({
  user: "username",
  pwd: "password",
  roles: [
    { role: "readAnyDatabase", db: "admin" }
  ]
})

# Create a database-specific read-only account
use admin
db.createUser({
  user: "username",
  pwd: "password",
  roles: [
    { role: "read", db: "Name of the specified database" }
  ]
})

Notes

Subscribed messages are stored in the DTS built-in Kafka (a single topic), with a current default retention period of 1 day and a maximum storage capacity of 500 GB per topic. When data has been stored for more than 1 day, or its volume exceeds 500 GB, the built-in Kafka starts clearing the earliest written data. Therefore, consume data promptly so that it is not cleared before it is consumed.
The region of data consumption needs to match the region of the subscription task.
The Kafka built into DTS has an upper limit on the size of an individual message. When a single row of data in the source database exceeds 10 MB, that row may be discarded and never reach the consumer.
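The 10 MB ceiling can be screened for on the application side before relying on subscription delivery. Below is a minimal sketch that approximates a document's stored size with a JSON serialization, which is only a rough proxy for the BSON size MongoDB actually stores:

```python
import json

# Per-message ceiling of the DTS built-in Kafka, per the note above.
MAX_MESSAGE_BYTES = 10 * 1024 * 1024  # 10 MB

def may_be_dropped(doc: dict) -> bool:
    """Return True if a document risks exceeding the 10 MB per-message
    limit. JSON size only approximates the real BSON size, so treat
    this as a screening check, not an exact one."""
    size = len(json.dumps(doc).encode("utf-8"))
    return size > MAX_MESSAGE_BYTES

print(may_be_dropped({"_id": 1, "name": "ok"}))                  # False
print(may_be_dropped({"_id": 2, "payload": "x" * (11 << 20)}))   # True
```

Documents flagged by such a check are candidates for restructuring (for example, moving large blobs out of the row) before enabling subscription.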
After the database or collection selected as the subscription object is deleted from the source database, its subscription data (change stream) becomes invalid. Even if the database or collection is rebuilt in the source database, it cannot be resubscribed to. In this case, reset the subscription task and select the subscription object again.

Supported SQL Operations for Subscription

DML: INSERT, UPDATE, DELETE
DDL:
Index: createIndexes, createIndex, dropIndex, dropIndexes
Collection: createCollection, drop, collMod, renameCollection
Database: dropDatabase, copyDatabase
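Since the subscription type is Change Stream, each delivered event carries an operation name. The small sketch below groups the operation names from the table above by category; it is illustrative only, and the exact DTS message envelope is described in the consumption demo:

```python
# Operation names from the table above, grouped by category.
DML_OPS = {"insert", "update", "delete"}
DDL_OPS = {
    # Index operations
    "createIndexes", "createIndex", "dropIndex", "dropIndexes",
    # Collection operations
    "createCollection", "drop", "collMod", "renameCollection",
    # Database operations
    "dropDatabase", "copyDatabase",
}

def classify(op: str) -> str:
    """Classify an operation name into DML/DDL per the table, or
    'unsupported' if DTS does not subscribe to it."""
    if op in DML_OPS:
        return "DML"
    if op in DDL_OPS:
        return "DDL"
    return "unsupported"

print(classify("update"))         # DML
print(classify("createIndexes"))  # DDL
print(classify("shardCollection"))  # unsupported
```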

Subscription Configuration Steps

1. Log in to the DTS console, choose Data Subscription in the left sidebar, and click Create Subscription.
2. On the Create Data Subscription page, select the appropriate configuration and click Buy Now.
Billing Mode: Monthly subscription and pay-as-you-go are supported.
Region: The region must be the same as that of the database instance to be subscribed to.
Database: Select MongoDB.
Edition: Select Kafka Edition, which supports direct consumption through a Kafka client.
Subscription Instance Name: Edit the name of the current data subscription instance.
Quantity: You can purchase up to 10 tasks at a time.
3. After successful purchase, return to the data subscription list, select the task just purchased, and click Configure Subscription in the Operation column.

4. On the data subscription configuration page, configure the source database information and click Test Connectivity. After the test passes, click Next.
Access Type: Currently, only Database is supported.
Instance Name: Select the TencentDB instance ID.
Database Account/Password: Enter the account and password for the subscribed instance. The account needs only read-only permissions.
Number of Kafka Partitions: Set the number of Kafka partitions. Increasing the number improves write and consumption speed. A single partition guarantees message order; multiple partitions cannot. If you have strict requirements on message order during consumption, set this value to 1.
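The ordering trade-off above comes from keyed hashing: a Kafka client picks a partition from a hash of the message key, so order is preserved per key but not across keys. A minimal sketch (md5 stands in for the murmur2 hash real Kafka clients use; this is illustrative, not the DTS implementation):

```python
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Pick a partition from a stable hash of the key, so the same key
    always lands on the same partition. md5 stands in for the murmur2
    hash real Kafka clients use."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

keys = ["db1.users", "db1.orders", "db1.users"]
# Multiple partitions: each key sticks to one partition, but different
# keys can interleave across partitions, so there is no global order.
print([partition_for(k, 4) for k in keys])
# A single partition: everything lands on partition 0 in write order,
# which is why num_partitions = 1 gives strict global ordering.
assert all(partition_for(k, 1) == 0 for k in keys)
```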

5. On the subscription type and object selection page, select the subscription parameters and click Save.
Data Subscription Type: Change Stream by default; this cannot be modified.
Subscription Object Level: The subscription level can be All Instances, Database, or Collection.
All Instances: Subscribe to the data of the entire instance.
Database: Subscribe to database-level data. After selection, only one database can be chosen in the task settings below.
Collection: Subscribe to collection-level data. After selection, only one collection can be chosen in the task settings below.
Task Configuration: Select the database or collection to be subscribed to. You can select only one database or collection.
Output Aggregation Settings: If this option is selected, the execution order of the aggregation pipeline is determined by the configuration order on the page. For more information and examples of the aggregation pipeline, see the MongoDB official documentation.
Kafka Partitioning Policy: Choose how subscribed data is distributed across Kafka partitions.
Partition by Collection Name: Partitions the subscribed data from the source database by collection name. Once set, data with the same collection name is written to the same Kafka partition.
Custom Partitioning Policy: Database and collection names of the subscribed data are first matched against a regular expression; matched data is then partitioned by collection name or by collection name + ObjectId.
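The custom partitioning policy can be understood as key selection followed by hashing. A minimal sketch with hypothetical function and parameter names (the regex and key choice mirror the console options; this is not the DTS internal implementation):

```python
import re

def choose_key(database: str, collection: str, object_id: str,
               pattern: str, by_object_id: bool) -> str:
    """Sketch of the custom partitioning policy: names matching the
    regex are keyed by collection name, or by collection name +
    ObjectId for finer spreading; non-matching data falls back to the
    collection name. All names here are illustrative."""
    full_name = f"{database}.{collection}"
    if re.search(pattern, full_name):
        return f"{collection}.{object_id}" if by_object_id else collection
    return collection

# Keying by collection name + ObjectId spreads one hot collection across
# partitions, at the cost of losing cross-document order inside it.
print(choose_key("shop", "orders", "a1", r"^shop\.", True))   # orders.a1
print(choose_key("shop", "orders", "a1", r"^shop\.", False))  # orders
print(choose_key("other", "logs", "a1", r"^shop\.", True))    # logs
```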
6. On the pre-verification page, the pre-verification task is expected to run for 2–3 minutes. After passing pre-verification, click Start to complete the data subscription task configuration.
Note:
If verification fails, fix the issues by referring to Fix for Verification Failure and initiate verification again.
7. The subscription task is then initialized, which is expected to take 3–4 minutes. After successful initialization, the task enters the Running status.

Subsequent Operations

Consumption in data subscription (Kafka Edition) relies on Kafka consumer groups, so you need to create a consumer group before consuming data. Data subscription (Kafka Edition) supports creating multiple consumer groups for multi-point consumption.
After the subscription task enters the Running status, you can start consuming data. Consuming with Kafka requires password authentication. For details, see the demo in Consume Subscription Data, which provides demo code in multiple languages and explains the main consumption process and key data structures.
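As a minimal sketch of the settings such a consumer typically needs, the helper below builds a configuration dict. Parameter names follow the kafka-python client's conventions, and the broker address, consumer group, and credentials are placeholders rather than real endpoints; see the official demo for the authoritative flow.

```python
def build_consumer_config(broker: str, group: str,
                          user: str, password: str) -> dict:
    """Assemble consumer settings for DTS data subscription (Kafka
    Edition). Keys follow kafka-python conventions; all values passed
    in are placeholders for the real endpoint and credentials."""
    return {
        "bootstrap_servers": broker,
        "group_id": group,                # consumer group created beforehand
        "security_protocol": "SASL_PLAINTEXT",
        "sasl_mechanism": "PLAIN",        # password authentication
        "sasl_plain_username": user,
        "sasl_plain_password": password,
        "auto_offset_reset": "earliest",  # start from the oldest retained data
        "enable_auto_commit": False,      # commit offsets only after processing
    }

config = build_consumer_config("broker:9092", "my-group", "account", "secret")
print(sorted(config))
```

With kafka-python installed, the dict can be unpacked into `KafkaConsumer(topic, **config)`; committing offsets manually after processing avoids losing messages if the consumer crashes mid-batch.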

