Tencent Cloud

Data Transfer Service


Data Consumption Exception

Last updated: 2024-07-08 15:45:26

Issue

In data subscription scenarios, if you use your own consumer to consume data, you may encounter the following exceptions:
1. Data cannot be consumed.
2. Consumed data is lost or duplicated.
3. The consumer delay keeps increasing.

Troubleshooting

1. Data cannot be consumed

If your own consumer fails to consume data, first use the demo provided by DTS to run a consumption test.
If the demo can consume data normally, the problem lies in your own consumer; check it first.
If the demo also cannot consume data, troubleshoot further as follows:
Check the network environment of the consumer. The consumer must be in the Tencent Cloud private network and in the same region as the DTS data subscription task.
Check whether the demo startup parameters are correct, especially the consumer group password.
Check whether the demo version is correct. The required demo version varies by source database type and data format.
Check the number of unconsumed messages on the consumer group management page in the console to confirm whether the subscription task has written data to Kafka.
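As a quick sanity check independent of the demo, the client settings can be verified with a short script. The sketch below is illustrative only: it assumes the subscription endpoint authenticates with SASL/PLAIN over plaintext (as the DTS Kafka demos commonly do), and every concrete value (broker address, topic, consumer group, username, password) is a placeholder to be replaced with values from the DTS console.

```python
# Minimal configuration sanity check for a Kafka subscription consumer.
# All concrete values (broker address, topic, group, username, password)
# are placeholders -- take the real ones from the DTS console.

def build_consumer_config(broker, group_id, username, password):
    """Assemble the client settings the DTS demos typically use
    (assumption: SASL/PLAIN over plaintext, manual offset commits)."""
    return {
        "bootstrap_servers": [broker],
        "group_id": group_id,
        "security_protocol": "SASL_PLAINTEXT",
        "sasl_mechanism": "PLAIN",
        "sasl_plain_username": username,   # username format varies; check the console
        "sasl_plain_password": password,   # the consumer group password
        "auto_offset_reset": "earliest",
        "enable_auto_commit": False,       # the demos commit offsets manually
    }

def probe(config, topic, timeout_ms=5000):
    """Attempt one fetch and return the number of records received.
    An authentication or network error here points at the settings
    or the network path, not at your consumer logic."""
    from kafka import KafkaConsumer  # pip install kafka-python
    consumer = KafkaConsumer(**config)
    consumer.subscribe([topic])
    batch = consumer.poll(timeout_ms=timeout_ms)
    consumer.close()
    return sum(len(records) for records in batch.values())
```

If `probe(...)` returns a positive count, the task is writing data and the network and credentials are fine; remember that the broker address must be the private-network endpoint in the same region as the task.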

2. Consumed data is lost or duplicated

When a data subscription task is restarted, the producer may write duplicate data, which leads to duplicates on the consumer side. This scenario is rare. In other scenarios, data duplication or data loss should not occur.
Generally, data duplication or data loss is caused by an exception in your own consumer. To troubleshoot, first reproduce the problem using one of the following two methods:
In the console, roll the Kafka consumption offset back to an earlier offset and consume the data again.
Create a new consumer group and use it to consume the data again. Consumption in different consumer groups does not affect each other.
If the problem can be reproduced, submit a ticket; otherwise, check whether your own consumer is behaving abnormally.
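When re-consuming with a rolled-back offset or a fresh consumer group, it helps to compare the records a known-good run (for example, the DTS demo) received against what your consumer recorded. The helper below is an illustrative sketch; it assumes each record carries a unique identifier that both runs log, such as the partition/offset pair.

```python
from collections import Counter

def diff_runs(reference_ids, observed_ids):
    """Compare record IDs from a known-good run against your consumer's run.
    Returns (lost, duplicated): IDs the reference saw but you missed, and
    IDs you received more times than the reference did."""
    ref = Counter(reference_ids)
    obs = Counter(observed_ids)
    lost = sorted(ref - obs)  # positive counts remain only where ref > obs
    duplicated = sorted(id_ for id_, n in obs.items() if n > ref.get(id_, 0))
    return lost, duplicated
```

A non-empty `lost` list points at dropped records in your consumer; a non-empty `duplicated` list distinguishes consumer-side re-processing from the rare producer-side duplication after a task restart.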

3. The consumer delay keeps increasing

1. The commit logic of the consumer has been modified.
If the consumer only consumes data but does not commit the consumption offset, the offset in Kafka will not be updated. By default, the DTS demo commits the consumption offset every time it consumes a checkpoint message, and the subscription service writes a checkpoint message roughly every 10 seconds. If you have modified this commit rule, the consumer delay may keep increasing. To solve this problem, first check the commit rule of your consumer.
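The default demo behavior described above can be sketched as a consume loop that commits the offset only when a checkpoint message arrives. This is a minimal illustration; the message-type name "checkpoint" and the `(msg_type, payload)` envelope are assumptions for the sketch, and the real format depends on the demo version for your source database.

```python
def run_consume_loop(messages, process, commit):
    """Process each message, committing the offset only on checkpoint
    messages, mirroring the default DTS demo commit rule. `messages` is an
    iterable of (msg_type, payload) pairs; `process` handles data messages
    and `commit` commits the current offset. Returns the commit count."""
    commits = 0
    for msg_type, payload in messages:
        if msg_type == "checkpoint":
            commit()          # offset advances only here (roughly every 10 s)
            commits += 1
        else:
            process(payload)  # data messages: handle but do not commit
    return commits
```

If your loop never takes the `commit()` branch, the offset in Kafka stands still and the reported delay grows even though data is being processed.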
2. The consumption efficiency is too low.
Consumption efficiency can be affected by network conditions, the processing efficiency of the consumer, and whether consumption runs concurrently across multiple partitions. To locate the cause, create a new consumer group and compare the consumption efficiency of the DTS demo with that of your own consumer. You can also check the network conditions to improve the data processing speed, or increase the number of consumers so that topics with multiple partitions are consumed concurrently.
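To judge whether a growing delay is a throughput problem, a rough capacity estimate helps: the number of concurrent consumers needed is the produce rate divided by one consumer's processing rate, capped by the partition count, since Kafka assigns each partition to at most one consumer within a group. The rates below are hypothetical figures for illustration.

```python
import math

def consumers_needed(produce_rate, per_consumer_rate, partitions):
    """Estimate how many consumers a group needs to keep up, capped at the
    partition count (extra consumers in a group would sit idle).
    Rates are in messages per second; partitions is the topic's count."""
    if per_consumer_rate <= 0:
        raise ValueError("per_consumer_rate must be positive")
    needed = math.ceil(produce_rate / per_consumer_rate)
    return min(max(needed, 1), partitions)
```

For example, at a hypothetical 10,000 msg/s produced and 3,000 msg/s per consumer, an 8-partition topic needs 4 consumers. If the uncapped estimate exceeds the partition count, the topic itself is the bottleneck and consumption cannot be parallelized further within one group.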
