Tencent Cloud
TDMQ for CKafka

Solution 1: Single-Write Dual-Consumption Migration

Last updated: 2026-01-20 17:19:21

Scenarios

This document describes how to migrate data from a self-built Kafka cluster to CKafka using the single-write dual-consumption solution.

Prerequisites

Operation Steps

When strict data ordering is not required, the switchover can be performed with multiple consumers consuming in parallel.
The single-write dual-consumption approach is simple and easy to operate, causes no data backlog, and enables a smooth transition; however, it requires the business side to run an additional set of consumers during the migration.
The migration steps are as follows:

1. Keep the old consumers unchanged. On the consumer side, start new consumers, configure the bootstrap server of the new cluster, and consume from the new CKafka cluster. Set the IP address in --bootstrap-server to the access address of the CKafka instance: on the instance details page in the console, in the Access Mode module, copy the address in the Network column.
./kafka-console-consumer.sh --bootstrap-server xxx.xxx.xxx.xxx:9092 --from-beginning --topic topicName --consumer.config ../config/consumer.properties
2. Switch the production flow so that the producer starts sending data to the CKafka instance. Change the IP address in broker-list to the access network address of the CKafka instance. Set topicName to the Topic name in the CKafka instance:
./kafka-console-producer.sh --broker-list xxx.xxx.xxx.xxx:9092 --topic topicName
3. The existing consumers require no special configuration and continue to consume data from the self-built Kafka cluster. Once they have consumed all remaining data in the original cluster, the migration is complete.
Note:
The commands above are test commands. In an actual business scenario, you only need to change the broker address configured in the corresponding application and restart the application.
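The three steps above can also be sketched in application code. The following is a minimal illustration of the cutover using configuration dictionaries whose keys match the kafka-python client's keyword arguments; all addresses, the topic name, and the group ID are placeholders, not values from your environment:

```python
# Sketch of the single-write dual-consumption cutover.
# The dicts below use kafka-python-style keys and could be unpacked into
# KafkaConsumer(TOPIC, **cfg) / KafkaProducer(**cfg); no broker is contacted here.

OLD_CLUSTER = "old-kafka.internal:9092"   # placeholder: self-built Kafka cluster
NEW_CLUSTER = "ckafka-access.example:9092"  # placeholder: CKafka access address from the console
TOPIC = "topicName"                        # placeholder: topic name in the CKafka instance


def consumer_config(bootstrap: str, group_id: str) -> dict:
    """Consumer settings shared by the old and new consumer groups."""
    return {
        "bootstrap_servers": bootstrap,
        "group_id": group_id,
        "auto_offset_reset": "earliest",  # matches --from-beginning in the test command
        "enable_auto_commit": True,
    }


# Step 1: keep the old consumer running, and start a second consumer
# with the same logic but pointed at the CKafka cluster (dual consumption).
old_consumer_cfg = consumer_config(OLD_CLUSTER, "legacy-group")
new_consumer_cfg = consumer_config(NEW_CLUSTER, "legacy-group")

# Step 2: repoint the producer at the CKafka instance (single write).
producer_cfg = {"bootstrap_servers": NEW_CLUSTER}

# Step 3: once the old cluster's backlog is drained, retire the old consumer;
# only the CKafka consumer and producer remain.
```

In a real application the cutover usually happens by editing the broker address in the application's own configuration file and restarting, as the note above says; the sketch only makes the before/after wiring of the three steps explicit.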
