Tencent Cloud

TDMQ for CKafka

Solution 3: Using Mirrormaker for Migration

Last updated: 2026-01-20 17:19:22

Scenarios

This document describes how to migrate data from a self-built Kafka cluster to a TDMQ for CKafka (CKafka) cluster using Mirrormaker.
Kafka's Mirrormaker tool can replicate data from a self-built Kafka cluster to a CKafka cluster. It works as follows: Mirrormaker consumes messages from the self-built cluster with a consumer and forwards them to the CKafka cluster with a producer. After the data is replicated, you switch the client's production and consumption configuration to the access point of the cloud instance, which completes the migration from the self-built Kafka cluster to the CKafka cluster.

Prerequisites

A self-built Kafka cluster containing the data to be migrated is running and accessible, and a CKafka instance has been created in the console.

Operation Steps

1. Download the Mirrormaker tool and decompress it locally.
Note:
This document takes kafka_2.11-1.1.1.tgz as an example.
2. Configure the consumer.properties file.
# list of brokers used for bootstrapping knowledge about the rest of the cluster
# format: host1:port1,host2:port2 ...
bootstrap.servers=localhost:9092

# consumer group id
group.id=test-consumer-group

partition.assignment.strategy=org.apache.kafka.clients.consumer.RoundRobinAssignor
# What to do when there is no initial offset in Kafka or if the current
# offset does not exist any more on the server: latest, earliest, none
#auto.offset.reset=
Parameter descriptions:
- bootstrap.servers: List of broker access points of the self-built cluster.
- group.id: Consumer group ID used during the migration. It must not conflict with the names of existing consumer groups on the self-built cluster.
- partition.assignment.strategy: Partition assignment strategy, for example partition.assignment.strategy=org.apache.kafka.clients.consumer.RoundRobinAssignor.
3. Configure the producer.properties file.
# list of brokers used for bootstrapping knowledge about the rest of the cluster
# format: host1:port1,host2:port2 ...
bootstrap.servers=localhost:9092

# specify the compression codec for all data generated: none, gzip, snappy, lz4
compression.type=none
Parameter descriptions:
- bootstrap.servers: Access point of the CKafka instance. You can copy it from the Network column of the Access Mode module on the instance details page in the console.
- compression.type: Message compression type. Note that CKafka does not support the GZIP compression format.
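Accordingly, a producer.properties pointing at the CKafka instance might look like this sketch; the access point below is a placeholder that you would replace with the one copied from the console.

```properties
# CKafka instance access point (placeholder; copy yours from the
# Access Mode module on the instance details page)
bootstrap.servers=10.0.0.5:9092

# CKafka does not support gzip; none, snappy, or lz4 are acceptable
compression.type=none
```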
4. Run the Mirrormaker tool from the bin directory to start the migration.
sh bin/kafka-mirror-maker.sh --consumer.config config/consumer.properties --producer.config config/producer.properties --whitelist topicName
Note:
whitelist is a regular expression. Topics whose names match the regular expression are migrated.
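Because the whitelist is matched as a regular expression, one invocation can mirror several topics. The following sketch, with hypothetical topic names, uses grep's extended regex (which behaves like Java regex for simple alternation patterns) to show which topics a given whitelist would select:

```shell
# Mirrormaker matches --whitelist as a regular expression against topic
# names. With the hypothetical list below, 'order-.*|payment-events'
# selects the first three topics and skips audit-log.
printf '%s\n' order-created order-updated payment-events audit-log \
  | grep -Ex 'order-.*|payment-events'
```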
5. Run kafka-consumer-groups.sh in the bin directory to view the consumption progress of the self-built cluster.
bin/kafka-consumer-groups.sh --new-consumer --describe --bootstrap-server <access point of the self-built cluster> --group test-consumer-group
Note:
group refers to the consumer group ID used during data migration.
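The LAG column of this command's output (LOG-END-OFFSET minus CURRENT-OFFSET) shows how many messages are still waiting to be mirrored; migration has caught up when every partition's LAG is 0. The following sketch sums the LAG column from a hypothetical sample of the command's output:

```shell
# Sum the LAG column of a (hypothetical) kafka-consumer-groups.sh
# output sample; a total of 0 means Mirrormaker has caught up.
cat <<'EOF' | awk 'NR > 1 { lag += $5 } END { print "total lag: " lag }'
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG
topicName 0 1000 1000 0
topicName 1 998 1000 2
EOF
```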




Subsequent Processing

After the data migration is complete, switch the client's production and consumption configuration to the access point of the cloud instance.
