Tencent Cloud

Data Transfer Service


Warning Item Check

Last updated: 2026-02-11 14:53:24

MySQL/TDSQL-C/MariaDB/Percona/TDSQL for MySQL Check Details

You need to configure the following parameters as required; otherwise, the system will report a warning during verification. A warning does not block the migration task, but it may affect your business, so assess each item and decide whether to modify the parameters.
We recommend that you set max_allowed_packet in the target database to a value greater than that in the source database.
Impact on the business: If the value of max_allowed_packet in the target database is smaller than that in the source database, data cannot be written to the target database, leading to full migration failures.
Fix: Change the value of max_allowed_packet in the target database to a value greater than that in the source database.
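To compare the current values before migration, you can run the following statement on both the source and the target instance (the variable name is standard MySQL):

```sql
-- Show the current packet-size limit; the value is in bytes
SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';
```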
We recommend that you set max_allowed_packet in the target database to an appropriate value no greater than 1 GB (the maximum MySQL allows).
Impact on the business: If the value of max_allowed_packet is too large, more memory is used, which can cause packet loss and make it impossible to capture the SQL statements of large abnormal transaction packets. If the value is too small, program errors may occur, backups may fail, and network packets will be sent and received frequently, which degrades system performance.
Fix: Run the following command to modify the max_allowed_packet parameter (the value is in bytes; 64 MB is shown here as an example):
set global max_allowed_packet = 67108864;
We recommend that you use the same character set for the source and target databases.
Impact on the business: If the character sets of the source and target databases are different, there may be garbled characters.
Fix: Run the following command on whichever instance needs to change so that the source and target use the same character set:
set global character_set_server = 'utf8';
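To check the current settings, you can compare the server character set and collation on both instances:

```sql
-- Compare these values between the source and target instances
SHOW GLOBAL VARIABLES LIKE 'character_set_server';
SHOW GLOBAL VARIABLES LIKE 'collation_server';
```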
We recommend that you use an instance with 2-core CPU and 4000 MB memory or higher specifications.
If you only perform full data migration, do not write new data into the source instance during migration; otherwise, the data in the source and target databases will be inconsistent. In scenarios with data writes, we recommend that you select full + incremental data migration to ensure data consistency in real time.
For lock-involved data export, you need to use the FLUSH TABLES WITH READ LOCK command to lock tables in the source instance temporarily, but the MyISAM tables will be locked until all the data is exported. The lock wait timeout period is 60s, and if locks cannot be obtained before the timeout elapses, the task will fail.
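A simplified sketch of the lock-based export flow (the actual statements DTS issues may differ):

```sql
-- Acquire a global read lock; writes block until it is released.
-- MyISAM tables stay locked for the whole export.
FLUSH TABLES WITH READ LOCK;
-- ... export a consistent snapshot of the data here ...
-- Release the global read lock
UNLOCK TABLES;
```

If the lock cannot be acquired within the 60-second wait period (for example, because a long-running query holds the tables), the task fails and must be retried.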
For lock-free data export, only tables without a primary key are locked.
To avoid duplicate data, make sure that the tables to be migrated have a primary key or non-null unique key.
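The following query is one way to list candidate tables that lack a PRIMARY KEY or UNIQUE constraint; `mydb` is a hypothetical schema name. Note that a UNIQUE key on a nullable column still allows duplicates of NULL, so such tables need a manual check.

```sql
-- List base tables in `mydb` that have neither a primary key
-- nor a unique constraint
SELECT t.table_name
FROM information_schema.tables AS t
LEFT JOIN information_schema.table_constraints AS c
  ON  c.table_schema = t.table_schema
  AND c.table_name   = t.table_name
  AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_schema = 'mydb'
  AND t.table_type   = 'BASE TABLE'
  AND c.constraint_name IS NULL;
```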
If the source database instance is a distributed database, such as TDSQL for MySQL, you need to create sharded tables in the target database in advance; otherwise, the source database tables will become non-sharded ones after being migrated.
If the target database is MySQL/MariaDB/Percona/TDSQL-C for MySQL/TDSQL for TDStore, you need to check the explicit_defaults_for_timestamp parameter in the source and target databases. If it is set to OFF in the source database, or set to ON in both the source and target databases, the task will report a warning reminding you not to modify this parameter while the task is running.
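You can inspect this parameter on both instances with:

```sql
-- Check whether TIMESTAMP columns get implicit defaults (OFF)
-- or behave like other column types (ON)
SHOW GLOBAL VARIABLES LIKE 'explicit_defaults_for_timestamp';
```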
In the full database/table structure export stage, you need to check the COLUMN_DEFAULT and IS_NULLABLE attributes of the tables. If a TIMESTAMP column in a source table has COLUMN_DEFAULT set to NULL and IS_NULLABLE set to NOT NULL, that table structure will not be migrated or synced, because MySQL would otherwise automatically add the DEFAULT CURRENT_TIMESTAMP attribute to the migrated or synced TIMESTAMP column.
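One way to find the affected columns ahead of time is to query information_schema.columns; `mydb` is a hypothetical schema name:

```sql
-- Find TIMESTAMP columns that are NOT NULL with no explicit default,
-- i.e. the case whose table structure is skipped during export
SELECT table_name, column_name
FROM information_schema.columns
WHERE table_schema   = 'mydb'
  AND data_type      = 'timestamp'
  AND is_nullable    = 'NO'
  AND column_default IS NULL;
```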


