Tencent Cloud

TDSQL-C for MySQL


Data Loading Limitations

Last updated: 2025-10-16 12:31:02
The analysis engine builds its data in columnar storage. As a result, certain special MySQL usage scenarios are not supported, as described below:
Description of support for tables without primary keys or unique keys
In version 1.2404.x: A table that has neither a primary key nor a unique key cannot be loaded into the analysis engine; every loaded table must contain a primary key or a unique key, because the LibraDB engine uses the table's primary key or unique key by default to build columnar data. In addition, version 1.2404.x does not support any form of DDL statement that modifies a table's primary key. If a primary key is modified in TDSQL-C for MySQL, the corresponding table in the analysis engine is no longer loaded and cannot be queried. To use the table again, you need to remove this table and then reload it.
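The version 1.2404.x requirement can be sketched as follows (the table and column names here are hypothetical, for illustration only):

```sql
-- Cannot be loaded in version 1.2404.x: no primary key or unique key.
CREATE TABLE orders_nokey (
  order_no   BIGINT,
  amount     DECIMAL(10, 2),
  created_at DATETIME
);

-- Can be loaded: the primary key gives the LibraDB engine a key
-- on which to build the columnar data.
CREATE TABLE orders (
  order_no   BIGINT PRIMARY KEY,
  amount     DECIMAL(10, 2),
  created_at DATETIME
);
```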
In version 2.2410.x: Tables without a primary key or unique key can also be loaded into the analysis engine, and changes to table primary keys are supported. However, the following special scenarios are not supported:
The table contains only fields of the time, date, timestamp, datetime, float, and double types, and no other field types.
Primary key DDL is performed on a table that contains no field types other than those listed above.
Tables using columns of the float or double field type as the primary key cannot be loaded into the analysis engine.
Float and double are floating-point field types; columns of these two types cannot serve as the primary key used to build columnar data.
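A hypothetical sketch of this limitation (table and column names are illustrative only):

```sql
-- Cannot be loaded: the primary key column is a floating-point type.
CREATE TABLE sensor_bad (
  reading DOUBLE PRIMARY KEY,
  label   VARCHAR(32)
);

-- Can be loaded: an exact-type primary key, with the floating-point
-- value kept as an ordinary column.
CREATE TABLE sensor_ok (
  id      BIGINT PRIMARY KEY,
  reading DOUBLE,
  label   VARCHAR(32)
);
```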
Stored procedures, user-defined functions, triggers, foreign key constraints, events, and indexes are not loaded into the analysis engine.
The above special objects cannot be built in columnar storage.
Tables with spatial-type fields cannot be loaded into the analysis engine. Tables with JSON-type fields can be loaded into the analysis engine, but when such tables are queried there, the JSON column values are empty.
Loading temporary tables into the analysis engine is not supported.
Data modifications to temporary tables are not logged, so the data of temporary tables cannot be loaded into the analysis engine.
Behavior of tables loaded into the analysis engine after being renamed
After a table that has been loaded into the analysis engine is renamed with a RENAME statement, it is automatically reloaded under its new name. If a new table is then created with the original name, that new table is also automatically loaded into the analysis engine. For example, suppose table A has been loaded into the analysis engine as a columnar storage table. If table A is renamed to table B, the table name in the analysis engine changes to table B as well. If a new table named A is then created in the read-write instance, this new table A is also automatically loaded into the analysis engine.
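The rename behavior described above can be sketched as follows (table names A and B follow the example in the text; the column definitions are hypothetical):

```sql
-- Table A is already loaded into the analysis engine.
RENAME TABLE A TO B;   -- the analysis engine now tracks the table as B

-- A new table reusing the old name:
CREATE TABLE A (
  id INT PRIMARY KEY,
  v  VARCHAR(64)
);                     -- this new A is also loaded automatically
```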
Loading tables with primary key fields of ultra-long numeric types into columnar storage is not supported
In version 1.2404.x, a table whose Decimal primary key exceeds 128 characters cannot be loaded into columnar storage.
In version 2.2410.x, a table whose Decimal primary key exceeds 256 characters cannot be loaded into columnar storage.
Column-level permissions are not supported.
By default, the analysis engine synchronizes all users' query permissions on objects from the read-write instance, but it does not synchronize column-level permissions. Therefore, column-level permission control cannot be enforced in the analysis engine.
Unsupported data types
The analysis engine does not support certain data types. If a table contains columns of unsupported data types, the table cannot be loaded into the analysis engine.
Unsupported table structures
The analysis engine does not support generated columns, whether virtual or stored. Tables containing generated columns cannot be loaded into the analysis engine.
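For example (hypothetical table), either form of generated column prevents the table from being loaded:

```sql
-- Cannot be loaded: contains generated columns.
CREATE TABLE rect (
  w     INT,
  h     INT,
  area  INT GENERATED ALWAYS AS (w * h) VIRTUAL,  -- virtual generated column
  perim INT GENERATED ALWAYS AS (2 * (w + h)) STORED  -- stored generated column
);
```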
Unsupported field type conversions
Conversions of certain field types are not supported in the analysis engine. If data type conversion is performed in the read-write instance of TDSQL-C for MySQL, the data loading task of the analysis engine may be terminated, and the loading statuses of all tables may become Paused. For detailed information on the type conversion support, see Description of Supported Type Conversion Functions.
Note:
If you modify a field type of a table in TDSQL-C for MySQL and the analysis engine does not support this type modification, the loading status of all tables becomes Paused. To resume, you need to unload this table and then reload its data.
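The note above concerns statements of the following shape. Whether any specific conversion is supported is listed in Description of Supported Type Conversion Functions; the conversion below is only a hypothetical illustration, not a statement that it is unsupported:

```sql
-- A column type change on the read-write instance. If the analysis engine
-- does not support this particular conversion, the loading statuses of
-- all tables become Paused.
ALTER TABLE orders MODIFY COLUMN amount VARCHAR(32);
```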
Description of DDL synchronization for partitioned tables
In version 1.2404.x: Partitioned tables can be loaded into the analysis engine by default and can be queried. However, DDL operations on the partitions of a partitioned table are not synchronized, such as rebuilding, optimizing, repairing, checking, exchanging, dropping, and merging partitions. In addition, querying a specific subpartition is not supported in the analysis engine.
Note:
When you drop a subpartition, truncate a partition, or exchange partitions for a partition table in the read-write instance of TDSQL-C for MySQL, this table cannot be queried in the analysis engine. To use the table again, you need to remove this table and then reload it.
In version 2.2410.x: Partitioned tables can be loaded into the analysis engine, and the following DDL changes to partitioned tables on the source side are supported:
Drop Partition/Subpartition statements on Range/List partitioned tables.
Add Partition/Subpartition statements on Range/List partitioned tables.
When no partition function is used, the supported partition key data types include Uint8, Uint16, Uint32, Uint64, Int8, Int16, Int32, and Int64.
Supported partition functions include year, month, day, to_days, and unix_timestamp; the data types supported by these functions include DATE, DATETIME, and TIMESTAMP. Among them, the to_days function also accepts string input parameters.
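As a hypothetical illustration of a supported layout, a RANGE partitioned table keyed on to_days over a DATE column, together with the Add/Drop Partition statements listed above (all names are illustrative):

```sql
CREATE TABLE events (
  id         BIGINT,
  event_date DATE,
  PRIMARY KEY (id, event_date)  -- partition key must be part of the primary key
)
PARTITION BY RANGE (TO_DAYS(event_date)) (
  PARTITION p2023 VALUES LESS THAN (TO_DAYS('2024-01-01')),
  PARTITION p2024 VALUES LESS THAN (TO_DAYS('2025-01-01'))
);

-- Add Partition and Drop Partition are the supported DDL changes:
ALTER TABLE events ADD PARTITION
  (PARTITION p2025 VALUES LESS THAN (TO_DAYS('2026-01-01')));
ALTER TABLE events DROP PARTITION p2023;
```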
Description of DDL synchronization for ordinary tables
DDL operations on tables in the read-write instance of TDSQL-C for MySQL are normally synchronized to the analysis engine. However, in the following scenarios, issues occur when you use the tables in the analysis engine.
After a DDL operation is performed on a table, the table cannot be used normally in the analysis engine if any of the unsupported scenarios mentioned in this document exist.
Schema changes via pt-osc and gh-ost, as well as lock-free data changes via Alibaba Cloud DMS, can be synchronized normally.
Primary key modifications (addition, deletion, or change) on tables with over 1 million rows can cause synchronization to the analysis engine to pause. For such tables, it is recommended to perform primary key DDL operations with tools like pt-osc.
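The primary key modifications meant here are direct DDL statements such as the following (hypothetical tables). On tables with over 1 million rows these can pause synchronization, which is why an online-schema-change tool such as pt-osc is suggested instead:

```sql
-- Addition:
ALTER TABLE logs ADD PRIMARY KEY (id);

-- Deletion:
ALTER TABLE staging DROP PRIMARY KEY;

-- Change (drop and re-add in one statement):
ALTER TABLE orders DROP PRIMARY KEY, ADD PRIMARY KEY (order_no, created_at);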
