Tencent Cloud

TDSQL Boundless

Usage specification recommendations

Last updated: 2026-04-03 16:52:52

Table Creation Specifications

Table Type Selection

Before creating a table, you must determine the table type. The three types cannot be converted to one another (except single table → partitioned table):
Table Type | Scenarios | Characteristics
Regular Table (Single Table) | Small data volume, no distribution requirement | All data resides in one RG.
Partitioned Table | Large data volume, requiring horizontal scaling | Data is distributed across multiple RGs according to rules.
Replicated Table | System configuration tables; dimension tables; read-heavy, write-light scenarios | Strongly consistent replicas on all nodes.

Primary Key Specifications

Primary keys must be explicitly defined; a table without a primary key will develop write hotspots.
Avoid monotonically increasing primary keys (such as a purely auto-incrementing INT), which also cause write hotspots. It is recommended to use composite primary keys or keys with random prefixes.
An auto-increment column must by itself be the primary key or a unique index; otherwise, duplicate values may occur.
Auto-increment values are only guaranteed to be globally unique, not consecutive (per-shard caching mechanism, default batch size 100).
If the business requires consecutive auto-increment values, set tdsql_auto_increment_batch_size=1 (performance will degrade).
Partitioned tables: the primary key and all unique indexes must include the partition key; otherwise, table creation will fail.
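The rules above can be combined in a single table definition. A minimal sketch, assuming standard MySQL-compatible DDL; the orders table and its columns are illustrative:

```sql
-- Composite primary key: leads with user_id (hash-distributed, not
-- monotonically increasing) and includes the partition key, so the
-- unique-index rule for partitioned tables is satisfied.
CREATE TABLE orders (
  user_id  BIGINT NOT NULL,
  order_id BIGINT NOT NULL,
  amount   DECIMAL(12, 2),
  PRIMARY KEY (user_id, order_id)
) PARTITION BY HASH(user_id) PARTITIONS 16;
```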

Table Schema Design Specifications

It is recommended that the number of columns in a single table not exceed 60 (hard limit: 4096 columns).
It is recommended that a single row of data not exceed 64KB (hard limit: 64MB by default, adjustable up to 512MB).
It is recommended to keep table names within 32 characters (hard limit: 64 characters).
The maximum length of identifiers (DB/Table/Column/Index) is 64 characters.
The storage engine is uniformly set to RocksDB. Any engine specified during table creation will be ignored.
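As an illustration of the engine rule, assuming MySQL-compatible DDL (app_config is a hypothetical table):

```sql
-- The ENGINE= clause is accepted for compatibility but ignored;
-- the table is created on RocksDB regardless.
CREATE TABLE app_config (
  cfg_key   VARCHAR(32) PRIMARY KEY,  -- identifier lengths well under the limits
  cfg_value VARCHAR(255)
) ENGINE = InnoDB;
```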

Quantity Limits

Item | Limit
Maximum number of columns | 4096
Maximum number of indexes per table | 64
Maximum number of partitions per table | 8192
Maximum index size | 3072 bytes
Maximum number of columns per index | 16
Maximum row size | 64MB by default, up to 512MB (parameter tdstore_txn_max_entry_size)
Maximum number of tables | Depends on node specification (10,000–80,000)

Partitioned Table Specification

Partition policy selection

Policy | Scenarios | Key Type
HASH | Uniform hashing, integer keys | Integer
KEY | Uniform hashing, string keys | Any type
RANGE | Time-range or ID-range queries | Integer/date
LIST | Enumerated-value classification (such as region) | Integer
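A sketch of a RANGE-partitioned table for time-range queries, assuming standard MySQL-compatible partitioning syntax (the table and partition names are illustrative):

```sql
-- The primary key includes the partition key (event_day), as required.
CREATE TABLE events (
  event_day INT NOT NULL,       -- e.g. 20240115
  event_id  BIGINT NOT NULL,
  payload   VARCHAR(255),
  PRIMARY KEY (event_day, event_id)
)
PARTITION BY RANGE (event_day) (
  PARTITION p2023 VALUES LESS THAN (20240101),
  PARTITION p2024 VALUES LESS THAN (20250101),
  PARTITION pmax  VALUES LESS THAN MAXVALUE
);
```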

Partition Key Specifications

The primary key and all unique indexes must include the partition key (violating this fails table creation and is the most common error).
The WHERE clause of a query must include the partition key; otherwise, the query scans all partitions.
Do not modify partition key values via UPDATE: this triggers cross-partition data migration, which performs poorly and carries a failure risk.

Partition Count Planning

Recommended number of partitions = expected maximum number of nodes × 2.
Too few partitions cause uneven data distribution during scaling; too many increase management overhead.
Modifying partition definitions at runtime may require copy-table DDL, whose duration and I/O overhead scale with the data volume.

Partition Policy (data distribution policy)

-- Create distribution policy
CREATE PARTITION POLICY policy_name
PARTITION BY HASH(INT) PARTITIONS n;

-- Bind at table creation
CREATE TABLE t (...) PARTITION BY HASH(a) PARTITIONS n USING PARTITION POLICY policy_name;
Binding the corresponding partitions of multiple tables to the same physical nodes is supported, which reduces the overhead of distributed joins (suitable for groups of tables that are frequently joined). This currently supports primary partitioned tables with HASH and KEY partitioning.
RANGE/RANGE COLUMNS partitioned tables can be bound to a time-scheduled DP to enable automatic archiving of cold data (EXPIRE, START_TIME, END_TIME).
Only DPs not bound to any data object can be modified or deleted.

Converting a single table to a partitioned table

-- Supports online conversion without blocking read/write operations
ALTER TABLE t PARTITION BY HASH(id) PARTITIONS 16;

Replicated Table Specification

Definitions and Scenarios

A write to a replicated table must be strongly synchronized to all available Follower replicas before it returns; each replica can then serve strongly consistent reads.
Applicable scenarios:
Global system configuration tables and parameter tables
Data warehouse dimension tables (such as product categories and regional dictionaries)
TPC-C item tables and other small, read-intensive tables
Non-applicable scenarios:
Tables with a high write frequency (write performance degrades as the number of replicas grows)
Tables with large data volumes

Creation Syntax

CREATE TABLE t (
id BIGINT PRIMARY KEY,
name VARCHAR(100)
) SYNC_LEVEL = NODE(ALL) DISTRIBUTION = NODE(ALL);
SYNC_LEVEL = NODE(ALL) and DISTRIBUTION = NODE(ALL) currently only support ALL; partial nodes cannot be specified.

Usage Limits

A replicated table cannot be a partitioned table (the two attributes are mutually exclusive).
Attribute conversion is not supported: a replicated table cannot be converted to a regular table, and vice versa.
When any Follower node fails, write requests stall for one lease period (on the order of seconds).
The broadcast synchronization log stream cannot be destroyed once created, even if all replicated tables are deleted.
Write performance degrades linearly as the number of replicas grows; exercise caution on large clusters.

Transaction Specifications

Isolation Level

Supports: Read Committed (RC) and Repeatable Read (RR), defaulting to RR.
RR is actually Snapshot Isolation, which prevents phantom reads but may allow write skew.
Difference from MySQL at the RC level: TDSQL Boundless may still take range locks under RC (MySQL RC does not take gap locks).
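To check or switch the session isolation level, standard MySQL syntax applies (a sketch; the variable name follows MySQL 8.0):

```sql
-- The default is REPEATABLE READ (snapshot isolation).
SELECT @@transaction_isolation;
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
```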

Transaction Size Limits

Item | Default Value | Parameter
Maximum row size | 64MB | tdstore_txn_max_entry_size
Maximum transaction size | 1GB | tdstore_max_txn_size

Writes fail when these limits are exceeded; batch operations must implement a retry mechanism.
Transaction data is fully cached in memory before commit. Spilling large transactions to disk is still under development.

Transaction Usage Guidelines

-- Recommended enabling methods (all three are supported)
BEGIN;
START TRANSACTION;
SET autocommit = 0;
Snapshots are allocated when the first read/write SQL is executed, not at BEGIN time.
Supports partial rollback to a SAVEPOINT.
Supports pessimistic transactions: SELECT ... FOR UPDATE (exclusive lock) / SELECT ... LOCK IN SHARE MODE (shared lock).
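A short sketch combining SAVEPOINT with a pessimistic lock (accounts is a hypothetical table; the id column is assumed to be indexed):

```sql
BEGIN;
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;   -- exclusive row lock
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
SAVEPOINT s1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
ROLLBACK TO SAVEPOINT s1;   -- undoes only the second UPDATE
COMMIT;
```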

Notes on locking mechanisms

Pessimistic locks without indexes escalate to range locks, potentially locking the entire table. Ensure that query conditions have indexes.
Locks are based on logical key values rather than physical storage, differing from InnoDB behavior.
When the target record does not exist, behavior differs from MySQL: the lock does not escalate to a gap lock.

Transaction Practices to Avoid

Avoid oversized transactions: Ensure no single transaction exceeds 1GB. Split oversized batch operations using BATCH LIMIT.
XA syntax does not take effect: The syntax does not throw errors, but the behavior does not conform to XA semantics. Do not rely on XA for distributed transactions.
Avoid holding transactions for extended periods: This may cause MVCC version backlog and impact global performance.
Avoid FOR UPDATE without indexes: This can easily lead to table-level range locks.

Large batch deletion

-- Split large transactions using BATCH LIMIT: each batch is atomic, but the overall operation is non-atomic
BATCH LIMIT 2000 DELETE FROM t WHERE create_time < '2024-01-01';
Restrictions:
Only DELETE is supported (not UPDATE/INSERT).
Multi-table DELETE is not supported.
Cannot be nested within a transaction.
The DELETE statement cannot contain ORDER BY or LIMIT.

Indexing Specifications

Creating Indexes

Secondary index creation uses Online DDL by default without blocking read and write operations.
It is recommended to use ALGORITHM=INPLACE (the default).
Add indexes to large tables during off-peak business hours, and monitor progress via INFORMATION_SCHEMA.DDL_JOBS.
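A sketch of an online index build with progress monitoring (the table and index names are illustrative; the DDL_JOBS query follows the form used elsewhere in this document):

```sql
ALTER TABLE orders ADD INDEX idx_created_at (created_at), ALGORITHM = INPLACE;

-- Check build progress from another session.
SELECT * FROM information_schema.ddl_jobs WHERE state != 'done';
```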

Unsupported Index Types

FULLTEXT full-text indexes
SPATIAL spatial indexes
JSON multi-valued indexes
Descending indexes
Online addition/removal of primary keys (requires the COPY method, which locks the table)

Index Limits

Maximum index size: 3072 bytes
A single index supports up to 16 columns
A single table supports up to 64 indexes

DDL Operational Specifications

Pre-execution check

Before executing DDL, you must check the following:
1. Confirm sufficient storage space (INPLACE DDL requires temporary space).
2. Check whether there are slow queries or long transactions.
3. Check whether data distribution is balanced (reduce parallelism in case of severe skew).
4. Confirm that the current CPU/IO load is within a reasonable range.
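Check 2 can be approximated with queries against standard MySQL-compatible system views (a sketch; the 60-second threshold is arbitrary):

```sql
-- Long-running statements that could block, or be blocked by, the DDL.
SELECT id, user, time, state, info
FROM information_schema.processlist
WHERE command <> 'Sleep' AND time > 60;

-- DDL already in flight (same view used for progress monitoring).
SELECT * FROM information_schema.ddl_jobs WHERE state != 'done';
```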

DDL Parallelism Control

-- Reduce parallelism in case of data skew (default: 8)
SET max_parallel_ddl_degree = 4; -- or 2

-- Monitor DDL progress
SELECT * FROM information_schema.ddl_jobs WHERE state != 'done';

Operations that do not support Online DDL (require the table-locking COPY method)

All primary key-related DDL (Add/Drop Primary Key)
EXCHANGE PARTITION
ANALYZE/CHECK PARTITION
Modify the character set (in certain cases)
Modify column type (in certain cases)
Note:
Differences from MySQL 8.0: TDSQL Boundless supports concurrent DML during PARTITION BY, character set conversion, and column type changes, while MySQL does not.

Key Differences from MySQL (Must Know)

Unsupported Features

Feature | Supported | Description
Foreign key operations | ❌ | TDStore limitation; an error is returned.
FULLTEXT index | ❌ | Error 1214.
GIS/spatial types | ❌ | Not supported.
Generated column operations | ❌ | Not supported.
XA transactions | ⚠️ | No syntax error is reported, but it does not take effect.
Event (scheduler) | ❌ | Not supported.
X Protocol | ❌ | Not supported.
LOB partial update | ❌ | Not supported.
JSON sorting of mixed types | ❌ | Not supported.
Read-only account / read-only node | ⚠️ | Planned for subsequent support.
Global index | ❌ | Not supported.

Behavioral differences

Storage Engine: Uniformly enforced as RocksDB. The ENGINE= clause in CREATE TABLE statements is ignored.
Auto-increment Column: Globally unique but not consecutive; values are allocated in cached batches of 100 for performance.
Network Access: Only accessible via VPC private network; public network access is not supported.

SQL Coding Guidelines

Query specification

It is recommended to include the partition key in the WHERE clause for partitioned table queries to avoid full partition scans.
Avoid SELECT *; explicitly specify the required fields.
It is recommended to add LIMIT to large-scale queries (such as full table scans) to prevent excessive resource consumption.
Use EXPLAIN to view the execution plan and confirm that partition pruning is effective.
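For example, a sketch of a pruning check (orders and user_id are illustrative names; with the partition key in the WHERE clause, the plan should touch only the matching partition):

```sql
EXPLAIN SELECT order_id, amount
FROM orders
WHERE user_id = 42   -- partition key present → pruning applies
LIMIT 100;
```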

Writing Specifications

It is recommended to use INSERT INTO ... VALUES (...),(...),... for batch writes to reduce network round trips.
Consider the Bulk Load mode (tdstore_bulk_load parameter) for large-scale batch writes; it can improve performance severalfold. For details, see About Data Import (Bulk Load Mode).
Avoid mixing extensive read and write operations in transactions. Exceeding the 1GB transaction size limitation will result in an error.
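A multi-row INSERT sketch (one network round trip instead of three; the table and values are illustrative):

```sql
INSERT INTO orders (user_id, order_id, amount) VALUES
  (1, 1001,  9.90),
  (2, 1002, 19.90),
  (3, 1003, 29.90);
```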

Notes on Failover

The application layer should implement retry logic, and it is recommended to retry with exponential backoff up to three times.

Common Pitfalls Summary

Trap | Description | How to Avoid
Write hotspot | No primary key, or a monotonically increasing primary key | Use composite primary keys or hash-prefixed keys
Unique index missing the partition key | Table creation fails | Every unique index must include the partition key
Non-consecutive auto-increment values | Caused by the caching mechanism | Do not rely on auto-increment continuity in the business
XA does not take effect | The syntax is accepted but lacks XA semantics | Use TDSQL Boundless native distributed transactions
FOR UPDATE without an index | Locks the entire table | Ensure query conditions are indexed
Large-transaction OOM | 1GB memory limit | Split transactions using BATCH LIMIT
Replicated-table write stall | A Follower failure causes a lease stall | Use replicated tables only for read-heavy, write-light scenarios
Online Add Primary Key | Not supported online; requires locking the table | Plan the primary key at table creation
Unexpected range locks under RC | Behavior differs from MySQL RC | Verify lock behavior by testing critical business paths
Foreign key dependency | Not supported at all | Maintain referential integrity at the application layer
