
TDSQL Boundless

V19.0.x

Last updated: 2026-04-17 11:56:51

V19.0.1

Bug fixes

Fixed an issue where restarting during Fast Online DDL in IngestBehind mode re-ingested identical SSTs, causing SST overlap. The second ingestion overlapped with the first and triggered a Compaction; when the Compaction results were applied to the Manifest, duplicate keys could make the maximum key of one SST equal the minimum key of the next, resulting in overlapping SSTs.
Fixed an issue where Install Snapshot fails to complete.
Fixed an issue where Primary RG repeatedly migrates replicas or performs master switchover between two nodes.
Fixed an issue where secondary nodes fail to obtain user permissions correctly during initialization.

V19.0.0

Version Release Notes

Scalability and Performance

The database storage engine lease now uses a physical clock and no longer relies on MC time service.
Optimization of the TDSQL startup process
Optimized the parallel startup mechanism introduced in V18.1.0, eliminating the bucket brigade effect within it. This reduces the deployment time of a 3-replica TDSQL Boundless cluster deployed with TDSQL-TOOLS to under 5 minutes.
During the initialization phase:
1.1 When the First RG is created, its member roles are downgraded from 1 Leader + (n-1) Followers to 1 Leader + (n-1) Learners, avoiding waits caused by inconsistent readiness times among member nodes.
1.2 Relaxed the consistency requirements during the initialization phases that write DD operations and configuration, avoiding coordination waits caused by inconsistent initialization completion times among member nodes.
After initialization is complete, the cluster's original configuration and consistency requirements are restored via HTTP API, and Learner replicas not included in the configuration are asynchronously promoted to Followers.
MC pre-builds RGs for nodes without a Leader RG
Added a primary-leader Tag for Primary RGs to indicate the preferred nodes for RG Leaders. Under normal conditions, MC keeps Primary RG Leaders on these preferred nodes. When a new instance with N nodes is created, N Primary RGs are created during startup; for upgraded instances, existing RGs are tagged with primary-leader.
Instance-level storage layer is read-only
Supports setting the storage layer to read-only at the instance level to enable smooth failover of disaster recovery connections.
Provides partition affinity capability
If multiple partitioned tables share the same partitioning rules, identical partitions will be scheduled to the same RG. Currently, this applies only to first-level HASH partitioned tables. Controlled by the parameter tdsql_enable_partition_policy, defaulting to ON; after this parameter is disabled, new tables (created via CREATE or ALTER COPY DDL) won't participate in affinity scheduling with any tables, while existing affinity relationships remain preserved.
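As a minimal sketch (table names are hypothetical, and we assume affinity keys on an identical first-level HASH rule), two tables whose matching partitions would be scheduled to the same RG:
SET GLOBAL tdsql_enable_partition_policy = ON; -- default
CREATE TABLE orders (id BIGINT PRIMARY KEY, amount INT) PARTITION BY HASH (id) PARTITIONS 8;
CREATE TABLE order_items (item_id BIGINT PRIMARY KEY, order_ref BIGINT) PARTITION BY HASH (item_id) PARTITIONS 8;
-- Both tables use the same first-level HASH rule (8 partitions), so partition i
-- of orders and partition i of order_items are expected to land on the same RG.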
SQLEngine supports DELETE transaction splitting.
Large DELETE transactions are split into multiple smaller transactions. The overall DELETE process does not preserve transactional properties, but each split sub-transaction maintains transactional integrity. The granularity of transaction splitting is controlled via the LIMIT clause in the new syntax.
Syntax:
BATCH LIMIT {batch_size} {delete_stmt}
Note: Currently, only single-table deletion is supported. It cannot be nested within multi-statement transactions, and batch_size must not be 0.
When a DELETE statement is executed, the SQLEngine corresponding to the RG Leader must first be located. The DELETE statement is then executed via a MySQL Client connection to avoid performance degradation caused by excessive RPC.
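An illustrative sketch of the new syntax (table and column names are hypothetical):
BATCH LIMIT 1000 DELETE FROM order_history WHERE created_at < '2023-01-01';
-- Executed as a series of sub-transactions deleting at most 1000 rows each;
-- each sub-transaction is atomic, but the overall DELETE is not.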
SQLEngine supports parallel granularity refinement.
During parallel processing, the number of tasks is determined based on the parameter parallel_suggested_scan_ranges to split the query scope, ensuring sufficiently fine granularity. This resolves the previous issue where coarse-grained splitting in parallel queries prevented full parallelism in certain scenarios.
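For example (the value is illustrative):
SET parallel_suggested_scan_ranges = 64; -- suggest a finer-grained split of the query scan scope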
SQLEngine changes temporary tables to TEMPTABLE.
TEMPTABLE temporary tables are enabled by default to replace InnoDB temporary tables. Two new status variables, temptable_mmap_used and temptable_ram_used, are added to monitor the storage space occupied by temporary tables.
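The new status variables can be inspected with a standard SHOW STATUS query:
SHOW GLOBAL STATUS LIKE 'temptable%';
-- temptable_ram_used and temptable_mmap_used report the space currently
-- occupied by TEMPTABLE temporary tables in RAM and in mmap-backed storage.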
SQLEngine enables the new table storage format by default, storing CHAR columns with variable-length encoding.
Changed CHAR columns from fixed-length to variable-length storage, using LEB128 encoding to record the string length and removing trailing whitespace characters. This reduces storage and communication overhead, and is especially beneficial for English data encoded in UTF-8.
SQLEngine supports partition table condition pushdown.
Partitioned tables also support condition pushdown, which can be observed in EXPLAIN or TRACE outputs, functioning identically to non-partitioned tables.
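A quick way to observe this (table name hypothetical):
EXPLAIN FORMAT=TREE SELECT * FROM part_tab WHERE b > 10;
-- With part_tab partitioned, the pushed-down condition b > 10 appears in the
-- plan output just as it does for a non-partitioned table.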
Local Optimization
Added a parameter local_optimizer_switch to control Local-related optimizations. Default value: get=on,scan=on,parallel_direct_scan=on,join_direct_scan=on,preload=on,single_rg=off
Primarily consists of optimizations in two parts:
For regular tables, if routing information shows the data is local, scanning follows the local path to eliminate PB-related operations, controlled by the scan parameter. Additionally, for secondary-index queries that must look up the base table (back-to-table), the preload parameter determines whether the Local back-to-table optimization is used.
For parallel partitioned tables, if the data is local, the Direct mode is used directly to eliminate routing-related operations, controlled by the parallel_direct_scan parameter. For JOIN operations involving multiple tables with the same partitioning Policy or HASH-partitioned tables, the join_direct_scan parameter controls whether to enable the multi-table Direct mode. get controls whether to use Direct mode for point queries.
single_rg can only be enabled when the user ensures there is only one RG. Once enabled, the SQL does not check whether the data is local but directly assumes it is local and uses the Direct mode.
The scan optimization is fundamental; disabling it disables all Local optimizations: set local_optimizer_switch = "scan=off"
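A sketch of adjusting individual flags (assuming optimizer_switch-style semantics, where flags not mentioned keep their current values):
SELECT @@local_optimizer_switch; -- inspect the current setting
SET local_optimizer_switch = 'join_direct_scan=off'; -- disable only the multi-table Direct mode
SET local_optimizer_switch = 'scan=off'; -- scan is fundamental: this disables all Local optimizations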
Optimized DDL that explicitly uses the default Distribution Policy, avoiding unnecessary processing.
When a DDL statement explicitly specifies a Distribution Policy, SQLEngine must interact with MC to obtain that policy's ID. The Name and ID of the default Distribution Policy are predefined and fixed, so explicitly using the default Distribution Policy avoids this interaction.

Stability

Storage Stability Enhancement

Fixed the issue where majority disk-full clusters were unable to process read/write requests.
Resolved the issue where membership changes could not be completed when the majority of disks in the cluster were full, preventing read/write requests from being processed. After the fix, read/write requests are handled as expected, and explicit read-only status error messages are returned to the client.
Supports RG migration in full-disk state.
Nodes that have entered the read-only state can release disk space via RG migration, regardless of whether the free disk space is below tdstore_min_free_disk_space.
SST file boundaries are aligned in real-time with table boundaries.
The logic for aligning SST file boundaries with table boundaries has been changed from non-real-time to real-time, thus enabling more accurate statistics collection at the computing layer.

Support distributed deadlock detection

In editions prior to TDSQL V19.0.0, the deadlock detection feature was not supported. When deadlocks occurred, transactions could only roll back after waiting for pessimistic lock timeouts. Therefore, TDSQL's default pessimistic lock timeout configuration (tdstore_lock_wait_timeout) is relatively short at just 10s, preventing prolonged transaction blocking during deadlocks. However, such a brief pessimistic lock timeout may frequently cause timeouts on hotspot data in non-deadlock scenarios, adversely impacting business operations.
To address this issue, TDSQL V19.0.0 introduced a new BOOL-type MySQL System Variable: tdstore_deadlock_detect. When tdstore_deadlock_detect is enabled, TDSQL can detect deadlocks within 5s and return the error ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction. The error message includes the rolled-back transaction ID, the Node ID of the SQLEngine executing the transaction, and the Node ID of the TDStore being accessed. At the same time, the TDStore responsible for deadlock detection logs an error entry [deadlock detection] found cycle among {...} containing all transaction IDs involved in the deadlock.
tdstore_deadlock_detect defaults to OFF. However, for newly created instances, the value of tdstore_deadlock_detect will be modified to ON after creation (meaning that if a user executes SET tdstore_deadlock_detect = default, its value will still revert to OFF). After enabling tdstore_deadlock_detect, users can appropriately increase the configuration value of tdstore_lock_wait_timeout to extend the pessimistic lock wait timeout period.
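A minimal sketch of enabling detection and relaxing the lock wait timeout (the scope and the timeout value are illustrative):
SET GLOBAL tdstore_deadlock_detect = ON; -- deadlocks are now detected within 5s
SET GLOBAL tdstore_lock_wait_timeout = 50; -- safe to raise once detection is on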

Managing a Database

Integration of parameter management between TDStore and SQLEngine

Prior to V19.0.0, TDStore and SQLEngine shared the same process but used two separate parameter management mechanisms—a historical legacy issue. Additionally, TDStore parameters could only be read/written via the TDStore Client, which was user-unfriendly.
Starting from V19.0.0, the TDStore configuration file and its read/write methods have been deprecated and fully integrated into the SQLEngine parameter management system. This now fully supports reading and writing TDStore parameters via SQL.
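For example, a TDStore parameter can now be handled like any other system variable (the variable is one mentioned elsewhere in these notes; the value is illustrative):
SHOW VARIABLES LIKE 'tdstore_lock_wait_timeout';
SET GLOBAL tdstore_lock_wait_timeout = 20;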

Configure kernel parameters reasonably based on container specifications

Prior to V19.0.0, kernel parameters related to container specifications required calculation before being rendered into the kernel configuration file. As the number of parameters increased, this raised communication, alignment, and development/maintenance costs.
In practice, kernel parameters should be self-configured by the kernel side to form a self-contained loop. Therefore, starting from V19.0.0, the kernel side automatically configures relevant parameters based on container specifications. The following outlines the rules for parameter rendering and activation on the kernel side:
The kernel calculates relevant parameters based on the delivered container specification parameters and writes them into the frominstall_my_<port>.ini file. These configurations take precedence over those in myport.cnf.
Parameters requiring special settings by users can be written into the add_my_<port>.ini file. These configurations take precedence over those in myport.cnf and frominstall_my_<port>.ini.
Parameters modified by users via SET PERSIST or through the Management Console (MC) will be persisted to MC and take precedence over values in all configuration files mentioned above.
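A sketch of the highest-precedence path:
SET PERSIST tdstore_deadlock_detect = ON;
-- Persisted to MC; takes precedence over frominstall_my_<port>.ini,
-- add_my_<port>.ini, and myport.cnf.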

Enhanced DDL Feature

Online Index Creation Acceleration
In daily Ops, frequent adjustments to table structures, such as adding columns or indexes, are often required. However, on large datasets, adding an index may take considerable time and hinder business development. Starting from V19.0.0, TDSQL extends Fast Online DDL support to the online addition of non-unique indexes to partitioned tables, accelerating it by an order of magnitude.
Usage: Set tdsql_ddl_fillback_mode = 'IngestBehind'. tdsql_ddl_fillback_mode defaults to 'ThomasWrite'.
CREATE TABLE sbtest1 (a INT AUTO_INCREMENT PRIMARY KEY, b INT, c INT) PARTITION BY HASH (a) PARTITIONS 3;
INSERT INTO sbtest1 VALUES(1,1,1),(2,2,2),(3,3,3);
SET SESSION tdsql_ddl_fillback_mode = 'IngestBehind';
ALTER TABLE sbtest1 ADD INDEX idx_b(b);
-- In IngestBehind mode, adding indexes to non-partitioned tables or adding unique indexes to partitioned tables will result in the following prompt
ALTER TABLE sbtest1 ADD UNIQUE INDEX idx_a(a);
ERROR 8581 (HY000): Online alter table marco.sbtest1 failed, IngestBehind only supports adding non-unique index on non-system partition table, please set variable 'tdsql_ddl_fillback_mode' to others.
CREATE TABLE sbtest2(a INT AUTO_INCREMENT PRIMARY KEY, b INT, c INT);
INSERT INTO sbtest2 VALUES(1,1,1),(2,2,2),(3,3,3);
ALTER TABLE sbtest2 ADD INDEX idx_b(b);
ERROR 8581 (HY000): Online alter table marco.sbtest2 failed, IngestBehind only supports adding non-unique index on non-system partition table, please set variable 'tdsql_ddl_fillback_mode' to others.

Security Enhancement

None.

Data Migration

Migration tool

A data backfill tool has been added to the migration process from HBase to TDStore. After data migration completes, data verification is performed to compare differences between TDStore and the original HBase, with results output to a TDStore table. The new backfill tool reads these verification results; for missing data in TDStore, it iteratively retrieves corresponding records from HBase and inserts them into TDStore.

Query processing

Support for Parallel Execution of User-Defined (SP) Functions, Window Functions, and UNION/UNION ALL Correlated Subqueries.
User-defined (SP) functions support parallel queries when SET parallel_query_switch = 'restricted_functions=on' is enabled. Note that SP functions cannot appear in WHERE conditions, otherwise parallel execution is disabled, and the functions must be defined as DETERMINISTIC (see the example after this block).
Supports execution of Window functions on the Leader.
In versions V18.1.0 and earlier, correlated subqueries included in parallel execution were entirely pushed down to Workers for processing. If the correlated subquery was a UNION/UNION ALL clause, parallelism would be disabled. Starting with version V19.0.0, the system supports handling UNION/UNION ALL correlated subqueries within parallel queries.
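A minimal sketch for the SP-function case (function and table names are hypothetical):
SET parallel_query_switch = 'restricted_functions=on';
CREATE FUNCTION double_val(x INT) RETURNS INT DETERMINISTIC RETURN x * 2;
SELECT double_val(c), COUNT(*) FROM t1 GROUP BY double_val(c);
-- double_val is DETERMINISTIC and does not appear in a WHERE condition,
-- so the query remains eligible for parallel execution.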
Parallel Query Supports Pushdown Execution of Partial Tables in Queries.
In versions V18.1.0 and earlier, if there were expressions in filter conditions of certain tables that could not be pushed down in parallel, the entire query would not execute in parallel. Starting with version V19.0.0, tables containing non-parallel-pushdown expressions remain on the Leader for execution, while other tables are pushed down for execution.
MPP Support for UNION.

Restoring via Backups

When retrieving Raft Logs from COS, follow the actual retention days.
In older versions, when retrieving Raft Logs from COS, the system would default to fetching logs from the previous 7 days, aligning with the cloud's current retention period. However, this approach could lead to complex synchronization logic if retention periods change in the future.
In this version, the kernel first queries the actual retention period when retrieving Raft Logs and then determines how many days to look back, instead of using a fixed number of days.

Operations

INFORMATION_SCHEMA Adds System Tables to Display TDStore-Side Event Information.
INFORMATION_SCHEMA.TDSTORE_REPLICATION_GROUP_EVENT_INFO: Displays the execution status of Replication Group-related tasks at the TDStore layer. For details, see TDSTORE_REPLICATION_GROUP_EVENT_INFO.
INFORMATION_SCHEMA.TDSTORE_REGION_EVENT_INFO: Displays the execution status of Region-related tasks at the TDStore layer. For details, see TDSTORE_REGION_EVENT_INFO.
INFORMATION_SCHEMA.TDSTORE_COMMON_EVENT_INFO: Displays the execution status of common tasks at the TDStore layer. For details, see TDSTORE_COMMON_EVENT_INFO.
INFORMATION_SCHEMA.TDSTORE_INSTALL_SNAPSHOT_INFO: Displays the execution status of Install Snapshot tasks at the TDStore layer, including both ongoing and completed tasks. For details, see TDSTORE_INSTALL_SNAPSHOT_INFO.
INFORMATION_SCHEMA Adds System Tables to Display SST Attributes and Partial ColumnFamily Configurations.
INFORMATION_SCHEMA.TDSTORE_SST_PROPS: Displays the properties of SSTables in TDStore. For details, see TDSTORE_SST_PROPS.
INFORMATION_SCHEMA.TDSTORE_CF_OPTIONS: Displays the configuration options related to ColumnFamily in SQLEngine. For details, see TDSTORE_CF_OPTIONS.
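The new tables are queried like any other INFORMATION_SCHEMA table, for example:
SELECT * FROM INFORMATION_SCHEMA.TDSTORE_SST_PROPS LIMIT 10;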
The ThreadID field of INFORMATION_SCHEMA.TDSTORE_COMPACTION_HISTORY has been changed to hexadecimal representation to optimize correlation with logs.
For details, see TDSTORE_COMPACTION_HISTORY.
The field err_msg has been added to INFORMATION_SCHEMA.LOGSERVICE_PROCESSLIST.
INFORMATION_SCHEMA.LOGSERVICE_PROCESSLIST adds the err_msg field to display LogService runtime errors. For details, see LOGSERVICE_PROCESSLIST.
Fields in INFORMATION_SCHEMA.META_CLUSTER_RGS have been optimized.
Optimized the rep_group_stats_approximate_size field in INFORMATION_SCHEMA.META_CLUSTER_RGS. For details, see META_CLUSTER_RGS.
Enriched metric information during the initiation of scheduling Jobs or Tasks.
Added richer metric information (such as RG Size during migration, migration standard Size, split standard Size, hotspot information, and so on) to the initiation of each scheduling Job or Task (such as split, leader transfer, merge, migration, and Multi-Job-Task) to facilitate troubleshooting during Ops. This information is stored in the job_desc field of the INFORMATION_SCHEMA.META_CLUSTER_JOBS table.
The PERFORMANCE_SCHEMA.METADATA_LOCKS table has been enabled.
METADATA_LOCKS records the current occupancy of metadata locks (MDL) in SQLEngine, enabling users to quickly locate MDL-related issues, thereby improving system stability and performance. For details, see METADATA_LOCKS.
tdstore_client_new_console Feature Enhancement
Added the feature to terminate transaction participants based on Transaction ID.
end_participant --rep_group_id=xxx --txn_id=xxxxxxxxxxxxxx
Fixed the issue where get_region_info returns all RegionInfo when an incorrect rep_group_id is provided.
SHOW PROCESSLIST returns the current node's Node ID.
A broadcast SHOW PROCESSLIST returns the Node IDs of all nodes, including the current node. Node ID information is now displayed instead of the IP address information shown previously.
EXPLAIN and OPTIMIZER_TRACE Support Additional Information.
EXPLAIN FORMAT=TREE displays pushdown-related information.
OPTIMIZER_TRACE provides detailed information about pushdown operations. If no pushdown occurs, it specifies the reason. It also offers comprehensive details about Local optimization.
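The pushdown details can be inspected with the standard OPTIMIZER_TRACE workflow (the query is illustrative):
SET optimizer_trace = 'enabled=on';
SELECT * FROM t1 WHERE b > 10;
SELECT TRACE FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE;
SET optimizer_trace = 'enabled=off';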
Added Slow Log Information in the Storage Layer.
Added slow logs for key RPC operations on TDStore (such as Scan, Get, Commit, and so on) to help locate online performance fluctuation issues. The log directory is dblogs/tdstore/slow_log.
Supports the SHOW ENGINE ROCKSDB STATUS command.
Executing the SHOW ENGINE ROCKSDB STATUS command allows you to obtain statistical information about the corresponding node, such as Memory and Bthread.
EXPLAIN ANALYZE displays RPC time consumption information.
Supports disaster recovery policies for establishing asynchronous disaster recovery relationships between two new instances.
Users can directly add disaster recovery instances to existing instance cluster editions. Currently, only asynchronous data synchronization mode is supported. Users can manually disconnect disaster recovery connections, perform normal switches, and execute failovers.
Usage limits:
Disaster recovery instances must be created within 5 days after the source instance is created.
Single-replica instances do not support the creation of disaster recovery instances.
Binlog capability and disaster recovery capability are mutually exclusive.
Each primary instance only supports the creation of one corresponding disaster recovery instance.
Disaster recovery relationships can only be established under a peer-to-peer architecture.
Disaster recovery instances can only be created within the same Region; cross-Region deployment is currently not supported.
Tables with Unique Keys, tables with Hidden Primary Keys, and tables whose Primary Key has a prefix index on string columns are not supported for synchronization (the underlying LogService does not support them).
Secondary instances are not allowed to perform operations such as backup restoration, read/write status modification, or log management.
Before an instance is destroyed, you must first disassociate it from disaster recovery instances (except for forced termination).
The primary and secondary instances must have the same version. When upgrading instances, upgrade the secondary instance first, then upgrade the primary instance.
Only applies to public cloud.
Cloud disks are recommended for both primary and secondary disaster recovery instances, and disk shrinkage should be avoided.
MC has enabled the HTTP API V2.
The system has standardized the parameter field names across the three MC API access methods: HTTP / RPC / MC-CTL.
Supports automated generation of API documentation.

Bug fixes

Fixed the issue of inaccurate row counts for tables and indexes when statistical information is obtained.
Fixed the issue where data dictionary upgrades fail to handle "new version binary modifications to table definitions of existing system tables".
Fixed the issue of Coredump caused by Online Copy DDL failures.
Fixed the issue where UPDATE statements using wildcards as conditions report Data truncated errors.
Fixed the issue where creating foreign keys fails to properly report errors or trigger alarms.
Fixed the issue where DROP PARTITION operations via the OnlineCopy path fail to clean up data.
Fixed the issue where RENAME INDEX on non-partitioned tables fails to execute ALTER with the specified algorithm.
Fixed the issue where KILL does not take effect online.
Fixed the issue where newly added INFORMATION_SCHEMA views could be missing after V18.0.0.
Fixed the issue where TDStore failed to recover from the read-only state after disk expansion.
Fixed the issue where TDStore estimates statistics with excessive errors.
Fixed the issue where a mismatch between the RG Meta Version and the Version in routing prevented the delivery of DELETE Region tasks.

Parameter Change

Change Type | Parameter Name | Description
Addition | tdstore_deadlock_detect | Whether to enable deadlock detection.
Addition | | Whether to force a node to receive configuration logs sent by the Raft Leader when the disk is full. Defaults to true to ensure that membership changes can complete quickly after the majority enters the full-disk state.
