Tencent Cloud

TDSQL Boundless

V17.0.x

Last updated: 2026-04-17 11:56:52

V17.0.0

Version Release Notes

MC

Supports configuring Leader Priority for different nodes to achieve weighted elections.
Supports Phase 1 of the placement policy for data objects.
MC supports LogService RG Job replay.
MC supports enabling tasks in coordination with the storage layer to implement BulkLoad mode data import.
Provides Ops APIs to issue destroy jobs for unexpected replicas.
Single RG Mode Phase 2: supports configuring the minimum resource specification for running in single-RG mode, and dynamically switches to multi-RG mode based on the current instance's resource specification.
Incremental backup supports streaming transmission.
Supports concurrent CREATE Region.

Computing Engine

Optimizes statistics updates for tables (both regular and partitioned tables). Optimization points:
Directly analyzes SST files to obtain the deduplicated record count of a table, which is more accurate and over 5x faster than the previous method.
Supports the ANALYZE TABLE xxx RELOAD syntax: after one compute node updates the statistics, it notifies the other compute nodes to perform RELOAD, avoiding redundant statistics updates across multiple compute nodes.
Adds tdstore_auto_stat_min_interval_microsecond, tdstore_auto_stat_min_update_num, and tdstore_auto_stat_when_update_rate to control the frequency of automatic statistics updates.
Parallel query
By default, the feature is enabled. Queries that meet the relevant threshold variable settings and are supported by the current parallel query feature will be executed via the parallel query path.
Supports multi-table JOIN. Currently, only queries where the first table can be scanned in parallel are supported. If the first table cannot be scanned in parallel, parallel query will be disabled. Use the variable parallel_query_switch='join=off' to disable JOIN support.
SELECT COUNT(*) queries without GROUP BY can take the parallel query path and can use parallel hints and parallel variable settings. Use SET SESSION tdsql_parallel_optim = OFF (the default is ON) to disable this feature.
Supports the Read Committed isolation level.
Supports Stale Read, for example: SELECT b FROM t1 FORCE INDEX (idxb) AS OF TIMESTAMP '2023-12-10 12:00:00';.
Prohibits using CREATE TABLE to create user tables in the mysql database. Attempting this operation will result in the error ERROR 8565 (HY000): Can't create/move table in/to the db. DB(mysql) and table(mysql.t1) are in different dataspaces.
Supports smooth metadata upgrades.
Supports MPP (dependency-based parallelism), which is disabled by default. Enable it by setting SET tdsql_parallel_worker_scheduling = 'auto'. After EXPLAIN ANALYZE is executed, check information_schema.optimizer_trace to verify task status.
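A sketch of the syntax introduced above. The table t1, column b, and index idxb come from the Stale Read example in these notes; applying ANALYZE TABLE to the same table here is illustrative.

```sql
-- Statistics: refresh on one compute node, then have the others reload
ANALYZE TABLE t1;
ANALYZE TABLE t1 RELOAD;

-- Parallel query: disable multi-table JOIN support for parallel execution
SET SESSION parallel_query_switch = 'join=off';

-- COUNT(*) parallel path without GROUP BY (ON by default)
SET SESSION tdsql_parallel_optim = OFF;

-- Stale Read: query historical data as of a given timestamp
SELECT b FROM t1 FORCE INDEX (idxb) AS OF TIMESTAMP '2023-12-10 12:00:00';

-- MPP (dependency-based parallelism), disabled by default
SET tdsql_parallel_worker_scheduling = 'auto';
```

After running EXPLAIN ANALYZE under MPP, task status can be checked in information_schema.optimizer_trace as described above.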

Storage Engine

Adds flow control to the generation and synchronization of Raft Log to prevent Raft Cache from becoming too large and causing OOM (D0001).
Adds two parameters, raft_node_enable_flow_control and raft_node_flow_control_threshold, to implement flow control.
When raft_node_enable_flow_control is set to true, if the memory occupied by Raft Cache on the Leader exceeds raft_node_flow_control_threshold, the rate of generating Raft Log will be reduced.
When raft_node_enable_flow_control is set to true, if the memory occupied by Raft Cache on a Follower exceeds raft_node_flow_control_threshold, the synchronization speed of Raft Log is reduced.
Performs targeted compression on each Raft Log generated by the Leader.
Adds three variables, raft_log_enable_compression, raft_log_compress_type, and raft_per_log_min_compress_threshold, to control compression: raft_log_enable_compression indicates whether compression is enabled; raft_log_compress_type selects the compression algorithm (1 for Snappy, 2 for LZ4); raft_per_log_min_compress_threshold sets the minimum length for a Raft Log entry to be compressed.
Supports the BulkLoad fast data import mode.
Supports concurrent CREATE Region within a single RG.
For transaction opening and wf operations, reduces the granularity of version verification from meta_version to key_range_shrink_version. Read/write requests no longer verify meta_version and instead directly verify region_version.
Supports obtaining Raft Log from remote storage for incremental backups via Binlog.
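The flow-control and compression parameters above might be set as follows. This is a sketch only: the values are illustrative (not documented defaults), and whether these storage-engine parameters are set via SET GLOBAL or via the TDStore configuration file is an assumption about the deployment.

```sql
-- Raft Log flow control: throttle generation/sync when Raft Cache
-- memory exceeds the threshold
SET GLOBAL raft_node_enable_flow_control = true;
SET GLOBAL raft_node_flow_control_threshold = 1073741824;  -- bytes; 1 GiB, illustrative

-- Per-log compression of Raft Log generated by the Leader
SET GLOBAL raft_log_enable_compression = true;
SET GLOBAL raft_log_compress_type = 2;                     -- 1 = Snappy, 2 = LZ4
SET GLOBAL raft_per_log_min_compress_threshold = 4096;     -- bytes; minimum length to compress, illustrative
```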

TDBR

Reduces the interval between recover_ts and the current time.
Reports metrics related to full backups and incremental backups.
Reduces the number of connections between hybrid-agent and etcd (from 4 connections per hybrid-agent to 2 connections).

Bug fixes

Fixed the issue where DROP DB failed to persist to etcd due to an excessive number of objects.
Fixed the issue where cross-database RENAME DDL caused metadata corruption in MC.
Fixed the issue where the information_schema.partitions view executed slowly with high memory consumption. Optimized the original view definition by addressing the performance degradation and excessive temporary table overhead caused by UNION operations.
Fixed the issue where the MyRocks dictionary cache exhibited high memory consumption that could not be released.
Fixed the issue where CREATE TABLE recovery exceptions caused failure to clean up residual data objects and their corresponding Regions and Replication Groups.
Fixed read inconsistency issues that could occur in extreme scenarios:
During 1PC transaction commits and Put AC operations, the lease holder is now checked and the commit proceeds only after confirmation. This prevents read requests in dual-master scenarios from reading outdated data from the old master and missing the latest data written by 1PC transactions.
Delays releasing the memory lock for pending transactions, preventing a scenario where a transaction ultimately commits successfully but read requests fail to see its data because the memory lock was released prematurely.
When advancing the snapshot, wait for non-transactional read-only operations to complete before proceeding, to prevent data required by read operations from being physically deleted by compaction.
Supports alarms for full and incremental (log) backup failures in TDStore/MC, and alarms for no successful backups within 48 hours.
The rollback of MC has been changed from asynchronous to synchronous.
Fixed the issue where asynchronous execution of TDBR's EndFullBackup could trigger two concurrent backup tasks.

Parameter Change

In bulkload data import scenarios, each bulkload transaction typically carries a large volume of data (several hundred MB to several GB). Data within a bulkload transaction is therefore saved in temporary data files before commit, which reduces and controls memory overhead; unordered data (such as secondary index data) in a bulkload transaction is sorted by external merge sort during the commit phase. The following parameters were added for this feature.

Change Type: Addition
Parameter Name: tdstore_bulk_load_merge_chunk_size
Description: Sets the memory size, in bytes, for caching unordered data in a bulkload transaction before external merge sorting.

Change Type: Addition
Parameter Name: tdstore_bulk_load_total_merge_buffer_size
Description: Sets the total size, in bytes, of the memory buffer used for external merge sorting in a bulkload transaction.
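A hedged example of tuning the two bulkload parameters. The values are illustrative, not documented defaults, and whether they are set via SET GLOBAL or a configuration file depends on the deployment.

```sql
SET GLOBAL tdstore_bulk_load_merge_chunk_size = 67108864;          -- 64 MiB per sort chunk, illustrative
SET GLOBAL tdstore_bulk_load_total_merge_buffer_size = 536870912;  -- 512 MiB total merge buffer, illustrative
```

The total merge buffer bounds overall memory use across concurrent merges, while the chunk size controls how much unordered data is cached before each external merge pass.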
