
TDSQL Boundless

V19.2.x

Last updated: 2026-04-17 11:56:51

V19.2.8

Version Release Notes

Fixed a Panic issue that might occur when MC nodes process heartbeat requests after a master switchover
Fixed an issue where MC nodes could trigger a null pointer access and cause a process Panic if they still received TDStore heartbeat requests after completing a Leader switchover. This fix ensures MC stability during state transitions.

Bug Fixes

Fixed an issue in the MC module's task archiving logic that could cause MC to incorrectly re-push completed tasks after a master switchover.
Fixed an issue where MC might trigger null pointer access while handling Region split task progression, thereby causing a process exception.

V19.2.7

Version Release Notes

tdsql_ddl_fillback_mode requires SUPER permission to modify.
Modifying tdsql_ddl_fillback_mode requires SUPER permission.
During ALTER TABLE RENAME, the table will not be in an intermediate state.
When ALTER TABLE RENAME fails, the rollback process sets the table state to Public.

Bug Fixes

Fixed an issue where, in extreme scenarios, concurrent Install Snapshot operations and master switchovers could cause data ingested by an earlier Install Snapshot request to be cleared by a later one while data_corrupted remained false, resulting in data loss.
Fixed an issue where some participants entered the Failed state due to Write Fence validation failure while the remaining participants failed to synchronize the Prepare Log; combined with a coordinator master switchover, this left the coordinator unrecovered and unable to clean up participants in the Failed state.
Fixed an issue where an excessively fast LoadWriteBatch during recovery produced too many L0-level files, triggering write throttling that in turn blocked LoadWriteBatch and led to timeout failures. The timeout duration for the related RPCs was increased and the write throttling threshold during recovery was raised.
Fixed an issue where new Region Groups (RGs) were still created as primary RGs during instance upgrades from V18.x to V19.x, even when sufficient RGs were available.
Fixed an issue where MC's periodic checks to determine whether it needs to Merge regions could cause increased CPU utilization.
Fixed an issue where concurrent participant exit and active master switchover caused participants on the Old Leader to be released, while those migrated to the New Leader were not released.
Fixed an issue where parallel queries forcing NULL execution locally would not spawn Workers if no local data existed, resulting in NULL Jobs not being executed.

V19.2.6

Version Release Notes

Configuration parameters for SST boundary alignment support dynamic updates.
The tdstore_system_region_sst_partitioner and tdstore_user_region_sst_partitioner configuration parameters now support dynamic updates, including modification of their sub-parameters.

Bug Fixes

Fixed the issue where after a new table created by CREATE TABLE became visible, it would not be deleted even if the DDL task returned an error.
Fixed the issue where fetching the full routing table took excessive time under a large number of Regions.
Fixed an issue where fetching large routing segments through the SQL Forward feature could cause network congestion.
Fixed an issue where SQLEngine nodes experienced blocking until timeout when fetching Regions in parallel under scenarios with a large number of Regions.
Fixed the issue where the election of new Region Groups (RGs) during splitting could occasionally take more than 10 seconds.

V19.2.5

Bug Fixes

Fixed an issue in the DDL rollback process where tables were deleted without determining whether they could be removed based on TIndex ID.
Fixed the issue where conflict detection for the New Name was not performed when RENAME and ALTER RENAME DDL operations were executed.
Fixed the issue where the queue's maximum size limit was too small, making it impossible to resolve problems by adjusting the parameter value.
Fixed the issue where bvar's Percentile type could experience a uint32 overflow when no sampling thread was present to reset it, potentially causing memory corruption and severe problems such as Coredump. Also resolved missing bvar collection paths caused by sampling thread restarts, and added hardening against uint32 overflow.

V19.2.4

Bug Fixes

Fixed the issue where node crashes during CREATE TABLE resulted in accidental deletion of tables with the same name.
Fixed the issue where tables were not reopened when a THD was marked as KILL.

V19.2.3

Bug Fixes

Fixed an issue where parallel queries on multiple tables using Ref Scan for parallel scanning could produce incorrect results when Remote Workers were utilized.

V19.2.2

Bug Fixes

Fixed an issue where Binlog Dump transactions ended without using XID Event for Commit.
Fixed the issue where executing INSERT … ON DUPLICATE KEY UPDATE during a table version change generated garbled Binlog.
Fixed the issue where, during REPLACE INTO execution, the row-change Binlog was incorrectly recorded as a row-insertion Binlog when a conflict occurred.

V19.2.1

Bug Fixes

Fixed the issue where Binlog Dump did not support generating Binlog for multi-table UPDATE operations.
Fixed the issue where other connections did not attempt to reopen tables if they failed to open them during DDL operations.
Fixed the issue where Raft did not timely update the local IP address cache during processes such as election and voting.
Fixed the issue where parameter parsing failures during upgrade from V18.2.1 to V19.1.0 produced no error log output.
Fixed the version compatibility issue with tdstore_mod_log_flags, where the V19.x version removed the NONE_FLAG option.
Fixed the issue where, after a disaster recovery switchover, the new standby instance disabled incremental backup but auto-compaction-enabled was not turned on.
Fixed an issue where MC lacked validation for empty Global Variables strings, which may cause exceptions during process startup.
Fixed the issue where the archived-Task cleanup goroutine did not clean up its cache map BriefTask, so that when MC was upgraded from a version below V19.2 to V19.2, a large number of archived Tasks needed cleanup, causing a short-term increase in memory usage.
Fixed the issue where the Recover logic after a failed RENAME TABLE operation caused memory corruption.
Fixed the issue where querying the structure of databases or tables containing special characters in their names caused errors.
Fixed the issue where BKA inner table parallel execution may result in incomplete data retrieval.
Fixed the issue where Split Region may cause complex transaction lock timeouts.
Fixed the cross-version incompatibility issue with tdstore_mod_log_flags, and added a Log Flush before exiting upon parameter parsing failure to facilitate problem diagnosis.

V19.2.0

Version Release Notes

Scalability and Performance

Optimized Binlog playback delay caused by Raft leader change
Log Receiver proactively terminates incomplete Install Snapshot tasks during Raft leader changes to prevent impacts on subsequent Install Snapshot tasks, thereby improving playback speed.

Stability

Before executing DDL tasks, the execution thread proactively obtains the distributed lock.
The DDL Exec process proactively obtains the DDL Job ID and holds the corresponding distributed lock to prevent DDL execution failures caused by the DDL Recover thread taking over the DDL Job while the DDL Exec thread is performing tasks.
Pre-check whether the available disk space is sufficient for completing the DDL before executing Online Copy DDL tasks
Before enabling Online Copy DDL tasks, pre-check whether the available disk space is sufficient for completing the DDL. If disk space is insufficient, report an error 'no available space'.
Online DDL's IngestBehind mode controls memory usage.
Online DDL adds memory control during Data Import, pre-calculates required memory, borrows from Block Cache, and rejects the task if total Data Import memory usage exceeds the limit.
BulkLoad transactions generated by Online DDL's IngestBehind mode do not block regular transactions.
Online DDL previously used the BulkLoad process for ingesting existing data, where BulkLoad submission would block regular transactions. The mutex restriction with regular transactions has now been removed by introducing a new Log type called DataImportLog, which can run concurrently with regular transactions.
Fixed the issue where complex SQL statements caused Bthread Worker exhaustion
Added the parameter yield_threshold_rows to control the number of rows scanned before voluntarily yielding CPU resources, ensuring other transactions can be processed instead of blocking.
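A minimal sketch of tuning this behavior, assuming yield_threshold_rows is a global system variable adjustable at runtime (only the parameter name comes from the note above; the value shown is illustrative, not a documented default):

```sql
-- Illustrative only: check and adjust the yield threshold.
SHOW VARIABLES LIKE 'yield_threshold_rows';
SET GLOBAL yield_threshold_rows = 10000;  -- yield CPU after scanning 10000 rows (example value)
```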
During incremental recovery, if a BulkLoad Commit Log is encountered, an error must be reported.
During incremental recovery, the External SST files associated with BulkLoad Commit Logs are not currently backed up to COS. Therefore, this portion of data cannot be recovered during incremental recovery. The interim solution is to fail the entire incremental recovery process when a BulkLoad Commit Log is encountered.
Fixed the issue where a full backup generated an incr_base_index of 0 when the RG was just started.
During incremental recovery, if a full backup generates incr_base_index=0, the recovery process fails because it cannot find the Raft Log with index=1.
BatchPut checks whether the Key conflicts by using the Get method.
BatchPut previously used iterator construction + Seek to check whether the Key conflicts. Testing revealed that the iterator + Seek approach performs worse than Get.
Fixed the issue where an excessively large tdsql_lock_wait_timeout caused transactions to never time out.
When tdsql_lock_wait_timeout exceeds tdsql_tdstore_rpc_timeout, it is now adjusted to below the RPC timeout duration.
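A sketch of inspecting the two timeouts involved (parameter names come from the note above; the value set is an assumed example, not a recommended default):

```sql
-- Illustrative only. If tdsql_lock_wait_timeout (milliseconds) is set above
-- tdsql_tdstore_rpc_timeout, the effective lock wait is capped below the RPC
-- timeout, so transactions can no longer wait indefinitely.
SHOW VARIABLES LIKE 'tdsql_lock_wait_timeout';
SHOW VARIABLES LIKE 'tdsql_tdstore_rpc_timeout';
SET GLOBAL tdsql_lock_wait_timeout = 5000;  -- 5 s, example value
```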

Database management

None.

Security Enhancement

Clear data before SST files are pulled for Install Snapshot.
Currently, the Install Snapshot process involves first pulling SST files from the Leader, then clearing data, and finally ingesting the pulled SST files into the LSM Tree. This process has two issues: (1) Between pulling SST files and clearing data, two copies of data are retained on the node, which may cause Install Snapshot to fail when disk space is insufficient; (2) The data-clearing operation compromises Raft consistency.
V19.2.0 adjusts the internal steps of Install Snapshot by clearing data before SST files are pulled, avoiding unnecessary storage consumption. This significantly improves the success rate of Install Snapshot in scenarios with high disk space utilization. Additionally, V19.2.0 prevents Raft consistency from being compromised during data-clearing operations by introducing the data_corrupted field for Raft Metadata.
Database Table Recycle Bin
The Database Table Recycle Bin feature is added. When an instance is created, the system automatically builds the recycle bin database __tdsql_recycle_bin__. The recycle bin is disabled by default. Users can manually enable it via tdsql_recycle_bin_enabled. When the recycle bin is enabled, executing a DROP TABLE operation transfers the actual data table to the recycle bin database. To revert this operation, execute the FLASHBACK TABLE statement to restore it from the recycle bin to the user database. For detailed SQL syntax, see FLASHBACK TABLE, PURGE RECYCLEBIN, DROP TABLE, and SHOW RECYCLEBIN.
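A sketch of the recycle bin workflow described above. The statement names (FLASHBACK TABLE, SHOW RECYCLEBIN, PURGE RECYCLEBIN) are those referenced in the note; db1.t1 is a hypothetical table, and exact option clauses may differ from this minimal form:

```sql
SET GLOBAL tdsql_recycle_bin_enabled = ON;  -- recycle bin is disabled by default
DROP TABLE db1.t1;       -- table is moved into __tdsql_recycle_bin__
SHOW RECYCLEBIN;         -- list tables currently in the recycle bin
FLASHBACK TABLE db1.t1;  -- restore the table to the user database
PURGE RECYCLEBIN;        -- permanently remove everything in the bin
```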

Data Migration

None.

Restoring via Backups

None.

Operations

Granular control of migration task concurrency at the node level
Added the src-node-migrate-replica-job-limit parameter to limit the number of concurrent outbound migration tasks per source node.
Added the target-node-migrate-replica-job-limit parameter to limit the number of concurrent migration tasks per target node.
The system distributes partitioned tables evenly across all nodes.
Partitioned tables are distributed according to the default table creation policy, and partitions will be placed in the corresponding Primary RG.
Support manual distribution of first-level or second-level partitioned tables: CALL dbms_admin.scatter_partition(db_name, table_name);.
Support manual distribution of all subpartitions under a primary partition of a secondary partitioned table: CALL dbms_admin.scatter_subpartition(db_name, table_name, partition_name);.
When a secondary partitioned table is created, corresponding subpartitions under different primary partitions are assigned to the same RG. For example, t1.p0.sp0 and t1.p1.sp0 are placed in the same RG.
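The manual distribution calls above can be sketched as follows; the procedure signatures are quoted from the notes, while db1, t1, and p0 are hypothetical names:

```sql
-- Scatter all first-level or second-level partitions of a table:
CALL dbms_admin.scatter_partition('db1', 't1');
-- Scatter all subpartitions under primary partition p0 of a secondary partitioned table:
CALL dbms_admin.scatter_subpartition('db1', 't1', 'p0');
```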
The system supports automated adjustment of mutually exclusive parameters.
The parameters hot-schedule-enabled and rebalance-leader-enabled are mutually exclusive: when either is set to 1, the other is automatically set to 0; setting either to 0 does not affect the other.
LogService error logging and visualization
The field err_msg has been added to the LogService view LOGSERVICE_PROCESSLIST to display error messages during LogService runtime.
Implementation and Adaptation of Auto Region Merge
Automatically merge smaller-sized Regions to reduce the metadata scale of instances, thereby alleviating MC resource consumption. Starting from V19.2.0, the merge-region-enabled parameter is enabled by default.
MC recovers historical scheduled jobs from the ETCD layer to the SQLEngine system table.
sys.meta_cluster_jobs: Displays scheduling task information of the MC layer. For details, see META_CLUSTER_JOBS.
sys.meta_cluster_tasks: Displays the execution records of MC Tasks. For details, see META_CLUSTER_TASKS.
Raft layer leader switches are logged to tables.
The system creates the table sys.tdsql_raft_leader_switch_res by default during initialization. When a leader switch occurs in the RG of MC and TDStore, it writes detailed information about the leader switch, including the time taken, destination node, source node, and so on. For details, see TDSQL_RAFT_LEADER_SWITCH_RES.
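A hypothetical query against the table described above; the table name is documented, but the column set is not reproduced here, so SELECT * is used and the ordering column is an assumption:

```sql
-- Inspect recent Raft leader switches (time taken, source and destination node, etc.).
SELECT * FROM sys.tdsql_raft_leader_switch_res
LIMIT 20;
```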
MC retains historical STD standard output stream files upon each restart.
In versions prior to V19.2.0, each restart of MC would overwrite the previous stdout file, making it difficult to trace the cause of abnormal process terminations. This version retains the old stdout files by renaming them on each restart.
Optimized caching for frequent slow Get Member Version operations logged in MC.
The MC Bootstrap API frequently accesses ETCD when Member Versions are retrieved, resulting in slow operations. To address this, MC will proactively cache the latest Member Version in memory.
Pessimistic Lock View
PERFORMANCE_SCHEMA.DATA_LOCKS supports displaying the Session ID corresponding to pessimistic locks. For details, see DATA_LOCKS.
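A minimal sketch of inspecting pessimistic locks via the view named above. DATA_LOCKS is a standard PERFORMANCE_SCHEMA table; the Session ID column added in this version is not named here, so SELECT * is used:

```sql
-- List currently held and pending data locks, including the new Session ID column.
SELECT * FROM PERFORMANCE_SCHEMA.DATA_LOCKS LIMIT 20;
```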
Supports dynamic modification of resource specification parameters.
TDStore supports dynamic updates to resource specification-related parameters. When these parameters are modified, the system dynamically adjusts their associated parameters simultaneously. The current version only supports dynamic modification of disk-related specification parameters.
Added monitoring metric for BulkLoad data directory disk usage
Added the following monitoring metrics to HyperNode, SQLEngine, TDStore, and CDC nodes: BulkLoad disk usage (in MB) and BulkLoad data disk usage (in MB).
Limit Temporary Table Disk Usage
Cloud environments restrict TEMPTABLE disk usage by limiting the size of temptable_max_mmap. Exceeding this limit triggers the temptable is full error.
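A sketch of the limit described above; temptable_max_mmap is the MySQL TempTable engine's mmap-space cap, and the 1 GiB value is illustrative:

```sql
-- Cap the disk (mmap) space internal temporary tables may consume.
SET GLOBAL temptable_max_mmap = 1073741824;  -- 1 GiB, example value
-- Queries whose internal temporary tables exceed this limit fail with
-- the "temptable is full" error.
```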
Supports deletion of MC persistent parameters.
Parameters modified via SET PERSIST or SET PERSIST_ONLY are persisted in MC and synchronized to all nodes, and these MC-persisted parameters take precedence over node configuration files. To modify the configuration file for a specific node, SQL statements are now supported to first delete the persistent record in MC and then modify the configuration file on the target node. For dynamically effective configurations, the change can be applied via SET GLOBAL after the configuration file is modified; for static parameters, a node restart is required after the modification.
Removing MC persistent parameters: CALL dbms.admin_remove_persist_variable
Displaying MC persistent parameters: SHOW PERSIST VARIABLES. For details, see PERSIST.
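A workflow sketch for the steps above. The statement and procedure names are quoted from the notes; 'some_variable' and the argument form are placeholders:

```sql
SHOW PERSIST VARIABLES;  -- list parameters currently persisted in MC
CALL dbms.admin_remove_persist_variable('some_variable');  -- assumed argument form
-- Then edit the target node's configuration file. For dynamically effective
-- parameters, apply the change with:
SET GLOBAL some_variable = 'new_value';
-- Static parameters require a node restart after the configuration file change.
```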
Providing a method to export all I_S view definitions.
DBMS_ADMIN.DUMP_I_S_VIEW(${path_to_save}) is used to dump the DDL statements for I_S (INFORMATION_SCHEMA) views and store them in the view_I_S_ddl.sql file under the specified path.
DBMS_ADMIN.CHECK_I_S_VIEW is used to check whether I_S (INFORMATION_SCHEMA) views are complete.
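A sketch of the export-and-check flow; the path is a placeholder, and the argument-free form of CHECK_I_S_VIEW is an assumption:

```sql
-- Writes view_I_S_ddl.sql under the given directory.
CALL DBMS_ADMIN.DUMP_I_S_VIEW('/tmp/is_views');
-- Verify that the INFORMATION_SCHEMA views are complete.
CALL DBMS_ADMIN.CHECK_I_S_VIEW();
```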
Supports modifying the MySQL Client timeout duration during LogService synchronization.
Supports modifying the MySQL Client timeout duration during LogService synchronization via the database parameter log_service_mysql_client_timeout, preventing prolonged SQL execution from causing LogService synchronization to get stuck.
Supports modifying the cache queue length used during LogService synchronization to MySQL.
Supports modifying the cache queue length used by LogService via the database parameter log_service_mysql_info_queue_size, preventing OOM caused by excessive memory consumption in LogService.
Added the SST data file health check tool bin/td_sst_healthy_checker.
Used to inspect whether SST data files have issues such as data corruption.

Bug Fixes

Fixed the issue where RENAME INDEX for partitioned tables did not support the INPLACE algorithm.
Fixed the issue of unreasonable logic for entering and exiting read-only mode.
Fixed the issue where BatchPut, Get, and Check were not adapted to Region Merge.
Fixed the issue where CREATE TABLE LIKE did not support concurrent DDL operations with the source table.
Fixed the issue where MDL deadlocks failed to print lock wait information; added MC lock information printing to PERFORMANCE_SCHEMA.METADATA_LOCKS.
Fixed the issue where DDL records for creating the table sys.logservice_dump_seqno occasionally entered ddl_jobs_history, while other SYS table creation statements during cluster initialization did not enter ddl_jobs_history.
Fixed the issue of numerous slow Get Member Version operations in MC by optimizing the caching mechanism.
Fixed the issue where DDL execution was too slow in disaster recovery state, causing MySQL Client to time out.
Fixed the issue where the Agent occasionally crashed (core dump) during startup, preventing the Agent from starting.
Fixed an issue where query plans involving partitioned tables created with PARTITION BY KEY and using normal index scans for GROUP BY crashed during parallel optimization.
Fixed the issue where Operator Placeholders belonging to the associated RGs were not released after capacity balancing failed to create tasks.
Fixed the issue where for an N-replica instance, if the CDC node was not in the same AZ as all peer nodes and all peer nodes were within ≤N-1 AZs, creating a table would result in a new RG being created.
Fixed the issue where capacity balancing migrated incorrect Region Groups (RGs) when the system handled combined split-and-migration tasks.
Fixed the issue where MC with small instance specifications was not configured in Single RG mode as expected.
Fixed the issue where conflicts between user-configured DP Leader Preference and Primary AZ settings caused ping-pong migration.
Fixed a data consistency issue in DDL operations that arose after the Delete Only phase was removed from Fast Online DDL.
Fixed the issue where crashes might occur during Online Copy DDL Recover.
Fixed the issue where start_key might become invalid in parallel fine-grained scenarios.
Fixed the issue where memory leaks might occur in PERFORMANCE_SCHEMA during parallel scenarios.
Fixed the issue where BRPC Channel had a UAF (Use-After-Free) problem.
Fixed the issue where projection pushdown combined with SQL range queries that included both interval ranges and point queries resulted in incorrect columns for point query data.
Fixed the issue where the Get implementation in Local optimization was flawed.
Fixed the issue where the hotspot RG leader switch did not issue Transfer Leader tasks as expected.

Parameter Change

Change Type | Parameter Name | Description
Addition | tdsql_recycle_bin_enabled | Whether the recycle bin is enabled. It is disabled by default.
Modification | tdsql_lock_wait_timeout | Specifies the lock timeout duration for the TDStore engine, in milliseconds.
Addition | log_service_mysql_client_timeout | LogService control parameter; controls the timeout duration for internal MySQL client connections during data synchronization for the MySQL-type LogService, in seconds.
Addition | log_service_mysql_info_queue_size | LogService control parameter; controls the length of the internal cache queue used when the MySQL-type LogService synchronizes data.

Must-Knows

Binlog Dump currently does not support the RESET MASTER command. Users can use PURGE MASTER LOGS to clean up Binlog files. Support for the RESET MASTER command will be added in future versions.
Hybrid architecture supports TDStore + Hybrid type SETs, but currently does not support creating SQLEngine + Hybrid type SETs simultaneously.
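As noted above, RESET MASTER is not yet supported, so Binlog files should be cleaned up with PURGE instead. A sketch using standard MySQL PURGE syntax; the file name and timestamp are illustrative:

```sql
PURGE MASTER LOGS TO 'binlog.000010';              -- delete logs up to the named file
PURGE MASTER LOGS BEFORE '2026-01-01 00:00:00';    -- delete logs older than the timestamp
```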
